record_id,title,abstract,year,label_included,duplicate_record_id 1,Sinogram-based motion correction of PET images using optical motion tracking system and list-mode data acquisition,"Head motion during brain imaging has been recognized as a source of image degradation and introduces distortion into positron emission tomography (PET) images. There are several techniques to correct the motion artifact, but these techniques cannot correct the motion during scanning. The aim of this study is to develop a sinogram-based motion correction (SBMC) method to directly correct head motion during PET scanning using a motion tracking system and list-mode data acquisition. This method is a rebinning procedure by which the lines of response (LOR) are geometrically transformed according to the current values of the six-dimensional motion data. The michelogram was recomposed using the rebinned LORs and a motion-corrected sinogram was generated. In the motion-corrected image, the blurring artifact due to motion was reduced by the SBMC method.",2002,0, 2,A fault tolerant control architecture for automated highway systems,A hierarchical controller for dealing with faults and adverse environmental conditions on an automated highway system is proposed. The controller extends a previous control hierarchy designed to work under normal conditions of operation. The faults are classified according to the capabilities remaining on the vehicle or roadside after the fault has occurred. Information about these capabilities is used by supervisors in each of the layers of the hierarchy to select appropriate fault handling strategies. We outline the strategies needed by the supervisors and give examples of their detailed operation,2000,0, 3,Fault tolerant memory design for HW/SW co-reliability in massively parallel computing systems,"A highly dependable embedded fault-tolerant memory architecture for high-performance massively parallel computing applications and its dependability assurance techniques are proposed and discussed in this paper. The proposed fault tolerant memory provides two distinctive repair mechanisms: the permanent laser redundancy reconfiguration during the wafer probe stage in the factory to enhance its manufacturing yield, and the dynamic BIST/BISD/BISR (built-in self-test/diagnosis/repair)-based reconfiguration of the redundant resources in the field to maintain high field reliability. The system reliability, which is mainly determined by the hardware configuration demanded by software and by field reconfiguration/repair utilizing unused processor and memory modules, is referred to as HW/SW co-reliability. Various system configuration options in terms of parallel processing unit size and processor/memory intensity are also introduced and their HW/SW co-reliability characteristics are discussed. A modeling and assurance technique for HW/SW co-reliability, with emphasis on the dependability assurance techniques based on combinatorial modeling suitable for the proposed memory design, is developed and validated by extensive parametric simulations. Thereby, the design and implementation of memory-reliability-optimized and highly reliable fault-tolerant field-reconfigurable massively parallel computing systems can be achieved.",2003,0, 4,Efficient color correction approach for phase unwrapping based on color-encoded digital fringe projection,"A highly efficient color correction approach based on color-encoded fringe projection is proposed, which combines color image segmentation and color intensity interpolation techniques.
Only 24 designed color patterns are projected and recorded to implement the process with a high-brightness DLP projector and a color camera. To establish the correspondence between the designed color intensity and the recorded color intensity, the recorded image is first segmented into adjacent grid regions by neighboring-pixel intensity fitting error; the grid regions are then grown to the region boundaries employing a processing algorithm; thirdly, the region numbers are labeled and adjusted based on the designed color pattern by searching the region centre coordinates and applying a man-machine conversation method; finally, the color correspondence relation is established according to the designed color pattern pixel indices and the labeled grid region numbers of the recorded image. During the color correction, the initial color intensity is first found according to the minimum color distance between the recorded color and the designed color. Secondly, color interpolation is implemented to obtain the true color intensity corresponding to the recorded color. The validity of the proposed approach is verified by experimental results.",2010,0, 5,High-performance line conditioner with output voltage regulation and power factor correction,"A high-performance line conditioner with excellent efficiency and power factor is proposed. The line conditioner consists of a three-leg rectifier-inverter, which operates as a boost converter and a buck converter. This boost-buck topology enables constant output voltage regulation, irrespective of input voltage disturbances. In addition, the three-leg bridge can reduce the number of switching devices and system loss, while maintaining the capabilities of power factor correction and good output voltage regulation. The power factor controller for the single-phase pulse-width modulated (PWM) rectifier is derived using the feedback linearisation concept. The inverter side acts as a voltage regulator with current-limiting capability for impulsive loads. The disturbance of input voltage is detected using a fast-sensing technique. Experimental results obtained on a 3 kVA prototype show a normal efficiency of over 95% and an input power factor of over 99%.",2004,0, 6,Study on Fault Diagnosis Expert System for Power Supply Circuit Board on VXI Bus,A high-tech information electronic equipment of a given type is designed in order to perform automatic fault detection and improve the efficiency and accuracy of diagnosis. This thesis which is a part of the program introduces research on the algorithm of a fault diagnosis expert system for a power supply circuit board of an electronic device as well as the algorithm realization and example verification on the hardware platform. It's quicker and more convenient to locate faults on the circuit boards with this equipment. It's proved that this expert system can solve the problems of high cost and long intervals of maintenance and keep the equipment in a stable status,2006,0, 7,Fault diagnosis systems development for Fuel Cell Vehicle,"A hydrogen-powered fuel cell vehicle is developed, in which a distributed control and communication system based on CAN (Controller Area Network) is built. For vehicle diagnostic purposes, a new on-board fault diagnosis strategy is presented. Two efficient automotive diagnostic systems based on CAN are designed and implemented in this paper: (1) CANoe is a powerful CAN development tool.
A fault diagnosis environment based on CANoe is established to satisfy the needs of on-board and off-board fault diagnosis applications of the FCV. By setting up the communication interface between CANoe and Access, the vehicle fault codes are collected and stored. Meanwhile, a database is designed for the management of fault information. (2) Hand-held fault diagnosis equipment as well as a Windows analyzer interface is set up. All fault information from the FCV's CAN network can be easily obtained by the equipment. Through serial communication between the equipment and a PC, the fault codes stored in the equipment can be read, analyzed and processed by the PC.",2008,0, 8,Non-uniformity correction and calibration of a portable infrared scene projector,"A key attribute of any tester for FLIR systems is a calibrated uniform source. A uniform source ensures that any anomalies in performance are artifacts of the FLIR being tested and not the tester. Achieving a uniform source from a resistor-array based portable infrared scene projector requires implementation of non-uniformity correction algorithms instead of controlling the bonding integrity of a source to a cooler, and the coating properties of the source typical of a conventional blackbody. The non-uniformity correction on the scene projector is necessary because the source is a two-dimensional array composed of discrete resistive emitters. Ideally, each emitter of the array would have the same resistance and thus produce the same output for a given drive current. However, there are small variations from emitter to emitter over the thousands of emitters that comprise an array. Once a uniform output is achieved, the output must be calibrated for the system to be used as test equipment. Since the radiance emitted from the monolithic array is created by flowing current through micro resistors, a radiometric approach is used to calibrate the differential output of the scene projector over its dynamic range. The focus of this paper is to describe the approach and results of implementing non-uniformity correction and calibration on a portable infrared scene projector.",2002,0, 9,The Two-Level-Turn-Model Fault-Tolerant Routing Scheme in Tori with Convex and Concave Faults,"A routing scheme with the ability to tolerate faults is necessary in massively parallel multiprocessors. In this paper, we have proposed a fault-tolerant routing scheme for tori networks. The new routing scheme is called the two-level-turn-model routing scheme, which is based on our investigation of the fault-tolerant properties of the turn model. Through employing two specific kinds of turn model, our routing scheme can tolerate both convex faults and concave faults, with a few limitations on their shape. At most five virtual channels would be used to avoid deadlock occurrence in the tori, no matter whether the fault regions are connected and no matter where the faults are located. Actually, if the fault regions encompass no physical boundary nodes in the tori, a total of four virtual channels, one pair for each turn model, would be sufficient to preclude the occurrence of deadlock. Finally, the simulation shows the effectiveness of our scheme.",2009,0, 10,Slicing and dicing bugs in concurrent programs,"A lack of scalable verification tools for concurrent programs has not allowed concurrent software development to keep abreast of hardware trends in multi-core technologies.
The growing complexity of modern concurrent systems necessitates the use of abstractions in order to verify all the expected behaviors of the system. Current abstraction refinement techniques are restricted to verifying mostly sequential and simpler concurrent programs. In this work, we present a novel incremental underapproximation technique that uses program slicing. Based on a reachability property, an initial backward slice for a single thread is generated. The information in the program slice is coupled with a concrete execution to drive the lone thread, generating an underapproximation of the program behavior space. If the target location is reached in the underapproximation, then we have an actual concrete trace. Otherwise, the initial single-thread slice is refined to include another thread that affects the reachability of the target location. In this case, the concrete execution only considers the two threads in the slice, and preemption points between the threads only occur at locations in the slice. This refinement process is repeated until the target location is reached or is shown to be unreachable. Initial results indicate that the incremental technique can potentially allow the discovery of errors in larger systems using fewer resources and produce a better reduction in systems that are correct.",2010,0, 11,Cross-layer error resilience for robust systems,"A large class of robust electronic systems of the future must be designed to perform correctly despite hardware failures. In contrast, today's mainstream systems typically assume error-free hardware. Classical fault-tolerant computing techniques are too expensive for this purpose. This paper presents an overview of new techniques that can enable a sea change in the design of cost-effective robust systems. These techniques utilize globally-optimized cross-layer approaches, i.e., across device, circuit, architecture, runtime, and application layers, to overcome hardware failures.",2010,0, 12,Search-based Prediction of Fault-slip-through in Large Software Projects,"A large percentage of the cost of rework can be avoided by finding more faults earlier in a software testing process. Therefore, determination of which software testing phases to focus improvement work on has considerable industrial interest. This paper evaluates the use of five different techniques, namely particle swarm optimization based artificial neural networks (PSO-ANN), artificial immune recognition systems (AIRS), gene expression programming (GEP), genetic programming (GP) and multiple regression (MR), for predicting the number of faults slipping through unit, function, integration and system testing phases. The objective is to quantify improvement potential in different testing phases by striving towards finding the right faults in the right phase. We have conducted an empirical study of two large projects from a telecommunication company developing mobile platforms and wireless semiconductors. The results are compared using simple residuals, goodness of fit and absolute relative error measures. They indicate that the four search-based techniques (PSO-ANN, AIRS, GEP, GP) perform better than multiple regression for predicting the fault-slip-through for each of the four testing phases. At the unit and function testing phases, AIRS and PSO-ANN performed better, while GP performed better at integration and system testing phases.
The study concludes that a variety of search-based techniques are applicable for predicting the improvement potential in different testing phases, with GP showing more consistent performance across two of the four test phases.",2010,0, 13,Error visualization of tetrahedral subdivision approach for trilinear interpolation,"A linear interpolation scheme inside a tetrahedral cell often causes a large interpolation error when field values change drastically. However, the error has not been analysed and visualized thoroughly. In order to understand the error distribution inside a tetrahedral cell, we propose two types of error norms, namely an interpolation function error norm and a field data error norm. These error norms make it possible to compare the linear interpolation function with a trilinear interpolation function that has often been used in rectilinear grid cells, and to visualize the error distribution by using iso-surface display",2000,0, 14,Logic fault test simulation environment for IP core-based digital systems,A logic fault test simulation environment for core-based digital systems is proposed in this paper. The simulation environment emulates a typical built-in self-test (BIST) environment with a test pattern generator that sends its outputs to a circuit under test (CUT) and the output streams from the CUT are fed into a response data analyzer. The developed simulator is suitable for testing digital IP cores. The paper describes in detail the test architecture and application of the logic fault simulator. Some partial simulation results on ISCAS 85 combinational and ISCAS 89 sequential benchmark circuits are provided.,2009,0, 15,Blocking vs. Non-Blocking Coordinated Checkpointing for Large-Scale Fault Tolerant MPI,"A long-term trend in high-performance computing is the increasing number of nodes in parallel computing platforms, which entails a higher failure probability. Fault-tolerant programming environments should be used to guarantee the safe execution of critical applications. Research in fault tolerant MPI has led to the development of several fault tolerant MPI environments. Different approaches are being proposed using a variety of fault tolerant message passing protocols based on coordinated checkpointing or message logging. The most popular approach is with coordinated checkpointing. In the literature, two different concepts of coordinated checkpointing have been proposed: blocking and non-blocking. However, they have never been compared quantitatively and their respective scalability remains unknown. The contribution of this paper is to provide the first comparison between these two approaches and a study of their scalability. We have implemented the two approaches within the MPICH environments and evaluate their performance using the NAS parallel benchmarks",2006,0, 16,Induction Motor-Drive Systems with Fault Tolerant Inverter-Motor Capabilities,"A low-cost fault tolerant drive topology for low-speed applications such as ""self-healing/limp-home"" needs for vehicles and propulsion systems, with capabilities for mitigating transistor open-circuit switch and short-circuit switch faults, is presented in this paper. The present fault tolerant topology requires only minimum hardware modifications to the conventional off-the-shelf six-switch three-phase drive, with only the addition of electronic components such as triacs/SCRs and fast-acting fuses.
In addition, the present approach offers the potential of mitigating not only transistor switch faults but also drive-related faults such as rectifier diode short-circuit faults or dc link capacitor faults. Some of the drawbacks associated with the known fault mitigation techniques, such as the need for accessibility to a motor neutral, overrating the motor to withstand fundamental rms current magnitudes above its rated rms level, the need for larger dc link capacitors, or a higher dc bus voltage, are overcome using the present approach. Given in this paper is a complete set of simulation results that demonstrate the soundness and effectiveness of the present topology.",2007,0, 17,IP core logic fault test simulation environment,"A low-level logic fault test simulation environment for embedded systems directed specifically towards application-specific integrated circuits (ASICs) and intellectual property (IP) cores is proposed in the paper. The developed simulation environment emulates a typical built-in self-testing (BIST) architecture with an automatic test pattern generator (ATPG) that sends its outputs to a circuit (core) under test (CUT) and the output streams from the CUT are fed into an output response analyzer (ORA). The paper delineates in great detail the development of the test architecture, test application and fault injection, including the relevance of the logic fault simulator. Some simulation results on specific IP cores designed using combinations from ISCAS 85 combinational and ISCAS 89 sequential benchmark circuits are provided as well for evaluation.",2010,0, 18,Surface Defects Inspection System Based on Machine Vision,"A machine vision based tinplate surface inspection system was developed. The system was composed of two parallel line scan CCD cameras, a specially designed wide-field illumination, which can overcome the vibration of the tinplate, and software based on a SOM (Self-Organizing Feature Map) neural network. The images of the tinplate were captured by the cameras. All kinds of defect candidates such as pinholes, scallops, dust and scratches were found, and their features were extracted and selected from the images. These candidates were distinguished by the SOM neural network to find the real defects. The inspection speed reached up to 1.4 m/s, the resolution was 0.1 mm, and the recognition rate was 95.45%.",2010,0, 19,Research on real-time error measurement in curve grinding process based on machine vision,"A machine vision image measurement system for online monitoring of the wheel wear degree during the curve grinding process is designed and developed. The measurement apparatus and its principle of operation are introduced in detail. Real-time images of the workpiece and wheel in the grinding zone are gathered by a CCD camera installed in the grinder. For the purpose of increasing the measurement precision, a new edge detection approach combining the Zernike moments operator with the Prewitt operator is proposed. The edge of the finished workpiece is located with sub-pixel accuracy, and then the machining error of the workpiece is calculated on-line by comparing with the theoretical curve of the workpiece. An application of its validity and the experimental results are also given.
Experimental results demonstrate that the proposed measurement method is effective, and that its detection precision and results are reasonable.",2006,0, 20,Inspection system for detecting defects in a transistor using Artificial neural network (ANN),"A machine vision system based on an ANN for identification of defects occurring in transistor fabrication is presented in this paper. The developed intelligent system can identify commonly occurring errors in transistor fabrication. The developed machine vision and ANN module is compared with the commercial MATLAB software and the results were found to be satisfactory. This work is broadly divided into four stages, namely the intelligent inspection system, machine vision module, ANN module and inspection expert system. In the first stage, a system with a camera is developed to capture the various segments of the transistor. The second stage is the image processing stage, in which the captured bitmap-format image of the transistor is filtered and its size is altered to an acceptable size for the developed ANN using Set Partitioning In Hierarchical Trees (SPIHT). These modified data are given as input to the ANN in the third stage. A generalized ANN with the backpropagation algorithm is used to inspect the transistor. The ANN is trained and the weight values are updated in such a way that the error in identification is the least possible. The output of the ANN is the inspection report. The developed system is explained with a real-time industrial application. Thus, the developed algorithms will solve most of the problems in identifying defects in a transistor.",2010,0, 21,Masquerade detection augmented with error analysis,"A masquerade attack, in which one user impersonates another, may be one of the most serious forms of computer abuse. Automatic discovery of masqueraders is sometimes undertaken by detecting significant departures from normal user behavior, as represented by a user profile formed from system audit data. A major obstacle for this type of research is the difficulty in obtaining such system audit data, largely due to privacy concerns. An immense contribution in this regard has been made by Schonlau et al., who have made available UNIX command-line data from 50+ users collected over a number of months. Most of the research in this area has made use of this dataset, so this paper takes as its point of departure the Schonlau et al. dataset and a recent series of experiments with this data framed by the same researchers. In extending that work with a new classification algorithm, a 56% improvement in masquerade detection was achieved at a corresponding false-alarm rate of 1.3%. In addition, encouraging results were obtained at a more realistic sequence length of 10 commands (as opposed to sequences of 100 commands used by Schonlau et al.). A detailed error analysis, based on an alternative configuration of the same data, reveals a serious flaw in this type of data which hinders masquerade detection and indicates some steps that need to be taken to improve future results. The error analysis also demonstrates the insights that can be gained by inspecting decision errors, instead of concentrating only on decision successes.",2004,0, 22,Fabric defect detection based on open source computer vision library OpenCV,"A method for fabric defect detection based on OpenCV, with its rich computer vision and image processing algorithms and functions, is presented. Firstly, OpenCV image processing functions implement fabric image preprocessing.
We use morphological opening and closing operations to segment the image because of its blurred defects. Secondly, a seed filling algorithm is applied to connect broken lines and keep defect edges smooth. Finally, the edge detection function is used to complete accurate positioning of the defects. Experimental results under Borland C++ Builder 6.0 show that the OpenCV-based fabric defect detection methods are simple, offer high code integration and accurate defect positioning, and can be applied to develop real-time fabric defect detection systems.",2010,0, 23,Detection and Correction of Lip-Sync Errors Using Audio and Video Fingerprints,"A method for measuring and maintaining time synchronization between an audio and video stream is described. Audio and video fingerprints are used to create a combined audio/video synchronization signature (A/V Sync Signature) at a known reference point. This signature is used at later points to measure audio/video timing synchronization relative to the reference point. This method may be used, for example, to automatically detect and correct audio/video synchronization (i.e. lip-sync) errors in broadcast systems and other applications. Advantages of the method described over other existing methods include that it does not require modification of the audio or video signals, it can respond to dynamically changing synchronization errors, and it is designed to be robust to modifications of the audio/video signals. While the system requires data to be conveyed to the detection point, this data does not need to be synchronized with, or directly attached to, the audio or video streams. As this method uses fingerprints it also enables other fingerprinting applications within systems, such as content identification and verification. In addition, it may be used to maintain synchronization of other metadata associated with audio/video streams.",2009,0,7206 24,A Method for Optimum Test Point Selection and Fault Diagnosis Strategy for BIT of Avionic System,"A method for optimum test point selection and the fault diagnosis strategy, based on the fault message matrix and features of BIT, is proposed. The fault message matrix is divided based on the weight of the test points. The diagnosis strategy is determined by dividing the fault message matrix and following the principle of detecting first and isolating next. Results show that the optimization method is suitable for BIT to select the appropriate test points and fault diagnosis procedure. Besides, the average number of test steps was reduced.",2009,0, 25,Crosstalk-Insensitive Method for Testing of Delay Faults in Interconnects Between Cores in SoCs,"A method for reliable measurement of interconnect delays is presented in the paper. The mode of test vector generation never induces crosstalk. That is why the delay measurement is reliable. Also, minimization of ground bounce noise and reduction of power consumption during the test are additional advantages. The presented method also allows localizing and identifying static faults of both stuck-at (SaX) and short types. The paper deals with the hardware that is necessary for implementing the method. The techniques for test data compression, which allow substantial reduction of the data volume transferred between the SoC and ATE, are also proposed.",2007,0, 26,Error correction using data hiding technique for JPEG2000 images,"A method of error correction for JPEG2000 images is proposed in this paper. The method uses the layer structure that is a feature of JPEG2000 and an error correction code.
The upper layers of the code stream are coded using an error correcting code, and the parity data are hidden in the lowest layer. The hidden data are used for error correction at the decoder. Several error correction codes with different strengths are selected for the main header, packet headers, and bodies. Since the resulting code stream has the same data structure as a standard JPEG2000 code stream, it can be decoded with a general decoder. Simulation results demonstrated the effectiveness of the proposed method.",2003,0, 27,Zero frequency error locking of widely tunable lasers in high spectral efficiency systems using optical injection phase lock loops,"A method of locking widely tunable lasers with zero frequency error relative to supplied optical and microwave references despite laser temperature variations was demonstrated. Locking was maintained when changing the laser submount temperature from 18 °C to 23 °C, while channel spacing variations were kept under 1 Hz. Using a two-OIPLL system, 10 Gbit/s transmission at 18 GHz channel spacing was achieved.",2002,0, 28,A practical methodology for experimental fault injection to test complex network-based systems,"A methodology for dependability assessment of complex computer systems, such as fault tolerant grids, is presented in this paper. The methodology uses communication fault injection and was built by adapting a widely accepted approach for performance analysis. To demonstrate its applicability and usefulness, the methodology was applied to a third-party grid platform using a fault injector we are developing. The paper reasons about the advantages of this methodology for performing fault injection campaigns in the prototype phase of a system.",2009,0, 29,A design of the low-pass filter using the novel microstrip defected ground structure,"A new defected ground structure (DGS) for the microstrip line is proposed in this paper. The proposed DGS unit structure can provide the bandgap characteristic in some frequency bands with only one or more unit lattices. The equivalent circuit for the proposed defected ground unit structure is derived by means of three-dimensional field analysis methods. The equivalent-circuit parameters are extracted by using a simple circuit analysis method. By employing the extracted parameters and circuit analysis theory, the bandgap effect of the proposed defected ground unit structure can be explained. By using the derived and extracted equivalent circuit and parameters, the low-pass filters are designed and implemented. The experimental results show excellent agreement with theoretical results and the validity of the modeling method for the proposed defected ground unit structure",2001,0, 30,Inverted defected ground structure for microstrip line filters reducing packaging complexity,"A new defected ground structure for microstrip line circuits was introduced by keeping the ground plane of the circuit fully metallized and etching the slots on the superstrate, which is laid directly on top of the substrate. The metal of the superstrate is connected by via holes to the ground plane. The structure has a great advantage in reducing packaging complexity, since it can be directly based on the carrier block without the need to machine a recessed region in it. Moreover, a higher Q-factor is obtained for this kind of structure. The low-pass filter based on this structure was designed, fabricated and measured.
The DGS structure located on the superstrate provides transmission zeros, improving the steepness of the transmission characteristic and the attenuation in the stop-band. The filter insertion losses are better than 0.4 dB. The measured data fit the results of the MoM simulation well.",2008,0, 31,Error concealment using affine transform for H.263 coded video transmissions,"A new error concealment method is proposed that uses motion estimation to consider actual motions, such as rotation, magnification, reduction, and parallel motion, in moving pictures. Since many videos include a variety of complex three-dimensional motions, the proposed method uses an affine transform to estimate the motion of lost data more accurately, thereby producing a higher peak signal-to-noise ratio value and better subjective video quality",2001,0, 32,A fast and efficient H.264 error concealment technique based on coding modes,"A new error concealment technique based on the coding modes is proposed for H.264 video sequences. The motion-compensation modes (i.e., block-partitioning types) of surrounding macroblocks are employed to predict the mode of a lost macroblock. This adaptive selection mechanism is combined with a refined set of candidate motion vectors. Experimental results show that the proposed method, as compared to the technique used in the JM reference software, provides a 1 to 2 dB gain in PSNR with only a 50% increase in computation time.",2010,0, 33,Hierarchical defect-oriented fault simulation for digital circuits,"A new fault model is developed for estimating the coverage of physical defects in digital circuits for given test sets. Based on this model, a new hierarchical defect-oriented fault simulation method is proposed. At the higher simulation level we use the functional fault model; at the lower level we use the defect/fault relationships in the form of a defect coverage table and the defect probabilities. A description and the experimental data are given for the probabilistic analysis of a complex CMOS gate, together with an analysis of the quality of 100% stuck-at fault test sets for two benchmark circuits in covering physical defects like internal shorts, stuck-opens and stuck-ons. It has been shown that in the worst case a test with 100% stuck-at fault coverage may have only 50% coverage for internal shorts in complex CMOS gates. It has also been shown that classical test coverage calculation based on counting defects without taking into account the defect probabilities may lead to considerable overestimation of results",2000,0, 34,New fault tolerant robotic central controller for space robot system based on ARM processor,"A new fault tolerant robotic central controller with dual processing modules is introduced. Each processing module is composed of a 32-bit ARM RISC processor and other commercial-off-the-shelf (COTS) devices. In addition, a set of fault handling mechanisms is implemented in the robotic central controller, which can tolerate a single fault. The robotic central controller software based on VxWorks is organized around a set of processes that communicate with each other through a routing process. Considering the extremely tight constraints on mass, volume, power consumption and space environmental conditions, the new fault tolerant robotic central controller has been developed.
Its excellent data processing capability is sufficient to meet the demands of space robot missions.",2008,0, 35,Class-based neural network method for fault location of large-scale analogue circuits,"A new method for fault diagnosis of large-scale analogue circuits based on the class concept is developed in this paper. A large analogue circuit is decomposed into blocks/sub-circuits and the nodes between the blocks are classified into three classes. Only those sub-circuits related to the faulty class need to be treated. Node classification reduces the scope of the search for faults, thus reducing after-test time. The proposed method is more suitable for real-time testing and can deal with both hard and soft faults. Tolerance effects are taken into account in the method. The class-based fault diagnosis principle and the neural network based method are described in some detail. Two non-trivial circuit examples are presented, showing that the proposed method is feasible.",2003,0, 36,A new approach to fault-tolerant wormhole routing for mesh-connected parallel computers,"A new method for fault-tolerant wormhole routing in arbitrary dimensional meshes is introduced. The method was motivated by certain routing requirements of an initial design of the Blue Gene supercomputer at IBM Research. The machine is organized as a three-dimensional mesh containing many thousands of nodes and the routing method should tolerate a few percent of the nodes being faulty. There has been much work on routing methods for meshes that route messages around faults or regions of faults. The new method is to declare certain nonfaulty nodes to be ""lambs."" A lamb is used for routing but not processing, so a lamb is neither the source nor the destination of a message. The lambs are chosen so that every ""survivor node,"" a node that is neither faulty nor a lamb, can reach every survivor node by at most two rounds of dimension-ordered (such as e-cube) routing. An algorithm for finding a set of lambs is presented. The results of simulations on 2D and 3D meshes of various sizes with various numbers of random node faults are given. For example, on a 32×32×32 3D mesh with 3 percent random faults and using at most two rounds of e-cube routing for each message, the average number of lambs is less than 68, which is less than 7 percent of the number 983 of faults and less than 0.21 percent of the number 32,768 of nodes.",2004,0, 37,Unreliability tracing technique for system components based on the fault tree analysis,"A new method is developed for tracing the unreliability contributions of system components and recognizing the weak parts of the system using fault tree analysis (FTA). The method is based on the minimum cut set (MCS) algorithm for evaluating the system reliability using the FTA and the proportional sharing principle (PSP). The fault tree can be expressed as MCSs and the system unreliability can be mathematically expressed by certain terms of the probability of occurrence of MCSs. An unreliability tracing (UT) principle is proposed for allocating the probability of each term to the basic events fairly and reasonably, and then the direct contribution relationship between the system unreliability and basic events can be established. The system UT sharing factors (UTSFs) are derived to easily identify the weak parts in a system.
The applicability of the proposed methods is illustrated by case studies of a simple system and a multiple-output power distribution system.",2010,0, 38,The discovery of the fault location in NIGS,"A new method is discovered for calculating the fault distance of the overhead line of the Neutral Indirect Grounded System (NIGS) in power distribution networks, in which the single phase to ground fault point or distance is difficult to detect, because the zero sequence current has a low value. It is found that the information of the fault distance is kept in the zero sequence voltage vector, which may be measured at the tail terminal of the line in question, by mining the data. Then an algorithm to calculate the fault location on the overhead lines is proposed by considering the zero sequence voltage vector at the tail terminal. The value of the zero sequence voltage is determined by the fault location, and the phase angle also contains the distance traveled by the load current to the fault point. A parameter analysis is conducted for the NIGS by considering the actual line and the parameters at the two terminals.",2010,0, 39,Bearing Fault Diagnosis Based on Feature Weighted FCM Cluster Analysis,"A new method of fault diagnosis based on feature weighted FCM is presented. The feature-weight assigned to a feature indicates the importance of the feature. This paper shows that an appropriate assignment of feature-weights can improve the performance of fuzzy c-means clustering. Feature evaluation based on a class separability criterion is discussed in this paper. Experiments show that the algorithm is able to reliably recognize not only different fault categories but also fault severities. Therefore, it is a promising approach to fault diagnosis of rotating machinery.",2008,0, 40,Development of simulation model based on directed fault propagation graph,"A new simulation modeling method is presented in this paper to deal with system-based fault mode and effect analysis models in modern complex systems with large structures. A directed fault propagation graph model based on fault influence degree is proposed and a fault propagation model is put forward. With the definitions of direct fault propagation influence degree and indirect fault propagation influence degree introduced, the propagation algorithm and search method for the fault propagation model are discussed. A visualization simulation system based on the directed fault propagation graph is developed with an object-oriented method according to the proposed fault analysis model. The simulation system can be used for fault propagation and fault influence analysis of existing complex systems; simulation results can be validated and verified by a controller area network platform, and the method is useful for fault diagnosis and analysis models in modern large complex systems.",2010,0, 41,Application of genetic algorithms to pattern recognition of defects in GIS,"A computerized pattern recognition system based on the analysis of phase resolved partial discharge (PRPD) measurements, and utilizing genetic algorithms, is presented. The recognition system was trained to distinguish between basic types of defects appearing in gas-insulated systems (GIS), such as voids in spacers, moving metallic particles, protrusions on electrodes, and floating electrodes. The classification of defects is based on 60 measurement parameters extracted from PRPD patterns.
Classification of defects appearing in GIS installations is performed using the Bayes classifier combined with genetic algorithms, and its performance is compared to that of other classifiers, including minimal-distance, percent-score and polynomial classifiers. Tests with a reference database of more than 600 individual measurements collected during laboratory experiments gave satisfactory results for the classification process",2000,0, 42,FADES: a fault emulation tool for fast dependability assessment,"A confident use of deep submicron VLSI systems requires the study of their behaviour in the presence of faults. Field-programmable gate arrays (FPGAs) are being used to conduct this study by means of fault injection in a very fast way. However, FPGA-based fault injection tools are mainly focused on classical faults like stuck-at and bit-flip, and do not cover fault models related to new semiconductor technologies like delay, pulse, stuck-open, short, open-line, bridging, and indetermination. Moreover, these tools usually require a deep fault injection background to use them. This paper presents FADES, a tool for the early and fast dependability evaluation of VLSI systems. FADES is able to inject the whole set of considered faults and also enables non-skilled users to assess their systems' dependability. The main advantages and drawbacks of FADES are reported, and some open challenges for further research are identified",2006,0, 43,Single-stage Flyback converter for led driver with inductor voltage detection power factor correction,"A constant current output Flyback converter with power factor correction used for LED driving is presented in this paper. The inductor voltage detection method is applied to acquire the inductor voltage and generate the control signal. Based on this design principle, the inner loop of input current shaping is eliminated, and the input voltage sensing and multiplier are also unnecessary. The simulation and experimental results are provided to demonstrate the effectiveness of the control scheme based on the lab prototype boards. The output load condition is set from full load to light load, and the results show that the system drives the LEDs with a high power factor.",2010,0, 44,Control chart of mean with low alpha error probability,"A control chart of process center imbalance with a low alpha error probability and stability to unknown distribution parameters is designed. At the heart of the algorithm is a hypothesis testing criterion. According to the results of the current controlled parameter measurements, at every step a calculation of the line inclination value is made and the hypothesis of its equality to zero is tested. If we accept this hypothesis, we consider the current process to be disordered.",2008,0, 45,Superconducting Fault Current Limiter Application for Reduction of the Transformer Inrush Current: A Decision Scheme of the Optimal Insertion Resistance,"A conventional superconducting fault current limiter (SFCL) is usually only connected to a power system for fault current limitation. The study described in this paper, however, attempts to use the hybrid SFCL application to reduce the transformer inrush current. To accomplish this, this paper first suggests concepts to expand the scope of the SFCL application in the power system. The power system operator should first determine the proper amount of current-limiting resistance (CLR) of the hybrid SFCL.
Therefore, this paper suggests a decision scheme of the optimal insertion resistance in an SFCL application to reduce the transformer inrush current. This scheme and the SFCL model are implemented using the electromagnetic transient program (EMTP). We determine the optimal CLR by EMTP simulation, and this value is applied to model the SFCL by the EMTP. The simulation results show the validity and effectiveness of the suggested scheme and the ability of the SFCL to reduce the inrush current.",2010,0, 46,Detection and Diagnosis of Recurrent Faults in Software Systems by Invariant Analysis,"A correctly functioning enterprise-software system exhibits long-term, stable correlations between many of its monitoring metrics. Some of these correlations no longer hold when there is an error in the system, potentially enabling error detection and fault diagnosis. However, existing approaches are inefficient, requiring a large number of metrics to be monitored and ignoring the relative discriminative properties of different metric correlations. In enterprise-software systems, similar faults tend to reoccur. It is therefore possible to significantly improve existing correlation-analysis approaches by learning the effects of common recurrent faults on correlations. We present methods to determine the most significant correlations to track for efficient error detection, and the correlations that contribute the most to diagnosis accuracy. We apply machine learning to identify the relevant correlations, removing the need for manually configured correlation thresholds, as used in the prior approaches. We validate our work on a multi-tier enterprise-software system. We are able to detect and correctly diagnose 8 of 10 injected faults to within three possible causes, and to within two in 7 out of 8 cases. This compares favourably with the existing approaches whose diagnosis accuracy is 3 out of 10 to within 3 possible causes. We achieve a precision of at least 95%.",2008,0, 47,A Cost-Effective Dependable Microcontroller Architecture with Instruction-Level Rollback for Soft Error Recovery,"A cost-effective, dependable microcontroller architecture has been developed. To detect soft errors, we developed an electronic design automation (EDA) tool that generates optimized soft error-detecting logic circuits for flip-flops. After a soft error is detected, the error detection signal goes to a developed rollback control module (RCM), which resets the CPU and restores the CPU's register file from the backup register file using a rollback program routine. After the routine, the CPU restarts from the instruction executed before the soft error occurred. In addition, there is a developed error reset module (ERM) that can restore the RCM from soft errors. We also developed an error correction module (ECM) that corrects ECC errors in RAM after error detection with no delay overheads. Testing on a 32-bit RISC microcontroller and EEMBC benchmarks showed that the area overhead was under 59% and the frequency overhead was under 9%. In a soft error injection simulation, the MTBF of random logic circuits and the MTBF of RAM were 30 and 1.34 times longer, respectively, than those of the original microcontroller.",2007,0, 48,CRINet: A secure and fault-tolerant data collection scheme using 3-way forwarding and group key management in wireless sensor networks,"A critical security threat in a WSN is the compromising of sensor nodes.
Not only can attackers use such a vulnerability to eavesdrop on the dataflow, but they can also inject bogus information into the network. However, most current secure data collection methods trade fault-tolerant ability for end-to-end protection, and thus suffer poor performance. This work proposes CRINet, a secure and fault-tolerant data collection scheme with a group key management mechanism. To achieve high reliability, sensing data would be transferred to the sink through multiple paths. EBS is applied in CRINet for group key management in order to reduce re-keying efforts. Simulation results demonstrate that the CRINet scheme is superior in terms of data confidentiality and availability.",2009,0, 49,In-line wafer inspection data warehouse for automated defect limited yield analysis,"A data warehouse approach for the automation of process zone-by-zone defect limited yield analysis is presented in this paper. The system employs pre-calculation of adder defect extraction and clustered defect recognition, a newly developed wafer-wise defect record structure, and a graphical user interface purpose-designed for data selection navigation. Analysis time can be reduced to less than 1% of that of benchmarked conventional procedures",2000,0, 50,Design of circular split-ring type Defected Ground Structure as elliptic filters,"A Defected Ground Structure in the metallic ground plane of a microstrip line is an attractive solution for achieving finite pass band, rejection band and slow-wave characteristics. A split-ring shaped DGS structure with high selectivity would be preferable owing to the demands of currently expanding communication systems within finite spectrum resources. First of all, a split-ring type DGS unit slot is designed underneath a pair of T-shaped microstrip lines [1-4]. Thus, just by applying a T-section at the upper plane of the DGS unit, a lowpass filter can be changed into a sharp-response highpass filter.",2009,0, 51,Depth Image-Based Temporal Error Concealment for 3-D Video Transmission,"A depth image-based error concealment algorithm for 3-D video transmission is proposed, which utilizes the strong correlations between 2-D video and its corresponding depth map. We first investigate the internal characteristics of the macroblock in the depth map, and then take advantage of these characteristics to accurately recover the lost motion vector for the corrupted blocks, with joint consideration of the neighbor information and the corresponding depth. Experimental results show that the proposed method provides significant improvements in terms of both objective and subjective evaluations.",2010,0, 52,Operating MicroGrid Energy Storage Control during Network Faults,"A MicroGrid is expected to operate both as a subsystem connected to the main grid and as an islanded system. However, the provision of fault currents, in an islanded MicroGrid consisting only of micro-generation interfaced with relatively low-current power electronics, is a serious system protection issue. This paper presents the novel concept of using the central energy storage system (flywheel) as the main fault current source in islanded mode. The three-phase MicroGrid test rig used at the University of Manchester and the flywheel control system are described. The importance of accurate systems modeling of the whole microgrid and energy storage unit is shown. A fault study is carried out on the test rig and in PSCAD.
The flywheel inverter system is shown to contribute enough fault current for a sufficient duration to cause the system protective device to clear the fault.",2007,0, 53,A cost-driven lithographic correction methodology based on off-the-shelf sizing tools,"As minimum feature sizes continue to shrink, patterned features have become significantly smaller than the wavelength of light used in optical lithography. As a result, the requirements for dimensional variation control, especially in critical dimension (CD) 3σ, have become more stringent. To meet these requirements, resolution enhancement techniques (RET) such as optical proximity correction (OPC) and phase shift mask (PSM) technologies are applied. These approaches result in a substantial increase in mask costs and make the cost of ownership (COO) a key parameter in the comparison of lithography technologies. No concept of function is injected into the mask flow; that is, current OPC techniques are oblivious to the design intent, and the entire layout is corrected uniformly with the same effort. We propose a minimum cost of correction (MinCorr) methodology to determine the level of correction for each layout feature such that prescribed parametric yield is attained with minimum total RET cost. We highlight potential solutions to the MinCorr problem and give a simple mapping to traditional performance optimization. We conclude with experimental results showing that substantial RET costs may be saved while maintaining a given desired level of parametric yield.",2003,0, 54,CPN model for a Hierarchical Fault Tolerance Protocol for Mobile Agent systems,"A mobile agent (MA) is an autonomous and identifiable software process that travels through a network of heterogeneous machines and acts autonomously on behalf of the user. Improving the survivability of MAs in the presence of various faults is the major issue concerning the implementation of MAs. This paper presents a hierarchical fault tolerance protocol (HFTP) for mobile agents, which can tolerate host failure, system failure as well as link failure by grouping the hosts within a network and by rear-guard based migration of the MA in the global network. It also presents Colored Petri Net (CPN) based architectural modeling of HFTP, which includes systematic specification, design and implementation of components of the system. Various useful results have been drawn by simulation as well as data collector and monitoring tools. We also present a formal analysis of the protocol.",2008,0, 55,Torsional oscillations of the turbine-generator due to network faults,"A model of the electromechanical system, suitable for the analysis of torsional oscillations due to power system faults, is established. Results of an example of computer simulation of transient torsional torques in the shaft-line due to a three-phase fault and the subsequent fault clearing, as obtained by the model, are presented. The effect of the chosen fault clearing time is discussed.",2010,0, 56,A model-based approach for fault-tolerant control,"A model-based controller architecture for fault-tolerant control (FTC) is presented in this paper. The controller architecture is based on the Youla-Jabr-Bongiorno-Kucera (YJBK) parameterization. The FTC architecture consists of two central parts: a fault detection and isolation (FDI) part and a controller reconfiguration part. The theoretical basis for the architecture is given, followed by an investigation of the individual parts of the architecture.
Finally, system interconnection is considered with respect to the described controller architecture.",2010,0, 57,Sensor fault tolerant generic model control for nonlinear systems,"A modified Strong Tracking Filter (STF) is used to develop a new approach to sensor fault tolerant control. Generic Model Control (GMC) is used to control the nonlinear process while the process runs normally, because of its robust control performance. If a fault occurs in the sensor, a sensor bias vector is then introduced into the output equation of the process model. The sensor bias vector is estimated on-line during every control period using the STF. The estimated sensor bias vector is used to develop a fault detection mechanism to supervise the sensors. When a sensor fault occurs, the conventional GMC is switched to a fault tolerant control scheme, which is, in essence, a state estimation and output prediction based GMC. The laboratory experimental results on a three-tank system demonstrate the effectiveness of the proposed Sensor Fault Tolerant Generic Model Control (SFTGMC) approach.",2000,0, 58,Research of Remote Fault Diagnosis System Based on Multi-Agent,"A multi-agent based remote fault diagnosis system is an important system for high speed and automation, which can not only monitor the status of the remote device but also serve the remote device. Remote fault diagnosis systems are vital aspects of the automation process; in this sense, remote diagnosis systems should support decision-making tools, enterprise thinking and flexibility. In this paper a kind of remote diagnosis system based on multi-agent is presented. This model is based on a generic framework using multi-agent systems. Specifically, this paper analyses the architecture of the remote fault diagnosis system and the collaboration mechanism between agents. The method brought forward in the paper is generally applicable to general fault diagnosis.",2010,0, 59,A Multiple Faults Test Generation Algorithm Based on Neural Networks and Chaotic Searching for Digital Circuits,A multiple faults test generation algorithm based on neural networks for digital circuits is proposed in this paper because test generation for multiple faults in digital circuits is more difficult. This algorithm first changes multiple faults into a single fault and constructs the constraint network of the fault for the single-fault circuit with the method of neural networks. The test vectors for multiple faults in the original circuit can be obtained by solving for the minimum of the energy function of the constraint network with a chaotic searching method. The experimental results on some international standard circuits demonstrate the feasibility of the algorithm.,2010,0, 60,Multiwave interaction analysis of a coaxial Bragg structure with a localized defect introduced in sinusoidal corrugations,"A multiwave interaction formulation is presented to investigate the effects of a localized defect on the reflective spectrum of a coaxial Bragg structure with sinusoidal corrugations. Good agreement has been achieved between the theoretical results obtained by the present formulation and those simulated by the software HFSS, which confirms the validity and the significance of the multiwave interaction formulation.
It is found that the localized defect creates defected eigenmodes within each reflective band gap of the initial standard Bragg structure, whose parameters can be controlled by the location of the localized defect.",2009,0, 61,An algorithm for dividing ambiguity sets for analog fault dictionary,"A new algorithm for dividing ambiguity sets based on the lowest error probability for the analog fault dictionary is proposed. The problem of tolerance affecting diagnostic accuracy in analog circuits is discussed. A statistical approach is used to derive the probability distribution of the tolerances of the output signal characteristics both in the absence and in the presence of faults in the circuit. For example, in this paper, the Monte Carlo technique has been applied for the analysis of tolerance. The lowest error probabilities are computed according to a Bayesian strategy. Using the PSpice software package, a detailed simulation program was developed to implement the proposed technique. The simulation software was packaged and then integrated with a symbolic analysis program that divides the ambiguity sets and structures the software package for the analysis before testing in the fault dictionary. Furthermore, the proposed approach can be easily extended to select the testing nodes, leading to the selection of optimized nodes for the analog fault diagnosis.",2002,0, 62,A Zero Module Current Obtaining Approach Based on Magnetic Induction for Single Phase Grounding Fault,"A new approach based on magnetic field induction is presented to obtain the transient zero module current for a single phase grounding fault of overhead lines. The paper analyses the characteristics of the magnetic field around the overhead lines and shows that the magnetic field under the lines is proportional to the zero module current, and the zero module current can be measured by sensing the magnetic field. The paper proposes a zero-module current obtaining approach using a Hall sensor to sense the magnetic field, and elaborates the solution to the key issues in practical applications; finally, simulation and experiment results demonstrate the feasibility of the approach.",2010,0, 63,Parametric fault trees with dynamic gates and repair boxes,"A new approach is proposed to include s-dependencies in fault tree (FT) models. With respect to previous techniques, the approach presented in this paper is based on two peculiar powerful features. First, a parameterization technique, referred to as parametric FT (PFT), is adopted to fold equal subtrees (or basic events) in order to obtain a more compact FT representation. It is shown that parameterization can be conveniently adopted as well for dynamic gates. Second, the PFT can be modularized and each module translated into a high level colored Petri net in the form of a stochastic well-formed net (SWN). SWNs generate a lumped Markov chain, and the saving in the dimension of the state space can be very substantial with respect to standard (non colored) Petri nets. Translation of PFT modules into SWNs has proved to be very flexible, and various kinds of new dependencies can be easily accommodated. In order to exploit this flexibility, a new primitive, called the repair box, is introduced. A repair box, attached to an event, causes the starting of a repair activity of all the components that failed as the event occurs. In contrast to all the previous FT based models, the addition of repair boxes enables the approach to model cyclic behaviors.
The proposed approach is referred to as dynamic repairable PFT (DRPFT). A tool supporting DRPFT is briefly described and the tool is validated by analyzing a benchmark proposed recently in the literature for quantitative comparison [H. Zhu et al., 2001].",2004,0, 64,Fault-accommodating thruster force allocation of an AUV considering thruster redundancy and saturation,"A new approach to the fault-accommodating allocation of thruster forces of an autonomous underwater vehicle (AUV) is investigated in this paper. This paper presents a framework that exploits the excess number of thrusters to accommodate thruster faults during operation. First, a redundancy resolution scheme is presented that considers the presence of an excess number of thrusters along with any thruster faults and determines the reference thruster forces to produce the desired motion. This framework is then extended to incorporate a dynamic state feedback technique to generate reference thruster forces that are within the saturation limit of each thruster. Results from both computer simulations and experiments are provided to demonstrate the viability of the proposed scheme.",2002,0, 65,Design and realization of a new compact branch-line coupler using defected ground structure,"A new compact branch-line directional coupler is proposed, combining the T-model branch-line coupler with a defected ground structure (DGS). Using transmission theory, the parameter selection limit of the T-model equivalent structure is discussed first. Then a T-model branch-line coupler with DGS is proposed and optimally designed with simulation software. The measurement results show that the proposed coupler has the advantages of compactness and good passband flatness.",2008,0, 66,"Decoding of the (24, 12, 8) extended Golay code up to four errors","A new decoder is proposed to decode the (24, 12, 8) binary extended Golay code up to four errors. It consists of the conventional hard decoder for correcting up to three errors, the detection algorithm for four errors and the soft decoding for four errors. For a weight-4 error in a received 24-bit word, Method 1 or 2 is developed to determine all six possible error patterns. The emblematic probability value of each error pattern is then defined as the product of four individual bit-error probabilities corresponding to the locations of the four errors. The most likely one among these six error patterns is obtained by choosing the maximum of the emblematic probability values of all possible error patterns. Finally, simulation results of this decoder in additive white Gaussian noise show that at least 93% and 99% of weight-4 error patterns that occur are corrected if the two Eb/N0 ratios are greater than 2 and 5 dB, respectively. Consequently, the proposed method can achieve a better percentage of successful decoding for four errors at variable signal-to-noise ratios than Lu et al.'s algorithm in software. However, the speed of the method is slower than Lu et al.'s algorithm.",2009,0, 67,"Determining the Amount of Audio-Video Synchronization Errors Perceptible to the Average End-User," The Media and Acoustics Perception Lab (MAPL) designed a study to determine the minimum amount of audio-visual synchronization (a/v sync) errors that can be detected by end-users. Lip synchronization is the most noticeable a/v sync error, and was used as the testing stimulus to determine the perceptual threshold of audio leading errors.
The results of the experiment determined that the average audio leading threshold for a/v sync detection was 185.19 ms, with a standard deviation of 42.32 ms. This threshold determination of lip sync error (with audio leading) will be widely used for validation and verification infrastructures across the industry. By implementing an objective pass/fail value into software, the system or network under test is held against criteria which were derived from a scientific subjective test. ",2008,0, 68,"Design and development of a 15 kV, 20 kA HTS fault current limiter","A 15 kV class high temperature superconducting fault current limiter was developed as part of a Department of Energy Superconductivity Partnership Initiative (SPI) Phase II effort. This is an inductive/electronic fault current limiter (FCL) that can double as a fast sub-cycle solid state breaker. The said device was shipped to Southern California Edison (SCE) Center Substation at Norwalk, CA from General Atomics on June 15, 1999. Preliminary high voltage and high current testing was conducted. The pre-commercial FCL unit houses three of the world's largest Bi-2223 coils (solenoids each with an outside diameter of 1 m and a coil length of 0.75 m), collaborated by GA and IGC. These coils will operate at 35 K and be able to carry a continuous DC current of 2000 A as well as an AC pulsed current of 9000 A. Detailed specification of the FCL device and a brief description of its various subsystems will be given. Finally, test results at Center Substation are summarized and future work outlined. This Phase II FCL device is important as it has the potential to become the first major commercial product for HTS power utility application.",2000,0, 69,"Automated fault data collection, analysis, and reporting","A brief summary of the NERC Classification and Standards, NERC standards for Digital Fault Recorders and requirements for Automated Fault reporting are presented. This paper describes a method for meeting these requirements. A common architecture is proposed to implement an automated data collection tool utilizing the existing infrastructure. Two data transfer tools are described & compared. In addition to meeting NERC requirements for Disturbance Monitoring Equipment, the benefits of implementing an enterprise-wide Network Based Fault Data Collection and Analysis system are discussed.",2009,0, 70,Validation of guidance control software requirements specification for reliability and fault-tolerance,"A case study was performed to validate the integrity of a software requirements specification (SRS) for guidance control software (GCS) in terms of reliability and fault-tolerance. A partial verification of the GCS specification resulted. Two modeling formalisms were used to evaluate the SRS and to determine strategies for avoiding design defects and system failures. Z was applied first to detect and remove ambiguity from a part of the natural language based (NL-based) GCS SRS. Next, statecharts and activity-charts were constructed to visualize the Z description and make it executable. Using this formalism, the system behavior was assessed under normal and abnormal conditions. Faults were seeded into the model (i.e., an executable specification) to probe how the system would perform. The result of our analysis revealed that it is beneficial to construct a complete and consistent specification using this method (Z-to-statecharts). 
We discuss the significance of this approach, compare our work with similar studies, and propose approaches for improving fault tolerance. Our findings indicate that one can better understand the implications of the system requirements using the Z-to-statecharts approach to facilitate their specification and analysis. Consequently, this approach can help to avoid the problems that result when incorrectly specified artifacts (i.e., in this case requirements) force corrective rework.",2002,0, 71,Analysis of Hyperion data with the FLAASH atmospheric correction algorithm,"A combination of good spatial and spectral resolution makes visible to shortwave infrared spectral imaging from aircraft or spacecraft a highly valuable technology for remote sensing of the Earth's surface. Many applications require the elimination of atmospheric effects caused by molecular and particulate scattering; a process known as atmospheric correction, compensation, or removal. The Fast Line-of-sight Atmospheric Analysis of Spectral Hypercubes (FLAASH) atmospheric correction code derives its physics-based algorithm from the MODTRAN4 radiative transfer code. A new spectral recalibration algorithm, which has been incorporated into FLAASH, is described. Results from processing Hyperion data with FLAASH are discussed.",2003,0, 72,Comparison of the main types of fault-tolerant electrical drives used in vehicle applications,"A comparative study of several fault-tolerant electrical drives is presented in this paper. As far as the application is concerned, the authors' attention was oriented towards vehicle transportation. Thus, the main electrical drives under study are: the induction, the switched reluctance and the permanent magnet synchronous machine, respectively. The present work will explore the aforementioned drives' capabilities in terms of fault-tolerant operation. The authors present a substantial study of the fault-tolerant issue in electrical drives by using the finite element method (FEM). During this numerical analysis many phenomena will be emphasized and, together with some tests, final conclusions will be drawn.",2008,0, 73,A new protection algorithm for EHV transmission line based on singularity detection of fault transient voltage,A new non-unit voltage protection algorithm for extra high voltage (EHV) transmission lines is presented in this paper. The singularity of fault transient voltage signals at one terminal is utilized to discriminate clearly between internal and external faults. The Lipschitz exponent (LE) is obtained from the wavelet modulus maxima to detect the singularity of signals. A typical 500 kV EHV transmission system has been simulated by ATP to evaluate the scheme. The simulation results show that this scheme is capable of providing correct responses under various system configurations and fault conditions.,2005,0, 74,Position and speed sensorless control for PMSM drive using direct position error estimation,"A new position and speed sensorless control approach is proposed for permanent magnet synchronous motor (PMSM) drives. The controller directly computes an error for the estimated rotor position and adjusts the speed according to this error. The derivation of the position error equation and an idea for eliminating the differential terms are presented. The proposed approach is applied to a vector controlled PMSM AC drive and phase locked loop (PLL) control is employed for speed adjustment. Several simulations are carried out.
The proposed control scheme is verified by experiments using a 3.7 kW salient pole PMSM.",2001,0, 75,A Method for the Automatic Selection of Test Frequencies in Analog Fault Diagnosis,"A new procedure for the selection of test frequencies in the parametric fault diagnosis of analog circuits is presented. It is based on the evaluation of algebraic indices, such as the condition number and the norm of the inverse, of a sensitivity matrix of the circuit under test. This matrix is obtained starting from the testability analysis of the circuit. A test index (T.I.) that permits the selection of the set of frequencies that best leads to locating parametric faults in analog circuits is defined. By exploiting symbolic analysis techniques, a program that implements the proposed procedure has been developed. It yields the requested set of frequencies by means of an optimization procedure based on a genetic algorithm that minimizes the T.I. Examples of the application of the proposed procedure are also included.",2007,0, 76,A Gibbs-sampler approach to estimate the number of faults in a system using capture-recapture sampling [software reliability],"A new recapture debugging model is suggested to estimate the number of faults in a system and the failure intensity of each fault. The Gibbs sampler and the Metropolis algorithm are used in this inference procedure. A numerical illustration suggests a notable improvement in the estimation of these quantities compared with that of a removal debugging model.",2000,0, 77,Improved unsynchronized two-end algorithm for locating faults in power transmission lines,"A new two-end algorithm for locating faults in a single power transmission line is presented. The presence of the extra link between the line terminals is taken into account. A distance to fault is determined from unsynchronized measurements of voltages from both line ends and with additional, limited use of currents. Measured currents are utilized under the condition that they come from current transformers, which are not saturated. As a result, a certain improvement of fault location - when compared to the other known methods - is achieved. The new algorithm has been tested with fault data obtained from versatile ATP-EMTP simulations. Sample examples and results of fault location accuracy evaluation are reported and discussed in the paper.",2003,0, 78,Triggered vacuum switch-based fault current limiter,"A new type of fault current limiter (FCL) is proposed based on the triggered vacuum switch (TVS). The TVS-based FCL (TFCL) is mainly composed of a capacitor, a current-limiting reactor connected with the capacitor in series, and a TVS connected with the capacitor in parallel. With the TVS in the off or on state, the whole TFCL behaves as conventional series compensation or fault current limitation, respectively. Compared with other types of FCLs, such as superconducting and thyristor/GTO-based FCLs, the TFCL is distinguished by its characteristics, such as high capacity, loss-free operation, and low price. The digital simulation and prototype experiment based on the LC resonant test circuit show that it is feasible to develop the TFCL.",2001,0,7188 79,Fault Tolerant Non-trivial Repeating Pattern Discovering for Music Data,"A non-trivial repeating pattern is commonly used in analyzing the repeated part of a music object and looking for the theme. Non-trivial repeating patterns exclude those patterns included in other longer patterns such that they can reduce the redundancy and speed up music search.
So far, existing approaches discover a repeating pattern in such a way that the sequence of notes in a music object appears more than once with exact matching. If we allow similar sequences with partially different notes to also count as a repeating pattern, we can reduce the number of repeating patterns and construct more efficient music indexes. A more accurate music theme could also be analyzed. Therefore, in this paper, we propose a fault-tolerant non-trivial repeating pattern discovering technique. The experimental results show that our approach can not only reduce the number of non-trivial repeating patterns but also improve the hit ratios of queries for music databases.",2006,0, 80,Semiconductor production schedule check and correction technique through mobile agent system,"A novel agent system that can check and correct machine schedules in semiconductor production was proposed. The system consists of stationary machine agents that control their machine schedules, mobile agents that move between the machines, machine schedule files described in XML, and a time buffer stage where lots wait to keep the machine schedules. The real-time scheduling system proceeds according to the rules of mobile agent generation, machine scheduling, and machine and mobile agent collaboration. The system was confirmed through computer simulation, assuming a small-scale semiconductor production line.",2005,0, 81,A new algorithm of improving fault location based on SVM,"A fault location algorithm using estimated line parameters is provided in this paper. The characteristic of this algorithm is its use of estimated line parameters, so the influence of the line parameters is eliminated. Support vector machines theory is used to estimate the transmission line parameters, which is a nonlinear black box modeling problem. The historical data is used as the training sample. EMTP simulation shows that this method notably improves the accuracy of fault location.",2004,0,7225 82,Modelling and analysing fault propagation in safety-related systems,"A formal specification for analysing and implementing multiple fault diagnosis software is proposed in this paper. The specification computes all potential fault sources that correspond to a set of triggered alarms for a safety-related system, or part of a system. The detection of faults occurring in a safety-related system is a fundamental function that needs to be addressed efficiently. Safety monitors for fault diagnosis have been extensively studied in areas such as aircraft systems and chemical industries. With the introduction of intelligent sensors, diagnosis results are made available to monitoring systems and operators. For complex systems composed of thousands of components and sensors, the diagnosis of multiple faults and the computational burden of processing test results are substantial. This paper addresses the multiple fault diagnosis problem for zero-time propagation using a fault propagation graph. Components represented as nodes in a fault propagation graph are allocated with alarms. When faults occur and are propagated, some of these alarms are triggered. The allocation of alarms to nodes is based on a severity analysis performed using a form of failure mode and effect analysis on components in the system.",2003,0, 83,Gate level fault diagnosis in scan-based BIST,"A gate level, automated fault diagnosis scheme is proposed for scan-based BIST designs.
The proposed scheme utilizes both fault capturing scan chain information and failing test vector information and enables location identification of single stuck-at faults to a neighborhood of a few gates through set operations on small pass/fail dictionaries. The proposed scheme is applicable to multiple stuck-at faults and bridging faults as well. The practical applicability of the suggested ideas is confirmed through numerous experimental runs on all three fault models.",2002,0, 84,Grid Connection to Stand Alone Transitions of Slip Ring Induction Generator During Grid Faults,"Grid connected power generation systems based on superior controllers of active and reactive power are useless during grid failures such as a grid short-circuit or line breaking. Therefore the change of operation mode from grid connection to stand alone allows for uninterruptible supply of a selected part of the grid connected load. However, in the stand alone operation mode the superior controllers should provide fixed amplitude and frequency of the generated voltage in spite of the load nature. Moreover, a soft transition from grid connection mode to stand alone operation requires that the mains outage detection method be applied. A grid voltage recovery requires a change of the generator operational mode from stand alone to grid connection. However, the protection of a load from a rapid change of the supply voltage phase is necessary. This may be achieved by synchronization of the generated and grid voltages and controllable soft connection of the generator to the grid. The paper presents the transients of controllable soft connection and disconnection to the grid of the variable speed doubly fed induction generator (DFIG) power system. A description of the mains outage detection methods for the DFIG is based on the grid voltage amplitude and frequency measurement and comparison with standard values. Also an angle controller, between generated and grid voltages, for the synchronization process is described. A short description of the sensorless direct voltage control of the autonomous doubly fed induction generator (ADFIG) is presented. All the presented methods are proved using PSIM simulation software and under laboratory conditions, and oscillograms with test results are presented in the paper. A 2.2 kW slip-ring induction machine was applied as a generator and a 3.5 kW DC motor was used as a prime mover for speed adjustment. The switching and sampling frequencies are equal to 8 kHz. For filtering the switching frequency distortions in the output voltage, external capacitances equal to 21 μF per phase are connected to the stator. The control algorithm is implemented in a DSP controller built on a floating point ADSP-21061 with Altera/FPGA support.",2006,0, 85,Jump Simulation: A Technique for Fast and Precise Scan Chain Fault Diagnosis,"A diagnosis technique is presented to locate seven types of single faults in scan chains, including stuck-at faults and timing faults. This technique implements Jump Simulation, a novel parallel simulation technique, to quickly search for the upper and lower bounds of the fault. Regardless of the scan chain length, Jump Simulation packs multiple simulations into one so the simulation time is short. In addition, Jump Simulation tightens the bounds by observing the primary outputs and scan outputs of good chains, which are ignored by most previous techniques.
Experiments on ISCAS'89 benchmark circuits show that, on average, only three failing patterns are needed to locate faults within ten scan cells. The proposed technique is still very effective when failure data is truncated due to limited ATE memory.",2006,0, 86,Diagnosis of single stuck-at faults and multiple timing faults in scan chains,"A diagnosis technique to locate single stuck-at faults and multiple timing faults in scan chains is presented. This technique applies single excitation (SE) patterns, in which only one bit is flipped in the presence of multiple faults. With SE patterns, the problem of unknown values in scan chains is eliminated. The diagnosis result is therefore deterministic, not probabilistic. In addition to the first fault, this technique also diagnoses the remaining timing faults by applying multiple excitation patterns. Experiments on benchmark circuits show that average diagnosis resolutions are mostly less than five, even for the tenth fault in the scan chain.",2005,0, 87,Armature Fault Diagnostics of a Commutator Motor using the Frequency Response Analysis,"A diagnostic method for armatures of commutator motors using the frequency response analysis method (FRA) is presented in this paper. The proposed method can, besides detecting the fault itself, recognize and categorize the fault types. In a first step, the magnetic and capacitive couplings of the single armature components are analyzed. Thereafter, the position-dependent impedance curves of healthy and faulty armatures are obtained by measurements and used for the detection of winding faults. Taking as criteria the main resonant points of the frequency and position dependent armature impedance curve, a diagnostic method that considers three different fault types is developed.",2007,0, 88,Calculation of transverse voltages of communication lines induced by the fault current of power system,"A double-line model is presented to calculate the transverse voltage of communication lines induced by the fault current of power lines. By dividing the communication line into several fictitious segments, a chain composed of coupling π-type circuits with distributed sources is formed. The enhanced node voltage analysis (ENVA) is also developed in order to evaluate such a model. The ENVA cuts the number of nodes down greatly by treating the active and coupling impedance branches as a whole. In addition, the transverse voltages in the time domain can be obtained easily from those calculated in the frequency domain by means of the fast Fourier transform. The numerical examples prove the validity and efficiency of the method by comparison with analytical results. The model is of significance to the design and the rights-of-way selection of power lines and communication lines.",2002,0, 89,Coverage method for FPGA fault logic blocks by spares,"A fault coverage method for digital system-on-chip by means of traversing the logic block matrix to repair the FPGA components is proposed. The method enables obtaining the solution in the form of a quasioptimal coverage of all faulty blocks by a minimum number of spare tiles.
A choice of one of two traversal strategies for rows or columns of the logic block matrix is realized on the basis of the structurization criteria, which determine the number of faulty blocks reduced to the unit modified matrix of rows or columns.",2010,0, 90,Design and Construction of a Magnetic Fault Current Limiter,A fault current limiter using permanent magnets has been designed and its performance simulated using a two-dimensional time-stepping finite-element method incorporating a model of hysteresis for hard magnetic materials.,2006,0, 91,Geometric and shading correction for images of printed materials using boundary,"A novel technique that uses boundary interpolation to correct geometric distortion and shading artifacts present in images of printed materials is presented. Unlike existing techniques, our algorithm can simultaneously correct a variety of geometric distortions, including skew, fold distortion, binder curl, and combinations of these. In addition, the same interpolation framework can be used to estimate the intrinsic illumination component of the distorted image to correct shading artifacts. We detail our algorithm for geometric and shading correction and demonstrate its usefulness on real-world and synthetic data.",2006,0, 92,Optimized design of a low-pass filter using defected ground structures,"A novel three-pole low-pass filter is designed using a low impedance microstrip line and one DGS section in this paper. An equivalent circuit model of a defected ground structure (DGS) is applied to study the characteristics of the DGS. Parameters of the model are extracted from the EM simulation results by matching it to a one-pole low-pass filter. The lumped element values of the low-pass filter are optimized in a circuit simulator by applying the circuit model of the DGS. It is demonstrated that the filter can provide a sharp rate of attenuation in the stop-band as predicted. To further verify this method, a filter using DGS is fabricated and measured. The comparison between simulation and measurement confirms the effectiveness of the proposed method.",2005,0, 93,Automobile engine fault diagnosis using neural network,A number of diagnostic systems for vehicle maintenance/repair have been developed in recent years. These systems are employed for diagnosing a variety of faults in the vehicle and are available at the service level. We have made an attempt to design a diagnostic system for the detection of faults based on a neural network. The system developed is based on a fault table for the engine. Such a diagnostic module is aimed at increasing the utility of the system.,2001,0, 94,"A comparative analysis of network dependability, fault-tolerance, reliability, security, and survivability","A number of qualitative and quantitative terms are used to describe the performance of what has come to be known as information systems, networks or infrastructures. However, some of these terms either have overlapping meanings or contain ambiguities in their definitions, presenting problems to those who attempt a rigorous evaluation of the performance of such systems. The phenomenon arises because the wide range of disciplines covered by the term information technology have developed their own distinct terminologies. This paper presents a systematic approach for determining common and complementary characteristics of five widely-used concepts: dependability, fault-tolerance, reliability, security, and survivability.
The approach consists of comparing definitions, attributes, and evaluation measures for each of the five concepts and developing corresponding relations. Removing redundancies and clarifying ambiguities will help the mapping of broad user-specified requirements into objective performance parameters for analyzing and designing information infrastructures.",2009,0, 95,Packet-level iterative errors-and-erasures decoding for SFH spread-spectrum communications with Reed-Solomon codes and differential encoding,"A packet-level iterative detection technique that employs errors-and-erasures decoding has been described previously for SFH communications using Reed-Solomon coding. The technique enhances the performance of the SFH system in intersymbol-interference channels with only a minimal increase in complexity over one-shot errors-and-erasures decoding. In this paper, the performance of iterative EE decoding is considered for a SFH system with differentially encoded transmissions. It is shown that the use of differential encoding improves the performance of packet-level iterative detection in an AWGN channel with only a modest increase in detection complexity, and it also improves the performance in an intersymbol-interference channel in many instances. The packet size, the target probability of error, and the channel impulse response are considered, and the effect of each on the performance gain and the complexity is examined",2005,0, 96,Decorrelating compensation scheme for coefficient errors of a filter bank parallel A/D converter,"A parallel A/D conversion scheme with a filter bank for low-IF receivers is presented. The analysis filters of the filter bank divide the frequency components of the received signal, and achieve parallel A/D conversion. Therefore, the required conversion rates and the resolution of the A/D converters can be reduced and the receiver can demodulate wideband signals. As the analysis filters consist of analog components, their coefficients include errors. These errors cause mutual interference between signals in orthogonal frequencies. In order to remove this interference, a decorrelating compensation scheme is proposed.",2002,0, 97,"Pattern recognition-a technique for induction machines rotor fault detection ""eccentricity and broken bar fault""","A pattern recognition technique based on Bayes minimum error classifier is developed to detect broken rotor bar faults and static eccentricity in induction motors at the steady state. The proposed algorithm uses stator currents as input without any other sensors. First, rotor speed is estimated from stator currents, then appropriate features are extracted. The produced feature vector is normalized and fed to the trained Bayes minimum error classifier to determine if motor is healthy or has incipient faults (broken bar fault, static eccentricity or both). Only number of poles and rotor slots are needed as pre-knowledge information. Theoretical approach together with experimental results derived from a 3 hp AC induction motor show the strength of this method. 
In order to cover many different motor load conditions, data are derived from 10% to 130% of the rated load for both a healthy induction motor and an induction motor with a rotor having 4 broken bars and/or static eccentricity.",2001,0, 98,Response space construction for neural error correction,"A physiological neuron model that incorporates the recognized prototype of an inhibitory synapse was analyzed in terms of the effects of isolated inhibitory post-synaptic potentials on its ongoing behavior. The nonstationary, transient activity resulting from these perturbations cannot be analyzed in terms of motion on some attractor (because of long-duration aftereffects) nor linearized (as the perturbations are large). Instead, results suggest that changes in the value of either of the system's slow state variables may be used to construct a global response space, within which all attractors and nonstationary behaviors exist.",2004,0, 99,NoC Interface for fault-tolerant Message-Passing communication on Multiprocessor SoC platform,A prevalent design paradigm in electronic systems design is the usage of multiple programmable processors on general purpose Multiprocessor System-on-Chip (MPSoC) platforms where processors and other sub-systems communicate through communication infrastructures called Network-on-Chip (NoC). This paper presents a new approach to a NoC Interface (NI) called the Micronswitch Interface (MSI) designed for message-passing communication with a light-weight Micron Message-Passing (MMP) protocol on the Micronmesh MPSoC platform. The operation of the MSI Hardware (HW) and Software (SW) is tightly coupled with that of the MMP protocol in order to improve communication performance. The MSI provides mechanisms for efficient buffer management and fault-tolerant communication which will be necessary for reliable and efficient operation of the MPSoCs. Performance analyses show that the MSI is also able to produce good throughput and latency.,2009,0, 100,Extended fault modeling used in the space shuttle PRA,"A probabilistic risk assessment (PRA) has been completed for the space shuttle with NASA sponsorship and involvement. This current space shuttle PRA is an advancement over past PRAs conducted for the space shuttle in the technical approaches utilized and in the direct involvement of the NASA centers and prime contractors. One of the technical advancements is the extended fault modeling techniques used. A significant portion of the data collected by NASA for the space shuttle consists of faults, which are not yet failures but have the potential of becoming failures if not corrected. This fault data consists of leaks, cracks, material anomalies, and debonding faults. Detailed, quantitative fault models were developed for the space shuttle PRA which involved assessing the severity of the fault, detection effectiveness, recurrence control effectiveness, and mission-initiation potential. Each of these attributes was transformed into a quantitative weight to provide a systematic estimate of the probability of the fault becoming a failure in a mission. Using the methodology developed, mission failure probabilities were estimated from collected fault data. The methodology is an application of counter-factual theory and defect modeling which produces consistent estimates of failure rates from fault rates. Software was developed to analyze all the relevant fault data collected for given types of faults in given systems. The software allowed the PRA to be linked to NASA's fault databases.
This also allows the PRA to be updated as new fault data is collected. This fault modeling and its implementation with FRAS was an important part of the space shuttle PRA.",2004,0, 101,Teaching the Art of Fault Diagnosis in Electronics by a Virtual Learning Environment,"A virtual learning environment (VLE) to improve understanding of simple fault finding was created from a series of Web pages, an online quiz with automated marking, and a local-area-network-based simulator. It was tested on 57 first-year students (in 2002) and 69 students in 2003, taking a module in engineering design in electrical engineering in which a battery charger was designed and constructed. The results indicate that there was better than 100% improvement in the number of working battery chargers in both tested years. In addition, the students who used the VLE produced more working chargers and were better able to identify circuit blocks than those that did not. The learning approach is described by the adaptive character of thought cited in the present paper.",2005,0, 102,Wavelet neural network method for fault diagnosis of push-pull circuits,"A wavelet neural network method for fault diagnosis of push-pull circuits is presented. Firstly, output voltage signals under faulty conditions are obtained with simulation. Then wavelet coefficients of output voltage signals are gained by Daubechies wavelet decomposition, and faulty feature vectors are extracted from the coefficients. After training the networks with faulty feature vectors, the wavelet neural network model of the circuit fault diagnosis system is built. The simulation result shows that the fault diagnosis method for push-pull circuits with a wavelet neural network is effective.",2005,0, 103,Do stack traces help developers fix bugs?,"A widely shared belief in the software engineering community is that stack traces are much sought after by developers to support them in debugging. But limited empirical evidence is available to confirm the value of stack traces to developers. In this paper, we seek to provide such evidence by conducting an empirical study on the usage of stack traces by developers from the ECLIPSE project. Our results provide strong evidence to this effect and also throw light on some of the patterns in bug fixing using stack traces. We expect the findings of our study to further emphasize the importance of adding stack traces to bug reports and that in the future, software vendors will provide more support in their products to help general users make such information available when filing bug reports.",2010,0, 104,An Edge-Adaptive Block Matching Algorithm for Error Concealment,A widely-used block matching algorithm (BMA) for error concealment may suffer from the deteriorated quality of a concealed block that includes multiple objects in different motion directions. This paper proposes an edge-adaptive BMA that decomposes a damaged 16×16 macroblock (MB) into four 8×8 blocks and conceals the 8×8 blocks together only when they belong to the same object. The edge-adaptive BMA detects edges on MB boundaries and uses the number and positions of the edges to determine which 8×8 blocks belong to the same object.
The proposed algorithm improves PSNR by an average of 0.27 dB compared with the existing BMA for error concealment.,2007,0, 105,A fault-tolerant protocol for energy-efficient permutation routing in wireless networks,"A wireless network (WN) is a distributed system where each node is a small hand-held commodity device called a station. Wireless sensor networks have received increasing interest in recent years due to their usage in monitoring and data collection in a wide variety of environments like remote geographic locations, industrial plants, toxic locations, or even office buildings. Two of the most important issues related to a WN are its energy constraints and its potential for developing faults. A station is usually powered by a battery which cannot be recharged while on a mission. Hence, any protocol run by a WN should be energy-efficient. Moreover, it is possible that all stations deployed as part of a WN may not work perfectly. Hence, any protocol designed for a WN should work well even when some of the stations are faulty. The permutation routing problem is an abstraction of many routing problems in a wireless network. In an instance of the permutation routing problem, each of the p-stations in the network is the sender and recipient of n/p packets. The task is to route the packets to their correct destinations. We consider the permutation routing problem in a single-hop wireless network, where each station is within the transmission range of all other stations. We design a protocol for permutation routing on a WN which is both energy efficient and fault tolerant. We present both theoretical estimates and extensive simulation results to show that our protocol is efficient in terms of energy expenditure at each node even when some of the nodes are faulty. Moreover, we show that our protocol is also efficient for the unbalanced permutation routing problem when each station is the sender and recipient of an unequal number of packets.",2005,0, 106,Architectural and Behavioral Modeling with AADL for Fault Tolerant Embedded Systems,"AADL is an architecture description language intended for model-based engineering of high-integrity systems. The AADL Behavior Annex is an extension allowing the refinement of behavioral aspects described through AADL. When implementing Distributed Real-time Embedded systems, fault tolerance concerns are integrated by applying replication patterns. We considered a simplified design of the primary backup replication pattern to express the modeling capabilities of AADL and its annex. Our contribution intends to give an accurate description of the synchronization mechanisms integrated in this example.",2010,0, 107,ATE applied into fault modeling and fault diagnosis of AC servo motor PWM driver system,"The AC servo motor PWM driver system (including power module, power PWM driver board, cable, motor and photoelectric encoder/decoder) is a key sub-system of semiconductor assembly and packaging equipment. Aimed at its high fault rate, in this document we build the fault models of the system based on PWM (Pulse Width Modulation) voltage, controller command and position feedback, find the test methods of the main faults, use the ATE idea and method to perform test requirement analysis, resource allocation and driver program design, open the closed loop to decouple the system into LRUs (Line Replaceable Units) and locate the fault LRU(s) combining open loop and closed loop, on-line and off-line tests.
Numerous experiments and test verifications show that the fault modeling and ATE diagnosis above are successful, and they have been applied to machine monitoring and fault diagnosis efficiently and effectively.",2005,0, 108,Dynamic Behavior of DFIG-Based Wind Turbines during Grid Faults,"According to grid codes issued by utilities, tripping of wind turbines following grid faults is not allowed. Besides, to provide voltage support to the grid, mandatory reactive current supply is necessary. To enable wind turbines to ride through low voltage periods, special protection measures have to be implemented. In this paper the behavior of DFIG based wind turbines during grid faults is discussed and elucidated using simulation results. It is shown that with a properly designed crowbar and DC-link chopper, even zero voltage ride-through is possible.",2007,0, 109,An Initial Study on the Bug Report Duplication Problem,"According to recent work, duplicate bug report entries in bug tracking systems impact negatively on software maintenance and evolution productivity due to, among other factors, the increased time spent on report analysis and validation, which in some cases takes over 20 minutes. Therefore, a considerable amount of time is lost mainly with duplicate bug report analysis. This work presents an initial characterization study using data from bug trackers from private and open source projects, in order to understand the possible factors that cause bug report duplication and its impact on software development.",2010,0, 110,A self-correcting active pixel sensor using hardware and software correction,Active pixel sensor (APS) CMOS technology reduces the cost and power consumption of digital imaging applications. We present a highly reliable system for the production of high-quality images in harsh environments. The system is based on a fault-tolerant architecture that effectively combines hardware redundancy in the APS cells and software correction techniques.,2004,0, 111,A model-based approach to adding autonomic capabilities to network fault management system,"Adding autonomic capabilities to network management systems provides great promise in delivering high QoS while lowering operation and maintenance cost. In this paper, we present a model-based approach to adding autonomic capabilities to a fault management system for cellular networks. We propose the use of modeling techniques to specify software failures and their dispositions at the model level for the target system. This facilitates the deployment of a control loop for adding autonomic capabilities into the system architecture, which include self-monitoring, self-healing, and self-adjusting. Our case study on the intelligent network fault management system illustrates the proposed approach by adding and deploying these autonomic capabilities derived from self-model specifications, to mitigate the risk of specified failures and maintain the level of healthiness of the system, dynamically and effectively.",2008,0, 112,Dispersion-Error Optimized ADI FDTD,"The ADI-FDTD method is efficient in solving fine RF/microwave structures due to its unconditionally stable characteristics. However, it suffers from large dispersions with the increase of time steps. In this paper, an error-minimized ADI-FDTD method is proposed that is less dispersive as compared to the conventional ADI-FDTD method. It is formulated in such a way that no extra memory or simulation time is required in its computations.
It is still unconditionally stable but with much smaller dispersion errors. Numerical examples are presented to demonstrate its efficiency and accuracy.",2006,0, 113,Fault-tolerant multimedia communication networks with QoS-based checkpoint protocol,"Advanced computer and network technologies have led to the development of computer networks. Here, an application is realized by multiple processes located on multiple computers connected to a communication network. Each process computes and communicates with other processes by exchanging messages through communication channels. Mission-critical applications are required to be executed fault-tolerantly. This paper proposes a novel consistency of global checkpoints in multimedia communication networks. Unlike the conventional consistency, it allows processes to take local checkpoints during communication events and to lose a part of a message in recovery. In addition, we show a checkpoint protocol based on the proposed consistency. The checkpoint protocol is nonblocking for supporting time-constrained applications. In addition, it is QoS-based, where a QoS parameter is global consistency.",2001,0, 114,Exploring Fine-Grained Fault Tolerance for Nanotechnology Devices With the Recursive NanoBox Processor Grid,"Advanced molecular nanotechnology devices are predicted to have exceedingly high transient fault rates and large numbers of inherent device defects compared to conventional CMOS devices. We describe and evaluate the Recursive NanoBox Processor Grid as an application specific, fault-tolerant, parallel computing system designed for fabrication with unreliable nanotechnology devices. In this study we construct hardware description language models of a NanoBox Processor cell and evaluate the effectiveness of our recursive fault masking approach in the presence of random errors. Our analysis shows that complex circuits constructed with encoded lookup tables can operate correctly despite 2% of the nodes being in error. The circuits operate partially correctly with up to 4% of the nodes being in error.",2006,0, 115,A General QoS Error Detection and Diagnosis Framework for Accountable SOA,"Accountability is a composite measure for different but related quality aspects. To be able to ensure accountability in practice, it is required to define specific quality attributes of accountability, and metrics for each quality attribute. In this paper, we propose a quality detection and diagnosis framework for service accountability. We first identify the types of quality attributes which are essential to manage QoS in an accountability framework. We then present a detection and diagnosis model for problematic situations in a services system. In this model, we design a situation link representing dependencies among quality attributes, and provide information to detect and diagnose problems and their root causes. Based on the model, we propose an integrated model-based and case-based diagnosis method using the situation link.",2008,0, 116,Dealing with dormant faults in an embedded fault-tolerant computer system,"Accumulation of dormant faults is a potential threat in a fault tolerant system, especially because most often fault tolerance is based on the single-fault assumption. We investigate this threat by the example of an automotive steer-by-wire application based on the Time-Triggered Architecture (TTA). By means of a Markov model we illustrate that the effect of fault dormancy can degrade the MTTF of a system by several orders of magnitude.
We study potential remedies, of which transparent online testing proves to be the most powerful one, while taking a hot spare offline temporarily to test it provides a more feasible solution, though with tight constraints regarding the test duration.",2003,0, 117,Event-based motion correction in PET transmission measurements with a rotating point source,"Accurate attenuation correction is important for quantitative positron emission tomography (PET) imaging. In PET transmission measurement using external rotating radioactive sources, object motion during the transmission scan can affect measured attenuation correction factors (ACFs), causing incorrect radiotracer distribution or artefacts in reconstructed PET images. Therefore a motion correction method for PET transmission data could be very useful. In this paper we report a compensation method for rigid body motion in PET transmission measurement, in which transmission data are motion-corrected event-by-event, based on known motion, to ensure that events that traverse the same path through the object are recorded on the same LOR. After motion correction, events detected on different LORs may be recorded on the same transmission LOR. To ensure that the corresponding blank LOR records events from the same combination of contributing LORs, the list mode blank data are spatially transformed event-by-event based on the same motion information. The proposed method has been verified in phantom studies with continuous motion.",2010,0, 118,Correcting Base-Assignment Errors in Repeat Regions of Shotgun Assembly,"Accurate base-assignment in repeat regions of a whole genome shotgun assembly is an unsolved problem. Since reads in repeat regions cannot be easily attributed to a unique location in the genome, current assemblers may place these reads arbitrarily. As a result, the base-assignment error rate in repeats is likely to be much higher than that in the rest of the genome. We developed an iterative algorithm, EULER-AIR, that is able to correct base-assignment errors in finished genome sequences in public databases. The Wolbachia genome is among the best finished genomes. Using this genome project as an example, we demonstrated that EULER-AIR can 1) discover and correct base-assignment errors, 2) provide accurate read assignments, 3) utilize finishing reads for accurate base-assignment, and 4) provide guidance for designing finishing experiments. In the genome of Wolbachia, EULER-AIR found 16 positions with ambiguous base-assignment and two positions with erroneous bases. Besides Wolbachia, many other genome sequencing projects have significantly fewer finishing reads and, hence, are likely to contain more base-assignment errors in repeats. We demonstrate that EULER-AIR is a software tool that can be used to find and correct base-assignment errors in a genome assembly project.",2007,0, 119,Gabor transform based fault locator for transmission lines,"Accurate estimation of current and voltage transient parameters is critical for efficient and accurate fault location computation. In this paper, a new fault location scheme is proposed for transmission systems using the Gabor transform for signal processing purposes instead of the conventional Fourier methods. The transform is distinctive, with more accurate performance, especially when dealing with certain circumstances such as sudden signal changes, dc decaying, non-integer harmonics and non-stationary quantities in fault signals.
For a better extraction of Gabor coefficients, a dedicated artificial neural network is employed. The contribution of this new transform to power system fault location is evaluated through various simulation tests using the Electromagnetic Transient Program EMTP. Simulation results show the potential of the proposed transform in accurate estimation of phasors for more accurate fault location estimation.",2006,0, 120,Point defects in quartz crystals and their radiation response - a review [quartz resonator applications],A short review of Al-related point defects and their radiation effects is presented. These defects exhibit spectroscopic signals which are monitored by a variety of experimental techniques. This discussion is useful to prospective researchers in the area of precision quartz resonators for frequency control in aerospace applications. Irradiation of quartz crystals at 77 K before and after irradiation at 300 K coupled with sweeping can be used for estimating the contribution of various point defects to the frequency offsets in quartz crystals in a radiation environment.,2004,0, 121,Preserving non-programmers' motivation with error-prevention and debugging support tools,"A significant challenge in teaching programming to disadvantaged populations is preserving learners' motivation and confidence. Because programming requires such a diverse set of skills and knowledge, the first steps in learning to program can be highly error-prone, and can quickly exhaust whatever attention learners are willing to give to a programming task. Our approach to preserving learners' motivation is to design highly integrated support tools to prevent the errors they would otherwise make. In this paper, the results of a recent study on programming errors are summarized, and many novel error-preventing tools are proposed.",2003,0, 122,On the 2-Adic Complexity and the k-Error 2-Adic Complexity of Periodic Binary Sequences,"A significant difference between the linear complexity and the 2-adic complexity of periodic binary sequences is pointed out in this correspondence. Based on this observation, we present the concept of the symmetric 2-adic complexity of periodic binary sequences. The expected value of the 2-adic complexity is determined, and a lower bound on the expected value of the symmetric 2-adic complexity of periodic binary sequences is derived. We study the variance of the 2-adic complexity of periodic binary sequences, and the exact value for it is given. Because the 2-adic complexity of periodic binary sequences is unstable, we present the concepts of the k-error 2-adic complexity and the k-error symmetric 2-adic complexity, and lower bounds on them are also derived. In particular, we give tighter upper and lower bounds for the minimum 2-adic complexity of l-sequences by substituting two symbols within one period.",2008,0, 123,Symbolic verification and error prediction methodology,"A SIMD platform provides higher computing power by executing multiple data simultaneously, but this feature makes design and verification harder. This paper aims at helping designers with an efficient methodology for verifying and evaluating the performance of a SIMD PLX platform design. Using Mathematica, a computer algebra system (CAS), and reverse engineering techniques, the correctness of and errors accumulated in running multimedia applications on this PLX platform can be precisely evaluated, which previously could not be easily done without human intervention.
The proposed methodology can be easily automated and adapted to other SIMD platforms.",2007,0, 124,Distance estimation technique for single line-to-ground faults in a radial distribution system,A simple yet powerful algorithm to estimate the distance to a single line-to-ground fault on a distribution feeder is proposed. The algorithm is implemented in a power monitor instrument and the estimation of distance is made within the instrument itself. The algorithm is designed to work where the only available data to the instrument are a single point measurement taken at the substation and the positive and zero-sequence impedance of the primary feeder. The single point measurement consists of three-phase voltage and current waveforms. Network topology data are not available to the algorithm. The new technique accommodates computational power and data constraints while maintaining adequate accuracy of the measurements,2000,0, 125,A novel economical single stage battery charger with power factor correction,"A single stage AC-DC topology with power factor correction is proposed for battery charger applications. Desired features for a battery charger, such as low cost, fast charging, charge profile programmability, high efficiency and high reliability, are fully achieved by means of the proposed solution. Additionally, its multiphase operation configuration provides easy power scaling. The proposed approach is superior to the conventional ferro-resonant regulation widely used for EV (electric vehicle) charger applications. It is especially suitable for low-cost and high-power applications. The feasibility and practical value of the proposed approach are verified by the experimental results from a 1 kW product prototype.",2003,0, 126,Reducing Corrective Maintenance Effort Considering Module's History,"A software package evolves in time through various maintenance release steps whose effectiveness depends mainly on the number of faults left in the modules. The testing phase is therefore critical to discover these faults. The purpose of this paper is to show a criterion to estimate an optimal repartition of available testing time among software modules in a maintenance release. In order to achieve this objective we have used fault prediction techniques based both on classical complexity metrics and an additional, innovative factor related to the modules' age in terms of releases. This method can actually diminish corrective maintenance effort, while assuring a high reliability for the delivered software.",2005,0, 127,"Low-cost, software-based self-test methodologies for performance faults in processor control subsystems","A software-based testing methodology for processor control subsystems, targeting hard-to-test performance faults in high-end embedded and general-purpose processors, is presented. An algorithm for directly controlling, using the instruction-set architecture only, the branch-prediction logic, a representative example of the class of processor control subsystems particularly prone to such performance faults, is outlined. Experimental results confirm the viability of the proposed methodology as a low-cost and effective answer to the problem of hard-to-test performance faults in processor architectures",2001,0, 128,A Novel Protecting Method for Induction Motor Against Faults Due to Voltage Unbalance and Single Phasing,"A system is designed and implemented to protect a 3-ph induction motor against faults due to single phasing, voltage unbalance and undervoltage.
In the system, three potential transformers with a transformation ratio of 400/5 V are connected to each phase of the induction motor. The sampling circuit is realized such that the low AC signals taken from the transformers' secondary windings are converted into DC values. The control process is implemented using a Microchip PIC 16F877 microcontroller. Sampled DC values are transmitted to the microcontroller through an analog-to-digital converter unit. Measured values are continuously compared with reference values by means of software. When voltage unbalance, undervoltage or single phasing is sensed, the system opens the normally closed contactor by activating the 12 V, 10 A DC relay and thus cuts the power supply to the induction motor. The corresponding fault is displayed on the seven-segment LED display unit, alerting the operator. Trip and reset delays prevent nuisance tripping due to rapidly fluctuating power line conditions.",2007,0, 129,Research and Application of Fault-Tolerance Based on Watershed Model Grid Platform,"A systematic scheme to form a watershed computational platform based on lightweight Grid techniques was developed in this paper. The scheme takes advantage of widely deployed local networks and makes full use of non-dedicated distributed computing resources. To overcome the instability of the overall system, MPICH-T, a trust-model-based fault-tolerant model, was adopted; checkpointing based on pessimistic logging ensures process restart on a single node and task migration across multiple nodes, and the portability of the system on the watershed model Grid platform is guaranteed. Finally, several experiments were made on this platform, and the results show that the platform has good performance despite a slight time delay, and that the fault-tolerance mechanism based on the MPICH-T model is a good choice for the watershed model Grid platform.",2008,0, 130,A fault-tolerant architectural approach for dependable systems,"A system's structure enables it to generate its intended behavior from its components' behavior. A well-structured system simplifies relationships among components, which can increase dependability. With software systems, the architecture is an abstraction of the structure. Architectural reasoning about dependability has become increasingly important because emerging applications are increasingly complex. We've developed an architectural approach for effectively representing and analyzing fault-tolerant software systems. The proposed solution relies on exception handling to tolerate faults associated with component and connector failures, architectural mismatches, and configuration faults. Our approach, a specialization of the peer-to-peer architectural style, hides inside the architectural elements the complexities of exception handling and propagation. Our goal is to improve a system's overall reliability and availability by making it tolerant of nonmalicious faults.",2006,0, 131,Self-organizing maps for automatic fault detection in a vehicle cooling system,"A telematics-based system for enabling automatic fault detection of a population of vehicles is proposed. To avoid sending huge amounts of data over the telematics gateway, the idea is to use low-dimensional representations of sensor values in sub-systems in a vehicle. These low-dimensional representations are then compared between similar systems in a fleet. If a representation in a vehicle is found to deviate from the group of systems in the fleet, then the vehicle is labeled for diagnostics for that subsystem.
The idea is demonstrated on the engine coolant system and it is shown how this self-organizing approach can detect varying levels of radiator clogging.",2008,0, 132,Comparison of Voltage and Flux Modulation Schemes of StatComs Regarding Transformer Saturation During Fault Recovery,"A transformer might be driven into saturation by faults in the connected system. In a transmission system with a static synchronous compensator and transformers connected at the point of common coupling, different modulation schemes utilized by the voltage source converter influence the transformer saturation in different ways. This paper compares the saturation effect during fault recovery when two alternative modulation schemes are utilized: voltage modulation and flux modulation. The comparison shows that utilization of the flux modulation scheme tends to soften the saturation problem during fault recovery. However, this is achieved at a higher converter transient peak current.",2008,0, 133,Implementation of Web-Based Fault Diagnosis Using Improved Fuzzy Petri Nets,"According to the current application and maintenance situation of numerical control equipment (NCE), a novel remote fault diagnosis expert system is designed to prevent fault occurrence and quicken the recovery process by online real-time monitoring of the working state of NCEs. The article addresses the overall framework and relevant application technology of the fault diagnosis system (FDS) and emphasizes the establishment of the fuzzy expert system (FES). An improved fuzzy Petri net (FPN) model and a concurrent reasoning algorithm are applied to handle the fuzziness and concurrency of faults and inadequate, uncertain information. Simple matrix operations are utilized to realize the complicated reasoning process, which simplifies diagnostic reasoning and decision-making. Meanwhile, it can be realized easily by computer programming. Finally, a practical fault instance is presented to demonstrate the feasibility and validity of this method.",2009,0, 134,Study of online fault diagnosis for distributed substation based on Petri nets,"According to the distributed substation protection configuration and the principle of fault clearance, a new model of fault diagnosis based on Petri nets is proposed in this paper. In the Petri net diagnosis model, each kind of fault has a specific token, which makes it easy and clear to find the fault location and understand the sequence of the fault events. The diagnosis is implemented by solving matrix equations, which gives fast computational speed and a definite diagnostic result. The proposed approach is particularly suitable for online substation fault diagnosis. In particular, the model covers the differential protection of the transformer and the busbars as well as overcurrent protection.",2010,0, 135,A fault analysis and design consideration of pulsed-power supply for high-power laser,"According to the requirements of driving flashlamps, the design of a pulsed-power supply (PPS), based on capacitors as energy storage elements, is presented. Special consideration is given to some possible faults such as capacitor internal short-circuit, bus bar breakdown to ground, flashlamp sudden short or break (open circuit), and closing switch restrike in the preionization branch. These faults were analyzed in detail, and both fault current and voltage waveforms are shown through circuit simulation. Based on the analysis and computation undertaken, the pulsed-power system design and protection requirements are proposed.
The preliminary experiments undertaken after circuit simulation demonstrated that the design of the PPS met the project requirements.",2003,0, 136,Laser Spectrum Measurement and Correction Based on Virtual Instrument Techniques,"To meet the requirements of laser beam spectrum measurement and correction, a laser spectrum measurement system constructed with WDS4A and WDS4C gratings and based on virtual instrument (VI) techniques is provided. The detailed methods and application programs based on the graphical programming platform LabVIEW are also introduced. After describing the principles and the construction of the laser spectrum measurement system, the software policies and modules of the system are carefully discussed, including the software structure and system driver configuration, the grating control module, the spectrum energy measurement module, and the spectrum calibration module. Meanwhile, the details of realizing the laser spectrum correction methods and the grating control for the test are delivered. Finally, measurement data of CO2 laser spectra obtained with VI techniques in different practical environments are given. Practical examples indicate that, by the use of VI techniques, a precision better than ±0.015 μm is achieved and the level of automation is improved.",2007,0, 137,Immunevonics: Avionics fault tolerance inspired by the biology system,"A novel approach to advanced avionics fault tolerance is proposed that takes inspiration from the intersection of the human immune system and nervous system as a method of fault recognition and removal. The human body's defenses (the immune system, the sense of pain, and the healing processes) could serve as a conceptual model for high-confidence systems in advanced avionics fault tolerance. In this paper we propose a hybrid fault tolerance instrument suitable for advanced avionics systems, using a distributed `body' immune system and a central `nervous' system management principle. Through an artificial immune algorithm, the proposed advanced avionics `body' immune system will learn to differentiate between acceptable and abnormal states and transitions within the `immunized' system. Potential faults can then be flagged and suitable recovery methods invoked to return the system to a safe state. When the `body' immune system cannot handle the fault, a `pain' message is reported to the operating system health monitor (OS-HM), and the central `nervous' system tolerates the fault through a three-layered preconscious health monitoring and conscious fault handling strategy.",2009,0, 138,A novel approach to fault diagnostics and prognostics,A novel fault diagnostics and prognostics algorithm based on hidden Markov models (HMMs) is proposed. The algorithm combines fault diagnostics and prognostics in a unified framework. The algorithm has been fully tested by using experimental data from a rotating shaft testbed in our laboratory.,2003,0, 139,A novel fault injection method for system verification based on FPGA boundary scan architecture,"A novel fault injection (a.k.a. fault insertion) method to facilitate the development of high-quality system tests is presented in this paper. In this method, we utilize the existing boundary scan (BS) architecture of an FPGA to inject a hardware fault condition at any pin of the FPGA on a circuit board.
Existing user-defined instructions of most FPGA BS architectures and the newly proposed design of their corresponding user-defined scan registers (USRs) constitute the proposed fault injection architecture. No new instructions, and no modifications of the existing test access port (TAP) controller and BS registers, are required. In addition, it is possible to reconfigure where and what types of faults are injected asynchronously via the BS architecture while the system is online. Although the proposed method incurs at least the additional delay of a multiplexer on the pin where a fault is injected, the programmability of an FPGA enables us to add fault injection logic only where the fault injection function is desired. Hence, area overhead and performance impact can be significantly reduced.",2002,0, 140,Development on surface defect holes inspection based on image recognition,"A novel method for inspecting surface defect holes is proposed, based on both 2D and 3D image processing and recognition. In this method, the first step is to detect the holes in the binary image converted from the 3D image, which is scanned by a 3D laser scanner, and the second step is to confirm the defect holes by dimension calculation using the data of the scanned 3D image. The software is developed in MATLAB.",2010,0, 141,Rate control for low delay H.264/AVC transmission over channels with burst error,"A rate control approach is proposed to deal with low delay H.264/AVC video transmission over channels with burst errors by applying a stochastic optimization technique. Based on the exponential rate-distortion and the linear variance prediction models, the one-pass rate control algorithm takes into account the channel state and round trip delay, and makes an immediate decision on the optimal rate allocation for the video frame. Simulation results show that for different end-to-end delay constraints and round trip delays, the number of lost frames is significantly reduced, and the average reconstruction peak signal-to-noise ratio is improved by 0.5-1.6 dB, compared with the reference rate control scheme [ARA, 01]",2006,0,5504 142,Non-inductive variable reactor design and computer simulation of rectifier type superconducting fault current limiter,"A rectifier type superconducting fault current limiter with a non-inductive reactor has been proposed by the authors. The concept behind this SFCL is that the high impedance generated during the superconducting-to-normal transition of the trigger coil limits the fault current. In the hybrid bridge circuit of the SFCL, two superconducting coils, a trigger coil and a limiting coil, are connected in anti-parallel. Both coils are magnetically coupled with each other and could have the same value of self inductance so that they can share the line current equally. At fault time, when the trigger coil current reaches a certain level, the trigger coil changes from the superconducting state to the normal state. This superconducting-to-normal transition of the trigger coil changes the current ratio of the coils and therefore the flux inside the reactor is no longer zero. So, the equivalent impedance of both coils is increased and limits the fault current. We have carried out computer simulation using PSCAD/EMTDC and observed the results. Both the simulation and preliminary experiment show good results.
The advantage of using the hybrid bridge circuit is that the SFCL can also be used as a circuit breaker.",2005,0, 143,"State-of-the-art, single-phase, active power-factor-correction techniques for high-power applications - an overview","A review of high-performance, state-of-the-art, active power-factor-correction (PFC) techniques for high-power, single-phase applications is presented. The merits and limitations of several PFC techniques that are used in today's network-server and telecom power supplies to maximize their conversion efficiencies are discussed. These techniques include various zero-voltage-switching and zero-current-switching, active-snubber approaches employed to reduce reverse-recovery-related switching losses, as well as techniques for the minimization of the conduction losses. Finally, the effect of recent advancements in semiconductor technology, primarily silicon-carbide technology, on the performance and design considerations of PFC converters is discussed.",2005,0, 144,Automated detection of injected faults in a differential equation solver,"Analysis of logical relationships between inputs and outputs of a computational system can significantly reduce the test execution effort via minimizing the number of required test cases. Unfortunately, the available specification documents are often insufficient to build a complete and reliable model of the tested system. In this paper, we demonstrate the use of a data mining method, called Info-Fuzzy Network (IFN), which can automatically induce logical dependencies from execution data of a stable software version, construct a set of non-redundant test cases, and identify faulty outcomes in new, potentially faulty releases of the same system. The proposed approach is applied to the Unstructured Mesh Finite Element Solver (UMFES) which is a general finite element program for solving 2D elliptic partial differential equations. Experimental results demonstrate the capability of the IFN-based testing methodology to detect several kinds of faults injected in the code of this sophisticated application.",2004,0, 145,A dynamic fault localization algorithm using digraph,"Analyzed here is a dynamic learning fault localization algorithm based on a directed-graph fault propagation model and feedback control. The input and output of the algorithm are named fault and hypothesis, respectively. Because of the complexity and uncertainty of faults and symptoms, it is difficult to accurately model the relationship between them in probabilistic fault localization. A fault localization algorithm depends on a prior specified model, whose parameters and structure are only approximately correct and often differ from the real situation. So we propose the DMCA+ algorithm, which has three features: it reduces the requirement for accuracy of the initial conditions; it statistically learns to automatically adapt the probability distribution of fault occurrence while localizing faults; and it generalizes the feedback-free MCA+ algorithm. The feedback learning is similar to the proportional adjustment of PID control, but the increment is sensitive to the detection rate: a small increment adjusts the output too slowly, while a large one results in a large number of erroneous hypotheses. The simulation results show the validity and efficiency of dynamic learning under complex networks.
To improve the detection rate, optimization measures are also discussed.",2009,0, 146,Implementation of e-beam proximity effect correction using linear programming techniques for the fabrication of asymmetric bow-tie antennas,"Antenna-coupled tunnel junction diodes have recently been offering great advantages for IR and terahertz detection applications. Fabrication has been a major constraint in our ability to field these devices. The first obstacle is the relatively small size of the antenna. As the length of the wave to be detected gets smaller, the size of the antenna shrinks according to the λ/4 rule. This eliminates the use of traditional photolithographic fabrication techniques, which fail in the nanometer geometry range. For this reason, e-beam lithographic techniques are used. The second challenge appears in the fabrication of the tunnel junction. The tunnel junction part of the device is formed by sandwiching an insulation layer in between two conductor antenna parts. Previously, many fabrication techniques were offered for vertical conductor-insulator-conductor (CIC) structures where two metal layers overlap each other, forming a tunnel junction vertical to the antenna surface. However, planar CIC structures have become more popular because they enable surface plasmon excitation across the tunnel junction barrier. The fabrication of a planar tunnel junction requires the patterning of a nano-size gap that will enable the tunneling of electrons between two conductor antenna wings. At this critical location, the e-beam proximity effect (pixel-to-pixel beam interactions) becomes a very important issue to be addressed in order to create a gap with nanometer-range accuracy.",2009,0, 147,Integrated fault-tolerant multicast and anycast routing algorithms,"Anycast is a new communication service defined in IPv6 (Internet Protocol Version 6 for the next generation). An anycast message is one that should be delivered to the `nearest' member in a group of designated recipients. Anycast and multicast mechanisms may be integrated to provide better services. A group of replicated (or mirrored) servers that provides anycast service may also provide multicast services and needs multicast to consistently update, whereas anycast routing may help a multicast request to reach the `nearest' member in a multicast group. A novel integrated routing protocol is presented for both multicast and anycast message communication in the Internet. The protocol is composed of the following algorithms: (1) a dynamic anycast routing algorithm for efficient transmission of anycast messages over the Internet to a group of servers; (2) an integrated anycast routing algorithm using the core-based tree technique of multicast routing, taking advantage of short delay, high throughput and load sharing; (3) fault-tolerant algorithms for both anycast and multicast routing using backup-path restoration techniques. The performance figures have demonstrated the benefits of anycast routing in reducing end-to-end packet delay, and attaining load balance and fault-tolerance for multicast",2000,0, 148,Context-Aware Adaptive Applications: Fault Patterns and Their Automated Identification,"Applications running on mobile devices are intensely context-aware and adaptive. Streams of context values continuously drive these applications, making them very powerful but, at the same time, susceptible to undesired configurations.
Such configurations are not easily exposed by existing validation techniques, thereby leading to new analysis and testing challenges. In this paper, we address some of these challenges by defining and applying a new model of adaptive behavior called an Adaptation Finite-State Machine (A-FSM) to enable the detection of faults caused by both erroneous adaptation logic and asynchronous updating of context information, with the latter leading to inconsistencies between the external physical context and its internal representation within an application. We identify a number of adaptation fault patterns, each describing a class of faulty behaviors. Finally, we describe three classes of algorithms to detect such faults automatically via analysis of the A-FSM. We evaluate our approach and the trade-offs between the classes of algorithms on a set of synthetically generated Context-Aware Adaptive Applications (CAAAs) and on a simple but realistic application in which a cell phone's configuration profile changes automatically as a result of changes to the user's location, speed, and surrounding environment. Our evaluation describes the faults our algorithms are able to detect and compares the algorithms in terms of their performance and storage requirements.",2010,0, 149,Edge Defect Detection in Ceramic Tile Based on Boundary Analysis Using Fuzzy Thresholding and Radon Transform,"The application of image processing technology and machine vision in industry has seen significant development in the recent decade. The tile and ceramic industry is no exception. By using image processing techniques in the production line of this industry, it is possible to detect surface defects such as edge defects, cracks and coloring defects. Surface defects are the most common type of defect in the tile and ceramic industry. In this article a new and simple approach for detecting surface defects using fuzzy thresholding, morphological operators, the Radon transform and boundary analysis is introduced. The presented algorithm has good ability in determining the number and location of defects and displaying them. Compared with the results of other methods, the algorithm used in this article achieves better results on a real database in the presence of camera noise and environmental lighting effects.",2008,0, 150,Design of arc fault detection system based on CAN bus,"An arc fault detection system (AFDS) is a device intended to protect the power system against arc faults that may cause fire. When there is an arc fault, the magnitude of the fault current is lower than the pickup threshold of most of the installed protection devices; hence the AFDS is an effective device to detect the arc fault and interrupt the circuit in time. The characteristics of the arc, how it ignites, and what losses it may cause are discussed. The basic structure of the AFDS and the primary principles by which the AFDS detects arc faults are proposed. For efficiently realizing global optimization, the single-function problem of the conventional arc fault circuit interrupter (AFCI) was solved by setting up a detection system. Composed of detectors, controllers and a host computer database, the system achieves automatic detection of arc faults, high-temperature faults and leakage-current faults to protect the conductors and the equipment and to ensure human safety. The detectors, controllers and host computer database communicate over the CAN bus.
The device realized the expectations of reduced communication cost and improved communication quality.",2009,0, 151,Evolution of fault-tolerant and noise-robust digital designs,"Artificial evolution has been shown to generate remarkable systems of exciting novelty. It is able to automatically generate digital circuit designs and even circuits that are robust to noise and faults. Extensive experiments have been carried out and are presented here to identify more clearly to what extent artificial evolution is able to generate robust designs. The evolved circuits are thoroughly tested whilst being exposed to noise and faults in a simulated environment and the results of their performance are presented. The evolved multiplier and adder circuits show a graceful degradation as noise and failure rate are increased. The functionality of all circuits is measured in a simulated environment that to some extent takes into account analogue electronic properties. Also included is a short overview of some recent work illustrating the robustness and tolerance of bio-inspired hardware systems.",2004,0, 152,MATLAB Design and Research of Fault Diagnosis Based on ANN for the C3I System,"Artificial neural networks (ANNs) are an information-processing method that simulates the structure of biological neurons. As a modern combat unit, a C3I system can command and control army actions and communicate with other units. This paper studies an artificial neural network approach for fault diagnosis of the C3I system and constructs an ANN-based fault diagnosis system for it. The system can analyze fault phenomena and detect C3I system faults. It will greatly improve the response of C3I system fault diagnosis and the efficiency of maintenance.",2010,0, 153,A Test of Artificial Neural Network-Based Gross Error Detection Used in Conversion of GPS Heights,"Artificial neural networks (ANNs) have been used in the conversion of GPS heights. It is impossible to avoid gross errors in the conversion of GPS heights. Therefore, the paper puts forward an algorithm to detect gross errors when ANNs are used in the conversion of GPS heights. Three tests validate the proposed algorithm for handling gross errors in the conversion of GPS heights, and some practicable conclusions are drawn.",2008,0, 154,Investigation on flow instrument fast fault detection based on LSSVM predictor,"Aiming at the issue of real-time fault diagnosis in a flow totalizer, a new type of predictor based on the Least Squares Support Vector Machine (LSSVM) is put forward. Fault diagnosis is carried out by comparing the predicted value with the flow meter output value. Prediction errors and prediction speed are both considered in this algorithm; by dealing with the compromise between them, the samples are selected and a higher prediction speed is ensured. The analysis and simulation results show that a relatively higher speed can be acquired by training the LSSVM with the selected samples and reducing the precision of the predictor, so this type of predictor is well suited to real-time fault detection.",2010,0, 155,Power Cable Fault Diagnosis System Based on LabVIEW,"Aiming at the online fault diagnosis of power cables, an online power cable monitoring system is built by using the virtual instrument platform LabVIEW. In the system, the monitoring interface is designed by using the graphical programming language, and the data acquisition card PCI6221 is used to capture the data and to monitor the state of the online cable.
The experiment shows that the friendly interface, the visual display of the online traveling-wave data of the power cable, and the reliable data processing measures are implemented. The system provides hardware support for online cable monitoring and fault diagnosis.",2010,0, 156,GIS based multilevel intelligent fault diagnosis system on electric power equipment,"With the strengthening application of AM/FM/GIS in distribution networks, the integration of SCADA with AM/FM/GIS has become a direction of technical development. As the scale of power networks grows immensely, equipment requirements are increasing and equipment functions are becoming more and more advanced. Adopting traditional centralized, single fault diagnosis methods cannot satisfy the requirements of a practical system. We designed a distributed multilevel intelligent fault diagnosis method in this paper. First, the equipment fault diagnosis of the whole system is decomposed into the fault diagnosis of each subsystem; then, according to the characteristics of the equipment type, various fault diagnosis methods are retrieved from the knowledge base to carry out cooperative fault diagnosis under the control of a multilevel intelligent fault diagnosis strategy. The final diagnosis result is stored in a fault database to offer decision-making and alarm information.",2004,0, 157,Fast and Efficient Bright-Field AAPSM Conflict Detection and Correction,"Alternating-aperture phase shift masking (AAPSM), a form of strong resolution enhancement technology, will be used to image critical features on the polysilicon layer at smaller technology nodes. This technology imposes additional constraints on the layouts beyond traditional design rules. Of particular note is the requirement that all critical features be flanked by opposite-phase shifters while the shifters obey minimum width and spacing requirements. A layout is called phase assignable if it satisfies this requirement. Phase conflicts have to be removed to enable the use of AAPSM for layouts that are not phase assignable. Previous work has sought to detect a suitable set of phase conflicts to be removed as well as correct them. This paper has two key contributions: 1) a new computationally efficient approach to detect a minimal set of phase conflicts, which when corrected will produce a phase-assignable layout, and 2) a novel layout modification scheme for correcting these phase conflicts with small layout area increase. Unlike previous formulations of this problem, the proposed solution for the conflict detection problem does not frame it as a graph bipartization problem. Instead, a simpler and more computationally efficient reduction is proposed. This simplification greatly improves the runtime while maintaining the same improvements in the quality of results obtained in Chiang (Proc. DATE, 2005, p. 908). An average runtime speedup of 5.9 times is achieved using the new flow. A new layout modification scheme suited for correcting phase conflicts in large standard-cell blocks is also proposed. The experiments show that the percentage area increase for making standard-cell blocks phase assignable ranges from 1.7% to 9.1%",2007,0, 158,Analog Circuit Fault Simulation Based on Saber,"Although analog circuit simulation tools like Saber are numerous, software that can simulate analog circuits affected by fault modes is lacking.
Research in the field of analog circuit fault simulation has not achieved the same degree of success as for digital circuits, because of the difficulty of modeling the more complex analog behavior. This article presents a new approach to this problem by simulating the good and faulty circuits in Saber and introduces the general circuit fault simulation process. In view of the failure mechanisms of the components, some novel approaches for fault modeling are proposed. A fault injection and simulation interface based on Saber is detailed in this paper. The method is verified by an example, and its practical engineering value is indicated.",2010,0, 159,A New Approach to Improving the Test Effectiveness in Software Testing Using Fault Collapsing,"Although mutation is one of the practical ways of enhancing the effectiveness of the test cases applied to an application under test, it can sometimes be infeasible because there are too many assumed faults and mutants to operate on in a larger-scale system, so that mutation becomes time-consuming and even prohibitive. Therefore, the number of faults assumed to exist in the software under test should be reduced. Fault collapsing is a common way of reducing the number of faults in hardware testing, and this strategy can now be well transplanted into the area of software testing. In this paper, we utilize the concept of fault dominance and equivalence, which has long been used in hardware testing, for revealing a novel way of reducing the number of faults assumed to hide in software systems. Once the number of faults assumed in software is decreased sharply, the effectiveness of mutation testing would be greatly enhanced. Examples and experimental results are presented to illustrate the effectiveness and the helpfulness of the technology proposed in the paper",2006,0, 160,The Application of Topological Gradients to Defect Identification in Magnetic Flux Leakage-Type NDT,"An inverse problem is formulated to identify the shape and size of the defects in a nonlinear ferromagnetic material using the signal profile from magnetic flux leakage-type NDT. This paper presents an efficient algorithm based on topological shape optimization which exploits the topological gradient to accelerate the process of shape optimization to identify the defect. Topological gradient images for the cracks are obtained using 2-D and 3-D finite element models. Robustness of this imaging method in the presence of noise is also evaluated.",2010,0, 161,Stator current monitoring to detect mechanical faults in medium size induction motors,"An investigation into the use of stator current monitoring to detect mechanical irregularities in a medium-size, skewed induction motor is presented. Through the use of analytical mmf and permeance functions, the frequencies of air gap flux density harmonics may be identified, allowing the prediction of induced voltage and current harmonics in the stator windings. The development of a test rig to create both static and dynamic eccentricity conditions is described. The test facility also allows rapid access to the motor bearings, allowing the investigation of faulty bearings. The presented experimental results confirm the theory related to static and dynamic eccentricity.
Experiments carried out with a contaminated bearing indicate that techniques used to detect eccentric conditions may also be valid for the detection of bearing degradation.",2004,0, 162,Error Corrections in Outdoor Cylindrical near Field Radar Antenna Measurement System,"An outdoor cylindrical near-field antenna measurement system was designed and fabricated in Guadalajara (Spain) for measuring large L-band RADAR antennas. This design was presented at EuCAP 2006 by Martin, F., et al. (2006), where the theoretical error analysis and the main description of the complete system were given. This paper presents the solutions adopted for some of the problems of the outdoor design: the first due to temperature variations, the second to the effect of the wind, and the third to reflections from the ground.",2007,0, 163,Scatter Correction Method for X-Ray CT Using Primary Modulation: Theory and Preliminary Results,"An X-ray system with a large area detector has high scatter-to-primary ratios (SPRs), which result in severe artifacts in reconstructed computed tomography (CT) images. A scatter correction algorithm is introduced that provides effective scatter correction but does not require additional patient exposure. The key hypothesis of the algorithm is that the high-frequency components of the X-ray spatial distribution do not result in strong high-frequency signals in the scatter. A calibration sheet with a checkerboard pattern of semitransparent blockers (a ""primary modulator"") is inserted between the X-ray source and the object. The primary distribution is partially modulated by a high-frequency function, while the scatter distribution still has dominant low-frequency components, based on the hypothesis. Filtering and demodulation techniques suffice to extract the low-frequency components of the primary and hence obtain the scatter estimation. The hypothesis was validated using Monte Carlo (MC) simulation, and the algorithm was evaluated by both MC simulations and physical experiments. Reconstructions of a software humanoid phantom suggested system parameters in the physical implementation and showed that the proposed method reduced the relative mean square error of the reconstructed image in the central region of interest from 74.2% to below 1%. In preliminary physical experiments on the standard evaluation phantom, this error was reduced from 31.8% to 2.3%, and it was also demonstrated that the algorithm has no noticeable impact on the resolution of the reconstructed image in spite of the filter-based approach. Although the proposed scatter correction technique was implemented for X-ray CT, it can also be used in other X-ray imaging applications, as long as a primary modulator can be inserted between the X-ray source and the imaged object",2006,0, 164,Analog circuit fault diagnosis based on artificial neural network and embedded system,"An analog circuit fault diagnosis system based on the S3C2410 embedded board is realized in this paper. The hardware and software designs are presented. A BP neural network algorithm with an added momentum term is applied in the embedded system. Real-time data collection and on-line detection of analog circuit fault conditions are designed as the basic functions of the embedded system.
An intelligent and miniaturized diagnosis system is thus realized.",2009,0, 165,Fault-tolerant training of neural networks in the presence of MOS transistor mismatches,"Analog techniques are desirable for hardware implementation of neural networks due to their numerous advantages such as small size, low power, and high speed. However, these advantages are often offset by the difficulty in the training of analog neural network circuitry. In particular, training of the circuitry by software based on hardware models is impaired by statistical variations in the integrated circuit production process, resulting in performance degradation. In this paper, a new paradigm of noise injection during training for the reduction of this degradation is presented. The variations at the outputs of analog neural network circuitry are modeled based on the transistor-level mismatches occurring between identically designed transistors. Those variations are used as additive noise during training to increase the fault tolerance of the trained neural network. The results of this paradigm are confirmed via numerical experiments and physical measurements and are shown to be superior to the case of adding random noise during training",2001,0, 166,Multiple Fault Models for Timed FSMs,"An implementation under test (IUT) can be formally described using finite-state machines (FSMs). Due to the presence of inherent timing constraints and variables in a communication protocol, an IUT is modeled more accurately by using extended finite-state machines (EFSMs). However, infeasible paths due to conflicts among timing conditions and action variables of EFSMs can complicate the test generation process. The fault detection capability of the graph augmentation method given in M. U. Uyar et al. (2005) and M. A. Fecko et al. (2000) is analyzed in the presence of multiple timing faults. The complexity increases with the consideration of the concurrent running and expiring of timers in a protocol. It is proven that, by using our graph augmentation models, a faulty IUT will be detected for the multiple occurrences of pairwise combinations of a class of timing faults",2006,0, 167,Compensation is not enough [fault-handling and compensation mechanism],"An important problem in designing infrastructure to support business-to-business integration (B2Bi) is how to cancel a long-running interaction (either because the user has changed their mind, or in response to an unrecoverable failure). We review the fault-handling and compensation mechanism that is now used in most workflow products and business process modeling standards. We then use an e-procurement case study to extract a set of requirements for an effective cancellation mechanism, and we show that the standard approach using fault-handling and compensation transactions is not adequate to meet these requirements.",2003,0, 168,Bug reports retrieval using Self-organizing Map,"An important process when implementing complex software systems consists of documenting the bugs found in that software. However, since many developers are working at the same time on the project, a bug may easily be reported multiple times, resulting in duplicated bug reports. Therefore, developers responsible for fixing bugs may spend time and effort reading and trying to understand two bugs that actually are the same. Thus, we propose in this paper an approach for identifying duplicated bug reports that combines document indexing and self-organizing maps (SOM).
The results of our experiments show that at most 69% of duplicated bug reports were identified, representing a saving of time and effort for the developers.",2008,0, 169,DOORS: towards high-performance fault tolerant CORBA,"An increasing number of applications are being developed using distributed object computing middleware, such as CORBA. Many of these applications require the underlying middleware, operating systems, and networks to provide end-to-end quality of service (QoS) support to enhance their efficiency, predictability, scalability, and fault tolerance. The Object Management Group (OMG), which standardizes CORBA, has addressed many of these application requirements in the Real-time CORBA and Fault-Tolerant CORBA specifications. We provide four contributions to the study of fault-tolerant CORBA middleware for performance-sensitive applications. First, we provide an overview of the Fault Tolerant CORBA specification. Second, we describe a framework called DOORS, which is implemented as a CORBA service to provide end-to-end application-level fault tolerance. Third, we outline how the DOORS' reliability and fault-tolerance model has been incorporated into the standard OMG Fault-tolerant CORBA specification. Finally, we outline the requirements for CORBA ORB core and higher-level services to support the Fault Tolerant CORBA specification efficiently",2000,0, 170,Fault diagnosis of analog circuit based on support vector machines,"An innovative method based on support vector machines is presented to diagnose faults in analog circuits. Firstly, in order to get enough fault samples, the circuit program is compiled in MATLAB software to obtain expressions of the output signals. Secondly, the fault samples are used to train the support vector machines. Thirdly, the test samples are classified by the trained support vector machines. Finally, an example of analog circuit fault diagnosis is provided. The result shows that this method has the advantages of a simple algorithm, high efficiency, high accuracy, and great capability in generalization and classification.",2009,0, 171,A interturn fault protection method of HV shunt reactors based on unbalanced parameter detection,"An interturn fault protection method for the HV shunt reactor is put forward in this paper, based on unbalanced parameter detection. This method is established on the basis of the time-domain parameter model, and uses the electrical quantities at the two terminals of the HV reactor to calculate the electrical parameters of each phase by means of a least squares algorithm. Assuming that interturn short-circuit faults of HV phase-separated shunt reactors cannot occur in all three phases at the same time, we can identify whether an interturn short-circuit fault of the HV reactor occurs by designing a comprehensive detection criterion that reflects three-phase unbalanced parameters and parameter mutation. The interturn protection of HV reactors can avoid the impact of power swings and frequency deviation. This method is based on the measurement and comparison of the per-phase equivalent inductance parameters of the reactor. The setting of the protection is simple and convenient to implement. Furthermore, the protection has high sensitivity. Because this interturn protection detects the variation of per-phase inductance parameters, it can avoid the influence of the system operating mode and the fault, while methods based on the principle of sequence components cannot.
It is also adaptable to the installation location of the voltage transformer (PT), and can be easily applied to HV reactors of both transmission lines and busbars.",2010,0, 172,A CORBA design pattern to build load balancing and fault tolerant telecommunication software,"As a mature distributed object computing middleware, CORBA is being used more and more widely in many fields to build large-scale distributed software systems. In telecommunication systems, load balancing and fault tolerance are especially important because of strict requirements for high reliability and capacity. In former studies, load balancing and fault tolerance have more often been considered separately. This paper introduces a new CORBA design pattern, named the GenericFactory pattern, and discusses how to combine load balancing and fault tolerance effectively. Advantages of this pattern are pointed out.",2003,0, 173,Research on high-speed fuzzy reasoning with CPLD for fault diagnosis expert system,"Although fuzzy reasoning is an effective method for diagnostic reasoning, it is hard for it to meet real-time requirements because of its complex and time-consuming process. Based on the principle of conventional software-based fuzzy reasoning, a new method to design expert system fuzzy reasoning with a CPLD for fault diagnosis is presented. In the new method, fuzzy operations are realized by function transformation with ROM, and the CPLD provides logic control and process coordination for the fuzzy reasoning. Thus, the whole fuzzy reasoning is performed in hardware rather than software. Many experiments validate that the speed of fuzzy reasoning with this method is faster than with traditional approaches, and it is applicable to many on-line diagnosis systems based on single-chip controllers or DSPs (digital signal processors).",2009,0, 174,Online Estimation of Architectural Vulnerability Factor for Soft Errors,"As CMOS technology scales and more transistors are packed on to the same chip, soft error reliability has become an increasingly important design issue for processors. Prior research has shown that there is significant architecture-level masking, and many soft error solutions take advantage of this effect. Prior work has also shown that the degree of such masking can vary significantly across workloads and between individual workload phases, motivating dynamic adaptation of reliability solutions for optimal cost and benefit. For such adaptation, it is important to be able to accurately estimate the amount of masking or the architecture vulnerability factor (AVF) online, while the program is running. Unfortunately, existing solutions for estimating AVF are often based on offline simulators and hard to implement in real processors. This paper proposes a novel way of estimating AVF online, using simple modifications to the processor. The estimation method applies to both logic and storage structures on the processor. Compared to previous methods for estimating AVF, our method does not require any offline simulation or calibration for different workloads. We tested our method with a widely used simulator from industry, for four processor structures and for 100 to 200 intervals of each of eleven SPEC benchmarks. The results show that our method provides acceptably accurate AVF estimates at runtime.
The absolute error rarely exceeds 0.08 across all application intervals for all structures, and the mean absolute error for a given application and structure combination is always within 0.05.",2008,0, 175,Using MPLS fault recovery mechanism and bandwidth reservation in network-on-chip,"As CMOS technology scales down into the deep submicron (DSM) domain, devices and interconnects are subject to new types of malfunctions and failures that are harder to predict and avoid with the current system-on-chip (SoC) design methodologies. In this paper we compare four reconfigurable fault recovery and path restoration schemes, namely Haskin, Makam, Simple Dynamic and Shortest Dynamic, in a real network, from the viewpoint of on-chip network design methodology. These schemes are simulated using NS-2, which provides the advantage of entity reusability and thus increases NoC fault tolerance and reliability with quality of service.",2010,0, 176,Improvement the NOC Bandwidth and fault Tolerant by Multipath routing in three-dimensional topologies for multi-media applications,"As CMOS technology scales down into the deep submicron (DSM) domain, devices and interconnects are subject to new types of malfunctions and failures that are harder to predict and avoid with the current system-on-chip (SoC) design methodologies. We propose a combination of topology and multipath routing that can increase fault tolerance and communication load capacity, which is suitable for multimedia applications. We compare the performance of Fat-Tree, 2D-Mesh and 3D-Mesh architectures using multipath routing from the viewpoint of on-chip network design methodology. The simulations of each of the architectures are done with IP and multipath routing on two-dimensional and three-dimensional topologies. We also carry out a high-level simulation of the on-chip network using NS-2 to verify the analytical analysis.",2010,0, 177,An Application of Semantic Annotations to Design Errors,"As current engineered systems (e.g. aviation systems) have been equipped with automated and computer-based artefacts, human-system interaction (e.g. human computer interaction) has become an important issue. Design errors that are attributable to human-system interaction failures are not pure engineering design issues, but a multidisciplinary subject related to other areas such as management, psychology, physiology or ergonomics. Identifying such design errors (called design-induced errors) in accident reports is important for designing more reliable systems. However, the lack of precise definitions of the concept of design-induced error and the diversity of expression of such failures make it difficult to retrieve relevant documents from accident reports. This paper describes how an ontology and annotation scheme can help to overcome such limitations. Engineering designers can be assisted by the developed ontology and annotation scheme to reason on the issues of design-induced error",2006,0, 178,Reduction of faults in software testing by fault domination,"Although mutation testing is one of the practical ways of enhancing test effectiveness in software testing, it can sometimes be infeasible in practice for large-scale software, so that mutation testing becomes time-consuming and even prohibitively long. Therefore, the number of faults assumed to exist in the software under test should be reduced so as to confine the time complexity of testing within a reasonable period.
This paper utilizes the concept of fault dominance and equivalence, which has long been employed in hardware testing, for revealing a novel way of reducing the number of faults assumed to hide in software systems. Once the number of faults assumed in software is decreased sharply, the effectiveness of mutation testing will be greatly enhanced and it will become a feasible way of software testing. Examples and experimental results are presented to illustrate the effectiveness and the helpfulness of the technology proposed in the paper.",2007,0, 179,Analysis of errors for uplink array of 34-m antennas for deep space applications,"Although the technologies for large arrays of distributed reflector antennas with just downlink (receiving) capability have been well defined and proven for deep space applications, a similar architecture, i.e., the arraying of distributed reflector antennas for uplink (transmitting) applications, has not been proven, tested, or built yet. In previous papers (Hurd, 2005) the need, feasibility, technology challenges and high-level system issues of a large array of reflector antennas with uplink capability for the future deep space network (DSN) were discussed. In particular, the primary design drivers, cost drivers, and technology challenges for uplink array phase calibration were addressed together with some preliminary test results with the 34-m antenna exciters. It is now of great interest to obtain the key requirements for the current Deep Space Network (DSN) 34-m antennas so that they can operate in an uplink array mode. The successful demonstration of the DSN 34-m antennas in uplink array mode serves as a prototype and a key milestone for the future large array development. In this paper, simulation and analysis of the current DSN 34-m antennas in an uplink array mode are discussed",2005,0, 180,Equivalence between Weight Decay Learning and Explicit Regularization to Improve Fault Tolerance of RBF,"Although weight decay learning has been proposed to improve the generalization ability of a neural network, many simulation studies have demonstrated that it is also able to improve fault tolerance. To explain the underlying reason, this paper presents an analytical result showing the equivalence between adding weight decay and adding explicit regularization when training an RBF to tolerate multiplicative weight noise. Under a mild condition, it is proved that explicit regularization reduces to weight decay.",2008,0, 181,Fault location on series-compensated transmission line using measurements of current differential protective relays,An accurate algorithm for locating faults on a series-compensated line is presented. This algorithm can be applied with current differential protective relays since two-end currents and one-end voltage are utilized as the fault locator input signals. The algorithm applies two subroutines and a procedure for indicating the valid subroutine. The algorithm has been evaluated using the fault data of versatile ATP-EMTP simulations of faults on a series-compensated transmission line. The presented example shows the validity of the algorithm and its high accuracy.,2010,0, 182,Switch fault diagnosis of PM brushless DC motor drive using adaptive fuzzy techniques,An adaptive neuro-fuzzy inference system (ANFIS) is developed to diagnose open switch faults of PM brushless dc motor drives. Features extracted under healthy and faulty operations using wavelet transform are used to train the ANFIS.
Testing of the proposed diagnostic system shows that it can not only diagnose the fault but also identify the faulty switch. Good agreement between experimentation and simulation is obtained.,2004,0,
183,An adaptive scheme for handling deletion spelling errors' for an intelligent e-learning system,"An adaptive scheme for handling spelling errors by an e-learner while responding to the e-learning system through typed-in single-word responses is presented in this paper. To simulate the behaviour of a human instructor, the system preprocesses the input word with respect to spelling errors due to a wrong letter or missing letter. The appropriately encoded input is then fed into a neural net that intelligently recognizes the correct response, in spite of minor spelling mistakes committed by the learner. Results show that the scheme intelligently recognizes the misspelled words as would be done by a human instructor.",2010,0,
184,Coupled field-circuit-mechanical model of an electromagnetic actuator operating in error actuated control system,"An algorithm for coupled field-circuit simulation of the dynamics of an electromagnetic linear actuator operating in an error-actuated control system is presented. The software consists of three main parts: (a) a numerical model of the actuator dynamics which includes equations of a transient electromagnetic field in a non-linear conducting and moving medium, (b) a discrete model of the electric circuit and (c) an optimization solver. Numerical implementation is based on finite elements. The influence of the PID controller settings on the actuator operation is shown. In order to find optimal parameters of the system a genetic algorithm is applied. The simultaneous optimization of both the actuator structure and the regulator settings has been carried out.",2008,0,
185,An Atmospheric Correction Parameter Calculator for a single thermal band earth-sensing instrument,"An atmospheric correction tool has been developed for public web site access for the Landsat-5 and Landsat-7 thermal band. The Atmospheric Correction Parameter Calculator uses the National Centers for Environmental Prediction (NCEP) modeled atmospheric global profiles for a particular date, time and location as input. Using commercially-available MODTRAN software and a suite of integration algorithms, the site-specific atmospheric transmission, and upwelling and downwelling radiances are derived. These calculated parameters can be applied to single band thermal imagery from Landsat-5 Thematic Mapper (TM) or Landsat-7 Enhanced Thematic Mapper Plus (ETM+) to infer an at-surface kinetic temperature for all the pixels in the scene. Given that the TM and ETM+ Band-6 instrument calibration uncertainties in top-of-atmosphere temperature are 1.0 and 0.6 K, respectively, the corresponding uncertainties in the inferred surface temperatures are approximately 2-3 K.",2003,0,
186,A Hierarchy Management Framework for Automated Network Fault Identification,"An autonomous diagnosis approach for faulty links is proposed in this paper. Given information about the paths over which a designated network node with management responsibilities can communicate with certain other nodes and cannot communicate with another set of nodes, and with the help of a diagnosis model and computed link failure probabilities, the managing node can identify as quickly as possible a ranked list of the most probable failed network links and, furthermore, accurately determine which links have failed by testing.
Based on this approach, a hierarchical network management architecture is designed to deal with fault diagnosis in a heterogeneous network environment. The simulation shows that this approach offers real-time operation, high accuracy and autonomy; in particular, it occupies very little bandwidth and in some cases requires none.",2008,0,
187,Controller design and real-time fault diagnosis for a humanoid robot,"An effective controller is crucial for a humanoid robot since a humanoid robot generally has more than thirty DOFs to be controlled in real-time and needs to deal with information from multiple sensors. On the other hand, real-time fault diagnosis is increasingly important for the humanoid robot due to its mechanical and control complexity and the inherent risk of tipping itself over. In this paper, we propose a distributed controller consisting of an online planning sub-system and a motion control sub-system based on CAN bus and Ethernet for humanoid robots. Moreover, a real-time fault diagnosis method is proposed to observe the most probable faults, such as joint over-limit, force/torque sensor failure, encoder failure, and inertial sensor failure. The effectiveness of our designed controller and fault diagnosis was confirmed by experiments on our newly-built humanoid robot.",2010,0,
188,A randomized error recovery algorithm for reliable multicast,"An efficient error recovery algorithm is essential for reliable multicast in large groups. Tree-based protocols (RMTP, TMTP, LBRRM) group receivers into local regions and select a repair server for performing error recovery in each region. Hence a single server bears the entire responsibility of error recovery for a region. In addition, the deployment of repair servers requires topological information about the underlying multicast tree, which is generally not available at the transport layer. This paper presents RRMP, a randomized reliable multicast protocol which improves the robustness of tree-based protocols by diffusing the responsibility of error recovery among all members in a group. The protocol works well within the existing IP multicast framework and does not require additional support from routers. Both analysis and simulation results show that the performance penalty due to randomization is low and can be tuned according to application requirements",2001,0,
189,A pipelined architecture for real-time correction of barrel distortion in wide-angle camera images,"An efficient pipelined architecture for the real-time correction of barrel distortion in wide-angle camera images is presented in this paper. The distortion correction model is based on least-squares estimation to correct the nonlinear distortion in images. The model parameters include the expanded/corrected image size, the back-mapping coefficients, distortion center, and corrected center. The coordinate rotation digital computer (CORDIC) based hardware design is suitable for an input image size of 1028×1028 pixels and is pipelined to operate at a clock frequency of 40 MHz. The VLSI system will facilitate the use of dedicated hardware that could be mounted along with the camera unit.",2005,0,
190,A Fault Management Architecture for Wireless Sensor Network,"Advancement in wireless communication and electronics has made possible the development of low-cost sensor networks. Wireless sensor networks (WSNs) facilitate monitoring and controlling of the physical environment from remote locations with better accuracy.
They can be used in various application areas (e.g. health, military, home). Due to their unique characteristics, they present various research issues that are still unsolved. Sensor energy cannot support long-haul communication, as changing the energy supply is not always possible in a WSN. Also, failures are inevitable in wireless sensor networks due to the inhospitable environment and unattended deployment. Therefore fault management is an essential component of any network management system. In this paper we propose a new fault management architecture for wireless sensor networks. In our solution the network is partitioned into a virtual grid of cells to support scalability and to perform fault detection and recovery locally with minimum energy consumption. Specifically, the grid-based architecture permits the implementation of fault detection in a distributed manner and allows failure reports to be forwarded across cells. A cell manager and a gateway node are chosen in each cell to perform management tasks. Cell managers and gateway nodes coordinate with each other to detect faults with minimum energy consumption. We assume a homogeneous network where all nodes are equal in resources. The architecture has been evaluated analytically and compared with different proposed solutions.",2008,0,
191,Fault-Tolerant Routing Switcher Topologies for Centralized Distribution Systems,"Advances in digital storage technology have made it possible to store vast amounts of program material in a central location. In addition, advances in digital distribution technology have made it possible to move this material more efficiently between facilities and, ultimately, deliver it via multiple channels to a mass audience. At the heart of the network, switching and routing must be deterministic, error free, and fault free.",2004,0,
192,HVDC converter modeling and harmonic calculation under asymmetric faults in the AC system,"After a detailed analysis of the dynamic processes in the HVDC converter when asymmetric faults occur in the ac system, a sequence-component model of the converter based on advanced switching functions is presented. Furthermore, a linear direct method of harmonic calculation is proposed. In this method, both the effect of the harmonic interaction between the ac and dc systems and the dynamic processes in the converter are considered. It is demonstrated that this method provides a quantitative analysis basis for harmonic suppression, filter configuration and relay setting calculation; the method is verified by comparison with the results obtained by dynamic simulation.",2009,0,
193,Redefining and testing interconnect faults in Mesh NoCs,An extended fault model and novel strategy to tackle interconnect faults in networks-on-chip are proposed. Short faults between distinct channels are considered in a cost-effective test sequence for mesh NoC topologies based on XY routing.,2007,0,
194,Fault diagnosis for using TPG low power dissipation and high fault coverage,"This work on BIST (built-in self-test) TPG for low power dissipation and high fault coverage presents a low hardware overhead test pattern generator (TPG) for scan-based built-in self-test (BIST) that can reduce switching activity in circuits under test (CUTs) during BIST and also achieve very high fault coverage with reasonable lengths of test sequences. The proposed BIST TPG decreases transitions that occur at scan inputs during scan shift operations and hence reduces switching activity in the CUT.
The BIST TPG comprises two TPGs, the LT-RTPG and the 3-weight WRBIST. Test patterns generated by the LT-RTPG detect easy-to-detect faults, and test patterns generated by the 3-weight WRBIST detect faults that remain undetected after the LT-RTPG patterns are applied. The BIST TPG does not require modification of mission logic, which could otherwise lead to performance degradation. Recently, techniques to reduce switching activity during BIST have been proposed. A straightforward solution is to reduce the speed of the test clock during scan shift operations. However, since most of the test application time of scan-based BIST is spent on scan shift operations, this will increase test application time considerably if scan flip-flops are clocked at reduced speed during scan shift operations. Larger reductions in switching activity are achieved in large circuits. Experimental results also show that the BIST TPG can be implemented with low area overhead.",2010,0,
195,Injecting bit flip faults by means of a purely software approach: a case studied,"Bit flips provoked by radiation are a main concern for space applications. A fault injection experiment performed using a software simulator is described in this paper. The obtained results allow us to predict a low sensitivity to soft errors for the studied application, highlighting the critical memory elements.",2002,0,
196,A high efficient boost converter with power factor correction,"The boost converter is widely used as an active power factor correction (PFC) pre-regulator. Its input voltage range is universal (90-265 V), and its output voltage is regulated at about 380 V. At low line (90 V) the switch's rms current is high, so the conduction loss of the power MOSFET switch is large and the efficiency of the whole converter is very low. This paper proposes a new control method in which the output voltage varies with the input voltage. Under this control the MOSFET's on-time is shortened and the switch's rms current decreases, which reduces the conduction loss and increases the boost converter efficiency. The distribution of power loss is analyzed with computing software (Mathcad 2000) and the realization of this special control method is given. A 1200 W boost power factor corrector with average current control is built. In order to reduce the diode's turn-off loss, the performance of a 600 V, 12 A silicon carbide (SiC) Schottky diode is also experimentally evaluated. Measurements of overall efficiency and reverse recovery behavior are compared between the SiC diode and a fast recovery diode.",2004,0,
197,A Novel Fast Error-resilient Video Coding Scheme for H.264,"Both perceptually and statistically, compressed video with large or disordered motion is sensitive to errors. In this paper, we propose a novel fast error-resilient video coding scheme, which is based on significant macroblock (MB) determination and protection. The scheme uses three impact factors (inter-block mode, motion vector difference and SAD value) to build a statistical model. The model takes error concealment (EC) into consideration in advance and generates several parameters for further significant degree (SD) evaluation of MBs. During encoding, we build an SD table for each frame based on the parameters and pick the MBs with the largest SD values as significant MBs (SMBs).
Few additional computations are introduced by SMB determination, which makes our scheme practical in real-time video coding scenarios. Simulations show that the scheme has acceptable SMB determination accuracy and that the corresponding protection method can prevent errors effectively",2006,0,
198,Multi-level Bounded Model Checking to detect bugs beyond the bound,"Bounded Model Checking is a widely used technique in both hardware and software verification. However, it cannot be applied if the bound (the number of time frames to be analyzed) becomes large. Therefore it cannot detect bugs that can be observed only through very long counter-example sequences. In this paper, we present a method connecting multiple BMCs by sophisticated uses of an inductive approach and symbolic simulation. The proposed method can check unbounded properties by analyzing loop behaviors in the design with decision procedures. In our verification flow, a property is automatically decomposed and refined instead of the design. First, the property is decomposed so as not to consider reachability from the initial states of the design. Next, if a counter-example is found, the condition to enter it is generated by symbolic simulation. Finally, the reachability from the initial states to the states where the condition becomes true is checked inductively by another bounded model checking run. If they are not reachable from the initial states, then the property is refined to exclude the spurious counter-example. The key observation here is that each BMC run does not need to process as many time frames as pure BMC from the initial states. Therefore, the proposed method can handle much larger bounds. Experimental results with two examples have confirmed this advantage.",2008,0,
199,Error-Related EEG Potentials Generated During Simulated BrainComputer Interaction,"Brain-computer interfaces (BCIs) are prone to errors in the recognition of a subject's intent. An elegant approach to improving the accuracy of BCIs consists in a verification procedure directly based on the presence of error-related potentials (ErrP) in the electroencephalogram (EEG) recorded right after the occurrence of an error. Several studies show the presence of ErrP in typical choice reaction tasks. However, in the context of a BCI, the central question is: ""Are ErrP also elicited when the error is made by the interface during the recognition of the subject's intent?"" We have thus explored whether ErrP also follow a feedback indicating incorrect responses of the simulated BCI interface. Five healthy volunteer subjects participated in a new human-robot interaction experiment, which seems to confirm the previously reported presence of a new kind of ErrP. However, in order to exploit these ErrP, we need to detect them in each single trial using a short window following the feedback associated with the response of the BCI. We have achieved an average recognition rate of correct and erroneous single trials of 83.5% and 79.2%, respectively, using a classifier built with data recorded up to three months earlier.",2008,0,
200,Normalization of illumination conditions for ground based hyperspectral measurements using dual field of view spectroradiometers and BRDF corrections,"BRDF effects present in dual field-of-view spectroscopy datasets were investigated. A data-driven normalization procedure was developed by decomposing the target BRDF into a target-specific Lambertian component and a bi-directional component characterizing a group of similar targets.
The normalization method was used to convert reflectance factors obtained under cloud-obscured conditions into clear-sky conditions. An evaluation on four targets measured under different illumination conditions suggests that the normalization can reduce relative reflectance errors between 400 and 1800 nm from 15% to less than 5%, even under full cloud obscuration. At longer wavelengths a decreased signal-to-noise ratio increases the error level.",2009,0,
201,Reliable backup routing in fault tolerant real-time networks,"Broadband integrated services digital networks (B-ISDN) are intended to transport both real-time traffic and non-real-time traffic. Many of these applications require quality of service (QoS) guarantees. In the literature, little work is found on providing QoS guarantees based on fault tolerance. The reliability of network links is considered one of the parameters when providing QoS guarantees to applications. Considering the reliability of network links as a parameter for QoS guarantees gives applications more flexibility in choosing network resources. A new terminology for dispersity routing is presented which is useful in providing QoS guarantees based on reliability. Dispersity routing transmits the traffic along multiple paths. Also, a reliable backup resource allocation method is presented that can be used in the context of dispersity routing for fault-tolerant real-time networks. An assumption is made that higher capacity is assigned to the links which are more reliable. This helps ensure the availability of resources for a longer period of time. Also, the reliability of links is considered when computing multiple paths along with the shortest path metric.",2001,0,
202,A dynamic technique for eliminating buffer overflow vulnerabilities (and other memory errors),"Buffer overflow vulnerabilities are caused by programming errors that allow an attacker to cause the program to write beyond the bounds of an allocated memory block and corrupt other data structures. The standard way to exploit a buffer overflow vulnerability involves a request that is too large for the buffer intended to hold it. The buffer overflow error causes the program to write part of the request beyond the bounds of the buffer, corrupting the address space of the program and causing the program to execute injected code contained in the request. We have implemented a compiler that inserts dynamic checks into the generated code to detect all out of bounds memory accesses. When it detects an out of bounds write, it stores the value away in a hash table to return as the value for corresponding out of bounds reads. The net effect is to (conceptually) give each allocated memory block unbounded size and to eliminate out of bounds accesses as a programming error. We have acquired several widely used open source servers (Apache, Sendmail, Pine, Mutt, and Midnight Commander). With standard compilers, all of these servers are vulnerable to buffer overflow attacks as documented at security tracking Web sites. Our compiler eliminates these security vulnerabilities (as well as other memory errors). Our results show that our compiler enables the servers to execute successfully through buffer overflow attacks and continue to correctly service user requests without security vulnerabilities.",2004,0,
203,Predicting Re-opened Bugs: A Case Study on the Eclipse Project,"Bug fixing accounts for a large amount of software maintenance resources. Generally, bugs are reported, fixed, verified and closed.
However, in some cases bugs have to be re-opened. Re-opened bugs increase maintenance costs, degrade the overall user-perceived quality of the software and lead to unnecessary rework by busy practitioners. In this paper, we study and predict re-opened bugs through a case study on the Eclipse project. We structure our study along 4 dimensions: (1) the work habits dimension (e.g., the weekday on which the bug was initially closed), (2) the bug report dimension (e.g., the component in which the bug was found), (3) the bug fix dimension (e.g., the amount of time it took to perform the initial fix) and (4) the team dimension (e.g., the experience of the bug fixer). Our case study on the Eclipse Platform 3.0 project shows that the comment and description text, the time it took to fix the bug, and the component the bug was found in are the most important factors in determining whether a bug will be re-opened. Based on these dimensions we create decision trees that predict whether a bug will be re-opened after its closure. Using a combination of our dimensions, we can build explainable prediction models that achieve 62.9% precision and 84.5% recall when predicting whether a bug will be re-opened.",2010,0,
204,On Identifying Bug Patterns in Aspect-Oriented Programs,"Bug patterns are erroneous code idioms or bad coding practices that have been proven to fail time and time again. They mainly arise from the misunderstanding of language features, the use of erroneous design patterns or simple mistakes sharing common behaviors. Aspect-oriented programming (AOP) is a new technique to separate cross-cutting concerns for improving modularity in software design and implementation. However, there is as yet no effective debugging technique for aspect-oriented programs, and no prior research has focused on the identification of bug patterns in aspect-oriented programs. In this paper, we present six bug patterns in the AspectJ programming language and show a corresponding example for each bug pattern to help illustrate the symptoms of these patterns. We take this as a first step toward providing an underlying basis for testing and debugging of AspectJ programs.",2007,0,
205,A discriminative model approach for accurate duplicate bug report retrieval,"Bug repositories are usually maintained in software projects. Testers or users submit bug reports to identify various issues with systems. Sometimes two or more bug reports correspond to the same defect. To address the problem of duplicate bug reports, a person called a triager needs to manually label these bug reports as duplicates, and link them to their ""master"" reports for subsequent maintenance work. However, in practice a considerable number of duplicate bug reports are sent daily; requesting triagers to manually label these bugs could be highly time consuming. To address this issue, several techniques have recently been proposed that use various similarity-based metrics to detect candidate duplicate bug reports for manual verification. Automated triaging has proved challenging, as two reports of the same bug could be written in various ways. There is still much room for improvement in the accuracy of the duplicate detection process. In this paper, we leverage recent advances in using discriminative models for information retrieval to detect duplicate bug reports more accurately. We have validated our approach on three large software bug repositories from Firefox, Eclipse, and OpenOffice.
We show that our technique results in 17-31%, 22-26%, and 35-43% relative improvement over state-of-the-art techniques on the OpenOffice, Firefox, and Eclipse datasets, respectively, using commonly available natural language information only.",2010,0,
206,Automatic Identification of Bug-Introducing Changes,"Bug-fixes are widely used for predicting bugs or finding risky parts of software. However, a bug-fix does not contain information about the change that initially introduced a bug. Such bug-introducing changes can help identify important properties of software bugs such as correlated factors or causalities. For example, they reveal which developers or what kinds of source code changes introduce more bugs. In contrast to bug-fixes, which are relatively easy to obtain, the extraction of bug-introducing changes is challenging. In this paper, we present algorithms to automatically and accurately identify bug-introducing changes. We remove false positives and false negatives by using annotation graphs, by ignoring non-semantic source code changes, and by ignoring outlier fixes. Additionally, we validated by manual inspection that the fixes we used are true fixes. Altogether, our algorithms can remove about 38%~51% of false positives and 14%~15% of false negatives compared to the previous algorithm. Finally, we show applications of bug-introducing changes that demonstrate their value for research",2006,0,
207,A comparison of bug finding tools for Java,"Bugs in software are costly and difficult to find and fix. In recent years, many tools and techniques have been developed for automatically finding bugs by analyzing source code or intermediate code statically (at compile time). Different tools and techniques have different tradeoffs, but the practical impact of these tradeoffs is not well understood. In this paper, we apply five bug finding tools, specifically Bandera, ESC/Java 2, FindBugs, JLint, and PMD, to a variety of Java programs. By using a variety of tools, we are able to cross-check their bug reports and warnings. Our experimental results show that none of the tools strictly subsumes another, and indeed the tools often find nonoverlapping bugs. We discuss the techniques each of the tools is based on, and we suggest how particular techniques affect the output of the tools. Finally, we propose a meta-tool that combines the output of the tools together, looking for particular lines of code, methods, and classes that many tools warn about.",2004,0,
208,Variable block size error concealment scheme based on H.264/AVC non-normative decoder,"As the newest video coding standard, H.264/AVC can achieve high compression efficiency. At the same time, due to its highly efficient predictive coding and variable-length entropy coding, it is more sensitive to transmission errors. So error concealment (EC) in H.264 is very important when compressed video sequences are transmitted over error-prone networks and received erroneously. To achieve higher EC performance, this paper proposes a variable block size error concealment scheme (VBSEC) utilizing the new concept of variable block size motion estimation (VBSME) in the H.264 standard. This scheme provides four EC modes and four sub-block partitions. The whole corrupted macro-block (MB) is divided adaptively into variable-sized blocks according to the actual motion. More precise motion vectors (MVs) are predicted for each sub-block. We also produce a more accurate distortion function based on the spatio-temporal boundary matching algorithm (STBMA).
By utilizing the VBSEC scheme based on our STBMA distortion function, we can reconstruct corrupted MBs in inter frames more accurately. The experimental results show that our proposed scheme can obtain maximum PSNR gains of up to 1.72 dB and 0.48 dB compared with the boundary matching algorithm (BMA) adopted in the JM11.0 reference software and with STBMA, respectively.",2007,0,
209,Improving the Table Boundary Detection in PDFs by Fixing the Sequence Error of the Sparse Lines,"With the rapid growth of PDF documents, recognizing document structure and components is useful for document storage, classification and retrieval. Tables, as ubiquitous document components, have become an important information source. Accurately detecting the table boundary plays a crucial role in many applications, e.g., meeting the increasing demand for table data search. Rather than converting PDFs to images or HTML and then processing them with other techniques (e.g., OCR), extracting and analyzing text from PDFs directly is easy and accurate. However, text extraction tools face a common problem: text sequence errors. In this paper, we propose two algorithms to recover the sequence of extracted sparse lines, which improves table content collection. The experimental results compare the performance of both algorithms and demonstrate the effectiveness of text sequence recovery for table boundary detection.",2009,0,
210,Exploit failure prediction for adaptive fault-tolerance in cluster computing,"As the scale of cluster computing grows, it is becoming hard for long-running applications to complete without facing failures on large-scale clusters. To address this issue, checkpointing/restart is widely used to provide basic fault-tolerant functionality, yet it suffers from high overhead and its reactive nature. In this work, we propose FT-Pro, an adaptive fault management mechanism that optimally chooses migration, checkpointing or no action to reduce the application execution time in the presence of failures, based on failure prediction. A cost-based evaluation model is presented for dynamic decision-making at runtime. Using an actual failure log from a production cluster at NCSA, we demonstrate that even with modest failure prediction accuracy, FT-Pro outperforms the traditional checkpointing/restart strategy by 13%-30% in terms of reducing the application execution time despite failures, which is a significant performance improvement for long-running applications.",2006,0,
211,Application of neural network and DS evidence fusion algorithm in power transformer fault diagnosis,"As transformer fault types and fault positions exhibit complexity, complementarity, redundancy and strong uncertainty, a satisfactory diagnosis of transformer fault types and fault positions may not be obtained if only one technique is used. In this paper, a synthetic diagnosis method using a neural network and DS evidence theory for transformer fault diagnosis is presented, combining DGA data with data fusion theory. This method has the advantages of both neural networks and DS evidence theory; it can effectively solve the problem of uncertainty and improve the fault diagnosis system's accuracy and reliability. A simulation example in this paper illustrates that the method combining a neural network with DS evidence theory greatly improves the credibility of the fused data, making diagnostic systems easy to design, highly precise and easy to operate.
It can fulfill users' requirements perfectly.,2010,0,
212,Compiler-Managed Software-based Redundant Multi-Threading for Transient Fault Detection,"As transistors become increasingly smaller and faster with tighter noise margins, modern processors are becoming increasingly susceptible to transient hardware faults. Existing hardware-based redundant multi-threading (HRMT) approaches rely mostly on special-purpose hardware to replicate the program into redundant execution threads and compare their computation results. In this paper, we present a software-based redundant multi-threading (SRMT) approach for transient fault detection. Our SRMT technique uses the compiler to automatically generate redundant threads so they can run on general-purpose chip multi-processors (CMPs). We exploit high-level program information available at compile time to optimize data communication between redundant threads. Furthermore, our software-based technique provides a flexible program execution environment where legacy binary codes and reliability-enhanced codes can co-exist in a mix-and-match fashion, depending on the desired level of reliability and software compatibility. Our experimental results show that compiler analysis and optimization techniques can reduce the data communication requirement by up to 88% compared with HRMT. With general-purpose intra-chip communication mechanisms in a CMP machine, SRMT overhead can be as low as 19%. Moreover, the SRMT technique achieves error coverage rates of 99.98% and 99.6% for SPEC CPU2000 integer and floating-point benchmarks, respectively. These results demonstrate the competitiveness of SRMT with HRMT approaches",2007,0,
213,Solid-state fault current limiters: Silicon versus silicon carbide,"As utilities face increasing fault currents in their systems as a result of increasing demand and/or the deployment of new technologies, fault current limiters promise a solution that will mitigate the need to replace existing breakers as well as serve as a general protective device for elements connected to the grid. This paper describes some recent advances in semiconductor-based fault current limiting technology, including both the more mature silicon developments and early developments using silicon carbide. The capabilities and limitations of these technologies are compared and contrasted. Some example FCL scenarios have been analyzed and are briefly described, along with advanced features that semiconductor FCLs may bring to the solution space.",2008,0,
214,Enhanced Error Vector Magnitude (EVM) Measurements for Testing WLAN Transceivers,"As wireless LAN devices become more prevalent in the consumer electronics market, there is ever-increasing pressure to reduce their overall cost. The test cost of such devices is an appreciable percentage of the overall cost, which typically results from the high number of specifications, the high number of distinct test set-ups and equipment pieces that need to be used, and the high cost of each test set-up. In this paper, we investigate the versatility of EVM measurements for testing variable-envelope wireless local area network (WLAN) receiver and transmitter characteristics. The goal is to optimize the EVM test parameters (input data and test limits) and to reduce the number of specification measurements that require long test times and/or expensive test equipment.
Our analysis shows that enhanced EVM measurements (optimized data sequence and limits; use of RMS, scale, and phase error vector values) in conjunction with a set of simple path measurements (input-output impedances) can provide the desired fault coverage while eliminating lengthy spectrum mask and noise figure tests",2006,0,
215,A bug you like: A framework for automated assignment of bugs,"Assigning bug reports to individual developers is typically a manual, time-consuming, and tedious task. In this paper, we present a framework for automated assignment of bug-fixing tasks. Our approach employs preference elicitation to learn developer predilections for fixing bugs within a given system. This approach infers knowledge about a developer's expertise by analyzing the history of bugs previously resolved by the developer. We apply a vector space model to recommend experts for resolving bugs. When a new bug report arrives, the system automatically assigns it to the appropriate developer considering his or her expertise, current workload, and preferences. We address the task allocation problem by proposing a set of heuristics that support accurate assignment of bug reports to developers.",2009,0,
216,Fault-Tolerant Reconfiguration System for Asymmetric Multilevel Converters Using Bidirectional Power Switches,"Asymmetric multilevel converters can optimize the number of levels by using H-bridges scaled in powers of three. The shortcoming of this topology is that the H-bridges are not interchangeable, and thus, under certain fault conditions, the converter cannot operate. A reconfiguration system based on bidirectional electronic valves has been designed for three-phase cascaded H-bridge inverters. Once a fault is detected in any of the insulated gate bipolar transistors of any H-bridge, the control is capable of reconfiguring the hardware, keeping the higher-power bridges in operation. In this way, the faulty phase can continue working at the same voltage level by adjusting its gating signals. Simulations and experiments with a 27-level inverter are presented to show the operation of the system under a faulty condition.",2009,0,5616
217,Orbit drift correction using correctors with ultra-high DAC resolution,At BESSY the planned continuous orbit drift correction could not go into routine operation as originally foreseen: the resolution of the 3 mrad correctors controlled by 16-bit DACs was insufficient and perturbed specific experiments unacceptably. Now a novel 2×16-bit coarse/fine type I/O board has solved this problem while preserving the full dynamic range of the correctors. Permanent correction activity no longer deteriorates experimental conditions. A typical orbit definition within +/- 5 μm at more than 90% of the BPMs during a day is achieved. Even large perturbations caused by e.g. decaying superconducting wavelength shifter currents or residual effects of undulator operations are adequately suppressed,2001,0,
218,Application of a New Image Recognition Technology in Fabric Defect Detection,"At present, defect detection during manufacturing is still performed manually, which has many weaknesses such as low detection efficiency and a high miss rate. These seriously affect production quality and restrict further improvement of production efficiency. This paper presents a method using a Fisher classifier in computer image pattern recognition for defect detection and grade scoring of fabric, and describes its realization in software programming and testing.
The test results show that the defect recognition rate is 94%.",2008,0,
219,A Fault Diagnostic Method for EFI Engine Based on MATLAB Software Package,"At present, the diagnostic instruments widely used at home and abroad are incomplete in that they cannot diagnose mechanical faults without fault codes. In order to solve this problem, this paper presents a method for fault diagnosis of electronic fuel injection (EFI) engines using a radial basis function (RBF) neural network. By connecting the MATLAB software package with an ACCESS database, a fault diagnosis program is set up and faults without fault codes can be found. Meanwhile, a comparison has been made between the RBF network and a back propagation (BP) network. The simulation and experimental results show that the RBF model is more feasible and successful than the BP model and makes fault diagnosis easier.",2008,0,
220,Use of a simple storage ring simulation for development of enhanced orbit correction software,"At the Advanced Photon Source (APS) most of the testing of minor operational software is done during accelerator studies time. For major software changes, such as the porting of the complex workstation-based orbit control software to an EPICS IOC, much of the testing was done 'offline' on a test IOC. A configurable storage ring simulator was created in a workstation with corresponding control system records for correctors and orbit readbacks. The simulator's features will be described, as well as the method used to develop and debug the most recent improvement of the APS orbit control software, among others. The simulator is also useful in general-purpose software testing.",2003,0,
221,Machine Current Signature Analysis as a Way for Fault Detection in Squirrel Cage Wind Generators,"At the moment renewable generation systems are increasing their presence. The paper is about how machine current signature analysis (MCSA) can reliably diagnose faults in squirrel cage generators. This paper focuses on the experimental investigation of incipient fault detection and fault detection methods, suitably adapted for use in wind generator systems using squirrel cage generators. The proposed system diagnoses asynchronous generators with three types of faults: broken rotor bars, short circuits of stator windings and bearing faults. After processing the current data, the classical fast Fourier transform is applied to detect characteristics under healthy and various faulted conditions with MCSA.",2007,0,
222,Shadow checker (SC): A low-cost hardware scheme for online detection of faults in small memory structures of a microprocessor,"At various stages of a product's life, faults arise from different sources. During product bring-up, logic errors are dominant. During production, manufacturing defects are the main concern, while during operation, the concern shifts to aging defects. No matter what the source is, debugging such defects may permit logic, circuit or physical design changes to eliminate them in the future. Within a processor chip, there are three broad categories of structures, namely large memory structures such as caches, small memory structures such as the reorder buffer, issue queue, and load-store buffers, and the data-path. Most control functions and data steering operations are based on small memory structures, and they are hard to debug. In this paper, we propose a lightweight hardware scheme, called the shadow checker, to detect faults in these critical units. The entries in these units are tested by means of a shadow entry that mimics the intended operation. A mismatch traps an error.
The shadow checker shadows an entry for a few thousand cycles before moving on to shadow another. This scheme can be employed to test chips during silicon debug and manufacturing test, as well as during regular operation. We ran experiments on 13 SPEC2000 benchmarks and found that our scheme detects 100% of inserted faults.",2010,0,
223,Whose bug is it anyway? The battle over handling software flaws,"Attacks exploit vulnerabilities in software code. They come in many forms: logic attacks, Trojan horses, worms and viruses, and variants of each. They serve a host of purposes: corporate espionage, white-collar crime, social ""hacktivism,"" terrorism, and notoriety. Greater connectivity, more complex software, and the persistence of older protocols ensure growing vulnerability. End users lose time and money when networks go down. Software vendors lose face and market share. Security researchers struggle to keep pace with the bugs to keep businesses operating safely. The only people with no complaints are the hackers, who reverse-engineer patches released by vendors to exploit the holes. It's enough to make you nostalgic for the old days of the Nimda and Code Red viruses, when attacks came six months after vendors released patches. Blaster attacks began three weeks after release. Security experts anticipate so-called ""zero day"" vulnerabilities, in which attacks precede patches. Although marathon patching sessions have become the norm for harried IT administrators, even top-of-the-line patch management can't keep up with malicious code's growing sophistication.",2004,0,
224,Fault localization using visualization of test information,"Attempts to reduce the number of delivered faults in software are estimated to consume 50% to 80% of the development and maintenance effort, according to J.S. Collofello and S.N. Woodfield (1989). Among the tasks required to reduce the number of delivered faults, debugging is one of the most time-consuming, according to T. Ball and S.G. Eick and Telcordia Technologies, and locating the errors is the most difficult component of this debugging task, according to I. Vessey (1985). Clearly, techniques that can reduce the time required to locate faults can have a significant impact on the cost and quality of software development and maintenance.",2004,0,
225,Attenuation correction in MR-PET scanners with segmented T1-weighted MR images,"Attenuation correction of PET data acquired in new hybrid MR-PET scanners, which do not offer the possibility of a measured attenuation correction, can be done in different ways. A previous report by our group described a method which used attenuation templates. The present study utilizes a new knowledge-based segmentation approach applied to T1-weighted MR images. It examines the position and tissue membership of each voxel and segments the head volume into regions of differing attenuation: brain tissue, extracerebral soft tissue, skull, air-filled nasal and paranasal cavities, as well as the mastoid process. To examine this new approach three groups of subjects having MRI and PET were chosen, the selection criterion being the different MR scanners, while the PET scanner was the ECAT HR+ in all cases: 1) four subjects with 1.5T MR images and CPFPX PET scans, 2) four subjects with 3T MR images and Altanserin PET scans, and 3) three brain tumor patients with 3T MR images from the hybrid MR-BrainPET scanner and FET PET scans. Furthermore, a single subject had 3T MR images, an FDG PET scan, and an additional CT scan.
All segmented T1-weighted MR images were converted into attenuation maps for 511 keV photons with coefficients of 0.096 1/cm for brain tissue, 0.146 1/cm for skull, 0.095 1/cm for soft tissue, 0.054 1/cm for the mastoid process, and 0.0 1/cm for nasal and paranasal cavities. The CT volume was also converted from Hounsfield units into attenuation coefficients valid for 511 keV photons. The 12 segmentation-based attenuation (SBA) maps as well as the CT-based attenuation (CBA) map were first filtered by a 3D Gaussian kernel of 10 mm filter width and then used to reconstruct the corresponding PET emission data. These were compared to the PET images attenuation-corrected using the conventional PET-based transmission data (PBA). Relative differences (RD) were calculated from ROIs. For the single subject the RD of the CBA data exhibit a mean of 1.66%±0.84% with a range from -0.88% to 3.42%, while the mean RD of the SBA data is 1.42%±2.61% (range from -4.12% to 4.66%). Comparing the results obtained with the SBA correction only, the RD for 1) range from -6.10% to 2.56% for cortical regions and from -6.99% to 5.64% for subcortical regions; for 2) they range from -7.33% to 2.33% for the cortical regions, subcortical ones not being drawn due to insufficient tracer uptake; for 3) the mean over the three subjects resulted in 0.89%±1.10% for ROIs at a 48% threshold of the image's maximum and in 2.25%±1.50% for ROIs at a 72% threshold. ROIs on the healthy contra-lateral grey matter show a mean of -3.24%±0.87%. In conclusion, the first attenuation correction results obtained with the new segmentation-based method on a strongly heterogeneous collective are very promising. Further improvements of the method will be focused on the delineation of the skull.",2009,0,
226,"Off-line error prediction, diagnosis and recovery using virtual assembly systems","Automated assembly systems often stop operating due to unexpected failures occurring during the assembly process. Since these large-scale systems involve many parameters, it is difficult to anticipate all possible types of errors along with their likelihood of occurrence. Several systems have been developed in the literature focusing on online diagnosis and recovery of the assembly process in an intelligent manner based on predicted error scenarios. However, these systems do not cover all of the possible errors, and they are deficient in dealing with unexpected error situations. The proposed approach uses Monte Carlo simulation of the assembly process with the 3D model of the assembly line to predict possible errors in an offline manner. These predicted errors can then be diagnosed and recovered using Bayesian reasoning and genetic programming. A case study of a peg-in-hole assembly was performed and the results are discussed. It is expected that with this new approach, errors can be diagnosed and recovered accurately, and costly downtime of robotic assembly systems will be reduced.",2001,0,
227,An on-line monitoring and multi-layer fault diagnosis system of electrical equipment based on geographic information system,"Automated mapping/facilities management/geographic information systems (AM/FM/GIS), which provide a powerful way to process graphic and non-graphic information, can construct a spatial database system with topological structure and analysis functions by combining diversified power system information with geographically referenced graphic information.
Based on AM/FM/GIS and an on-line monitoring system, an integrated system is put forward which can implement state monitoring, multi-layer fault diagnosis and fault assessment. By using this integrated system, latent faults and defects can be eliminated, losses due to power cuts are reduced and the reliability of the running power system is improved. Application indicates it is economical and practical and has excellent performance.",2005,0,
228,Update on distribution system fault location technologies and effectiveness,Automatic fault location is an area of significant interest and research in the industry. This paper provides an update on the work performed to date with various utilities and their fault location systems. Basic information on the techniques used to locate faults is provided as well as several examples of where these techniques have been deployed.,2009,0,7353
229,Analysis of Error Sources Towards Improved Form Processing,"Automatic form processing is an important application of the document analysis field. Such a system needs to be trained and tested on a standard database of forms collected from real life. However, to the best of our knowledge, the only such available databases are the NIST Special Databases. These databases consist of images of synthesized form documents. On the other hand, we recently developed a form database whose samples were taken from real life. ISIFormReader, a form processing system also developed recently, has been tested using these real-life samples. An intensive study of the processing errors showed that writers' idiosyncrasies are one of the major sources of such errors, as analyzed in U. Bhattacharya et al. (2006). In the present paper, we investigated various other sources of errors which together cause major concern. These include sample forms which are low in contrast, noisy, smudgy, skewed, scaled in ways that disturb the aspect ratio, and so on. An analysis of errors due to such sources is important for the development of an improved form processing system.",2006,0,
230,A Multi-step Simulation Approach toward Secure Fault Tolerant System Evaluation,"As new techniques of fault tolerance and security emerge, so does the need for suitable tools to evaluate them. Generally, the security of a system can be estimated and verified via logical test cases, but the performance overhead of security algorithms on a system needs to be numerically analyzed. The diversity in security methods and in the design of fault-tolerant systems makes it impossible for researchers to come up with a standard, affordable and openly available simulation tool, evaluation framework or experimental test-bed. Therefore, researchers choose from a wide range of available modeling-based, implementation-based or simulation-based approaches in order to evaluate their designs. All of these approaches have certain merits and several drawbacks. For instance, development of a system prototype provides a more accurate system analysis but, unlike simulation, it is not highly scalable. This paper presents a multi-step, simulation-based performance evaluation methodology for secure fault-tolerant systems. We use a divide-and-conquer approach to model the entire secure system in a way that allows the use of different analytical tools at different levels of granularity. This evaluation procedure tries to strike a balance between the efficiency, effort, cost and accuracy of a system's performance analysis.
We demonstrate this approach in a step-by-step manner by analyzing the performance of a secure and fault-tolerant system using a Java implementation in conjunction with an ARENA simulation.",2010,0,
231,Layout to Logic Defect Analysis for Hierarchical Test Generation,"As shown by previous studies, shorts between interconnect wires should be considered the predominant cause of failures in CMOS circuits. Fault models and tools for targeting these defects, such as bridging fault test pattern generators, have been available for a long time. However, this paper proposes a new hierarchical approach based on critical area extraction for identifying the possible shorted pairs of nets on the basis of the chip layout information, combined with logic-level test pattern generation for bridging faults. Experiments on real design layouts show that only a fraction of all the possible pairs of nets have non-zero shorting probabilities. Furthermore, it is also proven at the logic level that nearly all such bridging faults can be tested by a simple and robust one-pattern logic test. The methods proposed in this paper are supported by a design flow implemented with existing commercial and academic CAD software.",2007,0,
232,TIP-OPC: a new topological invariant paradigm for pixel based optical proximity correction,"As 193 nm lithography is likely to be used for 45 nm and even 32 nm processes, much more stringent requirements will be posed on optical proximity correction (OPC) technologies. Currently, there are two OPC approaches - model-based OPC (MB-OPC) and inverse lithography technology (ILT). MB-OPC generates masks which are less complex compared with ILT. But ILT produces much better results than MB-OPC in terms of contour fidelity because ILT is a pixel-based method. Observing that MB-OPC preserves the mask shape topologies, which leads to lower mask complexity, we combine the strengths of both methods - the topology-invariant property and the pixel-based mask representation. To the best of our knowledge, this is the first time that this topological invariant pixel-based OPC (TIP-OPC) paradigm has been proposed; it fills a critical hole in the OPC landscape and potentially has many new applications. Our technical novelty includes the lithography-friendly mask topological invariant operations, the efficient fast Fourier transform based cost function sensitivity computation and the TIP-OPC algorithm. The experimental results show that TIP-OPC can achieve much better post-OPC contours compared with MB-OPC while maintaining the mask shape topologies.",2007,0,
233,Fault tolerance techniques for high capacity RAM,"As the complexity and size of embedded memories keep increasing, improving the yield of embedded memories is the key step toward improving the overall chip yield of an SOC design. The most well-known way to improve memory yield is to use redundant elements to replace faulty cells. However, the repair efficiency mainly depends on the type and amount of redundancy, and on the redundancy analysis algorithms. Therefore, new types of redundancy based on divided bit-line (DBL) and divided word-line (DWL) techniques are proposed in this work. A memory column (row), including the redundant column (row), is partitioned into column blocks (row blocks), respectively. A row/column block is used as the basic replacement element instead of a row/column as in the traditional approaches.
Based on the new types of redundancy, three types of fault-tolerant memory (FTM) systems are also proposed. If a redundant row/column block is used as the basic replacement element, then the row block-based FTM (RBFTM)/column block-based FTM (CBFTM) system is used. If both the DWL and DBL techniques are implemented on a memory chip, then the hybrid FTM (HFTM) system is achieved. The storage and remapping of faulty addresses can be implemented with a CAM (content addressable memory) block. To achieve better repair efficiency, a novel hybrid block-repair (HBR) algorithm is also proposed. This algorithm is suitable for hardware implementation with negligible overhead. For the HFTM system, the hardware overheads are less than 0.65% and 0.7% for a 64-Kbit SRAM and an 8-Mbit DRAM, respectively. Moreover, the repair rate can be improved significantly. Experimental results show that our approaches can improve the memory fabrication yield significantly. The characteristics of low power and fast access time of the DBL and DWL techniques are also preserved.",2006,0,
234,Research on software defect prediction based on data mining,"With the development of computer technology, software systems become more and more complicated. Because of the limits of human ability, many defects are inevitably generated during the software development life cycle. This paper reviews the state of the art in the field of software defect management and prediction, and briefly presents data mining technology. Finally, it proposes an ideal software defect management and prediction system and analyzes several software defect prediction methods based on data mining techniques and specific models (Bayesian networks and PRM). With this system, we can efficiently draw up prevention and solution schemes to guide the development of new software.",2010,0,
235,Software Reliability Modeling with Test Coverage: Experimentation and Measurement with A Fault-Tolerant Software Project,"As the key factor in software quality, software reliability quantifies software failures. Traditional software reliability growth models use the execution time during testing for reliability estimation. Although testing time is an important factor in reliability, it is likely that the prediction accuracy of such models can be further improved by adding other parameters which affect the final software quality. Meanwhile, in software testing, test coverage has been regarded in the literature as an indicator of testing completeness and effectiveness. In this paper, we propose a novel method to integrate time and test coverage measurements together to predict reliability. The key idea is that failure detection is related not only to the time that the software experiences under testing, but also to what fraction of the code has been executed by the testing. This is the first time that execution time and test coverage are incorporated together into one single mathematical form to estimate the reliability achieved. We further extend this method to predict the reliability of fault-tolerant software systems. The experimental results with multi-version software show that our reliability model achieves a substantial estimation improvement compared with existing reliability models.",2007,0,
236,Bi-direction Motion Vector retrieval based error concealment scheme for H.264/AVC,"As the newest video coding standard, H.264/AVC adopts highly efficient predictive coding and variable-length entropy coding to achieve high compression efficiency.
On the other hand, transmission errors have become the major problem faced by video broadcasting service providers. Error concealment (EC) is adopted here to handle slices containing large contiguous corrupted areas. Considering that error propagation from a corrupted slice to succeeding ones is the key factor affecting video quality, this paper proposes a novel temporal EC scheme including a bi-direction motion vector (MV) retrieval method and an adaptive EC ordering based on it. Background regions and steadily moving parts of the slice are given first and second priority, respectively. Combined with our proposed improved boundary matching algorithm (IBMA), which provides a more accurate distortion function, experimental results show that our proposal achieves better performance under channels with different error rates, compared with the EC algorithm adopted in the H.264 reference software.",2009,0, 237,Design and realization on the fault diagnostic flat based on virtual instrument for warship equipment,"Based on virtual instrument (VI) technology, Delphi, databases, etc., a fault diagnostic platform for shipboard equipment is developed in order to avoid the various drawbacks of conventional methods. Modularization and universalization are proposed in its database-based design concept, and the software and hardware designs are realized. It breaks through conventional inspection and diagnosis patterns for warship equipment, resolves problems that are difficult to overcome with existing conventional approaches to examining and repairing shipboard equipment, and greatly shortens the maintenance cycle for naval warship equipment. It was proved by experiments that the platform has merits such as simple operation, high testing precision, strong flexibility, reliability, extensibility, and economical practicability. It also has value for developing other fault diagnostic instruments.",2010,0, 238,Fault Location Using Sparse IED Recordings,"The basic goal of a power system is to continuously provide electrical energy to users. As with any other system, failures in a power system can occur. In those situations it is critical that remedial actions are applied as soon as possible. To apply correct remedial actions it is very important that the fault condition and location are detected accurately. In this paper, different fault location algorithms are presented, followed by a description of the intelligent techniques used to implement the corresponding algorithms. A new approach for fault location using sparse measurements is examined. According to the available data, it decides between different algorithms and selects an optimal one. The new approach is developed by utilizing different data structures in order to efficiently implement the algorithm decision engine, which is presented in the paper.",2007,0, 239,Navigation error analysis for the rocketplane XP,"bd Systems has supported the navigation architecture development of the Rocketplane XP as part of its responsibility as the vehicle guidance, navigation, and control (GN&C) subsystem lead contractor. The Rocketplane XP is a reusable sub-orbital space-plane being designed by Rocketplane Limited, Inc. for the commercial space tourism market. The horizontal take-off/horizontal landing vehicle, which will be one of the world's first true manned aerospace vehicles, has a short development schedule and began operation in 2007.
bd Systems performed a detailed trade study to select the navigation subsystem and developed high-fidelity error models for use in the overall vehicle six-degree-of-freedom (6DOF) simulation in order to determine the effects of navigation errors on vehicle performance.",2009,0, 240,Modeling and simulation of inner defect in impulse storage capacitor,"Because of their large capacitance and small volume, impulse storage capacitors were found to suffer serious insulation damage from fast impulses. Based on the electrical discharge mechanism, several classical capacitor defects are put forward in this paper. To estimate the status of the insulation, the electric field distribution of the defects should be analyzed carefully. As the most common defect in storage capacitors, inner defect models were designed for FEA (finite element analysis). Through simulation and analysis, the results prove that different sizes and locations of inner defects in the insulation result in different local concentrations and distortions of the electric field distribution.",2005,0, 241,A method for fault diagnosis of analog circuits based on optimized fuzzy inference,"Because of the difficulty of handling fuzzy inference rules for fault diagnosis, a systematic approach for fault diagnosis of analog circuits based on optimized fuzzy inference is presented. In the fuzzy logic system for fault diagnosis of analog circuits based on fuzzy inference, a quantum-inspired evolutionary algorithm (QEA) is used to optimize the membership functions of the rules, and a self-adaptive genetic algorithm is then used to select the optimum fuzzy rule set, so the number of fuzzy rules is decreased to make fault diagnosis of analog circuits easier. The simulation results of an analog power amplifier circuit show that the fault diagnosis method for analog circuits with optimized fuzzy inference is effective.",2010,0, 242,An Integrated Platform for Collaborative Simulation and Fault Diagnosis,"Before application to real manufacturing, thorough evaluation of working performance and the fault diagnosis strategy is absolutely necessary. A comprehensive multifunctional simulation platform based on collaborative simulation and fault diagnosis is presented. Based on virtual prototype, database, network communication, and fault diagnosis technologies, the platform allows users to model in a structural and hierarchical way, study the behaviors of individual components and the interaction of subsystems, and adjust the fault diagnosis solutions in a simulation environment. An integrated framework is developed to manage the model databases, assemble the models, run the simulation application, and display the simulation results. An embedded fault diagnosis module, which can help users master fault recovery, is employed to verify the reliability of fault diagnosis solutions. The developed integrated platform is shown to be a valuable tool both for performance simulation and for fault diagnosis education. A case of a large-scale erecting vehicle is introduced to demonstrate the concept.",2006,0, 243,A Fault-Tolerant Framework for Web Services,"As the new generation of middleware, Web services have enjoyed great popularity in recent years. The high usability of Web services is becoming a new focus for research. According to the demands of Web services, a fault-tolerant Web services framework named FW4WS is presented.
In this article, we set forth the framework and the workflow of the system in detail.",2009,0, 244,Robot error detection using an artificial immune system,"Biology has produced living creatures that exhibit remarkable fault tolerance. The immune system is one feature that enables this. The acquired immune system learns during the life of the individual to differentiate between self (that which is normally present) and non-self (that which is not normally present). This paper describes an artificial immune system (AIS) that is used as an error detection system and is applied to two different robot-based applications: the immunization of a fuzzy controller for a Khepera robot that provides object avoidance, and a control module of a BAE Systems RASCAL(TM) robot. The AIS learns normal behavior (unsupervised) during a fault-free learning period and then identifies all errors greater than a preset error sensitivity. The AIS was implemented in software but has the potential to be implemented in hardware. The AIS can be independent of the system under test, requiring just the inputs and outputs. This is not only ideal in terms of common-mode and design errors but also offers the potential of a general, off-the-shelf error detection system; the same AIS was applied to both applications.",2003,0, 245,Error Evaluation of BAQ Algorithm for Internal Calibration Data of Spaceborne SAR,"BAQ is an efficient algorithm for compressing SAR echo data, which follow a Gaussian distribution. Internal calibration is an important system resource, used to calibrate the system gain change before and after imaging and to provide a more accurate reference function for range compression. But the echoes are different from calibration data. This paper discusses whether the error introduced by the BAQ compression algorithm is acceptable for this system. Both the theoretical simulation and the experimental results indicate that the internal calibration data can only be transmitted directly, not compressed, which provides an important reference for engineering design.",2006,0, 246,A Color Error Correcting Model for Scanning Input Image,"Based on an analysis of the color rendering principle of scanned objects and the causes of color error, a new algorithm of color space conversion for scanned images is proposed. First, some parameters of the Neugebauer equation and the Yule-Nielsen equation, which originally could only be used for dot images, are reinterpreted so that they can be used for non-dot images. Then, the paper presents the derivation procedure of the color correction equation. Finally, the experimental results show that the algorithm yields a more accurate approximation compared with some typical mainstream methods.",2006,0, 247,Fault diagnosis of power circuits based on SVM ensemble with quantum particles swarm optimization,"Based on a least squares wavelet support vector machine (LS-WSVM) ensemble with a quantum particle swarm optimization (QPSO) algorithm, a systematic method for fault diagnosis of power circuits is presented. Firstly, wavelet coefficients of output voltage signals of power circuits under faulty conditions are obtained with wavelet lifting decomposition, and faulty feature vectors are then extracted from the processed wavelet coefficients. Secondly, a boosting strategy is adopted to select faulty feature vectors automatically for the LS-WSVM-based multi-class classifiers, and QPSO is applied to select the optimal values of the regularization and kernel parameters of the multi-class LS-WSVM.
Thus, the multi-class LS-WSVM ensemble model with boosting for the power circuit fault diagnosis system is built. The simulation results of push-pull circuits show that the fault diagnosis method for power circuits using an LS-WSVM ensemble with QPSO is effective.",2008,0, 248,Fault diagnosis for power circuits based on SVM within the Bayesian framework,"Based on least squares wavelet support vector machines (LS-WSVM) within the Bayesian evidence framework, a systematic method for fault diagnosis of power circuits is presented. In this paper, the Bayesian evidence framework is applied to select the optimal values of the regularization and kernel parameters of the multi-class LS-WSVM classifiers. Also, wavelet coefficients of output voltage signals of power circuits under faulty conditions are obtained with wavelet lifting decomposition, and faulty feature vectors are then extracted from the processed wavelet coefficients. The faulty feature vectors are used to train the multi-class LS-WSVM classifiers, and thus the model of the power circuit fault diagnosis system is built. This method is applied to diagnose faults of push-pull circuits in simulation; the results show that the fault diagnosis method for power circuits with LS-WSVM within the Bayesian evidence framework is effective.",2008,0, 249,Based on Compact Type of Wavelet Neural Network Tolerance Analog Circuit Fault Diagnosis,"Based on the classical wavelet neural network, this paper puts forward an improved multiple-input multiple-output compact wavelet neural network, adopts an adaptive learning rate and an additional-momentum BP algorithm to carry out training, and studies its applications to tolerance analog circuit fault diagnosis. Simulation results show that the compact wavelet neural network learns quickly and can effectively diagnose and locate tolerance analog circuit faults.",2009,0, 250,Research on Multi-Sensor Information Fusion for the Detection of Surface Defects in Copper Strip,"Addressing defect detection on the surface of copper strips, this paper first studies how to enhance system stability with a multi-sensor information fusion method. This method combines infrared, visible light, and laser sensors to deal with defect detection, utilizes fuzzy logic and neural networks to carry out sensor management, and uses wavelet transformation in image fusion. Experimental results show that this method can effectively detect surface defects in copper strips. Furthermore, it enhances the accuracy of recognition and classification, and makes the overall system more automatic and intelligent.",2009,0, 251,A Demonstration on Influencing Factors of FDI Location Choice: Based on Co-Integration and Error Correction Model,"Based on relevant data for the years 1983-2008, the paper utilizes a co-integration test and an error correction model to inspect the influencing factors of FDI location choice in China. The research comes to the conclusion that there is a long-run co-integration relation among them: fixed asset investment, R, former accumulated foreign capital amount, loan balance of financial corporations, and FDI have a positive correlation, while W, the city person, and FDI have a negative correlation. This paper can provide a policy reference.",2010,0, 252,Using a periodic square wave test signal to detect crosstalk faults,"A built-in self-test (BIST) scheme simplifies the detection of crosstalk faults in deep-submicron VLSI circuits in the boundary scan environment.
The scheme tests for crosstalk faults with a periodic square wave test signal under applied random patterns generated by a linear feedback shift register (LFSR), which is transconfigured from the embedded circuit's boundary scan cells. The scheme simplifies test generation and test application while obviating the fault occurrence timing issue. Experimental results show that coverage for the induced-glitch type of crosstalk fault for large benchmark circuits can easily exceed 90%.",2005,0, 253,Design and implementation of Weapons Fault Diagnosis Expert System Platform,"By analyzing the distinguishing features of current popular weapons fault diagnosis expert systems, and using component-based software design and complex knowledge representation methods, this paper proposes an overall design solution and describes the user interfaces of the Weapons Fault Diagnosis Expert System Platform. It studies and discusses complex knowledge representation for the reusable expert system. The implementation essentials of a variety of reasoning mechanisms are also discussed. It is proved that a specialized weapons fault diagnosis expert system can be generated using this expert system platform together with specialized knowledge of weapons faults.",2010,0, 254,Mechanism of Defects Formation in Ti1-xVxCoSb Semiconductor Solid Solution,"By means of combined X-ray diffraction, a differing ratio of lattice-site occupation by Co and (Ti,V) atoms in Ti1-xVxCoSb crystals was found. This is equivalent to doping the TiVCoSb semiconductor with two kinds of acceptor impurities. A breakdown of metallic conductivity in the n-type semiconductor with increasing donor impurity concentration was revealed, and it was explained by the simultaneous introduction of the acceptor impurity.",2007,0, 255,Byzantine fault tolerance can be fast,"Byzantine fault tolerance is important because it can be used to implement highly available systems that tolerate arbitrary behavior from faulty components. We present a detailed performance evaluation of BFT, a state-machine replication algorithm that tolerates Byzantine faults in asynchronous systems. Our results contradict the common belief that Byzantine fault tolerance is too slow to be used in practice: BFT performs well enough that it can be used to implement real systems. We implemented a replicated NFS file system using BFT that performs 2% faster to 24% slower than production implementations of the NFS protocol that are not fault-tolerant.",2001,0, 256,REIK: A Novel P2P Overlay Network with Byzantine Fault Tolerance,"Byzantine faults in a peer-to-peer (P2P) system result from adversarial and inconsistent peer behaviors. Faulty peers can disrupt the routing functions in the peer joining and lookup schemes. Byzantine attackers may collude with each other to paralyze the entire P2P network operations. We present a novel DHT-based overlay network (REIK) with Byzantine fault tolerance. REIK is based on a ring that embeds an inverse Kautz digraph IK(d, m) to enable multi-path P2P routing. The inverse Kautz network provides multiple entry points and multiple routes between node pairs. The REIK overlay is the first constant-degree, O(log n)-diameter DHT scheme with constant congestion and Byzantine fault tolerance.
For large d (≥ 2), the REIK overlays handle random and Byzantine faults effectively, far beyond the capability of Chord and CAN.",2007,0, 257,Error analysis and experimental tests of CATRASYS (Cassino Tracking System),"CATRASYS (Cassino Tracking System) is a low-cost, easily operated system for monitoring large displacements together with rotation angles of a suitable end-effector, which can be easily attached to any mechanical system. In this paper we present the basic performance of CATRASYS by using an error evaluation analysis and showing experimental tests that have been carried out at the Laboratory of Robotics and Mechatronics in Cassino with available robots.",2000,0, 258,A Block Device Driver for Parallel and Fault-tolerant Storage,"Cauchy-Reed/Solomon is an XOR-based erasure-tolerant coding scheme which is widely used for reliable distributed storage and fault-tolerant memory. A variety of different codes can be specified, depending on the number of storage resources operating in parallel and the desired strength of fault tolerance. First, we present an approach to parameterize the codes for different systems and requirements, such as the desired parallelism and reliability. Based on this parameterization, a Linux block device driver was developed, which is evaluated in this paper.",2010,0, 259,A Survey of Methods for Detection of Stator-Related Faults in Induction Machines,"As evidenced by industrial surveys, stator-related failures account for a large percentage of faults in induction machines. The objective of this paper is to provide a survey of existing techniques for detection of stator-related faults, which include stator winding turn faults, stator core faults, temperature monitoring and thermal protection, and stator winding insulation testing. The root causes of fault inception, available techniques for detection, and recommendations for further research are presented. Although the primary focus is online and sensorless methods that use machine voltages and currents to extract fault signatures, offline techniques such as partial discharge detection are also examined. Keywords: condition monitoring, fault diagnostics, insulation testing, interlaminar core faults, partial discharge (PD), temperature monitoring, turn faults.",2007,0,2090 260,Bright-field AAPSM conflict detection and correction,"As feature sizes shrink, it will be necessary to use AAPSM (alternating-aperture phase shift masking) to image critical features, especially on the polysilicon layer. This imposes additional constraints on the layouts beyond traditional design rules. Of particular note is the requirement that all critical features be flanked by opposite-phase shifters, while the shifters obey minimum width and spacing requirements. A layout is called phase-assignable if it satisfies this requirement. If a layout is not phase-assignable, the phase conflicts have to be removed to enable the use of AAPSM for the layout. Previous work has sought to detect a suitable set of phase conflicts to be removed, as well as correct them. The contributions of this paper are the following: (1) a new approach to detect a minimal set of phase conflicts (also referred to as AAPSM conflicts), which when corrected will produce a phase-assignable layout; (2) a novel layout modification scheme for correcting these AAPSM conflicts. The proposed approach for conflict detection shows significant improvements in the quality of results and runtime for real industrial circuits, when compared to previous methods.
To the best of our knowledge, this is the first time layout modification results have been presented for bright-field AAPSM. Our experiments show that the percentage area increase for making a layout phase-assignable ranges from 0.7% to 11.8%.",2005,0, 261,A technique for real-time correction of measurement instrument transducers frequency responses,"As is well known, transducers are the most important components of a measurement system: they convert the physical quantity to be measured into an electrical one, which is in turn processed by electronic instruments. It is also known that they are the major source of uncertainty, which should be as low as possible; obviously, their cost grows, sometimes exponentially, with their performance. Since digital measurement equipment, based on data acquisition systems and digital processors, nowadays represents the core of most electronic measuring instruments, it is possible to use it to compensate errors coming from transducers without increasing their costs. Thus, in this paper a digital technique for the correction of transducer errors is presented: it has a low computational burden, and therefore it can be implemented in real time even on low-cost digital processors.",2008,0, 262,Trace-based microarchitecture-level diagnosis of permanent hardware faults,"As devices continue to scale, future shipped hardware will likely fail due to in-the-field hardware faults. As traditional redundancy-based hardware reliability solutions that tackle these faults will be too expensive to be broadly deployable, recent research has focused on low-overhead reliability solutions. One approach is to employ low-overhead ('always-on') detection techniques that catch high-level symptoms and pay a higher overhead for (rarely invoked) diagnosis. This paper presents trace-based fault diagnosis, a diagnosis strategy that identifies permanent faults in microarchitectural units by analyzing the faulty core's instruction trace. Once a fault is detected, the faulty core is rolled back and re-executes from a previous checkpoint, generating a faulty instruction trace and recording the microarchitecture-level resource usage. A diagnosis process on another fault-free core then generates a fault-free trace which it compares with the faulty trace to identify the faulty unit. Our results show that this approach successfully diagnoses 98% of the faults studied and is a highly robust and flexible way of diagnosing permanent faults.",2008,0, 263,Secure and fault-tolerant voting in distributed systems,"Concerns about both security and fault-tolerance have had an important impact on the design and use of distributed information systems in the past. As such systems become more prevalent, as well as more pervasive, these concerns will become even more immediately relevant. We focus on integrating security and fault-tolerance into one general-purpose protocol for secure distributed voting. Distributed voting is a well-known fault-tolerance technique. For the most part, however, security has not been a concern in systems that used voting. More recently, several protocols have been proposed to shore up this lack. These protocols, however, have limitations which make them particularly unsuitable for many aerospace applications, because those applications require very flexible voting schemes (e.g., voting among real-world sensor data). We present a new, more general voting protocol that reduces the vulnerability of the voting process to both attacks and faults.
The algorithm is contrasted with the traditional 2-phase commit protocols typically used in distributed voting and with other proposed secure voting schemes. Our algorithm is applicable to exact and inexact voting in networks where atomic broadcast and predetermined message delays are present, such as local area networks. For wide area networks without these properties, we describe yet another approach that satisfies our goals of obtaining security and fault tolerance for a broad range of aerospace information systems.",2001,0, 264,Fault-tolerant aspects of MPC,"Concerns control in the event of equipment failure. Model predictive control (MPC) offers a promising basis for fault-tolerant control. Since MPC relies on an explicit internal model, one could deal with failures by updating the internal model and letting the online optimiser work out how to control the system in its new condition. This relies on several assumptions: that the nature of the fault can be located, and its effects modelled; that the model can be updated, essentially automatically; and that the control objectives can be left unaltered after the failure. The first two of these may be possible using fault detection and isolation (FDI), and the management of complex models. The technologies concerned seem to offer a very powerful combination.",2000,0, 265,Falcon: fault localization in concurrent programs,"Concurrency faults are difficult to find because they usually occur under specific thread interleavings. Fault-detection tools in this area find data-access patterns among thread interleavings, but they report benign patterns as well as actual faulty patterns. Traditional fault-localization techniques have been successful in identifying faults in sequential, deterministic programs, but they cannot detect faulty data-access patterns among threads. This paper presents a new dynamic fault-localization technique that can pinpoint faulty data-access patterns in multi-threaded concurrent programs. The technique monitors memory-access sequences among threads, detects data-access patterns associated with a program's pass/fail results, and reports data-access patterns with suspiciousness scores. The paper also describes a prototype implementation of the technique in Java, and the results of an empirical study we performed with the prototype on several Java benchmarks. The empirical study shows that the technique can effectively and efficiently localize the faults for our subjects.",2010,0, 266,Which concurrent error detection scheme to choose ?,"Concurrent error detection (CED) techniques (based on hardware duplication, parity codes, etc.) are widely used to enhance system dependability. All CED techniques introduce some form of redundancy. Redundant systems are subject to common-mode failures (CMFs). While most studies of CED techniques focus on area overhead, few analyze the CMF vulnerability of these techniques. In this paper, we present simulation results to quantitatively compare various CED schemes based on their area overhead and the protection (data integrity) they provide against multiple failures and CMFs.
Our results indicate that, for the simulated combinational logic circuits, although diverse duplex systems (with two different implementations of the same logic function) sometimes have marginally higher area overhead, they provide significant protection against multiple failures and CMFs compared to other CED techniques like parity prediction.",2000,0, 267,A study of the internal and external effects of concurrency bugs,"Concurrent programming is increasingly important for achieving performance gains in the multi-core era, but it is also a difficult and error-prone task. Concurrency bugs are particularly difficult to avoid and diagnose, and therefore, in order to improve methods for handling such bugs, we need a better understanding of their characteristics. In this paper we present a study of concurrency bugs in MySQL, a widely used database server. While previous studies of real-world concurrency bugs exist, they have centered their attention on the causes of these bugs. In this paper we provide a complementary focus on their effects, which is important for understanding how to detect or tolerate such bugs at run-time. Our study uncovered several interesting facts, such as the existence of a significant number of latent concurrency bugs, which silently corrupt data structures and are exposed to the user potentially much later. We also highlight several implications of our findings for the design of reliable concurrent systems.",2010,0, 268,Comparison of Morlet wavelet filter for defect diagnosis of bearings,"Condition monitoring helps to avoid unexpected failures of equipment. Rolling element bearings are critical components in rotating equipment. Vibration analysis is a common method used for defect detection and diagnosis of rotating equipment without affecting its operation. The measured vibration signal contains noise, modulations, and low-frequency components due to unbalance, misalignment, structural looseness, etc. Since the impulses due to bearing defects have low amplitude, it is difficult to detect and identify the location of a bearing defect from raw vibration signals, especially during the initial stages of defect development. Since there are more transfer segments, detection of inner race and rolling element defects is also challenging. A Morlet wavelet filter (MWF) can be used for denoising of vibration signals so that condition monitoring of bearings can be performed on the denoised signals. The parameters of the wavelet need to be optimized before denoising is performed. Two algorithms used for optimization of the MWF are compared in this paper. The first algorithm uses Shannon entropy and kurtosis for optimization of the shape factor and scale of the wavelet, respectively. The second algorithm uses kurtosis for optimization of the wavelet parameters. Experiments are performed to obtain vibration signals of bearings with a defect induced in the rolling element. MWFs optimized using the proposed methods were used to denoise the vibration signals. The filtered signals are compared and the performances of the algorithms are evaluated.",2010,0, 269,Correction of scattered radiation for cone-beam computed tomography at high X-ray energies,"Cone-beam computed tomography (CT) using X-ray tubes of high energy (450 keV) faces the problem of strong artifacts and a significant contrast degradation in reconstructed images. System components of cone-beam CT scanners operating at high X-ray energies have to be optimized to reduce the amount of scattered photons hitting the detector.
In addition, it is mandatory to apply scatter correction algorithms. A prototype of a cone-beam CT system equipped with a 450 kV industrial X-ray tube has been developed within the framework of a European research project. The influences of scattered radiation generated by the object have been extensively evaluated using Monte Carlo (MC) simulations. Furthermore, scattering reduction and correction methods have been developed. A key task was the implementation of a new hybrid method for the fast and accurate calculation of the scattering intensity distribution in X-ray projections for industrial cone-beam CT.",2008,0, 270,Fault diagnosis box based on Cloud Computing,"Considering the development of the Internet and the diversity and complexity of fault diagnosis, combined with practice, we put forward a fault diagnosis box. The efficiency of fault diagnosis is significantly improved by working over the Internet. The user inputs requirements at the terminal by way of the diagnosis box, and the diagnosis platform then analyzes the user's needs. The service that satisfies the need is based on Cloud Computing, which allows many expensive hardware resources to be shared.",2010,0, 271,The selection and creation of the rules in rules-based optical proximity correction,"Considering the efficiency and accuracy of rules-based OPC applied to recent large-scale layouts, we first point out the importance of the selection and creation of rules in rules-based OPC. Our discussion addresses the crucial factors in selecting and creating rules, as well as how we select and create a more concise and practical rules-base. Based on our ideas, we suggest four primary rules and show some resulting rule data in a table. The automatic construction of the rules-base, called OPCL, is an important part of the whole rules-based OPC software.",2001,0, 272,A Novel SVC VoD System with Rate Adaptation and Error Concealment over GPRS/EDGE Network,"Considering the high packet losses and low, varying bandwidth of GPRS/EDGE networks and the limited computation power of handheld devices, we present and implement a novel SVC video-on-demand system for hand-held devices over GPRS/EDGE networks. For the purpose of handling varying bitrates, we propose a priority-based layer switching (PLS) adaptation scheme for SVC streams, which not only performs online 3-D adaptation more quickly with a simple parser, but also optimizes the video quality in an R-D sense under the bandwidth constraint. To resist packet losses, a motion-detection based adaptive error concealment (MDA) algorithm is proposed, which can achieve a PSNR gain of up to 3 dB compared to existing methods while maintaining low complexity. Moreover, our proposed system was implemented and tested over the existing GPRS/EDGE network deployed in China. The test results demonstrate that the proposed system and schemes have performance advantages in terms of quicker data rate adaptation, higher PSNR, and lower overhead.",2008,0, 273,Fine-Grained Fault Tolerance for Process Variation-Aware Caches,"Continuous scaling of the CMOS fabrication process makes circuits more vulnerable to process variations, which result in variable delay, malfunctioning, and/or leaky circuits. Caches are one of the biggest victims of process variations due to their large sizes and minimal cell features. To mitigate the impacts of process variations on caches, we propose to localize the effects of process variations at the word level, not at the conventional cache set, cache way, or cache line level.
Faulty words are disabled or shut down completely, and accesses to those words are bypassed to a small set of word-length buffers. This technique is shown to be effective in reducing the performance penalty due to process variations and in increasing the parametric yield up to 90% when subjected to the performance constraints.",2010,0, 274,A fault-tolerant P-Q decoupled control scheme for static synchronous series compensator,"Control of nonlinear devices in power systems relies on the availability and the quality of sensor measurements. Measurements can be corrupted or interrupted due to sensor failure, broken or bad connections, bad communication, or malfunction of some hardware or software (referred to as missing sensor measurements in this paper). This paper proposes a fault-tolerant control scheme (FTCS) for a static synchronous series compensator (SSSC). This FTCS consists of a sensor evaluation and (missing sensor) restoration scheme (SERS) cascaded with a P-Q decoupled control scheme (PQDC). It is able to provide effective control to the SSSC when single or multiple crucial sensor measurements are unavailable. Simulation studies are carried out to examine the validity of the proposed FTCS. During the simulations, single and multiple phase current sensors are assumed to be missing, respectively. Results show that the SERS restores the missing data correctly during steady and transient states, including small and large disturbances, and unbalanced three-phase operation. Thus, the FTCS continuously provides effective control to the SSSC with and without missing sensor measurements.",2006,0, 275,Missing-Sensor-Fault-Tolerant Control for SSSC FACTS Device With Real-Time Implementation,"Control of power systems relies on the availability and quality of sensor measurements. However, measurements are inevitably subjected to faults caused by sensor failure, broken or bad connections, bad communication, or malfunction of some hardware or software. These faults, in turn, may cause the failure of power system controllers and, consequently, severe contingencies in the power system. To avoid such contingencies, this paper presents a sensor evaluation and (missing sensor) restoration scheme (SERS) using auto-associative neural networks (autoencoders) and particle swarm optimization. Based on the SERS, a missing-sensor-fault-tolerant control is developed for controlling a static synchronous series compensator (SSSC) connected to a power network. This missing-sensor fault-tolerant control (MSFTC) improves the reliability, maintainability, and survivability of the SSSC and the power network. The effectiveness of the MSFTC is demonstrated by a real-time implementation of an SSSC connected to the IEEE 10-machine 39-bus system on a Real Time Digital Simulator and TMS320C6701 digital signal processor platform. The proposed fault-tolerant control can be readily applied to many existing controllers in power systems.",2009,0,6385 276,A fault location algorithm for urban distribution network with DG,"A conventional power distribution system is radial in nature, characterized by a single source feeding a network of downstream feeders. Distribution automation fault location, primarily considering fault current amplitude signals, has traditionally been designed assuming the system to be radial. However, a system with distributed generation (DG) may no longer be radial, which means more fault direction signals are needed for fault location.
The paper suggests a novel algorithm based on the fault current amplitude difference between zones. The algorithm realizes direction detection using the difference in short-circuit capacity between the system source and the DG. The PSS/E simulation results indicate that the method can locate the fault zone correctly in an urban distribution network with DG.",2008,0, 277,Improving error resilience of scalable H.264 (SVC) via drift control,"Common error concealment schemes mitigate errors only in frames in which losses occur, even though errors propagate to future frames. Drift control is generally challenging due to the lack of a reliable basis for determining what needs to be corrected and how. In this paper, we show that for scalable or multi-layer video, an available base layer can serve as such a basis, allowing continuous error drift checking and correction of higher layers even when the base layer is of much lower spatial resolution. The associated algorithm is low-complexity, incurs no additional bit cost, and experiments using the SVC reference software show a PSNR improvement of up to 5 dB over concealment methods without drift control.",2010,0, 278,Comparison and application of different VHDL-based fault injection techniques,"Compares different VHDL-based fault injection techniques: simulator commands, saboteurs, and mutants for the validation of fault-tolerant systems. Some extensions and implementation designs of these techniques have been introduced. Also, a wide set of non-usual fault models have been implemented. As an application, a fault-tolerant microcomputer system has been validated. Faults have been injected using an injection tool developed by the GSTF. We have injected both transient and permanent faults on the system model, using two different workloads. We have studied the pathology of the propagated errors, measured their latencies, and calculated both detection and recovery coverages. Preliminary results show that coverages for transient faults can be obtained quite accurately with any of the three techniques. This enables the use of different abstraction-level models for the same system. We have also verified significant differences in implementation and simulation cost between the studied injection techniques.",2001,0, 279,Silk Texture Defect Recognition System Using Computer Vision and Artificial Neural Networks,"The competitiveness of textile industries depends on the quality control of production. In order to minimize production cost, effort is directed towards reducing defectiveness and the time spent on production operations. High accuracy in silk texture defect identification should be maintained so as to eliminate any abnormality in the silk texture that hinders its acceptability by the consumer. In this paper, silk texture defect identification is achieved by implementing an artificial neural network (ANN) technique. A methodology for feature selection that leads to high recognition rates and to simpler classification system architectures is presented.",2009,0, 280,"Using composition to design secure, fault-tolerant systems","Complex systems must be analyzed in smaller pieces. Analysis must support both bottom-up (composition) and top-down (refinement) development, and it must support the consideration of several critical properties, e.g., functional correctness, fault tolerance, and security, as appropriate. We describe a mathematical framework for performing composition and refinement analysis and discuss some lessons learned from its application.
The framework is written and verified in PVS.",2000,0, 281,Error propagation in the reliability analysis of component based systems,"Component-based development is gaining popularity in the software engineering community. The reliability of components affects the reliability of the system. Different models and theories have been developed to estimate system reliability given information about the system architecture and the quality of the components. Almost always in these models, a key attribute of component-based systems, the error propagation between the components, is overlooked and not taken into account in the reliability prediction. We extend our previous work on Bayesian reliability prediction of component-based systems by introducing the error propagation probability into the model. We demonstrate the impact of the error propagation in a case study of an automated personnel access control system. We conclude that error propagation may have a significant impact on the system reliability prediction and, therefore, future architecture-based models should not ignore it.",2005,0, 282,Testing Approach of Component Security Based on Fault Injection,"Component-Based Software Engineering (CBSE) is currently a research focus in the field of software engineering. However, problems with the reliability and security of components have not yet been resolved, which worries component developers and users. Testing software components is an important approach to guaranteeing and enhancing their reliability and security. This paper proposes a testing approach of component security based on fault injection (TAFI), and then defines and discusses the requirement specification of component security and the fault injection model. In addition, 31 software components are analyzed using our approach based on the fault injection model. The case study shows that our approach is effective and operable.",2007,0, 283,A Flexible Fault-Tolerance Mechanism for the Integrade Grid Middleware,"Computer grids have attracted great attention from both the academic and enterprise communities, becoming an attractive alternative for the execution of applications that demand huge computational power and allowing the integration of computational resources spread across different administrative domains. The dynamic nature of the grid infrastructure, its high scalability, and its great heterogeneity exacerbate the likelihood of error occurrence, imposing fault tolerance as a major requirement for grid middleware. This paper describes a flexible fault-tolerance mechanism implemented on the Integrade grid middleware that allows the customization of several fault tolerance parameters and the combination of different fault tolerance techniques. This paper also presents several experiments that measure the benefits of our approach, considering several different execution environment scenarios.",2007,0, 284,Characterizing Microarchitecture Soft Error Vulnerability Phase Behavior,"Computer systems increasingly depend on exploiting program dynamic behavior to optimize performance, power, and reliability. Prior studies have shown that program execution exhibits phase behavior in both the performance and power domains. Reliability-oriented program phase behavior, however, remains largely unexplored.
As semiconductor transient faults (soft errors) emerge as a critical challenge to reliable system design, characterizing program phase behavior from a reliability perspective is crucial in order to apply dynamic fault-tolerant mechanisms and to optimize performance/reliability trade-offs. In this paper, we compute run-time program vulnerability to soft errors on four microarchitecture structures (i.e., instruction window, reorder buffer, function units, and wakeup table) in a high-performance out-of-order execution superscalar processor. Experimental results on the SPEC2000 benchmarks show a considerable amount of time-varying behavior in reliability measurements. Our study shows that a single performance metric, such as IPC, cache misses, or branch mispredictions, is not a good indicator of program vulnerability. The vulnerabilities of the studied microarchitecture structures are then correlated with program code structure and run-time events to identify vulnerability phase behavior. We observed that both program code structure and run-time events appear promising for classifying program reliability phase behavior. Overall, performance-counter-based schemes achieved an average Coefficient of Variation (COV) of 3.5%, 4.5%, 4.3%, and 5.7% on the instruction queue, reorder buffer, function units, and the wakeup table, while basic block vectors offer COVs of 4.9%, 5.8%, 5.4%, and 6% on the four studied microarchitecture structures, respectively. We found that, in general, tracking performance metrics performs better than tracking control flow in identifying the reliability phase behavior of applications. To our knowledge, this paper is the first to characterize program reliability phase behavior at the microarchitecture level.",2006,0, 285,Optimal cost-effective design of parallel systems subject to imperfect fault-coverage,"Computer-based systems intended for critical applications are usually designed with sufficient redundancy to be tolerant of errors that may occur. However, under imperfect fault-coverage conditions (i.e., when the system cannot adequately detect, locate, and recover from faults and errors in the system), system failures can result even when adequate redundancy is in place. Because the parallel architecture is a well-known and powerful architecture for improving the reliability of fault-tolerant systems, this paper presents cost-effective design policies for parallel systems subject to imperfect fault-coverage. The policies are designed by considering (1) the cost of components, (2) the failure cost of the system, (3) common-cause failures, and (4) the performance levels of the system. Three kinds of cost functions are formulated, considering that the total average cost of the system is based on: (1) system unreliability, (2) failure-time of the system, and (3) total processor-hours. It is shown that the MTTF (mean time to failure) of the system decreases when increasing the spares beyond a certain limit. Therefore, this paper also presents optimal design policies to maximize the MTTF of these systems. The results of this paper can also be applied to gracefully degradable systems.",2003,0, 286,Overview of an automatic underground distribution fault location system,"Con Edison uses power quality monitors to locate faults on its primary distribution underground network. The power quality monitors serve as the voltage and current sensors in an automatic fault location system.
Fault measurements captured by the meters are downloaded automatically, integrated into a relational database, and processed for impedance calculations. The impedance calculations, combined with up-to-date distribution circuit models and geographic information system data, are used to build estimated fault location tables and map displays. The systems are integrated on Con Edison's intranet and used in real time by numerous groups within Con Edison, including operations, system protection, and power quality. The system can detect and locate both single-phase faults and multi-phase faults. It sends alerts when subcycle faults and magnetizing inrush current transients are detected. For single-phase faults, the system's accuracy regularly exceeds 80% success in estimating the fault location within 10% of the total number of the feeder structures. In 2008, the system was expanded to incorporate data from feeder relays, and in 2009 it may be expanded to include data from transmission digital fault recorders. This document presents an overview of some of the parameters and practices for finding faults in place every day at Con Edison.",2009,0, 287,Real-time correction of distortion image based on FPGA,"Correcting infrared camera distortion is necessary in target tracking and object recognition systems. Existing FPGA algorithms did not sufficiently utilize the advantage of parallel processing, with the result that a great deal of system resources were consumed and the running speed was slowed down. This paper analyzes the existing problems, such as the serial structure of other algorithms, proposes a new parallel algorithm, and realizes it with minimal resources. Experiments carried out on the Virtex-5 chip produced by Xilinx show that the proposed algorithm has good real-time performance, uses fewer resources than the previous structure, and realizes online correction of distortion on an FPGA.",2010,0, 288,A class of random multiple bits in a byte error correcting and single byte error detecting (Stb/EC-SbED) codes,"Correcting multiple random bit errors that corrupt a single DRAM chip becomes very important in certain applications, such as semiconductor memories used in computer and communication systems, mobile systems, aircraft, and satellites. This is because, in these applications, the presence of strong electromagnetic waves in the environment or the bombardment of an energetic particle on a DRAM chip is highly likely to upset more than just one bit stored in that chip. On the other hand, entire chip failures are often presumed to be less likely events and, in most applications, detection of errors caused by single chip failures is preferred to correction due to check bit length considerations. Under this situation, codes capable of correcting random multiple bit errors that are confined to a single chip output and simultaneously detecting errors caused by single chip failures are attractive for application in high-speed memory systems. This paper proposes a class of codes called Single t/b-error Correcting-Single b-bit byte Error Detecting (Stb/EC-SbED) codes, which have the capability of correcting random t-bit errors occurring within a single b-bit byte and simultaneously indicating single b-bit byte errors. For the practical case where the chip data output is 8 bits, i.e., b = 8, the S38/EC-S8ED code proposed in this paper, for example, requires only 12 check bits at an information length of 64 bits.
Furthermore, this S38/EC-S8ED code is capable of correcting errors caused by single subarray data faults, i.e., single 4-bit byte errors, as well. This paper also shows that perfect S(b-1)b/EC-SbED codes, i.e., perfect Stb/EC-SbED codes for the case where t = b - 1, do exist, and provides a theorem to construct these codes.",2003,0, 289,Error Correction of Noisy Interleaved Block Ciphers,"Correction of a noisy cipher is a challenging task. Classical error detection and correction methods are not suitable for encrypted data. Previous work has been done on correcting noisy block ciphers using cipher and plaintext characteristics. For a certain amount of errors, when error correction using cipher characteristics fails, the language properties of the plaintext data were used instead to eliminate noise. However, this method requires an iterative process, and cases may occur in which a unique solution cannot be achieved. In this paper, error detection and correction are performed at the receiver end, without any changes to the encryption algorithm, using only cipher characteristics. Interleaving the ciphertext before transmission and deinterleaving after reception causes bursts of channel errors to be spread out in time and, thus, to fall within the correction capability of the cipher-characteristics-only approach.",2010,0, 290,Soft errors: is the concern for soft-errors overblown?,"Cosmic ray particles have the ability to either toggle the state of memory elements or create unwanted glitches in combinational logic that may be latched by memory elements. As supply voltages reduce and feature sizes become smaller in future technologies, soft error tolerance is considered a significant challenge for designing future electronic systems. In some cases, the impact of soft errors is easily overblown. The real challenge at hand is to consider the kind of soft error protection and recovery mechanisms that can be provided while meeting other system parameters such as power consumption, performance, area usage, and the criticality of a failure. Another challenge is to understand the interactions of other optimizations targeted at other constraints, such as performance or power consumption, with soft error rates. It is important for system designers to perform soft error analysis to avoid repetitions of widely publicised soft error failures.",2005,0, 291,DMT and DT2: two fault-tolerant architectures developed by CNES for COTS-based spacecraft supercomputers,"COTS (commercial off-the-shelf) electronic components are attractive for space applications. However, computer designers need to solve a main problem as regards their SEE (single event effect) sensitivity. The purpose of fault tolerance studies conducted at CNES (the French Space Agency) is to prepare the space community for the significant evolution linked to the usage of COTS components. CNES has patented two fault-tolerant architectures with low recurring costs, mass, and power consumption, as compared to conventional architectures such as the TMR (triple modular redundancy) one. The former, referred to as DMT, is based on time redundancy and minimises recurring costs. It is mainly intended for, but not limited to, scientific missions. The latter, referred to as DT2, is based on a structural duplex architecture with minimum duplication and is suited for high-end application missions.",2006,0, 292,Simulation of Attacks on Network-based Error Detection,"CRC and checksum are two error-detecting mechanisms widely used in computer networks.
A novel evaluating simulation model based on attacking these two codes is proposed, and the corresponding evaluation methods are discussed. In this model, the size and content of any data packet are produced by a random number generator, and changes to the packet's content are implemented by simulating natural and manual attacks. The results show that these two error-detecting codes have strong ability against natural attacks, but no ability against manual attacks, which facilitates the destruction of data authentication and data-accessing availability.",2007,0, 293,Crisp--A Fault Localization Tool for Java Programs,"Crisp is an Eclipse plug-in tool for constructing intermediate versions of a Java program that is being edited. After a long editing session, a programmer will run regression tests to make sure she has not invalidated previously tested functionality. If a test fails unexpectedly, Crisp allows the programmer to select parts of the edit that affected the failing test and to add them to the original program, creating an intermediate version guaranteed to compile. Then the programmer can re-execute the test in order to locate the exact reasons for the failure by concentrating on those affecting changes that were applied. Using Crisp, a programmer can iteratively select, apply, and undo individual (or sets of) affecting changes and thus effectively find a small set of failure-inducing changes. Crisp is an extension to our change impact analysis tool, Chianti [6].",2007,0, 294,Testing for interconnect crosstalk defects using on-chip embedded processor cores,"Crosstalk effects degrade the integrity of signals traveling on long interconnects and must be addressed during production testing. External testing for crosstalk is expensive due to the need for high-speed testers. Built-in self-test, while eliminating the need for a high-speed tester, may lead to excessive test overhead as well as overly aggressive testing. To address this problem, we propose a new software-based self-test methodology for system-on-chip (SoC) devices based on embedded processors. It enables an on-chip embedded processor core to test for crosstalk in system-level interconnects by executing a self-test program in the normal operational mode of the SoC. We have demonstrated the feasibility of this method by applying it to test the interconnects of a processor-memory system. The defect coverage was evaluated using a system-level crosstalk defect simulation method.",2001,0, 295,Energy-Efficient Fault-Tolerant Mechanism for Clustered Wireless Sensor Networks,"Clustering is an effective topology control and communication protocol in wireless sensor networks (sensornets). However, harsh deployment environments, the serious resource limitations of nodes, and the unbalanced workload among nodes make clustered sensornets vulnerable to communication faults and errors, which undermine the usability of the network. Thus, mechanisms to improve robustness and fault tolerance are highly required in real applications of sensornets. In this paper, a distributed fault-tolerant mechanism called CMATO (Cluster-Member-based fAult-TOlerant mechanism) for sensornets is proposed. It views the cluster as an individual whole and utilizes the mutual monitoring of nodes within the cluster to detect and recover from faults in a quick and energy-efficient way. CMATO only needs local knowledge of the network, relaxing the assumptions of pre-deployed cluster heads and k-dominating set (k>1) coverage.
This advantage makes our mechanism flexible enough to be incorporated into various existing clustering schemes in sensornets. Furthermore, CMATO is able to deal with failures of multiple cluster heads, so it effectively recovers nodes from the failures of multiple cluster heads and the failures of links within the cluster, yielding a much more robust and fault-tolerant sensornet. The simulation results show that our mechanism outperforms the existing cluster-head-based fault-tolerant mechanism in both fault coverage and energy consumption.",2007,0, 296,Research on a Novel Method for Measuring Volumetric Error of Coordinate Measuring Machine,"CMM (Coordinate Measuring Machine) is used as an accurate measuring instrument in the manufacturing and design of products. Highly accurate CMMs are required by the development of super-finishing, micro-machinery, and micro-electro-mechanical systems (MEMS). The precision of a CMM is influenced by its volumetric error and dynamic error. The volumetric error is the major component during slow probing, but the dynamic error cannot be omitted during fast probing. Compensating the volumetric error is an effective approach to improving the precision of a CMM. The volumetric error in the working space of the CMM must be measured and the error model must be set up before compensating. In this paper, a novel method to measure 5 volumetric errors of a guide way at the same time by a 3-beam laser interferometer is proposed. The error compensation model is validated by experiment. The method can also be used to measure the dynamic error of CMMs and CNC machines in real time. The research can serve as a reference for designing highly accurate CMMs and precise CNC machines.",2010,0, 297,Detouring: Translating software to circumvent hard faults in simple cores,"CMOS technology trends are leading to an increasing incidence of hard (permanent) faults in processors. These faults may be introduced at fabrication or occur in the field. Whereas high-performance processor cores have enough redundancy to tolerate many of these faults, the simple, low-power cores that are attractive for multicore chips do not. We propose Detouring, a software-based scheme for tolerating hard faults in simple cores. The key idea is to automatically modify software such that its functionality is unchanged but it does not use any of the faulty hardware. Our initial implementation of Detouring tolerates hard faults in several hardware components, including the instruction cache, registers, functional units, and the operand bypass network. Detouring has no hardware cost and no performance overhead for fault-free cores.",2008,0, 298,Detection or isolation of defects? An experimental comparison of unit testing and code inspection,"Code inspections and white-box testing have both been used for unit testing. One is a static analysis technique, the other, a dynamic one, since it is based on executing test cases. Naturally, the question arises whether one is superior to the other, or whether either technique is better suited to detect or isolate certain types of defects. We investigated this question with an experiment with a focus on detection of the defects (failures) and isolation of the underlying sources of the defects (faults). The results indicate that there exist significant differences for some of the effects of using code inspection versus testing. White-box testing is more effective, i.e., it detects significantly more defects, while inspection isolates the underlying source of a larger share of the defects detected.
Testers, however, spend significantly more time, hence the difference in efficiency is smaller and is not statistically significant. The two techniques are also shown to detect and identify different defects, hence motivating the use of a combination of methods.",2003,0, 299,Estimating the number of faults remaining in software code documents inspected with iterative code reviews,"Code review is considered an efficient method for detecting faults in a software code document. The number of faults not detected by the review should be small. Current methods for estimating this number assume reviews with several inspectors, but there are many cases where it is practical to employ only two inspectors. Sufficiently accurate estimates may be obtained by two inspectors employing an iterative code review (ICR) process. This paper introduces a new estimator for the number of undetected faults in an ICR process, so the process may be stopped when a satisfactory result is estimated. This technique employs the Kantorowitz estimator for N-fold inspections, where the N teams are replaced by N reviews. The estimator was tested for three years in an industrial project, where it produced satisfactory results. More experiments are needed in order to fully evaluate the approach.",2005,0, 300,Coefficient-based test of parametric faults in analog circuits,"Coefficient-based test (CBT) is introduced for detecting parametric faults in analog circuits. The method uses pseudo Monte Carlo simulation and system-identification tools to determine whether a given circuit under test (CUT) is faulty. From the circuit description and component tolerance specifications, the tolerance boxes of all circuit transfer-function coefficients are precomputed and used during the test. Using input/output signal information, the test procedure attempts to extract the CUT's transfer function. When this extraction is complete, the circuit is declared faulty if one or more of the measured transfer-function coefficients are found to be outside their tolerance boxes.",2006,0, 301,Advanced fault-tolerance techniques for a color digital camera-on-a-chip,"Color digital imagers contain red, green and blue subpixels within each color pixel. Defects that develop either at fabrication time or due to environmentally induced errors over time can cause a single color subpixel (e.g., R) to fail, while leaving the remaining colors intact. This paper investigates seven software correction algorithms that interpolate the color of a pixel based on its nearest neighbors. Using several measurements of color error, all seven methods were investigated for a large number of digital images. Interpolations using only information from the single failed color (e.g., R) in the neighbors gave the poorest results. Those using all color measurements and a quadratic interpolation formula, combined with the remaining subpixel colors (e.g., G and B), produced significantly better results. A formula developed using the CIE color coordinates of tristimulus values (X, Y, Z) yielded the best results.",2001,0, 302,Research on fault diagnosis of HT-60 drilling rig based on neural network expert system,"Combining the characteristics of drilling rig faults, a fault diagnosis expert system based on an artificial neural network is proposed. The fault diagnosis system is designed for the HT-60 drilling rig; it acquires knowledge by neural network and diagnoses by expert system.
The system, which is self-learning and self-adaptive, can acquire knowledge from existing data in order to expand its knowledge base, making up for the inadequacies of traditional expert systems. Through analysis of a variety of common faults and their solutions, a software interface is built using the Force Control software to perform fault diagnosis based on the artificial neural network expert system.",2010,0, 303,Semantic Impact and Faults in Source Code Changes: An Empirical Study,"Changes to source code have become a critical factor in fault predictions. Text or syntactic approaches have been widely used. Textual analysis focuses on changed text fragments while syntactic analysis focuses on changed syntactic entities. Although both of them have demonstrated their advantages in experimental results, they only study code fragments modified during changes. Because of semantic dependencies within programs, we believe that code fragments impacted by changes are also helpful. Given a source code change, we identify its impact by program slicing along the variable def-use chains. To evaluate the effectiveness of change impacts in fault detection and prediction, we compare impacted code with changed code according to size and fault density. Our experiment on the change history of a successful industrial project shows that: fault density in changed and impacted fragments is higher than in other areas; for large changes, their impacts have higher fault density than the changes themselves; interferences within change impact contribute to the high fault density in large changes. Our study suggests that, like change itself, change impact is also a high-priority indicator in fault prediction, especially for changes of large scale.",2009,0, 304,Automatic Detection of In-field Defect Growth in Image Sensors,"Characterization of in-field defect growth with time in digital image sensors is important for measuring the quality of sensors as they age. While more defects were found in cameras exposed to high cosmic ray radiation environments, comparing the collective growth rate of different sensor types has shown that CCD imagers develop twice as many defects as APS imagers, indicating that CCD imagers may be more sensitive to radiation. The defect growth of individual imagers can be estimated by analyzing historical image sets captured by individual cameras. This paper presents a defect tracing algorithm, which determines the presence or absence of defects by accumulating Bayesian statistics collected over a sequence of images. Recognizing the complexity of image scenes, camera settings, and local clustering of defects in color images (due to demosaicing), refinements of the algorithm have been explored and the resulting detection accuracy has increased significantly. In-field test results from 3 imagers with a total of 26 defects have shown that 96% of the defects' dates were identified with less than 10 days difference compared to visual inspection. In addition to our continuous study of in-field defects in high-end digital SLRs, this paper presents a preliminary study of 10 cellphone cameras. Our test results address the comparison of defect types, distribution and growth found in low-end and high-end cameras with significantly different pixel sizes.",2008,0, 305,Adaptive Correction of Errors from Recognized Chinese Ink Texts Based on Context,"Chinese ink texts cannot be converted into encoded texts until their writing characters are correctly recognized.
There are many errors in recognized Chinese ink texts, even when language models are incorporated, because Chinese ink texts are free-form and mixed with other languages, and because Chinese characters form a large set and have complex structures. Recognized writing characters may contain wrong language types, symbols, words, and word pairs. A direct selection and input approach based on context is proposed to adaptively correct these errors. Each writing character's recognition candidates are fully visualized. Users can naturally and easily correct recognition errors with direct operations. Users' intentions are identified from their gestures and the objects invoked by them. Recognized Chinese ink texts can provide multiple levels of information after correction. We have conducted experiments using real-life Chinese ink texts and compared the proposed approach with others. Experimental results demonstrate that the proposed approach is effective and robust.",2009,0, 306,An OpenMP Approach to Modeling Dynamic Earthquake Rupture Along Geometrically Complex Faults on CMP Systems,"Chip multiprocessors (CMP) are widely used for high performance computing and are being configured in a hierarchical manner to compose a CMP compute node in a parallel system. OpenMP parallel programming within such a CMP node can take advantage of the globally shared address space and on-chip high inter-core bandwidth and low inter-core latency. In this paper, we use OpenMP to parallelize a sequential earthquake simulation code for modeling spontaneous dynamic earthquake rupture along geometrically complex faults on two CMP systems, an IBM POWER5+ system and a SUN Opteron server. The experimental results indicate that the OpenMP implementation produces accurate output results and good scalability on the two CMP systems. Further, we apply optimization techniques such as large pages and processor binding to the OpenMP implementation to achieve up to 7.05% performance improvement on the CMP systems without any code modification.",2009,0, 307,Calculating the fault coverage for dual neighboring faults using single stuck-at fault patterns,"Chip structures shrink rapidly, but the particles causing defects do not shrink to the same degree; thus multiple faults are more and more frequent in today's deep sub-micron chips. Scan test patterns are usually calculated to detect single stuck-at faults, and they detect also 'nearly all' multiple faults if at least one of the faults is detectable as a single fault. 'Nearly all' implies that there are exceptions, and indeed sometimes two single stuck-at faults can only be detected when occurring alone, but not if they occur together. This phenomenon is called fault masking and has been extensively discussed in the literature, but only under the assumption that each pair of possible faults has the same likelihood of occurring. In reality, however, pairs of neighboring faults have a much higher likelihood than pairs of distant faults. Using layout and pattern data of a commercial circuit, the extent of fault masking is calculated both for neighboring faults and for distant faults.",2008,0, 308,Transient-fault recovery for chip multiprocessors,"Chip-level redundant threading with recovery (CRTR) for chip multiprocessors extends previous transient-fault detection schemes to provide fault recovery. To hide interprocessor latency, CRTR uses a long slack enabled by asymmetric commit and uses the trailing thread state for recovery.
CRTR increases bandwidth supply by pipelining communication paths and reduces bandwidth demand by extending the dependence-based checking elision.",2003,0,1587 309,FTCloud: A Component Ranking Framework for Fault-Tolerant Cloud Applications,"Cloud computing is becoming a mainstream aspect of information technology. Cloud applications are usually large-scale, complex, and include many distributed components. Providing highly reliable cloud applications is a challenging and critical research problem. To attack this challenge, we propose FTCloud, a component-ranking-based framework for building fault-tolerant cloud applications. FTCloud employs the component invocation structures and the invocation frequencies to identify the significant components in a cloud application. An algorithm is proposed to automatically determine the optimal fault tolerance strategy for these significant components. The experimental results show that by tolerating faults in a small part of the most significant components, the reliability of a cloud application can be greatly improved.",2010,0, 310,Exploring machine learning techniques for fault localization,"Debugging is the most important task related to the testing activity. Its goal is to locate and remove a fault after a failure has occurred during testing. However, it is not a trivial task and generally consumes effort and time. Debugging techniques generally use testing information, but they are usually very specific to certain domains, languages and development paradigms. Because of this, a neural network (NN) approach has been investigated for this goal. It is independent of the context and presented promising results for procedural code. However, it was not validated in the context of object-oriented (OO) applications. In addition, the use of other machine learning techniques is also interesting, because they can be more efficient. With this in mind, the present work adapts the NN approach to the OO context and also explores the use of support vector machines (SVMs). Results from the use of both techniques are presented and analysed. They show that their use contributes to easing the fault localization task.",2009,0, 311,Accurate microarchitecture-level fault modeling for studying hardware faults,"Decreasing hardware reliability is expected to impede the exploitation of increasing integration projected by Moore's Law. There is much ongoing research on efficient fault tolerance mechanisms across all levels of the system stack, from the device level to the system level. High-level fault tolerance solutions, such as at the microarchitecture and system levels, are commonly evaluated using statistical fault injections with microarchitecture-level fault models. Since hardware faults actually manifest at a much lower level, it is unclear if such high level fault models are acceptably accurate. On the other hand, lower level models, such as at the gate level, may be more accurate, but their increased simulation times make it hard to track the system-level propagation of faults. Thus, an evaluation of high-level reliability solutions entails the classical tradeoff between speed and accuracy. This paper seeks to quantify and alleviate this tradeoff. We make the following contributions: (1) We introduce SWAT-Sim, a novel fault injection infrastructure that uses hierarchical simulation to study the system-level manifestations of permanent (and transient) gate-level faults.
For our experiments, SWAT-Sim incurs a small average performance overhead of under 3x, for the components we simulate, when compared to pure microarchitectural simulations. (2) We study the system-level manifestations of faults injected under different microarchitecture-level and gate-level fault models and identify the reasons for the inability of microarchitecture-level faults to model gate-level faults in general. (3) Based on our analysis, we derive two probabilistic microarchitecture-level fault models to mimic gate-level stuck-at and delay faults. Our results show that these models are, in general, inaccurate as they do not capture the complex manifestation of gate-level faults. The inaccuracies in existing models and the lack of more accurate microarchitecture-level models motivate using infrastructures similar to SWAT-Sim to faithfully model the microarchitecture-level effects of gate-level faults.",2009,0, 312,Automatic defect classification of TFT-LCD panels using machine learning,"Defect classification in the liquid crystal display (LCD) manufacturing process is one of the most crucial issues for quality control. To address this issue, an automatic defect classification (ADC) method based on machine learning is proposed. Key features of LCD micro-defects are defined and extracted, and a support vector machine is used for classification. The classification performance is presented through several experimental results.",2009,0, 313,Evaluating the accuracy of defect estimation models based on inspection data from two inspection cycles,"Defect content estimation techniques (DCETs), based on defect data from inspection, estimate the total number of defects in a document to evaluate the development process. For inspections that yield few data points, DCETs reportedly underestimate the number of defects. If there is a second inspection cycle, the additional defect data is expected to increase estimation accuracy. In this paper we consider 3 scenarios to combine data sets from the inspection-reinspection process. We evaluate these approaches with data from an experiment in a university environment where 31 teams inspected and reinspected a software requirements document. Main findings of the experiment were that reinspection data improved estimation accuracy. With the best combination approach, all examined estimators yielded on average estimates within 20% of the true value, and all estimates stayed within 40% of the true value.",2001,0, 314,A die-based defect-limited yield methodology for line control,"Defect monitoring and control in the semiconductor fab has been well documented over the years. The methodologies typically described in the literature involve controls through full-wafer defect counts, or defect densities, with attempts to correlate defects to electrical fail modes in order to predict the yield impact. These wafer-based methodologies are not adequate for determining the impact of defects on yield. Most notably, severe complications arise when applying wafer-based methods on wafers with mixed distributions (mix of random and clustered defects). This paper describes the proper statistical treatment of defect data to estimate yield impact for mixed-distribution wafer maps. This die-based, defect-limited yield (DLY) methodology properly addresses random and clustered defects, and applies a die-based multi-stage sampling method to select defects for review. The estimated yield impact of defects on the die can then be determined.
Additionally, a die normalization technique is described that permits application of this die-based methodology on multiple products with different die sizes.",2010,0, 315,Tracking concept drift of software projects using defect prediction quality,"Defect prediction is an important task in the mining of software repositories, but the quality of predictions varies strongly within and across software projects. In this paper we investigate why the prediction quality fluctuates so strongly due to the changing nature of the bug (or defect) fixing process. Therefore, we adopt the notion of a concept drift, which denotes that the defect prediction model has become unsuitable as the set of influencing features has changed - usually due to a change in the underlying bug generation process (i.e., the concept). We explore four open source projects (Eclipse, OpenOffice, Netbeans and Mozilla) and construct file-level and project-level features for each of them from their respective CVS and Bugzilla repositories. We then use this data to build defect prediction models and visualize the prediction quality along the time axis. These visualizations allow us to identify concept drifts and - as a consequence - phases of stability and instability expressed in the level of defect prediction quality. Further, we identify those project features which influence the defect prediction quality, using both a tree-induction algorithm and a linear regression model. Our experiments uncover that software systems are subject to considerable concept drifts in their evolution history. Specifically, we observe that the change in the number of authors editing a file and the number of defects fixed by them contribute to a project's concept drift and therefore influence the defect prediction quality. Our findings suggest that project managers using defect prediction models for decision making should be aware of the actual phase of stability or instability due to a potential concept drift.",2009,0, 316,Localizing Software Faults Simultaneously,"Current automatic diagnosis techniques are predominantly of a statistical nature and, despite typical defect densities, do not explicitly consider multiple faults, as also demonstrated by the popularity of the single-fault Siemens set. We present a logic reasoning approach, called Zoltar-M(ultiple fault), that yields multiple-fault diagnoses, ranked in order of their probability. Although application of Zoltar-M to programs with many faults requires further research into heuristics to reduce computational complexity, theory as well as experiments on synthetic program models and two multiple-fault program versions from the Siemens set show that for multiple-fault programs this approach can outperform statistical techniques, notably spectrum-based fault localization (SFL). As a side-effect of this research, we present a new SFL variant, called Zoltar-S(ingle fault), that is provably optimal for single-fault programs, outperforming all other variants known to date.",2009,0, 317,QoS-aware connection resilience for network-aware grid computing fault tolerance,"Current grid computing fault tolerance leverages IP dynamic rerouting and schemes implemented in the application or in the middleware to overcome both software and hardware failures.
Despite the flexibility of current grid computing fault-tolerant schemes in recovering inter-service connectivity from an almost comprehensive set of failures, they might not be able to also restore connection QoS guarantees, such as minimum bandwidth and maximum delay. This phenomenon is exacerbated when, as in global grid computing, the grid computing sites are not connected by dedicated network resources but share the same network infrastructure with other Internet services. This paper aims at showing the advantages of integrating grid computing fault tolerance schemes with next generation network (NGN) resilience schemes. Indeed, by combining the utilization of generalized multi-protocol label switching (GMPLS) resilience schemes, such as path restoration, with application- or middleware-layer fault-tolerant schemes, such as service migration or replication, it is possible to guarantee the necessary QoS to the connections between grid computing sites while limiting the required network and computational resources.",2005,0, 318,Service Restoration Methodology for Multiple Fault Case in Distribution Systems,"Currently, KEPCO's distribution automation system (DAS) provides a very effective restoration solution for the single-fault case but cannot handle multiple faults. This paper proposes a two-step restoration scheme - sequential and simultaneous restoration - for multiple-fault cases. Efficiency has been achieved by the introduction of a restoration performance index (RPI) and a load-balancing algorithm. Test results showing the effectiveness of the proposed scheme are presented, and field experience with DAS in Korea is described as well.",2006,0, 319,Hierarchical fault diagnosis and health monitoring in multi-platform space systems,"Current spacecraft health monitoring and fault diagnosis practices, which involve around-the-clock limit-checking and trend analysis on large amounts of telemetry data, do not scale well for future multi-platform space missions due to the presence of larger amounts of telemetry data and an increasing need to make long-duration missions cost-effective by limiting the size of the operations team. The need for efficient utilization of telemetry data by employing machine learning and rule-based reasoning has been pointed out in the literature in order to enhance diagnostic performance and assist less-experienced personnel in performing monitoring and diagnosis tasks. In this research we develop a systematic and transparent fault diagnosis methodology within a hierarchical fault diagnosis framework for multi-platform space systems. Our proposed Bayesian network-based hierarchical fault diagnosis methodology allows fuzzy rule-based reasoning at different components in the hierarchy. Due to the unavailability of real formation flight data, we demonstrate the effectiveness of our proposed methodology by using synthetic data of a leader-follower formation flight. Our proposed methodology is likely to enhance the level of autonomy in ground-support-based spacecraft health monitoring and fault diagnosis.",2009,0, 320,A power transformer protection with recurrent ANN saturation correction,"Current transformers (CTs) are present in electric power systems for protection and measurement purposes, and they are susceptible to the saturation phenomenon. This paper presents an alternative approach to the correction of distorted waveforms caused by CT saturation. The method uses recurrent artificial neural network (ANN) algorithms.
As an example application, a complete protection system for a power transformer based on differential logic has been utilized. The EMTP-ATP software has been chosen as the computational tool to simulate the electrical system in order to generate data to train and test the ANNs. Many ANN architectures were trained and tested. Encouraging results related to the application of the new method are presented.",2005,0, 321,Defect Prevention: A General Framework and Its Application,"Defect prevention in CMM and causal analysis and resolution in CMMI are focused on identifying the root cause of defects and preventing defects from recurring. Actions are expected at the project level as well as the organization level. This paper provides a general framework of defect prevention activities which consists of organization structure, defect definition, defect prevention process and quality culture establishment. Implementation of defect prevention results in rapid and sustained improvement in software product quality, which is evident from an example in Neusoft Group, where defect density in the post-release phase decreased from 0.85 defects/KLOC in 2000 to 0.1 defects/KLOC in 2005.",2006,0, 322,Defect tolerance for gracefully-degradable microfluidics-based biochips,"Defect tolerance is an important design consideration for microfluidics-based biochips that are used for safety-critical applications. We propose a defect tolerance methodology based on graceful degradation and dynamic reconfiguration. We first introduce a tile-based biochip architecture, which is scalable for large-scale bioassays. A clustered defect model is used to evaluate the graceful degradation method for tile-based biochips. The proposed schemes ensure that the bioassays mapped to a droplet-based microfluidic array during design can be executed on a defective biochip through operation rescheduling and/or resource rebinding. Real-life biochemical procedures, namely polymerase chain reaction (PCR) and multiplexed in-vitro diagnostics on human physiological fluids, are used to evaluate the proposed defect tolerance schemes.",2005,0, 323,A Model Based Framework for Specifying and Executing Fault Injection Experiments,"Dependability is a fundamental property of computer systems operating in critical environments. The measurement of dependability (and thus the assessment of the solutions applied to improve dependability) typically relies on controlled fault injection experiments that are able to reveal the behavior of the system in case of faults (to test error handling and fault tolerance) or extreme input conditions (to assess the robustness of system components). In our paper we present an Eclipse-based fault injection framework that provides a model-based approach and a graphical user interface to specify both the fault injection experiments and the run-time monitoring of the results. It automatically implements the modifications that are required for fault injection and monitoring using the Javassist technology; in this way it supports the dependability assessment and robustness testing of software components written in Java.",2009,0, 324,Fault Modeling and Functional Test Methods for Digital Microfluidic Biochips,"Dependability is an important attribute for microfluidic biochips that are used for safety-critical applications, such as point-of-care health assessment, air-quality monitoring, and food-safety testing. Therefore, these devices must be adequately tested after manufacture and during bioassay operations.
Known techniques for biochip testing are all function oblivious (i.e., while they can detect and locate defect sites on a microfluidic array, they cannot be used to ensure correct operation of functional units). In this paper, we introduce the concept of functional testing of microfluidic biochips. We address fundamental biochip operations, such as droplet dispensing, droplet transportation, mixing, splitting, and capacitive sensing. Long electrode actuation times are avoided to ensure that there is no electrode degradation during testing. The functional testing of pin-constrained biochips is also studied. We evaluate the proposed test methods using simulations as well as experiments for a fabricated biochip.",2009,0, 325,Using motion-compensated frame-rate conversion for the correction of 3:2 pulldown artifacts in video sequences,"Currently, the most popular method of converting 24 frames per second (fps) film to 60 fields/s video is to repeat each odd-numbered frame for 3 fields and each even-numbered frame for 2 fields. This method is known as 3:2 pulldown and is an easy and inexpensive way to perform 24 fps to 60 fields/s frame-rate conversion. However, the 3:2 pulldown introduces artifacts, which are especially visible when viewing on progressive displays and during slow-motion playback. We have developed a motion-compensated frame-rate conversion algorithm to reduce the 3:2 pulldown artifacts. By using frame-rate conversion with interpolation instead of field repetition, mean square error and blocking artifacts are reduced significantly. The techniques developed here can also be applied to the general frame-rate conversion problem.",2000,0, 326,Empirical evaluation of the fault-detection effectiveness of smoke regression test cases for GUI-based software,"Daily builds and smoke regression tests have become popular quality assurance mechanisms to detect defects early during software development and maintenance. In previous work, we addressed a major weakness of current smoke regression testing techniques, i.e., their lack of ability to automatically (re)test graphical user interface (GUI) event interactions - we presented a GUI smoke regression testing process called daily automated regression tester (DART). We have deployed DART and have found several interesting characteristics of GUI smoke tests that we empirically demonstrate in this paper. We also combine smoke tests with different types of test oracles and present guidelines for practitioners to help them generate and execute the most effective combinations of test-case length and test oracle complexity. Our experimental subjects consist of four GUI-based applications. We generate 5000-8000 smoke tests (enough to be run in one night) for each application.
Our results show that: (1) short GUI smoke tests with certain test oracles are effective at detecting a large number of faults; (2) there are classes of faults that our smoke tests cannot detect; (3) short smoke tests execute a large percentage of the code; and (4) the entire smoke testing process is feasible in terms of execution time and storage space.",2004,0, 327,Impurity and defect passivation in poly-Si films fabricated by aluminium-induced crystallisation,"Data from resistivity, optical transmission and reflectance, and open-circuit voltage (Voc) measurements show that hydrogen or ammonia plasma treatment greatly reduces the effective doping concentration and the parasitic optical absorption, and improves the minority carrier properties of poly-Si films fabricated by aluminium-induced crystallisation (AIC) on glass substrates. Two 450 nm thick AIC poly-Si films on glass, one hydrogenated and one non-hydrogenated, were used to fabricate poly-Si/a-Si:H heterojunctions. The non-hydrogenated sample had a 1-sun Voc of 136 mV and the hydrogenated sample had a 1-sun Voc of 236 mV. The poor Voc indicates that AIC poly-Si films are more suitable as seed layers than as absorber layers. However, heterojunctions are sensitive to surface conditions and thus further Voc improvements may be possible by surface optimization of the hydrogenated AIC poly-Si film prior to the formation of the heterojunction.",2003,0, 328,Data Mining Applied to the Electric Power Industry: Classification of Short-Circuit Faults in Transmission Lines,"Data mining can play a fundamental role in modern power systems. However, the companies in this area still face several difficulties to benefit from data mining. A major problem is to extract useful information from the currently available non-labeled digitized time series. This work focuses on automatic classification of faults in transmission lines. These faults are responsible for the majority of the disturbances and cascading blackouts. To circumvent the current lack of labeled data, the alternative transients program (ATP) simulator was used to create a public comprehensive labeled dataset. Results with different preprocessing (e.g., wavelets) and learning algorithms (e.g., decision trees and neural networks) are presented, which indicate that neural networks outperform the other methods.",2007,0, 329,Novel modular fault tolerant switched reluctance machine for reliable factory automation systems,"Electrical machines and drives used in diverse critical fields like advanced factory automation systems, automotive and aerospace applications, military, energy and medical equipment, etc., require both special motor and converter topologies to achieve a high level of fault tolerance. In this paper a novel modular fault-tolerant switched reluctance machine is proposed. Its stator is built up of modules that are simple to manufacture and to replace. The machine is able to operate continuously despite winding faults of diverse severity. It is fed by a special power converter having a separate half H-bridge leg for each coil. Thus a complex and highly reliable electrical system is obtained. By advanced dynamic co-simulations (using a coupled Flux 2D and Simulink program) the behaviour of the drive system under five winding fault conditions is studied.
The obtained results prove the fault-tolerance capability of the proposed machine.",2010,0, 330,Electrical Test Structures for Investigating the Effects of Optical Proximity Correction,"Electrical test structures have been designed to enable the characterisation of corner serif forms of optical proximity correction. These structures measure the resistance of a conducting track with a right-angled corner. Varying amounts of OPC have been applied to the outer and inner corners of the feature and the effect on the resistance of the track has been investigated. A prototype test mask has been fabricated which contains test structures suitable for on-mask electrical measurement. The same mask was used to print the structures using an i-line lithography tool for on-wafer characterisation. Results from the structures at wafer level have shown that OPC has an impact on the final printed features. In particular, the level of corner rounding is dependent upon the dimensions of the OPC features employed, and the measured resistance can be used to help quantify the level of aggressiveness of the inner corner serifs.",2009,0, 331,Image processing techniques for wafer defect cluster identification,"Electrical testing determines whether each die on a wafer functions as originally designed. But these tests don't detect all the defective dies in clustered defects on the wafer, such as scratches, stains, or localized failed patterns. Although manual checking prevents many defective dies from continuing on to assembly, it does not detect localized failure patterns - caused by the fabrication process - because they are invisible to the naked eye. To solve these problems, we propose an automatic, wafer-scale, defect cluster identifier. This software tool uses a median filter and a clustering approach to detect the defect clusters and to mark all defective dies. Our experimental results verify that the proposed algorithm effectively detects defect clusters, although it introduces an additional 1% yield loss of electrically good dies. More importantly, it makes automated wafer testing feasible for application in the wafer-probing stage.",2002,0, 332,Remote sensing of power system arcing faults,"Electromagnetic radiation in the form of atmospheric radiowaves (or sferics) originates from power system apparatus when transient fault currents are present. A system to monitor these events via the detection of the induced very high frequency (VHF) sferic radiation has been in operation since November 1998. This system is part of an ongoing research program to develop overhead line fault detection and location equipment. This paper details the implementation of the sferic monitoring system and the latest developments that aim to improve event detection and triggering efficiency. Example transient sferic radiation records taken from the extensive data archive are presented. Fourier time-frequency domain analysis is employed to extract features from the sferic signal data. Finally, the future application of such monitoring technologies to power distribution networks is discussed.",2000,0, 333,WYSIWIB: A declarative approach to finding API protocols and bugs in Linux code,"Eliminating OS bugs is essential to ensuring the reliability of infrastructures ranging from embedded systems to servers. Several tools based on static analysis have been proposed for finding bugs in OS code.
They have, however, emphasized scalability over usability, making it difficult to focus the tools on specific kinds of bugs and to relate the results to patterns in the source code. We propose a declarative approach to bug finding in Linux OS code using a control-flow-based program search engine. Our approach is WYSIWIB (What You See Is Where It Bugs), since the programmer expresses specifications for bug finding using a syntax close to that of ordinary C code. The key advantage of our approach is that search specifications can be easily tailored to eliminate false positives or catch more bugs. We present three case studies that have allowed us to find hundreds of potential bugs.",2009,0, 334,Fault detection in a tristate system environment,"Embedded computers commonly rely on multiple-board systems, called tristate system environments. These environments consist of an interconnect and drivers or receivers with tristate features and boundary scan capabilities. The authors present a comprehensive fault model that provides 100 percent fault coverage and minimizes test set size.",2001,0, 335,Fault tolerance techniques for wireless ad hoc sensor networks,"Embedded sensor networks are systems of nodes, each equipped with a certain amount of sensing, actuating, computation, communication, and storage resources. One of the key prerequisites for effective and efficient embedded sensor systems is the development of low-cost, low-overhead, highly resilient fault-tolerance techniques. Cost sensitivity implies that traditional double and triple redundancies are not adequate solutions for embedded sensor systems due to their high cost and high energy consumption. We address the problem of embedded sensor network fault tolerance by proposing a heterogeneous back-up scheme, where one type of resource is substituted with another. First we propose a broad spectrum of heterogeneous fault-tolerance techniques for sensor networks, including ones where communication and sensing are mutually backing up each other. Then, we focus our attention on two specific approaches where we back up one type of sensor with another type of sensor. In the first, we assume faults that manifest through complete malfunctioning; in the second, we assume sensors where faults manifest through a high level of error.",2002,0, 336,Adaptive Wavelet Domain Audio Steganography with High Capacity and Low Error Rate,"Embedding a secret message into a cover medium without attracting attention, which is known as steganography, is desirable in some security applications. One of the media that can be used as a cover medium is the audio signal. In this paper we introduce an adaptive wavelet domain steganography with high capacity and low error rate. We use the lifting scheme to create perfect-reconstruction integer-to-integer (Int2Int) filter banks and hide data in the least significant bits (LSB) of the detail coefficients in an adaptive way to reduce the error rate. Our method has a zero error rate for hiding capacities below 100 kilobits per second (kbps) and 0.3% error for 200 kbps, in comparison to the 0.9% error of normal wavelet-domain LSB steganography.
Signal-to-noise ratio (SNR) values and listening test results show that the stego audio is indistinguishable from the original audio even with hiding capacities up to 200 kbps.",2007,0, 337,Corrective Classification: Classifier Ensembling with Corrective and Diverse Base Learners,"Empirical studies on supervised learning have shown that ensembling methods lead to a model superior to the one built from a single learner under many circumstances, especially when learning from imperfect, such as biased or noise-infected, information sources. In this paper, we provide a novel corrective classification (C2) design, which incorporates error detection, data cleansing and Bootstrap sampling to construct base learners that constitute the classifier ensemble. The essential goal is to reduce noise impacts and eventually enhance the learners built from noise-corrupted data. We further analyze the importance of both the accuracy and diversity of base learners in ensembling, in order to shed some light on the mechanism under which C2 works. Experimental comparisons demonstrate that C2 is not only superior to the learner built from the original noisy sources, but also more reliable than bagging or the aggressive classifier ensemble (ACE), which are two degenerate components/variants of C2.",2006,0, 338,An empirical study of fault localization for end-user programmers,"End users develop more software than any other group of programmers, using software authoring devices such as e-mail filtering editors, by-demonstration macro builders, and spreadsheet environments. Despite this, there has been little research on finding ways to help these programmers with the dependability of their software. We have been addressing this problem in several ways, one of which includes supporting end-user debugging activities through fault localization techniques. This paper presents the results of an empirical study conducted in an end-user programming environment to examine the impact of two separate factors in fault localization techniques that affect technique effectiveness. Our results shed new insights into fault localization techniques for end-user programmers and the factors that affect them, with significant implications for the evaluation of those techniques.",2005,0, 339,Strategies and behaviors of end-user programmers with interactive fault localization,"End-user programmers are writing an unprecedented number of programs, due in large part to the significant effort put forth to bring programming power to end users. Unfortunately, this effort has not been supplemented by a comparable effort to increase the correctness of these often faulty programs. To address this need, we have been working towards bringing fault localization techniques to end users. In order to understand how end users are affected by and interact with such techniques, we conducted a think-aloud study, examining the interactive, human-centric ties between end-user debugging and a fault localization technique. Our results provide insights into the contributions such techniques can make to an interactive end-user debugging process.",2003,0, 340,Interactive fault localization techniques in a spreadsheet environment,"End-user programmers develop more software than any other group of programmers, using software authoring devices such as multimedia simulation builders, e-mail filtering editors, by-demonstration macro builders, and spreadsheet environments.
Despite this, there has been little research on finding ways to help these programmers with the dependability of the software they create. We have been working to address this problem in several ways, one of which includes supporting end-user debugging activities through interactive fault localization techniques. This paper investigates fault localization techniques in the spreadsheet domain, the most common type of end-user programming environment. We investigate a technique previously described in the research literature and two new techniques. We present the results of an empirical study to examine the impact of two individual factors on the effectiveness of fault localization techniques. Our results reveal several insights into the contributions such techniques can make to the end-user debugging process and highlight key issues of interest to researchers and practitioners who may design and evaluate future fault localization techniques.",2006,0, 341,Error correction in single-hop wireless sensor networks - A case study,"Energy-efficient communication is a key issue in wireless sensor networks. A common belief is that a multi-hop configuration is the only viable energy-efficient technique. In this paper we show that the use of forward error correction techniques in combination with ARQ is a promising alternative. Exploiting the asymmetry between lightweight sensor nodes and a more powerful base station, even advanced techniques known from cellular networks can be efficiently applied to sensor networks. Our investigations are based on realistic power models and real measurements and, thus, consider all side-effects. This is, to the best of our knowledge, the first investigation of advanced forward error correction techniques in sensor networks that is based on real experiments.",2009,0, 342,Fault Coverage Measurement of a Timed Test Case Generation Approach,"Ensuring that a Real-Time Embedded System (RTES) is free of major faults that may affect the way it performs is a non-trivial task. RTES behaviour is based on the interactions with its surrounding environment and on the timing characteristics of that same environment. As a result, time poses a new dimension to the complexity of the testing process. In previous research, we introduced a 'priority-based' approach which tested the logical and timing behaviour of an RTES modeled formally as Uppaal automata. The 'priority-based' approach was based on producing sets of timed test traces by achieving timing-constraint coverage according to three sets of priorities, namely boundary, out-boundary and in-boundary. In this paper, we extend that work by validating the 'priority-based' approach according to a well-known timed fault model. The validation process shows promising results, notably, that the 'priority-based' approach is capable of detecting all the fault types included in the proposed fault model.",2010,0, 343,Cross-Cultural Differences of Error Climate between Chinese and German Entrepreneurial Firms,"Entrepreneurial firms' actions can easily deviate from their predetermined goals or directions as the firms develop; such deviations are errors. How firms cope with those errors is critical to their survival. Our research defines error climate as a general tendency of employees toward errors.
The research used the free software Mx and studied four industries, including IT and software, catering and hotels, machinery and parts, and construction, to find the characteristics of the error climates of Chinese and German entrepreneurial firms and to compare the differences between their error climates. The results suggest that Chinese entrepreneurial firms pay more attention to solving problems caused by errors, while German entrepreneurial firms pay more attention to encouraging employees to communicate when an error occurs. The implications of the results for the IT industry are also discussed.",2010,0, 344,Patching Processor Design Errors with Programmable Hardware,"Equipping processors with programmable hardware to patch design errors lets manufacturers release regular hardware patches, avoiding costly chip recalls and potentially speeding time to market. For each error detected, the manufacturer creates a fingerprint, which the customer uses to program the hardware. The hardware watches for error conditions; when they arise, it takes action to avoid the error. Overall, our scheme enables an exciting new environment where hardware design errors can be handled as easily as system software bugs, by applying a patch to the hardware.",2007,0, 345,Reversible Data Hiding-Based Approach for Intra-Frame Error Concealment in H.264/AVC,"Error concealment plays an important role in robust video transmission. Recently, Chen and Leung presented an efficient data hiding-based (DH-based) approach to recover corrupted macroblocks from the intra-frame of an H.264/AVC sequence, but it suffers from the quality degradation problem. Since the quantized discrete cosine transform coefficients of an H.264/AVC sequence tend to follow a Laplace distribution, we propose a reversible DH-based approach for intra-frame error concealment based on this characteristic. Our design is able to achieve no quality degradation. Experimental results demonstrate that the quality of recovered video sequences obtained by our approach is indeed superior to that of the DH-based method. In addition, the quality advantage of our approach is illustrated when compared with the previous five related methods.",2010,0, 346,Spatial error concealment technique for losslessly compressed images using data hiding in error-prone channels,"Error concealment techniques are significant due to the growing interest in imagery transmission over error-prone channels. This paper presents a spatial error concealment technique for losslessly compressed images using least-significant-bit (LSB)-based data hiding to reconstruct a close approximation after the loss of image blocks during image transmission. Before transmission, block description information (BDI) is generated by applying quantization following a discrete wavelet transform. This is then embedded into the LSB plane of the original image itself at the encoder. At the decoder, this BDI is used to conceal blocks that may have been dropped during the transmission. Although the original image is modified slightly by the message embedding process, no perceptible artifacts are introduced and the visual quality is sufficient for analysis and diagnosis.
In comparisons with previous methods at various loss rates, the proposed technique is shown to be promising due to its good performance in the case of a loss of isolated and continuous blocks.",2010,0, 347,Enhanced temporal error concealment algorithm with edge-sensitive processing order,"Error concealment techniques are widely used in video decoders with error-prone communication channels. In this paper, an enhanced edge-sensitive processing order for a temporal error concealment algorithm is proposed. Side information from the neighboring macroblocks of the corrupted macroblocks is considered to derive a suitable processing order for error concealment, and a new motion vector searching algorithm is also proposed for temporal error concealment. Experimental results prove that the processing order plays an important role in error concealment. By considering the processing order, the proposed algorithm outperforms existing algorithms in terms of PSNR and perceptual artifacts, and an improvement of 2.45 dB in PSNR can be achieved compared with the same system with raster-scan order.",2008,0, 348,Evaluation of Error Control Mechanisms Based on System Throughput and Video Playable Frame Rate on Wireless Channel,"Error control mechanisms are widely used in video communications over wireless channels. However, while improving end-to-end video quality, they consume extra bandwidth and reduce effective system throughput. In this paper, considering system throughput and playable frame rate as evaluation metrics, we investigate the efficiency of different error control mechanisms. We develop an analytical throughput model to express the effective system throughput of different error control mechanisms under different conditions. For a given packet loss probability, both the optimal number of retransmissions in adaptive ARQ and the optimal number of redundant packets in adaptive FEC for each frame type are derived by keeping the system throughput at a constant value. Also, end-to-end playable frame rates for the two schemes are computed. We then conclude which error control scheme is most suitable for which application condition. Finally, empirical simulation results with various data analyses are demonstrated.",2010,0, 349,Hardcopy image barcodes via block-error diffusion,"Error diffusion halftoning is a popular method of producing frequency modulated (FM) halftones for printing and display. FM halftoning fixes the dot size (e.g., to one pixel in conventional error diffusion) and varies the dot frequency according to the intensity of the original grayscale image. We generalize error diffusion to produce FM halftones with user-controlled dot size and shape by using block quantization and block filtering. As a key application, we show how block-error diffusion may be applied to embed information in hardcopy using dot shape modulation. We enable the encoding and subsequent decoding of information embedded in the hardcopy version of continuous-tone base images. The encoding-decoding process is modeled as robust data transmission through a noisy print-scan channel that is explicitly modeled. We refer to the encoded printed version as an image barcode due to its high information capacity, which differentiates it from common hardcopy watermarks. The encoding/halftoning strategy is based on a modified version of block-error diffusion.
Encoder stability, image quality versus information capacity tradeoffs, and decoding issues with and without explicit knowledge of the base image are discussed.",2005,0, 350,A Feature-Based Flexible Customization Method of Error Modeling for Machine Tool,"Error model customization for different work types of complex CNC equipment is significant for identifying and compensating the errors of CNC integrated manufacturing, and further for achieving process optimization and product quality control. Traditional error modeling methods are complicated, time-consuming, and difficult to adapt. A flexible customized error modeling method based on features and multi-body system (MBS) theory is proposed. Firstly, the feature space of the machine tool and the Denavit-Hartenberg (DH) homogeneous transformation matrix space are built separately. Secondly, the feature mapping function is deduced. Thirdly, the customization method of the error model is described. Finally, a flexible feature-based customization system for error modeling is developed. This system and method were used to perform error modeling for several typical machine tools of Shenyang Machine Tool Group. It takes about 1 minute to obtain an accurate error model, which can analyze each single error's percentage of the total error. These are the basis of error compensation. The practice indicates the method is effective and feasible, and it provides a new approach for error control, optimization of the integrated manufacturing process, digital manufacturing and automation technology.",2010,0, 351,Error scope on a computational grid: theory and practice,"Error propagation is a central problem in grid computing. We re-learned this while adding a Java feature to the Condor computational grid. Our initial experience with the system was negative, due to the large number of new ways in which the system could fail. To reason about this problem, we developed a theory of error propagation. Central to our theory is the concept of an error's scope, defined as the portion of a system that it invalidates. With this theory in hand, we recognized that the expanded system did not properly consider the scope of errors it discovered. We modified the system according to our theory, and succeeded in making it a more robust platform for distributed computing.",2002,0, 352,Interactive Error Control for Mobile Video Telephony,"Error robust video communication for hand-held devices is a delicate task because of limited computational resources and hostile channel conditions in mobile environments. The loss of coded video data on the channel can result in spatio-temporal error propagation in the video. In addition, stringent end-to-end delays for conversational applications make this challenge even more difficult. In this work, we investigate several techniques which exploit feedback from the receiver to enhance the performance of conversational video in realistic mobile communication environments. Specifically, we show how a low-complexity interactive error tracking technique can be combined with multiple reference picture selection (RPS) based on the existing syntax of H.264/AVC. This technique outperforms other interactive error protection strategies by a margin of more than 2 dB for moderate channel loss rates, with minimal impact on end-to-end delays.",2007,0, 353,ED4I: error detection by diverse data and duplicated instructions,"Errors in computing systems can cause abnormal behavior and degrade data integrity and system availability.
Errors should be avoided especially in embedded systems for critical applications. However, as the trend in VLSI technologies has been toward smaller feature sizes, lower supply voltages and higher frequencies, there is a growing concern about temporary errors as well as permanent errors in embedded systems; thus, it is essential to detect those errors. Software-implemented hardware fault tolerance (SIHFT) is a low-cost alternative to hardware fault-tolerance techniques for embedded processors: It does not require any hardware modification of commercial off-the-shelf (COTS) processors. ED4I (error detection by data diversity and duplicated instructions) is a SIHFT technique that detects both permanent and temporary errors by executing two ""different"" programs (with the same functionality) and comparing their outputs. ED4I maps each number, x, in the original program into a new number x', and then transforms the program so that it operates on the new numbers so that the results can be mapped backwards for comparison with the results of the original program. The mapping in the transformation of ED4I is x' = kx for integer numbers, where k determines the fault detection probability and data integrity of the system. For floating-point numbers, we find a value of kf for the fraction and ke for the exponent separately, and use k = kf * 2^ke for the value of k. We have demonstrated how to choose an optimal value of k for the transformation. This paper shows that, for integer programs, the transformation with k = -2 was the most desirable choice in six out of seven benchmark programs we simulated. It maximizes the fault detection probability under the condition that the data integrity is highest",2002,0, 354,Error Modeling in Network Tomography by Sparse Code Shrinkage (SCS) Method,"Errors in data measurements for network tomography may cause misleading estimations. This paper presents a novel technique to model these errors by using the sparse code shrinkage (SCS) method. SCS is used in the field of image recognition for denoising image data, and we are the first to apply this technique for estimating error-free link delays from erroneous link delay data. To make SCS adoptable in network tomography, we have made some changes in the SCS technique, such as the use of Non-Negative Matrix Factorization (NNMF) instead of independent component analysis (ICA) for the purpose of estimating the sparsifying transformation. The estimated (denoised) link delays are compared with the original (error-free) link delays based on the data obtained from a laboratory test bed. The simulation results verify the accuracy of the proposed technique.",2010,0, 355,Fault injection experiment results in space borne parallel application programs,"Development of the REE Commercial-Off-The-Shelf (COTS) based space-borne supercomputer requires a detailed knowledge of system behavior in the presence of Single Event Upset (SEU) induced faults. When combined with a hardware radiation fault model and mission environment data in a medium grained system model, experimentally obtained fault behavior data can be used to: predict system reliability, availability and performance; determine optimal fault detection methods and boundaries; and define high ROI fault tolerance strategies. The REE project has developed a fault injection suite of tools and a methodology for experimentally determining system behavior statistics in the presence of application level SEU induced transient faults.
Initial characterization of science data application code for an autonomous Mars Rover geology application indicates that this code is relatively insensitive to SEUs and thus can be made highly immune to application level faults with relatively low overhead strategies.",2002,0, 356,Fault-tolerant systems design-estimating cache contents and usage,"Development of the Remote Exploration and Experimentation (REE) Commercial Off The Shelf (COTS) based space-borne supercomputer requires a detailed knowledge of system behavior in the presence of Single Event Upset (SEU) induced faults. When combined with a hardware radiation fault model and mission environment data in a medium grained system model, experimentally obtained fault behavior data can be used to: predict system reliability, availability and performance; determine optimal fault detection methods and boundaries; and define high Return On Investment (ROI) fault tolerance strategies. The REE project has developed a fault injection suite of tools and a methodology for experimentally determining system behavior statistics in the presence of SEU induced transient faults in application level codes. Where faults cannot be directly injected, analytic means are used in conjunction with experimental data to determine probabilistic system fault response. In many processors, it is not possible to inject faults directly into onboard cache. In this case, a cache contents estimation tool can be used to define probabilistic fault susceptibility, which is then combined with direct memory fault injection data to determine fault behavior statistics. In this paper we discuss the structure, function and usage of a PPC-750 cache contents estimator for the REE project.",2002,0, 357,Quantifying the effects of placement errors on WSN connectivity in grid-based deployments,"Device deployment plays a key role in the performance of any Wireless Sensor Network (WSN) application. WSN device deployment (i.e. the numbers and positions of the devices) must consider several design factors, like coverage, connectivity, lifetime, etc. However, connectivity remains the most fundamental factor, especially in harsh environments. Extensive work has been done on connectivity in WSN deployments. However, realistic physical deployment errors have been ignored in the majority of that work. In this paper, we explore efficient grid-based deployment planning for connectivity when sensor placement is affected by random bounded errors around their corresponding grid vertices. We propose a new approach to evaluate the average connectivity percentage of the deployed sensor nodes. We apply this approach to a practical 3D deployment scenario, namely, the cubic grid-based deployment with bounded uniform random errors. The average connectivity percentage is computed numerically and verified by extensive simulation results. Based on the results, quantified effects of placement errors on the connectivity percentage are outlined.",2010,0, 358,ReStore: symptom based soft error detection in microprocessors,"Device scaling and large scale integration have led to growing concerns about soft errors in microprocessors. To date, in all but the most demanding applications, implementing parity and ECC for caches and other large, regular SRAM structures have been sufficient to stem the growing soft error tide.
This will not be the case for long, and questions remain as to the best way to detect and recover from soft errors in the remainder of the processor - in particular, the less structured execution core. In this work, we propose the ReStore architecture, which leverages existing performance enhancing checkpointing hardware to recover from soft error events in a low cost fashion. Error detection in the ReStore architecture is novel: symptoms that hint at the presence of soft errors trigger restoration of a previous checkpoint. Example symptoms include exceptions, control flow mis-speculations, and cache or translation look-aside buffer misses. Compared to conventional soft error detection via full replication, the ReStore framework incurs little overhead, but sacrifices some amount of error coverage. These attributes make it an ideal means to provide very cost effective error coverage for processor applications that can tolerate a nonzero, but small, soft error failure rate. Our evaluation of an example ReStore implementation exhibits a 2x increase in MTBF (mean time between failures) over a standard pipeline with minimal hardware and performance overheads. The MTBF increases by 7x if ReStore is coupled with parity protection for certain pipeline structures.",2005,0,359 359,ReStore: Symptom-Based Soft Error Detection in Microprocessors,"Device scaling and large-scale integration have led to growing concerns about soft errors in microprocessors. To date, in all but the most demanding applications, implementing parity and ECC for caches and other large, regular SRAM structures have been sufficient to stem the growing soft error tide. This will not be the case for long and questions remain as to the best way to detect and recover from soft errors in the remainder of the processor - in particular, the less structured execution core. In this work, we propose the ReStore architecture, which leverages existing performance enhancing checkpointing hardware to recover from soft error events in a low cost fashion. Error detection in the ReStore architecture is novel: symptoms that hint at the presence of soft errors trigger restoration of a previous checkpoint. Example symptoms include exceptions, control flow misspeculations, and cache or translation look-aside buffer misses. Compared to conventional soft error detection via full replication, the ReStore framework incurs little overhead, but sacrifices some amount of error coverage. These attributes make it an ideal means to provide very cost effective error coverage for processor applications that can tolerate a nonzero, but small, soft error failure rate. Our evaluation of an example ReStore implementation exhibits a 2x increase in MTBF (mean time between failures) over a standard pipeline with minimal hardware and performance overheads. The MTBF increases by 20x if ReStore is coupled with protection for certain particularly vulnerable pipeline structures",2006,0, 360,Middleware for decentralised fault tolerant service execution using replication in pervasive systems,"Devices in pervasive systems are resource constrained, heterogeneous, mobile and personal. The devices may seek or provide such services as data compression, encryption, image analysis, query processing and other types within a local, but dynamic network. In order to enable sharing of services and resources, it is necessary that the middleware facilitate collaboration amongst devices.
Resource and service sharing in such environments is a challenge due to device mobility, heterogeneity and link failures. In this paper, we propose a middleware that enables decentralised decision making for fault tolerant service execution in pervasive systems. Opportunistic contacts between pairs of devices are exploited to locate service-device mappings and initiate replicas of the required service. Redundancy is an unnecessary but unavoidable consequence of any replication-based fault tolerance scheme. One of the objectives of the proposed middleware is to minimise the number of replications. Simulation results show that the proposed scheme has lower time overhead as compared to existing schemes.",2010,0, 361,2D Photonic Defect Layers in 3D Inverted Opals on Si Platforms,"Dielectric spheres synthesised for the fabrication of self-organized photonic crystals such as opals offer large opportunities for the design of novel nanophotonic devices. In this paper, we show a hexagonal superlattice monolayer of dielectric spheres inscribed on a 3D colloidal photonic crystal by e-beam lithography. The crystal is produced by a variation of the vertical drawing deposition method assisted by an acoustic field. The structures were chosen after simulations showed that a hexagonal super-lattice monolayer in air exhibits an even photonic band gap below the light cone if the refractive index of the spheres is higher than 1.93",2006,0, 362,Atmospheric correction of AMSR-E brightness temperatures for dry snow cover mapping,"Differences between the brightness temperatures (spectral gradient) collected by the Advanced Microwave Scanning Radiometer for EOS (AMSR-E) at 18.7 and 36.5 GHz are used to map the snow-covered area (SCA) over a region including the western U.S. The brightness temperatures are corrected to account for atmospheric effects by means of a simplified radiative transfer equation whose parameters are stratified using rawinsonde data collected from a few stations. The surface emissivity is estimated from the model, and the brightness temperatures at the surface are computed as the product of the surface temperature and the computed emissivity. The SCA derived from microwave data is compared with that obtained from the Moderate Resolution Imaging Spectroradiometer for both cases of corrected and noncorrected brightness temperatures. The improvement to the SCA retrievals based on the corrected brightness temperatures shows an average value around 7%",2006,0, 363,Tropospheric heterogeneities corrections in differential radar interferometry,"Differential radar interferometry (DInSAR) has been used increasingly widely to monitor crustal deformations due to underground mining and oil extraction, earthquakes, volcanoes, landslides, and so on. However, tropospheric heterogeneities have been identified as one of the major error sources in DInSAR, which can be up to 40 cm as derived from dual-frequency GPS measurements in the example given in this paper. Therefore, it is crucial to correct the tropospheric heterogeneities in the DInSAR results for monitoring crustal deformation. These corrections from several GPS stations in the radar imaging area can be interpolated and applied to the DInSAR results.
The discussions are based on data from the Tower Colliery test site southwest of Sydney, Australia.",2002,0, 364,Adaptive Correction of Errors from Segmented Digital Ink Texts in Chinese Based on Context,"Digital ink texts in Chinese can neither be converted into users' desired layouts nor be recognized until their characters, lines, and paragraphs are correctly extracted. There are many errors in automatically segmented digital ink texts in Chinese because they are free-form, are mixed with other languages, and their Chinese characters have small gaps and complex structures. Paragraphs, lines, and characters (recognizable language symbols) in digital ink may be wrongly extracted. An adaptive approach based on context is proposed to correct these wrongly extracted objects. Each extracted object is first adaptively visualized by color and shape labels according to relations between it and its neighbors. Users use simple gestures naturally and easily to merge and split wrongly extracted objects. Contexts are constructed from users' gestures and the objects invoked by them, from which users' intentions are identified. We have conducted experiments using real-life segmented digital ink texts in Chinese and compared the proposed approach with others. Experimental results demonstrate that the proposed approach is feasible, flexible, effective, and robust.",2010,0, 365,Software solution for fault record analysis in power transmission and distribution,"Digital protection relays provide the functionality of recording network disturbances during faults. Meanwhile, digital relays account for a substantial share of the installed relay base, so the utilities can gather valuable information with a large coverage of their grid and can start to enjoy the additional benefits of modern technology beyond the functions built into the devices. After a fault, the operating personnel want to obtain the most precise fault location to narrow the search for possible damage on the line. The fault locator precision of a single relay is limited by physics and by the grid conditions of mixed lines, load taps etc. But an easy-to-use software system for relay fault records can provide the desired precision to the utility personnel. The system is open to fault records of any relay, which is accomplished via the Comtrade data format. It also contains the parameters of segmented or untransposed lines. Furthermore it uses sophisticated self-adapting algorithms for analysis beyond those used at the protection relays.",2004,0, 366,Impact of configuration errors on DNS robustness,"During the past twenty years the Domain Name System (DNS) has sustained phenomenal growth while maintaining satisfactory user-level performance. However, the original design focused mainly on system robustness against physical failures, and neglected the impact of operational errors such as mis-configurations. Our measurement efforts have revealed a number of mis-configurations in DNS today: delegation inconsistency, lame delegation, diminished server redundancy, and cyclic zone dependency. Zones with configuration errors suffer from reduced availability and increased query delays up to an order of magnitude. The original DNS design assumed that redundant DNS servers fail independently, but our measurements show that operational choices create dependencies between servers. We found that, left unchecked, DNS configuration errors are widespread.
Specifically, lame delegation affects 15% of the measured DNS zones, delegation inconsistency appears in 21% of the zones, diminished server redundancy is even more prevalent, and cyclic dependency appears in 2% of the zones. We also noted that the degrees of mis-configuration vary from zone to zone, with the most popular zones having the lowest percentage of errors. Our results indicate that DNS, as well as any other truly robust large-scale system, must include systematic checking mechanisms to cope with operational errors.",2009,0, 367,Offset based leaky prediction for error resilient ROI coding,"During transmission, video data inevitably suffer from errors. Intra update is a common approach to stop error propagation. However, in case of errors, damaged images cannot recover until the next update, which often leads to annoying effects. In this paper, we propose an enhanced leaky prediction approach that enables the region-of-interest (ROI) of images to recover gently from the immediately succeeding frame of erroneous ones in favor of better human perception. Moreover, an optimized offset compensation technique is designed to improve coding performance. Experimental results show that the proposed scheme can achieve better image quality for the ROI and that the fluctuation of bit rate is greatly reduced, compared to the intra update method.",2009,0, 368,An improved DMVE temporal error concealment,"During video transmission over error-prone networks, the compressed bit stream is often corrupted by channel errors, which may cause the video quality to degrade suddenly. In this paper, we present a novel temporal error concealment technique as a post-processing tool at the decoder side for recovering the lost information. In order to recover the lost motion vector, an improved decoder motion vector estimation (DMVE) criterion is introduced which considers temporal correlation and motion trajectory together. We utilize pixels in the two previous frames as well as surrounding pixels of the lost block. The best motion vector is determined according to the criterion, and then the lost pixels are recovered using motion compensation. Simulations show that the proposed technique can achieve remarkable objective (PSNR) and subjective gains in the quality of the recovered video.",2008,0, 369,A Dynamic Binary Translation Framework Based on Page Fault Mechanism in Linux Kernel,"Dynamic binary translation and optimization is one of the essential techniques for computing system virtualization. This paper proposes a new dynamic translation framework for co-designed virtual machines. It generates and handles translation requests based on the page fault mechanism provided in the Linux kernel. In this new framework, the translation of guest code and the execution of translated code can be performed on different processors in parallel. The framework supports the coprocessor translating guest code pages while the host CPU simultaneously executes translated pages; thus the translator becomes more efficient. The paper also presents a qualitative analysis of the time cost in our framework on an x86-ARM co-designed dynamic binary translation system, and suggests that the performance of this framework can be further improved if shared memory between the host CPU and the coprocessor is used.
The framework can also be used in a dynamic binary translator on multi-core platforms.",2010,0, 370,Sensitivity analysis of modular dynamic fault trees,"Dynamic fault tree analysis, as currently supported by the Galileo software package, provides an effective means for assessing the reliability of embedded computer-based systems. Dynamic fault trees extend traditional fault trees by defining special gates to capture sequential and functional dependency characteristics. A modular approach to the solution of dynamic fault trees effectively applies Binary Decision Diagram (BDD) and Markov model solution techniques to different parts of the dynamic fault tree model. Reliability analysis of a computer-based system tells only part of the story, however. Follow-up questions such as Where are the weak links in the system?, How do the results change if my input parameters change? and What is the most cost effective way to improve reliability? require a sensitivity analysis of the reliability analysis. Sensitivity analysis (often called Importance Analysis) is not a new concept, but the calculation of sensitivity measures within the modular solution methodology for dynamic and static fault trees raises some interesting issues. In this paper we address several of these issues, and present a modular technique for evaluating sensitivity, a single-traversal solution to sensitivity analysis for BDDs, a simplified methodology for estimating sensitivity for Markov models, and a discussion of the use of sensitivity measures in system design. The sensitivity measures for both the Binary Decision Diagram and Markov approaches presented in this paper are implemented in Galileo, a software package for reliability analysis of complex computer-based systems",2000,0, 371,Fault-tolerant and power-aware scheduling algorithm in hard-real-time distributed systems,"The dynamic voltage scaling (DVS) technique is being increasingly used in hard-real-time energy-limited embedded systems as a means to conserve energy and prolong their lifetimes. In this paper, we first analyze the interplay between fault tolerance and energy saving, as well as their quantitative demands on processor slack resources. Then, we extend the traditional fault-tolerant completion time test (FTCTT) to a power-aware fault-tolerant completion time test (PAFTCTT). Based on PAFTCTT, a voltage slowdown factor calculation is proposed. These slowdown factors not only guarantee that all hard tasks can be scheduled within their deadlines despite any single permanent fault, but also effectively reduce energy consumption. Finally, the simulation experiments reveal that the slowdown factor technique can achieve energy savings of up to 31.3% (with an average of 16.3%).",2010,0, 372,"Static Var compensators (SVC) required to solve the problem of delayed voltage recovery following faults in the power system of the Saudi electricity company, western region (SEC-WR)","Each power system is unique in its load pattern, growth trends and type, generation resources and network configuration. One of the main objectives of the power system operation planners is the operation and control of the power system to provide the most secure and reliable power supply. The power system of the Saudi electricity company in the western region (SEC-WR) faced a high load growth during the past few years. This load increase gave rise to a very high loading of the transmission system elements, mainly power transformers and cables.
The western region load is mainly composed of air conditioning (AC) load during the high-load season. In case of faults, this type of load induces delayed voltage recovery following fault clearing on the transmission system. The sustained low voltage following transmission line faults could cause customer interruptions and possibly equipment damage. The integrity of the transmission system may also be affected. The transient stability of the system may be affected. This may also influence the stability of the generating units in the system. The existing dynamic model of the SEC-WR System has been described. The response of the model to the actual faults is compared with actual records obtained from the dynamic system monitor (DSM) installed in several locations in the SEC-WR System. To avoid brownouts and blackouts following system faults as much as possible, SVC systems will be installed. An automatic under voltage load shedding scheme has been set up and optimized as additional security and backup measures to cater for severe disturbances such as three-phase and single-phase faults.",2003,0,2162 373,Faults Detection and Isolation Based On Neural Networks Applied to a Levels Control System,"The need to guarantee the security and trustworthiness of equipment during the execution of industrial processes grows ever greater. It is therefore very important that faults in the processes can be detected and isolated. This paper presents an approach to a process fault detection and isolation (FDI) system applied to a levels control system connected to an industrial Foundation Fieldbus network. The FDI system was developed using artificial neural networks (ANN) and tested in a real environment.",2007,0, 374,On combining fault classification and error propagation analysis in RT-Level dependability evaluation,"Early analysis of the functional impact of faults aims either at classifying the faults according to their main potential effect, or at analyzing more in depth the error propagation paths in the circuit. This paper presents the results of extensive SEU-like fault injections performed on a VHDL model of the 8051 micro-controller. The advantage of combining the two types of analyses and the impact of the workload are discussed.",2004,0, 375,Agent Model for Human Expert Trend Analysis Technique for Real Time Fault Simulation in Integrated Fault Diagnostic System,"Early fault detection is critical for safe and optimum plant operation and maintenance in any chemical plant. Quick corrective action can help in minimizing quality and productivity offsets and can assist in averting hazardous consequences in abnormal situations. In this paper, fault diagnosis based on trends analysis is considered, where integrated equipment behaviors and operation trajectory are analyzed using a trend-matching approach. A qualitative representation of these trends using IF-THEN rules based on a neuro-fuzzy approach is used to find root causes and possible consequences for any detected abnormal situation. An experimental plant is constructed to provide real-time fault simulation data for fault detection method verification.",2007,0, 376,Defect detection of bearing surfaces based on machine vision technique,"Due to the high demands for bearing productivity and quality and the shortcomings of traditional detection methods, this paper proposes an automatic detection system based on machine vision techniques.
The detection system uses digital image processing technology to process the images collected by a CCD camera and complete identification of bearing surfaces quickly and accurately. Firstly, least squares fitting and annulus scan are used to locate the bearing and the regions which will be detected. Secondly, contrast enhancement and low-pass filtering are used to improve the quality of images. Next, object inspection is applied to determine whether defects exist. Finally, shape features are used to complete defect recognition. Experiments show that the detection system is highly efficient, highly accurate and easy to use. This research has a certain practical value.",2010,0, 377,Study on method of single-phase-to-earth fault section location in neutral point resonant grounded system,"Due to the large number of branches and the high grounding resistance of the distribution network, earth fault location has not been solved effectively. A live location method is proposed for locating the single-phase-to-earth fault section in a neutral point resonant grounded system. In this method, the compensation current of the arc-suppression coil is adjusted after a metallic earth fault occurs, and the fault section can be confirmed by the variation of zero-sequence current measured by the FTUs installed along the lines, since the zero-sequence current in a non-fault path is proportional to the zero-sequence voltage. When a single-phase-to-earth fault via resistance occurs, the zero-sequence voltage changes after the arc-suppression coil is adjusted. The method is also effective as long as the zero-sequence current is converted by the zero-sequence voltage. Site experiments have been carried out to prove that the method is feasible.",2010,0, 378,Modeling method for information model of fault tree diagnosis based on UML,"To address the complex relations between different data types in the fault tree diagnosis information model, a UML-based modeling method for the information model of fault tree diagnosis is proposed. Firstly, the requirements of integrated diagnosis and AI-ESTATE for information in the test and diagnostic environment, and the shortcomings of the fault tree diagnosis information model, are analyzed. Secondly, the key structure, elements, data types and their constraining relations in the XML Schema-based fault tree diagnostic information model are described. Afterwards, the mapping relations and binding rules between XML Schema and UML classes are studied. Finally, the UML class diagram of the fault tree diagnosis information model is created.",2010,0, 379,Practical Diagnostic Approach of Energy Consumption and Systematic Fault of AC System,"Due to the vast energy consumption of AC (Air Conditioning) systems, the reasonable design, optimal operation and efficient management of AC systems are necessary for whole-city energy management (CEM). To achieve these objectives, the verification of systematic characteristics, practical approaches to energy consumption and systematic fault diagnosis of AC systems should be taken as the premises and the most elementary tasks. Based on the AC commissioning principle, a practical diagnosis approach for energy consumption and systematic faults of AC systems is established.
The framework and general procedures of the practical approach are presented in this paper.",2010,0, 380,On the Risk of Fault Coupling over the Chip Substrate,"Duplication and comparison has proven to be an efficient method for error detection. Based on this generic principle, dual-core processor architectures with output comparison are being proposed for safety-critical applications. Placing two instances of the same (arbitrary) processor on one die yields a very cost efficient ""single chip"" implementation of this principle. At the same time, however, the physical coupling of the two replicas creates the potential for certain types of faults to affect both cores in the same way, such that the mutual checking will fail. The key question here is how this type of coverage leakage relates to other imperfections of the duplication and comparison approach that would also be found using two cores on separate dies (such as coupling over a common power supply or clock). In this paper we analyze several of the relevant physical coupling mechanisms and elaborate a model to decompose the genesis of a common cause fault into several steps. We present an experimental study showing that a very tight local and temporal coincidence of the fault effect in both replicas is a crucial prerequisite for a common cause fault. Based on this quantitative input we can conclude from our decomposition model that the risk of common cause faults is low for physical coupling mechanisms with relatively slow propagation speed, such as thermal and mechanical effects. The role of asymmetry for mitigating common cause faults is discussed in the light of these findings.",2009,0, 381,A new method for fault detection during power swing in distance protection,"During a power swing, currents and voltages behave as they do during a fault. Therefore, a power swing blocking function in distance relays is necessary to discriminate between a power swing and a fault. Otherwise, a power swing can be mistaken for a fault and cause a relay trip. The main problem arises when a fault occurs during a power swing. In this case, distance relays should be unblocked. In this paper, a new method based on the DC component of fault currents is proposed to detect a fault during power swing blocking. The proposed method can detect single-phase-to-ground, two-phase-to-ground and three-phase faults. Applying the new method to a sample network demonstrates its features.",2009,0, 382,Deformation correction in ultrasound images using contact force measurements,"During an ultrasound scan, contact between the probe and the skin deforms the underlying tissue. This can be considered a feature (as in elastography), but is in general undesirable, particularly for 3D scanning. In this paper we propose a novel system to correct this deformation by measuring the contact force at the time of the ultrasound scan and then using an elastic model to predict the tissue deformation. The inverse of this deformation is then applied to the image, generating the image that would have been seen had there been no contact with the probe. A prototype system has been implemented using an a priori finite element model to predict the deformation.
This has been tested on gelatine phantoms and shown to remove the contact deformation and so give improved 3D reconstructions",2001,0, 383,Test Case Mutation in Hybrid State Space for Reduction of No-Fault-Found Test Results in the Industrial Automation Domain,During development of a device, a large set of test cases is executed to ensure the quality requirements. Nevertheless, because of timing issues it is not possible to execute all possible test cases, and therefore it is not possible to guarantee a product that always works as expected by the end user. Finding the root cause of failures in returned devices is still largely manual work by an expert because the exact system and environment state is not known. In this paper we present an approach which allows automatic mutation of test cases for hybrid systems to reproduce failures based on vague user descriptions.,2009,0, 384,Correction strategy for view maintenance anomaly after schema and data updating concurrently,"When maintaining materialized views in a data warehouse, efficiently handling concurrent updates is an important and intractable problem. The paper discusses typical situations in which schema changes and data updates occur concurrently, and analyzes the reasons why concurrent updates result in view maintenance anomalies. Based on the analysis, an enhanced commit agent is designed to deal with the out-of-order commit problem. Thus, the consistency between the data warehouse and the data source is guaranteed.",2005,0, 385,Some myths and common errors in simulation experiments,"During the more than fifty years that Monte Carlo simulation experiments have been performed on digital computers, a wide variety of myths and common errors have evolved. We discuss some of them, with a focus on probabilistic and statistical issues",2001,0, 386,A fault diagnosis system for heat pumps,"During the operation of heat pumps, faults like heat exchanger fouling, component failure, or refrigerant leakage reduce the system performance. In order to recognize these faults early, a fault diagnosis system has been developed and verified on a test bench. The parameters of a heat pump model are identified sequentially and classified during operation. For this classification, several `hard' and `soft' clustering methods have been investigated, while fuzzy inference systems or neural networks are created automatically by newly developed software. Choosing a simple black-box model structure, the number of sensors can be minimized, whereas a more advanced grey-box model yields better classification results",2001,0, 387,Synergistic coordination between software and hardware fault tolerance techniques,"Describes an approach for enabling the synergistic coordination between two fault-tolerance protocols to simultaneously tolerate software and hardware faults in a distributed computing environment. Specifically, our approach is based on a message-driven confidence-driven (MDCD) protocol that we have devised for tolerating software design faults, and a time-based (TB) checkpointing protocol that was developed by N. Neves and W.K. Fuchs (1996) for tolerating hardware faults. By carrying out algorithm modifications that are conducive to synergistic coordination between volatile-storage and stable-storage checkpoint establishments, we are able to circumvent the potential interference between the MDCD and TB protocols, and to allow them to effectively complement each other to extend a system's fault tolerance capability.
Moreover, the protocol coordination approach preserves and enhances the features and advantages of the individual protocols that participate in the coordination, keeping the performance cost low.",2001,0, 388,The reliability of diverse systems: a contribution using modelling of the fault creation process,"Design diversity is a defence against design faults causing common-mode failure in redundant systems, but we badly lack knowledge about how much reliability it will buy in practice, and thus about its cost-effectiveness, the situations in which it is an appropriate solution and how it should be taken into account by assessors and safety regulators. Both current practice and the scientific debate about design diversity depend largely on intuition. More formal probabilistic reasoning would facilitate critical discussion and empirical validation of any predictions: to this aim, we propose a model of the generation of faults and failures in two separately-developed program versions. We show results on: (i) what degree of reliability improvement an assessor can reliably expect from diversity; and (ii) how this reliability improvement may change with higher-quality development processes. We discuss the practical relevance of these results and the degree to which they can be trusted.",2001,0, 389,Emulation-based design errors identification,Design verification has a large impact on the final testability of a system. The identification and removal of design errors from the initial design steps increases the testing quality of the entire design flow. We propose in this paper to exploit the potential of an emulator to accelerate a validation methodology for RTL designs. Alternative emulator configurations are compared in order to evaluate the performance speed-up of the presented methodology. The RTL design functionalities are compared with a SystemC executable specification model.,2002,0, 390,"Improving the Performance of Fault-Aware Scheduling Policies for Desktop Grids (Be Lazy, Be Cool)","Desktop Grids have proved to be a suitable platform for the execution of Bag-of-Tasks applications but, being characterized by a high resource volatility, require the availability of scheduling techniques able to effectively deal with resource failures and/or unplanned periods of unavailability. Fault-aware scheduling, proposed in [2], can be considered a promising approach, yielding both performance improvements for Bag-of-Tasks applications and increased utilization for Desktop Grids. The best fault-aware scheduling strategy available at the moment uses on-line scheduling, that is, it starts a task as soon as a machine becomes available. In this paper we present a machine selection policy based on the idea that it is sometimes better to wait for another machine rather than greedily exploit an immediately available one. An extensive simulation study, carried out for a variety of realistic Desktop Grid configurations and Bag-of-Tasks workloads, has revealed that the new scheduling strategy further improves application performance and machine utilization with respect to the best fault-aware scheduling strategy among those proposed in [2].",2007,0, 391,Evaluating the impact of Undetected Disk Errors in RAID systems,"Despite the reliability of modern disks, recent studies have made it clear that a new class of faults, Undetected Disk Errors (UDEs), also known as silent data corruption events, becomes a real challenge as storage capacity scales.
While RAID systems have proven effective in protecting data from traditional disk failures, silent data corruption events remain a significant problem unaddressed by RAID. We present a fault model for UDEs, and a hybrid framework for simulating UDEs in large-scale systems. The framework combines a multi-resolution discrete event simulator with numerical solvers. Our implementation enables us to model arbitrary storage systems and workloads and estimate the rate of undetected data corruptions. We present results for several systems and workloads, from gigascale to petascale. These results indicate that corruption from UDEs is a significant problem in the absence of protection schemes and that such schemes dramatically decrease the rate of undetected data corruption.",2009,0, 392,A first design for CANsistant: A mechanism to prevent inconsistent omissions in CAN in the presence of multiple errors,"Despite the significant advantages of the controller area network (CAN), there is an extended belief that CAN is not suitable for critical applications, mainly because of several dependability limitations. One of them is its limited data consistency. Several solutions to this problem have been previously proposed, but they are not able to efficiently ensure consistent broadcasts in the presence of multiple channel errors. This paper introduces a circuit called CANsistant, which detects all scenarios potentially leading to the inconsistent omission of a frame in the presence of up to 4 channel errors and, if necessary, retransmits the affected frame.",2009,0, 393,Wavelet Coherence and Fuzzy Subtractive Clustering for Defect Classification in Aeronautic CFRP,"Despite their high specific stiffness and strength, carbon fiber reinforced polymers, stacked at different fiber orientations, are susceptible to interlaminar damage, which may occur in the form of micro-cracks and voids and leads to a loss of performance. Within this framework, ultrasonic tests can be exploited in order to detect and classify the kind of defect. The main objective of this work is to evolve a previous heuristic approach, based on the use of Support Vector Machines, proposed in order to recognize and classify the defect starting from the measured ultrasonic echoes. In this context, a real-time approach could be exploited to solve real industrial problems with enough accuracy and realistic computational effort. Particularly, we discuss the cross wavelet transform and wavelet coherence for examining relationships between signals in the time-frequency domain. For our aim, a software package has been developed, allowing users to perform the cross wavelet transform, the wavelet coherence and the Fuzzy Inference System. Owing to the ill-posedness of the inverse problem, Fuzzy Inference has been used to regularize the system, implementing a data-independent classifier. The obtained results confirm the good performance of the implemented classifier, with very interesting applications.",2010,0, 394,Influence of localized defect on transmission in a coaxial Bragg structure,"Detailed simulations are presented for the effect of a localized defect on the transmission of a coaxial Bragg structure.
It is found that if a localized defect is introduced into both the outer-wall and the inner-rod corrugations, two pass-bands will appear in the initial stop-band, associated with the inter-coupling of the operating mode with spurious modes; these pass-bands are quite narrow and thus may provide promising applications in high-Q resonators, tunable narrow-band filters, and pulse compressors.",2010,0, 395,Design Techniques for Streamlined Integration and Fault Tolerance in a Distributed Sensor System for Line-crossing Recognition,"Distributed sensor system applications (e.g., wireless sensor networks) have been studied extensively in recent years. Such applications involve resource-limited embedded sensor nodes that communicate with each other through self-organizing protocols. Depending on application requirements, distributed sensor system design may include protocol and prototype implementation. Prototype implementation is especially useful in establishing and maintaining system functionality as the design is customized to satisfy size, energy, and cost constraints. In this paper, we present a streamlined, application-specific approach to incorporating fault tolerance into a TDMA-based distributed sensor system for line-crossing recognition. The objective of this approach is to prevent node failures from translating into failures in the overall system. Our approach is specialized and light-weight so that fault tolerance is achieved without significant degradation in energy efficiency. We also present an asynchronous handshaking approach for providing synchronization between the transceiver and the digital processing subsystem in a sensor node. This provides a general method for achieving such synchronization with reduced hardware requirements and reduced energy consumption compared to conventional approaches, which rely on generic interface protocols. We demonstrate the capabilities of our approaches to fault tolerance and transceiver-processor integration through experiments involving a complete prototype wireless sensor network test-bed, and a distributed line-crossing recognition application that runs on this test-bed.",2007,0, 396,A Skeletal-Based Approach for the Development of Fault-Tolerant SPMD Applications,"Distributing applications over PC clusters to speed-up or size-up the execution is now commonplace. Yet efficiently tolerating faults of these systems is a major issue. To ease the addition of checkpoint-based fault tolerance at the application level, we introduce a Model for Low-Overhead Tolerance of Faults (MoLOToF), which is based on structuring applications using fault-tolerant skeletons. MoLOToF also encourages collaboration with the programmer and the execution environment. The skeletons are adapted to specific parallelization paradigms and yield what can be called fault-tolerant algorithmic skeletons. The application of MoLOToF to the SPMD parallelization paradigm results in our proposed FT-SPMD framework. Experiments show that the complexity for developing an application is small and the use of the framework has a small impact on performance.
Comparisons with existing system-level checkpoint solutions, namely LAM/MPI and DMTCP, point out that FT-SPMD has a lower runtime overhead while being more robust when a higher level of fault tolerance is required.",2010,0, 397,Analytically redundant controllers for fault tolerance: Implementation with separation of concerns,"Diversity or redundancy based software fault tolerance encompasses the development of application domain specific variants and error detection mechanisms. In this regard, this paper presents an analytical design strategy to develop the variants for a fault tolerant real-time control system. This work also presents a generalized error detection mechanism based on the stability performance of a designed controller using the Lyapunov Stability Criterion. The diverse redundant fault tolerance is implemented with an aspect oriented compiler to separate and thus reduce this additional complexity. A mathematical model of an inverted pendulum system has been used as a case study to demonstrate the proposed design framework.",2010,0, 398,DNA error correcting codes: No crossover.,"DNA error correcting codes over the edit metric create embeddable markers for sequencing projects that are tolerant of sequencing errors. When a sequence library has multiple sources for its sequences, use of embedded markers permits tracking of sequence origin. Evolutionary algorithms are currently the best known technique for optimizing DNA error correcting codes. In this study we resolve the question of the utility of the crossover operator used in earlier studies on optimizing DNA error correcting codes. The crossover operator in question is found to be substantially counterproductive. A majority of crossover events produce results that violate minimum-distance constraints required for error correction. A new algorithm, a form of modified evolution strategy, is tested and is found to locate codes with record size. The table of best known sizes for DNA error correcting codes is updated.",2009,0, 399,Mask contribution on CD & OVL errors budgets for Double Patterning Lithography,"Double Patterning Technology (DPT) is now considered the mainstream technology for 32 nm node lithography. The main DPT processes have been developed according to targeted applications: spacer and pitch splitting either by dual line or dual trench approaches. However, the successful implementation of DPT requires overcoming certain technical challenges in terms of exposure tool capability, process integration, mask performance and finally metrology. For the pitch splitting process, the mask performance becomes critical as the technique requires a set of two masks. This paper will focus on the mask impact on the global critical dimension (CD) and overlay (OVL) errors for DPT. The mask long-distance and local off-target CD variation and image placement were determined on DP features at 180 nm and 128 nm pitches, dedicated to 45 nm and 32 nm nodes respectively. The mask data were then compared to the wafer CD and OVL results achieved on the same DP patterns. Edge placement errors have been programmed on DP-like structures on the reticle in order to investigate the impact of the offsets on CD and image placement. The line CDs increase with asymmetric spaces adjacent to the drawn lines for offsets higher than 12 nm; they were then compared to the corresponding density induced by individual dense and sparse symmetric edges and correlated to the simulated prediction.
The single reticle trans-X offsets were then compared to the impact on CD of OVL errors in the double patterning strategy. Finally, the impact of pellicle-induced reticle distortions on image placement errors was investigated. The mechanical performance of the pellicle was assessed by mask registration measurements before and after pellicle removal. The reticle contribution to the overall wafer CD and OVL error budgets was addressed to meet the ITRS requirements.",2009,0, 400,Analysis of Fault-Tolerant Performance of a Doubly Salient Permanent-Magnet Motor Drive Using Transient Cosimulation Method,"Doubly salient permanent-magnet (DSPM) motors offer the advantages of high power density and high efficiency. In this paper, it is shown that the DSPM motor is a new class of fault-tolerant machine and a potential candidate for many applications where reliability and power density are of importance. Fault analysis is performed on a DSPM motor drive, including internal and external faults. Because experimentation on a real motor drive for such a purpose is impractical due to its high cost and difficulty, a new cosimulation model of a DSPM motor drive is developed using coupled magnetic and electric circuit solvers. Finally, to improve the performance of a DSPM motor drive with an open-circuited fault, a fault compensation strategy is proposed. Simulation and experimental results are presented, showing the effectiveness of the proposed cosimulation method and the high performance of the fault-tolerant characteristic of DSPM motor drives.",2008,0, 401,Image-space Correction of AR Registration Errors Using Graphics Hardware,"Augmented reality (AR) applications render virtual geometry directly on top of physical objects in a video scene. Registration accuracy is a serious problem in these cases since any imprecisions are immediately apparent as virtual and physical edges and features coincide. We present a hardware-accelerated image-based post-processing technique that adjusts rendering of virtual geometry to better match edges present in images of a physical scene, reducing the visual effect of registration errors from both inaccurate tracking and oversimplified modeling. Our algorithm is easily integrable with existing AR applications, having no dependency on the underlying tracking technique. We use the advanced programmable capabilities of modern graphics hardware to achieve high performance without burdening the CPU.",2006,0, 402,Neural net and expert system diagnose transformer faults,"Dissolved gas-in-oil analysis (DGA) is a common practice in transformer incipient fault diagnosis. The analysis techniques include the conventional key gas method, ratio methods, and artificial intelligence methods. Application of artificial intelligence (AI) techniques has shown very promising results. The methods include fuzzy logic, expert systems (EPS), evolutionary algorithms (EA), and artificial neural networks (ANN). A transformer incipient fault diagnosis system (ANNEPS) was developed over a period of 5 years at Virginia Tech in collaboration with Doble Engineering Company.
The system can detect thermal faults (distinguishing overheating of oil from overheating of cellulose, and between four overheating stages), low-energy discharge (partial discharge), high-energy discharge (arcing), and cellulose degradation.",2000,0, 403,Double circuit transmission line Fault Distance Location using Artificial Neural Network,"Distance relays used for protection of transmission lines have problems of under-reach, over-reach and maloperation due to high impedance faults. Further, the problem is compounded when the distance relays are used for protection of double circuit transmission lines due to the effect of zero sequence mutual coupling. Different types of faults on a protected transmission line should be located correctly. This paper presents a single neural network for fault distance location for all the ten types of faults (3 LG, 3 LLG, 3 LL, 1 LLL) in both the circuits of a double circuit transmission line fed from sources at both ends. This technique uses data from only one end, and accurate fault distance location is achieved after one cycle from the inception of the fault. The proposed Artificial Neural Network (ANN) based Fault Distance Locator uses fundamental components of the three phase current signals of both circuits and the three phase voltage signals to learn the hidden relationship in the input patterns. An improved performance is obtained once the neural network is trained suitably, thus performing correctly when faced with different system parameters and conditions, i.e. varying fault type, fault location, fault resistance, fault inception angle, presence of mutual coupling and remote source infeed.",2009,0, 404,Designing a fault-tolerant architecture for real-time distributed control system,"Distributed control systems play major roles in real-time applications. In some circumstances human lives may depend on these systems. Hence, they should be highly dependable. Since there is no specific way to forecast a failure, the systems should have fault-tolerant features to allow them to continue to operate in the presence of faults. In this research, a fault-tolerant architecture for a real-time distributed control system was developed. A fault-tolerant node was designed and its performance was evaluated using a Markov chain model. All these nodes communicate via a network. To choose the most suitable network for this system, several networks were analyzed qualitatively. Several medium access control protocols were also compared quantitatively using OMNET++. From the results obtained, a new fault-tolerant architecture is proposed. This system possesses high reliability not only at the node stage but also at the network stage, hence increasing the reliability of the overall system.",2002,0, 405,A Distributed Fault-Tolerant Algorithm for Event Detection Using Heterogeneous Wireless Sensor Networks,"Distributed event detection using wireless sensor networks has received growing interest in recent years. In such applications, a large number of inexpensive and unreliable sensor nodes are distributed in a geographical region to make firm and accurate local decisions about the presence or absence of specific events based on their sensor readings. However, sensor readings can be unreliable, due to either noise in the sensor readings or hardware failures in the devices, and may cause nodes to make erroneous local decisions.
We present a general fault-tolerant event detection scheme that allows nodes to detect erroneous local decisions based on the local decisions reported by their neighbors. This detection scheme does not assume homogeneity of sensor nodes and can handle cases where nodes have different accuracy levels. We prove analytically that the derived fault-tolerant estimator is optimal under the maximum a posteriori (MAP) criterion. An equivalent weighted voting scheme is also derived. Further, we describe two new error models that take into account the neighbor distance and the geographical distributions of the two decision quorums. These models are particularly suitable for detection applications where the event under consideration is highly localized. Our fault-tolerant estimator is simulated using a network of 1024 nodes deployed randomly in a square region and assigned random probabilities of failure.",2006,0, 406,Intelligent fault-tolerant CORBA service on real-time CORBA,"Distributed object applications can be made fault tolerant by replicating their constituent objects, and by distributing these replicas across the different computers in the network. The idea behind object replication is that the failure of one replica of an object can be masked from a client of the object because the other replicas can continue to perform any operation that the client requires. We propose IFTS (Intelligent Fault Tolerant CORBA Service) for handling faults of server object replicas using a replication concept to support fault tolerance. It can choose the fastest primary replica using the multicast mechanism. It also introduces passive replication for secure fault tolerance. Furthermore, we propose the design and implementation of the IFTS service to provide reliability and faster service using multicast technology by extending the existing CORBA ORB.",2001,0, 407,ACCE: Automatic correction of control-flow errors,"Detection of control-flow errors at the software level has been studied extensively in the literature. However, there has not been any published work that attempts to correct these errors. Low-cost correction of CFEs is important for real-time systems where checkpointing is too expensive or impossible. This paper presents automatic correction of control-flow errors (ACCE), an efficient error correction algorithm involving addition of redundant code to the program. ACCE has been implemented by modifying GCC, a widely used C compiler, and performance measurements show that the overhead is very low. Fault injection experiments on SPEC and MiBench benchmark programs compiled with ACCE show that the correct output is produced with high probability and that CFEs are corrected with a latency of a few hundred instructions.",2007,0, 408,A new method for fault section estimation in distribution network,"Determination of the faulted section is a necessary step for locating the fault in a distribution power system. In this paper, a new practical method is presented for fault section estimation in distribution systems. In the proposed method, different zones are first defined using an impedance classifier. Then, the suitable locations for installing the cutout fuses are determined using the designer's expertise. After that, special settings for the cutout fuse links are determined in such a way that they operate in coordination.
Finally, current waveforms are used to determine which cutout fuse operated, and thus in which section the fault occurred.",2010,0, 409,Defect identification of lumber through correlation technique with statistical and textural feature extraction method,"Feature extraction is an important component of a pattern recognition system. A well-defined feature extraction algorithm makes the identification process more effective and efficient. Several techniques exist for the quality checking of wooden materials. However, image based quality checking of wooden materials still remains a challenging task. Although trivial quality checking methods are available, they do not give useful results in most situations. This paper addresses the issue of quality checking of wooden materials using statistical and textural feature extraction techniques with high accuracy and reliability. In our work, a wood defect identification system has been designed based on pre-processing techniques, feature extraction, and correlation of the features of the wood species for their classification. The most popular technique used for textural classification is Gray-level Co-occurrence Matrices (GLCM). The features extracted from the enhanced images using the GLCM are then correlated, which determines the classification of the various wood species. Experiments conducted under the proposed conditions showing significant results are presented.",2010,0, 410,Automated Diagnosis of Product-Line Configuration Errors in Feature Models,"Feature models are widely used to model software product-line (SPL) variability. SPL variants are configured by selecting feature sets that satisfy feature model constraints. Configuration of large feature models can involve multiple stages and participants, which makes it hard to avoid conflicts and errors. New techniques are therefore needed to debug invalid configurations and derive the minimal set of changes to fix flawed configurations. This paper provides three contributions to debugging feature model configurations: (1) we present a technique for transforming a flawed feature model configuration into a constraint satisfaction problem (CSP) and show how a constraint solver can derive the minimal set of feature selection changes to fix an invalid configuration, (2) we show how this diagnosis CSP can automatically resolve conflicts between configuration participant decisions, and (3) we present experimental results that evaluate our technique. These results show that our technique scales to models with over 5,000 features, which is well beyond the size used to validate other automated techniques.",2008,0, 411,Adding fault tolerance mechanisms to Interbus-S,"Field bus technology is now a reality in industrial environments. There are many field bus systems commercially available, and each is suitable for particular kinds of applications. In this scenario the Interbus-S system is playing a leading role, due to the efficiency of its protocol. However, a drawback of this communication system is the centralisation of the mono-master arbitration scheme. The presence of a single device to co-ordinate communication activities makes the Interbus-S protocol vulnerable to fault occurrences in the master.
Maintaining full compatibility with the existing standard, the authors have defined a protocol extension which allows the whole communication system to continue working after the occurrence of a fault in the master node.",2000,0, 412,Adding Integrity Verification Capabilities to the LDPC-Staircase Erasure Correction Codes,"File distribution is becoming a key technology, in particular in large scale content broadcasting systems like DVB-H/SH. They largely rely on Application Level FEC codes (AL-FEC) in order to recover from transmission erasures. We believe that sooner or later, content integrity and source authentication security services will be required in these systems. In order to save the client terminal resources, which can be a handheld autonomous device, we have designed a hybrid system that merges the AL-FEC decoding and content integrity/source authentication services. More precisely, our system can detect a random object corruption triggered by a deliberate attack with a probability close to 100%, almost for free in terms of computation overhead. The case of intelligent corruptions is also addressed and countermeasures are proposed.",2009,0, 413,Simulation-Based Bug Trace Minimization With BMC-Based Refinement,"Finding the cause of a bug can be one of the most time-consuming activities in design verification. This is particularly true in the case of bugs discovered in the context of a random-simulation-based methodology, where bug traces, or counterexamples, may be several hundred thousand cycles long. In this paper, BUg TRAce MINimization (Butramin), which is a bug trace minimizer, is proposed. Butramin considers a bug trace produced by a random simulator or semiformal verification software and produces an equivalent trace of shorter length. Butramin applies a range of minimization techniques, deploying both simulation-based and formal methods, with the objective of producing highly reduced traces that still expose the original bug. Butramin was evaluated on a range of designs, including the publicly available picoJava microprocessor, and bug traces up to one million cycles long. Experiments show that in most cases, Butramin is able to reduce traces to a very small fraction of their initial sizes, in terms of cycle length and signals involved. The minimized traces can greatly facilitate bug analysis and reduce regression runtime.",2007,0,7581 414,Fault Detection Structures for the Montgomery Multiplication over Binary Extension Fields,"Finite field arithmetic is used in applications like cryptography, where it is crucial to detect errors. Therefore, concurrent error detection is very beneficial for increasing reliability in such applications. Multiplication is one of the most important operations and is widely used in different applications. In this paper, we target concurrent error detection in the Montgomery multiplication over binary extension fields. We propose error detection schemes for two Montgomery multiplication architectures. First, we present a new concurrent error detection scheme using time redundancy and apply it to semi-systolic array Montgomery multipliers. Then, we propose a parity based error detection scheme for the bit-serial Montgomery multiplier over binary extension fields.",2007,0, 415,Evolutionary design and adaptation of digital filters within an embedded fault tolerant hardware platform,"Finite impulse response filters (FIRs) are crucial devices for robust data communication and manipulation.
Multiplierless filters have been shown to produce high performance systems with fast signal processing and reduced area. Furthermore, the distributed architecture inherent in multiplierless filters makes them suitable candidates for fault tolerant design. Alternative approaches to the design of fault tolerant systems have been proposed using evolutionary algorithms (EAs) and the concept of evolvable hardware (EHW). This paper presents an evolvable hardware platform for the automated design and adaptation of multiplierless digital filters. Filters are realised within a dedicated programmable logic array (PLA). The platform employs a genetic algorithm to autonomously configure the PLA for a given set of coefficients. The ability of the platform to adapt to increasing numbers of faults was investigated through the evolution of a 31-tap low-pass FIR filter. Results show that the functionality of filters evolved on the PLA was maintained despite an increasing number of faults covering up to 25% of the PLA area. Additionally, three PLA initialisation methods were investigated to ascertain which produced the fastest fault recovery times. It was shown that seeding a population of random configuration-strings with the best configuration currently obtained resulted in a 6-fold increase in fault recovery speed over the other methods investigated.",2001,0, 416,Adaptive FMO selection strategy for error resilient H.264 coding,"Flexible macroblock ordering (FMO) is one of the effective error resilience tools in the H.264/AVC video coding standard. Nevertheless, the issue of how to arrange the macroblocks into a suitable FMO mapping type for different video applications is yet to be clarified and investigated. In this paper, we analyze the tradeoffs and effectiveness of the six fixed FMO types and, based on these types, use the joint source-channel rate distortion optimization (RDO) principle to propose an adaptive FMO type selection strategy for different video scenes and applications. The experimental results show that our method has more compatibility and flexibility than the six fixed FMO types, and better error resilience than most of them.",2008,0, 417,Implementation of reconfiguration management in fault-adaptive control systems,"Fault adaptive systems must adapt and reconfigure themselves in response to changes in the environment or the system itself, and have to maintain operation even in the case of system failures. In order to avoid performance degradation due to system reconfigurations, adequate reconfiguration management is necessary. This paper describes a fault-adaptive control system with multilayer control and a reconfiguration management system.",2002,0, 418,Comparison of the four configurations of the inductive Fault Current Limiter,"Fault current limiters (FCLs) are expected to play an important role in the protection of future power networks, since the increase of loads and expansion of the power networks lead to much higher short-circuit power. This paper presents a comparison of four different configurations of the inductive FCL, with respect to the FCL weight (magnetic core and winding material) and losses during both the nominal and the fault state of operation. Two main challenges in the inductive FCL design are reduction of the material weight and reduction of the induced dc winding over-voltage during the fault period.
So far, the solutions (core configurations) proposed in the literature are decoupling of the dc and the ac magnetic circuits, to avoid high voltages across the dc winding during a fault, and the so-called open-core configuration. The presented results reveal the merits and drawbacks of each of the configurations and compare them to the conventional inductive FCL design characteristics. The results are obtained through simulations in SaberDesigner and by experiments.",2008,0, 419,Broken rotor bar fault detection in induction motors using starting current analysis,"Fault detection based on a common steady-state analysis technique, such as FFT, is known to be significantly dependent on the loading conditions of induction motors. At light load, it is difficult to distinguish between healthy and faulty rotors because the characteristic broken rotor bar fault frequencies are very close to the fundamental component and their amplitudes are small in comparison. As a result, detection of the fault and classification of the fault severity under light load is almost impossible. In order to overcome this problem, this paper investigates the detection of rotor faults in induction machines by analysing the starting current using a newly developed quantification technique based on the wavelet transform. The analysis technique applies the wavelet transform to the envelope of the starting current. The envelope extraction is used to remove the strong fundamental component, which overshadows the characteristic differences between a healthy motor and a faulty motor with broken rotor bars. The results are then verified using tests on a machine with varying numbers of broken bars. The effects of initial rotor position, supply imbalance and loading are also investigated.",2005,0, 420,Detecting faults in four symmetric key block ciphers,"Fault detection in encryption algorithms is gaining in importance since fault attacks may compromise even recently developed cryptosystems. We analyze the different operations used by various symmetric ciphers and propose possible detection codes and frequency of checking. Several examples (i.e., AES, RC5, DES and IDEA) are presented to illustrate our analysis.",2004,0, 421,A new approach for fault detection in digital relays-based power system using Petri nets,"Fault detection in power systems, from the viewpoint of its required speed and accuracy, still needs to be investigated. Hidden faults resulting from malfunctioning circuit breakers, incorrect relay warnings, and the occurrence of multiple faults are the main difficulties in monitoring a power system. In this paper, Petri nets have been used for modeling and location detection of faults in power systems. In this deductive method, using information on the protection system's status, the estimation of the faulted sections has been modeled using Petri nets. This deductive process can be represented graphically based on Petri nets and executed using matrix operations. The inputs of the fault diagnosis models are the data acquired by the Remote Terminal Units (RTU) of the Supervisory Control and Data Acquisition (SCADA) system, including the relays' trip signals and the circuit breakers' status signals. Logical operand information from digital protection relays, such as pickup and operation of protection devices, is more reliable than SCADA data in reflecting relay trip conditions. It can be added to the information of SCADA-based fault diagnosis models.
By using Petri nets, the processing time of the information is reduced and the precision of the fault detection procedure is increased. Also, the proposed approach provides hierarchical monitoring of power systems.",2010,0, 422,A fault classification model of modern automotive infotainment system,"Fault detection, analysis, and repair are major challenges in the automotive field, especially in modern complex infotainment systems. To diagnose and fix a fault, it is essential to find a proper fault classification model. This paper presents a fault classification model of a modern automotive infotainment system, which consists of electronic control units. After investigating the automotive infotainment system from different possible aspects, a fault classification scheme is proposed which best fits this field.",2009,0, 423,Synthesis Of Optimal-Cost Dynamic Observers for Fault Diagnosis of Discrete-Event Systems,"Fault diagnosis consists in synthesizing a diagnoser that observes a given plant through a set of observable events, and identifies faults which are not observable as soon as possible after their occurrence. Existing literature on this problem has considered the case of static observers, where the set of observable events does not change during execution of the system. In this paper, we consider dynamic observers, where the observer can switch sensors on or off, thus dynamically changing the set of events it wishes to observe. We define a notion of cost for such dynamic observers and show that (i) the cost of a given dynamic observer can be computed and (ii) an optimal dynamic observer can be synthesized.",2007,0, 424,Ventilator Fault Diagnosis Based on Fuzzy Theory,"Fault diagnosis has been a research hotspot in industrial fields, and discussing effective fault diagnosis methods has practical significance. Aiming at the fuzzy and random features of the occurrence probabilities, this paper presents a hybrid method that combines the fault tree with fuzzy set theory. In this approach, fuzzy aggregation and defuzzification are adopted and this method is used in ventilator fault diagnosis. The research shows that this method is feasible and effective and can be applied to the fault diagnosis of other rotating machinery.",2009,0, 425,Exploratory analysis of massive data for distribution fault diagnosis in smart grids,"Fault diagnosis in power distribution systems is critical to expedite the restoration of service and improve reliability. With power grids becoming smarter, more and more data beyond the utility outage database are available for fault cause identification. This paper introduces basic methodologies to integrate and analyze data from different sources. A geographic information system (GIS) provides a framework to integrate these data through spatial and temporal relations. Features extracted from raw data provide different discriminant powers, which can be evaluated by the likelihood measure. A fault cause classifier is then trained to learn the relations between fault causes and the features. Two statistical methods, linear discriminant analysis (LDA) and logistic regression (LR), are introduced. The assumptions, general approaches and performances of these two techniques are discussed and evaluated on a real-world outage dataset.",2009,0, 426,Introducing dynamics in a fault diagnostic application using Bayesian Belief Networks,"Fault diagnostic techniques are required to determine whether a fault has occurred in a system and to identify the component failures that may have caused it.
This task can be complicated when dealing with complex systems, and dynamic behaviour in particular introduces further difficulties. This paper presents a method for fault detection on dynamic systems using Bayesian Belief Networks (BBNs). Possible trends are identified for the variables in the systems that are monitored by the sensors. Fault Trees (FTs) are built to represent the causality of the trends and these are then converted into BBNs. The networks developed for different sections are connected together to form a unique concise network. For a combination of sensors which deviate from the expected trends, calculating the updated probability enables a list of potential causes for the system scenarios to be obtained. A simple water tank system has been used to validate the method.",2009,0, 427,Parsifal: A Generic and Configurable Fault Emulation Environment with Non-Classical Fault Models,"Fault emulation has become an important tool for test evaluation. However, until now fault models other than the stuck-at fault model have rarely been used in emulation. In this paper, we propose non-classical fault models for emulation and a generic fault emulation environment capable of supporting these and other fault models and different emulation modes in a common support framework. Although different in logical implementation and physical abstraction level, all fault models are administered and applied together and can even be mixed in a single fault grading campaign. The proposed fault emulation environment is not restricted in its use to a certain emulator. The modular approach proposed in this paper allows easy adaptation for different emulation systems and reuse of all key components including the fault models. These may be applied during fault grading campaigns as well as in-circuit emulation. We present results obtained on the emulation system Mercury+ by Cadence.",2006,0, 428,Evaluating the Use of Reference Run Models in Fault Injection Analysis,"Fault injection (FI) has been shown to be an effective approach to assessing the dependability of software systems. To determine the impact of faults injected during FI, a given oracle is needed. Oracles can take a variety of forms, including (i) specifications, (ii) error detection mechanisms and (iii) golden runs. Focusing on golden runs, in this paper we show that there are classes of software that a golden run based approach cannot be used to analyse. Specifically, we demonstrate that a golden run based approach cannot be used in the analysis of systems which employ a main control loop with an irregular period. Further, we show how a simple model, which has been refined using FI experiments, can be employed as an oracle in the analysis of such a system.",2009,0, 429,An improved fault locating system of distribution network based on fuzzy identification,"A fault locating system, which is designed for fast power recovery, is very important for the economical operation of the distribution network. However, due to the uncertainty of the fault information, incorrect conclusions may be reached by traditional fault location calculations, so most fault locating systems cannot be employed in the distribution network. In this paper, an improved fault locating system is proposed, which is composed of a fault signal acquisition unit and a fault location analysis center. Fuzzy identification is employed in the fault location analysis center to deal with the uncertainty of fault information.
The failure and mistake rates of indicator action are used as the fuzzy parameters to calculate the fuzzy difference between the fault sequence and the standard fault set. The fault indicator is the primary device for fault information acquisition. Radio frequency and GPRS technology form the communication channel of the fault signal acquisition unit, which cuts down the construction cost while ensuring accurate acquisition of the fault information. The fault location system is deployed on the distribution network and operating well. With accurate fault location, the power supply recovers quickly, and losses from power failures are reduced effectively.",2010,0, 430,Using PQ Monitoring and Substation Relays for Fault Location on Distribution Systems,"Fault location is of considerable interest for utilities to improve their reliability and speed storm restorations. Power quality recorders, relays, and other monitors can provide information to help locate faults. In this paper, some basic impedance-based fault-location methods are evaluated on utility measurement data with known fault locations. The main finding is that reasonably accurate fault locations are possible on a wide range of distribution circuits with either feeder-level or bus-level substation monitoring. Another important finding described is how monitoring can be used to estimate the parameters of the fault arc. This can improve fault locations and help with accident investigations, equipment failure forensics, and other hazards related to the power and energy created by the arc.",2007,0, 431,Fault location using traveling wave for power networks,"Fault location using traveling waves has been applied successfully in extra-high voltage power grids. Due to its complexity and high cost, it is not easy for this technique to be accepted for use in distribution systems. In this paper, a new traveling wave fault location system is developed in a simple, cost-effective way for power networks (especially distribution systems). Two traveling wave sensors are developed to capture the current traveling wave flowing from the capacitive equipment to earth and the voltage traveling waves in all three phases. The outputs of the sensors are then used for triggering and time tagging by means of a Global Positioning System (GPS) receiver. The fault position is calculated from the traveling wave arrival times in every power station where only one fault locator is installed. The fault location system is tested in the power system. Testing results show that the fault locator has high precision and robustness.",2004,0, 432,Current fault management trends in NASA's planetary spacecraft,"Fault management for today's space missions is a complex problem, going well beyond the typical safing requirements of simpler missions. Recent missions have experienced technical issues late in the project lifecycle, associated with the development and test of fault management capabilities, resulting in both project schedule delays and cost overruns. Symptoms seem to become exaggerated in the context of deep space and planetary missions, most likely due to the need for increased autonomy and the limited communications opportunities with Earth-bound operators. These issues are expected to cause increasing challenges as the spacecraft envisioned for future missions become more capable and complex.
In recognition of the importance of addressing this problem, the Discovery and New Frontiers Program Office hosted a Fault Management Workshop on behalf of NASA's Science Mission Directorate, Planetary Science Division, to bring together experts in fault management from across NASA, DoD, industry and academia. The scope of the workshop was focused on deep space and planetary robotic missions, with full recognition of the relevance of, and subsequent benefit to, Earth-orbiting missions. Three workshop breakout sessions focused the discussions to target three topics: 1) fault management architectures, 2) fault management verification and validation, and 3) fault management development practices, processes and tools. The key product of this three-day workshop is a NASA White Paper that documents lessons learned from previous missions, recommended best practices, and future opportunities for investments in the fault management domain. This paper summarizes the findings and recommendations that are captured in the white paper.",2009,0, 433,Fault Management Using the CONMan Abstraction,"Fault management in networks is difficult. We argue that a major contributor to the difficulty of debugging network faults is the sheer volume of semantically anemic details exposed by protocols. Unlike past approaches that try to cope with the deluge of information exposed, in this paper we explore how to reduce and structure the management information exposed by data-plane protocols and devices to make them more amenable to fault management. To this effect, we delineate two conditions that the management interface of data-plane protocols should satisfy: it should provide a structured description of protocol reality and it should support what we call a ""conservation of bytes"" invariant. Based on this, we propose an architecture wherein data-plane protocols expose management information satisfying these conditions. This allows management applications to detect, localize and (possibly) resolve faults in a structured fashion. We discuss the detection of a representative set of real-world faults to illustrate our approach. We implemented these fault management features into three protocols and built a management application that uses the features to debug faults. Apart from serving as a proof of concept, this exercise indicates that our proposal does indeed simplify debugging of a large fraction of network faults.",2009,0, 434,Case study of designing a fault recorder system,"Fault recorders are reckoned among the most important components of protection systems within high voltage substations; by continuously monitoring power system parameters, diagnosing fault situations, and recording parameter status before and after the occurrence of a fault, they assist substation engineers in identifying the causes of failures and eliminating them. Designing fault recorders using innovative and breakthrough hardware and software components, such as high-speed processors, high capacity mass storage, accurate analog to digital converters, real time operating systems, etc., enables them to collect a huge amount of precise information on the status of the power network. This information can subsequently be used by evaluation software to analyze the circumstances of the fault so as to avoid repeating it in the future.
In this paper, the design of a fault recorder system is discussed as a case study in designing an embedded system, considering its specific requirements and exceptions.",2004,0, 435,A new distributed approach for building balanced ring for fault tolerance in mesh architecture,"The fault ring (f-ring) is a popular model for fault tolerance in grid based architectures such as the 2D-mesh and torus. The work proposed by Huxi Gu et al. introduced the concept of a balanced ring (b-ring) to reduce the traffic load on the fault ring to achieve fault tolerance in a mesh architecture. However, central to their work is the formation of the balanced ring that surrounds a fault ring. In this paper, we propose a new distributed approach for the formation of the balanced ring. Our approach is based on the eight-neighborhood property and requires only local information about the faulty nodes, in contrast to the global knowledge needed by their algorithm.",2009,0, 436,Perturbation-based Fault Screening,"Fault screeners are a new breed of fault identification technique that can probabilistically detect if a transient fault has affected the state of a processor. We demonstrate that fault screeners function because of two key characteristics. First, we show that much of the intermediate data generated by a program inherently falls within certain consistent bounds. Second, we observe that these bounds are often violated by the introduction of a fault. Thus, fault screeners can identify faults by directly watching for any data inconsistencies arising in an application's behavior. We present an idealized algorithm capable of identifying over 85% of injected faults on the SpecInt suite and over 75% overall. Further, in a realistic implementation on a simulated Pentium-III-like processor, about half of the errors due to injected faults are identified while still in speculative state. Errors detected this early can be eliminated by a pipeline flush. In this paper, we present several hardware-based versions of this screening algorithm and show that flushing the pipeline every time the hardware screener triggers reduces overall performance by less than 1%.",2007,0, 437,A Modified BCE Algorithm for Fault-Tolerance Scheduling of Periodic Tasks in Hard Real-Time Systems,"Fault tolerance is an important aspect of real-time control systems, due to unavoidable timing constraints. In this paper, the timing problem of a set of concurrent periodic tasks is considered where each task has primary and alternate versions. In the literature, the probability of a fault in the alternate version of a task is assumed to be zero. Here, a fault probability with uniform distribution has been used. In addition, to cover the situations in which both versions are scheduled with some time overlapping, a criterion is defined for prioritizing the primary version against the alternate version. A new scheduling algorithm is proposed based on the defined criterion. Simulation results show an increase in the number of executed primary tasks, which improves the efficiency of processor utilization and hence proves the efficiency of the proposed algorithm.",2009,0, 438,An optimal point in scheduling real-time tasks process based on fault tolerant imprecise computation model,"Fault tolerance is an important issue due to the critical nature of the tasks supported by real-time computer systems, since timing constraints must not be violated.
The imprecise computation technique has been proposed as a way to handle transient overload and to enhance the fault tolerance of real-time systems. This paper introduces an exact theoretical analysis for the imprecise computation model based on three principles (a maximize-reward test, a minimize-response-time test, and a minimize-errors test), and then finds the optimal point in the scheduling process that satisfies the three scheduling conditions. This is further demonstrated by the simulation results.",2002,0, 439,Modeling fault-tolerant mobile agent execution as a sequence of agreement problems,"Fault tolerance is fundamental to the further development of mobile agent applications. In the context of mobile agents, fault tolerance prevents a partial or complete loss of the agent, i.e. ensures that the agent arrives at its destination. Simple approaches such as checkpointing are prone to blocking. Replication can in principle improve solutions based on checkpointing. However, existing solutions in this context either assume a perfect failure detection mechanism (which is not realistic in an environment such as the Internet), or rely on complex solutions based on leader election and distributed transactions, where only a subset of solutions prevents blocking. The paper proposes a novel approach to fault tolerant mobile agent execution, which is based on modeling agent execution as a sequence of agreement problems. Each agreement problem is one instance of the well understood consensus problem. Our solution does not require a perfect failure detection mechanism, while preventing blocking and ensuring that the agent is executed exactly once.",2000,0, 440,FATOMAS-a fault-tolerant mobile agent system based on the agent-dependent approach,"Fault tolerance is fundamental to the further development of mobile agent applications. In the context of mobile agents, fault-tolerance prevents a partial or complete loss of the agent, i.e., it ensures that the agent arrives at its destination. We present FATOMAS, a Java-based fault-tolerant mobile agent system based on an algorithm presented in an earlier paper (2000). Contrary to the standard ""place-dependent"" architectural approach, FATOMAS uses the novel ""agent-dependent"" approach. In this approach, the protocol that provides fault tolerance travels with the agent. This has the important advantage of allowing fault-tolerant mobile agent execution without the need to modify the underlying mobile agent platform (in our case ObjectSpace's Voyager). In our performance evaluation, we show the costs of our approach relative to the single, non-replicated agent execution. Pipelined mode and optimized agent forwarding are two optimizations that reduce the overhead of a fault-tolerant mobile agent execution.",2001,0, 441,Doubly Fed Induction Generator Model-Based Sensor Fault Detection and Control Loop Reconfiguration,"Fault tolerance is gaining interest as a means to increase the reliability and availability of distributed energy systems. In this paper, a voltage-oriented doubly fed induction generator, which is often used in wind turbines, is examined. Furthermore, current, voltage, and position sensor fault detection, isolation, and reconfiguration are presented. Machine operation is not interrupted. A bank of observers provides residuals for fault detection and replacement signals for the reconfiguration. Control is temporarily switched from closed-loop to open-loop operation to decouple the drive from faulty sensor readings.
During a short period of open-loop operation, the fault is isolated using parity equations. Replacement signals from observers are used to reconfigure the drive and reenter closed-loop control. There are no large transients in the current. Measurement results and stability analysis show good results.",2009,0, 442,Fault diversity among off-the-shelf SQL database servers,"Fault tolerance is often the only viable way of obtaining the required system dependability from systems built out of ""off-the-shelf"" (OTS) products. We have studied a sample of bug reports from four off-the-shelf SQL servers so as to estimate the possible advantages of software fault tolerance - in the form of modular redundancy with diversity - in complex off-the-shelf software. We checked whether these bugs would cause coincident failures in more than one of the servers. We found that very few bugs affected two of the four servers, and none caused failures in more than two. We also found that only four of these bugs would cause identical, undetectable failures in two servers. Therefore, a fault-tolerant server, built with diverse off-the-shelf servers, seems to have a good chance of delivering improvements in availability and failure rates compared with the individual off-the-shelf servers or their replicated, nondiverse configurations.",2004,0, 443,CARRIAGE: fault tolerant CORBA system based on portable interceptors,"The fault tolerance requirement, which demands reliability and consistency in computer systems, has become a hot issue in the distributed object application field. Based on the standardized portable interceptors mechanism, the CARRIAGE system has successfully integrated ORBUS, a CORBA implementation developed by the authors, and EDEN, a fault-tolerant framework, into a new fault tolerant CORBA system, which uses the active replication style to enhance the fault tolerance service in the CORBA domain with low cost and high efficiency. Practice has shown that this software prototype conforms fully to the standard specification and provides a feasible and convenient way to glue legacy systems together without modifying the original systems or the application programs.",2002,0, 444,Formal fault tree analysis of state transition systems,"Fault tree analysis (FTA) is a traditional deductive safety analysis technique that is applied during the system design stage. However, traditional FTA does not consider transitions between states, and it is difficult to decompose complex system fault events that are composed of multiple normal components' states rather than individual component failures. To solve these problems, we first propose two different fault events for fault trees, and then present a formal fault tree construction model by introducing the concept of transition rules for event decomposition, in which the semantics of gates and minimal cut sets of fault trees are revised compared with traditional FTA.",2005,0, 445,Formal static fault tree analysis,"Fault tree analysis (FTA) is a traditional informal reliability and safety analysis technique. FTA is basically a combinational model in which standard Boolean logic constructs, such as AND and OR gates, are used to decompose the fault events. Several dynamic constructs, such as Functional Dependency (FDEP) and Priority AND (PAND) gates, have also been proposed to handle dynamic behaviors of system failure mechanisms. In this article, we focus on some paradoxes and constraints of the traditional FDEP and PAND gates, and present our static solutions to these dynamic gates.
The proposed static fault tree model is formalized with Maude, an executable algebraic formal specification language. Two example fault tolerant parallel processor (FTPP) configurations are used to demonstrate our static fault tree model.",2010,0, 446,Adaptive partition size temporal error concealment for H.264,"Existing temporal error concealment methods for H.264 often decide the partition size of the lost macroblock (MB) before recovering the motion information, without actual quality comparison between different partition modes. In this paper, we propose to select the best partition mode by minimizing the Weighted Double-Sided External Boundary Matching Error (WDS-EBME), which jointly measures the inter-MB boundary discontinuity, inter-partition boundary discontinuity and intra-partition block artifacts in the recovered MB. The proposed method estimates the best motion vectors for each of the candidate partition modes, calculates the overall WDS-EBME values for them, and selects the partition mode with the smallest overall WDS-EBME to recover the lost MB. We also propose a progressive concealment order for the 4×4 partition mode. Test results show that the adaptive partition size method always outperforms the fixed partition size methods. Both the adaptive and fixed partition size methods are much superior to the temporal error concealment (TEC) method in the H.264 reference software.",2008,0, 447,Shared Data from a Study of Measurement Uncertainty in Fault Injection,"Experimental dependability studies usually produce an amount of data substantially greater than what can be presented in a research paper or a technical report. For this reason, authors condense the results into more succinct forms that allow them to convey their message. Since a large amount of the original data is left unexplored, sharing it allows other teams to discover additional facts (as well as to compare the results to other studies). In a previous paper, we investigated sources of uncertainty in measurement results obtained using three different fault injection techniques. The resulting experimental data was shared in the AMBER raw data repository. This paper gives an overview of the study and makes an attempt at further exploring the shared data.",2010,0, 448,Trustworthy Evaluation of a Safe Driver Machine Interface through Software-Implemented Fault Injection,"Experimental evaluation is aimed at providing useful insights and results that constitute a confident representation of the system under evaluation. Although guidelines and good practices exist and are often applied, the uncertainty of results and the quality of the measuring system are rarely discussed. To complement such guidelines and good practices in experimental evaluation, metrology principles can contribute to improving experimental evaluation activities by assessing the measuring systems and the results achieved. In this paper we present the experimental evaluation by software-implemented fault injection of a safe train-borne driver machine interface (DMI), to evaluate its behavior in the presence of faults. The measuring system built for the purpose and the results obtained on the assessment of the DMI are scrutinized against basic principles of metrology and good practices of fault injection.
The trustworthiness of the results has been judged satisfactory, and the experimental campaign has shown that the safety mechanisms of the DMI correctly identify the injected faults and that a proper reaction is executed.",2009,0, 449,Efficiency analysis of illumination correction methods for face recognition performance,"Face recognition is an important task in the computer vision community leading to multiple applications such as building access control, video surveillance or forensics, to mention only a few. Face images are acquired in the enrollment process to form a database, which is the first stage, usually performed off line. The second stage involves a real-time test procedure in which a new, unseen face is captured and the face recognition system grants or denies authorization, or recognizes the identity of the person, based on similarity matching between the newly acquired face image and those existing in the database, providing a matching score. When the acquisition conditions of the enrollment process differ greatly from the test environmental conditions, some preprocessing steps may be required. One instance of such a step is illumination correction. The paper aims at analyzing the efficiency of five recent state-of-the-art normalization approaches in terms of illumination correction and their effect on face recognition. Surprisingly, to the best of our knowledge, no systematic comparison exists in the literature to date, a fact that motivated us to carry out such an analysis. We should also note that, apart from the face recognition task, other domains, such as medical imaging, could benefit from these preprocessing techniques.",2010,0, 450,Algorithm-Based Fault Tolerance for Fail-Stop Failures,"Fail-stop failures in distributed environments are often tolerated by checkpointing or message logging. In this paper, we show that fail-stop process failures in the ScaLAPACK matrix-matrix multiplication kernel can be tolerated without checkpointing or message logging. It has been proved in previous work on algorithm-based fault tolerance that, for matrix-matrix multiplication, the checksum relationship in the input checksum matrices is preserved at the end of the computation no matter which algorithm is chosen. From this checksum relationship in the final computation results, processor miscalculations can be detected, located, and corrected at the end of the computation. However, whether this checksum relationship can be maintained in the middle of the computation or not remains open. In this paper, we first demonstrate that, for many matrix-matrix multiplication algorithms, the checksum relationship in the input checksum matrices is not maintained in the middle of the computation. We then prove that, however, for the outer product version algorithm, the checksum relationship in the input checksum matrices can be maintained in the middle of the computation. Based on this checksum relationship maintained in the middle of the computation, we demonstrate that fail-stop process failures (which are often tolerated by checkpointing or message logging) in ScaLAPACK matrix-matrix multiplication can be tolerated without checkpointing or message logging.",2008,0, 451,hFT-FW: Hybrid Fault-Tolerance for Cluster-Based Stateful Firewalls,"Failures are a permanent menace to the availability of Internet services. During the last decades, numerous fault-tolerant approaches have been proposed for the wide spectrum of Internet services, including stateful firewalls.
Most of these solutions adopt reactive approaches to mask failures by replicating state-changes between replicas. However, reactive replication is a resource consuming task that reduces scalability and performance: the amount of computational and bandwidth resources to propagate state-changes among replicas might be high. On the other hand, more and more commercial off-the-shelf platforms provide integrated hardware error-detection facilities. As a result, some current fault-tolerance research works aim to replace reactive fault-handling with proactive fault-avoidance. However, pure proactive approaches are risky and they currently face serious limitations. In this work, we propose a hybrid proactive and reactive model that exploits the stateful firewall semantics to increase the overall performance of cluster-based fault-tolerant stateful firewalls. The proposed solution reduces the amount of resources involved in the reactive state-replication by means of Bayesian techniques to perform lazy replication while, at the same time, benefiting from proactive fault-tolerance. Preliminary experimental results are also provided.",2008,0, 452,Building a requirement fault taxonomy: experiences from a NASA verification and validation research project,"Fault-based analysis is an early lifecycle approach to improving software quality by preventing and/or detecting pre-specified classes of faults prior to implementation. It assists in the selection of verification and validation techniques that can be applied in order to reduce risk. This paper presents our methodology for requirements-based fault analysis and its application to National Aeronautics and Space Administration (NASA) projects. The ideas presented are general enough to be applied immediately to the development of any software system. We built a NASA-specific requirement fault taxonomy and processes for tailoring the taxonomy to a class of software projects or to a specific project. We examined requirement faults for six systems, including the International Space Station (ISS), and enhanced the taxonomy and processes. The developed processes, preliminary tailored taxonomies for critical/catastrophic high-risk (CCHR) systems, preliminary fault occurrence data for the ISS project, and lessons learned are presented and discussed.",2003,0, 453,Comparative Study of Fault-Proneness Filtering with PMD,"Fault-prone module detection is important for the assurance of software quality. We have proposed a novel approach for detecting fault-prone modules using a spam filtering technique, named fault-proneness filtering. In order to show the effectiveness of fault-proneness filtering, we conducted a comparative study with a static code analysis tool, PMD. In the study, fault-proneness filtering obtains a higher F1 than PMD.",2008,0, 454,Modified feedback configuration for sensor fault tolerant control,"Faults in process control systems can cause undesired reactions and shut-down of a plant, and could be damaging to components, to personnel or the environment. The improvement of the reliability, safety and efficiency of the system has become increasingly important. The fundamental purpose of an FTCS scheme is to ensure that faults do not result in system breakdown, albeit at a lower degree of system performance. Corrective action or prevention measures can be taken to eliminate or minimize the effect of the fault. This paper demonstrates how a feedback control structure can tolerate sensor faults in a temperature control system.
The proposed fault-tolerant control design consists of two parts: a nominal performance controller and a model based element to provide sensor fault compensating signals. The nominal controller can have any given structure that satisfies the performance specification, such as a PID controller. When a sensor fault is present, the controller input is augmented to compensate for the fault. Results of a real time implementation for temperature control are presented to demonstrate the applicability of the proposed FTCS scheme. Sensor faults and disturbances were included to test the proposed design. Deteriorated sensor performance is considered a fault.",2008,0, 455,An efficient fault-tolerant scheme for mobile agent execution,"Fault-tolerance is one of the main problems that must be resolved to improve the adoption of the agent computing paradigm. In this paper, we develop a pragmatic framework for agent system fault-tolerance. The developed framework deploys an independent checkpointing strategy with cooperating agents and passive replication to offer a low-cost, application-transparent model for reliable agent-based computing that covers all possible faults that might invalidate reliable agent execution, migration and communication, and maintains the exactly-once and non-blocking properties. Finally, we present some performance results that show the effectiveness of the proposed fault-tolerance scheme.",2006,0, 456,Robot fault-tolerance using an embryonic array,"Fault-tolerance, complex structure management and reconfiguration are seen as valuable characteristics. Embryonic arrays represent one novel approach that takes inspiration from nature to improve upon standard techniques. An existing BAE SYSTEMS RASCAL(TM) robot has been augmented so as to improve the motor control system reliability through two biologically-inspired systems: an embryonic array and an artificial immune system. This paper is concerned with the embryonic array; this is novel in that it supports datapath-wide arithmetic and logic functions. The array is configured to provide an autonomous self-repairing hardware motor controller and is realized using a standard Xilinx Virtex FPGA. As with previous embryonic systems, the logic requirement of the array is greater than that of a conventional FPGA or standard modular-redundancy approach. However, the array offers the advantages of both conventional FPGAs and modular-redundancy techniques. It is a reconfigurable computing platform that provides inherent fault-tolerance through its distributed self-repair mechanism.",2003,0, 457,Improvement of Temporal-Replication Mechanism in Mobile Agent System Fault-Tolerant Model,"Fault tolerance is one of the most important aspects of mobile-agent systems. After studying the existing mobile-agent system fault-tolerance mechanisms, both domestic and overseas, this paper is concerned with the temporal-replication methodology. On the basis of the witness agent approach proposed by Michael R. Lyu et al., an improved mechanism is introduced in this paper. There are two aspects of improvement: one is to keep the address and timestamp of the nodes that the agent has traveled; the other is to employ the agent creation node as a fixed backup. This approach can handle server failures, place failures, and failures in message passing. It is capable of detecting and recovering from most failure scenarios in mobile agent systems.
We describe the design of our fault-tolerant approach to mobile agent systems, and conduct a reliability evaluation of our approach using the simulation tools C-Sim and Matlab. The evaluation results show our approach is a promising technique for achieving mobile agent system reliability.",2007,0, 458,"Using program analysis to identify and compensate for nondeterminism in fault-tolerant, replicated systems","Fault-tolerant replicated applications are typically assumed to be deterministic, in order to ensure reproducible, consistent behavior and state across a distributed system. Real applications often contain nondeterministic features that cannot be eliminated. Through the novel application of program analysis to distributed CORBA applications, we decompose an application into its constituent structures, and discover the kinds of nondeterminism present within the application. We target the instances of nondeterminism that can be compensated for automatically, and highlight to the application programmer those instances of nondeterminism that need to be manually rectified. We demonstrate our approach by compensating for specific forms of nondeterminism and by quantifying the associated performance overheads. The resulting code growth is typically limited to one extra line for every instance of nondeterminism, and the runtime overhead is minimal, compared to a fault-tolerant application with no compensation for nondeterminism.",2004,0, 459,Virtual fault simulation of distributed IP-based designs,"Fault simulation and testability analysis are major concerns in design flows employing intellectual-property (IP) protected virtual components. In this paper we propose a paradigm for the fault simulation of IP-based designs that enables testability analysis without requiring IP disclosure, implemented within the JavaCAD framework for distributed design. As a proof of concept, stuck-at fault simulation has been performed for combinational circuits containing virtual components",2000,0, 460,Model for fault tolerance and policy from RM-ODP expressed in UML/OCL,"Fault tolerance (FT) is a topic of major concern in achieving dependable systems, for both real-time and non-real-time systems. The paper provides a model of achieving fault tolerance, based on the ISO/ITU Reference Model for Open Distributed Processing (RM-ODP). This reference model provides a system software engineering methodology for fault tolerance, an object-based model of fault tolerance, system requirements for achieving fault tolerance in an open manner, modeling constructs and rules to enable a proper system specification of fault tolerance, and business rules in terms of policies to achieve a well-formed system specification. All these aspects are discussed at some depth, but the author primarily focuses on how certain behavior can be specified and achieved in an object-based system using the constructs of the Unified Modeling Language (UML) and the Object Constraint Language (OCL)",2000,0, 461,Safety verification of fault tolerant goal-based control programs with estimation uncertainty,"Fault tolerance and safety verification of control systems that have state variable estimation uncertainty are essential for the success of autonomous robotic systems. A software control architecture called mission data system, developed at the Jet Propulsion Laboratory, uses goal networks as the control program for autonomous systems.
Certain types of goal networks can be converted into linear hybrid systems and verified for safety using existing symbolic model checking software. A process for calculating the probability of failure of certain classes of verifiable goal networks due to state estimation uncertainty is presented. A verifiable example task is presented and the failure probability of the control program based on estimation uncertainty is found.",2008,0, 462,An Efficient Algorithm To Analyze New Imperfect Fault Coverage Models,"Fault tolerance has been an essential architectural attribute for achieving high reliability in many critical applications of digital systems. Automatic recovery and reconfiguration mechanisms play a crucial role in implementing fault tolerance because an uncovered fault may lead to a system or subsystem failure even when adequate redundancy exists. In addition, an excessive level of redundancy may even reduce the system reliability. Therefore, an accurate analysis must account for not only the system structure but also the system fault and error handling behavior. The models that capture the fault and error handling behavior are called coverage models. The appropriate coverage modeling approach depends on the type of fault tolerant techniques used. Recent research emphasizes the importance of two new categories of coverage models: Fault Level Coverage (FLC) models and one-on-one level coverage (OLC) models. However, the methods for solving FLC and OLC models are much more limited, primarily because of the complex nature of the dependency introduced by the reconfiguration mechanisms. In this paper, we propose an efficient algorithm for solving FLC and OLC models.",2007,0, 463,A Combined Approach for Information Flow Analysis in Fault Tolerant Hardware,"Fault tolerance in information security devices is difficult to establish due to the large number of possible interactions in the device (e.g., embedded code, Boolean logic, electromagnetic interference, etc.). In previous work we examined information flow as a graph problem by composing orthogonal views of the device under analysis. In other work we used fault-tree analysis to reason about information flow as a systemic failure arising from certain configurations (or faults) in either the control logic or data flow 'backbone'. In this paper we combine these approaches by taking advantage of an alternative representation of fault trees as reliability block diagrams.",2007,0, 464,Self-Adaptation of Fault Tolerance Requirements Using Contracts,"Fault tolerance is a constant concern in data centers where servers have to run with a minimal level of failures. Changes in the operating conditions or in server demands, and variations of the system's own failure rate, have to be handled in such a way that SLAs are honored and services are not interrupted. We present an approach to handle fault tolerance requirements, based on component replication, which is supported by a context-aware infrastructure and guided by contracts that describe adaptation policies for each application. At run-time the infrastructure autonomically manages the deployment, the monitoring of resources, the maintenance of the fault tolerance requirements described in the contract, and reconfigures the application when necessary, to maintain compliance.
An example with an Apache web server and replicated Tomcat servers is used to validate the approach.",2009,0, 465,A Framework for Proactive Fault Tolerance,"Fault tolerance is a major concern to guarantee availability of critical services as well as application execution. Traditional approaches for fault tolerance include checkpoint/restart or duplication. However, it is also possible to anticipate failures and proactively take action before failures occur in order to minimize failure impact on the system and application execution. This document presents a proactive fault tolerance framework. This framework can use different proactive fault tolerance mechanisms, i.e., migration and pause/un-pause. The framework also allows the implementation of new proactive fault tolerance policies thanks to a modular architecture. A first proactive fault tolerance policy has been implemented, and preliminary experiments have been performed based on system-level virtualization and compared with results obtained by simulation.",2008,0, 466,"A Low-Cost Fault-Tolerant Real, Reactive, and Apparent Power Measurement Technique Using Microprocessor","Errors may creep in when measuring power by conventional methods due to the inductance and capacitance of the coils and the induced eddy current in the metal parts of the instruments through the alternating magnetic field of the current coil. Apart from these, if a fault occurs in any of the potential transformer secondary circuits or the potential coil of the measuring equipment, a conventional meter cannot detect it, which results in underregistration. In this paper, a microprocessor-based three-phase real, reactive, and apparent power measurement system is developed, which displays the power being fed to a load under both normal and faulty conditions. The microprocessor provides a simple, accurate, reliable, and economical solution to these problems. A framework of the hardware circuitry and the assembly language program for the evaluation of power values is given, and the problems to which attention should be paid when executing the proposed algorithm on the microprocessor are discussed. Illustrative laboratory test results confirm the validity and accurate performance of the proposed method in real time.",2007,0, 467,Assessing Failure of Bridge Construction Using Fuzzy Fault Tree Analysis,"Estimating exact probabilities of occurrence of bridge failure for use in conventional fault tree analysis (FTA) is difficult when fault events, such as human error, are imprecise. A fuzzy FTA model employing fuzzy sets and possibility theory to tackle this problem is proposed. An example of the collapse of a cantilever gantry during construction demonstrates the capability of this approach, which can assist safety engineers in better evaluating bridge performance.",2007,0, 468,Evanescent microwave sensor scanning for detection of sub-surface defects in wires,"Evanescent microwave probe (EMP) scanning is used to detect sub-surface defects in copper wire with high dielectric coatings. The primary interest is in winding wire used in the construction of high voltage motors, generators and transformer applications, although this technique can be applied to other aspects of stress and flaw detection. Evanescent microwave probes have the unique ability to image subsurface features under poorly conducting or dielectric materials.
This nondestructive evaluation technique will allow fast and highly accurate subsurface scans of armature or stator wire when inspecting for various surface anomalies",2001,0, 469,Using loop invariants to fight soft errors in data caches,"Ever-scaling process technology makes embedded systems more vulnerable to soft errors than in the past. One of the generic methods used to fight soft errors is based on duplicating instructions either in the spatial or temporal domain and then comparing the results to see whether they are different. This full duplication based scheme, though effective, is very expensive in terms of performance, power, and memory space. In this paper, we propose an alternate scheme based on loop invariants and present experimental results which show that our approach catches 62% of the errors caught by full duplication, when averaged over all benchmarks tested. In addition, it reduces the execution cycles and memory demand of the full duplication strategy by 80% and 4%, respectively.",2005,0, 470,The secret life of bugs: Going past the errors and omissions in software repositories,"Every bug has a story behind it. The people that discover and resolve it need to coordinate, to get information from documents, tools, or other people, and to navigate through issues of accountability, ownership, and organizational structure. This paper reports on a field study of coordination activities around bug fixing that used a combination of case study research and a survey of software professionals. Results show that the histories of even simple bugs are strongly dependent on social, organizational, and technical knowledge that cannot be solely extracted through automation of electronic repositories, and that such automation provides incomplete and often erroneous accounts of coordination. The paper uses rich bug histories and survey results to identify common bug fixing coordination patterns and to provide implications for tool designers and researchers of coordination in software development.",2009,0, 471,Application of a fault injection based dependability assessment process to a commercial safety critical nuclear reactor protection system,"Existing nuclear power generation facilities are currently seeking to replace obsolete analog Instrumentation and Control (I&C) systems with contemporary digital and processor based systems. However, as new technology is introduced into existing and new plants, it becomes vital to assess the impact of that technology on plant safety. From a regulatory point of view, the introduction or consideration of new digital I&C systems into nuclear power plants raises concerns regarding the possibility that the fielding of these I&C systems may introduce unknown or unanticipated failure modes. In this paper, we present a fault injection based safety assessment methodology that was applied to a commercial safety grade digital Reactor Protection System. Approximately 10,000 fault injections were applied to the system. This paper presents an overview of the research effort, lessons learned, and the results of the endeavor.",2010,0, 472,A PIN-Based Dynamic Software Fault Injection System,"Fault injection plays a critical role in the verification of fault-tolerant mechanisms, software testing and dependability benchmarking for computer systems.
In this paper, according to the characteristics of software faults, we propose a new fault injection design pattern based on the PIN framework provided by Intel, and develop a PIN-based dynamic software fault injection system (PDSFIS). Faults can be injected by PDSFIS without the source code of the target applications under assessment, nor does the injection process involve interruption or software traps. Experimental assessment results of an Apache Web server obtained by dependability benchmarking are presented to demonstrate the potential of PDSFIS.",2008,0, 473,Multi-cycle Fault Injections in Error Detecting Implementations of the Advanced Encryption Standard,"Fault injections can easily break a cryptosystem: hence, many dedicated error detection schemes have been proposed, relying on various forms of redundancy (e.g., temporal redundancy). In this paper, we analyze the error detection coverage of two AES implementations, based on the double-data-rate computation template, with emulated faults of several durations.",2007,0, 474,Novel fault localization approach for ATPG / scan-fault failures in complex sub-nano FPGA/ASIC debugging,Fault isolation in automated test pattern generation (ATPG) / scan-fault testing has become increasingly challenging in today's advanced integrated circuits (ICs) as diagnosis results usually point to extensive faulty nets which can be physically widespread on the die. This work highlights a novel approach to understanding electrical fault data with promising physical failure analysis (PFA) results in FPGA/ASIC debugging.,2010,0, 475,Optimization for Fault Localization in All-Optical Networks,"Fault localization is a critical issue in all-optical networks. The limited-perimeter vector matching (LVM) protocol is a novel fault-localization protocol proposed for localizing single-link failures in all-optical networks. In this paper, we study the optimization problems in applying the LVM protocol in static all-optical networks. We consider two optimization problems: one is to optimize the traffic distribution so that the fault-localization probability in terms of the number of localized links is maximized, and the other is to optimize the traffic distribution so that the time for localizing a failed link is minimized. We formulate each of the two problems as an integer linear programming problem, and use the CPLEX optimization tool to solve the formulated problems. We show that by optimizing the traffic distribution the fault-localization probability can be maximized and the fault-localization time can be minimized. Moreover, a heuristic algorithm is proposed to evaluate the optimization results through simulation experiments.",2009,0, 476,Exploiting Quasiperiodicity in Motion Correction of Free-Breathing Myocardial Perfusion MRI,"Free-breathing image acquisition is desirable in first-pass gadolinium-enhanced magnetic resonance imaging (MRI), but the breathing movements hinder the direct automatic analysis of the myocardial perfusion and qualitative readout by visual tracking. Nonrigid registration can be used to compensate for these movements but needs to deal with local contrast and intensity changes with time. We propose an automatic registration scheme that exploits the quasiperiodicity of free breathing to decouple movement from intensity change. First, we identify and register a subset of the images corresponding to the same phase of the breathing cycle.
This registration step deals with small differences caused by movement but maintains the full range of intensity change. The remaining images are then registered to synthetic references that are created as a linear combination of images belonging to the already registered subset. Because of the quasiperiodic respiratory movement, the subset images are distributed evenly over time and, therefore, the synthetic references exhibit intensities similar to their corresponding unregistered images. Thus, this second registration step needs to account only for the movement. Validation experiments were performed on data obtained from six patients, three slices per patient, and the automatically obtained perfusion profiles were compared with profiles obtained by manually segmenting the myocardium. The results show that our automatic approach is well suited to compensate for the free-breathing movement and that it achieves a significant improvement in the average Pearson correlation coefficient between manually and automatically obtained perfusion profiles before (0.87 ± 0.18) and after (0.96 ± 0.09) registration.",2010,0, 477,Full coverage location of logic resource faults in a SOC co-verification technology based FPGA functional test environment,"Full coverage location of logic resource faults is vital for FPGA design and fabrication, rather than merely detecting whether faults are present. Taking advantage of the flexibility and observability of software in conjunction with the high-speed simulation of hardware, an in-house FPGA functional test environment based on SOC co-verification technology, embedded with an in-house computerized tool, ConPlacement, can locate logic resource faults automatically, exhaustively and repeatedly. The approach to implementing full coverage location of configurable logic block (CLB) faults with the FPGA functional test environment is presented in the paper. Experimental results for the XC4010E demonstrate that full coverage location of logic resource faults, as well as multi-fault positions, can be realized.",2009,0, 478,Software-Based Algorithm for Modeling and Correction of Gradient Nonlinearity Distortions in Magnetic Resonance Imaging,"Functional radiosurgery is a noninvasive stereotactic technique that requires magnetic resonance image (MRI) sets with high spatial resolution. Gradient nonlinearities introduce geometric distortions that compromise the accuracy of MRI-based stereotactic localization. We present a gradient nonlinearity correction method based on a cubic phantom MRI data set. The approach utilizes a sum of spherical harmonics to model the geometrically warped planes of the cube and applies the model to correct arbitrary image sets acquired with the same scanner. In this paper, we give a detailed description of the Matlab distortion correction program, report on its performance in stereotactic localization of phantom markers, and discuss the possibility to accelerate the code using general-purpose computing on graphics processing units (GPGPU) techniques.",2008,0, 479,RTL-based functional test generation for high defects coverage in digital SOCs,"Functional test has long been viewed as unfit for production test. The purpose of this contribution is to propose an RTL-based test generation methodology which can be rewardingly used both for design validation and to enhance the test effectiveness of classic, gate-level test generation.
Hence, an RTL-based defect-oriented test generation methodology is proposed, for which a high defect coverage (DC) and a relatively short test sequence can be derived, thus allowing low-energy operation in test mode. The test effectiveness, regarding DC, is shown to be weakly dependent on the structural implementation of the behavioral description. The usefulness of the methodology is ascertained using the VeriDOS simulation environment and the CMUDSP ITC'99 benchmark circuit",2000,0, 480,Design Fault Directed Test Generation for Microprocessor Validation,"Functional validation of modern microprocessors is an important and complex problem. One of the problems in functional validation is the generation of test cases that have a higher potential to find faults in the design. We propose a model-based test generation framework that generates tests for design fault classes inspired by software validation. There are two main contributions in this paper. Firstly, we propose a microprocessor modeling and test generation framework that generates test suites to satisfy modified condition decision coverage (MCDC), a structural coverage metric that detects most of the classified design faults, as well as the remaining faults not covered by MCDC. Secondly, we show that there exists a good correlation between the types of design faults proposed by software validation and the errors/bugs reported in case studies on microprocessor validation. We demonstrate the framework by modeling and generating tests for the microarchitecture of VESPA, a 32-bit microprocessor. In the results section, we show that the tests generated using our framework's coverage-directed approach detect the fault classes with 100% coverage, when compared to model-random test generation",2007,0, 481,Further Results on Prony Approximation for Evaluation of the Average Probability of Error,"Further results on a Prony approximation for efficient evaluation of the average probability of error over fading channels are presented. A generic definition of the average probability of error for a communication system is given. The Prony approximation method is shown to be an extension of the Chernoff bound and the MGF method. An improved algorithm for obtaining the parameters of the Prony approximation is developed. New, simple and highly accurate Prony approximations for the conditional probability of bit-error of M-ary modulations assuming parameter quantization to only 3 significant figures are presented. Furthermore, the rank and determinant design criteria for space-time block codes in Gaussian channels are shown to be valid for the exact pairwise error probability. Numerical results indicate that the relative approximation error using the Prony approximation method is less than 5% of the exact average probability of error for all practical values of the signal-to-noise ratio.",2008,0, 482,Enhancing Fault Tolerance And Reliability In GAIAOS Through Structured Overlay Network,"GAIAOS event manager is a distributed event service, based on the CORBA event service with a centralized entry point, resulting in limited fault resilience and scalability. In this paper, we propose a decentralized event service for GAIAOS through the use of a DHT-based structured overlay network to overcome these problems. The proposed architecture provides a completely distributed event communication mechanism without any centralized entry point.
Incorporation of the structured overlay network in GAIAOS results in a higher degree of fault resilience and scalability",2006,0, 483,GMPLS fault management and impact on service resilience differentiation,"Generalized Multi-Protocol Label Switching (GMPLS) is currently under standardization. It basically reuses the MPLS control plane (IP routing and signaling) for various technologies such as fiber switching, DWDM, SONET, and packet MPLS. Since GMPLS runs in core networks, fault management is of major concern. However, fast fault recovery and backup capacity assignments are very expensive and not all customers need this or are willing to pay for it. Therefore, we propose in this paper to use several protection and bandwidth-sharing schemes on the same network in order to provide differentiated services in the resilience space. This means an operator can offer and provide several customized services. The service management system implementing the schemes is built on top of a GMPLS network management system developed in our lab.",2003,0, 484,A Virtual Instrument for Sensors Nonlinear Errors Calibration,"Generally, the input-output characteristics of many sensors are nonlinear. To improve measurement precision, the sensors' nonlinear errors need to be calibrated. At present many calibration methods have been researched in terms of theoretical analysis, but no effective software has been developed from the standpoint of actual engineering applications. In this paper, a virtual instrument for sensor nonlinear error calibration, based on Labview6.1, is presented. It consists of three kinds of nonlinear error calibration methods: linear fit calibration, polynomial fit calibration and artificial neural network calibration. Tests and performance analysis of the VI were performed with data from a fibre-optic displacement sensor",2005,0, 485,PTGC: A parallel triangular geometric correction algorithm for remote sensing images,"Geometric correction, a compute-intensive task, is a critical step in the processing of remote sensing images. As is known, correction based on a polynomial transform leads to significant computational errors when applied to regions with fluctuating terrain. In this paper we present a novel and more precise parallel triangular geometric correction algorithm to deal with images of fluctuating terrain with large amounts of data. In the algorithm, dynamic cut-off points are adopted to achieve good load balance, and the triangle scan-line structure is utilized to preserve the index of the triangle hashed by each pixel; hence, redundant calculation is avoided. The algorithm has good scalability and is able to handle arbitrary geometric distortion. The accuracy of our approach is approximately 80% higher than that of the polynomial geometric correction algorithm, with similar computational overhead.",2009,0, 486,Fusing hard and soft computing for fault management in telecommunications systems,"Global telecommunication systems are at the heart of the Internet revolution. To support Internet traffic they have built-in redundancy to ensure robustness and quality of service. This requires complex fault management. The traditional hard approach is to reduce the number of alarm events (symptoms) presented to the operating engineer through monitoring, filtering and masking. The goal of the soft approach is to automate the analysis fully so that the underlying fault is determined from the evidence available and presented to the engineer.
This paper describes progress toward automated fault identification through a fusion between these soft and hard computing approaches.",2002,0, 487,Optimization of the Alberty and Hespelt carrier frequency error detection algorithm,"Following a literature survey, the Alberty and Hespelt frequency error detection (FED) algorithm was chosen for software implementation in an all-digital demodulator. For the correct operation of this algorithm, it is desirable to have symmetric bandpass filters. In this paper a simple method is presented to ensure that the bandpass filters are fully symmetrical. Simulations show that even with symmetric bandpass filters, there is a substantial amount of tracking jitter. To overcome this problem, a smart filter has been implemented to ensure that the tracking performance is almost jitter-free without increasing the error acquisition time",2005,0, 488,Spherical Near-Field Antenna Measurements: A Review of Correction Techniques,"Following an introductory review of spherical near-field scanning measurements, with emphasis on the general applicability of the technique, we present a survey of the various methods to improve measurement accuracy by correcting the acquired data before performing the transform and by special processing of the resulting data following the transform. A post-processing technique recently receiving additional attention is the IsoFilter™ technique that assists in suppressing extraneous stray signals due to scattering from antenna range apparatus.",2007,0,7660 489,Joint Source-Channel Rate-Distortion Optimization for H.264 Video Coding Over Error-Prone Networks,"For a typical video distribution system, the video contents are first compressed and then stored in the local storage or transmitted to the end users through networks. When the compressed videos are transmitted through error-prone networks, error robustness becomes an important issue. In the past years, a number of rate-distortion (R-D) optimized coding mode selection schemes have been proposed for error-resilient video coding, including a recursive optimal per-pixel estimate (ROPE) method. However, the ROPE-related approaches assume integer-pixel motion-compensated prediction rather than subpixel prediction, whose extension to H.264 is not straightforward. Alternatively, an error-robust R-D optimization (ER-RDO) method has been included in the H.264 test model, in which the estimate of pixel distortion is derived by simulating the decoding process multiple times in the encoder. Obviously, the computing complexity is very high. To address this problem, we propose a new end-to-end distortion model for R-D optimized coding mode selection, in which the overall distortion is taken as the sum of several separable distortion items. Thus, it can suppress the approximation errors caused by pixel averaging operations such as subpixel prediction. Based on the proposed end-to-end distortion model, a new Lagrange multiplier is derived for R-D optimized coding mode selection in packet-loss environments by taking into account the network conditions. The rate control and complexity issues are also discussed in this paper",2007,0, 490,Fault-based attack of RSA authentication,"For any computing system to be secure, both hardware and software have to be trusted. If the hardware layer in a secure system is compromised, not only would it be possible to extract secret information about the software, but it would also be extremely hard for the software to detect that an attack is underway.
In this work we detail a complete end-to-end fault attack on a microprocessor system and practically demonstrate how hardware vulnerabilities can be exploited to target secure systems. We developed a theoretical attack on the RSA signature algorithm, and we realized it in practice against an FPGA implementation of the system under attack. To perpetrate the attack, we inject transient faults into the target machine by regulating the voltage supply of the system. Thus, our attack does not require access to the victim system's internal components, but simply proximity to it. The paper makes three important contributions: First, we develop a systematic fault-based attack on the modular exponentiation algorithm for RSA. Second, we expose and exploit a severe flaw in the implementation of the RSA signature algorithm on OpenSSL, a widely used package for SSL encryption and authentication. Third, we report on the first physical demonstration of a fault-based security attack on a complete microprocessor system running unmodified production software: we attack the original OpenSSL authentication library running on a SPARC Linux system implemented on FPGA, and extract the system's 1024-bit RSA private key in approximately 100 hours.",2010,0, 491,A pattern defect inspection method by parallel grayscale image comparison without precise image alignment,"For automatic visual inspection of patterns on printed wiring boards and/or patterned wafers, this paper presents a new defect detection method for grayscale images without precise image alignment. Most of the conventional visual inspection algorithms based on grayscale reference comparison require image alignment with subpixel precision or precision within 1 pixel; however, it is difficult to achieve such precise image alignment in every image. While a defect inspection method without precise image alignment has been previously proposed for binary images, the extension to grayscale images we discuss is indispensable for detecting more minute defects. We propose dynamic tolerance control based on grayscale morphology to reduce false defects on pattern edges, and use a gray dilation operation so that a weakness of the original method for binary images, an inability to detect the absence of minute patterns, is overcome. Theoretical analysis and experimental results show that the proposed method is capable of detecting subpixel-sized defects, and has practical detection performance.",2002,0, 492,Performance evaluation of a fault-tolerant mechanism based on replicated distributed objects for CORBA,"For future applications, it is important to develop systems using small objects. Such systems must be able to provide uninterrupted services even when some small objects stop. Replicating such objects is one way to do this. However, introducing some type of redundancy into systems generally adds some overhead. Our proposed model reduces this overhead. It is implemented with multi-threaded execution for applications in actual systems. We measured the performance of an implementation for applications connected to databases. The results show that the overhead for ordinary execution and the time required to switch the replicas are both acceptably small.
This technique will therefore play an important role in future systems",2001,0, 493,Geometric correction of scanned topographic maps using capable input information,"For making digital maps by raster-vector conversion of printed binary topographic maps, one of the problems is how to correct the geometric distortion that originates in the characteristics of individual scanners. To use the innumerable resources of printed binary topographic maps effectively, we propose an interactive interface and geometric correction algorithms, which use the coordinate information peculiar to each map. The examples prove that the interactive interface using a cross cursor is useful and efficient for obtaining accurate input coordinates, and the proposed correction algorithm demonstrates high accuracy of the corrected coordinates.",2003,0, 494,Random double-bit error-correcting decomposable codes,"For odd m, a family of decomposable [3(2^m-1), 3(2^m-1)-3m, 5] codes, based on the |a+x|b+x|a+b+x| construction, is proposed. A simple high-speed decoding algorithm for these codes, suitable for implementation in combinational circuits, is described",2001,0, 495,Modeling of the induction machine for the diagnosis of rotor defects. Part. II. Simulation and experimental results,"For Part I see ibid., p.Z001665-Z001672. Two software packages are coupled for the parameter calculation and the differential integration of the dynamic model of the squirrel cage rotor induction machine in Part II of this paper. From a theoretical study of the induction machine using a multiple coupled circuits approach, we worked out the various blocks necessary for simulation. The calculation of machine inductances (with and without rotor defects) is carried out with the tools of MATLAB before the simulation begins under SIMULINK. The machine parameters are then computed using the concepts of electric machine construction. Simulation and experimental results are presented to confirm the validity of the proposed model.",2005,0, 496,CT Based Attenuation Correction for PET Brain Imaging,"For research and clinical PET brain studies performed on PET/CT systems, the CT image is often of little benefit beyond attenuation correction. The goal of this work is both to investigate the quantitative accuracy of CT-based attenuation correction (CTAC) for PET brain studies and to determine the lowest-dose protocol that performs adequately. Measurements were performed on a GE Discovery ST PET/CT scanner using the GE kVp-dependent CTAC algorithm. The effects of pitch on CT images were investigated by acquiring CT images of a Defrise disk phantom. The effects of pitch and X-ray tube voltage and current were investigated using a human skull phantom encased in plastic filled with F-18 radioactivity. This phantom was imaged and CT-based attenuation correction factors (CTACF) were created for several permutations of pitch (0.562:1, 0.938:1, 1.375:1, 1.75:1), voltage (80, 100, 120, 140 kVp) and current (10, 100mA). Emission images were acquired and reconstructed using the 3-D reprojection algorithm with the various CTACFs. The measured mean activity concentration is independent of pitch, kVp, and mA and accurate on an absolute scale to ~5%. Anatomically sized regional differences in the brain region indicate that tube voltages less than 100 kVp may not perform adequately (10% of values have a discrepancy greater than 5%).
Results indicate that higher pitch, lower current, and tube voltages down to 100 kVp perform equivalently to higher-dose configurations.",2006,0, 497,A novel fault-tolerant control method of wireless sensor networks,"For sensor distribution in wireless sensor networks, a neural network fault-tolerant control strategy is proposed: first, the control law under various fault conditions is designed using the method of system reconstruction, and then the control law features are learned by a neural network. After learning, the neural network can act as the controller of the system. Simulation results show that the neural network controller can replace the original controller and, for an unknown fault in the system, it achieves the same fault tolerance.",2010,0, 498,A length compensation method to eliminate the varying length defect in one dimensional fisheye views,"Graphical fisheye view is an effective technique for visualizing and navigating large information structures. However, there are still technical difficulties that hinder its broader application. One of the prominent problems is the varying length effect seen in most fisheye views. The varying length effect refers to a phenomenon in which the length (or height) of a fisheye component is not fixed, but varies with the location of the focal point. This effect may bring some disadvantages that reduce the usability of a fisheye component. To overcome this defect, sporadic solutions have been proposed for specific implementations, but a systematic method has not yet been seen. This paper proposes a length compensation method to eliminate the varying length defect for one dimensional fisheye components. The method provides solutions for handling both discrete and continuous magnifications. The mathematical foundation of the method is given, and the implemented prototype proves that it is effective.",2010,0, 499,Gravity measurement from moving platform by second order Kalman Filter and position and velocity corrections,"Gravity measurement is an effective tool for oil and gas exploration. Gravity observation from moving platforms is particularly important, especially for remote areas and offshore fields, using aircraft, boats, ships, submarines, satellites, vehicles, etc. The measurement is complicated by the difficulty of discerning gravity from platform accelerations, nonlinearity, drift and dynamic angular movement. The presented solution is a second-order Kalman filter, a recursive estimator that is applied to highly nonlinear measurement problems. The filter optimally combines data from three-axis gyros, accelerometers and platform position and velocity signals to provide accurate attitude and gravity measurement. Extensive simulations verified the accuracy and robustness of the proposed method for measurement from different vehicles in various dynamic environments.",2009,0, 500,Design of Tone-Dependent Color-Error Diffusion Halftoning Systems,"Grayscale error diffusion introduces nonlinear distortion (directional artifacts and false textures), linear distortion (sharpening), and additive noise. Tone-dependent error diffusion (TDED) reduces these artifacts by controlling the diffusion of quantization errors based on the input graylevel. We present an extension of TDED to color. In color-error diffusion, which color to render becomes a major concern in addition to finding optimal dot patterns. We propose a visually meaningful scheme to train input-level (or tone-) dependent color-error filters.
Our design approach employs a Neugebauer printer model and a color human visual system model that takes into account spatial considerations in color reproduction. The resulting halftones overcome several traditional error-diffusion artifacts and achieve significantly greater accuracy in color rendition",2007,0, 501,A fault-tolerance mechanism in grid,"Grid appears as an effective technology coupling geographically distributed resources for solving large-scale problems over wide area networks. Fault tolerance in grid systems is a significant and complex issue for securing stable and reliable performance. To date, various techniques exist for detecting and correcting faults in distributed computing systems. Unfortunately, little effort has focused on fault tolerance in grid environments, especially with the emergence of OGSA. A new fault-tolerant mechanism is needed to detect and recover from service faults and node crashes. Based on our previous work on Java thread state capturing and existing mobile agent techniques, we put forward a fault-tolerant mechanism providing effective fault-handling and recovery methods.",2003,0, 502,Managing Faults for Distributed Workflows over Grids,"Grid applications composed of multiple, distributed jobs are common areas for applying Web-scale workflows. Workflows over grid infrastructures are inherently complicated due to the need to both functionally assure the entire process and coordinate the underlying tasks. Often, these applications are long-running, and fault tolerance becomes a significant concern. Transparency is a vital aspect of understanding fault tolerance in these environments.",2010,0, 503,DDGrid: A Grid Computing Environment with Massive Concurrency and Fault-Tolerance Support,"Grid Computing is an effective computing paradigm widely used in solving complex problems. There are a variety of existing grid middleware systems which support the operation of grid infrastructures, including CNGrid GOS, EGEE gLite, Globus Toolkit, and OSG Condor, etc. These grid infrastructures focus on encapsulating underlying computing and storage resources and providing necessary basic services such as batch job service, information service, scheduling service, and cross-domain security, etc. Some other features, such as fault tolerance and massive concurrency support, are vital to the success of real applications, especially complex and long-running applications. These features have not been the focus of current grid systems. DDGrid, a key project supported by CNGrid (China National Grid), aims at establishing a grid computing environment that can utilize computing resources scattered over the Internet to carry out virtual-screening operations which require computing power that a single institute or company cannot afford. In our design and implementation of DDGrid, we propose a master/worker mode which effectively utilizes the computing resources that the underlying grid infrastructure provides, and tries to provide the additional features of fault tolerance and massive concurrency support that are essential to real applications.",2008,0, 504,Fault tolerant mechanism in grid based on Backup Node,"Grid is a very efficient technology for performing heavy processing with distributed resources. These resources are widely geographically distributed and lack a central control unit. One of the important challenges in grids is fault tolerance, because the grid resources in this environment are not reliable.
To avoid duplicating processing already done by a resource, the checkpointing mechanism has been proposed. The state of the application at a particular point in time can be stored, and in case of failure the status information can be restored on a new node so that computation can continue. Checkpoint storage location is a major challenge in fault-tolerant techniques. In this paper we present a new method using a backup node, adding a component to GRAM to improve efficiency and fault tolerance. With a dedicated backup node, the checkpoint information can be saved on it. This method increases the efficiency and reliability of the system.",2010,0, 505,Achieving Fault Tolerance on Grids with the CPPC Framework and the GridWay Metascheduler,"Grids have brought a significant increase in the number of available resources that can be provided to applications. In the last decade, an important effort has been made to develop middleware that provides grids with functionalities related to application execution. However, support for fault-tolerant executions is either lacking or limited. This paper presents an experience endowing parallel executions on grids with fault tolerance support through the integration of CPPC, a checkpointing tool for parallel applications, and GridWay, a well-known metascheduler provided with the Globus Toolkit. Since the two tools are not immediately compatible, a new architecture, called CPPC-GW, has been designed and implemented to allow for the transparent execution of CPPC applications through GridWay. The performance of the solution has been evaluated using the NAS Parallel Benchmarks. Detailed experimental results show the low overhead of the approach.",2010,0, 506,A broadband network fault distribution model,"Growth of the telecommunications market brings more and more types of broadband services, and the total number of broadband users is also growing. All of this leads to an increasing number of faults experienced by users, which may have various causes. Among the most common causes of errors are faults in the access network, failures of customer equipment, errors in the core network, errors in the access devices, etc. For telecom operators it is very important to manage the removal of these errors well, because the quality of customer services depends on it. This article explores the most common locations of errors and describes the time distribution of their occurrence. As part of this work a fault generator was created, which tries to realistically predict the appearance of user faults. The generator is modeled using the method of Fourier series and a quantile function.",2009,0, 507,Temporal error concealment algorithm for H.264/AVC using omnidirectional motion similarity,"H.264/AVC is the newest of several video compression standards. The main goals of H.264/AVC are to achieve efficient compression performance and network-friendly video coding. However, if an error occurs when transmitting compressed video, error concealment is needed to prevent error propagation and to improve the video quality. In this paper, we propose a temporal error concealment algorithm which provides high performance for H.264/AVC. When an error occurs in an inter-coded frame, the proposed algorithm uses the property that the motion vectors (MVs) of the erroneous macroblock (MB) and the neighboring MBs have high similarity to select a group of candidate MVs.
Next, a weighted overlapped boundary matching algorithm using the credibility of the information selects the best candidate MV from the group of candidate MVs. The experimental results show that the proposed algorithm improves PSNR by up to 3.02 dB compared with the boundary matching algorithm (BMA).",2010,0, 508,Physical defect modeling for fault insertion in system reliability test,"Hardware fault-insertion test (FIT) is a promising method for system reliability test and diagnosis coverage measurement. It improves the speed of releasing a quality diagnostic program before manufacturing and provides feedback on the fault tolerance of a very complicated large system. Certain levels of insufficient fault tolerance can be fixed in the current system, but others may require ASIC or overall system architectural modifications. The FIT is achieved by introducing an artificial fault (defect modeling) at the pin level of a module to mimic any physical defect behavior within the module, such as an SEU (single event upset) or an escaped delay defect. We present a hardware architectural solution for pin fault insertion. We also present a simulation framework and optimization techniques for selecting a subset of module pins for FIT, such that the desired coverage is obtained under the constraint of limited FIT pins due to the costs of the associated implementation. Experimental results are presented for selected ISCAS and OpenCore benchmarks, as well as for an industrial circuit.",2009,0, 509,Clustering Intelligent Sensor Nodes for Distributed Fault Detection & Diagnosis,"Having studied distributed process monitoring using intelligent nodes in a cluster, we now extend the study to include using multiple clusters of nodes to monitor multiple process units interconnected to form a process plant. Our case study is that of a reaction process consisting of two CSTRs and auxiliary equipment such as tanks, heat exchangers etc. The study aims to explore the possibility of a new framework of distributed (and collaborative) process monitoring, in which fault detection and diagnosis is performed at the physical sensor and sensor cluster level. The resulting framework should be a better advisory tool for situation analysis and fault diagnosis by plant operators; potential extensions include prognosis and inclusion in control and fault recovery.",2006,0, 510,Fault Tolerant ICAP Controller for High-Reliable Internal Scrubbing,"Highly reliable reconfigurable applications today require system platforms that can easily and quickly detect and correct single event upsets. This capability, however, can be costly for FPGAs. This paper demonstrates a technique for detecting and repairing SEUs within the configuration memory of a Xilinx Virtex-4 FPGA using the ICAP interface. The internal configuration access port (ICAP) provides a port internal to the FPGA for configuring the FPGA device. An application note demonstrates how this port can be used for both error injection and scrubbing (L. Jones, 2007). We have extended this work to create a fault-tolerant ICAP scrubber by triplicating the internal ICAP circuit using TMR and block memory scrubbing. This paper will describe the costs, benefits, and reliability of this fault-tolerant ICAP controller.",2008,0, 511,Hardware/software solution for high precision defect correction in digital image sensors,"High resolution image sensors are now standard in imaging devices such as mobile phones with camera functionality.
Resolution improvement in very small sensors is obtained by decreasing the pixel size but, in CMOS sensors, the likelihood of defective pixels also increases. Hence, sophisticated processing is necessary for achieving high quality images despite noise and defects. This paper presents a hardware/software solution for high precision correction of defective pixels in an image sensor. The method maintains an always up-to-date map of the defective pixels and also allows detection of new defects as they show up during the lifetime of the sensor. The reliability of the map is assured by tracking the history of pixel defectiveness. The map is updated automatically and in real time without user intervention.",2008,0, 512,Geometric Correction of High Resolution Satellite Imagery and its Residual Analysis,"High resolution satellite images are prone to geometric distortions. To correct these, the process of geometric correction becomes vital. Knowledge of satellite altitude, attitude and position alone, even with digital elevation model (DEM) information, is not adequate for the geometric correction requirements. Therefore the authors designed an algorithm for the removal of geometric distortions in satellite imagery, in which a new geo-referencing method called the pixel projection method was applied along with the selection of precise ground control points (GCPs). In the pixel projection method, the vertices of the remotely sensed image are geo-located based on ancillary data. For GCP precision, the least squares method was used to cater for instrument bias. GCPs were selected using the Google Earth software. With that approach, precise geo-referencing of satellite imagery was achieved and a level-1 image was successfully converted to a level-3 geometrically corrected image. In this paper the authors carry out a residual analysis of the proposed method. In the first step, image-to-image matching was performed and the MSE (mean square error) was calculated. In the second step, 8 points in the original and geo-referenced images were identified and their MSE was calculated. It is observed that the new approach yields more precise geo-referencing and the image is found to be accurately geometrically corrected",2006,0, 513,Improvement of bit error rate using channel interleaving for channel binding WLAN prototype,"High-speed wireless LAN prototype of 324 Mbit/s has been developed. Six-channel binding of 802.11a signals in the frequency domain was employed to increase the PHY data rate. Frame aggregation was employed to improve MAC-SAP throughput. Individual adaptive data rate setting on each channel, so-called adaptive dynamic channel assignment, was implemented in order to achieve maximum MAC-SAP throughput under any channel condition. In this paper, multiple channel interleaving over all frequency channels for the randomization of burst errors is presented. Interleaving can be carried out over all channels or within individual channels. Bit and packet error rates were measured using the implemented WLAN equipment with and without multiple channel interleaving. It was confirmed that the improvement using multiple channel interleaving was measured to be 2 dB.",2008,0, 514,High-impedance fault detection using discrete wavelet transform and frequency range and RMS conversion,"High-impedance faults (HIFs) are faults which are difficult to detect by overcurrent protection relays. Various pattern recognition techniques have been suggested, including the use of the wavelet transform.
However, this method cannot indicate the physical properties of the output coefficients of the wavelet transform. We propose to use the Discrete Wavelet Transform (DWT) together with frequency range and rms conversion to apply a pattern recognition based detection algorithm for high-impedance fault detection in electric distribution systems. The aim is to recognize the converted rms voltage and current values caused by the arcs usually associated with HIFs. The analysis using the discrete wavelet transform (DWT) with the conversion yields measurement voltages and currents which are fed to a classifier for pattern recognition. The classifier is based on the nearest neighbor rule. It is proposed that this method can function as a decision support software package for HIF identification which could be installed in an alarm system.",2005,0, 515,Simple correction of chemical shift changes in magnetic resonance spectroscopy quantitation,"High-resolution magic angle spinning (HRMAS) 1H spectroscopy is playing an increasingly important role in diagnosis. This technique enables setting up metabolite profiles of ex vivo pathological and healthy tissue. Automatic quantitation of HRMAS signals provides reliable reference profiles to monitor diseases and pharmaceutical follow-up. Nevertheless, for several metabolites chemical shifts may slightly differ according to the micro-environment in the tissue or cells, in particular its pH. This hampers accurate estimation of the metabolite concentrations, mainly when using quantitation algorithms based on a metabolite basis-set. In this work, we propose a user-friendly way to circumvent this problem based on stretching of the metabolite basis-set signals and maximization of the correlation between the HRMAS and basis-set spectra prior to quantitation.",2010,0, 516,Rotor cage fault diagnosis in induction motors based on spectral analysis of current Hilbert modulus,"Hilbert transformation is an ideal phase-shifting tool in signal processing. By applying the Hilbert transform to a signal, its conjugate is obtained. The Hilbert modulus is defined as the sum of the squares of a signal and its conjugate. This work presents a method by which rotor faults of squirrel cage induction motors, such as broken rotor bars and eccentricity, can be diagnosed. The method is based on the spectral analysis of the stator current Hilbert modulus of the induction motors. Theoretical analysis and experimental results demonstrate that it has the same rotor fault detection ability as the extended Park's vector approach. The vital advantage of the former is the smaller hardware and software expenditure compared with existing methods.",2004,0, 517,Coordinated application of multiple description scalar quantization and error concealment for error-resilient MPEG video streaming,"Historically, multiple description coding (MDC) and postprocessing error concealment (ECN) algorithms have evolved separately. In this paper, we propose a coordinated application of multiple description scalar quantizers (MDSQ) and ECN, where the smoothness of the video signal helps to compensate for the loss of descriptions. In particular, we perform a reconstruction that is consistent with the data received at the decoder. When only a single description is available, the video is reconstructed in such a way that: 1) if we were to regenerate two descriptions (from the reconstructed video), one of them would be equivalent to the received description and 2) the reconstructed video is spatiotemporally smooth.
Experimental results with several video sequences demonstrated a peak signal-to-noise ratio (PSNR) improvement of 0.9-2.8 dB for intracoded frames. The PSNR improvements for intercoded frames were negligible. However, in both cases, the visual improvements were much more striking than the PSNR improvement suggested.",2005,0, 518,Fault tolerant analysis of multi-agent manufacturing systems based Petri nets,"Holonic manufacturing system (HMS) is a paradigm based on multi-agent system theory that allows fast reconfiguration of available resources to cope with changing product demands in a highly uncertain manufacturing environment. The advantages and flexibility offered by HMS pose new challenges and complexities in the modeling and design of production control systems. For example, HMS is largely based on multi-agent systems (MAS). The contract net protocol is a well-known negotiation and task distribution mechanism for MAS. However, production processes cannot be modeled with the contract net protocol. Instead, most of the existing literature models production processes based on Petri net theory. The lack of integration among different modeling tools makes it difficult to apply existing tools to the modeling, planning and control of HMS. A promising solution to model HMS is to combine the flexibility and robustness of multi-agent theory with the modeling and analytical power of Petri nets. This paper focuses on the development of a framework to model HMS by extending the contract net protocol with a timed Petri net model. The main results include: (1) a nominal collaborative Petri net (CPN) agent model for HMS; (2) liveness conditions for CPN; (3) a resource unavailability model to capture the effects of resource failures; (4) fault tolerance conditions to test whether a certain type of resource failure is allowed based on the Petri net agent model of HMS; and (5) a collaborative algorithm to test the feasibility of a solution.",2004,0, 519,Haloperidol Impairs Learning and Error-related Negativity in Humans,"Humans are able to monitor their actions for behavioral conflicts and performance errors. Growing evidence suggests that the error-related negativity (ERN) of the event-related cortical brain potential (ERP) may index the functioning of this response monitoring system and that the ERN may depend on dopaminergic mechanisms. We examined the role of dopamine in the ERN and behavioral indices of learning by administering either 3 mg of the dopamine antagonist (DA) haloperidol (n = 17); 25 mg of diphenhydramine (n = 16), which has a similar CNS profile but without DA properties; or placebo (n = 18) in a randomized, double-blind manner to healthy volunteers. Three hours after drug administration, participants performed a go/no-go Continuous Performance Task, the Eriksen Flanker Task, and a learning-dependent Time Estimation Task. Haloperidol significantly attenuated ERN amplitudes recorded during the flanker task, impaired learning of time intervals, and tended to cause more errors of commission, compared to placebo, which did not significantly differ from diphenhydramine. Drugs had no significant effects on the stimulus-locked P1 and N2 ERPs or on behavioral response latencies, but tended to affect post-error reaction time (RT) latencies in opposite ways (haloperidol decreased and diphenhydramine increased RTs).
These findings support the hypothesis that the DA system is involved in learning and the generation of the ERN.",2004,0, 520,Topological Correction of Hypertextured Implicit Surfaces for Ray Casting,"Hypertextures are a useful modelling tool in that they can add three-dimensional detail to the surface of otherwise smooth objects. Hypertextures can be rendered as implicit surfaces, resulting in objects with a complex but well defined boundary. However, representing a hypertexture as an implicit surface often results in many small parts being detached from the main surface, turning an object into a disconnected set. Depending on the context, this can detract from the realism of a scene, where one usually does not expect a solid object to have clouds of smaller objects floating around it. We present a topology correction technique, integrated in a ray casting algorithm for hypertextured implicit surfaces, that detects and removes all the surface components that have become disconnected from the main surface. Our method works with implicit surfaces that are C2 continuous and uses Morse theory to find the critical points of the surface. The method follows the separatrix lines joining the critical points to isolate disconnected components.",2007,0, 521,Improving the performance of hypervisor-based fault tolerance,"Hypervisor-based fault tolerance (HBFT), a checkpoint-recovery mechanism, is an emerging approach to sustaining mission-critical applications. Based on virtualization technology, HBFT provides an economic and transparent solution. However, the advantages currently come at the cost of substantial overhead during failure-free execution, especially for memory-intensive applications. This paper presents an in-depth examination of HBFT and options to improve its performance. Based on the behavior of memory accesses among checkpointing epochs, we introduce two optimizations, read fault reduction and write fault prediction, for the memory tracking mechanism. These two optimizations improve the mechanism by 31.1% and 21.4%, respectively, for some applications. Then, we present software-superpage, which efficiently maps large memory regions between virtual machines (VMs). With the above optimizations, HBFT is improved by a factor of 1.4 to 2.2, achieving about 60% of the performance of the native VM.",2010,0, 522,Image mosaic method based on the image geometric correction for traffic accident scene,"Image mosaicing is one of the important technologies in image processing. It is normally used to make up a seamless and high resolution image. There are some algorithms that deal with image mosaicing, but most simply make two or more images seamlessly form a large image for a holographic display of the scene. The post-processing of photos from a traffic accident scene is required to reflect the scene and allow inspectors to accurately determine the actual distance between objects in it. Therefore the images taken from the traffic accident scene need to be corrected before being spliced to each other. The image correction allows the information on the scene to be correctly displayed. The splicing of the corrected images ensures a thorough view and complete information gain that covers the whole scene.",2010,0, 523,Fault data collection in substations according to IEC 61850,"In a research project a fault data collection device for substations with a communication interface according to IEC 61850 was developed. The IEC 61850 standard is the future of substation automation networking.
This standard does not yet include models for signaling faults. Therefore, specific logical nodes were defined using attribute types of the standard. These additional nodes were implemented and tested in a self-made server.",2009,0, 524,Eye gaze correction to guarantee eye contact in videoconferencing,"In a typical desktop video-conference setup, the camera and the display screen cannot be physically aligned. This problem produces a lack of eye contact and substantially degrades the user's experience. Expensive hardware systems using semi-reflective materials are available on the market to solve the eye-gaze problem. However, these specialized systems are far from the mass market. This paper presents an alternative approach using stereo rigs to capture a three-dimensional model of the scene. This information is then used to generate the view from a virtual camera aligned with the conference image the user looks at.",2009,0, 525,LEON3 ViP: A Virtual Platform with Fault Injection Capabilities,"In addition to functional simulation for validation of hardware/software designs, there are additional robustness requirements that need advanced simulation techniques and tools to analyze the system behavior in the presence of faults. In this paper, we present the design of a fault injection framework for LEON3, a 32-bit SPARC CPU based system used by the European Space Agency, described at the transaction level using SystemC. First, a previous XML formalization of basic binary faults, such as memory and CPU register corruption, is extended in order to support corruption of TLM 2.0 transaction parameters. Next, a novel Dynamic Binary Instrumentation (DBI) technique for C++ binaries is used to insert fault injection wrappers in the SystemC transaction path. For binary faults in model components the use of the TLM 2.0 transport_dbg interface is proposed. This way each component with fault injection capabilities exposes a standard interface to allow internal component inspection and modification.",2010,0, 526,A Method to Aid Recovery and Maintenance of the Input Error Correction Features,"In an information system, inputs are submitted to the system from its external environment. However, many input errors cannot be detected automatically and therefore result in errors in the effects raised by the system. Hence, the provision of input error correction features to correct these erroneous effects is critical. The recovery and maintenance of these features are complex and tedious. We have discovered some interesting control flow graph properties with regard to input errors and the implementation of their correction features. This paper proposes a method for the automated recovery of after-effect input error correction features from these properties. Based on the recovered information, we further propose a method to aid the maintenance of these features using decomposition slicing. All the empirical properties have been validated statistically. The approach has also been evaluated through some case studies",2006,0, 527,Manga University: web-based correction system for artistic design education,"In artistic design education, the teacher instructs each student individually face to face. As a result, it is difficult to share coaching with a third party or to teach distantly. To solve these problems, we have developed Manga University, which is a web-based application to aid artistic design education. It enables distance teaching or shared teaching and provides a learning portfolio for collaboration.
It is applicable to several other types of artistic design education, such as fashion design or GUI design for software or the web.",2002,0, 528,A smart design of coolant tank leak testing equipment in car manufacture using fault detection and isolation observer based method,"In car manufacturing (in Shanghai), leak tests must be performed on coolant tanks. Normally, several coolant tanks (e.g. five) have to be tested at the same time. The traditional test system has only one pressurized liquid input with a single pressure sensor and a temperature sensor to monitor and control the coolant tank leak test system. Once a tank leaked, it was difficult to identify which of the tanks under test was leaking. This paper presents a smart design of the test equipment for coolant tank leak testing that uses existing testing equipment. This design applies an observer-based fault detection and isolation method.",2008,0, 529,A 180° Phase Shifter With Small Phase Error for Broadband Applications,"In commonly used multi-bit high-/low-pass phase shifters, the phase error is mostly due to the first bit, which provides the 180° phase shift. This paper explores the design of a 180° phase shifter that combines a high-pass filter with a transmission line to reduce the phase error over a large bandwidth. Compared to the conventional high-/low-pass phase shifter, the proposed phase shifter shows better performance in both phase error and amplitude balance over a broad bandwidth. To illustrate the principle, a 180° phase shifter using this topology is designed, fabricated and measured. The phase error is measured to be ±2°, ±3.5° and ±4.5° over bandwidths of 31.8%, 43.4% and 49.3% respectively. The measured amplitude imbalance of the two branches is within 0.4 dB from 840 to 1310 MHz and the return loss is found to be better than 17 dB inclusive of the effect of the switches and the discrete components.",2007,0, 530,Multi-bit Error Tolerant Caches Using Two-Dimensional Error Coding,"In deep sub-micron ICs, growing amounts of on-die memory and scaling effects make embedded memories increasingly vulnerable to reliability and yield problems. As scaling progresses, soft and hard errors in the memory system will increase and single error events are more likely to cause large-scale multi-bit errors. However, conventional memory protection techniques can neither detect nor correct large-scale multi-bit errors without incurring large performance, area, and power overheads. We propose two-dimensional (2D) error coding in embedded memories, a scalable multi-bit error protection technique to improve memory reliability and yield. The key innovation is the use of vertical error coding across words that is used only for error correction, in combination with conventional per-word horizontal error coding. We evaluate this scheme in the cache hierarchies of two representative chip multiprocessor designs and show that 2D error coding can correct clustered errors up to 32×32 bits with significantly smaller performance, area, and power overheads than conventional techniques.",2007,0, 531,An algorithm for diagnostic fault simulation,"In diagnostic testing, faults detectable by test vectors are partitioned into groups. This partitioning is such that a fault is distinguishable from faults in all other groups, but is indistinguishable from those in its own group. Diagnostic fault coverage (DC) is defined as the number of fault groups divided by the total number of faults.
We present a new diagnostic fault simulation algorithm that determines the DC of given test vectors and produces a fault dictionary. For each vector, we begin with the detected fault list at each primary output obtained from a conventional fault simulator. For the vector being simulated, each fault is assigned a detection index that uniquely specifies its detection status at all primary outputs. The fault list is then partitioned. Faults with different detection indices are distinguished by the simulated vector and are kept in separate groups. Any fault in a group by itself is dropped from further simulation with subsequent vectors, for which its detection index remains unknown (X). After simulation of each vector, the cumulative DC is obtained by counting the fault groups. The fault dictionary syndrome for a fault is the array of its detection indices.",2010,0, 532,"Numerical simulation of propagation and defect reflection of T(0,1) mode guided wave in pipes","Because of the complexity of calculating and analyzing guided wave propagation and defect reflection in steel pipes, and the instructive role that studying the characteristics of the T(0,1) mode guided wave plays for experimental studies, a method associating guided wave theory with numerical solution was applied to simulate T(0,1) mode guided wave propagation and defect reflection in steel pipes by building models, imposing surface loads, and calculating in the ANSYS program, and the characteristics of the T(0,1) mode guided wave were studied. The results of the numerical calculations prove that the T(0,1) mode guided wave is basically non-dispersive at reasonable frequencies, that the amplitude attenuates exponentially and remains basically stable after propagating some distance, and that the T(0,1) mode guided wave is sensitive to both inner and outer circumferential defects. The reflection coefficient of the T(0,1) mode guided wave increases linearly with the circumferential length and depth of defects. When the defect depth is not through-thickness, the axial length has more influence on the reflection coefficient. When the defect depth is through-thickness, the influence of the axial length on the reflection coefficient is basically negligible.",2010,0, 533,How Much Fault Protection is Enough - A Deep Impact Perspective,"For the Deep Impact project, a myriad of fault protection (FP) monitors, symptoms, alarms and responses is engineered into the spacecraft FP software, common to and yet customized for the flyby and impactor mother-daughter spacecraft. Device faults and functional faults are monitored and mapped 1-to-n into FP symptoms, per instance of the fault. Symptoms are then mapped n-to-1 to FP alarms, which are further mapped n-to-1 to FP responses. Though the final statistics of 49 monitors, 921 symptoms, 667 alarms, and 39 responses appear staggering, it remains debatable whether the amount of on-board autonomous fault protection is sufficient and friendly to operate",2005,0, 534,Network fault management systems using multiple mobile agents for multihomed networks,"For increasingly complex networks, it is not easy to determine exactly where a network fault lies. Using the characteristic of multihoming, this paper proposes a scheme to identify network faults, in which six types of mobile agents are developed to cooperate in providing fault management functions. Moreover, the characteristics of mobility, intelligence and flexibility help the proposed scheme identify faults quickly.
In addition, the proposed scheme is implemented on the National Broadband Experimental Network (NBEN) and the Taiwan Research Network (TANet2), two interconnected networks that have their own routing policies but are managed by a common mobile-agent-based network management network. Experimental results show that the ping-monitoring agent implemented in the proposed network fault management system achieves a 59.66% reduction in the time required to monitor the whole NBEN.",2003,0, 535,Efficient Memory Error Coding for Space Computer Applications,"For the secure transaction of data between the central processing unit (CPU) of a satellite on-board computer and its local random access memory (RAM), the program memory has usually been designed with triple modular redundancy (TMR), a hardware implementation that includes replicated memory circuits and voting logic to detect and correct a faulty value. The TMR error correction technique allows correction of a single error bit per stored word. For computers on board a satellite, there is however a definite risk of two error bits occurring within one byte of stored data. In this paper, the application of quasi-cyclic codes to the routine error protection of SRAM program memory for satellites in low Earth orbit is described and implemented in field programmable gate array (FPGA) technology. The proposed device is transparent to the routine transfer of data between the CPU and its local RAM",2006,0, 536,Incorporating imperfect debugging into software fault processes,"For the traditional SRGMs, it is assumed that a detected fault is immediately removed and perfectly repaired, with no new faults being introduced. In reality, it is impossible to remove all faults during the fault correction process and to keep the process free of effects on the software development environment. In order to relax this perfect debugging assumption, we introduce the possibility of the imperfect debugging phenomenon. Furthermore, most of the traditional SRGMs have focused on the failure detection process; consideration of the fault correction process in the existing models is limited. However, to achieve the desired level of software quality, it is very important to apply powerful technologies for removing errors in the fault correction process. Therefore, we divide these processes into two different nonhomogeneous Poisson processes (NHPPs). Moreover, these models are considered to be more practical for depicting the fault-removal phenomenon in software development.",2004,0, 537,A fault-tolerant system for Java/CORBA objects,"Frameworks like CORBA facilitate the development of distributed applications through the use of off-the-shelf components. Though the use of distributed components allows faster building of applications, it also reduces application availability, as the failure of any component can make the application unavailable. In this paper we present the design and implementation of a fault-tolerant system for CORBA objects implemented in Java. The proposed fault-tolerant system employs object replication. We use a three-tier architecture in which the middle tier manages replication and acts as a proxy for replicated objects. The proxy ensures consistency and transparency. In the current implementation, the proxy uses the primary-site approach to ensure strong consistency.
Saving and restoring of objects' state is done transparently and does not require the object implementation to provide special functions for this purpose.",2008,0, 538,Fault-tolerant scheduling in distributed real-time systems,"In distributed systems, a real-time task has several subtasks which need to be executed at different nodes. Some of these subtasks can be executed in parallel on different nodes without violating the precedence relationships, if any, among them. To better exploit this parallelism, it becomes necessary to assign separate deadlines to subtasks and schedule them independently. We use three subtask deadline assignment policies which we have introduced earlier to develop a bidding-based fault-tolerant scheduling algorithm for distributed real-time systems. A local scheduler, which resides on each node, tries to determine a schedule for each subtask according to the primary-backup approach. In this paper we discuss the algorithm and present the results of simulation studies conducted to establish the efficacy of our algorithm",2001,0, 539,New Method for Detecting Low Current Faults in Electrical Distribution Systems,"In electrical distribution systems, low current faults may be caused by a high impedance fault or by the fault current limitation caused by the neutral-to-ground connection. In the former case, an indirect contact or insulation degradation gives a high value of the fault impedance. In the latter, the neutral grounding may be either isolated or compensated. Nevertheless, these types of faults do not produce enough current, so traditional overcurrent relays or fuses are not able to detect the fault. This paper presents a new methodology, based on the superposition of voltage signals of a certain frequency, for the detection of low current single phase faults in radial distribution systems. The simulation analysis and laboratory tests carried out have proved the validity of the methodology for any type of grounding method.",2007,0, 540,A comparison of techniques to optimize measurement of voltage changes in electrical impedance tomography by minimizing phase shift errors,"In electrical impedance tomography, errors due to stray capacitance may be reduced by optimization of the reference phase of the demodulator. Two possible methods, maximization of the demodulator output and minimization of reciprocity error, have been assessed, applied to each electrode combination individually or to all combinations as a whole. Using an EIT system with a single impedance measuring circuit and a multiplexer to address the 16 electrodes, the methods were tested on resistor-capacitor networks, saline-filled tanks and humans during variation of the saline concentration of a constant fluid volume in the stomach. Optimization of each channel individually gave less error, particularly on humans, and maximization of the output of the demodulator was more robust. This method is, therefore, recommended to optimize similar EIT systems and reduce their systematic errors.",2002,0, 541,Machining accuracy improvement of five-axis machine tools by geometric error compensation,"In five-axis machining, the geometric error of the machine tool is an important error source reducing machining accuracy. Through geometric error compensation, the machining accuracy can be improved. A software-based error compensation strategy is discussed in this study.
All individual geometric error components of five-axis machine tools are identified by a laser interferometer system and a ball bar system. To acquire the synthetic error, a generalized error model was established based on multi-body system (MBS) theory and homogeneous transformation matrices (HTM). Finally, the geometric error was compensated by correcting the NC code. Corresponding error compensation software has been developed and an experiment has shown the feasibility of the proposed compensation method.",2010,0, 542,A Resource Management System for Fault Tolerance in Grid Computing,"In grid computing, resource management and fault tolerance services are important issues. The availability of the selected resources for job execution is a primary factor that determines the computing performance. The failure occurrence of resources in grid computing is higher than in traditional parallel computing. Since the failure of resources affects job execution fatally, a fault tolerance service is essential in computational grids. Grid services are also often expected to meet some minimum levels of quality of service (QoS) for desirable operation. However, the Globus toolkit does not provide a fault tolerance service that supports fault detection and management and satisfies QoS requirements. Thus this paper proposes a fault tolerance service to satisfy QoS requirements in computational grids. In order to provide the fault tolerance service and satisfy the QoS requirements, we expand the definition of failure to cover process failure, processor failure, and network failure. We propose a resource scheduling service, a fault detection service and a fault management service, and present implementation and experimental results.",2009,0, 543,Reference node selection algorithm and localization error analysis for indoor sensor networks,"In indoor environments, one of the major challenges for researchers is to localize the sensor nodes with relatively high localization precision. Many traditional positioning algorithms, such as the two-phase positioning (TPP) algorithm, have dealt with node localization without taking into account the ""reference node"" parameter, which also strongly affects the quality of spatial localization. We analyze the localization error and draw the conclusion that the localization error is smallest when the three reference nodes form an equilateral triangle. Therefore, we propose the reference node selection algorithm based on trilateration (RNST). The simulation results show that our algorithm can meet the real-time localization requirement of mobile nodes in an indoor environment, and make the localization error less than that of the traditional algorithm.",2007,0, 544,SVM Classifier for Impulse Fault Identification in Transformers using Fractal Features,"Improper or inadequate insulation may lead to failure during impulse tests of a transformer. It is important to identify the type and the exact location of insulation failure within the winding of power transformers. This paper describes a new approach using fractal theory for extraction of features from the impulse test response of a transformer and a Support Vector Machine (SVM) in regression mode to classify the fault response patterns. A variety of algorithms are available for the computation of Fractal Dimension (FD). In the present work, the box counting and Higuchi's algorithms for the determination of FD, together with Lacunarity and Approximate Entropy (ApEn), have been used for the extraction of fractal features from the time domain impulse test response.
The analysis has been performed on both analog and digital models of a 3 MVA, 33/11 kV transformer. A noticeable finding is that the SVM tool, trained with the simulated data only, is capable of identifying the location and fault classes of analog model data accurately within a tolerance limit of ±3.37%.",2007,0, 545,Evaluation of probabilistic-based selectivity technique for earth fault protection in MV networks,"In a previous work, a Bayesian selectivity technique has been introduced to identify the faulty feeder in compensated medium voltage (MV) networks. The proposed technique is based on a conditional probabilistic method applied to transient features extracted from the residual currents only, using the Discrete Wavelet Transform (DWT). In this paper, the performance of this selectivity technique is evaluated when the impact of the current transformers (CTs) is considered. The CTs are modeled considering their frequency characteristics. Furthermore, network noise is added to the simulated signals. Therefore, the algorithm can be tested under different practical conditions, such as nonlinear characteristics of the measuring devices and the impact of noise as well. Fault cases occurring at different locations in a compensated 20 kV network are simulated by ATP/EMTP. Results show a reduction in the algorithm's sensitivity when the CT and noise effects are considered.",2009,0, 546,Fast Recovery and QoS Assurance in the Presence of Network Faults for Mission-Critical Applications in Hostile Environments,"In a hostile military environment, systems must be able to detect and react to catastrophes in a timely manner in order to provide assurance that critical tasks will continue to meet their timeliness requirements. Our research focuses on achieving network quality of service (QoS) assurance using a Bandwidth Broker in the presence of network faults in layer-3 networks. Passive discovery techniques using the link-state information from routers provide for rapid path discovery which, in turn, leads to fast failure impact analysis and QoS restoration. In addition to network fault tolerance, the Bandwidth Broker must be fault tolerant and must be able to recover quickly. This is accomplished using a modified commercially available and open-source in-memory database cluster technology.",2007,0, 547,Fault injection in distributed Java applications,"In a network consisting of several thousand computers, the occurrence of faults is unavoidable. Being able to test the behaviour of a distributed program in an environment where we can control the faults (such as the crash of a process) is an important feature that matters in the deployment of reliable programs. In this paper, we investigate the possibility of injecting software faults in distributed Java applications. Our scheme works by extending the FAIL-FCI software. It does not require any modification of the source code of the application under test, while retaining the possibility to write high level fault scenarios. As a proof of concept, we use our tool to test FreePastry, an existing Java implementation of a distributed hash table (DHT), against node failures",2006,0, 548,Job-Site Level Fault Tolerance for Cluster and Grid environments,"In order to adopt high performance clusters and grid computing for mission critical applications, fault tolerance is a necessity. Common fault tolerance techniques in distributed systems are normally achieved with checkpoint-recovery and job replication on alternative resources, in case of a system outage.
The first approach depends on the system's MTTR, while the latter depends on the availability of alternative sites to run replicas. There is a need to complement these approaches by proactively handling failures at the job-site level, ensuring high system availability with no loss of user-submitted jobs. This paper discusses a novel fault tolerance technique that enables job-site recovery in Beowulf cluster-based grid environments, whereas existing techniques give up on a failed system by seeking alternative resources. Our results suggest a sizable aggregate performance improvement for an implementation of our method in Globus-enabled HA-OSCAR. The technique, called ""smart failover"", provides a transparent and graceful recovery mechanism that saves job states in a local job-manager queue and transfers those states to the backup server periodically and on critical system events. Thus whenever a failover occurs, the backup server is able to restart the jobs from their last saved state",2005,0, 549,Soft error assessments for servers,"In order to assess the soft error rate (SER) of a server, it is important not only to quantify the soft error contribution of the individual semiconductor components, but also to account for derating and for SER mitigation such as hardening and shielding. Derating describes the fact that not every soft error has an impact. A large number of soft errors vanish based on electrical, logical or timing considerations. They have no impact. Additionally, a server can, to a large degree, be protected from the impact of soft errors by implementing error detection and correction means. In these cases the impact of the soft error is limited to the extra compute time needed for the correction. Summing up the SER contributions from transistors and circuits results in the so-called raw soft error rate, a rate which describes just the bottom layer of the system stack. Powerful protection mechanisms at higher layers can reduce that rate by several orders of magnitude. Awareness of this vertical interaction across the different layers in the system stack leads to servers optimized for robustness.",2010,0, 550,Automated post-fault diagnosis of power system disturbances,"In order to automate the analysis of SCADA and digital fault recorder (DFR) data for a transmission network operator in the UK, the authors have developed an industrial-strength multi-agent system entitled Protection Engineering Diagnostic Agents (PEDA). The PEDA system integrates a number of legacy intelligent systems for analyzing power system data as autonomous intelligent agents. The integration achieved through multi-agent systems technology enhances the diagnostic support offered to engineers by focusing the analysis on the most pertinent DFR data based on the results of the SCADA analysis. Since November 2004 the PEDA system has been operating online at a UK utility. In this paper the authors focus on the underlying intelligent system techniques, i.e. rule-based expert systems, model-based reasoning and state-of-the-art multi-agent system technology, that PEDA employs and the lessons learnt through its deployment and online use",2006,0, 551,"Improving access to relevant data on faults, errors and failures in real systems","In order to be able to test the effectiveness of proposed techniques for enhanced availability and verify them using field data from systems, it is important to have reliability data for the components and the information necessary to characterize or model the system.
This includes, inter alia, the type and number of components, their protection and dependency relations, as well as the automatic recovery mechanisms built into the system. An important benefit of making system models and logs available to the research community in a standard format is that it opens up the possibility of creating tools to assess and optimize deployed as well as hypothetical system configurations. Specialized tools for on-line and off-line analysis and classification of reliability data also become viable. Availability modeling tools could be benchmarked against actual data. Depending on the usefulness of such tools and the level of adoption of standard models and formats in the industry, a market for reliability data analysis tools could emerge over time. These tools could be used during the design, deployment and operation phases of a system in order to predict or enhance the availability of the services it provides",2006,0, 552,Development of image-based scatter correction for brain perfusion SPECT study: comparison with TEW method,"In order to convert scatter-uncorrected SPECT images into corrected ones, an image-based scatter correction (IBSC) method has been developed. The aim of this study was to validate its role as a converter from scatter-uncorrected images into corrected images equivalent to those corrected by the conventional TEW method. The IBSC method is executed in the post-reconstruction process and only requires an attenuation-corrected main photopeak image with broad value, IACb. The scatter component image is estimated by convolving IACb with a scatter function followed by multiplication with an image-based scatter fraction (SF) function. The IBSC method was evaluated with Monte Carlo simulations and 99mTc-ECD SPECT human brain perfusion studies obtained from five volunteers. The noise property of the scatter-corrected image obtained using the IBSC method, IIBSC, was compared with that obtained by the TEW method, ITEW, with simulated brain phantom images. Image contrast between gray and white matter in the human study was also compared between the IBSC and TEW methods. The global signal-to-noise (S/N) ratio of IIBSC was decreased by 14% compared to that of IACb, whereas that of ITEW was decreased by 21%. In human brain imaging, no significant difference in image contrast between the IBSC and TEW methods was observed (p<0.05). In conclusion, the IBSC method could be applied to clinical brain perfusion SPECT to convert IACb into a scatter-corrected image equivalent to ITEW, achieving a better noise property than the TEW method.",2003,0, 553,Parametric fault tree for the dependability analysis of redundant systems and its high-level Petri net semantics,"In order to cope efficiently with the dependability analysis of redundant systems with replicated units, a new, more compact fault-tree formalism, called Parametric Fault Tree (PFT), is defined. In the PFT formalism, replicated units are folded and indexed so that only one representative of the similar replicas is included in the model. From the PFT, a list of parametric cut sets can be derived, where only the relevant patterns leading to system failure are evidenced, regardless of the actual identity of the components in the cut set. The paper provides an algorithm to convert a PFT into a class of High-Level Petri Nets, called SWN.
The purpose of this conversion is twofold: to exploit the modeling power and flexibility of the SWN formalism, allowing the analyst to include statistical dependencies that could not have been accommodated in the corresponding PFT, and to exploit the capability of the SWN formalism to generate a lumped Markov chain, thus alleviating the state explosion problem. The search for the minimal cut sets (qualitative analysis) can often be performed by a structural T-invariant analysis on the generated SWN. The advantages that can be obtained from the translation of a PFT into a SWN are investigated considering a fault-tolerant multiprocessor system example.",2003,0, 554,Fabrication error in resonant frequency of microstrip antenna,"In order to fabricate microstrip antennas with high precision for micromechanical applications, we calculated the error in the resonant frequency of a patch antenna. The accuracy of the obtained results is compared with the experimental data. An accurate analysis allows success on the first fabrication",2001,0, 555,Application of MATLAB in teaching of Error Theory And Survey Adjustment,"In order to improve and strengthen students' ability in survey data processing, the MATLAB software is introduced into the course of Error Theory and Survey Adjustment as a teaching assistance method. Based on its formidable calculating functions, graph handling ability and rich toolboxes, a MATLAB-based Error Theory and Survey Adjustment experiment has been set up. It improves the students' creative capability and stimulates learning activity, because the abstract theory becomes intuitive and vivid, so the teaching effect is improved.",2010,0, 556,Subpixel Edge Location Using Orthogonal Fourier-Mellin Moments Based Edge Location Error Compensation Model,"In order to improve the edge location accuracy of vision measurements, this paper presents a compensation model for edge location calculation using the orthogonal Fourier-Mellin moments (OFMMs). The edge location error is the difference between the sampled edge location and the actual one, which occurs when the edge location is not within the center pixel and pixel boundaries. We established a look-up table as the compensation model to eliminate the error for the OFMMs. Experimental results showed that the proposed model can be used to correct the error of edge location. A precision of 0.19 pixel can be achieved by using OFMMs with the compensation model. It can be concluded that the proposed method is an efficient approach to satisfying the practical requirements of high-accuracy edge detection.",2008,0, 557,Analysis of measuring errors for the visible light phase-shifting point diffraction interferometer,"In order to improve the measuring accuracy of the visible light phase-shifting point diffraction interferometer (PS/PDI) for extreme ultraviolet lithography (EUVL) aspheric mirrors, the main measuring errors are discussed in this paper. First, the elementary configuration and measuring principle of the visible light phase-shifting point diffraction interferometer are introduced briefly; then the different errors which could affect the measuring result are summed up. These errors include the PZT phase-shifting error, detector nonlinearity error, detector quantization error, wavelength instability error and intensity instability error of the laser source, vibration error, air refractivity instability error and so on. Through detailed analysis and simulation, the magnitude of these errors can be obtained.
By analysing the causes of these errors and the relationship between these errors and the interferometer configuration parameters, some methods are put forward to avoid or restrain these errors accordingly.",2010,0, 558,Efficient High-Speed Interface Verification and Fault Analysis,"In this article we discuss three challenges of device verification and test for high-speed interfaces, with special focus on the latest memory device interface generations such as DDR3/GDDR5/XDR working in the GHz clock frequency range. We address how efficient device verification, in terms of reaching target coverage fast, can be achieved through random methods on ATE, both random operating condition tests and Random Test Patterns (RTP). We present a novel Random Test Pattern generation method suitable for memory device verification. Failure conditions and failing patterns must be transferred to simulation for root-cause understanding, which is not directly possible due to the large gap between pattern lengths on ATE on the order of 10^6 clock cycles and pattern length limitations in simulation of 10^3 clock cycles. We present an extraction algorithm on ATE with the DUT in the loop to extract the minimum-length failing test pattern sequence for simulation and root-cause analysis. An application example is presented, where a large Random Test Pattern revealed a special command sequence leading to device failure, and how this sequence has been extracted for simulation. The presented random methods lead to fast detection of device issues by exploring the full coverage space. With the presented automated extraction that replaces manual interactive analysis, fast root-cause understanding with engineering time and ATE test-time reductions from typically a few days to below one hour is achievable.",2008,0, 559,Application of set membership identification for fault detection of MEMS,"In this article, a set membership (SM) identification technique is tailored to detect faults in microelectromechanical systems. The SM identifier estimates an orthotope which contains the system's parameter vector. Based on this orthotope, the system's output interval is predicted. If the actual output is outside of this interval, then a fault is detected. Utilization of this scheme can discriminate mechanical-component faults from electronic-component variations frequently encountered in MEMS. For testing the suggested algorithm's performance in simulation studies, an interface between classical control software (MATLAB) and circuit emulation (HSPICE) is developed",2006,0, 560,Wide band harmonic suppression based on Koch-shaped defected ground structure for a microstrip patch antenna,"In this article, a wide band harmonic suppression microstrip patch antenna using a Koch-shaped defected ground structure (DGS) is presented. In order to realize the wide band harmonic suppression, the Koch-shaped defected ground structure (DGS) microstrip circuit is designed as the feed part of the proposed antenna. By inserting parallel slots, the antenna can be directly fed by a simple 50 Ω microstrip transmission line. The proposed patch antenna operates at the center frequency of 2.5 GHz and the spurious radiations up to 15 GHz are properly suppressed. Simulated and measured results indicate that the Koch-shaped defected ground structure (DGS) is effective in suppressing spurious radiations.
The patch antenna considered here can be widely applied in active integrated communication systems.",2010,0, 561,Development of neural networks module for fault identification in asynchronous machine using various types of reference signals,"In this article, a device for automatic diagnosis of asynchronous motors is discussed. This diagnostic system is based on an artificial neural network (ANN), which finds the different defects by classification. The machine health identification process is mainly based on recognition and comparison of standard signatures captured in real time, such as the stator current and the rotation speed of the machine. The features extracted from the instantaneous signals are then input to an artificial neural network (ANN) for recognition and identification. The output of the neural network was trained to generate a health index that indicates the machine's health condition. In this work, the inputs used in the neural network were various types of signals: the instantaneous values and the effective values (root mean square) of the machine parameters.",2005,0, 562,Analysis of error recovery schemes for networks on chips,"In this article, we discuss design constraints to characterize efficient error recovery mechanisms for the NoC design environment. We explore error control mechanisms at the data link and network layers and present the schemes' architectural details. We investigate the energy efficiency, error protection efficiency, and performance impact of various error recovery mechanisms.",2005,0, 563,Design of Energy-Efficient High-Speed Links via Forward Error Correction,"In this brief, we show that forward error correction (FEC) can reduce power in high-speed serial links. This is achieved by trading off the FEC coding gain with specifications on transmit swing, analog-to-digital converter (ADC) precision, jitter tolerance, and receive amplification, and by enabling higher signal constellations. For a 20-in FR4 link carrying 10-Gb/s data, we demonstrate: 1) an 18-mW/Gb/s savings in the ADC; 2) a 1-mW/Gb/s reduction in transmit driver power; 3) up to 6× improvement in transmit jitter tolerance; and 4) a 25- to 40-mV improvement in comparator offset tolerance with a 3× smaller swing.",2010,0, 564,End-to-end defect modeling,"In this context, computer models can help us predict outcomes and anticipate with confidence. We can now use cause-effect modeling to drive software quality, moving our organization toward higher maturity levels. Despite missing good software quality models, many software projects successfully deliver software on time and with acceptable quality. Although researchers have devoted much attention to analyzing software projects' failures, we also need to understand why some are successful - within budget, of high quality, and on time - despite numerous challenges. Restricting software quality to defects, decisions made in successful projects must be based on some understanding of the cause-effect relationships that drive defects at each stage of the process. To manage software quality by data, we need a model describing which factors drive defect introduction and removal in the life cycle, and how they do it. Once properly built and validated, a defect model enables successful anticipation. This is why it's important that the model include all variables influencing the process response to some degree.",2004,0, 565,Rateless Codes With Unequal Error Protection Property,"In this correspondence, a generalization of rateless codes is proposed.
The proposed codes provide unequal error protection (UEP). The asymptotic properties of these codes under iterative decoding are investigated. Moreover, upper and lower bounds on the maximum-likelihood (ML) decoding error probabilities of finite-length LT and Raptor codes for both equal and unequal error protection schemes are derived. Further, our work is verified with simulations. Simulation results indicate that the proposed codes provide the desired UEP. We also note that the UEP property does not impose a considerable drawback on the overall performance of the codes. Moreover, we discuss how the proposed codes can provide unequal recovery time (URT). This means that, given a target bit error rate, different parts of the information bits can be decoded after receiving different amounts of encoded bits. This implies that the information bits can be recovered in a progressive manner. This URT property may be used for sequential data recovery in video/audio streaming",2007,0, 566,"Comments on ""Data Mining Static Code Attributes to Learn Defect Predictors""","In this correspondence, we point out a discrepancy in a recent paper, ""Data mining static code attributes to learn defect predictors,"" that was published in this journal. Because of the small percentage of defective modules, using probability of detection (pd) and probability of false alarm (pf) as accuracy measures may lead to impractical prediction models.",2007,0, 567,A Lower Bound on the Probability of Undetected Error for Binary Constant Weight Codes,"In this correspondence, we study the probability of undetected error for binary constant weight codes. First, we derive a new lower bound on the probability of undetected error for binary constant weight codes. Next, we show that this bound is tight if and only if the binary constant weight codes are generated from certain t-designs in combinatorial design theory. This means that these binary constant weight codes generated from certain t-designs are uniformly optimal for error detection. Along the way, we determine the distance distributions of such binary constant weight codes. In particular, it is shown that binary constant weight codes generated from Steiner systems are uniformly optimal for error detection. Thus, we prove a conjecture of Xia, Fu, Jiang, and Ling. Furthermore, the distance distribution of a binary constant weight code generated from a Steiner system is determined. Finally, we study the exponent of the probability of undetected error for binary constant weight codes. We derive some bounds on the exponent of the probability of undetected error for binary constant weight codes. These bounds enable us to extend the region in which the exponent of the probability of undetected error is exactly determined",2006,0,7864 568,Defects detection and characterization by using cellular neural networks,"In this document, a new method to detect and characterize surface defects in mechanical parts is reported. Cellular neural networks are used as tools for the implementation of the stereoscopic vision analysis technique. Suitable applications in microscopic defect analysis (real-time processing), in various fields (e.g.
aeronautics applications) are introduced by means of the reported examples in order to validate the cellular neural network (CNN) approach",2001,0, 569,A method of probe refinement for fault diagnosis,"In order to obtain information about the actual execution sequences, probes are required to be deployed in the system, which may influence the operation of the system. In this paper, a method of probe refinement is proposed to reduce the cost of probes. By taking the distinguishing capacity of components as heuristic information, and considering practical constraints such as the fact that probes in different positions impact the system differently, a reduction algorithm is applied to remove those components which cannot affect the recognition of the execution sequences. The necessary information about the sequences can then be obtained with fewer probes deployed in the remaining components. Meanwhile, a strategy is discussed and implemented to ensure the completeness of the result. Finally, the experiment shows that the refinement method can effectively reduce the overhead of introducing probes.",2010,0, 570,Bearing fault detection based on order bispectrum,"In order to process non-stationary vibration signals such as speed-up or speed-down vibration signals effectively, the order bispectrum analysis technique is presented. This new method combines the computed order tracking technique with bispectrum analysis. Firstly, the vibration signal is sampled at constant time increments and software is then used to resample the data at constant angle increments. Therefore, the time-domain transient signal is converted into an angle-domain stationary one. In the end, the resampled signals are processed by bispectrum analysis technology. The experimental results show that order bispectrum analysis can effectively detect bearing faults.",2010,0, 571,Research of industrial furnace fault diagnosis expert system,"In order to realize fast location and detection of abnormal status during the running of an industrial furnace, especially abnormal firing status, this article studies and designs a fault diagnosis expert system based on fault tree theory. Firstly, a formalized definition of the industrial furnace fault diagnosis expert system is given in the paper; then all component elements of the expert system are analyzed and designed in detail; finally, the principles and methods of designing the knowledge base using fault tree theory are introduced, and a reasoning algorithm is put forward, which is used to reason about faults by way of the fault tree. Project practice indicates that this system's knowledge model has good suitability, that it is simple and convenient to use, and that the fault diagnosis results are reliable and stable.",2010,0, 572,Graph fitting test method for the interpolation error of moire fringe,"In order to realize a fast test for the interpolation error, a graph fitting test method for the interpolation error of Moiré fringes is put forward in this paper. Firstly, the triangular-wave Moiré fringe photoelectric signals of the encoder, whose phase difference is 90°, are sampled to get the Lissajous graph of the two signals. Secondly, the single wave represented by the constructed piecewise function is used to fit the practical Moiré fringe Lissajous graph. Then, the fitting result is tested to verify whether it satisfies the accuracy requirement. Lastly, the constructed piecewise function, instead of the practical wave function, is used to calculate the interpolation error.
Using the graph fitting method to sample the Moiré fringe signal of a 15-bit photoelectric encoder and obtain the interpolation error curve, the tested maximum interpolation error is 70″ and the minimum error is -69″. Compared with the interpolation error obtained from the traditional test method, the trend of the interpolation error curve is similar and the peak-to-peak value is almost equal. The results of the experiment indicate that the equipment is convenient and the examination method is efficient and feasible. The measurement speed is fast and the presentation of the results is intuitive. The system can be used in the working field. The method can avoid the influence of speed and realize dynamic interpolation error measurement, which is significant for research on the dynamic accuracy characteristics of encoders.",2009,0, 573,Use of Faulted Phase Earthing using a custom built earth fault controller,"In order to reduce customer hours lost (CHL) and customer interruptions (CI), the use of Faulted Phase Earthing (FPE) is being considered on the Irish 20 kV distribution system. The operation of this particular FPE system is enabled by the use of a custom built Earth Fault Controller (EFC) that has the ability to detect high impedance faults of up to 12 kΩ. The EFC can also successfully identify single pole switching events, which have at times caused the mal-operation of existing protection. FPE involves the earthing of a faulted phase during a single line to ground fault. This ensures that the fault site is made safer and that no customers are interrupted during the fault.",2010,0, 574,Study on the Neural Network Model for Shield Construction Faults Diagnosis,"In order to solve the problem of establishing a mathematical model for shield construction fault diagnosis, an approach to building such a model using a BP neural network is presented in this paper. A BP neural network model for diagnosing three common shield construction faults based on shield excavation parameter data was built. The inputs of the model are nine shield excavation parameters which are correlated with shield construction faults. The outputs of the model are three shield construction faults: spewing at the screw conveyor, wear of the disc cutters and jamming of the shield. A case study of a shield project validated that the structure of the established model is practical, the diagnostic results are correct and the diagnosis method is effective. The conclusions provide beneficial guidance for the design of an online shield construction fault diagnosis system based on shield excavation parameter data.",2010,0, 575,Research on influences of sampling errors on performances of three-level PWM rectifier,"In order to solve the problem that sampling errors depress the control performance of pulse width modulation (PWM) rectifiers, this paper focuses on a three-level PWM rectifier with a voltage oriented control (VOC) strategy, introduces the structure of the sampling process, and quantitatively analyzes the influences of sampling errors on the control performance. A software optimization method aimed at restraining the persistent and random sampling errors is then proposed. Simulation and experimental results verified the validity of the analyses and the feasibility of the proposed method.
The performance of the rectifier was remarkably improved by using the proposed optimization method.",2009,0, 576,Multilayer Architecture Based on HMM and SVM for Fault Classification,"In order to solve the problems of current machine learning in fault diagnosis systems for chemical plants, a more effective multilayer architecture model is used in this paper. Hidden Markov models (HMM) are good at dealing with dynamic continuous data, and support vector machines (SVM) show superior performance for classification, especially for limited samples. Combining their respective virtues, we propose a new multilayer architecture model to improve classification accuracy for a fault diagnosis example. The simulation result shows that this two-level architecture framework combining HMM and SVM achieves higher classification accuracy with small training samples than the single HMM method.",2009,0, 577,Adaptive Fault-Tolerance by Exposing Service Request Process as First-Class Object in Pervasive Computing,"In the open and dynamic pervasive computing environment, it is challenging to detect and handle the frequently occurring failures of service requests. The widely used transparent mechanisms Remote Procedure Call and Object Request Broker have a great impact on adaptive fault-tolerance, because they make it difficult for the service requester to sense, configure and control the service request process. In this paper, we explicitly encapsulate and expose the service request process as a first-class object. Through the exposed service request object, both the pervasive computing platform and the user applications are able to acquire adaptive and systematic fault-tolerance ability during the service request process.",2010,0, 578,A Cellular Approach to Fault Detection and Recovery in Wireless Sensor Networks,"In the past few years wireless sensor networks have received greater interest in applications such as disaster management, border protection, combat field reconnaissance and security surveillance. Sensor nodes are expected to operate autonomously in unattended environments and potentially in large numbers. Failures are inevitable in wireless sensor networks due to the inhospitable environment and unattended deployment. The data communication and various network operations cause energy depletion in sensor nodes and, therefore, it is common for sensor nodes to exhaust their energy completely and stop operating. This may cause connectivity and data loss. Therefore, it is necessary that network failures are detected in advance and appropriate measures are taken to sustain network operation. In this paper we extend our cellular architecture and propose a new mechanism to sustain network operation in the event of failures caused by energy-drained nodes. In our solution the network is partitioned into a virtual grid of cells to perform fault detection and recovery locally with minimum energy consumption. Specifically, the grid based architecture permits the implementation of fault detection and recovery in a distributed manner and allows the failure report to be forwarded across cells.
The proposed failure detection and recovery algorithm has been compared with some existing related work and proven to be more energy efficient.",2009,0, 579,Optimizing Cauchy Reed-Solomon Codes for Fault-Tolerant Network Storage Applications,"In the past few years, all manner of storage applications, ranging from disk array systems to distributed and wide-area systems, have started to grapple with the reality of tolerating multiple simultaneous failures of storage nodes. Unlike the single failure case, which is optimally handled with RAID level-5 parity, the multiple failure case is more difficult because optimal general purpose strategies are not yet known. Erasure coding is the field of research that deals with these strategies, and this field has blossomed in recent years. Despite this research, the decades-old Reed-Solomon erasure code remains the only space-optimal (MDS) code for all but the smallest storage systems. The best performing implementations of Reed-Solomon coding employ a variant called Cauchy Reed-Solomon coding, developed in the mid-1990s. In this paper, we present an improvement to Cauchy Reed-Solomon coding that is based on optimizing the Cauchy distribution matrix. We detail an algorithm for generating good matrices and then evaluate the encoding performance of all implementations of Reed-Solomon codes, plus the best MDS codes from the literature. The improvements over the original Cauchy Reed-Solomon codes are as much as 83% in realistic scenarios, and average roughly 10% over all cases that we tested",2006,0, 580,Modeling Errors in Small Baseline Stereo for SLAM,"In the past few years, there has been significant advancement in localization and mapping using stereo cameras. Despite the recent successes, reliably generating an accurate geometric map of a large indoor area using stereo vision still poses significant challenges due to the accuracy and reliability of depth information, especially with small baselines. Most stereo vision based applications presented to date have used medium to large baseline stereo cameras with Gaussian error models. Here we make an attempt to analyze the significance of errors in small baseline (usually <0.1m) stereo cameras and the validity of the Gaussian assumption used in the implementation of Kalman filter based SLAM algorithms. Sensor errors are analyzed through experiments carried out in the form of robotic mapping. Then we show that SLAM solutions based on the extended Kalman filter (EKF) could become inconsistent due to the nature of the observation models used",2006,0, 581,New attenuation correction for the HRRT using transmission scatter correction and total variation regularization,In the standard software for the Siemens HRRT PET scanner the most commonly used segmentation in the μ-map reconstruction for human brain scans is MAP-TR. Problems with bias in the lower cerebellum and pons in HRRT brain images have been reported. The main source of the problem is poor bone/soft tissue segmentation in these regions and the lack of scatter correction in the μ-map reconstruction.
In this paper we describe and validate the new TXTV segmentation method (included in the HRRT 1.0 and 1.1 user software) aimed at solving the bias problem.,2009,0, 582,Detection of rotor faults in torque controlled induction motor drives,"In the supervision of electrical equipment, the task of a diagnostic system is to detect an upcoming machine fault as soon as possible, in order to save expensive manufacturing processes or to replace faulty parts. An important issue in such an effort is the modelling of the induction machine (IM) including rotor bar and end-ring faults, with a minimum of computational complexity. In this paper, a simpler method is employed in the simulation of an induction motor with rotor asymmetries. Simulation of classical and dynamic space vector models, Finite Element Analysis and experimental results are presented to support the proposed model. The need for detection of rotor faults at an earlier stage has pushed the development of monitoring methods with increasing sensitivity and noise immunity. Addressing diagnostic techniques based on motor current signature analysis (MCSA), the characteristic components introduced by specific faults in the current spectrum are investigated and a diagnosis procedure correlates the amplitudes of such components to the fault extent. The impact of feedback control on the behavior of an induction machine with an asymmetric rotor cage is also analyzed. It is shown that the variables usually employed in diagnosis procedures assuming open-loop operation are no longer effective under closed-loop operation. Simulation results show that signals already present at the drive are suitable for an effective diagnostic procedure. The utilization of the current regulator error signals in rotor failure detection is the aim of the present work. The use of a band-pass filter bank to detect the presence of sidebands is also proposed.",2007,0, 583,Strategies to Attend Destructive and Nondestructive Faults in Power Transmission Substations,"In this document, the reliability evaluation of line circuits and the development of restoration strategies for ten transmission substations from the most important Colombian power transmission utility (ISA) are presented. There are two types of strategies to restore transmission substations: fault procedures and contingency plans. The fault procedures are used to attend non-destructive faults of the switching, control and protection systems on the transmission substation, while contingency plans help to attend destructive faults on power transformers and breakers. By applying the fault procedures and contingency plans, the economic consequences of transmission equipment malfunctioning are reduced.
The proposed restoration strategies help to accomplish the availability goals imposed by the market regulator and also to maintain service continuity to the end user.",2004,0, 584,Zech Logarithmic Decoding of Triple-Error-Correcting Binary Cyclic Codes,"In this letter, a Zech logarithmic decoding method is proposed for triple-error-correcting binary cyclic codes whose generator polynomials have at most three irreducible factors, a class of codes for which such decoders have not been developed before.",2008,0, 585,Harmonics mitigation and power factor correction with a modern three-phase four-leg shunt active power filter,"In this paper a compensating system using a four-leg shunt active power filter (SAPF) in a three-phase four-wire distribution network, which is able to mitigate harmonics, absorb or generate reactive power, and improve the power factor on the supply side, is presented. Two control approaches based on p-q theory and load current detection using a phase locked loop (PLL) are proposed. To validate the compensation performance of the SAPF, the distribution network with nonlinear loads is simulated using MATLAB/Simulink software. Simulation results have proved and validated the performance of the SAPF in minimizing the total harmonic distortion (THD) and neutral current.",2010,0, 586,Statistical error modeling of CNN-UM architectures: the binary case,In this paper a detailed error model of the CNN-UM is analyzed in a general statistical manner. The locally regular template class is considered and the possibility of erroneous output is expressed from the component nonlinearity and parameter deviation.,2002,0, 587,A method for intellectualized detection and fault diagnosis of vacuum circuit breakers,"In this paper a method for intellectualized detection and fault diagnosis of vacuum circuit breakers is introduced. The system consists of sensors, single-chip microcomputers, measuring circuits, processing circuits, controlling circuits, extended ports, a communication interface, etc. It can monitor on-line the condition of a vacuum circuit breaker, analyze its change tendency, and identify, locate and display the detectable faults. This paper describes the main detecting principles and diagnostic foundations. The hardware structure and software design are also given",2000,0, 588,Robust Data Hiding Technique for Video Error Concealment over DVB-H channel,"In this paper a method for providing additional information for error concealment during the transmission of a video file over an error-prone channel is presented. The transmission of the additional information is performed without increasing the bandwidth occupation, by hiding this information in the video itself at the encoder side. The considered transmission system is the DVB-H system. The performance of the method has been investigated by simulations and experimental tests in order to evaluate the perceived quality of the processed video and the robustness of the data hiding method against H.264/AVC coding.",2007,0, 589,A procedure to correct the error in the structure function based thermal measuring methods,In this paper a methodology is presented to correct the systematic error of structure function based thermal material parameter measuring methods. This error stems from the fact that it is practically impossible to avoid parallel heat-flow paths in the case of forced one-dimensional heat conduction. With the presented method we show how to subtract the effect of the parallel heat-flow paths from the measured structure function.
With this correction methodology the systematic error of structure function based thermal material parameter measuring methods can be practically eliminated. Application examples demonstrate the accuracy increase obtained with the use of the method.,2004,0, 590,Performance Increase of Error Control Operation on Data Transmission,"In this paper a new approach is proposed to increase the performance of the error control operation in data transmission. Specifically, a hardware structure for parallel cyclic redundancy check (CRC) calculation is developed to speed up the error control operation of data transmission. Based on a study of the properties of both CRC and checksum (CS), a new error detecting scheme is developed which combines CRC and CS. It is also shown that the proposed error detecting scheme ensures high reliability and performance of the error control operation in data transmission in comparison to CRC alone.",2009,0, 591,High impedance fault detection in distribution networks using support vector machines based on wavelet transform,"In this paper a new pattern recognition based algorithm is presented to detect high impedance faults (HIF) in distribution networks. In this method, using the wavelet transform (WT), the time-frequency based features of the current waveform up to 6.25 kHz are calculated. To extract the best feature set from the generated time-frequency features, two methods including principal component analysis (PCA) and linear discriminant analysis (LDA) are used, and then support vector machines (SVM) are used as a classifier to distinguish HIFs, with and without a broken conductor, from other similar phenomena such as capacitor bank switching, no-load transformer switching, load switching and harmonic loads including induction motors and arc furnaces. The results show the high accuracy of the proposed method in the detection task.",2008,0, 592,Exploring Quality Metrics to Support Defect Management Process in a Multi-site Organization - A Case Study,"In large software development projects, the number of defects can be considerably high and defect management can become even more challenging when the development is distributed over several sites. Defect reduction solutions and commonly agreed defect management methods are needed to handle the defects and to meet the target quality level of the software, measured by the number of open defects. In this study, a combination of three quality metrics was used to support the defect management process in four consecutive multi-site software development programs involving several hundred people, and the result was compared to a program not using the described quality criteria set. According to the results, defect closing speed was improved, the number of open defects was reduced, and defects were reported earlier in programs that were using the quality metrics.",2008,0, 593,The research on the relation between magnetic leakage signal character and defect character,"In magnetic flux leakage (MFL) testing of pipelines, quantitatively classifying defects remains a difficult problem. The mathematical model of the MFL field is presented based on the finite element (FE) method, and a two-dimensional axisymmetric FE model of the MFL testing system is also established using ANSYS software to simulate magnetostatics. The results show that different defects cause different signals. There are certain functional relations between the defect characteristics and the MFL signal characteristics.
Defects are quantitatively classified by the MFL signal characteristics.",2008,0, 594,Design aspects and pattern prediction for phased arrays with subarray position errors,"In modern array design, the antenna elements are often grouped into mechanical units such as printed antenna boards and mechanical subarrays/multipacks. This contributes to a more cost efficient manufacturing process and facilitates integration, handling, reuse and exchange of units, but it also makes the antenna element position errors correlated. Classical papers predict the statistical sidelobe level based on the assumption of uncorrelated errors, but using this for the general case, the statistical sidelobe level is underestimated. In this paper, the statistical sidelobe level for arrays with correlated position errors is predicted. Furthermore, rules of thumb relating antenna element position tolerances and mechanical array design to antenna array performance (sidelobe level) are given. Finally, array design aspects are discussed.",2010,0, 595,Genetic algorithms applied to optimal tolerance levels of multiattribute inspection errors,"In a modern manufacturing environment, inspection equipment can often deal with more than one quality characteristic simultaneously. At the design stage of such inspection equipment, it is necessary to identify the optimal combination of inspection error tolerance levels for multiple attributes. We suggest a genetic algorithm by which one can determine the optimal tolerance levels of errors for multiple inspection attributes at a minimum cost of ownership (COO). The COO model is formulated as a function of not only the initial purchase cost but also the inspection cost over the lifetime. Our approach is expected to contribute effectively to the marketing as well as the manufacturing of inspection equipment.",2003,0, 596,Performance of restricted earth fault protection scheme in the presence of current transformer remanence,"In modern power system protection, an accurate transformation of the primary short circuit current is vital to ensure correct operation of the high impedance Restricted Earth Fault (REF) protection scheme used for transformer protection. The dc component in the fault current as well as the remanent flux can cause severe saturation conditions if the current transformer is not selected correctly. The behavior of the current transformer during the transient condition is important as this will determine the stability of the REF protection scheme. This paper discusses how the remanence phenomenon, besides the dc component in the fault current, can cause severe saturation of the current transformer and affect the stability of the REF protection scheme.",2008,0, 597,Stiffness and load free transmission error for the Multi-Flexible-Body-Dynamics (MFBD) Simulation of a wind turbine gearbox using a FE-based tooth contact analysis,In most MFBD systems it is possible to use a gear element to create a dynamic model of a wind turbine gearbox. This article shows a possibility to calculate the stiffness and the load free transmission error for this element using a FE-based tooth contact analysis.,2008,0, 598,"Organized, automatic registration and analysis of performance and fault processes [in nuclear power plants]","In nuclear power plants in Slovakia and in the Czech Republic, organized tests are being carried out on electric equipment, in order to verify their reliability.
This paper introduces methods for the calculation of derived electric quantities",2001,0, 599,Correction for continuous motion in small animal PET,"In small animal PET imaging experiments, animals are generally required to be anaesthetized to avoid motion artifacts. However, anaesthesia can alter biochemical pathways within the brain, thus affecting the physiological parameters under investigation. The ability to image conscious animals would overcome this problem and open up the possibility of entirely new investigational paradigms.",2008,0, 600,Implicit Social Network Model for Predicting and Tracking the Location of Faults,"In software testing and maintenance activities, the observed faults and bugs are reported in bug report managing systems (BRMS) for further analysis and repair. According to the information provided by bug reports, developers need to find out the location of these faults and fix them. However, bug locating usually involves intensively browsing back and forth through bug reports and software code, and thus incurs unpredictable costs of labor and time. Hence, establishing a robust model to efficiently and effectively locate and track faults is crucial to facilitate software testing and maintenance. In our observation, some related bug locations are tightly associated with the implicit links among source files. In this paper, we present an implicit social network model using PageRank to establish a social network graph with the extracted links. When a new bug report arrives, the prediction model provides users with likely bug locations according to the implicit social network graph constructed from the co-cited source files. The proposed approach has been implemented in real-world software archives and can effectively predict correct bug locations.",2008,0, 601,Error Analysis of the Complex Kronecker Canonical Form,"In some interesting applications in control and system theory, i.e. in engineering, in ecology (Leslie population model), and in financial/actuarial (Leontief multi input - multi output) science, linear descriptor (singular) differential/difference equations with time-invariant coefficients and (non-) consistent initial conditions have been extensively used. The solution properties of those systems are based on the Kronecker canonical form, which is an important component of the Matrix Pencil Theory. In this paper, we present some preliminary results for the error analysis of the complex Kronecker canonical form based on the Euclidean norm. Finally, under some weak assumptions an interesting new necessary condition is also derived.",2010,0, 602,"Correction [to ""Taking Advantage of Mutual Coupling in Radio-Communication Systems Using a Multi-Port Antenna Array"" [Aug 07 208-220]","In the above titled paper (ibid., vol. 49, no. 4, pp. 208-220, Aug 07), several items required correction, including equation (28). The corrections to the text and equation are presented here.",2007,0, 603,Mean-square-error reduction for quantized FIR filters,"In this article the author discusses fundamental properties of the canonic signed digit (CSD) fixed point representation of numbers. Although the properties of the CSD format are well known from the literature, published proofs are tedious and occupy a lot of columns of text. Here the problem has been reduced to a combinatorial counting problem. The tool for this reduction is a ""drawer lemma"" - a lemma about the distribution of identical objects in drawers or holes.
Next, an algorithm is proposed for the computation and quantization of canonic signed digit (CSD) coefficients in a constant-coefficient multiplierless FIR filter. The algorithm is proven to be optimal in the mean square error sense. The algorithm is recurrent and unexpectedly simple, so it can be easily implemented inside any mathematical program such as MATLAB or MATHCAD",2006,0, 604,Curve fitting algorithm using iterative error minimization for sketch beautification,"In previous sketch recognition systems, curves have been fitted by somewhat heuristic methods. In this paper, we solve the problem by finding the optimal parameters of a quadratic Bezier curve and minimizing the error between an input curve and a fitting curve using iterative error minimization. First, we interpolate the input curve to compute the distance, because the input curve consists of a set of sparse points. Then, we define the objective function. To find the optimal parameters, we assume that the initial parameters are known. Then, we derive the gradient vector with respect to the current parameters, and the parameters are updated by the gradient vector. These two steps are repeated until the error is no longer reduced. In the experiment, the average approximation error of the proposed algorithm was 0.946433 for about 1400 synthesized curves, and this result demonstrates that a given curve can be fitted very closely by using the proposed fitting algorithm.",2008,0, 605,An integration of H.264 based Error Concealment technique and the SPLIT layer protocol,"In recent years Error Correction, Error Concealment, Error Control and Fairness techniques have been developed to counter the inevitable errors encountered on a non-ideal network. The Split-Layer Video Multicast Protocol (SPLIT) has been uniquely designed to take advantage of these techniques. Its primary advantage is its ability to conserve bandwidth while maintaining a level of satisfactory quality. The basic principle is to conceal errors that are created by dropping a layer due to network congestion, and to recover the quality using already developed error concealment techniques. It is the intention of this paper to research and develop this integration of SPLIT with an established error concealment algorithm.",2006,0, 606,A multi-path routing protocol with fault tolerance in mobile ad hoc networks,"In recent years much research has focused on ad-hoc networks, mainly because of their independence from any specific structure. These networks suffer from frequent and rapid topology changes that cause many challenges in their routing. Most of the routing protocols try to find a path between source and destination nodes; because any path will expire after a short period, the path reconstruction may cause network inefficiency. The proposed protocol builds two paths between source and destination and creates backup paths during the route reply process, route maintenance process and local recovery process in order to improve the data transfer and the fault tolerance. The protocol performance is demonstrated by using the simulation results obtained from the global mobile simulation software (GloMoSim).
The experimental results show that this protocol can decrease the packet loss ratio compared with DSR and SMR, and it is useful for applications that need a high level of reliability.",2009,0, 607,Common Software-Aging-Related Faults in Fault-Tolerant Systems,"In recent years, remarkable attention has been focused on software aging phenomena, in which the performance of software systems degrades with time. Fault-tolerant software systems which provide high assurance may suffer from such phenomena. Based on the common software-aging-related faults in fault-tolerant systems, a behavior model of a double-version fault-tolerant software system is established using a Markov reward model. The performance of the system, such as the expected service rate in steady state, is evaluated and a sensitivity analysis of some parameters is performed.",2008,0, 608,Building a Transformer Defects Database for UHF Partial Discharge Diagnostics,"In the case of a defective transformer, when a partial discharge is detected and recorded, critical information can be deduced from its pattern, such as the type of defect, its criticality or even information on the level of degradation of the insulation. This information can help to determine the remaining life of the transformer and thus provide criteria for its maintenance and operation. In this paper different artificial PD patterns, representative of specific transformer defects, will be recorded in the laboratory in order to build a database for comparison purposes when measuring on-line. This can greatly improve the recognition and identification of the defect and thus help draw some important life assessment conclusions on the transformer.",2007,0, 609,Extraction error modeling and automated model debugging in high-performance custom designs,"In the design cycle of high-performance integrated circuits, it is common that certain components are designed directly at the transistor level. This level of design representation may not be appropriate for test generation tools that usually require a model expressed at the gate level. Logic extraction is a key step in test model generation to produce a gate-level netlist from the transistor-level representation. This is a semi-automated process which is error-prone. Once a test model is found to be erroneous, manual debugging is required, which is a resource-intensive and time-consuming process. This paper presents an in-depth analysis of typical sets of extraction errors found in the test model representations of the pipelines in high-performance designs today. It also develops an automated debugging solution for single extraction errors for pipelines with no state equivalence information. A suite of experiments on circuits with architecture similar to that found in industry confirms the fitness and practicality of the solution",2006,0, 610,Learning fault-tolerance from nature,"In the last decade, there has been a considerable increase of interest in fault-tolerant computing due to dependability problems related to process scaling, embedded software, and ubiquitous computing. In this paper, we discuss an approach to fault-tolerance which is inspired by biological systems. Biological systems are capable of maintaining their functionality under a variety of genetic changes and external perturbations. They have natural self-healing, self-maintaining, self-replicating and self-assembling mechanisms.
We present experimental and numerical evidence that the intrinsic fault-tolerance of biological systems is due to the dynamical phase in which the gene regulatory network operates. The dynamical phase is, in turn, determined by the subtle way in which redundancy is allocated in the network. By understanding the principles of redundancy allocation at the genetic level, we may find ways to build chips that possess the inherent fault-tolerance of biological systems.",2008,0, 611,3D-3 Classification of Defects for Guided Waves Inspected Pipes by a Neural Network Approach,"In this paper the effectiveness of a procedure that allows the characterization of flaws in pipes inspected by long range guided waves is investigated. The method performs the extraction of correlation coefficients between the x, y, z components of the displacement of simulated guided waves reflected by defects on pipes. These features feed a neural network classifier which evaluates the dimensions of defects of well defined geometry on the pipe under test. The results show lower error rates in the evaluation of both the angular and axial extent of a defect.",2007,0, 612,Fault detection techniques for effective line side asset monitoring,"In this paper the results of current research into the state-of-the-art in predictive fault detection and diagnosis methods for railway line-side assets are presented. Research to date has mainly focussed on point machines, track circuits and level crossing systems. It will be argued, through the use of examples, that the most appropriate method for robust fault detection is based around generic models that are tuned for a particular instance of an asset. Furthermore, once a fault has been detected, it is necessary to have a priori knowledge of the symptoms that are observable under fault conditions to reliably diagnose faults.",2005,0, 613,Faults digital library in burning process,"In this paper a fault library was developed for the steam boiler, with the accent on the study of situations when faults occur in the burning process (furnace). This library is used in the monitoring and control system, the main objective consisting in the detection and localization of different faults. The system performs fault detection in real time, comparing the process response with the responses of the models from the digital library when the same commands or signals are applied in the same way as in the real process. In the last two sections of the paper, the application of the fault detection algorithm based on the plant model of the burning process (included in this paper) is developed, together with graphical results of fault detection experiments.",2008,0, 614,Correction of the Off-Axis Reflector Beam Squint in Passive Images of the Fourth Stokes Parameter at 91 GHz,"In this paper we analyze the effects of the antenna beam squint on images of the fourth Stokes parameter V taken by the fully polarimetric imager SPIRA and suggest methods to correct them. The first images of complex scenery, containing man-made and natural objects, showed unexpected features in the V-parameter with amplitudes of up to 30 K, depending on the contrast in the total intensity images. They have pronounced variation in the elevation direction, whereas in azimuth (horizontal) only small changes are observed. A simple relation can be established between the measured fourth Stokes parameter and the scene brightness distributions in V and the total intensity I.
It allows correction of the beam-squint effects in the V Stokes parameter image using image processing methods. Another, less general method could be more easily applied to SPIRA images, achieving a comparable enhancement. In this way, the beam-squint effects were reduced down to the uncertainty of the instrumental polarimetric calibration.",2007,0, 615,Sampling error analysis applied to high-accuracy measurements,In this paper we apply a mathematical analysis of the main error sources in sampling theory to estimate alias and integration errors in asynchronous digital sampling measurements.,2008,0, 616,On the structure and performance of a novel blind source separation based carrier phase synchronization error compensator,"In this paper we carry out a detailed performance analysis of a novel blind-source-separation (BSS) based DSP algorithm that tackles the carrier phase synchronization error problem. The results indicate that the mismatch can be effectively compensated during normal operation as well as in rapidly changing environments. Since the compensation is carried out before any modulation specific processing, the proposed method works with all standard modulation formats and lends itself to efficient real-time custom integrated hardware or software implementations.",2002,0, 617,Automatic correction of exposure problems in photo printer,In this paper we consider the problem of automatic enhancement of amateur photos in a photo printer. The purpose of the correction is to make photos more pleasant for an observer. Photos with various exposure problems and with poorly distinguishable details in shadow areas are analyzed. Our approach is based on contrast stretching and alpha-blending of both the brightness of the initial image and estimations of reflectance. For obtaining the reflectance estimation a simplified illumination model is used. The luminance is estimated using a bilateral filter. Reflectance is estimated using heuristic functions of the ratio between the brightness of the initial image and the estimated luminance. The correction parameters are chosen adaptively based on histogram analysis. Noise suppression and some sharpening occur during correction. Time and memory optimization issues are considered. Look-up tables and a recursive separable bilateral filter are applied to speed up the algorithm. The quality of the algorithm is evaluated by surveying observers' opinions and by comparisons with already existing software and hardware solutions for local shadow correction,2006,0, 618,In-flight fault detection and isolation in aircraft flight control systems,"In this paper we consider the problem of test design for real-time fault detection and isolation (FDI) in the flight control system of fixed-wing aircraft. We focus on the faults that are manifested in the control surface elements (e.g., aileron, elevator, rudder and stabilizer) of an aircraft. For demonstration purposes, we restrict our focus to faults belonging to nine basic fault classes. The diagnostic tests are performed on the features extracted from fifty monitored system parameters. The proposed tests are able to uniquely isolate each of the faults at almost all severity levels.
A neural network-based flight control simulator, FLTZ®, is used for the simulation of various faults in fixed-wing aircraft flight control systems for the purpose of FDI",2005,0, 619,Experimental evaluation of differential thermal errors in magnetoelastic stress sensors for Re<180,"In this paper we discuss laboratory results from solenoidal magnetoelastic measurements performed on a sample of bridge suspension cable. The experiments were designed to answer basic questions regarding the potential effects of inhomogeneous temperature fields and differential thermal errors between sensor and sample on the accuracy of coercive force measurements. The experiments were conducted with a sample of suspension cable installed in a temperature-controlled thermal chamber. Internal cable temperatures were measured with fine thermocouples which did not breach the sheath, permitting knowledge of the temperature field to be obtained without altering heat transfer. Two sets of experiments were conducted. First, magnetic measurements were taken with the sensor/cable in thermal equilibrium at two different temperatures, and at intermediate points when the temperatures of different ferromagnetic components of the system varied. The magnitudes of different heat transfer effects were quantified, and correlated with estimates of the coercive force. Secondly, in order to judge the absolute accuracy of the measurements and to obtain data for further optimizing computer simulations, reference magnetic data was measured using two techniques on a single wire sample of the cable steel. These results are an important step in quantifying the overall accuracy of these sensors.",2002,0, 620,Failure semantics of mobile agent systems involved in network fault management,In this paper we examine what failure semantics are desirable for services provided by a mobile agent system (MAS) when assuming the MAS to be part of a network fault management system. We also present results from an evaluation project where the failure semantics of state-of-the-art MAS were examined,2000,0, 621,Worst case reliability prediction based on a prior estimate of residual defects,"In this paper we extend an earlier worst case bound reliability theory to derive a worst case reliability function R(t), which gives the worst case probability of surviving a further time t given an estimate of residual defects in the software N and a prior test time T. The earlier theory and its extension are presented and the paper also considers the case where there is a low probability of any defect existing in the program. For the ""fractional defect"" case, there can be a high probability of surviving any subsequent time t. The implications of the theory are discussed and compared with alternative reliability models.",2002,0, 622,Analyzing and Improving the Simulation Algorithm of IEEE 802.11DCF Error Frame Model in QualNet Simulator,"In this paper we focus on the IEEE 802.11 DCF error frame model. Based on a detailed analysis of the model, we point out the error in the simulation algorithm of the DCF error frame model in the QualNet simulation environment. Furthermore, we modify the simulation algorithm in QualNet strictly according to the specifications of the DCF error frame model.
The simulation results show that compared with the original simulation algorithm, the modified algorithm can simulate the related specifications of the DCF error frame model more correctly, and thus can provide more reliable simulation results.",2010,0, 623,A Rule Based Approach for Skew Correction and Removal of Insignificant Data from Scanned Text Documents of Devanagari Script,"In this paper we present a rule based approach for removing insignificant data and skew from scanned documents of Devanagari script. Developing an OCR system for Devanagari script is not an easy job; hence proper preprocessing of these scanned documents requires noise removal and correction of skew in the image. The proposed system is based on rule based methods, morphological operations and connected component labeling. Images used for the experiment are binarised grayscale images. Experiments and results show that the presented method is robust for preprocessing scanned images of Devanagari text documents.",2007,0, 624,Topological Properties of a New Fault Tolerant Interconnection Network for Parallel Computer,"In this paper we introduce a new interconnection network, the extended varietal hypercube with cross connection, denoted by EVHC(n,k). This network has a hierarchical structure and it overcomes the poor fault tolerance properties of the extended varietal hypercube. This network has low diameter, constant degree connectivity and low message traffic density.",2008,0, 625,Error resilience tools in the MPEG-4 and H.264 video coding standards,"In this paper we introduce the error resilience tools which are used by the MPEG-4 and H.264 video coding standards. MPEG-4 as well as H.264 offers error resilience tools which were implemented in older video coding standards, but they also introduce new tools and improve the efficiency of previously used tools. This paper offers only a brief review of them, but it focuses on their most important characteristics.",2008,0, 626,Closed-Form Error Analysis of Dual-Hop Relaying Systems over Nakagami-m Fading Channels,"In this paper we investigate the end-to-end performance of dual-hop relaying systems over non-identical Nakagami-m fading channels. Our analysis considers channel state information (CSI) assisted relays that just amplify and retransmit the information signal, also known as ""non-regenerative"" relays. New closed-form expressions for the average bit error probability (ABEP) are derived. The proposed expressions apply to general operating scenarios with distinct Nakagami-m fading parameters and average signal to noise ratios (SNRs) between the hops. When the fading parameter is an odd multiple of one half, the ABEP is expressed in terms of hypergeometric functions. When m takes any real non-integer value, the obtained results involve the fourth Appell's hypergeometric function. For an arbitrary fading parameter, an analysis of such a scheme is performed using the well known moment-based approach.",2010,0, 627,Unified Architectural Support for Soft-Error Protection or Software Bug Detection,"In this paper we propose a unified architectural support that can be used flexibly for either soft-error protection or software bug detection. Our approach is based on dynamically detecting and enforcing instruction-level invariants. A hardware table is designed to keep track of run-time invariant information. During program execution, instructions access this table and compare their produced results against the stored invariants.
Any violation of the predicted invariant suggests a potential abnormal behavior, which could be the result of a soft error or a latent software bug. In the case of a soft error, monitoring invariant violations provides opportunistic soft-error protection to multiple structures in processor pipelines. Our experimental results show that invariant violations detect soft errors promptly and, as a result, simple pipeline squashing is able to fix most of the detected soft errors. Meanwhile, the same approach can be easily adapted for software bug detection. The proposed architectural support eliminates the substantial performance overhead associated with software-based bug-detection approaches and enables continuous monitoring of production code.",2007,0, 628,Fault tolerant multipath routing with overlap-aware path selection and dynamic packet distribution on overlay network for real-time streaming applications,"In this paper we propose overlap-aware path selection and dynamic packet distribution upon failure detection in a multipath routing overlay network. Real-time communications that utilize UDP do not ensure reliability, in order to realize fast transmission. Therefore congestion or failure in a network deteriorates the quality of service significantly. The proposed method seeks an alternate path that hardly overlaps the IP path so as to improve reliability. The proposed method also detects congestion or failure by the differential of the packet loss rate and apportions packets to the IP path and the alternate path dynamically. Evaluation on PlanetLab shows that the proposed method avoids congestion. Consequently, the influence of congestion and failure lessens, and the proposed multipath routing improves reliability, which makes it usable for real-time communications.",2008,0, 629,"Early resynchronization, error detection and error concealment for reliable video decoding","In this paper we propose a robust video decoding scheme using early resynchronization, error detection (ED) and error concealment (EC). Many video coding standards such as H.263+ and MPEG-2 make the compressed video signals more vulnerable to channel errors because of error propagation due to the use of variable length coding (VLC). Channel errors can not only result in false decoding of the current bits but also make the decoder lose synchronization. The data between the errors and the next resynchronization marker (RM) will be discarded even if they are not corrupted by channel errors. Early resynchronization can retrieve unaffected data between the errors and the next RM, which can greatly reduce error propagation due to the use of VLC. We combined early resynchronization with deliberately designed ED and EC technologies and tested three early resynchronization schemes in our simulation. It has been found that the third scheme can give the best tradeoff between the final reconstructed image quality and searching times.",2003,0, 630,A note on fault diagnosis algorithms,"In this paper we review algorithms for checking diagnosability of discrete-event systems and timed automata. We point out that the diagnosability problems in both cases reduce to the emptiness problem for (timed) Büchi automata. Moreover, it is known that checking whether a discrete-event system is diagnosable can also be reduced to checking bounded diagnosability. We establish a similar result for timed automata.
We also provide a synthesis of the complexity results for the different fault diagnosis problems.",2009,0, 631,A new scheduling algorithm for dynamic task and fault tolerant in heterogeneous grid systems using Genetic Algorithm,"In this paper, after studying all the parameters of the grid environment, a new scheduling algorithm for independent tasks based on a Genetic Algorithm is introduced. This algorithm can be more efficient and more dependable than similar previous algorithms. The simulation results show the reasons for reaching a better makespan and more efficiency in the grid environment. In grids with a high fault rate, a checkpoint method is used for fault tolerance, which is more efficient than other methods such as retry, migration and replication. This method maintains effective efficiency in these situations. Thus the quality of service increases in various grid environments and the average time of task recovery decreases considerably. The main purpose of this paper is to reduce the number of generations in the Genetic Algorithm in order to reach higher speed, while also considering the communication costs (included in the fitness function) and maintaining the fitness efficiency. The simulations are done with GridSim and show the improvement of the proposed algorithm over previous algorithms.",2010,0, 632,JPEG 2000 backward compatible error protection with Reed-Solomon codes,"In this paper, a backward compatible header error protection mechanism is described. It consists of the addition of a dedicated marker segment to a JPEG 2000 codestream, which will contain the error correction data generated by a block error correction code (e.g. a Reed-Solomon code). This mechanism leaves the original data intact, hence providing backward compatibility with the already standardised JPEG 2000. Neither side information from a higher level nor extra signalling encapsulation is needed, as the required information is directly embedded in the codestream and also protected. Finally, it is shown how this mechanism can be used to perform unequal error protection of the whole JPEG 2000 stream.",2003,0, 633,Evaluation of two algorithms of multiple corrections on complementarity conditions,"In this paper, a careful analysis and evaluation of two approaches based on multiple corrections on complementarity conditions in the nonlinear predictor-corrector primal-dual interior point algorithm (PCPDIPA) is carried out. Unlike other methodologies that improve the performance of the PCPDIPA from the perspective of problem modeling/formulation, these two approaches directly tackle the centrality issue of the algorithm by re-using the available matrix factorization. Several parameters controlling the performance of these two approaches are identified by a series of numerical tests on several large-scale cases. Numerical comparison results are provided as indices for choosing an appropriate centrality correction methodology for other power system applications.",2004,0, 634,Fault Diagnosis Implementation of Induction Machines based on Advanced Digital Signal Processing Techniques,"In this paper, a comprehensive cross correlation-based fault diagnostic method is proposed for real-time DSP implementation. It covers both the fault monitoring and decision making stages. In practice, a motor driven by an adjustable speed drive is run at various operating points where the frequency, amplitude and phase of the fault signatures vary with time. These dynamic changes are considered one of the common factors that yield erroneous fault tracking and unstable fault detection. In this paper, the proposed algorithms deal with the operating point dependent ambiguities and threshold issues. It is theoretically and experimentally verified that the motor fault can continuously be tracked when the operating point changes within a limited range.",2009,0, 635,Fuzzy Time-Frequency defect classifier for NDT applications,"In this paper, a customized classifier is presented for the industry-practiced nondestructive evaluation (NDE) protocols using a hybrid-fuzzy inference system (FIS) to classify and characterize the defects commonly present in the steel pipes used in the gas/petroleum industry. The presented system is hybrid in the sense that it utilizes both soft computing through fuzzy set theory and conventional parametric analysis through time-frequency (TF) methods. Various TF transforms have been tested and the most suitable one for this application, the multiform tiltable exponential distribution (MTED), is presented here. Four defining states are considered in the paper: slag, porosity, crack, and lack-of-fusion, representing the four most critical types of defects present in welds on the pipes. The necessary features are calculated using the TF coefficients and are then supplied to the fuzzy inference system as input to be used in the classification. The resulting system has shown excellent defect classification with very low misclassification and false alarm rates.",2009,0, 636,On the Optimization of Local and End-to-end Forward Error Correction,"In this paper we investigate where best to use forward error correction if one or more mobile radio links are included on a path. On the one hand, if the errors are handled locally where they appear, the knowledge of the channel conditions is better and no extra redundancy will have to traverse the other links. On the other hand, the requirements of the application are better known at the end nodes, hence the error correction can be better tuned to the needs of the application end-to-end. The first aspect investigated is the effect of a correlated error process, which is exemplified by a fading radio channel. The length of the error correcting code and the burst tolerance are essential for performance when correlation is taken into account; therefore packet level coding applied end-to-end is efficient at high correlation whereas local bit error correction is more efficient at low correlation. TCP is used to exemplify the difficulties in estimating the parameters needed to implement effective coding both locally and end-to-end. Furthermore, simulation results show how the locally optimal parameters differ depending on the end-to-end path. Performance comparisons in several cases demonstrate that end-to-end forward error correction (FEC) can often be efficient to mitigate problems that are in principle local to a specific link.",2005,0, 637,Memory fault diagnosis by syndrome compression,"In this paper we present a data compression technique that can be used to speed up the transmission of diagnosis data from an embedded RAM with built-in self-diagnosis (BISD) support. The proposed approach compresses the faulty-cell address and March syndrome to about 28% of the original size under the March-17N diagnostic test algorithm. The key component of the compressor is a novel syndrome-accumulation circuit, which can be realized by a content-addressable memory. Experimental results show that the area overhead is about 0.9% for a 1Mb SRAM with 164 faults. The proposed compression technique reduces the time for diagnostic test, as well as the tester storage capacity requirement",2001,0, 638,An Efficient Fault-Tolerant Routing Methodology for Meshes and Tori,"In this paper we present a methodology to design fault-tolerant routing algorithms for regular direct interconnection networks. It supports fully adaptive routing, does not degrade performance in the absence of faults, and supports a reasonably large number of faults without significantly degrading performance. The methodology is mainly based on the selection of an intermediate node (if needed) for each source-destination pair. Packets are adaptively routed to the intermediate node and, at this node, without being ejected, they are adaptively forwarded to their destinations. In order to allow deadlock-free minimal adaptive routing, the methodology requires only one additional virtual channel (for a total of three), even for tori. Evaluation results for a 4 x 4 x 4 torus network show that the methodology is 5-fault tolerant. Indeed, for up to 14 link failures, the percentage of fault combinations supported is higher than 99.96%. Additionally, network throughput degrades by less than 10% when injecting three random link faults without disabling any node. In contrast, a mechanism similar to the one proposed in the BlueGene/L, which disables some network planes, would strongly degrade network throughput by 79%.",2004,0, 639,Nonparametric model error bounds for control design in the presence of nonlinear distortions,"In this paper we present a procedure to generate nonparametric bounds on the model errors of a measured frequency response function that are due to nonlinear distortions. In a first step the nonlinear system is represented by a linear system (the best linear approximation) plus a noise source that accounts for the unmodelled nonlinear effects. In a second step, the model error bounds are calculated starting from the measured noise source characteristics. The whole process is embedded in a simple and time efficient experimental procedure",2001,0, 640,Analysis of stator short-circuit faults for induction machine using finite element modeling,In this paper we present an analysis of the different types of short-circuit (sc) that may affect the stator windings by means of a finite element model. Three cases of short-circuit were simulated on the electrical circuit of the stator. The machine is modelled in transient magnetic (magneto-evolving) mode.,2010,0, 641,Design optimization of time- and cost-constrained fault-tolerant distributed embedded systems,"In this paper we present an approach to the design optimization of fault-tolerant embedded systems for safety-critical applications. Processes are statically scheduled and communications are performed using the time-triggered protocol. We use process re-execution and replication for tolerating transient faults. Our design optimization approach decides the mapping of processes to processors and the assignment of fault-tolerant policies to processes such that transient faults are tolerated and the timing constraints of the application are satisfied. We present several heuristics which are able to find fault-tolerant implementations given a limited amount of resources. The developed algorithms are evaluated using extensive experiments, including a real-life example.",2005,0, 642,Fault Recognition on Power Networks via SNR Analysis,"In this paper we present, and to an extent analyze, findings from a real installation of broadband over power lines done by the Greek Public Power Corporation in the region of Larissa. The findings indicate the correlation between faulty equipment on the power networks and the signal-to-noise ratio of the induced power-line signal, thus showing the possibility of developing a new method of fault recognition in power networks.",2009,0, 643,Design and implementation of a pluggable fault tolerant CORBA infrastructure,"In this paper we present the design and implementation of a Pluggable Fault Tolerant CORBA Infrastructure that provides fault tolerance for CORBA applications by utilizing the pluggable protocols framework that is available for most CORBA ORBs. Our approach does not require modification to the CORBA ORB, and requires only minimal modifications to the application. Moreover, it avoids the difficulty of retrieving and assigning the ORB state, which arises when the fault tolerance mechanisms are incorporated into the ORB. The Pluggable Fault Tolerant CORBA Infrastructure achieves performance that is similar to, or better than, that of other Fault Tolerant CORBA systems, while providing strong replica consistency.",2002,0, 644,"Fault detection, diagnosis and control in a tactical aerospace vehicle","In this paper we propose a fault-tolerant control (FTC) scheme using multiple controller switching. The performance of this scheme is studied on a tactical aerospace vehicle. A parity space (PS) based residual generation approach is used to detect the fault. Once a fault is detected, the diagnosis scheme identifies the faulty actuator. Using this information, on-line reconfiguration of the controller is done based on the configuration of the existing healthy actuators. To implement this scheme no modifications were made to the hardware (H/W) configuration and only existing redundancies were utilised. Simulation with a nonlinear 6-degree-of-freedom (6-DoF) model shows that the above fault-tolerant control approach is able to reduce the probability of failure due to actuators.",2003,0, 645,Intelligent fault diagnosis system based on UML,"In this paper, the ordinary software engineering method and the object-oriented method are united. Object-oriented analysis, object-oriented design and object-oriented modeling of the intelligent fault diagnosis system with UML are introduced, which decreases the complexity of the intelligent fault diagnosis system and makes it more manageable. The diagnostic reasoning adopts the techniques of expert systems and reasoning under uncertainty. The intelligent fault diagnosis system is a very important part of the power station simulation system. With the data gathered from the devices or the converted parameters, it can find the devices which are out of order by reasoning.",2009,0, 646,Joint Fault-Tolerant Design of the Chinese Space Robotic Arm,"In this paper, joint reliability design for the Chinese space robotic arm has been discussed. The redundant controller unit, redundant CAN bus communication unit and latch-up power protection unit have been outlined. The fault tree of the joint has been built. Moreover, a new auto-adjusting threshold algorithm has been presented for fault detection, and the fault-tolerant strategies of the joint have been proposed. Experimental results demonstrate the effectiveness of the joint fault-tolerant design",2006,0, 647,Error density metrics for business process model,"In this paper, metrics for business process models (BPM) are proposed, which are capable of measuring the usability and effectiveness of BPMs. The proposed model adapts error density metrics to BPMs by considering the similarities between the conceptual characteristics of BPMs and software products. We applied seven software metrics for evaluating the quality of business processes/process models. Results show that our metrics help the organization to improve its processes, as weighted measurements are indicators of unexpected situations/behaviour of business processes.",2009,0, 648,Power system fault diagnosis modeling techniques based on encoded Petri nets,"In this paper, power system fault diagnosis based on Petri nets and coding theory is further studied. Previous research work is briefly reviewed. Characteristics of the Petri net model in power system fault diagnosis and identification are demonstrated in detail, and a fast model revision algorithm for power components is proposed, which makes the scheme more applicable to large-scale power networks. The method is tested on the IEEE 118-bus power system and simulation results show that the suggested approach is accurate by using error correction theory; model revision is easy and fast when the power network is expanded or the topology is changed, which makes the encoded Petri nets method more applicable in real power systems",2006,0, 649,Threshold calculation using LMI-technique and its integration in the design of fault detection systems,"In this paper, problems related to the threshold calculation and its integration in the design of observer-based fault detection systems are studied. The focus of the study is the application of practical fault detection methods in the framework of observer-based fault detection schemes. The basic idea consists in the formulation of threshold calculation as some standard optimization problems which are then solved using the well-established LMI-optimization technique.",2003,0, 650,Channel frame error rate for Bluetooth in the presence of microwave ovens,"In this paper, radiation from microwave ovens is measured using PRISM, a custom-built device designed to measure transmissions in the ISM band. The measured signals, treated as a rising noise floor, are then applied to a semi-analytic simulation to determine the probability of frame error rate (FER) per channel for six Bluetooth packet types.",2002,0, 651,State Estimation and Fast Fault Detection For Ship Electrical Systems,"In this paper, the advantages of state estimation on ship power systems are demonstrated and extended to perform fast fault detection. For more-electric ships it becomes critical to monitor the system and respond to faults within milliseconds to limit damage from the fault and transfer to an intact supply. A compact ship allows synchronization of sampled real-time voltage and current data from electrical sensors to compute the phase angle and voltage magnitude at every bus on the system. On a ship power system the data samples can be collected every 0.5 milliseconds by the central computer. Bad data analysis, smoothed values for the operating condition, and differential current sensing are performed within milliseconds. An example of state estimation is given for an LPD17 assault ship. Other examples of using phase data and fault detection are given in the paper. Algorithms for remedial action may be activated by the results of the state estimation and fault detection.",2007,0, 652,Fundamental Limitations on Designing Optimally Fault-Tolerant Redundant Manipulators,"In this paper, the authors examine the problem of designing nominal manipulator Jacobians that are optimally fault tolerant to one or more joint failures. Optimality is defined here in terms of the worst-case relative manipulability index. While this approach is applicable to both serial and parallel mechanisms, it is especially applicable to parallel mechanisms with a limited workspace. It is shown that a previously derived inequality for the worst-case relative manipulability index is generally not achieved for fully spatial manipulators and that the concept of optimal fault tolerance to multiple failures is more subtle than previously indicated. Lastly, the authors identify the class of 8-DOF Gough-Stewart platforms that are optimally fault tolerant for up to two joint failures. Examples of optimally fault-tolerant 7- and 8-DOF mechanisms are presented.",2008,0, 653,On the Use of Behavioral Models for the Integrated Performance and Reliability Evaluation of Fault-Tolerant Avionics Systems,"In this paper, the authors propose an integrated methodology for the reliability and performance analysis of fault-tolerant systems. This methodology uses a behavioral model of the system dynamics, similar to the ones used by control engineers when designing the control system, but incorporates additional artifacts to model the failure behavior of the system components. These artifacts include component failure modes (and associated failure rates) and how those failure modes affect the dynamic behavior of the component. The methodology bases the system evaluation on the analysis of the dynamics of the different configurations the system can reach after component failures occur. For each of the possible system configurations, a performance evaluation of its dynamic behavior is carried out to check whether its properties, e.g., accuracy, overshoot, or settling time, which are called performance metrics, meet system requirements. After all system configurations have been evaluated, the values of the performance metrics for each configuration and the probabilities of going from the nominal configuration (no component failures) to any other configuration are merged into a set of probabilistic measures of performance. To illustrate the methodology, and to introduce a tool that the authors developed in MATLAB/SIMULINK® that supports this methodology, the authors present a case-study of a lateral-directional flight control system for a fighter aircraft",2006,0, 654,Intraoperative ultrasonography for the correction of brainshift based on the matching of hyperechogenic structures,"In this paper, a global approach based on 3D freehand ultrasound imaging is proposed to (a) correct the error of the neuronavigation system in image-patient registration and (b) compensate for the deformations of the cerebral structures occurring during a neurosurgical procedure. The rigid and non-rigid multimodal registrations are achieved by matching the hyperechogenic structures of the brain. The quantitative evaluation of the non-rigid registration was performed within a framework based on synthetic deformation.
These dynamic changes are considered as one of the common factor that yields erroneous fault tracking and unstable fault detection. In this paper, the proposed algorithms deals with the operating point dependent ambiguities and threshold issues. It is theoretically and experimentally verified that the motor fault can continuously be tracked when the operating point changes within a limited range.",2009,0, 635,Fuzzy Time-Frequency defect classifier for NDT applications,"In this paper, a customized classifier is presented for the industry-practiced nondestructive evaluation (NDE) protocols using a hybrid-fuzzy inference system (FIS) to classify the and characterize the defects commonly present in the steel pipes used in the gas/petroleum industry. The presented system is hybrid in the sense that it utilizes both soft computing through fuzzy set theory, as well as conventional parametric analysis through time-frequency (TF) methods. Various TF transforms have been tested and the most suitable one for this application, multiform tiltable exponential distribution (MTED), is presented here. Four defining states are considered in the paper; slag, porosity, crack, and lack-of-fusion, representing the four most critical types of defects present in welds on the pipes. The necessary features are calculated using the TF coefficients and are then supplied to the fuzzy inference system as input to be used in the classification. The resulting system has shown excellent defect classification with very low misclassification and false alarm rates.",2009,0, 636,On the Optimization of Local and End-to-end Forward Error Correction,"In this paper we investigate where best to use forward error correction if one or more mobile radio links is included on a path. On one hand, if the errors are handled locally where they appear, the knowledge of the channel conditions are better and no extra redundancy will have to traverse the other links. On the other hand, the requirements of the application are better known at the end nodes, hence the error correction can be better tuned to the needs of the application end-to-end. The first aspect investigated is the effect of a correlated error process, which is exemplified by a fading radio channel. The length of the error correcting code and the burst tolerance is essential for performance when correlation is taken into account, therefore packet level coding applied endto- end is efficient at high correlation whereas local bit error correction is more efficient at low correlation. TCP is used to exemplify the difficulties in estimating the parameters needed to implement effective coding both locally and end-to-end. Furthermore, simulation results show how the locally optimal parameters differ depending on the end-to-end path. Performance comparisons in several cases demonstrate that end-to-end forward error correction (FEC) can often be efficient to mitigate problems that are in principle local to a specific link.",2005,0, 637,Memory fault diagnosis by syndrome compression,"In this paper we present a data compression technique that can be used to speed up the transmission of diagnosis data from the embedded RAM with built-in self-diagnosis (BISD) support. The proposed approach compresses the faulty-cell address and March syndrome to about 28% of the original size under the March-17N diagnostic test algorithm. The key component of the compressor is a novel syndrome-accumulation circuit, which can be realized by a content-addressable memory. 
Experimental results show that the area overhead is about 0.9% for a 1Mb SRAM with 164 faults. The proposed compression technique reduces the time for diagnostic test, as well as the tester storage capacity requirement",2001,0, 638,An Efficient Fault-Tolerant Routing Methodology for Meshes and Tori,"In this paper we present a methodology to design fault-tolerant routing algorithms for regular direct interconnection networks. It supports fully adaptive routing, does not degrade performance in the absence of faults, and supports a reasonably large number of faults without significantly degrading performance. The methodology is mainly based on the selection of an intermediate node (if needed) for each source-destination pair. Packets are adaptively routed to the intermediate node and, at this node, without being ejected, they are adaptively forwarded to their destinations. In order to allow deadlock-free minimal adaptive routing, the methodology requires only one additional virtual channel (for a total of three), even for tori. Evaluation results for a 4 x 4 x 4 torus network show that the methodology is 5-fault tolerant. Indeed, for up to 14 link failures, the percentage of fault combinations supported is higher than 99.96%. Additionally, network throughput degrades by less than 10% when injecting three random link faults without disabling any node. In contrast, a mechanism similar to the one proposed in the BlueGene/L, that disables some network planes, would strongly degrade network throughput by 79%.",2004,0, 639,Nonparametric model error bounds for control design in the presence of nonlinear distortions,"In this paper we present a procedure to generate nonparametric bounds on the model errors of a measured frequency response function that are due to nonlinear distortions. In a first step the nonlinear system is represented by a linear system (the best linear approximation) plus a noise source that accounts for the unmodelled nonlinear effects. In a second step, the model error bounds are calculated starting from the measured noise source characteristics. The whole process is embedded in a simple and time efficient experimental procedure",2001,0, 640,Analysis of stator short-circuit faults for induction machine using finite element modeling,In this paper we present an analysis on the different types of short-circuit (sc) that may affect the stator windings by means of a finite element model. Three cases of short-circuit were simulated on the electrical circuit of stator. The machine is modelled in magneto-evolving.,2010,0, 641,Design optimization of time- and cost-constrained fault-tolerant distributed embedded systems,"In this paper we present an approach to the design optimization of fault tolerant embedded systems for safety-critical applications. Processes are statically scheduled and communications are performed using the time-triggered protocol. We use process re-execution and replication for tolerating transient faults. Our design optimization approach decides the mapping of processes to processors and the assignment of fault-tolerant policies to processes such that transient faults are tolerated and the timing constraints of the application are satisfied. We present several heuristics which are able to find fault-tolerant implementations given a limited amount of resources. 
The developed algorithms are evaluated using extensive experiments, including a real-life example.",2005,0, 642,Fault Recognition on Power Networks via SNR Analysis,"In this paper we present and to an extent analyze, findings from a real installation of broadband over power lines done by the Greek Public Power Corporation in the region of Larissa. The findings indicate the correlation between faulty equipment on the power networks and the signal-to-noise ratio of the power-line signal induced thus showing the possibility of developing a new method of fault recognition in power networks.",2009,0, 643,Design and implementation of a pluggable fault tolerant CORBA infrastructure,"In this paper we present the design and implementation of a Pluggable Fault Tolerant CORBA Infrastructure that provides fault tolerance for CORBA applications by utilizing the pluggable protocols framework that is available for most CORBA ORBS. Our approach does not require modification to the CORBA ORB, and requires only minimal modifications to the application. Moreover; it avoids the difficulty of retrieving and assigning the ORB state, by incorporating the fault tolerance mechanisms into the ORB. The Pluggable Fault Tolerant CORBA Infrastructure achieves performance that is similar to, or better than, that of other Fault Tolerant CORBA systems, while providing strong replica consistency.",2002,0, 644,"Fault detection, diagnosis and control in a tactical aerospace vehicle",In this paper we propose a fault-tolerant control ( FTC ) scheme using multiple controller switching. The performance of this scheme is studied on a tactical aerospace vehicle. A parity space (PS) based residual generation approach is used to detect the fault. Once a fault is detected the diagnosis scheme identifies the faulty actuator. Using this information on-line reconfiguration of the controller is done based on the configuration of the existing healthy actuator. To implement this scheme no modification were done in hardware (H/W) configuration and only existing redundancies were utilised. Simulation with nonlinear 6-degree of freedom (6-DoF) model shows that the above fault tolerant control approach is able to reduce the probability of failure due to actuators.,2003,0, 645,Intelligent fault diagnosis system based on UML,"In this paper, it is united that the ordinary software project method and the object-oriented method. The object-oriented analysis, the object-oriented design and the object-oriented modeling in the intelligent fault diagnosis system with UML is introduced, which decreases the complexity of the intelligent fault diagnosis system that is more manageable. The diagnostic reasoning adopts the technique of expert system and reasoning under uncertainty. The intelligent fault diagnosis system is a very important part of the power station simulation system. With the data gathered from the devices or the converted parameters, it can find the devices which are out of order by reasoning. The diagnostic reasoning adopts the technique of expert system and reasoning under uncertainty.",2009,0, 646,Joint Fault-Tolerant Design of the Chinese Space Robotic Arm,"In this paper, joint reliability design for the Chinese space robotic arm has been discussed. Redundant controller unit, redundant can bus communication unit and latch-up power protection unit have been outlined. The fault tree of the joint has been built. 
Moreover, a new auto-adjusting threshold algorithm has been presented for fault detection, and fault-tolerant strategies for the joint have been proposed. Experimental results demonstrate the effectiveness of the joint fault-tolerant design",2006,0, 647,Error density metrics for business process model,"In this paper, metrics for business process models (BPMs) are proposed, which are capable of measuring the usability and effectiveness of BPMs. The proposed model adapts error density metrics to BPMs by considering the similarities between the conceptual characteristics of BPMs and software products. We applied seven software metrics for evaluating the quality of business processes/process models. Results show that our metrics help organizations to improve their processes, as the weighted measurements are indicators of unexpected situations/behaviour in business processes.",2009,0, 648,Power system fault diagnosis modeling techniques based on encoded Petri nets,"In this paper, power system fault diagnosis based on Petri nets and coding theory is further studied. Previous research work is briefly reviewed. Characteristics of the Petri net model in power system fault diagnosis and identification are demonstrated in detail, and a fast model revision algorithm for power components is proposed, which makes the scheme more applicable to large-scale power networks. The method is tested on the IEEE 118-bus power system, and simulation results show that the suggested approach is accurate by using error correction theory; model revision is easy and fast when the power network is expanded or its topology is changed, which makes the encoded Petri net method more applicable in real power systems",2006,0, 649,Threshold calculation using LMI-technique and its integration in the design of fault detection systems,"In this paper, problems related to the threshold calculation and its integration in the design of observer-based fault detection systems are studied. The focus of the study is the application of practical fault detection methods in the framework of observer-based fault detection schemes. The basic idea consists in the formulation of threshold calculation as some standard optimization problems which are then solved using the well-established LMI-optimization technique.",2003,0, 650,Channel frame error rate for Bluetooth in the presence of microwave ovens,"In this paper, radiation from microwave ovens is measured using PRISM, a custom-built device designed to measure transmissions in the ISM band. The measured signals, treated as a rising noise floor, are then applied to a semi-analytic simulation to determine the probability of frame error rate (FER) per channel for six Bluetooth packet types.",2002,0, 651,State Estimation and Fast Fault Detection For Ship Electrical Systems,"In this paper, the advantages of state estimation on ship power systems are demonstrated and extended to perform fast fault detection. For more-electric ships, it becomes critical to monitor the system and respond to faults within milliseconds to limit damage from the fault and transfer to an intact supply. A compact ship allows synchronization of sampled real-time voltage and current data from electrical sensors to compute the phase angle and voltage magnitude at every bus on the system. On a ship power system the data samples can be collected every 0.5 milliseconds by the central computer. Bad data analysis, smoothed values for the operating condition, and differential current sensing are performed within milliseconds.
An example of state estimation is done for an LPD17 assault ship. Other examples of using phase data and fault detection are given in the paper. Algorithms for remedial action may be activated by the results of the state estimation and fault detection.",2007,0, 652,Fundamental Limitations on Designing Optimally Fault-Tolerant Redundant Manipulators,"In this paper, the authors examine the problem of designing nominal manipulator Jacobians that are optimally fault tolerant to one or more joint failures. Optimality is defined here in terms of the worst-case relative manipulability index. While this approach is applicable to both serial and parallel mechanisms, it is especially applicable to parallel mechanisms with a limited workspace. It is shown that a previously derived inequality for the worst-case relative manipulability index is generally not achieved for fully spatial manipulators and that the concept of optimal fault tolerance to multiple failures is more subtle than previously indicated. Lastly, the authors identify the class of 8-DOF Gough-Stewart platforms that are optimally fault tolerant for up to two joint failures. Examples of optimally fault-tolerant 7- and 8-DOF mechanisms are presented.",2008,0, 653,On the Use of Behavioral Models for the Integrated Performance and Reliability Evaluation of Fault-Tolerant Avionics Systems,"In this paper, the authors propose an integrated methodology for the reliability and performance analysis of fault-tolerant systems. This methodology uses a behavioral model of the system dynamics, similar to the ones used by control engineers when designing the control system, but incorporates additional artifacts to model the failure behavior of the system components. These artifacts include component failure modes (and associated failure rates) and how those failure modes affect the dynamic behavior of the component. The methodology bases the system evaluation on the analysis of the dynamics of the different configurations the system can reach after component failures occur. For each of the possible system configurations, a performance evaluation of its dynamic behavior is carried out to check whether its properties, e.g., accuracy, overshoot, or settling time, which are called performance metrics, meet system requirements. After all system configurations have been evaluated, the values of the performance metrics for each configuration and the probabilities of going from the nominal configuration (no component failures) to any other configuration are merged into a set of probabilistic measures of performance. To illustrate the methodology, and to introduce a tool that the authors developed in MATLAB/SIMULINK that supports this methodology, the authors present a case study of a lateral-directional flight control system for a fighter aircraft",2006,0, 654,Intraoperative ultrasonography for the correction of brainshift based on the matching of hyperechogenic structures,"In this paper, a global approach based on 3D freehand ultrasound imaging is proposed to (a) correct the error of the neuronavigation system in image-patient registration and (b) compensate for the deformations of the cerebral structures occurring during a neurosurgical procedure. The rigid and non-rigid multimodal registrations are achieved by matching the hyperechogenic structures of the brain. The quantitative evaluation of the non-rigid registration was performed within a framework based on synthetic deformation.
Finally, experiments were carried out on real data sets of 4 patients with lesions such as cavernoma and low-grade glioma. Qualitative and quantitative results on the estimated error of the neuronavigation system and the estimated brain deformations are given.",2010,0, 655,A High Speed and Low Cost Error Correction Technique for the Carry Select Adder,"In this paper, a high speed and low cost error correction technique is proposed for the Carry Select Adder (CSA) which can correct both transient and permanent errors and is applicable to all partitioning types of the basic CSA circuit. The proposed error correction technique is compatible with all existing error detection techniques which have been proposed for the CSA adder. The synthesized results show that applying this novel error correction technique to a CSA with an error detection technique results in up to 18.4%, 3.1% and 14.9% increases in power consumption, delay and area, respectively.",2009,0, 656,Detection and treatment of faults in automated machines based on Petri nets and Bayesian networks,"In this paper, a methodology for considering detection and treatment of faults in automated machines is introduced. This methodology is based on the integration of Petri nets for diagnosis (BPN) and Bayesian networks. After that, the integration among detection/treatment of faults and the ""normal"" processes (represented by Petri nets, PN) is possible. This integration allows us to develop a fault tolerant supervisor, which considers all the processes in the same structure. A case study of a fault tolerant AGV is considered. Finally, a simulation tool for the editing and analysis of models with these characteristics is introduced.",2003,0, 657,Embedded model-based fault diagnosis for on-board diagnosis of engine control systems,"In this paper, a model-based fault diagnosis scheme for on-board diagnosis in spark ignition (SI) engine control systems is presented. The developed fault diagnosis system fully makes use of the available control structure and is embedded into the control loops. As a result, the implementation of the diagnosis system is realized with low demands on engineering costs, computational power and memory. The developed diagnosis scheme has been successfully applied to the air intake system of an SI-engine",2005,0, 658,Exact Fault-Tolerant Feasibility Analysis of Fixed-Priority Real-Time Tasks,"In this paper, a necessary and sufficient (exact) feasibility test is proposed for fixed-priority scheduling of a periodic task set to tolerate multiple faults on a uniprocessor. We consider a fault model such that multiple faults can occur in any task and at any time, even during recovery operations. The proposed test considers tolerating a maximum of f faults that can occur within any time interval equal to the largest relative deadline of the task set. The feasibility of the task set is checked based on the maximum workload requested by the higher-priority jobs within the release time and deadline of the job of each task that is released at the critical instant. The maximum workload is calculated using a novel technique to compose the execution time of the higher-priority jobs.
To the best of our knowledge, no other work (assuming the same fault model as ours) has derived an exact feasibility test for periodic task sets having a lower time complexity than that of the test proposed in this paper.",2010,0, 659,Induction Motor Electrical Fault Diagnosis Using Voltage Spectrum of an Auxiliary Winding - Part II,"In this paper, a new method for induction motor fault diagnosis is presented. It is based on the so-called voltage spectrum of a small auxiliary winding inserted between two of the stator phases. An expression for the inserted inductance voltage is presented. After that, a discrete Fourier transform analyzer is required for converting the voltage signal from the time domain to the frequency domain. Simulation results carried out for faulty and healthy motors show the effectiveness of the proposed method.",2008,0, 660,A low complexity and efficient slice grouping method for H.264/AVC in error prone environments,"In this paper, a new method is proposed for Macroblock (MB) importance classification of inter frames. Instead of selecting the most important MBs, the least important MBs are decided first. It makes use of the properties of skip mode in the H.264/AVC standard as the first step. Because the number of MBs chosen as skip mode in a frame varies, further classification is usually required. Four other different features are therefore considered to determine the Important Factor of the remaining MBs. It has been proved that the proposed method can provide good objective and subjective video quality performance, whilst also being simple and fast.",2009,0, 661,Using run-time reconfiguration for fault injection in hardware prototypes,"In this paper, a new methodology for the injection of single event upsets (SEU) in memory elements is introduced. SEUs in memory elements can occur for many reasons (e.g. particle hits, radiation) and at any time. It therefore becomes important to examine the behaviour of circuits when an SEU occurs in them. Reconfigurable hardware (especially FPGAs) was shown to be suitable to emulate the behaviour of a logic design and to realise fault injection. The proposed methodology for SEU injection exploits FPGAs and, contrary to the most common fault injection techniques, realises the injection directly in the reconfigurable hardware, taking advantage of the run-time reconfiguration capabilities of the device. In this case, no modification of the initial design description is needed to inject a fault, which avoids hardware overheads and specific synthesis, place and route phases.",2002,0, 662,Harmonic-suppressed Elliptic-Function Low-pass Filter Using Defected Ground Structure,"In this paper, a new microstrip elliptic-function low-pass filter (LPF) using a defected ground structure (DGS) is proposed. By integrating DGS units, the proposed filter provides attenuation poles for the wide-stopband characteristic due to the resonance characteristic of the DGS. The equivalent-circuit model and the 3-D model simulation for the DGS are used to determine the dimension parameters. Etching the designed DGS on the backside of an interdigital elliptic-function LPF, a new microstrip LPF with a broad stopband is obtained after impedance matching. The experimental results show the new filter can suppress the harmonic signal very well.",2007,0, 663,Remote sensing digital image processing techniques in active faults survey,"In this paper, an effective method is presented to identify active faults from different sources of remote sensing images.
First, we compared the capability of some satellite sensors in active fault surveys. Then, we discussed a few digital image processing approaches used for information enhancement and feature extraction related to faults. Those methods include band ratio, PCA (Principal Components Analysis), Tasseled Cap Transformation, filtering and texture statistics, etc. Extensive experiments were implemented to validate the efficiency of those methods. We collected Landsat MSS, TM and ETM+ images of Shandong Province, northern China. DEM (Digital Elevation Model) data of 25 m resolution and Chinese resource satellite Resource-2 images with a pixel size of about 5 m were also acquired in very important active fault regions. The experimental results show that remote sensing multi-spectral images have great potential in large-scale active fault investigation. We also obtained satisfactory results when dealing with concealed faults lying beneath the earth's surface.",2003,0, 664,Error resilience video coding in H.264 encoder with potential distortion tracking,"In this paper, an efficient rate-distortion (RD) model for an H.264 video encoder in a packet loss environment is presented. The encoder keeps tracking the potential error propagation on a block basis by taking into account the source characteristics, network conditions as well as the error concealment method. The end-to-end distortion invoked in this RD model is estimated according to the potential error-propagated distortion stored in a distortion map. The distortion map, in terms of each frame, is derived after the frame is encoded, which can be used for the RD-based encoding of the subsequent frames. Since the channel distortion has been considered in the proposed RD model, the new Lagrangian parameter is derived accordingly. The proposed method outperforms the error robust rate-distortion optimization method in the H.264 test model in terms of both transmission efficiency and computational complexity.",2004,0, 665,Extended Fault-Location Formulation for Power Distribution Systems,"In this paper, an extended impedance-based fault-location formulation for generalized distribution systems is presented. The majority of distribution feeders are characterized by having several laterals, nonsymmetrical lines, highly unbalanced operation, and time-varying loads. These characteristics compromise the performance of traditional fault-location methods. The proposed method uses only local voltages and currents as input data. The current load profile is obtained through these measurements. The formulation considers load variation effects and different fault types. Results are obtained from numerical simulations by using a real distribution system from the Electrical Energy Distribution State Company of Rio Grande do Sul (CEEE-D), Southern Brazil. Comparative results show the technique's robustness with respect to fault type and traditional fault-location problems, such as fault distance, resistance, inception angle, and load variation. The formulation was implemented as embedded software and is currently used at CEEE-D's distribution operation center.",2009,0, 666,Research on monitoring and fault diagnostic system of turbine-generator,"In this paper, based on a systematic analysis of the structure, operation, and faults of a large generator, the software MDST (monitoring and diagnosis system) is designed and compiled, and it is applied on the spot.
The software MDST complies with the national and departmental standards and the relevant regulations on generator operation, testing, examination and repair, and quality verification, and takes full consideration of factors such as the functional realization of the software, customer requirements, operator-computer interactive communication, a friendly interface, authority partitioning, and the secure and stable operation of the software",2001,0, 667,Software reliability modeling of fault detection and correction processes,"In this paper, both fault detection and correction processes are considered in software reliability growth modeling. The dependency of the two processes is first studied from the viewpoint of the fault number in two ways. One is the ratio of the corrected fault number to the detected fault number, which appears S-shaped. The other is the difference between the detected fault number and the corrected fault number, which appears bell-shaped. Then, based on the ratio and difference functions, two software reliability models are proposed for both fault detection and correction processes. The proposed models are evaluated on a data set of software testing. The experimental results show that the new models fit the data set of fault detection and correction processes very well.",2009,0, 668,On Unequal Error Protection of Convolutional Codes From an Algebraic Perspective,"In this paper, convolutional codes are studied for unequal error protection (UEP) from an algebraic theoretical viewpoint. We first show that for every convolutional code there exists at least one optimal generator matrix with respect to UEP. The UEP optimality of convolutional encoders is then combined with several algebraic properties, e.g., systematic, basic, canonical, and minimal, to establish the fundamentals of convolutional codes for UEP. In addition, a generic lower bound on the length of a UEP convolutional code is proposed. Good UEP codes with their lengths equal to the derived lower bound are obtained by computer search.",2010,0, 669,The Error Reduced ADI-CPML Method for EMC Simulation,"In this paper, a convolutional perfectly matched layer (CPML) is developed for the recently proposed error reduced (ER) ADI-FDTD method to solve electromagnetic compatibility problems efficiently. Its numerical results are examined and compared with the conventional ADI-CPML method. It is found that for a CFL number equal to 5, the reflection error of the ER-ADI-CPML is approximately 12 dB better than that of the conventional ADI-CPML method.",2007,0, 670,Fault Detection Using Differential Flatness in Flight Guidance Systems,"In this paper, flight guidance dynamics are shown to be implicitly differentially flat with respect to the inertial position of an aircraft. This proves the existence of a set of relations between these flat outputs and the state variables representative of flight guidance dynamics and between these flat outputs and the basic inputs to flight guidance dynamics. A neural network is introduced to obtain, from the actual trajectory, nominal flight parameters which can be compared with actual values to detect abnormal behaviour",2007,0, 671,Spatio-temporal boundary matching algorithm for temporal error concealment,"In this paper, a novel temporal error concealment algorithm, called the spatio-temporal boundary matching algorithm (STBMA), is proposed to recover the information lost in video transmission.
Different from the classical boundary matching algorithm (BMA), which just considers the spatial smoothness property, the proposed algorithm introduces a new distortion function to exploit both the spatial and temporal smoothness properties to recover the lost motion vector (MV) from candidates. The new distortion function involves two terms: a spatial distortion term and a temporal distortion term. Since both the spatial and temporal smoothness properties are involved, the proposed method can better minimize the distortion of the recovered block and recover a more accurate MV. The proposed algorithm has been tested on the H.264 reference software JM 9.0. The experimental results demonstrate that the proposed algorithm can obtain better PSNR performance and visual quality, compared with the BMA which is adopted in H.264",2006,0, 672,Three-Stage Error Concealment for Distributed Speech Recognition (DSR) with Histogram-Based Quantization (HQ) Under Noisy Environment,"In this paper, a three-stage error concealment (EC) framework based on the recently proposed histogram-based quantization (HQ) for distributed speech recognition (DSR) is proposed, in which noisy input speech is assumed and both the transmission errors and environmental noise are considered jointly. The first stage detects the erroneous feature parameters at both the frame and subvector levels. The second stage then reconstructs the detected erroneous subvectors by MAP estimation, considering the prior speech source statistics, the channel transition probability, and the reliability of the received subvectors. The third stage then considers the uncertainty of the estimated vectors during Viterbi decoding. At each stage, the error concealment (EC) techniques properly exploit the inherent robust nature of histogram-based quantization (HQ). Extensive experiments with the AURORA 2.0 testing environment and GPRS simulation indicated that the proposed framework is able to offer significantly improved performance against a wide variety of environmental noise and transmission error conditions.",2007,0, 673,Concurrent error detection for involutional functions with applications in fault-tolerant cryptographic hardware design,"In this paper, a time redundancy based Concurrent Error Detection (CED) technique targeting involutional functions is presented. A function F is an involution if F(F(x))=x. The proposed CED technique exploits the involution property and checks if x=F(F(x)). Unlike traditional time redundancy based CED methods, this technique can detect both permanent and transient faults",2006,0, 674,A new adaptive hybrid neural network and fuzzy logic based fault classification approach for transmission lines protection,"In this paper, an adaptive hybrid neural network and fuzzy logic based algorithm is proposed to classify fault types in transmission lines. The proposed method is able to identify all ten shunt faults in transmission lines with a high level of robustness against variable conditions such as measured amplitudes and fault resistance. In this approach, a two-end unsynchronized measurement of the signals is used. For real-time estimation of the unknown synchronization angle and three-phase phasors, a two-layer adaptive linear neural (ADALINE) network is used. The estimated parameters are fed to a fuzzy logic system to classify fault types. This method is feasible for use in digital distance relays which can be programmed to share and exchange data with all protective and monitoring devices.
The proposed method is evaluated by a number of simulations conducted in PSCAD/EMTDC and MATLAB software.",2008,0, 675,Power Factor Correction of Direct Torque Controlled Brushless DC Motor Drive,"In this paper, an algorithm for power factor correction (PFC) of a direct torque controlled (DTC) brushless dc motor drive in the constant torque region is presented. The proposed DTC approach introduces a two-phase conduction mode as opposed to the conventional three-phase DTC drives. Unlike conventional six-step PWM current control, by properly selecting the inverter voltage space vectors of the two-phase conduction mode from a simple look-up table at a predefined sampling time, the desired quasi-square wave current is obtained. Therefore, a much faster torque response is achieved compared to conventional current control. Furthermore, to eliminate the low-frequency torque oscillations caused by the non-ideal trapezoidal shape of the actual back-EMF waveform of the BLDC motor, a pre-stored back-EMF versus position look-up table is designed. The duty cycle of the boost converter is determined by a control algorithm. This control algorithm is based on the input voltage, the output voltage which is the dc-link of the BLDC motor drive, and the inductor current, using the average current control method with input voltage feed-forward compensation during each sampling period of the drive system. A theoretical concept is developed and the validity and effectiveness of the proposed DTC of BLDC motor drive scheme with PFC are verified through the experimental results. The test results verify that the proposed PFC for DTC of BLDC motor drive improves the power factor from 0.77 to about 0.9997 irrespective of the load.",2007,0, 676,An approach for intelligent detection and fault diagnosis of vacuum circuit breakers,"In this paper, an approach for intelligent detection and fault diagnosis of vacuum circuit breakers is introduced, by which the condition of a vacuum circuit breaker can be monitored on-line, and the detectable faults can be identified, located, displayed and saved for use in analyzing their change tendencies. The main detecting principles and diagnostics are described. Both the hardware structure and software design are also presented.",2002,0, 677,A new three-phase inverter power-factor correction (PFC) scheme using field programmable gate array,"In this paper, a new three-phase inverter power-factor correction (PFC) scheme is proposed using field programmable gate array (FPGA) technology; all the functions can be implemented in a single chip. The implementation tool fits the entered design into the target device (XC4008E). Design verification includes functional simulation, in-circuit testing and timing simulation; the main function is to verify the proper operation of the designed circuit. It will compile a design file into a configuration file that is optimized in terms of the use of logic gates and interconnections for the target device. Power factor measures how effectively electrical power is being used. A high power factor means that electrical power is being utilized effectively, while a low power factor indicates poor utilization of electrical power. The simplest way to improve power factor is to add power factor correction capacitors to the plant distribution system.
In this paper, a new technique is proposed to improve the power factor; experimental results are presented to show the effectiveness of the proposed technique.",2002,0, 678,Broken bar fault detection in induction motors based on modified winding function,"In this paper, a new turn function for skewed rotor bars based on the winding function approach is presented. This approach has been used for simulating the machine behavior under healthy and broken rotor conditions. The proposed method is applied to rotor bar fault detection based on an advanced Park's vector approach.",2010,0, 679,Power System Fault Estimation under Non-white Noise,"In this paper, a novel fault detection and isolation filter design method for power distribution in DC electronic systems is proposed. The proposed double filter method is a combination of quadratic programming (QP) and a Kalman filter, which has three main advantages: First, it could estimate the fault position accurately under non-white noise. Second, this is a sequential method which could be used for real-time fault estimation with limited computation. Third, the model is simple and could be used on other circuits without much effort. The performance of the proposed method is verified by numerical simulations.",2010,0, 680,A novel loss protection scheme for H.264 video stream based on Frame Error Propagation Index,"In this paper, a novel GOP level unequal loss protection (G-ULP) scheme is proposed for robust H.264-coded video streaming over packet loss networks. This scheme uses the frame error propagation index (FEPI) to characterize video quality degradation caused by error propagation in different frames in a GOP when suffering from packet loss. A fast FEPI calculation method in the compression domain is also proposed in this paper. By exploiting the unequal significance of different frames in a GOP, different amounts of forward error correction (FEC) packets are allocated to different frames in a GOP. The optimal FEC allocation algorithm is based on the FEPI of each frame in the GOP. The simulation results show that the proposed G-ULP scheme can improve the receiver-side reconstructed video quality remarkably under different channel loss patterns.",2008,0, 681,A Decision Tree-Based Method for Fault Classification in Double-Circuit Transmission Lines,"In this paper, a novel method for fault classification of double-circuit transmission lines is presented. The proposed method needs voltages and currents of only one side of the protected line. After detecting the exact time of fault inception and calculating the odd harmonics of the measured signals, up to the nineteenth, a decision tree algorithm has been employed for recognition of the intercircuit fault type. Also, the proposed method is extended for classification of crossover faults in these transmission lines. Simulation results have shown that the proposed method can classify the faults in less than a quarter of a cycle with the highest possible accuracy.",2010,0, 682,Quasi-cyclic generalized LDPC codes with low error floors,"In this paper, a novel methodology for designing structured generalized LDPC (G-LDPC) codes is presented. The proposed design results in quasi-cyclic G-LDPC codes for which efficient encoding is feasible through shift-register-based circuits. The structure imposed on the bipartite graphs, together with the choice of simple component codes, leads to a class of codes suitable for fast iterative decoding. A pragmatic approach to the construction of G-LDPC codes is proposed.
The approach is based on the substitution of check nodes in the protograph of a low-density parity-check code with stronger nodes based, for instance, on Hamming codes. Such a design approach, which we call LDPC code doping, leads to low-rate quasi-cyclic G-LDPC codes with excellent performance in both the error floor and waterfall regions on the additive white Gaussian noise channel.",2008,0, 683,A New Topology of Fault-current Limiter and its control strategy,"In this paper a new type of fault current limiter based on a DC reactor using a superconductor is presented. In normal operating conditions the limiter has no obvious effect on loads. When a fault happens, the bypass AC reactor and series resistor are inserted into the fault line automatically to limit the short-circuit current; when the control circuit detects a short-circuit fault, the solid-state bridge in the fault line works as an inverter and is closed as soon as possible. Subsequently the fault current is fully limited by the bypass AC reactor and series resistor. The magnitudes of Lac and rac must match the protected load. By using the electromagnetic transients including DC (EMTDC) electric network simulator software, we carried out an analysis of the voltage and current waveforms for fault conditions. The waveforms are considered in calculating the voltage drop at the substation during the fault. The analysis is used in selecting an appropriate inductance value for the design",2006,0, 684,Optimal worst case estimation for LPV-FIR models with bounded errors,In this paper discrete time linear parameter varying (LPV) models with finite impulse response (FIR) dynamic structure are considered. Measurement errors are assumed to be bounded and under such conditions the worst-case parameter estimation errors are derived together with the input sequences that allow their determination. The main result of the paper shows that the optimal input design of LPV-FIR models is achieved by combining the available results on optimal input design for invariant FIR models with the results on optimal input design for static blocks,2000,0, 685,Intelligent Fault Isolation of Control Valves in a Power Plant,"In this paper, faults occurring in control valves in a power plant are considered. The method is implemented for the boiler feed water valve and is extendable to the boiler fuel valve and governor valve. The fault kind and location are determined using measurements of boiler state variables. In spite of a robust controller design, the controller may be unable to handle these faults and restrict their effects. A fuzzy compensator performs the fault accommodation tasks successfully by modifying the set-points to the controller. The fuzzy supervisor is implemented in a way which improves simulation time relative to current implementations. Simulation results show the effectiveness of the proposed methodology",2006,0, 686,Modified residue codes based on residue number system as a fault tolerance scheme,In this paper residue number system (RNS) arithmetic and redundant residue number system (RRNS) based codes as well as their properties are reviewed. We propose a modification of the RRNS based error correction codes with a smaller number of residues. Our method reduces the hardware overhead drastically and also improves the performance of the system. We review the properties of the modified RRNS codes.
We propose applications of the modified RRNS codes as a fault tolerance scheme.,2009,0, 687,Computer simulation of the native point defects structure in CdTe,"In this paper the computer simulation of the structure of point defects in CdTe is developed by the numerical solution of the electron neutrality equation. The software presented contains a lexical analyzer, which allows constructing, analysing and solving electron neutrality equations of various types. The results of the performed calculations coincide well with experimental data obtained from the high temperature (500-1200 K) measurements of kinetic coefficients carried out on the CdTe single crystals right after their growth.",2009,0, 688,Robust hierarchical mobile IPv6 (RH-MIPv6): an enhancement for survivability and fault-tolerance in mobile IP systems,"In wireless networks, system survivability is one of the most important issues in providing quality of service (QoS). However, since a failure of the home agent (HA) or mobile anchor point (MAP) causes service interruption, hierarchical mobile IPv6 (HMIPv6) has only weak survivability. In this paper, we propose robust hierarchical mobile IPv6 (RH-MIPv6), which provides fault tolerance and robustness in mobile networks. In RH-MIPv6, a mobile node (MN) registers primary (P-RCoA) and secondary (S-RCoA) regional care-of addresses to two different MAPs (primary and secondary) simultaneously. We develop a mechanism to enable the mobile node or correspondent node (CN) to detect the failure of the primary MAP and change their attachment from the primary to the secondary MAP. By this recovery procedure, it is possible to reduce the failure recovery time. Analytical evaluation indicates that RH-MIPv6 has a faster recovery time than HMIPv6, and the simulation results agree with the analytical ones. Consequently, RH-MIPv6 shows about 60% faster recovery time compared with HMIPv6.",2003,0, 689,Performance Analysis of Error Control Codes for Wireless Sensor Networks,"In wireless sensor networks, the data transmitted from the sensor nodes are vulnerable to corruption by errors induced by noisy channels and other factors. Hence it is necessary to provide a proper error control scheme to reduce the bit error rate (BER). Due to the stringent energy constraint in sensor networks, it is vital to use an energy efficient error control scheme. In this paper, we focus our study on the performance analysis of various error control codes in terms of their BER performance and power consumption on different platforms. In detail, error control codes with different constraints are implemented and simulated using VHDL. Implementation on FPGA and ASIC design is carried out and the energy consumption is measured. The error control performance of these codes is evaluated in terms of bit error rate (BER) by transmitting randomly generated data through a Gaussian channel. Based on the study and comparison of the three different error control codes, we identify that binary-BCH codes with ASIC implementation are best suited for wireless sensor networks",2007,0, 690,Application layer error correction scheme for video header protection on wireless network,"In wireless video streaming applications, video information may be corrupted by a noisy channel. By introducing error resilience and error concealment techniques, many researchers have tried to eliminate the quality degradation of the reconstructed picture when decoding corrupted data. On the contrary, there are relatively fewer works discussing ways to diminish the corruption.
Hence, system designers need to use different methods to restrict the errors caused by the channel to a tolerable extent. Otherwise, the system will be difficult to implement in a practical design. In this paper, we propose a way to protect the video header information in the application layer without modifying the standardized syntax. Besides, we also consider the channel conditions of wireless transmission and propose a way to reduce the redundant bits used in channel coding. By doing this, the bitstream can be simply transmitted over a practical wireless network and the reconstructed picture quality outperforms the original one.",2005,0, 691,Optimal packet scheduling for wireless video streaming with error-prone feedback,"In wireless video transmission, burst packet errors generally produce more catastrophic results than an equal number of isolated errors. To minimize the playback distortion, it is crucial for the sender to know the packet errors at the receiver and then optimally schedule the next transmissions. Unfortunately, in practice, feedback errors result in inaccurate observations of the receiving status. In this paper, we develop an optimal scheduling framework to minimize the expected distortion by first estimating the receiving status. Then, we jointly consider the source and channel characteristics and optimally choose the packets to transmit. The optimal transmission strategy is computed through a partially observable Markov decision process. The experimental results show that the proposed framework improves the average peak signal-to-noise ratio (PSNR) by 0.6-1.3 dB over a traditional system without packet scheduling. Moreover, we show that the proposed method smoothes out the bursty distortion periods and results in less fluctuating PSNR values.",2004,0, 692,Practical Evaluation of Opportunistic Error Correction,"In previous work, we proposed a novel cross-layer scheme based on resolution adaptive ADCs and fountain codes for OFDM systems to lower the power consumption in ADCs. The simulation results show that it saves more than 70% of the power consumption in ADCs compared to the current IEEE 802.11a system. In this paper, we investigate its performance in the real world. Measurement results show that the FEC layer used in the IEEE 802.11a system consumes around 59 times the amount of power in ADCs compared to the LDPC codes from the IEEE 802.11n standard, whose power consumption in ADCs is around 26 times that of the proposed cross-layer method. In addition, this new cross-layer approach only needs to process the well-received packets, to save processing power. The latter cannot be applied in the current FEC schemes.",2009,0, 693,Effect of Implantation Defects and Carbon Incorporation on Si/SiGe Bipolar Characteristics,"Incorporation of carbon in SiGe has attracted great interest, which makes SiGeC based heterojunction transistors attractive devices for high frequency applications. Carbon addition in SiGe dramatically reduces the out-diffusion of boron caused by the excess of interstitials generated by extrinsic base implantation. However, carbon incorporation negatively influences the electrical device characteristics. In this paper we investigate active implantation defects; the aim was to specify the spatial localization and parasitic effects of these defects.
Second, we investigate the impact of carbon content on the electrical characteristics of the device; the results show that C contents ≥1% severely degrade transistor performance.",2009,0, 694,Analysis of Real-Time Systems Sensitivity to Transient Faults Using MicroC Kernel,"Increasing complexity of safety-critical systems that support real-time multitasking applications requires the concurrency management offered by real-time operating systems (RTOS). Real-time systems can suffer severe consequences if the functional as well as the timing specifications are not met. In addition, real-time systems are subject to transient errors originating from several sources, including the impact of high energy particles on sensitive areas of integrated circuits. Therefore, the evaluation of the sensitivity of RTOS to transient faults is a major issue. This paper explores the sensitivity of RTOS kernels in safety-critical systems. We characterize and analyze the consequences of transient faults on key components of the kernel of MicroC, a popular RTOS. We specifically focus on its task scheduling and context switching modules. Classes of fault syndromes specific to safety-critical real-time systems are identified. Results reported in this paper demonstrate that 34% of faults that affect the scheduling and context switching functions led to scheduling dysfunctions. This represents an important fraction of faults that cannot be ignored during the design phase of safety-critical applications running under an RTOS",2006,0, 695,Software detection mechanisms providing full coverage against single bit-flip faults,"Increasing design complexity for current and future generations of microelectronic technologies leads to an increased sensitivity to transient bit-flip errors. These errors can cause unpredictable behaviors and corrupt data integrity and system availability. This work proposes new solutions to detect all classes of faults, including those that escape conventional software detection mechanisms, allowing full protection against transient bit-flip errors. The proposed solutions, particularly well suited for low-cost safety-critical microprocessor-based applications, have been validated through exhaustive fault injection experiments performed on a set of real and synthetic benchmark programs. The fault model taken into consideration was single bit-flip errors corrupting memory cells accessible to the user by means of the processor instruction set. The obtained results demonstrate the effectiveness of the proposed solutions.",2004,0, 696,Modeling the coverage and effectiveness of fault-management architectures in layered distributed systems,"Increasingly, fault-tolerant distributed software applications use a separate architecture for failure detection instead of coding the mechanisms inside the application itself. Such a structure removes the intricacies of the failure detection mechanisms from the application, and avoids repeating them in every program. However, successful system reconfiguration now depends on the management architecture (which does both fault detection and reconfiguration), and on management subsystem failures, as well as on the application.
This paper presents an approach which computes the architecture-based system reconfiguration coverage simultaneously with its performability.",2002,0, 697,Influence of Magnetic Saturation on Diagnostic Signal of Induction Motor Cage Faults,"Induction motor rotor cage diagnostics is based on additional components in the stator phase current with (1-2s)fo and (1+2s)fo frequencies. The (1-2s)fo component arises as the main effect of the cage asymmetry whereas the (1+2s)fo component is a secondary effect. It is commonly known that it arises due to speed ripples, which are generated by the alternating component of electromagnetic torque in a motor with a faulty cage. This paper shows that magnetic saturation of the main magnetic circuit also generates the (1+2s)fo component in stator currents. A special mathematical model accounting for saturation is used to prove this thesis. Results of a quantitative analysis of the influence of inertia and saturation on the (1+2s)fo component are shown.",2007,0, 698,Optimizing the fault tolerance capabilities of distributed real-time systems,"Industrial real-time systems typically have to satisfy complex requirements, mapped to the task attributes, eventually guaranteed by a fixed priority scheduler in a distributed environment. These systems consist of a mix of hard and soft tasks with varying criticality, as well as associated fault tolerance requirements. Time redundancy techniques are often preferred in industrial applications and, hence, it is extremely important to devise resource efficient methodologies for scheduling real-time tasks under failure assumptions. In this paper, we propose a methodology to provide a priori guarantees in distributed real-time systems with redundancy requirements. We do so by identifying temporal feasibility windows for all task executions and re-executions, as well as allocating them on different processing nodes. We then use optimization theory to derive the optimal feasibility windows that maximize the utilization on each node, while avoiding overloads. Finally, on each node, we use integer linear programming (ILP) to derive fixed priority task attributes that guarantee the task executions within the derived feasibility windows, while keeping the associated costs minimized.",2009,0, 699,Evaluation of fault tolerance latency from real-time application's perspectives,"Information on Fault Tolerance Latency (FTL), which is defined as the total time required by all sequential steps taken to recover from an error, is important to the design and evaluation of fault-tolerant computers used in safety-critical real-time control systems with deadline information. In this paper, we evaluate FTL in terms of several random and deterministic variables accounting for fault behaviors and/or the capability and performance of error-handling mechanisms, while considering various fault tolerance mechanisms based on the trade-off between temporal and spatial redundancy, and use the evaluated FTL to check if an error-handling policy can meet the Control System Deadline (CSD) for a given real-time application",2000,0, 700,Data-fitting reconstruction for defect inspection of airplane aluminum structure in infrared thermographic NDT,"Infrared (IR) thermography has already been demonstrated to be an effective tool for nondestructive testing and evaluation (NDT & E) applications. Pulsed thermography (PT) is one such technique, often used in rapid, wide-area sub-surface inspection.
In the aviation industry, metal structures of aluminum, which has high thermal conductivity and diffusivity, usually need to be verified for material integrity both in manufacture and maintenance. It is generally difficult to get sufficient noise-free sampling data for an accurate analysis, due to swift heat conduction in materials of high thermal conductivity, if the IR device is not capable of a high sampling rate. Therefore, a data-fitting process is offered to reconstruct the sequence from a PT test for cost-effective detection by a mediocre commercial infrared imaging system. This method brings two evident advantages: increased image quality by reducing temporal stochastic noise, and a discretionary number of reconstructed frames at any precision for quantitative analysis. A particular specimen of duralumin is machined to emulate airplane components. Experimental difference curves and images are given, which indicate the validity of this economical inspection.",2009,0, 701,An accurate analysis of the effects of soft errors in the instruction and data caches of a pipelined microprocessor,"Instruction and data caches are well known architectural solutions that allow significant improvement in the performance of high-end processors. Due to their sensitivity to soft errors, they are often disabled in safety critical applications, thus sacrificing performance for improved dependability. In this paper, we report an accurate analysis of the effects of soft errors in the instruction and data caches of a soft core implementing the SPARC architecture. Thanks to an efficient simulation-based fault injection environment we developed, we are able to present in this paper an extensive analysis of the effects of soft errors on a processor running several applications under different memory configurations. The procedure we followed allows the precise computation of the processor failure rate when the cache is enabled even without resorting to expensive radiation experiments.",2003,0, 702,Design of control programs for efficient handling of errors in flexible manufacturing cells,"Insufficient indication of errors is a problem in many manufacturing systems. Lack of support for resynchronization of the cell and its control system is another, less obvious problem. A third problem connected to errors in manufacturing cells is lack of support for manual control. In order to resolve an error situation, manual control of the cell is often required. The problem is that some of the manual operations may be blocked due to machine protection. When an operator is to execute a blocked operation, the only response is that nothing happens. This paper proposes a method where control programs with integrated functions for error detection, resynchronization, and support for manual control are generated out of information that already exists in the development process of a manufacturing system.",2004,0, 703,Review of fault diagnosis in control systems,"In this paper, we review the major achievements in the research on fault diagnosis in control systems (FDCS) from three aspects: fault detection, fault isolation and hybrid intelligent fault diagnosis. Fault detection and isolation (FDI) are two important stages in the diagnosis process, while hybrid intelligent fault diagnosis is a hot issue in the current research field.
The particular feature of FDCS is using the closed-loop monitoring information in the control system to establish the quantitative and qualitative process model, detecting and then isolating the main failures in sensors, actuators, and the controlled process; the main challenge of FDCS is reducing the false alarm rate and the missed alarm rate while improving the sensitivity and rapidity. Robust fault detection in the transition process, knowledge acquisition for quantitative and qualitative diagnosis based on process history data, and hybrid intelligent fault diagnosis system architecture are worthy of deeper research.",2009,0, 704,Iterative correction of frequency response mismatches in time-interleaved ADCs: A novel framework and case study in OFDM systems,"In this paper, we study a versatile iterative framework for the correction of frequency response mismatch in time-interleaved ADCs. Based on a general time varying linear system model, we establish a flexible iterative framework, which enables the development of various efficient iterative correction algorithms. In particular, we study the Gauss-Seidel iteration in detail to illustrate how the correction problem can be solved iteratively, and show that the iterative structure can be efficiently implemented using Farrow-based variable digital filters with few general-purpose multipliers. Simulation results show that the proposed iterative structure performs better than conventional compensation structures. Moreover, a preliminary study on the BER performance of OFDM systems due to TI ADC mismatch is conducted.",2010,0, 705,Impact of transmission error in DCH channel on the performance of UMTS networks,"In this paper, we study the impact of dedicated channel (DCH) transmission error on the performance of the application layer in universal mobile telecommunications system (UMTS) networks. We simulate a UMTS network with a fully implemented protocol stack and study the impact of transmission error in the physical layer on the throughput and the delay variance (jitter) as performance metrics in the application layer. Our simulation results indicate that the net error rate for the delivered data in the application layer in acknowledged mode (AM) is smaller than that of unacknowledged mode (UM); however, in both AM and UM modes the channel throughput perceived in the application layer decreases with increasing channel error rate in an approximately linear fashion. Simulation results also indicate that increasing/decreasing the channel error rate in the physical layer has no significant impact on the delay variance in both AM and UM modes.",2008,0, 706,Feature Selection with Imbalanced Data for Software Defect Prediction,"In this paper, we study the learning impact of data sampling followed by attribute selection on the classification models built with binary class imbalanced data within the scenario of software quality engineering. We use a wrapper-based attribute ranking technique to select a subset of attributes, and the random undersampling technique (RUS) on the majority class to alleviate the negative effects of imbalanced data on the prediction models. The datasets used in the empirical study were collected from numerous software projects.
Five data preprocessing scenarios were explored in these experiments, including: (1) training on the original, unaltered fit dataset, (2) training on a sampled version of the fit dataset, (3) training on an unsampled version of the fit dataset using only the attributes chosen by feature selection based on the unsampled fit dataset, (4) training on an unsampled version of the fit dataset using only the attributes chosen by feature selection based on a sampled version of the fit dataset, and (5) training on a sampled version of the fit dataset using only the attributes chosen by feature selection based on the sampled version of the fit dataset. We compared the performances of the classification models constructed over these five different scenarios. The results demonstrate that the classification models constructed on the sampled fit data with or without feature selection (case 2 and case 5) significantly outperformed the classification models built with the other cases (unsampled fit data). Moreover, the two scenarios using sampled data (case 2 and case 5) showed very similar performances, but the subset of attributes (case 5) is only around 15% or 30% of the complete set of attributes (case 2).",2009,0, 707,The Two-Level-Turn-Model Fault-tolerant Routing Scheme in Tori with Convex Faults,"In this paper, we present a fault-tolerant wormhole routing scheme, called the two-level-turn-model scheme, in tori with convex faults. The reason why we choose convex faults is that the shape of the convex faults is instrumental in designing an effective fault-tolerant routing algorithm. Compared with many other solutions, which mainly focus on providing extra virtual channels to tolerate the faults, one of the advantages of our solution is that it is based on the turn model, which itself could tolerate some faults for some messages. At the same time, our solution could work effectively no matter where the fault region is located and no matter whether the fault regions are connected. In our solution, two patterns of the turn model are complementary to tolerate the faults. With a few limits on the shape of the convex faults, at most five virtual channels per physical channel are required to avoid deadlock.",2008,0, 708,An Alternative Approach to Fault Location on Main Line Feeders and Laterals in Low Voltage Overhead Distribution Networks,"In this study, a digital fault location and monitoring technique for overhead power distribution lines with laterals is presented. The technique is based on utilising the fault voltage and current samples obtained at a single location of a typical distribution system with laterals; these are then filtered after the analogue to digital conversion process by using digital filtering techniques to obtain the power frequency components of the voltage and current samples. In the implementation of the algorithm, superimposed voltage and current components rather than total values are used to minimise the effects of pre-fault loading on the accuracy. The effectiveness of this method is verified through electromagnetic transients program (EMTP) simulations.",2008,0, 709,Identification and fault diagnosis of a simulated model of an industrial gas turbine,"In this study, a model-based procedure exploiting analytical redundancy for the detection and isolation of faults of a gas turbine system is presented. The diagnosis scheme is based on the generation of so-called ""residuals"" that are errors between estimated and measured variables of the process.
The work is completed under both noise-free and noisy conditions. Residual analysis and statistical tests are used for fault detection and isolation, respectively. The final section shows how the actual size of each fault can be estimated using a multilayer perceptron neural network used as a nonlinear function approximator. The proposed fault detection and isolation tool has been tested on a single-shaft industrial gas turbine model.",2005,0, 710,The design and implementation of a microcontroller-based single phase on-line uninterrupted power supply with power factor correction,"In this study, the design and implementation of a microcontroller-based single phase on-line UPS (Uninterrupted Power Supply) with PFC (Power Factor Correction) were carried out. An SP-320-24 SMPS (Switch Mode Power Supply) module was used to correct the input power factor. The input power factor was held at the desired value for the uninterrupted power supply topologies. In the realized system, two PIC16F876 devices were used as microcontrollers. One of them was used to generate the sinusoidal PWM (Pulse Width Modulation) signals that drive the n-channel MOSFETs in the push-pull inverter and to assure feedback control. The other was used for the control and display units. Harmonics were eliminated and the output filter was simplified by using sinusoidal PWM technology.",2009,0, 711,Detection of small defects by THz-waves for non-destructive testing in dielectric layered structures,"In this study, the detection of small defects in dielectric layered structures by THz waves is investigated for non-destructive testing. The finite element method was used for modelling the structures.",2010,0, 712,Errors Estimation Models for Embedded Software Development Projects,"In this study, we analyze and evaluate development data obtained from ""OMRON software Co."", which develops equipment used in financial markets, as well as data obtained during the development of software. It is becoming very important for software-development corporations to find out how to develop software efficiently while guaranteeing delivery time and quality and holding down development costs. In particular, estimating the manpower of new projects and guaranteeing the quality of software are important, because the estimation relates to costs and the quality relates to the reliability of corporations. In the field of embedded software, development techniques, management techniques, tools, testing techniques, reusing techniques, real-time operating systems and so on have already been studied. However, there are few studies on the relationship between development scale and total errors using the data accumulated from past projects. Hence, as a first trial, we establish a model to estimate the errors of a new project by using regression analysis.",2007,0, 713,Fault Diagnosis of Power Transformer Using SVM and FCM,"In this study, we are concerned with the fault diagnosis of power transformers. The objective is to explore the use of some advanced techniques such as SVM and FCM and quantify their effectiveness when dealing with dissolved gases extracted from power transformers. The proposed fault diagnosis system consists of data acquisition, fault/normal diagnosis, fault identification, and aging degree analysis parts. In the data acquisition part, concentrated gases are extracted from the transformer for gas analysis. In the fault/normal diagnosis part, SVM is applied to separate the normal state from fault types.
The determination of the fault type is executed by a multi-class SVM in the identification part. Even when the input data indicate a normal state, the aging degree is analyzed by considering a distance measure calculated by comparing the input data with a reference model constructed by FCM. Our approach makes it possible to measure the possibility and degree of aging in a normal transformer as well as to identify faults in an abnormal transformer. Simulation results verifying its effectiveness show that the proposed method achieves better classification results than conventional methods.",2008,0, 714,Error performance of maximal-ratio combining with transmit antenna selection in flat Nakagami-m fading channels,"In this paper, the performance of an uncoded multiple-input-multiple-output (MIMO) scheme combining single transmit antenna selection and receiver maximal-ratio combining (the TAS/MRC scheme) is investigated for independent flat Nakagami-m fading channels with arbitrary real-valued m. The outage probability is first derived. Then the error rate expressions are attained from two different approaches. First, based on the observation of the instantaneous channel gain, the binary phase-shift keying (BPSK) asymptotic bit error rate (BER) expression is derived, and the exact BER expression is obtained as an infinite series, which converges for reasonably large signal-to-noise ratios (SNRs). Then the exact symbol error rate (SER) expressions are attained as a multiple infinite sum based on the moment generating function (MGF) method for M-ary phase-shift keying (M-PSK) and quadrature amplitude modulation (M-QAM). The asymptotic SER expressions reveal a diversity order equal to the product of the m parameter, the number of transmit antennas and the number of receive antennas. Theoretical analysis is verified by simulation.",2009,0, 715,Error in projection of planewaves using various basis functions,"In this paper, the RMS projection error in the 1D and 2D cases is analyzed. The analytical projection error on infinite meshes is given in closed form for the pulse basis, the triangular basis and the second-order basis in the 1D case, and for the divergence-conforming basis on the rectangular element and the one-directional triangular element in the 2D case. In addition, the projection error is numerically calculated for various basis functions with a finite computational domain. There is good agreement between the analytical and numerical results. It is found that the projection error of the p-th order 1D basis function is asymptotically proportional to the (p+1)-th power of the size of the element. More results and discussions about the projection errors will be presented at the conference.",2008,0, 716,Design and analysis of AC/DC converters with interleaved Power Factor Correction,"In this paper, two current sharing control schemes based on average current mode control are proposed for the interleaved Power Factor Correction (PFC) converter. The boost inductor current of each phase of the interleaved PFC must be sensed to achieve current sharing and power limiting. To meet this requirement, we propose a current-transformer current sensing technique which achieves current sharing and also improves current harmonic distortion during high-line operation. Finally, a SPICE-like simulation of the whole system is built up for verification.",2008,0, 717,Improvement of CAN BUS Performance by Using Error-Correction Codes,"In this paper, two variants of the Hybrid Automatic Repeat Request (HARQ) scheme for the CAN bus are presented.
The basic HARQ uses an error-correction code based on the Reed-Solomon (RS) technique and the Cyclic Redundancy Check (CRC) method to detect errors. The second scheme uses the cyclic error-correction method instead of the CRC error-detection method to further improve the throughput. Moreover, the second scheme uses no additional bit overhead when compared with the basic HARQ scheme. This paper presents the performance of the proposed schemes using MATLAB and NS2 simulations. Experimental data of error patterns were used for realistic evaluation. The basic HARQ method corrects 100% of error bursts shorter than 7 bits. When the burst length falls between 7 and 10 bits, the scheme corrects between 86% and 56% of the corrupted frames. Network Simulator (NS2) simulations showed that the throughput increased by 92% when the user message size was increased from the standard 64 bits to 512 bits as a result of reduced overhead per user bit.",2007,0, 718,A resource manager for optimal resource selection and fault tolerance service in Grids,"In this paper, we address the issues of resource management and fault tolerance in Grids. In Grids, the state of the resources selected for job execution is a primary factor that determines the computing performance. Specifically, we propose a resource manager for optimal resource selection. The resource manager automatically selects the optimal resources among candidate resources using a genetic algorithm. Typically, the probability of failure is higher in Grid computing than in traditional parallel computing, and the failure of resources affects job execution fatally. Therefore, a fault tolerance service is essential in computational Grids, and Grid services are often expected to meet some minimum levels of quality of service (QoS) for desirable operation. To address this issue, we also propose a fault tolerance service to satisfy QoS requirements. We extend the definition of failures, such as process failure, processor failure, and network failure, and design the fault detector and fault manager. The simulation results indicate that our approaches are promising in that (1) our resource manager finds the optimal set of resources that guarantees the optimal performance; (2) the fault detector detects the occurrence of resource failures; and (3) the fault manager guarantees that the submitted jobs complete and improves the performance of job execution due to job migration even if some failures happen.",2004,0, 719,Particle filtering for adaptive sensor fault detection and identification,"In this paper, we address the problem of adaptive sensor fault identification and validation by particle filtering. Model-based approaches are developed, where the sensor system is modeled by a Markov switch dynamic state-space model. To handle the nonlinearity of the problem, two different particle filters, the mixture Kalman filter (MKF) and the stochastic M-algorithm (SMA), are proposed. Simulation results are presented to compare the effectiveness and complexity of the MKF and SMA methods",2006,0, 720,Fault tolerance in IEEE 802.11 WLANs,"In this paper, we address the problem of enhancing the fault tolerance of IEEE 802.11 wireless local area networks, focusing on tolerating access point (AP) failures. We develop a fault detection approach, which promises to be more effective in identifying AP failures. In particular, we focus on the problem of overcoming AP failures by reconfiguring the remaining APs, changing parameters such as power level and frequency channels.
Our approach consists of two main phases: Design and Fault Response. In the Design phase, we deal with the quantification, placement and setup of APs according to both area coverage and performance criteria. In the Fault Response phase, we consider the reconfiguration of active APs in order to deal with an AP fault in the service area. Finally, we describe one of the major characteristics of the proposed architecture, which is its simple implementation in accordance with established IEEE 802.11 standards and related management systems.",2006,0, 721,Probability of error of space-time coded OFDM systems with frequency offset in frequency-selective Rayleigh fading channels,"In this paper, we analyze the performance of space-time coded (ST-coded) orthogonal frequency division multiplexing (OFDM) systems with carrier frequency offset (CFO) in a frequency-selective Rayleigh fading channel. Closed-form analytical expressions are derived for the symbol error probability (SEP) for M-PSK and M-QAM modulation schemes. The SEP expressions derived are valid and exact for OFDM systems with highly frequency-selective wireless channels where the subcarrier channel responses are i.i.d. with Rayleigh fading. Also, for the case where imperfect channel knowledge is used for space-time decoding, we derive expressions for the constellation phase-rotation and post-equalized SINR (signal-to-interference-and-noise ratio) degradation due to CFO. The numerical results demonstrate the sensitivity of the receiver error performance to CFO in ST-coded OFDM systems.",2005,0, 722,Transmission characteristic of defect for the 90 bend photonic crystal,"In this paper, we analyze the transmission characteristics of point defects around the outer layers of a 90-degree bend photonic crystal. The simulation results show that the vertical and horizontal point defects have a greater effect than slanted ones.",2010,0, 723,Dynamic Multiple-Fault Diagnosis With Imperfect Tests,"In this paper, we consider a model for the dynamic multiple-fault diagnosis (DMFD) problem arising in online monitoring of complex systems and present a solution. This problem involves real-time inference of the most likely set of faults and their time-evolution based on blocks of unreliable test outcomes over time. In the DMFD problem, there is a finite set of mutually independent fault states, and a finite set of sensors (tests) is used to monitor their status. We model the dependence of test outcomes on the fault states via the traditional D-matrix (fault dictionary). The tests are imperfect in the sense that they can have missed detections, false alarms, or may be available asynchronously. Based on the imperfect observations over time, the problem is to identify the most likely evolution of fault states over time. The DMFD problem is an intractable NP-hard combinatorial optimization problem. Consequently, we decompose the DMFD problem into a series of decoupled subproblems, one for each sample epoch. For a single-epoch MFD, we develop a fast and high-quality deterministic simulated annealing method. Based on the sequential inferences, a local search-and-update scheme is applied to further improve the solution. Finally, we discuss how the method can be extended to dependent faults.",2009,0, 724,Fault management for next-generation IP-over-WDM networks,"In this paper, we present a broad outline of fault management for next-generation IP-over-WDM networks.
This system contains three different components, namely fault detection, fault recovery and fairness service provision for different needs. We also review various fault detection and fault recovery schemes for WDM networks.",2005,0, 725,A heuristic method to reduce fault candidates for a speedy fault diagnosis,"In this paper, we present a heuristic method to reduce fault candidates for an efficient fault diagnosis. This paper uses a matching algorithm for the exact fault diagnosis. However, the time consumption of a fault diagnosis using the matching algorithm is huge, so we present a new method to reduce the fault diagnosis time. The method to reduce the time consumption is separated into two different phases: a pattern comparison and a back-tracing comparison in failing patterns. The proposed method reduces fault candidates by comparing failing patterns with good patterns during the critical path tracing process and by comparing back-tracing from non-erroneous POs with back-tracing from erroneous POs. The proposed method increases the simulation speed compared with the conventional algorithms, and it is also applicable to any other fault diagnosis algorithms. Experimental results on ISCAS'85 and ISCAS'89 benchmark circuits show that the fault candidate lists are smaller than those of previous diagnosis methods.",2008,0, 726,"Fault detection, isolation, and accommodation in a UAV longitudinal control system","In this paper, we present a multiple-model based method of analyzing the longitudinal controller performance loss caused by actuator faults in the aircraft elevator system. More specifically, we consider the effects of the failure-induced elevator actuator bandwidth reduction integrated with the longitudinal flight dynamics. Results of the proposed multiple-model based fault detection, isolation, and accommodation (FDIA) algorithm are applied to an uninhabited air vehicle (UAV) subject to multiple actuator failures. Simulation results illustrate that the proposed fault-tolerant controller is an effective and efficient FDIA methodology that could practically be applied to a wide range of similar applications.",2010,0, 727,A practical system for online diagnosis of control valve faults,"In this paper, we present a new control valve monitoring system using nonparametric statistical hypothesis tests for the diagnosis of backlash, deadband, leakage, and blocking, four common faults found in many control systems. To make our system practical and inexpensive, we utilize the sensor measurements available in most control systems. For the classification of individual faults, we extract geometric features from the measurement signals and detect the faults from their temporal trends. Based on the features, the statistical hypothesis tests are applied to discriminate the faults.",2007,0, 728,Online Failure Forecast for Fault-Tolerant Data Stream Processing,"In this paper, we present a new online failure forecast system to achieve predictive failure management for fault-tolerant data stream processing. Different from previous reactive or proactive approaches, predictive failure management employs failure forecasting to perform informed and just-in-time preventive actions on abnormal components only. We employ stream-based online learning methods to continuously classify the runtime operator state into normal, alert, or failure, based on collected feature streams. We have implemented the online failure forecast system as part of the IBM System S stream processing system.
Our experiments show that the online failure forecast system can achieve good prediction accuracy for a range of stream processing software failures, while imposing low overhead on the stream system.",2008,0, 729,Single-image vignetting correction using radial gradient symmetry,"In this paper, we present a novel single-image vignetting correction method based on the symmetric distribution of the radial gradient (RG). The radial gradient is the image gradient along the radial direction with respect to the image center. We show that the RG distribution for natural images without vignetting is generally symmetric. However, this distribution is skewed by vignetting. We develop two variants of this technique, both of which remove vignetting by minimizing asymmetry of the RG distribution. Compared with prior approaches to single-image vignetting correction, our method does not require segmentation and the results are generally better. Experiments show our technique works for a wide range of images and it achieves a speed-up of 4-5 times compared with a state-of-the-art method.",2008,0, 730,A technique for fault diagnosis of defects in scan chains,"In this paper, we present a scan chain fault diagnosis procedure. The diagnosis for a single scan chain fault is performed in three steps. The first step uses special chain test patterns to determine both the faulty chain and the fault type in the faulty chain. The second step uses a novel procedure to generate special test patterns to identify the suspect scan cell within a range of scan cells. Unlike previously proposed methods that restrict the location of the faulty scan cell only from the scan chain output side, our method restricts the location of the faulty scan cell from both the scan chain output side and the scan chain input side. Hence the number of suspect scan cells is reduced significantly in this step. The final step further improves the diagnostic resolution by ranking the suspect scan cells inside this range. The proposed technique handles both stuck-at and timing failures (transition faults and hold time faults). The extension of the procedure to diagnose multiple faults is discussed. The experimental results show the effectiveness of the proposed method",2001,0, 731,Energy-efficient soft error-tolerant digital signal processing,"In this paper, we present energy-efficient soft error-tolerant techniques for digital signal processing (DSP) systems. The proposed technique, referred to as algorithmic soft error-tolerance (ASET), employs low-complexity estimators of a main DSP block to achieve reliable operation in the presence of soft errors. Three distinct ASET techniques - spatial, temporal and spatio-temporal - are presented. For frequency-selective finite-impulse response (FIR) filtering, it is shown that the proposed techniques provide robustness in the presence of soft error rates of up to P_er = 10^-2 and P_er = 10^-3 in a single-event upset scenario. The power dissipation of the proposed techniques ranges from 1.1X to 1.7X (spatial ASET) and 1.05X to 1.17X (spatio-temporal and temporal ASET) when the desired signal-to-noise ratio is SNR_des = 25 dB.
In comparison, the power dissipation of the commonly employed triple modular redundancy technique is 2.9X.",2006,0, 732,The effectiveness of using non redundant test cases with program spectra for bug localization,"In this paper, we present our approach of using non-redundant test cases with program spectra (one of the automated bug localization techniques) to locate software bugs in a program. We evaluate several spectra metrics (functions mapped from program spectra) using the non-redundant test cases. Extensive evaluation on the Siemens Test Suite and a subset of the Unix datasets shows the effectiveness of locating bugs using non-redundant test cases with program spectra. In this paper, we also show that by adding duplicates of non-redundant test cases, the stability and performance of spectra metrics are affected.",2009,0, 733,Partial simulation-driven ATPG for detection and diagnosis of faults in analog circuits,"In this paper, we propose a novel fault-oriented test generation methodology for the detection and isolation of faults in analog circuits. Given the description of the circuit-under-test, the proposed test generator computes the optimal transient test stimuli in order to detect and isolate a given set of faults. It also computes the optimal set of test nodes to probe at, and the time instants to make measurements. The test generation program accommodates the effects introduced by component tolerances and measurement inaccuracy, and can be tailored to fit the signal generation capabilities of a hardware tester. Experimental results show that the proposed technique can be applied to generate transient tests for both linear and non-linear analog circuits of moderate complexity in reasonably short CPU time. This will significantly impact the test development costs for an analog circuit and will decrease the time-to-market of a product. Finally, the short duration and the easy-to-apply feature of the test stimuli will lead to a significant reduction in production test costs.",2000,0, 734,A band-notched ultra-wideband 1 to 4 wilkinson power divider using symmetric defected ground structure,"In this paper, we propose a novel ultra-wideband Wilkinson power divider with a symmetric defected ground structure (DGS) having a frequency band-notch characteristic. The Wilkinson power divider was invented in 1960 and has wide applications in microwave circuits and antenna feeds, but it has a narrow bandwidth. Several schemes have been devised to increase its bandwidth. The main proposal used a series connection of several sections, giving considerably increased bandwidth and high isolation between outputs for an equal power divider. We use a connection of three sections to obtain the UWB Wilkinson power divider. It is well known that the DGS of the microstrip line is implemented by making an artificial defect on the ground, and the ground defect provides a resonance property in the transfer characteristic. In the microstrip line, a DGS on the ground plane provides a band rejection characteristic at a resonance frequency corresponding to the size of the defect on the ground. In this paper, the band-notch characteristic is achieved by inserting a symmetric spiral DGS under the microstrip line of the power divider.
Experimental results of the constructed prototype are presented.",2007,0, 735,Proxy-Based SNR Scalable Error Tracking for Real-Time Video Transmission Over Wireless Broadband,"In this paper, we propose a proxy-based SNR scalable error tracking framework for real-time video transmission where the server has a wired connection to the Internet and the client is connected to the Internet through wireless broadband networks. We assume that all errors (packet losses) result from wireless links, and wired links are assumed to be error-free. The client sends back NACKs with information about lost base layer packets to the proxy via a feedback channel. Once the NACK is received, the proxy uses the motion data and the side information received from the streaming server to perform error tracking. We compare our approach to the original proxy-based error tracking scheme without scalability support at the same bitrate and bit error rate. Experimental results show that the proposed method can effectively improve performance",2006,0, 736,A fast and accurate multi-cycle soft error rate estimation approach to resilient embedded systems design,"In this paper, we propose a very fast and accurate analytical approach to estimate the overall SER and to identify the most vulnerable gates, flip-flops, and paths of a circuit. Using such information, designers can selectively protect the vulnerable parts, resulting in lower power and area overheads, which are the most important factors in embedded systems. Unlike previous approaches, the proposed approach first does not rely on fault injection or fault simulation; second, it measures the SER for multiple cycles of circuit operation; third, it accurately computes all three masking factors, namely logical, electrical, and timing masking; fourth, the effects of error propagation in re-convergent fanouts are considered. SERs estimated by the proposed approach for some ISCAS89 circuit benchmarks are compared with those estimated by the Monte Carlo (MC) simulation based fault injection approach. The results show that the proposed approach is about four orders of magnitude faster than the MC fault injection approach while having an accuracy of about 97%. This level of speed and accuracy makes the proposed approach a viable solution to measure the SER of very large circuits used in industry.",2010,0, 737,An Effective Error Concealment Framework For H.264 Decoder Based on Video Scene Change Detection,"In this paper, we propose an effective error concealment framework for the H.264 decoder based on scene change detection. The proposed framework quickly and accurately detects whether a scene change occurs in the decoding frame; based on the detection result, both corrupted intra frames and damaged inter frames can be reconstructed by a spatial or an improved temporal EC (Error Concealment) algorithm. The experiment shows that, compared with the traditional error concealment method in the H.264/AVC non-normative decoder, the proposed framework has better robustness and can efficiently improve the visual quality and PSNR of the decoded video.",2007,0, 738,Generalization of Rateless Codes for Unequal Error Protection and Recovery Time: Asymptotic Analysis,"In this paper, we propose rateless codes that provide an unequal error protection (UEP) property. We analyze the asymptotic properties of these codes under the iterative decoding algorithm. We further verify our work with simulations.
The simulation results indicate that the proposed codes have a strong UEP property. Moreover, the UEP property does not considerably degrade the overall performance of the codes. We also show that the proposed codes can provide unequal recovery time (URT). This means that, given a target bit error rate, different parts of the information bits can be decoded after receiving different amounts of encoded bits. This implies that the information bits can be recovered in a progressive manner. This URT property may be used for sequential data recovery in video/audio streaming",2006,0, 739,Fault Tolerance & Testable Software Security: A Method of Quantifiable Non-Malleability with Respect to Time,"In this paper, we demonstrate that there exist practical limits to the recoverability and integrity verification (non-malleability) of software with respect to time, a property that, to the best of our knowledge, has not been demonstrated previously; this, in turn, implies practical limits to software security using currently existing processing hardware. Non-malleability applied to software implies that it should be infeasible for an attacker to modify a piece of software, thus creating a software fault. Given the recoverability limitation, we demonstrate a quantifiable definition for secure software with respect to integrity/tamper resistance.",2007,0, 740,A CPW bandpass filter using defected ground structure with shorting stubs for 60 GHz applications,"In this paper, we design and fabricate a coplanar waveguide (CPW) bandpass filter operating at 60 GHz. A new DGS pattern is proposed and serves as the unit cell. We cascade two unit cells to construct a second-order bandpass filter. The equivalent lumped element circuit model was extracted from the full-wave electromagnetic simulation. Moreover, we obtain good agreement between the circuit model analysis and the simulation. The measurement results for the proposed bandpass filter reveal a 3 dB pass band with a bandwidth of 6.8 GHz. Furthermore, the insertion loss at 60 GHz is less than 2 dB.",2010,0, 741,Bivariate Software Fault-Detection Models,"In this paper, we develop bivariate software fault-detection models with two time measures, calendar time (days) and test-execution time (CPU time), and incorporate both of them to assess quantitative software reliability with higher accuracy. The resulting stochastic models are characterized by a simple binomial process and the bivariate order statistics of software fault-detection times with different time scales.",2007,0, 742,A high resolution and early triggering on multi layer processing defects,"In this paper, we discuss the methodology of multi-layer defect monitoring using the probability of occurrence (p) instead of the average value. We also show how the probability of occurrence (p) is able to trigger a defectivity event early, thus preventing major yield impact compared to the average value.",2006,0, 743,Fault management using passive testing for mobile IPv6 networks,"In this paper, we employ the communicating finite state machine (CFSM) model for networks to investigate fault management using passive testing. First, we introduce the concept of passive testing. Then, we introduce the CFSM model, the observer model and the fault model with the necessary assumptions. We introduce the fault detection algorithm using passive testing. Then, we briefly present our new passive testing approach for fault location, fault identification, and fault coverage based on the CFSM model.
We illustrate the effectiveness of our new technique through simulation of a practical protocol example, a 4-node mobile IPv6 network. Finally, conclusions and potential extensions are discussed",2001,0, 744,The complexity of adding failsafe fault-tolerance,"In this paper, we focus our attention on the problem of automating the addition of failsafe fault-tolerance, where fault-tolerance is added to an existing (fault-intolerant) program. A failsafe fault-tolerant program satisfies its specification (including safety and liveness) in the absence of faults; in the presence of faults, it satisfies its safety specification. We present a somewhat unexpected result that, in general, the problem of adding failsafe fault-tolerance to distributed programs is NP-hard. Towards this end, we reduce the 3-SAT problem to the problem of adding failsafe fault-tolerance. We also identify a class of specifications, monotonic specifications, and a class of programs, monotonic programs. Given a (positive) monotonic specification and a (negative) monotonic program, we show that failsafe fault-tolerance can be added in polynomial time. We note that the monotonicity restrictions are met for commonly encountered problems such as Byzantine agreement, distributed consensus, and atomic commitment. Finally, we argue that the restrictions on the specifications and programs are necessary to add failsafe fault-tolerance in polynomial time; we prove that if only one of these conditions is satisfied, the addition of failsafe fault-tolerance is still NP-hard.",2002,0, 745,The effect of the specification model on the complexity of adding masking fault tolerance,"In this paper, we investigate the effect of the representation of the safety specification on the complexity of adding masking fault tolerance to programs - where, in the presence of faults, the program 1) recovers to states from where it satisfies its (safety and liveness) specification and 2) preserves its safety specification during recovery. Specifically, we concentrate on two approaches for modeling the safety specifications: 1) the bad transition (BT) model, where safety is modeled as a set of bad transitions that should not be executed by the program, and 2) the bad pair (BP) model, where safety is modeled as a set of finite sequences consisting of at most two successive transitions. If the safety specification is specified in the BT model, then it is known that the complexity of automatic addition of masking fault tolerance to high atomicity programs - where processes can read/write all program variables in an atomic step - is polynomial in the state space of the program. However, for the case where one uses the BP model to specify the safety specification, we show that the problem of adding masking fault tolerance to high atomicity programs is NP-complete.
Therefore, we argue that automated synthesis of fault-tolerant programs is likely to be more successful if one focuses on problems where safety can be represented in the BT model.",2005,0, 746,Research and Implementation of Fault Diagnosis System on Oil Field Based on Intelligence Integrating,"In this paper, the authors put forward an integrated intelligence diagnosis model and the key technologies of this model in order to solve the working-status diagnosis problem of oil wells. The merits of this system include automatic knowledge acquisition, high identification speed, a high degree of intelligence, etc.; the system can identify 21 kinds of faults, and this diagnosis technology is more exact compared to other diagnosis methods.",2009,0, 747,A strategy to replace the damaged element for fault-tolerant induction motor drive,"In this paper, the best moment to replace the damaged element in a fault-tolerant induction motor drive working with open-loop and closed-loop control is presented; a prior fault-diagnosis stage to detect a short-circuit or open-circuit failure in the power device is considered. The technique is based on the connection of bidirectional switches to electrically isolate the damaged element by means of the corresponding fuse blowing, and to replace only the damaged device with a healthy one at the most suitable moment; the main issue is to diminish the tracking error of the motor current during the fault transient. Experimental and simulation results are obtained in order to validate the proposed technique.",2008,0, 748,Control strategy of transformer coupling solid state fault current limiter and its experimental study with capacitance load,"In this paper, the control strategy of the transformer coupling three-phase bridge-type solid state fault current limiter (TC-SSFCL) is presented and validated in an experimental system. It is proved that the control strategy ensures that the TC-SSFCL has excellent control performance and current-limiting effectiveness in its normal startup, operation and current-limiting conditions. Issues faced by the TC-SSFCL in a double-side power system are analyzed and solutions are suggested. The series resonance which appears in the test of the TC-SSFCL with a capacitance load is analyzed in detail. To suppress the resonance, two methods, pre-triggering the SCR bridge and adding a damping resistance in parallel with the coupling transformer, are discussed. The simulation results verify that the proposed strategies for suppressing the resonance are effective.",2009,0, 749,Fault Tolerant Control for Nonlinear Systems: Sum-of-Squares Optimization Approach,"In this paper, the fault tolerant control problem of nonlinear systems against actuator failures is considered. By representing the open-loop nonlinear systems in a state-dependent linear-like polynomial form and implementing a special class of Lyapunov functions, the above problem can be formulated in terms of state-dependent linear polynomial inequalities. Semidefinite programming relaxations based on the sum-of-squares decomposition are then used to efficiently solve such inequalities.",2007,0, 750,Fault Tolerant Strategies for BLDC Motor Drives under Switch Faults,"In this paper, a fault tolerant system for BLDC motors is proposed to maintain control performance under switching device faults of the inverter. The proposed fault tolerant system provides compensation for open-circuit faults and short-circuit faults in the power converter.
The fault identification is quickly achieved by a simple algorithm using the characteristics of BLDC motor drives. After fault identification, the drive system is reconfigured into a four-switch topology by connecting the faulty leg to the middle point of the DC-link using bidirectional switches. The feasibility of the proposed fault tolerant system is proved by simulation",2006,0, 751,The SRGM Framework of Integrated Fault Detection Process and Correction Process,"In this paper, the hypothesis that detected faults will be immediately removed is revised. An integration of the fault detection process and the correction process is considered. According to the statistics of the faults, two new frameworks, the SRGM framework including repeated faults (CRDW) and the SRGM framework excluding repeated faults (CNRDW), are presented. The above two frameworks can predict not only the number of cumulative detected faults but also the number of corrected faults. In this paper, as an example, two reliability models are obtained from CNRDW with different detection and correction processes. The fitting capability and prediction capability of the two models are evaluated on an open software failure data set. The experimental results show that the presented models have a fairly accurate fitting capability and prediction capability compared with other software reliability growth models.",2008,0, 752,Implementation of a bug algorithm in the e-puck from a hybrid control viewpoint,"In this paper, the implementation in the e-puck robot of an algorithm to follow a trajectory while avoiding fixed obstacles is presented. A review of some existing algorithms for trajectory tracking with obstacle avoidance is given. The basic characteristics of the mobile robot e-puck and the programming environment used to implement the control algorithm and prove the performance of the robot are also summarized. By modeling the kinematics of the robot and simulating the implementation of the algorithm, the good performance of the control in both the simulated environment and the real robot in different surroundings is shown. The effects of different environmental factors on the performance of the robot are analyzed. This leads to suggestions for algorithm improvements as future work.",2010,0, 753,A Monte Carlo study of deconvolution algorithms for partial volume correction in quantitative PET,"In this study, we evaluated several deconvolution methods for partial volume (PV) correction within dynamic positron emission tomography (PET) brain imaging and compared their performance with a PV correction method based on structural imaging. The motivation for this study stemmed from the errors in structural imaging based PV correction that are caused by magnetic resonance (MR) image segmentation and MR-PET registration inaccuracies. The studied deconvolution methods included variants of the iterative Richardson-Lucy deconvolution, variants of the reblurred Van Cittert deconvolution and the linear Wiener deconvolution. Our material consisted of a database of 16 Monte Carlo simulated dynamic 11C-Raclopride images with the same underlying physiology but differing underlying anatomy. We compared the binding potential (BP) values in the putamen and caudate resulting from the differing PV correction methods to the values computed based on the ground truth time activity curves (TACs). In addition, root mean square errors between TACs extracted from deconvolved images and the ground truth TACs were computed.
The iterative deconvolution approaches featured better performance than the linear one. As expected, MR based PV correction under ideal conditions (perfect MR-PET registration and MR image segmentation) yielded more accurate quantification than the deconvolution based methods. However, the iterative deconvolution methods clearly improved the quantitative accuracy of the computed physiological parameters (BP) as compared to the case of no PV correction. As variants of the reblurred Van Cittert deconvolution resulted in a lower anatomy-induced variance in the BP values, we consider them to be more interesting than Richardson-Lucy type deconvolution methods.",2006,0, 754,Compiler Optimizations for Fault Tolerance Software Checking,"In this work we propose a set of compiler optimizations to identify and remove redundant checks from the replicated code. Two checks are considered redundant if they check the same variable. In this work we evaluate two levels of hardware or system support: memory without support for checkpointing and rollback, where memory is guaranteed to not be corrupted with wrong values, and memory with low-cost support for checkpointing and rollback. We also consider the situation where the register file is protected with parity or ECC, such as on Intel Itanium, Sun UltraSPARC and IBM Power4-6, because software implementations can take advantage of this hardware feature and reduce some of the replicated instructions. We have evaluated our approach using LLVM as our compiler infrastructure and PIN for fault injection. Our experimental results with Spec benchmarks on a Pentium 4 show that in the case where memory is guaranteed not to be corrupted, performance improves by an average of 6.2%. With more support for checkpointing, performance improves by an average of 14.7%. A software fault tolerant system that takes advantage of register-safe platforms improves by an average of 16.0%. Fault injection experiments show that our techniques do not decrease fault coverage, although they slightly increase the number of segmentation faults.",2007,0, 755,Calculating Functions of Interval Type-2 Fuzzy Numbers for Fault Current Analysis,"In this work, functions of type-2 fuzzy numbers are analyzed. For the special case of interval type-2 fuzzy numbers, the type-2 membership function of the output variable is calculated using the lower and upper membership functions of the input variables and the vertex method. This procedure is used in an application where the type-2 fuzzy fault currents of an electric distribution system are calculated. The results are shown and the advantages of the approach are discussed",2007,0, 756,Fault tolerance system for UAV using hardware in the Loop Simulation,"In an Unmanned Aerial Vehicle (UAV) system, the application software is large and it is becoming hard to meet real-time requirements. A UAV system may suffer faults in the controlled plant as well as in the execution platform. The execution platforms support a modern real-time embedded system, but the distributed architecture is made of heterogeneous components that may incur transient or permanent faults. In the proposed system, we have developed a UAV system based on Hardware-in-the-Loop Simulation (HILS), which is used for testing the UAV with real-time data in a real environment. This simulation technique allows the system to be tested extrinsically under various conditions. The major part of this system deals with the fault tolerance system for the UAV, with a fault injection mechanism and a fault detection mechanism using fault tree analysis.
The fault recovery mechanism is also used for making the UAV land in safe mode. This paper also deals with the hardware in the simulation setup for the UAV system using the QNX operating system. This reliable architecture can enhance analysis capabilities for critical safety properties and reduce costs for such systems using Hardware-in-the-Loop Simulation.",2010,0, 757,Extended transfer bound error analysis in the presence of channel random nuisance parameter,"In this paper, we present an extended transfer bound analysis for the error performance of a general trellis code in a channel with overall correlated continuous-valued nuisance parameters. We introduce a proper parameter model and include it in a new extended form of the transfer function. In this way, both the new additional parameter space and the original error space are incorporated into the system error analysis. An example application with a simple trellis code and a Rayleigh fading channel is investigated in order to demonstrate the functionality of the principle. Computer simulation results are presented for two different codes and various fading scenarios, and comparisons are made between analytical and measured system error performances. It is shown that, for the fading amplitude, our approach is able to predict correct error asymptotes",2005,0, 758,Diagnosis of Defects on Scan Enable and Clock Trees,"In this paper, we propose a different software-based method that does not require any extra effort in manipulating test parameters. First, we assume the defects are on scan chains and use the previously published chain diagnosis algorithms presented in Y. Huang et al. (2005) to identify the suspect scan cells. Second, if there is at least one faulty chain that is modeled with a stuck-at-X fault, we attempt to diagnose with a stuck-at-0 fault model at scan enable. As mentioned earlier, the shift operation is incorrect when the scan cell value is obtained from the system logic for each shift cycle, so it is very likely we see both stuck-at-1 and stuck-at-0 at scan cells. Thus, a stuck-at-X fault model at scan cells is a sign of the stuck-at-0 fault model for scan enable defects",2006,0, 759,Design of LDPC-based error correcting cipher,"In this paper, we propose an LDPC error correcting cipher which joins the Advanced Encryption Standard (AES) and an LDPC code together. The LDPC error correcting cipher, which is based on the wide trail strategy, is a six-round block cipher that encrypts 256-bit plaintexts using a secret key to produce 512-bit ciphertexts, where the key is composed of a 128-bit AES secret key and the LDPC generator matrix. By using the LDPC generator matrix with its high-performance diffusion property, we make the LDPC error correcting cipher as secure as the Advanced Encryption Standard (AES) against linear and differential attacks in fewer rounds. Even the square attack has no effect on the cipher. Lastly, the encrypting/decrypting process is implemented, and the security and error correction capacity are also analyzed.",2008,0, 760,Helping users avoid bugs in GUI applications,"In this paper, we propose a method to help users avoid bugs in GUI applications. In particular, users would use the application normally and report bugs that they encounter to prevent anyone - including themselves - from encountering those bugs again.
When a user attempts an action that has led to problems in the past, he/she will receive a warning and will be given the opportunity to abort the action - thus avoiding the bug altogether and keeping the application stable. Of course, bugs should be fixed eventually by the application developers, but our approach allows application users to collaboratively help each other avoid bugs - thus making the application more usable in the meantime. We demonstrate this approach using our ""Stabilizer"" prototype. We also include a preliminary evaluation of the Stabilizer's bug prediction.",2005,0, 761,Theoretical expression of error event probability for a trellis chaos coded modulation concatenated with space-time blok code,"In this paper, we propose new insights into the chaos coded modulation (CCM) schemes originally proposed by Kozic et al. Using the approximation of the distance spectrum distribution with some usual laws, a complete study of the performance of these CCM schemes when they are concatenated with a Space Time Block Code (STBC) is proposed. Accurate bounds are obtained even in the case of time-selective channels.",2010,0, 762,Zero Defect Strategy for Electronic Components in Automotive Applications,Key challenges for components like ASICs and power semiconductors in automotive applications are introduced. The background for zero defects is highlighted. A vision of a strategy to achieve zero defects is drawn,2006,0, 763,Laboratory system for a traceable measurement of error vector magnitude,A laboratory system for a traceable measurement of error vector magnitude (EVM) is presented. It comprises a sampling oscilloscope which is directly traceable to basic physical quantities and a spectrum analyzer which is used for routine measurements and calibrations. The system is able to measure EVM for basic digitally modulated signals with low uncertainty.,2009,0, 764,An NN-based atmospheric correction algorithm for Landsat/TM thermal infrared data,"Land surface temperature (LST) is a key variable for studies of global or regional land surface processes and the energy and water cycle, and thus has important applications in various areas. Atmospheric correction is a major issue in LST retrieval using remote sensing data because the presence of the atmosphere always influences the radiation from the ground to the space sensor. Atmospheric correction of thermal infrared (TIR) data for land surface temperature retrieval is to estimate the three atmospheric parameters: transmittance, path radiance and the downward radiance. Typically, the atmospheric parameters are obtained using atmospheric profiles combined with a radiative transfer model (RTM), but this approach is time-consuming and expensive, which is impractical for high-speed (near-real-time) operational atmospheric correction. An artificial neural network (NN) based atmospheric correction model for Landsat/TM thermal infrared data is proposed. The multi-layer feed-forward neural network (MFNN) is selected, in which the atmospheric profiles (temperature, humidity and pressure), elevation and scan angle are the input variables, and the atmospheric parameters are the output variables. The MFNN is combined with the radiative transfer simulation, using MODTRAN 4.0 and the latest global assimilated data. Finally, the transmittance and path radiance derived by the MFNN-based algorithm are compared with the MODTRAN 4.0 results. The RMSEs for the two parameters are 0.0031 and 0.035 Wm-2sr-1m-1, respectively.
The results indicate that the proposed approach can be a practical method for Landsat/TM thermal data in terms of both accuracy and efficiency.",2010,0, 765,On a nonlinear multiple-centrality-corrections interior-point method for optimal power flow,"Large scale nonlinear optimal power flow (OPF) problems have been efficiently solved by extensions from linear programming to nonlinear programming of the primal-dual logarithmic barrier interior-point method and its predictor-corrector variant. Motivated by the impressive performance of the nonlinear predictor-corrector extension, in this paper we extend from linear programming to nonlinear OPF the efficient multiple centrality corrections (MCC) technique that was developed by Gondzio. The numerical performance of the proposed MCC algorithm is evaluated on a set of power networks ranging in size from 118 buses to 2098 buses. Extensive computational results demonstrate that the MCC technique is fast and robust, and outperforms the successful predictor-corrector technique",2001,0, 766,Finding Defects in Natural Language Confidentiality Requirements,"Large-scale software systems must adhere to complex, multi-lateral security and privacy requirements from regulations. It is industrial practice to define such requirements in the form of natural language (NL) documents. Currently existing approaches to analyzing NL confidentiality requirements rely on a manual linguistic transformation and normalization of the original text prior to the analysis. This paper presents an alternative approach to analyzing requirements by using semantic annotations placed directly into the original NL documents. The benefits of this alternative approach are twofold: (1) it can effectively be supported by an interactive annotation tool and (2) there is direct traceability between the annotation structures and the original NL documents. We have evaluated our method and tool support using the same real-world case study that was used to evaluate the earlier linguistic approach. Our results show that our method generates the same results, i.e., it uncovers the same problems.",2009,0, 767,Correction of the interpolation error of quadro-phase detection in interferometry,"Laser interferometers using quadro-phase detectors for fringe counting are frequently used for ultra-precise displacement measurements. The accuracy of interferometers in nanometrology is limited mainly by the interpolation error of the detector system. A new method based on curve fitting using the nonlinear least squares method was developed for correcting the interpolation error of quadro-phase detectors. The results from the experimental data obtained with the interferometric comparator CMI IK-1 are presented.",2000,0, 768,Active and latent failure conditions leading to human error in physical security,"Latent failure conditions describe the set of background circumstances which eventually lead to an unsafe act. These latent and active failures are generally uncovered during an accident investigation following an incident such as an airplane crash. This paper proposes that many evident preconditions for system failure exist in the physical security industry. This proposition is based upon observing control room operators' capabilities and practices while conducting operations in a 3D virtual control room simulator. The nature of the physical security industry is such that there is very little real-time feedback on operator and system performance.
Operators are expected to maintain high levels of detection performance day after day, month after month. In most cases, how the approximately 500,000 individuals that are protecting the nation's critical infrastructure will perform will only be known when an actual crisis occurs. It is proposed that the latent preconditions for failures in the physical security industry can be uncovered using the same methodologies used in accident investigation. The causal factors for human performance breakdown can be uncovered through simulation exercises rather than through actual incident investigation. Remedial measures can then be developed and validated in the simulator prior to being implemented in actual operations.",2010,0, 769,An error resilience scheme for layered video coding,"Layered video coding combined with prioritized transmission is widely recognized as one of the schemes for providing error resilience in video transport systems. We examine the error performance of data partitioning coded MPEG-2 video bitstreams transmitted over a channel subject to bit errors. While base layer errors cannot be tolerated, only a limited amount of errors in the enhancement layer is acceptable. Further improvements in bit error resilience can be achieved using the EREC coder in the enhancement layer. Our results show a PSNR gain of up to 3 dB with an EREC coded enhancement layer and no errors in the base layer.",2005,0, 770,"ACE: an aggressive classifier ensemble with error detection, correction and cleansing","Learning from noisy data is a challenging and real issue for real-world data mining applications. Common practices include data cleansing, error detection and classifier ensembling. The essential goal is to reduce noise impacts and enhance the learners built from the noise-corrupted data, so as to benefit further data mining procedures. In this paper, we present a novel framework that unifies error detection, correction and data cleansing to build an aggressive classifier ensemble for effective learning from noisy data. Being aggressive, the classifier ensemble is built from the data that has been preprocessed by the data cleansing and correcting techniques. Experimental comparisons demonstrate that such an aggressive classifier ensemble is superior to the model built from the original noisy data, and is more reliable in enhancing the learning theory extracted from noisy data sources, in comparison with simple data correction or cleansing efforts",2005,0, 771,Stream Prediction Model Based on Tendency Correction,"The linear regression model is widely used in data stream prediction processing. In order to eliminate the prediction deviation caused by a small data set, a curve tendency correction technique is used to increase the prediction accuracy. First, the weighted moving method is used to modify the prediction function parameters. This algorithm improves the prediction accuracy but suffers from low time and space efficiency. Based on this algorithm, the exponential smoothing method is proposed. It is proved that this algorithm can reduce the space and time complexity and also improve the prediction accuracy.",2009,0, 772,Error propagations for local bundle adjustment,"Local bundle adjustment (LBA) has recently been introduced to estimate the geometry of image sequences taken by a calibrated camera. Its advantage over standard (global) bundle adjustment is a great reduction of computational complexity, which allows real-time performance with similar accuracy.
However, no confidence measure on the LBA result, such as uncertainty or covariance, has yet been introduced. This paper introduces statistical models and estimation methods for uncertainty with two desirable properties: (1) uncertainty propagation along the sequence and (2) real-time calculation. We also explain why this problem is more complicated than it may appear at first glance, and we provide results on video sequences.",2009,0, 773,Consistent outdoor vehicle localization by bounded-error state estimation,"Localization is a part of many automotive applications where safety is of crucial importance. We think that the best way to guarantee the safety of these applications is to guarantee the results of their embedded localization algorithms. Unfortunately, localization of vehicles is mostly solved by Bayesian methods, which (due to their structure) can only guarantee their results in a probabilistic way. Interval analysis allows an alternative approach with bounded-error state estimation. Such an approach provides a bounded set of configurations that is guaranteed to surround the actual vehicle configuration. We have validated the bounded-error state estimation with an outdoor vehicle equipped with odometers, a GPS receiver and a gyro. With the experimental results we compare bounded-error state estimation with particle filter localization in terms of consistency and computation time.",2009,0, 774,Request Path Driven Model for Performance Fault Diagnoses,"Locating and diagnosing performance faults in distributed systems is crucial but challenging. Distributed systems are increasingly complex, full of various correlations and dependencies, and exhibit dramatic dynamics. All of this makes traditional approaches prone to high false-alarm rates. In this paper, we propose a novel system modeling technique, which encodes components' dynamic dependencies and behavior characteristics into the system's meta-model and takes it as a unifying framework to deploy components' sub-models. We propose an automatic analysis approach to distill request path signatures, the essential information about components' dynamic behaviors, from request travel paths, and use them to induce the meta-model with a Bayesian network, which is then used for fault location and diagnosis. We carried out fault-injection experiments with RUBiS, a TPC-W-like benchmark simulating eBay.com. The results indicate that our modeling approach provides effective problem diagnosis, i.e., the Bayesian network technique is effective for fault detection and pinpointing in the context of request tracing. Moreover, the meta-model induced from request paths provides effective guidance for learning statistical correlations among metrics across the system, which effectively avoids 'false alarms' in fault pinpointing. As a case study, we construct a proactive recovery framework, which integrates our system modeling technique with software rejuvenation to guarantee the system's quality of service.",2010,0, 775,Defects Inspecting System for Tapered Roller Bearings Based on Machine Vision,"Many bearing manufacturers use traditional manual methods to inspect bearing defects, which are inefficient, costly and unreliable. In this paper, a defect inspecting system for tapered roller bearings based on machine vision (DISTRB) is introduced. DISTRB realizes online automatic inspection and includes mechanical parts, electrical parts, lighting equipment, a CCD, a computer and its software.
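As a toy illustration of the defect-recognition algorithms elaborated below, a minimal sketch of a roller-counting check for RM (roller-missing) defects, assuming an already segmented binary image; the function, the SciPy blob labeling and the thresholds are illustrative assumptions, not the system's actual algorithm:

```python
import numpy as np
from scipy import ndimage

def roller_missing(binary_image: np.ndarray, expected_rollers: int,
                   min_area: int = 20) -> bool:
    """Flag an RM defect when fewer roller blobs than expected are found."""
    labels, n = ndimage.label(binary_image)                      # connected components
    areas = ndimage.sum(binary_image, labels, range(1, n + 1))   # pixel count per blob
    rollers = int(np.sum(np.asarray(areas) >= min_area))         # drop small noise blobs
    return rollers < expected_rollers
```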
DISTRB performs automatic online inspection with CCDs and defect-recognition algorithms, taking the place of workers' eyes. Several implementation devices are used to obtain high-quality images. The defect-recognition algorithms of DISTRB are elaborated in detail. The bearing defects include missing rollers in tapered roller bearings (RM) and rollers installed opposite to the standard direction (ROD). The key of DISTRB is to use practical image-processing techniques to improve the accuracy of the defect-recognition algorithms. The accuracy and stability of DISTRB mainly depend on the algorithms, provided the lighting and imaging parts are fixed. DISTRB involves multiple domains such as mechatronic engineering, computer science, image processing and pattern recognition. In this paper, taking images of a type 32011X bearing as the research object, the recognition algorithms for RM and ROD defects are illustrated. Theoretical calculation, laboratory experiments and practical applications in factories show that the defect inspecting system is reliable, stable and effective.",2010,0, 776,A novel nanocomposite and its application in repairing bone defects,"Many commercial bone graft substitutes (BGS) and experimental bone tissue engineering scaffolds have been developed for bone repair and regeneration. Among them, sol-gel processed bioactive glasses showed promise due to their unique nanostructure. This study reports in vivo bone regeneration using a newly developed porous bioactive and resorbable nanocomposite that is composed of nano-bioactive glass (BG), collagen (COL), hyaluronic acid (HYA) and phosphatidylserine (PS), BG-COL-HYA-PS. The nanocomposite was prepared by a combination of sol-gel and freeze-drying methods. A rabbit radius defect model was used to evaluate bone regeneration at time points of 2, 4 and 8 weeks. Techniques including radiography, histology, fluorescent markers and micro-CT were applied to characterize the new bone formation. In addition, ectopic bone formation was also investigated for the osteoinductivity of the materials. The 8-week results showed that (1) nearly complete bone regeneration was achieved for the BG-COL-HYA-PS nanocomposite combined with a bovine bone morphogenetic protein (BMP); (2) partial bone regeneration was achieved for the BG-COL-HYA-PS composites alone; and (3) the control remained empty. This study demonstrated that the novel BG-COL-HYA-PS nanocomposite, with or without the grafting of BMP, is a promising BGS or tissue engineering scaffold for non-load-bearing orthopaedic applications.",2008,0, 777,Application fault tolerance with Armor middleware,"Many current approaches to software-implemented fault tolerance (SIFT) rely on process replication, which is often prohibitively expensive for practical use due to its high performance overhead and cost. The adaptive reconfigurable mobile objects of reliability (Armor) middleware architecture offers a scalable low-overhead way to provide high-dependability services to applications. It uses coordinated multithreaded processes to manage redundant resources across interconnected nodes, detect errors in user applications and infrastructural components, and provide failure recovery.
The authors describe the experiences and lessons learned in deploying Armor in several diverse fields.",2005,0, 778,How does resource utilization affect fault tolerance?,"Many fault-tolerant architectures are based on the single-fault assumption, hence the accumulation of dormant faults represents a potential reliability hazard. Based on the example of the fail-silent Time-Triggered Architecture, we study sources and effects of dormant faults. We identify software as being more prone to dormant faults than hardware. By means of modeling, we reveal a high sensitivity of the MTTF to the existence of even a small amount of irregularly used resources. We propose on-line testing as a means of coping with dormant faults and sketch an appropriate test strategy",2000,0, 779,Applying fault-tolerant solutions of circulant graphs to meshes and hypercubes,"Many important architectures such as rings, meshes and hypercubes can be modeled as circulant graphs. As a result, circulant graphs have received a lot of attention, and a new method was developed for designing fault-tolerant solutions for them. We review this method in this paper, and examine its applications to the design of fault-tolerant solutions for meshes and hypercubes. Our results indicate that these solutions are efficient.",2005,0, 780,Improved Bolstering Error Estimation for Gene Ranking,"Many methods have been proposed to identify differentially expressed genes in diseased tissues. The performance of such methods is closely related to the evaluation metric. We examine several error estimation algorithms (i.e., cross validation, bootstrap, resubstitution, and resubstitution with bolstering) for three classifiers (i.e., support vector machine, Fisher's discriminant, and signed distance function). To control the classifier's data-overfitting problem, usually caused by the small sample sizes of many real datasets, we generate synthetic datasets based on real data. This way, we can monitor the impact of sample size when evaluating the metrics. We find that resubstitution with bolstering has the best result, especially with respect to computational efficiency. However, classical bolstering tends to be biased in high dimensions. Thus, we further investigate ways to reduce bolstering estimation bias without increasing computational intensity. Results of our investigation indicate that the estimator tends to become unbiased as the sample size increases. We also find that modified bolstering is the best among all metrics in terms of estimation accuracy and computational efficiency.",2007,0, 781,A quantitative error analysis for mobile network planning,"Many planning tools and algorithms are being developed to make network planning more powerful and efficient. Commonly, we need to perform error analysis for the planning tools in order to evaluate them and thus improve the algorithms. This paper presents methods of quantitative error analysis for network planning. To perform error analysis, drive test data are collected and then compared with planning results. In particular, the author discusses measurement requirements, since the accuracy of error analysis depends on the sampled data.",2003,0, 782,Fault tolerance for embedded control system,"Many products with embedded electronic systems demand high reliability and high security, such that they can be trusted to operate in safety-critical applications.
Safe and secure system engineering requires designers to consider all the consequences of errors and intrusions in their systems in addition to normal modes of operation. The proposed embedded control system treats fault tolerance as a major requirement. The fault tolerance proposed in this paper is based on redundancy, transient-fault handling and fault detection, all implemented by software techniques. This fault tolerance system can enhance analysis capabilities for critical safety properties and reduce certification costs.",2009,0, 783,Towards energy-aware software-based fault tolerance in real-time systems,"Many real-time systems employed in defense, space, and consumer applications have power constraints and high reliability requirements. In this paper, we focus on the relationship between fault tolerance techniques and energy consumption. In particular, we establish the energy efficiency of Application Level Fault Tolerance (ALFT) over other software-based fault tolerance methods. We then develop sensible energy-aware heuristics for ALFT schemes. The heuristics yield up to 40% energy savings.",2002,0, 784,Using Logic Criterion Feasibility to Reduce Test Set Size While Guaranteeing Double Fault Detection,"Logic criteria are used in software testing to find inputs that guarantee detecting certain faults. Thus, satisfying a logic criterion guarantees killing certain mutants. Some logic criteria are composed of other criteria. Determining component criterion feasibility can be used as a means to reduce test set size without sacrificing fault detection. This paper introduces a new logic criterion based on component criterion feasibility. Given a predicate in minimal DNF, a determination is made of which component criteria are feasible for individual literals and terms. This in turn provides a determination of which criteria are necessary to detect double faults and kill second-order mutants. A test set satisfying this new criterion guarantees detecting the same double faults as a larger test set satisfying another criterion. An empirical study using predicates in avionics software showed that test sets satisfying the new criterion detected all but one double fault type. For this one double fault type, 99.91% of the double faults were detected, and combining equivalent single faults nearly always yielded an equivalent double fault.",2009,0, 785,Automatic Correction of Loop Transformations,"Loop nest optimization is a combinatorial problem. Due to the growing complexity of modern architectures, it involves two increasingly difficult tasks: (1) analyzing the profitability of sequences of transformations to enhance parallelism, locality, and resource usage, which amounts to a hard problem on a non-linear objective function; (2) the construction and exploration of a search space of legal transformation sequences. Practical optimizing and parallelizing compilers decouple these tasks, resorting to a predefined set of enabling transformations to eliminate all sorts of optimization-limiting semantical constraints. State-of-the-art optimization heuristics face a hard decision problem on the selection of enabling transformations only remotely related to performance. We propose a new design where optimization heuristics first address the main performance anomalies, then correct potentially illegal loop transformations a posteriori, attempting to minimize the performance impact of the necessary adjustments.
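To make the correction idea concrete, a minimal sketch of loop shifting repairing an illegal loop fusion, written in plain Python rather than the paper's affine-schedule formalism; the arrays and loop bodies are hypothetical:

```python
import numpy as np

N = 8
x = np.arange(N, dtype=float)
a = np.zeros(N); b = np.zeros(N)

# Original program: the second loop reads a[i + 1], produced by the first loop.
for i in range(N):
    a[i] = 2.0 * x[i]
for i in range(N - 1):
    b[i] = a[i + 1] + 1.0

# Naive fusion would be illegal: at iteration i, a[i + 1] is not yet computed.
# Shifting the producer by one iteration (and peeling its first step) restores
# legality while keeping the loops fused:
a2 = np.zeros(N); b2 = np.zeros(N)
a2[0] = 2.0 * x[0]                   # peeled producer step
for i in range(N - 1):
    a2[i + 1] = 2.0 * x[i + 1]       # producer, shifted by one iteration
    b2[i] = a2[i + 1] + 1.0          # consumer now sees the value it needs

assert np.allclose(a, a2) and np.allclose(b, b2)
```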
We propose a general method to correct any sequence of loop transformations through a combination of loop shifting, code motion and index-set splitting. Sequences of transformations are modeled by compositions of geometric transformations on multidimensional affine schedules. We provide experimental evidence of the scalability of the algorithms on real loop optimizations.",2007,0, 786,Highly Parallel FPGA Emulation for LDPC Error Floor Characterization in Perpendicular Magnetic Recording Channel,"Low-density parity-check (LDPC) codes offer a promising error correction approach for high-density magnetic recording systems due to their near-Shannon-limit error-correcting performance. However, evaluation of LDPC codes at the extremely low bit error rates (BER) required by hard disk drive systems, typically around 10-12 to 10-15, cannot be carried out on high-performance workstations using conventional Monte Carlo techniques in a tractable amount of time. Even field-programmable gate array (FPGA) emulation platforms take a few weeks to reach BERs between 10-11 and 10-12. Thus, we implemented a highly parallel FPGA processing cluster to emulate a perpendicular magnetic recording channel, which enabled us to accelerate the emulation by more than 100 times over the fastest reported emulation. This increased throughput enabled us to characterize the performance of LDPC code BER down to near 10-14 and investigate its error floor.",2009,0, 787,An adaptive error concealment mechanism for H.264/AVC encoded low-resolution video streaming,"Low-rate video is widely used, especially in mobile communications. H.264/AVC (advanced video coding) is well suited for real-time error-resilient transport over packet-oriented networks. In real-time communications, lost packets at the receiver cannot be avoided. Therefore, it is essential to design efficient error concealment methods which visually reduce the degradation caused by the missing information. Each method has its own quality of reconstruction. We implemented various efficient error concealment techniques and investigated their performance in different scenarios. As a result, we propose and evaluate an adaptive error concealment mechanism that accomplishes both good performance and low complexity, enabling deployment in mobile video streaming applications. This mechanism selects a suitable error concealment method according to the amount of instantaneous spatial and temporal information in the video sequence.",2006,0, 788,Machine Learning and Bias Correction of MODIS Aerosol Optical Depth,"Machine-learning approaches (neural networks and support vector machines) are used to explore the reasons for a persistent bias between aerosol optical depth (AOD) retrieved from the MODerate resolution Imaging Spectroradiometer (MODIS) and the accurate ground-based Aerosol Robotic Network. While this bias falls within the expected uncertainty of the MODIS algorithms, there is room for algorithm improvement. The results of the machine-learning approaches suggest a link between the MODIS AOD biases and surface type. MODIS-derived AOD may be showing dependence on the surface type either because of the link between surface type and surface reflectance or because of the covariance between aerosol properties and surface type.",2009,0, 789,Employing Rapid Prototyping biomedical model to assist the surgical planning of defect mandibular reconstruction,"Mandibular defects need customized titanium plates for reconstruction.
The conventional design of the titanium plate takes place prior to surgical intervention. The efficiency and accuracy of the conventional method are mainly dependent on the experience and skill of the designer. In order to decrease the dependence on design experience and enhance the participation of surgeons in the design process, a customized titanium plate implant can be designed and fabricated according to a Rapid Prototyping (RP) biomedical model. An RP biomedical model greatly facilitates diagnosis and treatment planning. It can decrease the operation time and the risk of misinterpretation of the medical problem. In these cases, the operation time was reduced by 1.5-2.5 hours. A physical biomedical model also facilitates surgery planning and makes the rehearsal and simulation of the operation possible. The customized titanium plate conforms to the patient's mandible anatomy and is thick enough to provide the strength and stiffness needed to fix the mandible until the bone grafts and soft tissues heal. At the same time, it is thin enough to avoid extensive intrusion into soft tissues and to avoid weighing too much on the patient's jaw.",2010,0, 790,A Method for Predicting Marker Tracking Error,"Many augmented reality (AR) applications use marker-based vision tracking systems to recover camera pose by detecting one or more planar landmarks. However, most of these systems do not interactively quantify the accuracy of the pose they calculate. Instead, the accuracy of these systems is either ignored, assumed to be a fixed value, or determined using error tables (constructed in an off-line ground-truthed process) along with a run-time interpolation scheme. The validity of these approaches is questionable, as errors are strongly dependent on the intrinsic and extrinsic camera parameters and scene geometry. In this paper we present an algorithm for predicting the statistics of marker tracker error in real-time. Based on the scaled spherical simplex unscented transform (SSSUT), the algorithm is applied to the augmented reality toolkit plus (ARToolKitPlus). The results are validated using precision off-line photogrammetric techniques.",2007,0, 791,Efficient Intra Refresh Using Motion Affected Region Tracking for Surveillance Video over Error Prone Networks,"Intra refresh is well known as a simple technique for eliminating temporal error propagation in a predictive video coding system. However, for surveillance video systems, most of the existing intra refresh methods do not make good use of the properties of surveillance video. This paper proposes an efficient error recovery scheme using a novel intra refresh based on motion-affected region tracking (MTIR). For every frame between two successive intra refresh frames, the motion-affected regions are statistically analyzed and the error-sensitive ones are intra refreshed in a refresh frame. To suppress the potential spatial and temporal error propagation, constrained intra prediction is used for the intra-refreshed macroblocks, and the number of references for an inter-predicted frame that follows an intra-refreshed frame is limited.
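As one plausible reading of the selection step just described (not the authors' exact algorithm), a minimal sketch that ranks macroblocks by how often they fell inside motion-affected regions and intra refreshes the most error-sensitive ones; the function name and the refresh budget are illustrative assumptions:

```python
def select_intra_mbs(motion_hits, budget):
    """Pick the 'budget' most motion-affected macroblocks for intra refresh.

    motion_hits: macroblock index -> number of frames (since the last refresh
    frame) in which the block lay inside a motion-affected region.
    """
    ranked = sorted(motion_hits, key=motion_hits.get, reverse=True)
    return set(ranked[:budget])

# Toy usage: blocks 3 and 7 moved most often, so they are refreshed first.
hits = {0: 1, 3: 5, 7: 4, 12: 2}
print(select_intra_mbs(hits, budget=2))   # {3, 7}
```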
Experimental results show that, compared with existing intra refresh methods and flexible macroblock ordering, the proposed method achieves a considerable improvement in both objective peak signal-to-noise ratio (PSNR) and subjective visual quality at the same bit rate and packet loss rate.",2008,0, 792,Upset-like fault injection in VHDL descriptions: A method and preliminary results,This paper investigates an approach allowing one to evaluate the consequences of single event upset phenomena for the reliable operation of processors. The method is based on the simulation of bit flips using a modified version of a high-level circuit description. Preliminary results illustrate the potential of this new strategy,2001,0, 793,"Experiences, strategies, and challenges in building fault-tolerant CORBA systems","It has been almost a decade since the earliest reliable CORBA implementation and, despite the adoption of the fault-tolerant CORBA (FT-CORBA) standard by the Object Management Group, CORBA is still not considered the preferred platform for building dependable distributed applications. Among the obstacles to FT-CORBA's widespread deployment are the complexity of the new standard, the lack of understanding in implementing and deploying reliable CORBA applications, and the fact that current FT-CORBA implementations do not lend themselves readily to complex, real-world applications. We candidly share our independent experiences as developers of two distinct reliable CORBA infrastructures (OGS and Eternal) and as contributors to the FT-CORBA standardization process. Our objective is to reveal the intricacies, challenges, and strategies in developing fault-tolerant CORBA systems, including our own. Starting with an overview of the new FT-CORBA standard, we discuss its limitations, along with techniques for best exploiting it. We reflect on the difficulties that we have encountered in building dependable CORBA systems, the solutions that we developed to address these challenges, and the lessons that we learned. Finally, we highlight some of the open issues, such as nondeterminism and partitioning, that remain to be resolved.",2004,0, 794,Definition of Managed Objects for Fault Management of Satellite Network,"Fault management of satellite networks is a new subject for network management research. Fault management of a satellite network acquires data on managed objects by means of a network management protocol. The multiplex network management protocol (MNMP) exhibits great advantages in satellite network management. However, the current MNMP MIB defines only a part of the management objects, most of which focus on the transformation of managed objects from the SNMP MIB and CMIP. For fault-management objects at four granularities of a satellite network, i.e., network, function, device, and component class, this paper defines the corresponding managed-object information. The definitions conform to the definition of MNMP managed objects (DMMO). This work addresses the lack of fault-management objects in the MNMP MIB.",2009,0, 795,Engineering Equipment Integrated Fault Diagnosis System Based on Component Technology,"It is difficult to develop corresponding fault diagnosis systems, and software reusability is poor, because engineering equipment comes in many types with diverse performance.
This paper discusses a solution for an integrated engineering-equipment fault diagnosis system based on component technology, puts forward the system model, and gives the design process and working principle of the system framework. The software is designed on a three-layer hierarchy; it is easy to reuse and maintain, and its operation is simple. A new theory and method for developing future engineering-equipment fault diagnosis systems is provided.",2009,0, 796,Fault Diagnosis of Generator Based on D-S Evidence Theory,"It is difficult to identify the fault type from the signal gathered from the sensors. In this paper, a new fusion algorithm based on the Dempster-Shafer theory of evidence and neural networks is brought forward. This method combines the advantages of D-S evidence theory and the BP neural network. Neural networks are used to preprocess the data gathered from the embedded sensors in the monitoring system of a hydropower plant. Compared with approaches that adopt only D-S evidence theory or neural networks, the accuracy of the diagnostic results is obviously improved, and signal analysis confirms this conclusion. This method has been successfully applied in the monitoring system of the JiLin FengMan hydropower plant.",2008,0, 797,Injecting intermittent faults for the dependability validation of commercial microcontrollers,"It is expected that intermittent faults will be a great challenge in modern VLSI circuits. In this work, we present a case study of the effects of intermittent faults on the behavior of a commercial microcontroller. The methodology relies on a VHDL-based fault injection technique, which allows a systematic and exhaustive analysis of the influence of different fault and system parameters. From the simulation traces, the occurrences of failures and latent errors have been logged. To extend the study, the results obtained have been compared to those obtained when injecting transient and permanent faults. The applied methodology can be generalized to more complex systems.",2008,0, 798,Partial Discharge Pattern Characteristic of HV Cable Joints with Typical Artificial Defect,"Insulation failure of XLPE cable systems, particularly in joints and terminations, is mostly attributable to local defects resulting from inadequate manufacturing or poor installation workmanship. Since it is well acknowledged that partial discharge (PD) is related to the discharge source (fault or defect), PD detection and pattern recognition have been widely accepted as a means to provide information on both the type and severity of defects or potential failures, which is further expected to give advice on repair or replacement of cable accessories. This paper presents a laboratory experiment on HV XLPE cable joints with typical artificial defects and their partial discharge pattern characteristics. PD patterns and statistical operators of these defects show dissimilarity, which can be utilized as samples for further on-site PD pattern recognition.",2010,0, 799,The rising threat of vulnerabilities due to integer errors,"Integer errors are mistakes a programmer makes in sensitive operations involving integer data-type variables. Bugs caused by incorrect integer use are a fact of life for developers. In early 2001, it became clear that integer errors frequently cause security vulnerabilities. The article explains the vulnerabilities and offers guidelines to prevent the introduction of these flaws.
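To illustrate the class of flaw the article describes, a minimal sketch that models C-style unsigned 32-bit wraparound and a guarded size computation in Python; the helper names are hypothetical and the example is generic, not drawn from the article:

```python
UINT32_MAX = 0xFFFFFFFF

def u32_add(a: int, b: int) -> int:
    """Model unsigned 32-bit addition: wraps around like C's uint32_t."""
    return (a + b) & UINT32_MAX

def checked_alloc_size(count: int, elem_size: int) -> int:
    """Guarded size computation: reject products that would wrap in 32 bits.

    In C, an unchecked count * elem_size that wraps to a small value leads to
    an undersized allocation and a subsequent buffer overflow.
    """
    if count != 0 and elem_size > UINT32_MAX // count:
        raise OverflowError("count * elem_size would overflow 32 bits")
    return count * elem_size

print(u32_add(0xFFFFFFFF, 1))        # 0 -- silent wraparound
print(checked_alloc_size(1024, 4))   # 4096 -- safe
```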
The most important thing is to maintain awareness of the risks during software development.",2003,0, 800,"Snooze: A Scalable, Fault-Tolerant and Distributed Consolidation Manager for Large-Scale Clusters","Intelligent workload consolidation and dynamic cluster adaptation offer a great opportunity for energy savings in current large-scale clusters. Because of the heterogeneous nature of these environments, scalable, fault-tolerant and distributed consolidation managers are necessary in order to efficiently manage their workload and thus conserve energy and reduce operating costs. However, most consolidation managers available today do not fulfill these requirements: they are mostly centralized and designed solely to operate in virtualized environments. In this work, we present the architecture of a novel scalable, fault-tolerant and distributed consolidation manager called Snooze that is able to dynamically consolidate the workload of a software- and hardware-heterogeneous large-scale cluster composed of resources using the virtualization and Single System Image (SSI) technologies. Therefore, a common cluster monitoring and management API is introduced, which provides uniform and transparent access to the features of the underlying platforms. Our architecture is open to support any future technologies and can be easily extended with monitoring metrics and algorithms. Finally, a comprehensive use case study demonstrates the feasibility of our approach to manage the energy consumption of a large-scale cluster.",2010,0, 801,Enhanced server fault-tolerance for improved user experience,"Interactive applications such as email, calendar, and maps are migrating from local desktop machines to data centers due to the many advantages offered by such a computing environment. Furthermore, this trend is creating a marked increase in the deployment of servers at data centers. To ride the price/performance curves for CPU, memory and other hardware, inexpensive commodity machines are the most cost-effective choice for a data center. However, due to the low availability numbers of these machines, the probability of server failures is relatively high. Server failures can in turn cause service outages, degrade user experience and eventually result in lost revenue for businesses. We propose a TCP splice-based Web server architecture that seamlessly tolerates both Web proxy and backend server failures. The client TCP connection and sessions are preserved, and failover to alternate servers in case of server failures is fast and client-transparent. The architecture provides support for both deterministic and non-deterministic server applications. A prototype of this architecture has been implemented in Linux, and the paper presents detailed performance results for a PHP-based Webmail application deployed over this architecture.",2008,0, 802,Improvement of Interpixel Uniformity in Carbon Nanotube Field Emission Display by Luminance Correction Circuit,"Interpixel uniformity is critical to image quality, and is hard to improve by structural reformation of emissive elements because of the nonuniform luminance characteristics of subpixels in self-emissive devices like the carbon nanotube field emission display (CNT-FED). In this paper, we discuss the improvement of interpixel uniformity by individually controlling the luminance of all subpixels in a display panel. We propose a prototype CNT-FED with improved interpixel uniformity using luminance correction circuitry.
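A minimal sketch of the general idea of per-subpixel luminance correction, equalizing drive gains to the dimmest subpixel; the function and the measurements are hypothetical and far simpler than the actual correction circuit:

```python
import numpy as np

def luminance_gains(measured: np.ndarray) -> np.ndarray:
    """Per-subpixel drive gains that equalize output to the dimmest subpixel.

    Scaling every subpixel down to the minimum trades a small overall
    luminance reduction for improved interpixel uniformity, mirroring the
    trade-off reported in the abstract.
    """
    return measured.min() / measured   # gains in (0, 1]

lum = np.array([95.0, 100.0, 105.0, 110.0])   # hypothetical luminances (cd/m^2)
print(luminance_gains(lum))
```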
The analysis of interpixel uniformity with the proposed luminance correction circuit shows that the index of interpixel uniformity increases by about 8% with only about a 10% luminance reduction ratio, a relatively small penalty compared to the enhancement of the interpixel uniformity. The proposed correction method can also be used to ensure improvement of the interpixel uniformity in large-size CNT-FED panels.",2008,0, 803,Studying effect of location and resistance of inter-turn faults on fault current in power transformers,"Inter-turn (turn-to-turn) faults are among the most important failures that can occur in power transformers. This phenomenon can seriously reduce the useful life of transformers. Meanwhile, transformer protection schemes such as differential relays are not able to detect this kind of fault. This type of fault should be studied carefully to determine its features and characteristics. In this paper the effects of fault location and fault resistance on the amplitude of the fault current are studied. It is found that a change of fault location along the winding has a considerable effect on the fault current amplitude. It is also shown that even a small fault resistance can have a major effect on the fault current amplitude. In this paper, a real 240/11 kV, 27 MVA transformer is used for simulation studies.",2007,0, 804,A generic method for fault injection in circuits,"Microcircuits dedicated to security in smartcards are targeted by more and more sophisticated attacks, like fault attacks, that combine physical disturbance and cryptanalysis. The use of simulation for circuit validation considering these attacks is limited by the time needed to compute the result of the chosen fault injections. Usually, this choice is made by the user according to his knowledge of the circuit functionality. The aim of this paper is to propose a generic and semi-automatic method to reduce the number of fault injections using the types of data stored in registers (latch by latch)",2006,0, 805,Built-in sequential fault self-testing of array multipliers,"Microprocessor datapath architectures operate on signed numbers usually represented in two's-complement or sign-magnitude formats. The multiplication operation is performed by optimized array multipliers of various architectures which are often produced by automatic module generators. Array multipliers have either a standard, nonrecoded signed (or unsigned) architecture or a recoded (modified Booth's algorithm) architecture. High-quality testing of array multipliers based on a comprehensive sequential fault model and not affecting their well-optimized structure has not been proposed in the past. In this paper, we present a built-in self-testing (BIST) architecture for signed and unsigned array multipliers with respect to a comprehensive sequential fault model. The BIST architecture does not alter the well-optimized multiplier structure. The proposed test sets can be applied externally but their regular nature makes them very suitable for embedded, self-test application by simple specialized hardware which imposes small overheads. Two different implementations of the BIST architecture are proposed. The first implementation focuses on the test invalidation problem and targets robust sequential fault testing, while the second one focuses on test cost reduction (test time and hardware overhead).",2005,0, 806,Optical Fault Attacks on AES: A Threat in Violet,"Microprocessors are the heart of the devices we rely on every day.
However, their non-volatile memory, which often contains sensitive information, can be manipulated by ultraviolet (UV) irradiation. This paper gives practical results demonstrating that the non-volatile memory can be erased with UV light by investigating the effects of UV-C light with a wavelength of 254 nm on four different depackaged microcontrollers. We demonstrate that an adversary can use this effect to attack an AES software implementation by manipulating the 256-byte S-box table. We show that if only a single byte of the table is changed, 2,500 pairs of correct and faulty encrypted inputs are sufficient to recover the key with a probability of 90%, in case the key schedule is not modified by the attack. Furthermore, we emphasize this by presenting a practical attack on an AES implementation running on an 8-bit microcontroller. Our attack involves only a standard decapsulation procedure and the use of a low-cost UV lamp.",2009,0, 807,Mining Top-K Fault Tolerant Frequent Patterns with Sliding Windows in Data Streams,"Mining frequent patterns over streaming data has become an important research field with broad applications. However, real-world data are often polluted by uncontrolled factors. Fault-tolerant frequent patterns can express more generalized information than exactly matched frequent patterns. Therefore, a novel single-pass algorithm is proposed for efficiently mining top-k fault-tolerant frequent patterns from data streams without a user-specified minimum support threshold. A novel data structure is developed for maintaining the essential information of the itemsets generated so far. Experimental results show that the developed algorithm is an efficient method for mining top-k fault-tolerant frequent patterns from data streams.",2010,0, 808,Integrated monitoring and Control of Cycloconverter Drive System for Fault Diagnosis and Predictive Maintenance,"Mining power systems have a unique industrial environment: low short-circuit levels, high-power non-linear equipment, transient over-voltages, high-altitude locations, dust, vibrations, etc., where faults easily occur, affecting the integrity of the electronic equipment and drives and reducing reliability, with undesirable effects on safety and production. In order to reduce the occurrence of failures, most power installations have protection systems intended for reliable and selective clearance of faults, but their protection signals are usually not integrated into the distributed control system (DCS). With advances in information technology, it becomes more practical to create powerful signal-processing systems implementing a predictive monitoring function. Predictive maintenance capability can reduce the costs associated with downtime, as well as avoid the safety hazards associated with unexpected and catastrophic failures. This paper is mainly focused on the study and implementation of a laboratory-scale prototype and monitoring system for a cycloconverter, with a real-time model for observation of internal variables, that works integrated with the DCS control. The main contribution of this system, over conventional monitoring and diagnosis systems, is the reduction of required measurements and the use of control variables for improving performance and robust system surveillance",2006,0, 809,A novel co-evolutionary approach to automatic software bug fixing,"Many tasks in software engineering are very expensive, which has led to investigations of how to automate them.
In particular, software testing can take up to half of the resources of the development of new software. Although there has been a lot of work on automating the testing phase, fixing a bug after its presence has been discovered is still a duty of the programmers. In this paper we propose an evolutionary approach to automate the task of fixing bugs. This novel evolutionary approach is based on co-evolution, in which programs and test cases co-evolve, influencing each other with the aim of fixing the bugs of the programs. This competitive co-evolution is similar to what happens in nature between predators and prey. The user needs only to provide a buggy program and a formal specification of it. No other information is required. Hence, the approach may work for any implementable software. We show some preliminary experiments in which bugs in an implementation of a sorting algorithm are automatically fixed.",2008,0, 810,Semi-automatic fault localization and behavior verification for physical system simulation models,"Mathematical modeling and simulation of complex physical systems are emerging as key technologies in engineering. Modern approaches to physical system simulation allow users to specify simulation models with the help of equation-based languages. Due to the high-level declarative abstraction of these languages, program errors are extremely hard to find. This paper presents an algorithmic semi-automated debugging framework for equation-based modeling languages. We show how program slicing and dicing performed at the intermediate code level, combined with assertion checking techniques, can automate, to a large extent, the error finding process and behavior verification for physical system simulation models.",2003,0, 811,Fault detection and location of open-circuited switch faults in matrix converter drive systems,"Matrix converter based electric vehicles can be effectively applied to military vehicles due to weight and volume reduction, as well as high-temperature operation with no dc-bus capacitors, which are fragile in a harsh environment. For successful application in military vehicles, satisfactory reliability has to be incorporated into the matrix converter drives. This paper proposes a fault diagnostic technique for detecting and locating open-circuited faults in the switching components of matrix converter drive systems. In this paper, the fault-mode behaviors of the matrix converter are explored in detail under open-circuited switch fault conditions. Based on the investigated knowledge of the converter behaviors, the proposed scheme enables the matrix converter drive to detect and exactly identify power switches in which open-circuited faults have occurred. The proposed fault diagnostic algorithm is based on monitoring nine voltage errors assigned to the nine bi-directional switches of the matrix converter. The voltage error signals are constructed by simple comparison of measured input and output voltages. In case any of the bi-directional switches is associated with an open-circuited switch fault, the dedicated voltage error signals rise over a certain threshold value, which makes it possible to detect the fault occurrence and locate the faulty switch. Since the developed diagnostic method requires no construction of reference output voltages from the pulsewidth modulation (PWM) reference signals, it can be implemented with simple and robust features.
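A minimal sketch of the voltage-error monitoring idea, assuming a 3x3 array of errors (one entry per bidirectional switch connecting input phase i to output phase j) and a fixed threshold; the function and data layout are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def locate_open_switches(v_meas, v_expect, threshold):
    """Flag bidirectional switches whose voltage error exceeds the threshold."""
    err = np.abs(np.asarray(v_meas) - np.asarray(v_expect))   # nine error signals
    return [(i, j) for i in range(3) for j in range(3) if err[i, j] > threshold]

# Toy usage: a persistent error on switch (0, 2) marks it as open-circuited.
v_expect = np.zeros((3, 3))
v_meas = np.zeros((3, 3)); v_meas[0, 2] = 48.0   # hypothetical volts
print(locate_open_switches(v_meas, v_expect, threshold=10.0))   # [(0, 2)]
```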
Verification results are presented to demonstrate the feasibility of the proposed technique.",2009,0, 812,Measuring code edges of ADCs using interpolation and its application to offset and gain error testing,"Measuring the analog threshold voltage between two different digital codes is a common test performed during production testing of ADCs. Due to the noisy nature of analog signals, this test can take a considerable amount of costly test time. This paper presents a fast algorithm for measuring the code edges of ADCs. In this method, a Gaussian distribution is assumed for noise in ADCs and the code edges are determined by interpolating the inverse cumulative distribution of the Gaussian function. A fast testing method for guaranteeing the specifications on offset and gain error is developed as a corollary to the code edge measuring technique. Practical issues in implementing this technique in production testing are discussed. Production test results show excellent repeatability and large savings in production test time",2000,0, 813,Static Security Analysis Based on Input-Related Software Faults,"It is important to focus on security aspects during the development cycle to deliver reliable software. However, locating security faults in complex systems is difficult and there are only a few effective automatic tools available to help developers. In this paper we present an approach to help developers locate vulnerabilities by marking parts of the source code that involve user input. We focus on input-related code, since an attacker can usually take advantage of vulnerabilities by passing malformed input to the application. The main contributions of this work are two metrics to help locate faults during a code review, and algorithms to locate buffer overflow and format string vulnerabilities in C source code. We implemented our approach as a plug-in to the GrammaTech CodeSurfer tool. We tested and validated our technique on open source projects and we found faults in software that includes Pidgin and cyrus-imapd.",2009,0, 814,Fault Simulation and Diagnostics in Generating System for Aircraft,"It is obvious that diagnostics and prognostics of the future condition of various technical systems in aviation have special value [1]. This work presents a theoretical analysis and results of fault simulation for an aircraft generating system (GS), together with an estimation of the influence of faults on system operation. The MATLAB/SIMULINK software package is used as the simulation instrument. The obtained results will be used for a diagnostic system of the GS and for the prognosis of possible faults.",2007,0, 815,Semi-active replication of SNMP objects in agent groups applied for fault management,"It is often useful to examine management information base (MIB) objects of a faulty agent in order to determine why it is faulty. This paper presents a new framework for semi-active replication of SNMP management objects in local area networks. The framework is based on groups of agents that communicate with each other using reliable multicast. A group of agents provides fault-tolerant object functionality. An SNMP service is proposed that allows replicated MIB objects of a faulty agent of a given group to be accessed through fault-free agents of that group. The presented framework allows the dynamic definition of agent groups, and of the management objects to be replicated in each group. A practical fault-tolerant tool for local area network fault management was implemented and is presented.
The system employs SNMP agents that interact with a group communication tool. As an example, we show how the examination of TCP-related objects of faulty agents has been used in the fault diagnosis process. The impact of replication on network performance is evaluated",2001,0, 816,Automatic detection technology of surface defects on plastic products based on machine vision,"It is necessary to detect surface defects on plastic products during the production process and post-treatment. The research and application of automatic detection technology for surface defects on plastic products can greatly liberate the human workforce, improve the level of production automation, and has broad application prospects. The development of key machine-vision technologies such as the illumination system, CCD camera, image enhancement, image segmentation, and image recognition is explained in detail. Their application to detecting surface defects on plastic products such as plastic electronic components, strips, PVC building materials, films, leather, and bottles is also presented briefly. In particular, the paper focuses on the automatic detection of surface defects on injection products, and an automatic detection system is proposed. It is composed of a conveyor belt device, image acquisition and processing software, and a PLC control device.",2010,0, 817,MemScroll: Dynamic Memory Errors Detector in C Programs,"Memory access errors frequently occur in computer programs written in the C language. Such memory errors are one of the principal reasons for failures of C programs. Accordingly, a number of research works have suggested various techniques to detect them automatically. However, existing techniques have one or more of the following problems: inability to detect all memory errors, changing the memory allocation mechanism, incompatibility with libraries, and excessive performance overheads. To cope with these problems, in this paper we suggest a new and automated tool to detect dynamic memory access errors in C programs. The primary goal of our approach is to present a tool with high precision, better performance, and relatively low space overhead.",2007,0, 818,Development of Fault Detection System in Air Handling Unit,"Monitoring systems currently used to operate air handling units (AHUs) optimally lack a function for properly detecting faults such as plant malfunctions or performance degradation, so they are unable to manage faults rapidly and operate optimally. In this paper, we have developed a classified rule-based fault detection system which can be used inclusively in the AHU system of a building using the sensors that make up the AHU system, at low cost compared to model-based fault detection systems, which can be used only in a special building or system. In order to test this algorithm, it was applied to an AHU system installed inside an environment chamber (EC), its practical effect was verified, and its applicability to the related field in the future was confirmed.",2008,0, 819,Fault Tree Analysis of an Aircraft Electric Power Supply System to Electrical Actuators,"More electric aircraft technology enables the power supply of flight-critical equipment such as the electronic flight control system. In the process of designing an electric power supply system in an aircraft, system safety and reliability are key requirements.
Other important factors are low weight and low life cycle costs (LCC). Fault tree analysis is used to analyze a possible top event such as a malfunction of a primary rudder function. In a complicated design, a malfunction can be caused by a number of failures in different equipment or systems. The objective of the presented work is to demonstrate the feasibility of evaluating risks and the impact of a failure in a specific component, regardless of where in the system the component is located. The system is assumed to be part of a UAV for flights in populated areas",2006,0, 820,A New Algorithm for the Detection of Faults in Permanent Magnet Machines,"Most condition monitoring techniques are based on steady-state analysis. However, machines typically operate under transient conditions, thus prompting the development of non-stationary fault detection techniques. In this paper, the steady-state analysis technique (motor current signature analysis) is extended to a new non-stationary fault detection technique to detect several different faults in a low-voltage, high-current PMSM. It will be shown that the motor current signature analysis (MCSA) technique cannot be applied directly to machines operating under transient conditions. The new technique, which is based on an adaptive algorithm capable of extracting non-stationary sinusoids, is able to extract fault information during transient operation of the machine",2006,0, 821,Mesh segmentation schemes for error resilient coding of 3-D graphic models,"Most existing coding techniques for three-dimensional (3-D) graphic models focus on coding efficiency. Due to the irregular structure of the 3-D mesh and the use of variable-length entropy codes, channel errors often propagate in the coded bitstream and severely distort the decoded model. By segmenting a 3-D graphic model (or its connected components) into small pieces, the impact of channel errors is confined to the directly corrupted pieces rather than the whole mesh. The mesh segmentation schemes should be compatible with the underlying encoding techniques to achieve low computational and coding overhead. In this research, we examine four mesh segmentation schemes, i.e., multiseed traversal, threshold traversal, morphing-based volume splitting, and content-based segmentation, and apply them in the context of error-resilient mesh coding based on the constructive traversal coding technique. The advantages and shortcomings of each segmentation method are discussed. These schemes segment a mesh into pieces according to a target piece size that is determined according to the channel error rate.",2005,0, 822,Probabilistic Event-Driven Heuristic Fault Localization using Incremental Bayesian Suspected Degree,"Most fault localization techniques are based on time windows, and the size of the time window greatly impacts the accuracy of the algorithms. This paper takes a weighted bipartite graph as the fault propagation model and proposes a heuristic fault localization algorithm based on incremental Bayesian suspected degree (IBSD) to eliminate the above shortcomings. IBSD sequentially analyzes the incoming symptoms in an event-driven way, incrementally computes the Bayesian suspected degree and determines the most probable fault set for the currently observed symptoms. Simulation results show that the algorithm has a high fault detection rate as well as a low false-positive rate and performs well even in the presence of unobserved alarms.
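As one plausible Bayesian reading of the incremental update (not the authors' exact formulas), a minimal sketch that renormalizes per-fault beliefs after each symptom event, using the bipartite-graph edge weights as likelihoods:

```python
def update_suspected_degree(prior, likelihood):
    """One event-driven update: posterior(f) is proportional to likelihood(f) * prior(f).

    prior: current belief for each candidate fault.
    likelihood: P(observed symptom | fault), the weighted-bipartite-graph edge
    weight (0 when the fault cannot cause the symptom).
    """
    post = {f: prior[f] * likelihood.get(f, 0.0) for f in prior}
    total = sum(post.values())
    return {f: p / total for f, p in post.items()} if total > 0 else prior

# Toy usage: symptom s1 is twice as likely under fault B as under fault A.
belief = {"A": 0.5, "B": 0.5}
belief = update_suspected_degree(belief, {"A": 0.3, "B": 0.6})
print(belief)   # fault B becomes the most suspected
```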
The algorithm, which has polynomial computational complexity, can be applied to large-scale communication networks.",2008,0, 823,PBG devices based on periodic structures with defects,"Most high-density integrated optics components contain optical waveguides, scatterers, resonators, etc. Photonic crystals, as periodic structures with some defects, have attracted much interest in this context as well. In every such device, the optical beam propagates crossing the boundaries of these parts several times. For example, realistic photonic crystals are illuminated by a waveguide mode and, therefore, modeling the coupling of a waveguide into the crystal taking into account the interaction with the walls will be of importance. In this paper we make an attempt to consider the geometry of the crystal and the shape of the boundary as well, especially those parts of the surface where the optical beams are traveling. In other words, this is an attempt to simulate a more realistic structure in order to optimize the properties of the PBG circuit using appropriate numerical experiments.",2002,0, 824,Incomplete test vectors fail to detect obscure VoIP software errors,"Most ITU-T (International Telecommunication Union Standardization Sector) standards provide precise specifications for the proper operating behaviors of the systems they specify. However, such specifications are inappropriate for some standards, such as the standards for audio coders used in VoIP. For such standards, ITU-T commonly supplies a set of input test data with corresponding correct output results. In this paper, we focus on the G.729 audio-coder algorithm. We use a version of G.729 code that can produce the bit-exact desired output for the given set of input test data to show that there can still be errors in the code even though the output matches the output in the ITU-T specification. We demonstrate that the given test vectors are not comprehensive enough to detect some of the obscure errors that can exist in the software. Therefore, we cannot rely solely on the given test vectors to test and validate our code.",2005,0, 825,Raising network fault management intelligence,"Most large network management centers have relatively low-skilled personnel as their first-level operations staff. Many organizations attempt to cope with this situation by restricting the set of problems these people have to deal with to those which are well understood and documented. Several software packages exist which can correlate and filter incoming events from the network and present a select subset to the operator. Unfortunately, programming these fault management applications requires considerable expertise and effort. Often, once the initial development is done, the implementation remains static, while the network itself is dynamic. This paper proposes a methodology for documenting known faults and responses, programming fault correlation engines, continuously examining real behavior, and feeding the result back into the programming process. This results in a continuous improvement in fault management intelligence, with a corresponding improvement in network availability and thus the value of the network to the organization",2000,0, 826,Client-transparent fault-tolerant Web service,"Most of the existing fault tolerance schemes for Web servers detect server failure and route future client requests to backup servers. These techniques typically do not provide transparent handling of requests whose processing was in progress when the failure occurred.
Thus, the system may fail to provide the user with confirmation for a requested transaction or clear indication that the transaction was not performed. We describe a client-transparent fault tolerance scheme for Web servers that ensures correct handling of requests in progress at the time of server failure. The scheme is based on a standby backup server and simple proxies. The error handling mechanisms of TCP are used to multicast requests to the primary and backup as well as to reliably deliver replies from a server that may fail while sending the reply. Our scheme does not involve OS kernel changes or use of user-level TCP implementations and requires minimal changes to the Web server software",2001,0, 827,MV generator low-resistance grounding and stator ground fault damage,"Most of the in-plant medium-voltage generators are grounded through resistors ranging from 200 to 400 A. In some unusual situations, the resistors may even be as high as 1200 A. Low-resistance grounding is a preferred choice for a medium-voltage power distribution system. However, extensive generator stator ground fault damage has been reported due to the prolonged generator neutral ground current, which continues to flow even after the main and field breakers open. This ground fault current can continue to flow for as long as 5 s, depending on the generator open-circuit time constant T'do. As a result, a higher generator neutral current leads to greater stator ground fault damage. An IEEE Working Group has recently completed its study and made recommendations for medium-voltage generator grounding in a multiple-source industrial environment. Based on considerations of transient overvoltage, ground fault damage, and ground fault protection, the working group suggests a few variations of grounding systems; these are basically low-resistance grounding systems during normal operating conditions, with the generator switched from the normal low-resistance grounded system to a high-resistance grounded system when a ground fault occurs in the generator stator. The purposes of this paper are: 1) to examine the generator transient over-voltages and currents under low-resistance ground fault conditions, and to evaluate the corresponding stator ground fault damage, and 2) to establish an acceptable maximum system ground fault level. For comparison purposes, three versions of low-resistance grounding systems have been considered: a low-resistance grounding system with a neutral breaker at the generator (hereinafter called a ""neutral breaker system""); a low-resistance grounding system with the generator neutral low resistor switched to a high resistor after a stator ground fault (hereinafter called a ""hybrid system""); and a low-resistance grounding system similar to the current practice (hereinafter called a ""traditional system""). The simulation study is conducted with the aid of the Electro Magnetic Transient Program. An experimental analog generator model is also used to verify the simulation results.",2004,0, 828,A Study of Software Fault Detection and Correction Process Models,"Most of the models for software reliability analysis are based on reliability growth models which deal with the fault detection process only. In this paper, some useful approaches to the modeling of both software fault detection and fault correction processes are discussed. To provide accurate predictions for correct decision-making, the parameter estimation method is critical.
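For context, the standard non-homogeneous Poisson process (NHPP) likelihood that such estimation typically builds on, in LaTeX; this is background only, not the paper's new formula for the combined detection and correction process:

```latex
% Fault-detection times t_1 < \dots < t_n observed in (0, T]; mean value
% function m(t;\theta) and intensity \lambda(t;\theta) = m'(t;\theta).
L(\theta) = \left[ \prod_{i=1}^{n} \lambda(t_i;\theta) \right]
            \exp\!\left\{ -m(T;\theta) \right\}
```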
Specifically, a new explicit formula for the likelihood function of the combined fault detection and correction process is derived and the maximum likelihood estimates are obtained under various time delay assumptions. As an illustration, an actual dataset from a software development project is analyzed. In addition, cost models are discussed in the context of this modeling framework on fault detection and correction. The corresponding effects on optimal release time are analyzed comprehensively. Also, potential benefits of this model on other aspects of software testing management are discussed",2006,0, 829,A Model of Front-End Pre-Change Corrective Testing,"Most of the testing process models deal with testing within development. To our knowledge, there are no process models exclusively dedicated to the testing of corrective changes. For this reason, we have outlined a process model covering the testing activities at the front-end support level and evaluated them within 15 software organizations.",2006,0, 830,Analysis of the effect of Java software faults on security vulnerabilities and their detection by commercial web vulnerability scanner tool,"Most software systems developed nowadays are highly complex and subject to strict time constraints, and are often deployed with critical software faults. In many cases, software faults are responsible for security vulnerabilities which are exploited by hackers. Automatic web vulnerability scanners can help to locate these vulnerabilities. Trustworthiness of the results that these tools provide is important; hence, relevance of the results must be assessed. We analyze the effect on security vulnerabilities of Java software faults injected into the source code of Web applications. We assess how these faults affect the behavior of the vulnerability scanner tool, to validate the results of its application. Software fault injection techniques and attack tree models were used to support the experiments. The injected software faults influenced the application behavior and, consequently, the behavior of the scanner tool. A high percentage of uncovered vulnerabilities as well as false positives points out the limitations of the tool.",2010,0, 831,A Framework for Evaluating Automatic Classification of Underlying Causes of Disturbances and Its Application to Short-Circuit Faults,"Most works in power systems event classification concern classifying an event according to the morphology of the corresponding waveform. An important and even more difficult problem is the classification of the event underlying cause. However, the lack of labeled data is more problematic in this second scenario. This paper proposes a framework based on frame-based sequence classification (FBSC), the Alternative Transient Program (ATP), and a public dataset to advance research in this area. As a proof of concept, a thorough evaluation of automatic classification of short circuits in transmission lines is discussed. Simulations with different preprocessing (e.g., wavelets) and learning algorithms (e.g., support vector machines) are presented. The results can be reproduced at other sites and elucidate several tradeoffs when designing the front end and pattern recognition stages of a sequence classifier.
For example, when considering the whole event in an offline scenario, the combination of the raw front end and a decision tree is competitive with wavelets and a neural network.",2010,0, 832,K-space and image space combination for motion artifact correction in multicoil multishot diffusion weighted imaging,"Motion during diffusion encodings leads to phase errors in different shots of a multishot acquisition. Phase differences in k-space data among shots result in phase cancellation artifacts in the reconstructed image. Due to aliasing of the phase from under-sampled regions of the shot, correction of the phase error using direct low-resolution phase subtraction is incomplete. We introduce a new k-space and image-space combination (KICT) method for motion artifacts cancellation that avoids incomplete phase error correction. Further, the method preserves the phase of the object, which is important for parallel imaging applications.",2008,0, 833,Lung motion correction on respiratory gated 3-D PET/CT images,"Motion is a source of degradation in positron emission tomography (PET)/computed tomography (CT) images. As the PET images represent the sum of information over the whole respiratory cycle, attenuation correction with the help of CT images may lead to false staging or quantification of the radioactive uptake especially in the case of small tumors. We present an approach avoiding these difficulties by respiratory-gating the PET data and correcting it for motion with optical flow algorithms. The resulting dataset contains all the PET information and minimal motion and, thus, allows more accurate attenuation correction and quantification.",2006,0, 834,A New Upper Bound on the Block Error Probability After Decoding Over the Erasure Channel,"Motivated by cryptographic applications, we derive a new upper bound on the block error probability after decoding over the erasure channel. The bound works for all linear codes and is in terms of the generalized Hamming weights. It turns out to be quite useful for Reed-Muller codes for which all the generalized Hamming weights are known whereas the full weight distribution is only partially known. For these codes, the error probability is related to the cryptographic notion of algebraic immunity. We use our bound to show that the algebraic immunity of a random balanced m-variable Boolean function is of order m/2(1-o(1)) with probability tending to 1 as m goes to infinity",2006,0, 835,No time for bugs,Motor-sport teams expect a fast turnaround when they tune the software that goes into the electronic controllers that manage the highly tuned engines used in their cars. Subtle changes to the way the embedded computer controls the engine can make the difference between winning and being stuck in the pits. And it means that the developers at one specialist company can have just two hours to make critical changes to their code.,2004,0, 836,A new approach for mobile agent fault-tolerance and reliability using distributed systems,"Mobile agents have attracted considerable interest in recent years. In the context of mobile agents, fault tolerance is crucial to enable the integration of mobile agent technology into today's business applications. To further develop mobile agent technology, reliability mechanisms such as fault tolerance and transaction support are required. 
For this purpose, we first identify two basic requirements for fault-tolerant mobile agent execution: (1) non-blocking (i.e., a single failure does not prevent progress of the mobile agent execution); and (2) exactly-once (i.e., multiple executions of the agent are prevented). To achieve fault tolerance for the agent system, especially for the agent transfer to a new host, we use distributed transaction processing. This paper proposes a novel approach to fault-tolerant mobile agent execution, which is based on modeling agent execution as a sequence of agreement problems. Each agreement problem is one instance of the well-understood consensus problem. Our solution does not require a perfect failure detection mechanism, while preventing blocking and ensuring that the agent is executed exactly once",2005,0, 837,Model Checking For Fault Explanation,"Model checking is very effective at finding out even subtle faults in system designs. A counterexample is usually generated by model checking algorithms when a system does not satisfy the given specification. However, a counterexample is not always helpful in explaining and isolating faults in a system when the counterexample is very long, which is usually the case for large-scale systems. As such, there is a pressing need to develop fault explanation and isolation techniques. In this paper, we present a new approach for fault explanation and isolation in discrete event systems with LTL (linear-time temporal logic) specifications. The notion of a fault seed is introduced to characterize the cause of a fault. The identification of the fault seed is further reduced to a model checking problem. An algorithm is obtained for fault seed identification. An example is provided to demonstrate the effectiveness of the approach developed",2006,0, 838,Algorithmic Cholesky factorization fault recovery,"Modeling and analysis of large-scale scientific systems often use linear least squares regression, frequently employing Cholesky factorization to solve the resulting set of linear equations. With large matrices, this often will be performed in high-performance clusters containing many processors. Assuming a constant failure rate per processor, the probability of a failure occurring during the execution increases linearly with additional processors. Fault-tolerant methods attempt to reduce the expected execution time by allowing recovery from failure. This paper presents an analysis and implementation of a fault-tolerant Cholesky factorization algorithm that does not require checkpointing for recovery from fail-stop failures. Rather, this algorithm uses redundant data added in an additional set of processors. This differs from previous works with algorithmic methods as it addresses fail-stop failures rather than fail-continue cases. The implementation and experimentation using ScaLAPACK demonstrates that this method has decreasing overhead in relation to overall runtime as the matrix size increases, and thus shows promise to reduce the expected runtime for Cholesky factorizations on very large matrices.",2010,0, 839,Modeling of cable fault system,"Modeling is the essential part of implementing the prediction and location of three-phase cable faults. To predict and locate cable faults, a model of the three-phase cable fault system is constructed based on a great deal of measured validation data by choosing a BP neural network, which has nonlinear characteristics, and using the improved BP algorithm, the Levenberg-Marquardt optimization method.
It is shown by simulation using MATLAB software that the parameters of the model converge rapidly, the simulated output of the neural network model and the measured output of the cable fault system are approximately equal, and the mean relative prediction error of the fault distance is smaller than 0.3%, so the model quality is reliable.",2004,0, 840,Fault tolerant mechatronics [automotive applications],"Modern cars exhibit a variety of new functionalities concerning engine management, safety, vehicle dynamics control as well as comfort and convenience. Safety features like airbags, antilock braking systems (ABS), anti-skid systems, belt tensioners or the electronic stability program (ESP) are standard fittings of present-day car models and in some cases even stipulated by legislation. These safety systems have led to an increased avoidance of accidents by actively affecting vehicle dynamics and to a mitigation of the consequences of accidents on the driver and passengers by innovative restraint systems. As a rule these systems are mechatronic systems. Mechatronic systems today derive their functionality from an interlocked interaction of mechanics, electronics and information technology. Their deployment in the safety-relevant environment requires fault tolerance. Fault tolerant mechatronics is based on redundancy, which must be supervised and tested permanently. Reliability of sensor and actuator technology is essential for future motor vehicle systems. Operability and reliability are to be achieved by suitable on-board and on-line test methods. Exemplarily this is shown for future X-by-wire applications.",2004,0, 841,Model-based fault diagnosis and reconfiguration of robot drives,"Modern drives of mobile robots are complex machines. Because of this complexity, as well as wear and aging of components, faults occur in such systems quite frequently at runtime. In order to use such drives in truly autonomous robots it is desirable that the robot is able to automatically react to such faults. Therefore, the robot needs reasoning and reconfiguration capabilities in order to be able to detect, localize and repair such faults on-line. In this paper we propose a model-based diagnosis and reconfiguration framework which allows an autonomous robot to detect and compensate faults in its drive. Moreover, we present an implementation for a real robot platform. Finally, we report experimental results which show that the proposed framework is able to correctly cope with injected faults in the drive hardware, like broken motors.",2007,0, 842,Algorithm-based fault tolerance for many-core architectures,"Modern many-core architectures with hundreds of cores provide a high computational potential. This makes them particularly interesting for scientific high-performance computing and simulation technology. Like all nanoscale semiconductor devices, many-core processors are prone to reliability-harming factors like variations and soft errors. One way to improve the reliability of such systems is software-based hardware fault tolerance. Here, the software is able to detect and correct errors introduced by the hardware. In this work, we propose a software-based approach to improve the reliability of matrix operations on many-core processors.
These operations are key components in many scientific applications.",2010,0, 843,A morphological filter to distinguish a fault from capacitor switching,"Modern numerical relays have substantially reduced false operation of overcurrent relays used for capacitor protection. However, faster and more reliable operation can be useful. This paper proposes a morphological filter for fast and clear distinction between fault currents and switching currents that can benefit all types of overcurrent relays used for protection of capacitor banks. The filter performance is tested using waveforms generated from PSCAD/EMTDC simulation of a standardized test system.",2010,0, 844,Modern Solutions to Stabilize Numerical Differential Relays for Current Transformer Saturation during External Faults,"Modern numerical relays use improved measuring and stabilizing methods and can therefore tolerate current transformer (CT) saturation to a large extent. Dimensioning of CTs for a short time to saturation, however, requires precise consideration of the transient CT behaviour in the first milliseconds after fault inception. Therefore, traditional calculation and dimensioning guidelines have to be reviewed and completed accordingly. Initially the paper discusses CT dimensioning and specifications that need to be considered for modern numerical relays. Typical CT dimensioning rules are given, based on theoretical calculation and test series. Then, the paper will discuss modern algorithms for fast detection of CT saturation and methods of adaptive measurement and stabilization that are available today in some modern numerical relays",2006,0, 845,Logic code transformation and minimization algorithm for fault diagnostic systems,"Modern paper winders have hundreds of actuators and thousands of lines of programmable logic controller code. The complex structure of the winder and its control software also requires new kinds of diagnostic methods. With modern fault diagnostic systems, the operator can quickly and accurately identify the cause of a fault. Building up a fault diagnostic system usually requires a lot of manual work. This paper introduces a new multilevel product-of-sums net model and a new level-to-level minimization algorithm, which enable programmable logic controller programs to be automatically transformed, minimized and presented in fault diagnostic systems.",2004,0, 846,Real-Time Tasks Scheduling with Value Control to Predict Timing Faults During Overload,"Modern real-time applications are very dynamic and cannot cope with the use of worst-case execution time to avoid overload situations. Therefore scheduling algorithms that are able to prevent timing faults during overload are required. In this context, the value parameter has become useful to add generality and flexibility to such systems. In this paper, we present the scheduling algorithm called DMB (dynamic misses based), which is capable of dynamically changing task values in order to adjust their importance according to their timing fault rates. The main goal of DMB is to allow the prediction of timing faults during overloads and thereby support a dynamic tuning of the task fault rate. It is used to enhance the features of the previously defined TAFT (time-aware fault-tolerant) scheduler.
The obtained results show that DMB in conjunction with TAFT achieved the most promising results during overloads, allowing task degradation to be controlled in a graceful and determined way",2007,0, 847,Videoendoscopic distortion correction and its application to virtual guidance of endoscopy,"Modern video-based endoscopes offer physicians a wide-angle field of view (FOV) for minimally invasive procedures. Unfortunately, inherent barrel distortion prevents accurate perception of range. This makes measurement and distance judgment difficult and causes difficulties in emerging applications, such as virtual guidance of endoscopic procedures. Such distortion also arises in other wide-FOV camera circumstances. This paper presents a distortion correction technique that can automatically calculate correction parameters, without precise knowledge of horizontal and vertical orientation. The method is applicable to any camera-distortion correction situation. Based on a least-squares estimation, the authors' proposed algorithm considers line fits in both FOV directions and gives a globally consistent set of expansion coefficients and an optimal image center. The method is insensitive to the initial orientation of the endoscope and provides more exhaustive FOV correction than previously proposed algorithms. The distortion-correction procedure is demonstrated for endoscopic video images of a calibration test pattern, a rubber bronchial training device, and real human circumstances. The distortion correction is also shown as a necessary component of an image-guided virtual-endoscopy system that matches endoscope images to corresponding rendered three-dimensional computed tomography views.",2001,0, 848,Evaluating the Fault Tolerance of Stateful TMR,"Module redundancy is often used in the construction of reliable systems. Triple Module Redundancy (TMR) is a method for improving reliability through module redundancy, although it does not give the correct results when two out of three modules fail. We, therefore, proposed a new voting architecture known as Stateful TMR, which uses both the results of TMR and the history of states to select the most reliable module. Through simulations, we evaluate the reliability of a module using both TMR and Stateful TMR, and show that for both transient and permanent failures, Stateful TMR achieves higher reliability than TMR.",2010,0, 849,Time series analysis for bug number prediction,"Monitoring and predicting the increasing or decreasing trend of the bug number in a software system is of great importance to both software project managers and software end-users. For software managers, accurate prediction of the bug number of a software system will assist them in making timely decisions, such as effort investment and resource allocation. For software end-users, knowing the possible bug number of their systems will enable them to take timely actions in coping with loss caused by possible system failures. To accomplish this goal, in this paper, we model the bug number data per month as time series and use time series analysis algorithms such as ARIMA and X12 enhanced ARIMA to predict bug number, in comparison with polynomial regression as the baseline. X12 is the widely used seasonal adjustment algorithm proposed by the U.S. Census Bureau. The case study based on Debian bug data from March 1996 to August 2009 shows that X12 enhanced ARIMA can achieve the best performance in bug number prediction.
Moreover, both ARIMA and X12 enhanced ARIMA outperform the polynomial regression baseline.",2010,0, 850,Impact analysis of faults and attacks in large-scale networks,"Monitoring and quantifying component behavior is key to making networks reliable and robust. The agent-based architecture presented here continuously monitors network vulnerability metrics, providing new ways to measure the impact of faults and attacks.",2003,0, 851,A Novel Watermarking Extraction Based on Error Correcting Code and Evidence Theory,"Nowadays digital watermarking has played an important role in the copyright protection of multimedia. A new robust watermarking algorithm is proposed on the basis of error correcting code (ECC) and data fusion by evidence theory and a distortion factor. The algorithm does not need any mask or the original image during the extraction and makes the watermark more robust; furthermore, the data fusion composed by a new watermark confidence relatively reduces the volume of the embedded watermark. Finally, experiments have proved the algorithm is practical.",2008,0, 852,Magnets faults characterization for Permanent Magnet Synchronous Motors,"Nowadays Permanent Magnet Synchronous Motors (PMSMs) are an attractive alternative to induction machines for a variety of applications due to their higher efficiency, power density and wide constant-power speed range. In this context the condition monitoring of magnet status is receiving more and more attention since it is critical for industrial applications. This paper presents a characterization of rotor faults for such a motor due to local and uniform demagnetization by means of two-dimensional (2-D) Finite Element Analysis (FEA) and proposes a new non-invasive method for their detection by means of a Fourier transform of the back-EMF. The proposed approach is then validated for three permanent magnet synchronous motors with different winding configurations.",2009,0, 853,Multi-domain fault management architecture based on a shared ontology-based knowledge plane,"Nowadays the number of services whose functionalities need to adapt across heterogeneous networks with different technological and administrative domains has substantially increased. Each domain involves different management procedures; therefore the comprehensive management of multi-domain services presents serious problems. This paper focuses on fault management aspects and presents a novel management architecture for multi-domain environments. The architecture is based on a distributed set of agents, using semantic techniques. A Shared Knowledge Plane has been implemented to ensure communication between agents.",2010,0, 854,Investigating fault tolerant computing systems reliability,"Nowadays, computers and networks represent the heart of a great part of modern technologies. Computing systems are widely used in many application areas, and they are desired to achieve various complex and safety-critical missions. As a consequence, greater attention is lavished on performance and dependability evaluation of computing systems. This leads to the specification of precise techniques and models that consider and evaluate aspects previously (consciously or unconsciously) approximated or ignored altogether. On the other hand, the increasing importance assumed by such systems is translated in terms of tighter and tighter constraints, requirements and/or policies (QoS, fault tolerance, maintenance, redundancy, etc.) according to the systems' criticality.
The evaluation must therefore take such dynamic behaviors into careful account, carefully identifying and quantifying dependencies among devices. In this paper we face the problem of identifying and evaluating the most common dynamic behaviors and dependencies affecting fault tolerant computing systems. We propose some models to represent such aspects in terms of reliability/availability, based on dynamic reliability block diagrams (DRBD), a new formalism we developed, derived from RBD. In this way we want to provide guidelines for adequately evaluating fault tolerant computing system reliability/availability.",2008,0, 855,Research on code pattern automata-based code error pattern automatic detection technique,"Nowadays, many defects, e.g., obscure error generation scenarios and a lack of formalization (which is the basis for automatic error detection), exist in the field of code error research. Furthermore, the automation of error detection will greatly affect the quality and efficiency of software testing. Therefore, deeper research on code errors needs to be done. First, this paper presents the definition of the code error pattern based on the definition of a pattern. Secondly, it investigates the formal description of code error patterns. Then, it studies the automatic error pattern detection technique based on non-deterministic finite state automata and treats the error pattern matching technique as the key problem. Finally, some case studies are given. The preliminary results show the rationality of the code error pattern definition and the effectiveness of the error pattern formal description and error pattern matching technique.",2009,0, 856,Improving Robustness of Real-Time Operating Systems (RTOS) Services Related to Soft-Errors,"Nowadays, more critical applications that have stringent real-time constraints are placed and run in an environment with a real-time operating system (RTOS). The provided services of RTOSs are subject to faults that affect both the functionality and timing of tasks running on the RTOS. In this paper, we try to evaluate and analyze the robustness of services with respect to soft errors in two proposed RTOS architectures (SW-RTOS and HW/SW-RTOS). According to the experimental results, we finally propose an architecture which provides more robust services in terms of soft errors. Real-Time Operating System (RTOS) users desire predictable response time at an affordable cost; due to this demand, Hardware/Software Real-Time Operating Systems (HW/SW-RTOS) appeared. This paper analyzes the impact of soft errors in real-time systems running applications under a purely software RTOS versus a HW/SW-RTOS. The proposed model is used to evaluate the robustness of services like scheduling, synchronization, time management, memory management and inter-process communication in the software-based RTOS and the HW/SW-RTOS. Experimental results show the HW/SW-RTOS provides more robust services in terms of soft errors than the purely software-based RTOS.",2007,0, 857,Analysis of the influence of intermittent faults in a microcontroller,"Nowadays, new submicron technologies have allowed processor performance to increase while decreasing size. However, as a side effect, reliability has been negatively affected. Although mainly permanent and transient faults have been studied, intermittent faults are expected to be a big challenge in modern VLSI circuits. Usually, intermittent faults have been assumed to be the prelude to permanent faults.
Currently, intermittent faults due to process variations and residues have grown, making it necessary to study their effects. The objective of this work has been to analyse the impact of intermittent faults, taking advantage of the power of the simulation-based fault injection methodology. Using faults observed in real computer systems as background, we have injected intermittent faults in the VHDL model of a microcontroller. The controllability and flexibility of the VHDL-based fault injection technique has allowed us to do a detailed analysis of the influence of some parameters of intermittent faults. We have also compared the results obtained with the impact of transient and permanent faults.",2008,0, 858,Comparison of Fourier & wavelet transform methods for transmission line fault classification,"Nowadays, power supply has become a business asset. The quality and reliability of the power system need to be maintained in order to obtain optimum performance. Therefore, it is extremely important that transmission line faults from various sources be identified accurately and reliably and be corrected as soon as possible. In this paper, a new technique is discussed by which avoiding noise in fault detection in high-voltage transmission lines is achieved. Later, a comparative study of the performance of Fourier transform and wavelet transform based methods combined with the protective relaying pattern classifier algorithm Neural Network for classification of faults is presented. A new classification method is proposed for decreasing the training time and dimensions of the NN. The proposed algorithms are based on Fourier transform analysis of the fundamental frequency of current signals in the event of a short circuit. Similar analysis is performed on transient current signals using the multi-resolution Haar wavelet transform, and comparative characteristics of the two methods are discussed.",2010,0, 859,Considering Faults in Service-Oriented Architecture: A Graph Transformation-Based Approach,"Nowadays, the use of Service-Oriented Architectures (SOA) is spreading as a flexible architecture for developing dynamic enterprise systems. Due to the increasing need for high-quality services in SOA, it is desirable to consider different Quality of Service (QoS) aspects in this architecture, such as security, availability, reliability, fault tolerance, etc. In this paper we investigate fault tolerance mechanisms for modeling services in service-oriented architecture. We propose a metamodel (formalized by a type graph) and some graph rules for monitoring services and their communications to detect faults. By defining additional graph rules as reconfiguration mechanisms, service requesters can be dynamically switched to a new service (with similar descriptions). To validate our proposal, we use our previous approach to model checking graph transformation using the Bogor model checker.",2009,0, 860,Development of a fault locating system using object-oriented programming,"Object-oriented programming has received wide acceptance in power system applications. The main advantages achieved by using object-oriented techniques include easy maintenance, enhanced expandability and inherent implementation flexibility. This paper presents recent software developments of a fault location system implemented using object-oriented approaches and the C++ programming language. The proposed new algorithms, implementation techniques, and evaluation studies are addressed.
The developed system aims at pinpointing faults on transmission systems by utilizing recorded data coming from digital fault recorders (DFRs) sparsely located at various substations. The concept of waveform matching is implemented by taking advantage of the power system model. The mathematical formulation of the problem is presented. A unique genetic algorithm based search engine for pinpointing the most probable fault location is proposed. The suitability of applying object-oriented programming for fault location software developments is demonstrated",2001,0, 861,Data diverse fault tolerant architecture for component based systems,"Of late, component-based software design has become a major focus in software engineering research and computing practice. These software components are used in a wide range of applications, some of which may have mission-critical requirements. In order to achieve the required level of reliability, these component-based designs have to incorporate special measures to cope with software faults. This paper presents a fault tolerant component-based data-driven architecture that is based on the C2 architectural framework and implements data diverse fault tolerance strategies. The proposed design makes a trade-off between platform flexibility, reliability and efficiency at run time and exhibits its ability to tolerate faults in a cost-effective manner. Application of the proposed design is exhibited with a case study.",2009,0, 862,A fault-tolerant solid state mass memory for highly reliable instrumentation,"Often, in space missions, a large amount of data from on-board instrumentation must be stored in highly reliable mass memories. In this paper the implementation of a prototype of a Solid State Mass Memory (SSMM) for applications demanding high reliability is presented. A description of the SSMM architecture is given together with an in-depth description of the prototype. The SSMM has been implemented by using a fast prototyping methodology. By using this technique, a flexible re-programmable test bed useful for the testing of both the conventional and faulty (using the fault-injection approach) functionality of the system has been obtained.",2004,0, 863,A Hybrid Method for Fault Detection and Modelling using Modal Intervals and ANFIS,"Oftentimes the practical performance of analytical redundancy for fault detection and accommodation is decreased by the uncertainties associated with the model of the system and with the measurements. In this paper these uncertainties are taken into account through the definition of intervals for both the parameters of the model and the measurements. In the proposed method, a fault alarm is fired when an inconsistency between the behaviours of the system and the model emerges. Afterwards, the behaviour of the faulty system is modelled using an Adaptive Neuro Fuzzy Inference System (ANFIS). The identified model can be used for the fault accommodation task. The proposed method is applied to a simulated chemical plant. The obtained results highlight the capabilities for fault detection and accommodation of this method.",2007,0, 864,"New Results on Periodic Sequences With Large k-Error Linear Complexity","Niederreiter showed that there is a class of periodic sequences which possess large linear complexity and large k-error linear complexity simultaneously.
This result disproved the conjecture by Ding that there exists a trade-off between the linear complexity and the k-error linear complexity of a periodic sequence. By considering the orders of the divisors of x^N - 1 over F_q, we obtain three main results which hold for much larger k than those of Niederreiter: a) sequences with maximal linear complexity and almost maximal k-error linear complexity with general periods; b) sequences with maximal linear complexity and maximal k-error linear complexity with special periods; c) sequences with maximal linear complexity and almost maximal k-error linear complexity in the asymptotic case with composite periods. Besides, we also construct some periodic sequences with low correlation and large k-error linear complexity.",2009,0, 865,A Preliminary Study on Two Dimensional Image Reconstruction of Log Cross-Section Defect Based on Stress Wave,"Nondestructive testing for wood defect detection has been playing an important role in the wood industry. 2D image reconstruction has contributed greatly to log cross-section defect testing in order to promote the utilization rate of wood resources. First, this paper studied the stress wave computerized tomography technique, and introduced the straight-line tracing technique and the Algebraic Reconstruction Technique (ART) algorithm. Then, a medium model was constructed for numerical simulation analysis. The reconstruction of the medium model was conducted using the straight-line tracing ART algorithm, and the impact of the number of iterations on image reconstruction accuracy was analyzed. Finally, this paper validated the feasibility of two-dimensional image reconstruction of log internal defects using this method by physical model testing. Empirical and medium model results showed that the convergence of the straight-line tracing ART algorithm was fast and the reconstructed image was good. The two-dimensional image reconstruction of log internal defects could basically be realized using the straight-line tracing algebraic reconstruction method, and the feasibility and practicality of the theory and technique proposed in this paper were validated by practical testing.",2010,0, 866,A New Faulted Line Identification Method Based on Incremental Impedances,"Non-effective neutral grounding is a common practice in the power distribution systems of some European and Asian countries. This configuration is also used in some industrial plants in North America. When single-phase-to-ground faults occur, the system can continue to operate without tripping immediately. This significantly improves the service reliability of the system. However, the faulted line must be identified within a required time frame (typically 30 minutes to 2 hours). It turns out that identifying the faulted line is a significant challenge. This challenge has attracted a lot of research work since the 1980s. Although many methods have been proposed or developed, there is still no satisfactory solution to the problem. This paper proposes a new method to solve the problem based on system identification theory. The idea is to use the disturbance produced by the fault to determine the impedance of the unfaulted side of the system. The sign of the impedance can reveal which line is experiencing a single-phase-to-ground fault. In other words, the proposed method is an improved version of the directional relay concept for application to faulted line identification. This paper presents detailed research results on the proposed method.
The method is evaluated using computer simulations and lab experiments. Several improvements of the method are made based on the results. The results show that the proposed method can work effectively under various system situations and it is a viable competitor to the existing methods.",2007,0, 867,Notice of Retraction
An intelligent method for the control of magnitude of parabolic-like transmission error of a pair of gears,"Notice of Retraction

After careful and considered review of the content of this paper by a duly constituted expert committee, this paper has been found to be in violation of IEEE's Publication Principles.

We hereby retract the content of this paper. Reasonable effort should be made to remove all past references to this paper.

The presenting author of this paper has the option to appeal this decision by contacting TPII@ieee.org.

Traditionally, controlling the magnitude of the parabolic-like transmission error of a pair of gears requires a lot of time-consuming trial-and-error manual procedures. To advance design efficiency, this paper proposes an intelligent method to control the magnitude of the parabolic-like transmission error efficiently and accurately. The intelligent method is devised based on the development of a system of governing equations under the conditions of tooth contact and constraints on the magnitude of the parabolic-like transmission error. The design parameters to be determined are transformed into the roots of the system of equations. By simply applying Newton's root-finding method, the parameters to be designed are obtained automatically and efficiently. To show how to apply the proposed method, a pair of external gears composed of an involute gear and a circular-arc gear is adopted as an example. It is verified numerically that the magnitude of the parabolic-like transmission error of the gear pair is indeed controlled.",2010,0, 868,Notice of Retraction
The coordinate transformation method of GPS RTK and their relationship with the position error,"Notice of Retraction

After careful and considered review of the content of this paper by a duly constituted expert committee, this paper has been found to be in violation of IEEE's Publication Principles.

We hereby retract the content of this paper. Reasonable effort should be made to remove all past references to this paper.

The presenting author of this paper has the option to appeal this decision by contacting TPII@ieee.org.

We must first calculate the coordinate transformation parameters between the WGS-84 coordinate system and the national or local coordinate system when measuring with GPS RTK. In order to obtain better results, a new idea is put forward for the four-parameter transformation of a small regional area, and its validity is verified in this paper. At the same time, through comparison with the current method, we find that the parameter precision of the new method is slightly better than that of the current method under the same conditions. Finally, based on the new method, we discuss the effect of the common-point error on the accuracy of the coordinate transformation parameters when using GPS RTK, and find that the new method has a simple calculation process, better results, and a regular relationship between the common-point error and the parameter precision. All this plays an instructive role in operating GPS RTK conveniently.",2010,0, 869,Notice of Retraction
Boiler Tube Leakage Acoustic Localization Error Analysis,"Notice of Retraction

After careful and considered review of the content of this paper by a duly constituted expert committee, this paper has been found to be in violation of IEEE's Publication Principles.

We hereby retract the content of this paper. Reasonable effort should be made to remove all past references to this paper.

The presenting author of this paper has the option to appeal this decision by contacting TPII@ieee.org.

Efficient and reliable operation is the main requirement of the modern power plant; hence it is important that the boiler tube leakage source be localized precisely. A four-element acoustic array was deployed in the furnace, a set of hyperbolic equations was established, and the arrangement of the sensors distributed to capture the leakage signal was given. A stable and sharp peak can be obtained by an approximation of the maximum likelihood (ML) estimator, which was verified via the experiment on leakage localization. When the time-difference-of-arrival (TDOA) error level was 0.1 μs, the array failed to fix the position because of the considerable coordinate error; when the error level was 0.01 μs, the coordinate error was linearized and reduced to the permitted range (less than 1 m).",2010,0, 870,Notice of Retraction
Study on the Uncertain Problems in Power Grid Fault Diagnosis,"Notice of Retraction

After careful and considered review of the content of this paper by a duly constituted expert committee, this paper has been found to be in violation of IEEE's Publication Principles.

We hereby retract the content of this paper. Reasonable effort should be made to remove all past references to this paper.

The presenting author of this paper has the option to appeal this decision by contacting TPII@ieee.org.

To solve the uncertain problems existing in power grid fault diagnosis, a method of Bayesian statistical inference is proposed in this paper on the basis of a fault diagnosis expert system based on rules and causal logic. Through the analysis of historical data, the diagnosis system acquires a general memory function; meanwhile, introducing case-based reasoning (CBR) and establishing a special case library enable the system to remember special events. The application of the two methods has improved the diagnosis system's comprehensive reasoning abilities and enhanced the adaptability and self-learning ability of the system.",2009,0, 871,Notice of Retraction
Present situation and development of error calibration for linear motion guide,"Notice of Retraction

After careful and considered review of the content of this paper by a duly constituted expert committee, this paper has been found to be in violation of IEEE's Publication Principles.

We hereby retract the content of this paper. Reasonable effort should be made to remove all past references to this paper.

The presenting author of this paper has the option to appeal this decision by contacting TPII@ieee.org.

The calibration of the linear motion guide is not only an academic direction for mechanical machining, but also an important means of error compensation for multi-degree-of-freedom motion systems. It is an important part of the kinematic research of motion systems and a main contributor to the enhancement of motion system precision. This paper addresses the direct calibration method and the indirect calibration method for linear motion guides. In particular, the artifacts method, the optics method and the displacement-lines method, including their principles, characteristics and present situation, are expounded at length. The artifacts method is an important approach that may be used for short-travel applications, while the displacement-lines method is not only applicable to long-travel applications but is also an economical calibration method.",2010,0, 872,Notice of Retraction
A fast error measurement system for CNC machine tools based on step-gauge,"Notice of Retraction

After careful and considered review of the content of this paper by a duly constituted expert committee, this paper has been found to be in violation of IEEE's Publication Principles.

We hereby retract the content of this paper. Reasonable effort should be made to remove all past references to this paper.

The presenting author of this paper has the option to appeal this decision by contacting TPII@ieee.org.

Traditional error measuring methods for CNC machine tools are often restricted by repeated alignment in set-up, high price and strict measuring environment requirements. In order to develop a convenient and fast error measuring system, research on the principle of step-gauge-based error measurement has been done. The proposed equipment is convenient because it places no demands on installation accuracy or working environment. It can identify the orientation of the step-gauge with its own measuring function, and automatic correction of the micrometer and the step-gauge's orientation can eliminate interference caused by installation, which has greatly enhanced the efficiency. Temperature difference compensation enables the equipment to perform in the workshop. This method is fast, with accuracy within 2~3 μm, and can be applied to measure the position error of economy machine tools. In addition, it can generate corresponding compensation files for various CNC systems.",2010,0, 873,Experimental evaluation of error control for video multicast over wireless LANs,"Multicasting of compressed video streams over wireless networks demands significantly different approaches to error control than those used in wired networks, due to the high packet loss rate. This paper describes an experimental study of a proxy service to enhance interactive MPEG-1 video streams when multicast across wireless local area networks. The architecture and operation of the proxy service are presented, followed by results of a performance study conducted on a mobile computing testbed. The main contribution of the paper is to show that a combination of forward and backward error control is effective when applied to video streams for mobile collaborating users",2001,0, 874,Fault Diagnosis Method for Mobile Robots Using Multi-CMAC Neural Networks,"A Multi-CMAC (cerebellar model articulation controller) neural network based fault detection and diagnosis (FDD) method for mobile robots is proposed. Three failure types (system fault, sensor fault, and combined fault) are handled. The mobile robot system consists of several functional modules belonging to different module groups, which execute different tasks. Based on the consistency of sensor information between neighboring modules in the same module group, the method of fault diagnosis is studied. Then, multiple CMAC neural networks are used to implement the diagnosis; one CMAC neural network is assigned to each module group. In the neural network, the sensor information is used as the inputs and the fault signals are used as the outputs. As an example, the method is implemented on the drive system of a wheeled mobile robot. The simulation results show the effectiveness of the proposed technique.",2007,0, 875,IR thermographic detection of defects in multi-layered composite materials used in military applications,"Multi-layered composites are frequently used in many military applications as constructional materials and light armours protecting personnel and armament against fragments and bullets. Material layers can be very different in their physical properties. Therefore, such materials represent a difficult inspection task for many traditional techniques of non-destructive testing (NDT). Typical defects of composite materials are delaminations, a lack of adhesives, condensations and crumpling. IR thermographic NDT is considered a candidate technique to detect such defects.
In order to determine the potential usefulness of the thermal methods, specialized software has been developed for computing 3D (three-dimensional) dynamic temperature distributions in anisotropic six-layer solid bodies with subsurface defects. In this paper, both modeling and experimental results which illustrate the advantages and limitations of IR thermography in inspecting composite materials will be presented.",2007,0, 876,Evolving non-dominated solutions in multi objective fault section estimation for automated distribution networks,"Multiobjective evolutionary algorithms (MOEAs) that use nondominated sorting have been criticized mainly for their computational complexity and nonelitism approach. In this paper, we suggest a non-dominated sorting based multiobjective evolutionary algorithm (MOEA), called the nondominated sorting genetic algorithm-II (NSGA-II), for solving the fault section estimation problem in automated distribution networks, which alleviates all the above difficulties. Due to the presence of various conflicting objective functions, the fault location task is a multi-objective optimization problem. The considered FSE problem should be handled using multiobjective optimization techniques since its solution requires a compromise between different criteria. In the adopted formulation, these criteria are fast and accurate estimation of the potential fault location. In contrast to the conventional genetic algorithm (GA) based approach, NSGA-II does not require weighting factors for conversion of such a multi-objective optimization problem into an equivalent single-objective optimization problem. Based on the simulation results on four different automated distribution networks, the performance of the NSGA-II based scheme has been found significantly better than that of a conventional GA based method.",2010,0, 877,Fault-Tolerant Interior-Permanent-Magnet Machines for Hybrid Electric Vehicle Applications,"Multiphase interior permanent magnet (IPM) motors are very good candidates for hybrid electric vehicle applications. High torque pulsation is the major disadvantage of most IPM motor configurations. A five-phase IPM motor with low torque pulsation is discussed. The mathematical model of the five-phase motor is given. A control strategy that provides fault tolerance to five-phase permanent-magnet motors is introduced. In this scheme, the five-phase system continues operating safely under loss of up to two phases without any additional hardware connections. This feature is very important in traction and propulsion applications where high reliability is of major importance. The system that is introduced in this paper will guarantee high efficiency, high performance, and high reliability, which are required for automotive applications. A prototype four-pole IPM motor with 15 stator slots has been built and is used for experimental verification.",2007,0, 878,Load balance and fault tolerance in NAT-PT,"NAT-PT is an important technology for IPv4 to IPv6 transition by mapping protocols and addresses to enable communication between IPv4-only and IPv6-only networks. However, traditional NAT-PT gateways have limits in scalability, resilience and performance. This paper presents two approaches, NAT-based and DNS-based, for load balance and fault tolerance to address such limits.
Our implementation and analysis demonstrate that these two approaches can have better performance, resilience and scalability than the traditional NAT-PT approach, and that the DNS-based approach is better than the NAT-based one.",2003,0, 879,Research on network fault diagnosis based on mobile agent,"Network fault management is an important part of network management. Aiming at the problems of traditional SNMP-based network fault management, a mobile agent-based network fault diagnosis model is proposed, which applies mobile agent technology to network management so it can monitor, detect and deal with network faults autonomously. The structure and function of the model, especially the architecture and strategy of the diagnostic agent, are described. Finally, the system performance is compared through experiments, and the results show that network fault diagnosis based on mobile agents has a clear advantage.",2010,0, 880,Development of network management console: SNMP based network mapping and fault management,"Network management tools are used to manage and monitor the internetworking environment of an organization. These tools help to identify problems and potential problems of an internetworking environment. They can also be used to assist the help desk in identifying and escalating problems relating to the IT infrastructure. Fault management is the process of locating and correcting network problems or faults. Fault management is an important element of network management which consists of three steps. The steps are identifying the occurrence of a fault, isolating the cause of the fault and facilitating fault correction. This paper discusses the work done to develop an SNMP based network mapping and fault management tool at University Tenaga Nasional. The system displays a status map, where a visual display of the status of critical network elements is shown to allow users to verify and isolate problems. An end-to-end testing function is used for testing on a scheduled or on-demand basis. The system also creates a historical record of the faults.",2002,0, 881,A novel arc-suppression method for grounding fault in wind farm,"The neutral-point non-effectively grounded system is used in wind farms, which have complex structures, high cost, wide distribution and high capacitive current. This paper presents an arc-suppression method for the neutral-point non-effectively grounded system: the voltage arc-suppression method. A reactor is directly connected between the faulted bus and ground. The voltage of the reactor is used to suppress the voltage of the faulted phase, to make sure it becomes much smaller than the amplitude of the electric strength recovery, undermine the arc re-ignition mechanism, and extinguish the grounding arc. The simulation results in the paper verify the validity and advantages of the method; it is not affected by the grounding current, eliminating the complex tracking and compensation calculation. The arc-suppression method is simple and especially suitable for systems with high capacitive current, such as wind farms with cables.",2010,0, 882,VisTRE: A Visualization Tool to Evaluate Errors in Terrain Representation,"New data sources and sensors bring new possibilities for terrain representations, and new types of characteristic errors.
We develop a system to visualize and compare terrain representations and the errors they produce.",2006,0, 883,Testbench components verification using fault injection techniques,"New methodologies for digital design verification make use of SystemVerilog's object-oriented mechanisms to speed up the development of the verification environment. Yet, undetected testbench errors can slow down or even compromise the overall verification process. This paper concentrates on the fault injection technique applied to different SystemVerilog testbench components. By altering functionality in different places of the testbench, potential hidden errors can be detected, improving the testbench capacity to detect design misbehavior. Fault injection in SystemVerilog testbench components may be used in addition to existing methods of functional verification analysis, for testbench validation. Modalities to alter the main testbench components are presented, highlighting the effects on the testbench behavior.",2010,0, 884,Transformer Power Fault Diagnosis System Design Based On The HMM Method,"Once failures of large-scale power transformers occur, they result in catastrophic economic losses and social impact. Therefore, it is necessary to design and apply a state monitoring and fault diagnosis system for large-scale transformers in order to improve their reliability and accuracy during operation, which will benefit the economic performance of power enterprises and promote economic and social development. This paper focuses on large-scale power transformers, introduces Hidden Markov Model (HMM) theory into the power transformer fault diagnosis field, and puts forward a fault diagnosis method using the HMM. Some issues that arise from applying the HMM to power transformers, and methods for settling them, are further analysed. The fault diagnosis principle based on the HMM is discussed in detail. The power transformer faults are classified and each of their characteristic variables is determined; accordingly, the fault diagnosis model library for power transformers is developed. Finally, the fault diagnosis system for large-scale power transformers is designed.",2007,0, 885,Data-Driven Fault Detection Based on Process Monitoring using Dimension Reduction Techniques,"One goal of integrated vehicle health management (IVHM) for commercial airplane customers is to monitor sensor data to anticipate problems before flight deck effects (FDEs) ground the airplane for unplanned maintenance. Airplane subsystems - such as flight and environmental control systems, electrical and hydraulic power - can have a high number of associated parameters. Monitoring sensor data streams individually can be inefficient, and fail to detect problems. A research effort at The Boeing Company is investigating anomaly detection algorithms for multivariate time series of parametric data. The multivariate process monitoring techniques account for correlation between parameters, and therefore alert when relationships between parameters change, as well as when mean levels of individual parameters change. Since many traditional multivariate process monitoring techniques are not suited for the high number of parameters in airplane subsystems, this paper discusses using dimension reduction techniques. One example is principal component analysis (PCA). If the assumptions behind PCA are not met, then monitoring charts based on conventional PCA alone can show false alarms and bad detectability.
Independent component analysis (ICA) is a recently developed method in which the goal is to decompose observed data into linear combinations of statistically independent components. ICA can be considered an extension of PCA since it uses PCA as an initial pre-whitening stage. Like projection pursuit density estimation, ICA searches for projections of the data that are most non-Gaussian.",2008,0, 886,A dual human performance strategy: error management and defense-in-depth,"One of INPO's strategic objectives is to minimize the frequency and severity of events. Error management, a traditional approach for human performance, is only one part of an effective management strategy. While error rate tends to drive event frequency, event severity is a function of defenses. Defenses are necessary to protect the plant from the hazard of human error. In addition to error management, an effective strategy emphasizes defense-in-depth through organizational processes, values and leadership to counter the effects of error that could lead to events. Sustained high levels of performance, safety, and reliability are dependent on both the rigorous application of error prevention techniques and the integrity of defenses-in-depth.",2002,0, 887,Modeling the burst error statistics of Viterbi decoding for punctured codes,One of the characteristics of the Viterbi algorithm is that errors at the output appear grouped in bursts. Knowledge of these error burst statistics is very useful in designing a communication system. This paper presents a simple model of Viterbi decoder burst-error statistics derived from the code distance properties. The puncturing process is also modeled. Results are validated through additional computer simulations. The advantage of the model is that it may be used to generate Viterbi error sequences quickly when a very large amount of data is needed. The complete analysis of the convolutional code used in Digital Video Broadcasting (DVB) standards is shown.,2002,0, 888,On the design of tunable fault tolerant circuits on SRAM-based FPGAs for safety critical applications,"Mission-critical applications such as space or avionics increasingly demand high fault tolerance capabilities of their electronic systems. Among the fault tolerance characteristics, the performance and costs of an electronic system remain the leading factors in the space and avionics market. In particular, when considering SRAM-based FPGAs, specific hardening techniques generally based on Triple Modular Redundancy need to be adopted in order to guarantee the desired degree of fault tolerance. While effectively increasing the fault tolerance capability, these techniques introduce a significant performance degradation and a dramatic area overhead, which results in higher design costs. In this paper, we propose an innovative design flow that allows the implementation of fault-tolerant circuits in SRAM-based FPGA devices with different degrees of fault tolerance capability. We introduce a new metric that allows a designer to precisely estimate and set the desired fault tolerance capabilities. Experimental analysis performed on a realistic industrial-type case study demonstrates the efficiency of our methodology.",2008,0, 889,Fault Detection and Diagnosis in IP-Based Mission Critical Industrial Process Control Networks,"Mission-critical industrial process control networks support secure and reliable communications of devices in a controlling or manufacturing environment. They used to mostly use proprietary protocols and networks.
Recently, however, many of them are being migrated to IP-based networks to consolidate many different types of networks into a single common network to simplify network operation, administration, and maintenance, and reduce operational expenses and capital expenditures. Despite their wide deployment, most operators have very little knowledge of how to operate them reliably and securely. This is mainly due to the operators' unfamiliarity with the various faults that occur on IP-based process control networks. The current process of detecting and diagnosing faults in process control networks is mostly manual, and thus the operators detect the problems only after noticeable process malfunctions. This article presents an overview of industrial process control networks and discusses the issues of introducing IP technologies into them. We then propose a fault detection and diagnosis method which is suitable for IP-based process control networks. We also present the system architecture and implementation of the fault detection and diagnosis system as well as its deployment at POSCO. Finally, based on operational experience, we have generated a failure prediction model that can be used to predict potential alarms.",2008,0, 890,Performance modelling of a fault-tolerant agent-driven system,"Mobile agent-based technology has attracted considerable interest in both academia and industry in recent years. Many agent-based execution models have been proposed and their effectiveness has been demonstrated in the literature. However, these models require a high overhead to achieve the reliable execution of mobile agents. In this paper, we propose a new mobile agent-based execution model, which is based on a surveillant mechanism. Extensive theoretical analysis of a stochastic nature is provided to evaluate the performance of our model, including the transaction time from node to node, the life expectancy of mobile agents, and the population distribution of mobile agents. The analytical results reveal new theoretical insights into the fault-tolerant execution of mobile agents and show that our model outperforms the existing fault-tolerant models. Our model provides an efficient way to increase overall performance and a promising method of achieving mobile agent system reliability.",2005,0, 891,Three-loop temporal interpolation for error concealment of MDC,"Multiple description coding (MDC) can be used as an error resilience (ER) technique for video coding. In case of transmission errors, error concealment can be combined with MDC to reconstruct the lost frame, such that the error propagated to the following frames is reduced. In this paper, we propose a new temporal error concealment method named three-loop temporal interpolation (TLTI). TLTI can be well combined with temporal sub-sampling ER methods, such as MDC and alternative motion-compensated prediction. In the simulation, we compare the performance of TLTI with unidirectional motion compensated temporal interpolation (UMCTI). Both visual and quantitative results show that TLTI can achieve a better video quality than UMCTI",2006,0, 892,Dominant Color Tracking Based Color Correction for Multi-View Video Using Kalman Filter,"Multi-view video is a new multimedia service which provides an interactive experience and depth perception. Color variations between the camera views need to be eliminated in multi-view imaging. In this paper, a dominant color tracking based color correction method for multi-view video is proposed using a Kalman filter.
It tracks the dominant colors by using a Kalman filter to fit the color variation of the real corrected multi-view videos. Experimental results show that the proposed method can eliminate discontinuous variations between frames for higher coding performance.",2009,0, 893,Design trends and challenges of logic soft errors in future nanotechnologies circuits reliability,"Nanometer circuits are becoming increasingly susceptible to soft errors due to alpha particle and atmospheric neutron strikes as device scaling reduces node capacitances and supply voltage scaling reduces noise margins. The result is a significantly reduced reliability that becomes unacceptable in an increasing number of applications as we move deeper into the nanotechnologies. In this context, logic soft errors, a concern for space applications in the past, are a reliability issue at ground level today. More and more techniques have been used to mitigate various faults, including logic soft errors. The paper comprehensively analyzes logic soft error sensitivity in future deep submicron processes, and also discusses fault tolerant schemes at different design levels.",2008,0, 894,Sensor soft fault detection method of autonomous underwater vehicle,"Operating in a complex ocean environment, the condition monitoring and fault diagnosis of sensors have a great impact on the safety of an autonomous underwater vehicle (AUV). When a sensor soft fault of an AUV is detected by the traditional observer method based on closed-loop control and closed-loop detection, the sensor measured value with fault information is fed into the input of the observer, which affects the output of the observer and makes it track the measured value of the sensor. Meanwhile, as the faulty sensor information is fed into the controller, the fault of the sensor will be compensated by the adjusting function of the controller. To solve the problem that it is difficult to detect the soft fault of sensors from the state data of the AUV and the output of the observer, the paper presents a novel diagnosis method to detect the sensor soft fault. Based on closed-loop control and open-loop detection, it constructs an open-loop state observer model using an RBF neural network and takes the observer residual and the actual residual as the judicative residual and the basis residual, respectively. The sensor condition is judged according to the different trends of the two kinds of residuals mentioned above. In the process of training the RBF neural network state observer, the selection method of initial centers is improved and the repeated selection of initial centers is avoided. The experimental results show that the improved RBF learning algorithm has faster convergence speed and better training effect. The pool experiment results prove that the method of sensor soft fault detection is feasible and effective for the autonomous underwater vehicle.",2009,0, 895,Fault-tolerance by regeneration: using development to achieve robust self-healing neural networks,"As opposed to the standard paradigm of 'fault-tolerance by redundancy', ontogeny offers the possibility to engineer artificial organisms which can re-grow faulty components. Similar to what happens in nature, organisms display self-healing: a homeostatic process which allows proper operation while suffering faults. In this paper we present a system which evolves developing spiking neural networks capable of controlling simulated Khepera robots in a wall avoidance task.
Development is controlled by a decentralized process executed by each cell's identical growth program. To test the system's self-healing capability, networks are (1) subjected to random faults during development and (2) mutilated during operation. Results demonstrate how development can (i) rapidly produce proper neuro-controllers and (ii) re-grow neurons to recover normal operation. These results show that development, originally proposed to increase the evolvability of large phenotypes, also allows the production of artifacts with sustained fault-tolerance. These artifacts would be especially well-suited for tasks that require long periods of operation in the absence of external maintenance.",2005,0, 896,Design and simulation of Optical Frequency Domain reflectometer for short distance fault detection in optical fibers and integrated optical devices using ptolemy-II,"Optical Frequency Domain measurements are much more accurate and efficient than those in the time domain for short lengths of fibers or integrated optical devices. In the present investigation, the traditional Optical Time Domain Reflectometry (OTDR) technique, which is used for the analysis of fibers of large length, has been combined with frequency analysis of the time domain signal to develop a system that detects faults in optical ICs or fibers of very small lengths in a non-destructive fashion. The drawback of OTDR is that it can be used only for cables of long distances because, when used in short fibers, the total time between the injected laser beam and the response will be negligible. To circumvent this problem we use frequency domain analysis of the time domain data by applying the Fourier transform.",2009,0, 897,Dead-time correction for an orbiting rod normalization in a cylindrical PET system,"Orbiting rod source coincidence acquisitions may exhibit a significant amount of dead-time, depending on the system geometry and the rod source activity. This dead-time is a function of the relative location between the rod source and the crystals in the block detector and, therefore, varies across the sinogram row. Methods: Normalization scans were acquired on a GE Discovery ST positron emission tomography (PET)-CT system using three different rod activities and of duration such that the total acquired counts (T+S+R) were held constant. To develop a model of the dead-time, acquisitions at six static source locations, centered over each crystal in a single block detector, were acquired for each of the rod activity levels. The resultant block busy data were analyzed such that the profile of block busy as the rod traversed all lines-of-response (with respect to the said block) was found. The profile was fit with a Gaussian function and parameterized by full-width at half-maximum and amplitude. For image analysis, a 20 cm uniform cylinder and a whole-body patient scan were analyzed in reconstructed image space. The datasets were reconstructed with each uncorrected normalization, then with the normalization corrected for dead-time. Results: A model has been determined allowing application of dead-time correction to an orbiting rod normalization scan based on measured block-busy in the normalization raw data. Bias and variance effects were removed from reconstructed images. Conclusion: The model corrects for dead-time effects in the normalization.
Analysis of image quality impact shows that bias and variance effects can be reduced to insignificant levels.",2004,0, 898,Conversation errors in Web service coordination: run-time detection and repair,"Organizations that own Web services participating in a workflow composition may evolve their components independently. Service coordination can fail when previously legal messages between independently changing, distributed components become illegal because their respective workflow models are no longer synchronized. This paper presents an intelligent-agent framework that wraps a Web service in a conversation layer and a simple workflow-adaptation function. The conversation layer implements protocols and consults globally shared, declarative policy specifications to resolve interaction failures. The framework allows agents to resolve various model mismatches that cause interaction errors, including changes to required preconditions, partners, and expected message ordering. Implications of this distributed approach to Web service coordination are also discussed.",2005,0, 899,Forward error correction codes to reduce intercarrier interference in OFDM,"Orthogonal Frequency Division Multiplexing (OFDM) is sensitive to the carrier frequency offset (CFO), which destroys orthogonality and causes intercarrier interference (ICI). Recently, a simple rate 1/2 repeat coding scheme has been shown to be effective in suppressing ICI. That such a simple coding scheme is so effective raises an interesting question. Can more powerful error correcting codes with less redundancy be used just as effectively for the same purpose? In this paper, we propose the use of rate-compatible punctured convolutional (RCPC) codes",2001,0, 900,Closed-Form Symbol Error Probabilities of Distributed Orthogonal Space-Time Block Codes,"Orthogonal space-time block codes allow utilising the diversity provided by multiple-input-multiple-output communication channels, thereby decreasing the error probability for a given communication rate. The contribution of this paper is the derivation of closed-form expressions of the symbol error probability of distributed codes deployed over Nakagami flat fading channels with different channel gains and fading parameters",2006,0, 901,Fuzzy vibration fault diagnosis system of steam turbo-generator rotor,"Our mechanical fuzzy fault diagnosis system for rotors adopts the double diagnosis symptom group diagnosis strategy, and sets up a module using C++ Builder 5.0. From the results of laboratory experiments and operations on site, we verify that the system diagnosis is very accurate.",2002,0, 902,Space Mapping With Adaptive Response Correction for Microwave Design Optimization,"Output space mapping is a technique introduced to enhance the robustness of the space-mapping optimization process in case the space-mapped coarse model cannot provide sufficient matching with the fine model. The technique often works very well; however, in some cases it fails. Especially in the microwave area where the typical model response (e.g., |S 21|) is a highly nonlinear function of the free parameter (e.g., frequency), the output space-mapping correction term may actually increase the mismatch between the surrogate and fine models for points other than the one at which the term was calculated, as in the surrogate model optimization process. In this paper, an adaptive response correction scheme is presented to work in conjunction with space-mapping optimization algorithms.
This technique is designed to alleviate the difficulties of the standard output space mapping by adaptive adjustment of the response correction term according to the changes of the space-mapped coarse model response. Examples indicate the robustness of our approach.",2009,0, 903,Energy-aware fault-tolerant clustering scheme for target tracking wireless sensor networks,"Over the last few years, the deployment of wireless sensor networks (WSNs) has been fostered in diverse applications. WSNs have great potential for a variety of domains ranging from scientific experiments to commercial applications. Due to the deployment of WSNs in dynamic and unpredictable environments, these applications must have the potential to cope with a variety of faults. This paper proposes an energy-aware fault-tolerant clustering protocol for target tracking applications termed the Fault-Tolerant Target Tracking (FTTT) protocol. The identification of redundant nodes (RNs) makes sensor node (SN) fault tolerance plausible, and the clustering enables recovery of sensors supervised by a faulty cluster head (CH). The FTTT protocol takes two steps to reduce energy consumption: first, identifying RNs in the network; second, restricting the number of SNs sending data to the CH. Simulations validate the scalability and low power consumption of the FTTT protocol in comparison with the LEACH protocol.",2007,0, 904,Packet error rate optimization for routing algorithms in ad hoc wireless sensor networks,"Over recent years, wireless multihop ad hoc networks have received tremendous interest from research groups. The major focal point has been the routing protocols, i.e. how to select the best route to send information from any source to any destination. Each node in an ad hoc network may work as a router to relay connections or data packets to their destination. The key issues of ad hoc networking are medium access control, which is used to share common channel resources among wireless nodes, and the networking layer. In this work, we have implemented a routing protocol based on the multihop algorithm to transmit information from any source to any destination node. We have focused on an indoor wireless sensor network, where all the links between nodes follow a line-of-sight propagation model. Medium access control and the physical layer have been implemented according to the IEEE 802.15.4 standard in the 2.45 GHz frequency band, where information is modulated in O-QPSK. In multihop wireless networks, one important question is how route selection depends on the physical and logical topology of the network. In terms of routing there is a trade-off between hop distance and reception rate. This compromise will give us the best route to minimize the power consumption and maximize the transmission success probability.",2008,0, 905,Software Reliability Analysis by Considering Fault Dependency and Debugging Time Lag,"Over the past 30 years, many software reliability growth models (SRGM) have been proposed. Often, it is assumed that detected faults are immediately corrected when mathematical models are developed. This assumption may not be realistic in practice because the time to remove a detected fault depends on the complexity of the fault, the skill and experience of the personnel, the size of the debugging team, the technique(s) being used, and so on.
During software testing, practical experience shows that mutually independent faults can be directly detected and removed, but mutually dependent faults can be removed if and only if the leading faults have been removed. That is, dependent faults may not be immediately removed, and the fault removal process lags behind the fault detection process. In this paper, we will first give a review of fault detection & correction processes in software reliability modeling. We will then illustrate with several examples the fact that detected faults cannot be immediately corrected. We also discuss software fault dependency in detail, and study how to incorporate both fault dependency and debugging time lag into software reliability modeling. The proposed models are fairly general models that cover a variety of known SRGM under different conditions. Numerical examples are presented, and the results show that the proposed framework to incorporate both fault dependency and debugging time lag for SRGM has a better prediction capability. In addition, an optimal software release policy for the proposed models, based on a cost-reliability criterion, is proposed. The main purpose is to minimize the cost of software development when a desired reliability objective is given",2006,0, 906,Design by extrapolation: an evaluation of fault tolerant avionics,"Over the past 30 years, safety-critical avionics systems such as Fly-By-Wire (FBW) flight controls, full-authority digital engine controls, and other systems have been introduced on many commercial and military airplanes and spacecraft. Early FBW systems, such as on the F-16 and Airbus A320, were considered revolutionary and introduced with extreme caution. These early systems and their successors all make use of redundant and fault-tolerant avionics to provide the required dependability and safety, but have used significantly different architectures. This paper examines the different levels of criticality and fault tolerance required by different types of avionics systems, establishes architectural categories of fault-tolerant architectures, and identifies the discriminating features of the varied approaches.
Examples of discriminators include the level of redundancy, methods of engaging backup systems, protection from software errors, and the use of dissimilar hardware and software. The strengths and weaknesses of the different approaches are identified. The paper concludes with some speculation on trends for future systems based on this evaluation of previous systems",2001,0, 908,Masking Does Not Protect Against Differential Fault Attacks,"Over the past ten years, cryptographic algorithms have been found to be vulnerable to side-channel attacks such as power analysis attacks, timing attacks, electromagnetic radiation attacks and fault attacks. These attacks capture leaking information from an implementation of the algorithm in software or in hardware and apply cryptanalytical and statistical tools to recover the secret keys. A very well-known countermeasure against these attacks is to randomize every execution of the algorithm and every intermediate piece of data with a so-called masking method. In this paper we demonstrate that traditional countermeasures such as masking methods for symmetric cryptosystems are completely ineffective against fault attacks. In other words, differential fault attacks still apply to masked data. As an example we show how to recover secret keys from two masked AES implementations using a basic differential fault attack.",2008,0, 909,The error chain in using Electronic Chart Display and Information Systems,"Over-reliance on the information provided by electronic chart display and information systems (ECDIS) is a significant risk in navigation. This paper states the advantages of ECDIS and the potential human errors, equipment errors and operation errors that have been found in using ECDIS. Using root cause analysis to determine the core errors, the error chain in using ECDIS is discussed and identified, and then recommendations for correctly using ECDIS are proposed.",2008,0, 910,Toward Fault-Tolerant P2P Systems: Constructing a Stable Virtual Peer from Multiple Unstable Peers,"P2P systems must handle unexpected peer failures and departures, and are thus more difficult to implement than server-client systems. In this paper, we propose a novel approach to implementing P2P systems by using virtual peers. A virtual peer consists of multiple unstable peers. A virtual peer is a stable entity; application programs running on a virtual peer are not compromised unless a majority of the peers fail within a short time duration. If a peer in a virtual peer fails, the failed peer is replaced by another (non-failed) one to restore the number of working peers. The primary contribution of this paper is to propose a method to form a stable virtual peer over multiple unstable peers. The major challenges are to maintain consistency between multiple peers and to replace a failed peer with another one. For the first issue, the Paxos consensus algorithm is used. For the second issue, the process migration technique is used to replicate and transfer a running process to a remote peer. Furthermore, the relation between the reliability of a virtual peer and the number of peers assigned to a virtual peer is evaluated. The proposed method is implemented in our musasabi P2P platform. An overview of musasabi and its implementation is also given.",2009,0, 911,Evaluation of multicast error recovery using convolutional codes,"Packet losses due to congested traffic conditions in the Internet lead to the investigation of the reliable delivery of data to multicast receivers.
In this paper, we examine the technique based on FEC, using (n, k, m) convolutional codes to recover lost packets. We show that when the redundant packets are generated by convolutional codes, a receiver can tolerate a certain amount of packet loss and still be able to obtain all data without requiring retransmission. We evaluate the effectiveness of the proposed approach to perform the recovery of lost packets for multicast transmission taking into account two different parameters: the number of packets needed to be sent to guarantee the reception of data, and the number of transmissions. Finally, we compare the proposed approach with the scheme in which parity packets are based on Reed-Solomon codes. We demonstrate that the use of parity generated by convolutional coding is more efficient at reducing bandwidth requirements and the number of transmissions from the source",2000,0, 912,A Dynamic Temporal Error Concealment Algorithm for H.264,"Packet losses or errors in highly compressed video streams during transmission over error-prone channels may cause a serious decline in video quality. Error concealment (EC) at the decoder side is an effective technology to reduce this video degradation. This paper proposes a Dynamic Temporal Error Concealment (DTEC) algorithm for H.264, which chooses different error concealment approaches according to the variance of the motion vectors of available macro-blocks (MBs) around the lost MB. Furthermore, a recovery method based on the Directional Temporal Boundary Match Algorithm (DTBMA) is proposed. Experimental results show that the proposed algorithm not only increases PSNR but also improves subjective video quality compared with conventional temporal error concealment algorithms in the case of the same packet loss rate.",2010,0, 913,Robust Adaptive Tracking Using Mixed Normalized/Unnormalized Estimation Errors,"The parameter adjustment mechanism plays an important role in obtaining smooth and fast responses in adaptive control systems. Using the normalized estimation error can improve the robustness properties of the adaptive system despite perturbations, but admissible tracking error and fast convergence may not necessarily be obtained with it. This paper concerns the design of a parameter adjustment mechanism that ensures robust, fast and smooth convergence despite disturbances and parameter variations. The algorithm is developed based on a variable normalizing gain to guarantee convergence, and is then improved by combining it with an unnormalized estimation approach to meet all the desired specifications. The proposed algorithm is then applied to a model reference adaptive control (MRAC) scheme to ensure that robust tracking is obtained despite the perturbations. Simulation results show the capability of the proposed algorithm compared to the pure normalized or unnormalized approaches.",2007,0, 914,EVAL: Utilizing processors with variation-induced timing errors,"Parameter variation in integrated circuits causes sections of a chip to be slower than others. If, to prevent any resulting timing errors, we design processors for worst-case parameter values, we may lose substantial performance. An alternate approach explored in this paper is to design for closer to nominal values, and provide some transistor budget to tolerate unavoidable variation-induced errors. To assess this approach, this paper first presents a novel framework that shows how microarchitecture techniques can trade off variation-induced errors for power and processor frequency.
Then, the paper introduces an effective technique to maximize performance and minimize power in the presence of variation-induced errors, namely High-Dimensional dynamic adaptation. For efficiency, the technique is implemented using a machine-learning algorithm. The results show that our best configuration increases processor frequency by 56% on average, allowing the processor to cycle 21% faster than without variation. Processor performance increases by 40% on average, resulting in a performance that is 14% higher than without variation - at only a 10.6% area cost.",2008,0, 915,A model driven approach to the design and implementing of fault tolerant Service oriented Architectures,"One of the key stages of the development of a fault tolerant service oriented architecture is the creation of diagnosers, which monitor the system's behaviour to identify the occurrence of failures. This paper presents a model driven development (MDD) approach to the automated creation of the diagnosing services and their integration into the system. The outline of the method is as follows. BPEL models of the services are transformed to deterministic automata with unobservable event representations using MDD transformations. Then, relying on discrete event system techniques, a diagnoser automaton for the deterministic automata is created automatically. Finally, the diagnoser automaton is transformed into a new BPEL representation, which is integrated into the original architecture.",2008,0, 916,Assessment of Data Diversity Methods for Software Fault Tolerance Based on Mutation Analysis,"One of the main concerns in safety-critical software is to ensure sufficient reliability, because proof of the absence of systematic failures has proved to be an unrealistic goal. Fault-tolerance (FT) is one method for improving reliability claims. It is reasonable to assume that some software FT techniques offer more protection than others, but the relative effectiveness of different software FT schemes remains unclear. We present the principles of a method to assess the effectiveness of FT using mutation analysis. The aim of this approach is to observe the power of FT directly and use this empirical process to evolve more powerful forms of FT. We also investigate an approach to FT that integrates data diversity (DD) assertions and TA. This work is part of a longer term goal to use FT in quantitative safety arguments for safety critical systems.",2006,0, 917,An Operation-Centered Approach to Fault Detection in Symmetric Cryptography Ciphers,"One of the most effective ways of attacking a cryptographic device is by deliberate fault injection during computation, which allows retrieving the secret key with a small number of attempts. Several attacks on symmetric and public-key cryptosystems have been described in the literature and some dedicated error-detection techniques have been proposed to foil them. The proposed techniques are ad hoc ones and exploit specific properties of the cryptographic algorithms. In this paper, we propose a general framework for error detection in symmetric ciphers based on an operation-centered approach. We first enumerate the arithmetic and logic operations included in the cipher and analyze the efficacy and hardware complexity of several error-detecting codes for each such operation. We then recommend an error-detecting code for the cipher as a whole based on the operations it employs. We also deal with the trade-off between the frequency of checking for errors and the error coverage.
We demonstrate our framework on a representative group of 11 symmetric ciphers. Our conclusions are supported by both analytical proofs and extensive simulation experiments",2007,0, 918,Distance errors correction for the time of flight (ToF) cameras,One of the most important distance measurement errors is produced by light reflections. These errors can't be avoided and black objects are more affected than white ones. The distance measured by the ToF camera to an object in the scene changes if surrounding objects are moved. The distance error can be greater than 50% and camera calibration is useless if objects are moved. The calibration method we propose can be performed under any conditions and not only in the laboratory. The distance errors for all objects in the scene can be corrected if white or black tags/labels are attached to the objects. ToF cameras can be improved using active illumination with structured light. The improvement will eliminate the distance errors produced by light reflections.,2008,0, 919,Fuzzy Rule Base Influence on Genetic-Fuzzy Reconstruction of CMM 3D Triggering Probe Error characteristics,"One of the most important sources of coordinate measuring machine (CMM) errors is the probe used to collect coordinate points on measured objects. The error value depends on the probing direction; hence its spatial variation is a key part of the probe inaccuracy. This paper presents genetically-generated fuzzy knowledge bases (FKBs) to model the spatial error characteristics of a CMM touch trigger probe. The automatically generated FKBs are used for the reconstruction of the direction-dependent probe error w. The angles beta and gamma are used as input variables of the FKBs; they describe the spatial direction of probe triggering. The learning algorithm used to generate the FKBs is a real/binary-like coded genetic algorithm developed by the authors. The influence of the number of fuzzy rules (FR) on the precision of the genetically-generated FKBs is investigated by varying the number of fuzzy sets (FS) on the premises and on the conclusion. The results of the learning are examined. Once the adequate number of fuzzy rules is found, an optimal learning is performed and a near-optimal FKB of probe error characteristics is proposed",2006,0, 920,Improvement on error concealment for H.264 spatial scalable video coding,"One of the scalability features in H.264/SVC is spatial scalability, and it is found to be useful in the distribution of video content to a variety of consumers. Transmission of an H.264/SVC stream over a packet network may introduce losses, which in the worst case would involve whole frame losses. It is necessary to conceal the errors in the decoded bitstream. We propose an improvement to the error concealment scheme in H.264/SVC through improved motion estimation and post-processing of the concealed frames in the pixel domain. The proposed method successfully conceals the lost frame and was found to be more effective for low bitrate video transmission than conventional methods.",2010,0, 921,"Data aware, low cost error correction for wireless sensor networks","One of the main challenges in the adoption and deployment of wireless networked sensing applications is ensuring reliable sensor data collection and aggregation, while satisfying the low-cost, low-energy operating constraints of such applications. A wireless sensor network is inherently vulnerable to different sources of unreliability resulting in transient failures.
Existing reliability techniques that address transient failures in circuits and communication channels incur prohibitively high energy, bandwidth and cost overheads in the sensor nodes. In this paper we investigate application-level error correction techniques for sensor networks that exploit the properties of sensor data to eliminate any overhead on the sensor nodes, at the expense of nominal buffer requirements at the data aggregator nodes, which are much less cost/energy constrained. Our approach involves the use of spatio-temporal correlations in sensor data, the goals of the application, and its vulnerability to various errors. We present our error-correction algorithm and evaluate it through simulations using real and synthetic sensor data. Experimental results validate the feasibility of our approach to provide a high degree of reliability in sensor data aggregation without imposing overheads on sensor nodes.",2004,0, 922,On-line built-in self-test for operational faults,"On-line testing is fast becoming a basic feature of digital systems, not only for critical applications, but also for highly-available applications. To achieve the goals of high error coverage and low error latency, advanced hardware features for testing and monitoring must be included. One such hardware feature is built-in self-test (BIST), a technique widely applied in manufacturing testing. We present a practical on-line periodic BIST method for the detection of operational faults in digital systems. The method applies a near-minimal deterministic test sequence periodically to the circuit under test (CUT) and checks the CUT responses to detect the existence of operational faults. To reduce the testing time, the test sequence may be partitioned into small sequences that are applied separately - this is especially useful for real-time digital systems. Several analytical and experimental results show that the proposed method is characterized by full error coverage, bounded error latency, and moderate space and time redundancy",2000,0, 923,Defect handling in medium and large open source projects,"Open source projects have resulted in numerous high-quality, widely used products. Understanding the defect-handling strategies such projects employ can help us use the publicly accessible defect data from these projects to provide valuable quality-improvement feedback and to better understand the defect characteristics for a wider variety of software products. We conducted a survey to understand defect handling in selected open source projects and compared the particular approaches taken in different projects. We focused on defect handling instead of the broader quality assurance activities other researchers have previously reported. Our results provided quantitative evidence about the current practice of defect handling in an important subset of open source projects.",2004,0, 924,A Study on Defect Density of Open Source Software,"Open source software (OSS) development is considered an effective approach to ensuring acceptable levels of software quality. One facet of quality improvement involves the detection of potential relationships between defect density and other open source software metrics. This paper presents an empirical study of the relationship between defect density and download number, software size and developer number as three popular repository metrics. This relationship is explored by examining forty-four randomly selected open source software projects retrieved from SourceForge.net.
By applying simple and multiple linear regression analysis, the results reveal a statistically significant relationship between defect density and the number of developers and software size jointly. However, despite theoretical expectations, no significant relationship was found between defect density and number of downloads in OSS projects.",2010,0, 925,An improved global analysis for program bug checking,"Property simulation is an efficient approach that checks if a program satisfies certain temporal safety properties, and inter-procedural property simulation terminates in polynomial time and space. This paper first proposes a general framework for bug checking. Then, it defines the function summary as a stand-in for the function. It improves property simulation and gives a new global analysis approach, in which the execution state is described in extended intervals. Experiments show that the testing tool based on this new approach has a higher accuracy rate for program bug checking.",2009,0, 926,Incorporating Fault Tolerance with Replication on Very Large Scale Grids,"Providing fault tolerance for message passing parallel applications in a distributed environment is a rule rather than an exception. A node failure can cause the whole computation to stop, and it has to be restarted from the beginning if no fault tolerance is available. However, introducing fault tolerance has some overhead on the speedup that can be achieved. In this paper, we introduce a new technique called replication with cross-over packets for reliability and to increase fault tolerance over Very Large Scale Grids (VLSG). This technique has the two-pronged effect of avoiding a single point of failure and a single link of failure. We incorporate this new technique into the L-BSP model and show the possible speedup of parallel processes. We also derive the achievable speedup for some fundamental parallel algorithms using this technique.",2007,0, 927,A new capability to detect and locate insulation defects in complex wiring systems,"Pulse arrested spark discharge (PASD) is an effective means to detect and locate a variety of insulation defects in complex wiring geometries such as breached insulation, chafing, and physically small insulation cracks. It is highly immune to line impedance variations, an important property in aircraft wiring systems, and has been shown to be nondestructive to electrical insulation materials. Because of the simplicity of the PASD concept, the low energy PASD pulser and diagnostics can be readily implemented into a portable, briefcase-sized diagnostic system. Although this patented technique will likely evolve as it enters into field applications, it is capable of making a near-term impact on the ability of inspection and maintenance organizations to detect and locate potentially hazardous insulation defects.",2005,0, 928,Diagnosing quality of service faults in distributed applications,"QoS management refers to the allocation and scheduling of computing resources. Static QoS management techniques provide a guarantee that resources will be available when needed. These techniques allocate resources based on worst-case needs. This is especially important for applications with hard QoS requirements. However, this approach can waste resources. In contrast, a dynamic approach allocates and deallocates resources during the lifetime of an application. In the dynamic approach the application is started with an initial resource allocation.
If the application does not meet its QoS requirements, a resource manager attempts to allocate more resources to the application until the application's QoS requirement is met. While this approach offers the opportunity to better manage resources and meet application QoS requirements, it also introduces a new set of problems. In particular, a key problem is detecting why a QoS requirement is not being satisfied and determining the cause and, consequently, which resource needs to be adjusted. This paper investigates a policy-based approach for addressing these problems. An architecture is presented and a prototype described. This is followed by a case study in which the prototype is used to diagnose QoS problems for a web application based on Apache",2002,0, 929,A Web services-based universal approach to heterogeneous fault databases,"QuakeSim is a Web browser-based problem-solving environment that provides a set of links between newly available resources from NASA's Earth-observing systems, high-performance simulations, automated data mining, and more traditional tools. It's the first Web services-based, interoperable environment for creating large-scale forward models of earthquake processes. A Web services-based portal provides global access to geologic reference models of faults and fault data, simple analysis tools, new parallel forward models, and visualization support. QuakeTables, a database system for handling both real and simulated data, provides input for earthquake simulation tools using fault data. Later, it will include other types of earthquake data as well. This article describes our Web-based universal approach to heterogeneous earthquake databases using QuakeTables to demonstrate the design, development, and implementation challenges of incorporating solid earth science data sets in high-performance computing simulations.",2005,0, 930,Error sensitivity analysis for wireless JPEG2000 using perceptual quality metrics,"Quality assessment of mobile and wireless multimedia services including image and video applications has gained increased attention in recent years as a means of facilitating efficient radio resource management. In particular, approaches that utilize perceptual-based metrics are becoming more dominant, as conventional fidelity metrics such as the peak signal-to-noise ratio (PSNR) may not correlate well with quality as perceived by the human observer. In this paper, we focus on the error sensitivity analysis for images given in the wireless JPEG2000 (JPWL) format using perceptual quality metrics. Specifically, the perceptual quality improvements obtained by progressively decoding an increasing number of image packets are examined. It is shown that the considered perceptual quality metrics exploiting structural image features may accompany or replace the PSNR-based error sensitivity description (ESD) marker segment in the wireless JPEG2000 standard. This addition will increase the effectiveness of the ESD marker segment as it facilitates the communication of reduced-reference information about the image quality from the transmitter to the receiver. 
In addition, the proposed approach can be used to guide the design of preferential error control coding schemes, link adaptation techniques, and selective retransmission of packets with respect to their contribution to overall quality as perceived by humans.",2008,0, 931,Monitoring Software Quality Evolution for Defects,"Quality control charts, especially c-charts, can help monitor software quality evolution for defects over time. c-charts of the Eclipse and Gnome systems showed that for systems experiencing active maintenance and updates, quality evolution is complicated and dynamic. The authors identify six quality evolution patterns and describe their implications. Quality assurance teams can use c-charts and patterns to monitor quality evolution and prioritize their efforts.",2010,0, 932,Estimates of Radial Current Error from High Frequency Radar using MUSIC for Bearing Determination,Quality control of surface current measurements from high frequency (HF) radar requires understanding of individual error sources and their contribution to the total error. Radial velocity error due to uncertainty of the bearing determination technique employed by HF radar is observed with both direction finding and phased array techniques. Surface current estimates utilizing the Multiple Signal Classification (MUSIC) direction finding algorithm with a compact antenna design are particularly sensitive to the radiation pattern of the receive and transmit antennas. Measuring the antenna pattern is a common and straightforward task that is essential for accurate surface current measurements. Radial current error due to a distorted antenna pattern is investigated by applying MUSIC to simulated HF radar backscatter for an idealized ocean surface current. A Monte Carlo type treatment of distorted antenna patterns is used to provide statistics of the differences between simulated and estimated surface currents. RMS differences between the simulated currents and currents estimated using distorted antenna patterns are 3-12 cm/s greater than those using perfect antenna patterns given a simulated uniform current of 50 cm/s. This type of analysis can be used in conjunction with antenna modeling software to evaluate possible error due to the antenna patterns before installing an HF radar site.,2007,0, 933,One kind of quantum CSS code for burst errors,"Quantum error correction is an important research area of quantum information. This paper proposes one kind of quantum CSS code for burst errors. Two methods are given for constructing the proposed CSS codes from classical cyclic codes. Using these methods, a group of CSS codes of lengths 15 and 30 suitable for burst errors are obtained. In conclusion, the capability of the quantum CSS codes to correct burst errors is much larger than their capability to correct random errors.",2010,0, 934,Continuous phase corrections applied to SAR imagery,"Phase error compensation is typically applied identically to every pixel in a Synthetic Aperture Radar (SAR) image. For certain modern systems and applications, this methodology is on the verge of becoming insufficient. We present Pixel-Unique Phase Adjustment (PUPA), an algorithm that performs an arbitrary spatially varying correction. We treat this as a deconvolution problem for which the goal is to minimize the cost function corresponding to the maximum likelihood estimate of the restored image. Our approach uses an iterative, gradient-based optimization algorithm.
This method handles nonparametric phase errors and removes distortions exactly. We present results on real SAR data and demonstrate that quality is limited only by measurement noise. We analyze performance in terms of both computational complexity and memory requirements, and discuss two different implementations that allow a tradeoff to be made between these resources.",2009,0, 935,Error Correction on IRIS Biometric Template Using Reed Solomon Codes,"A PIN or password used for authentication can be easily attacked. This limitation triggered the utilization of biometrics for secure transactions. A biometric is unique to each individual and is reliable. Among the types of biometrics currently in use, the iris is the most accurate, and it remains stable throughout a person's life. However, the major challenge of using the iris and other biometrics for authentication is the intra-user variability in the acquired identifiers. Irises of the same person captured at different times may differ due to the signal noise of the iris camera. Traditional cryptographic methods are unable to encrypt and store a biometric template and then perform the matching directly. Minor changes in the bits of the feature set extracted from the iris may lead to a huge difference in the results of the encrypted feature. In our approach, an iris biometric template is secured using iris biometrics and passwords. An Error Correction Code (ECC) is introduced to reduce the variability and noise of the iris data. Experimental results show that this approach can assure higher security with a low false rejection or false acceptance rate. The successful iris recognition rate using this approach is up to 97%.",2010,0, 936,Mitigation of GPS multipath error using recursive least squares adaptive filtering,"The positional accuracy of GPS is limited by various error sources such as the ionosphere, troposphere, clock, instrumental bias, multipath, etc. Among these, multipath errors are quite significant, since they should be dynamically modelled with respect to the GPS receiver environment. In this paper, multipath error is estimated based on both code and carrier phase measurements using the CMC (code minus carrier) method. It is verified with experimental static dual frequency GPS receiver data. The multipath time series data is applied to various Recursive Least Squares (RLS) adaptive filtering algorithms to minimize the multipath error. The results are encouraging, and a significant reduction of multipath error is observed. The convergence of RLS filters is faster than that of conventional Least Mean Squares (LMS) adaptive filters. These RLS filters can also be applied to real time kinematic GPS applications.",2010,0, 937,"Design methodology to trade off power, output quality and error resiliency: application to color interpolation filtering","Power dissipation and tolerance to process variations pose conflicting design requirements. Scaling of voltage is associated with larger variations, while Vdd up-scaling or transistor up-sizing for process tolerance can be detrimental for power dissipation. However, for certain signal processing systems such as those used in color image processing, we noted that effective trade-offs can be achieved between Vdd scaling, process tolerance and ""output quality"". In this paper we demonstrate how these tradeoffs can be effectively utilized in the development of novel low-power variation tolerant architectures for color interpolation.
The proposed architecture supports a graceful degradation in the PSNR (peak signal to noise ratio) under aggressive voltage scaling as well as extreme process variations in sub-70 nm technologies. This is achieved by exploiting the fact that some computations are more important and contribute more to the PSNR improvement than the others. The computations are mapped to the hardware in such a way that only the less important computations are affected by Vdd-scaling and process variations. Simulation results show that even at a scaled voltage of 60% of the nominal Vdd value, our design provides reasonable image PSNR with 69% power savings.",2007,0, 938,A classification approach for power distribution systems fault cause identification,"Power distribution systems play an important role in modern society. When distribution system outages occur, fast and proper restoration is crucial to improve the quality of service and customer satisfaction. Proper usage of outage root cause identification tools is often essential for effective outage restoration. This paper reports on the investigation and results of two popular classification methods: logistic regression (LR) and artificial neural networks (ANN) applied to power distribution fault cause identification. LR is seldom used in power distribution fault diagnosis, while ANN has been extensively used in power system reliability research. This paper discusses the practical application problems, including data insufficiency, imbalanced data constitution, and threshold setting, that are often faced in power distribution fault cause identification. Two major distribution fault types, tree and animal contact, are used to illustrate the characteristics and effectiveness of the investigated techniques.",2006,0, 939,Fault location using wavelet energy spectrum analysis of traveling waves,"Power grid faults generate traveling wave signals at the fault point. The signals propagate to both ends of the faulted transmission line and into the whole power grid. The traveling wave signals have many components with different frequencies, and all the components carry fault characteristics. The signals can be employed to locate the fault accurately, and the location method is not influenced by current transformer saturation or low frequency oscillation. The frequency band component with concentrated energy in the detected traveling wave is extracted by wavelet energy spectrum analysis. The arrival time of the component is recorded with wavelet analysis in the time domain. The propagation velocity of the component is calculated from the last recorded arrival times, at both ends of the tested transmission line, of a traveling wave generated by an outside disturbance. The fault location scheme is simulated with ATP software. Results show that the accuracy of the method is little affected by fault positions, fault types and grounding resistances. The fault location error is less than 100 m.",2007,0, 940,Experimental study on the partial discharge characteristics of four typical defects in GIS,"Partial discharge (PD) can be one of the most influential phenomena in the aging of electrical insulation systems. In order to ensure reliability in the operation of such equipment, the detection and interpretation of PD measurements is required to monitor the condition of insulation.
In this paper, physical models of typical defects in gas-insulated substations (GIS) are established and studied for PD using the pulse current method, based on the characteristics of the defects and a PD checker instrument. The results show that the phase-resolved PD (PRPD) patterns, time domain waveforms, average amplitude and central phase changes of PD signals from the same defect have very similar distribution characteristics, while those of different defects have different ones. This makes it very convenient to recognize PD types and enables a deeper study of the mechanism of PD.",2009,0, 941,An investigation of partial volume effect and partial volume correction in small animal positron emission tomography (PET) of the rat brain,"Partial volume correction (PVC) has been successfully applied to human PET data, where a range of methods has been used including the use of anatomical side information. The rat brain is expected to have low variability for animals of similar weight, thus making it possible to delineate volumes of interest (VOIs) on a stereotaxic atlas [1]. The aims of this study were to investigate the magnitude of partial volume effect (PVE) in small animal PET for different regions in the rat brain and to evaluate the performance of PVC based on the geometric transfer matrix method (GTM) [2] using anatomical regions drawn on a stereotaxic atlas. PVE estimates in terms of activity retention in each region and spill-over between regions were calculated by convolving each region with a measured spatially invariant point spread function. PVC was tested on dynamic microPET studies of the dopaminergic D2 receptor radioligand 11C-Raclopride which were simulated using PET SORTEO, a Monte Carlo based PET simulator [3]. The kinetics of the striatum and remaining brain were simulated based on the simplified reference tissue model [4] using the cerebellum as the reference tissue. A significant amount of PVE is present in microPET rat brain studies, with recovery of true VOI concentration being between 52% and 20%. In the simulated 11C-Raclopride study the uncorrected time activity curves showed up to 55% reduction in measured activity concentration and a bias in binding potential of up to 36%. Good activity recovery and improvement of binding potential estimation was achieved with PVC (0.26% to 4.36% bias). We conclude that PVE has a substantial influence on rat brain studies and PVC should be used to improve quantitative accuracy. PVC using the adapted GTM method shows promising results.",2008,0, 942,Generating resolution-enhanced images for correction of partial volume effects in emission tomography: a multiresolution approach,"Partial volume effects (PVE) are consequences of the limited spatial resolution in emission tomography. They lead to a loss of signal in tissues of size similar to the point spread function and induce activity spillover between regions. Although PVE can be corrected for by using algorithms that provide the correct radioactivity concentration in a series of regions of interest (ROI), so far little attention has been given to the possibility of creating improved images as a result of PVE correction. The objective of our study was therefore to develop a methodology for PVE correction to enable not only the accurate recuperation of activity concentrations, but also to generate PVE corrected images. In the multiresolution analysis that we define here, details of a high resolution image H (MRI or CT) are extracted, transformed and integrated in a low resolution image L (PET or SPECT).
A discrete wavelet transform of both H and L images is performed by using the ""a trous"" algorithm, which allows the spatial frequencies to be easily obtained at a level of resolution common to H and L. A model is then inferred to build the missing details of L from the high frequency details in H. The process was successfully tested on synthetic and simulated data, proving the ability to obtain accurately corrected images. Quantitative PVE correction was found to be comparable with a method considered as a reference but limited to ROI analyses. Improvement was also obtained in two examples of clinical images, the first using a combined PET/CT scanner with a lymphoma patient and the second using an FDG brain PET and corresponding T1-weighted MRI in an epileptic patient.",2005,0, 943,"Whirlwind: Overload protection, fault-tolerance and self-tuning in an Internet services platform","Performance and availability are of critical importance when Internet services are integrated into emergency response management. Poor performance or service failure can result in severe economic, social or environmental cost. This paper presents Whirlwind, a software architecture that includes primitives for overload management and fault tolerance. A Whirlwind service is composed of a collection of isolated, independent, sequential processes that communicate through asynchronous message passing. If a process fails, the fault is contained within the process and a message is propagated to monitoring processes that may attempt to recover from the error. Processes are grouped with other processes that share similar resource, computation and concurrency requirements. Each group contains a scheduler and a thread pool that drives execution of processes within the group. The group may also define a message predicate that determines if a message posted to a process in the group is accepted. A rejected message typically signals overload and allows the application the chance to perform load shedding and avoid overcommitment of resources. Principals are shared between processes in different groups, enabling consistent prioritization and admission control across groups. The resource management policies are typically driven by feedback loops that monitor resource availability and system performance, and adjust tuning parameters to meet performance goals. Whirlwind evolved over a period of five fire seasons as part of emergency response software in Victoria, Australia.",2009,0, 944,A dual-side comparison method of online fault monitoring for railway signal cable,"Railway signal cable plays an important role in ensuring trains operate safely. The failure of railway signal cable has a serious effect on the safety of train operation. Owing to the characteristics of the cable laying itself, traditional methods of fault detection for railway signal cable have great limits to their application. Therefore, fixed resistors can be connected to spare wire pairs in the cable junction boxes along the line, forming a closed-loop circuit network. Thus, by applying different voltages at both sides of the circuit to compare the loop resistance, the status of the cable can be reflected by the change of resistance in the loop, and the spare wire pairs can be used to monitor railway signal cable faults online instead of occupying working wire pairs.
After practical tests, online fault monitoring and positioning of railway signal cable breaks has been achieved effectively.",2010,0, 945,Fault Diagnosis System of Locomotive Axle's Track Based on Virtual Instrument,"Railway vehicles usually work in hazardous environments. The operational status of the axle, one of the railway vehicle's main running components, directly affects the safe operation of the railway vehicle. Over the years, the railway in our country has operated with a low equipment rate, a high usage rate and high-intensity transport. In addition, because of the needs of rapid economic development, our country has raised the speed of railway vehicles, which leads to serious wear and tear of the axles, shortened life and significantly increased accidents. So, it is necessary and urgent to develop a detection system, because the axle is related to the safety of the railway vehicle. Compared with traditional instruments, the virtual instrument has the characteristics of openness, ease of use and high cost performance. It has already been widely used in detection systems at present. Based on LabVIEW, a development platform from NI, the paper designs a fault diagnosis system for the locomotive axle's track.",2010,0, 946,On the error distribution for randomly-shifted lattice rules,"Randomized quasi-Monte Carlo (RQMC) methods estimate the expectation of a random variable by the average of n dependent realizations of it. In general, due to the strong dependence, the estimation error may not obey a central limit theorem. Analysis of RQMC methods has so far focused mostly on the convergence rates of asymptotic worst-case error bounds and variance bounds as n , but little is known about the limiting distribution of the error. Here we examine this limiting distribution for the special case of a randomly-shifted lattice rule, when the integrand is smooth. We start with simple one-dimensional functions, where we show that the limiting distribution is uniform over a bounded interval if the integrand is non-periodic, and has a square root form over a bounded interval if the integrand is periodic. In higher dimensions, for linear functions, the distribution function of the properly standardized error converges to a spline of degree equal to the dimension.",2009,0, 947,Adaptive Forward Error Correction (AFEC) based Streaming using RTSP and RTP,"The Real Time Streaming Protocol (RTSP) is the de facto standard for streaming, and almost all set-top boxes and media players support it. RTSP is a very simple text based protocol for streaming, and this simplicity is one of the main reasons for the popularity of the protocol. We propose the use of RTSP along with the Real Time Protocol (RTP) for Adaptive Forward Error Correction (AFEC) based streaming. The idea is to use the existing semantics of the protocol with minimal overloading to achieve the goal. The use of RTP is solely to provide a mechanism for the packet sequence number, which is the main requirement to ensure reliability. Since we will be using existing protocol semantics (overloading the existing semantics), we do not have the luxury of using Negative Acknowledgements (NACKs) or information-rich packet headers.
However, we can add intelligence to the client/server to interpret the overloaded data and achieve the same results.",2006,0, 948,Tolerance to multiple transient faults for aperiodic tasks in hard real-time systems,"Real-time systems are being increasingly used in several applications which are time-critical in nature. Fault tolerance is an essential requirement of such systems, due to the catastrophic consequences of not tolerating faults. In this paper, we study a scheme that guarantees the timely recovery from multiple faults within hard real-time constraints in uniprocessor systems. Assuming earliest-deadline-first scheduling (EDF) for aperiodic preemptive tasks, we develop a necessary and sufficient feasibility-check algorithm for fault-tolerant scheduling with complexity O(n^2 k), where n is the number of tasks to be scheduled and k is the maximum number of faults to be tolerated",2000,0, 949,Delay Constraint Error Control Protocol for Real-Time Video Communication,"Real-time video communication over wireless channels is subject to information loss since wireless links are error-prone and susceptible to noise. Popular wireless link-layer protocols, such as retransmission (ARQ) based 802.11 and hybrid ARQ methods, provide some level of reliability while largely ignoring the latency issue, which is critical for real-time applications. Therefore, they suffer from low throughput (under high error rates) and large waiting times, leading to serious degradation of video playback quality. In this paper, we develop an analytical framework for video communication which captures the behavior of real-time video traffic at the wireless link-layer while taking into consideration both reliability and latency conditions. Using this framework, we introduce a delay constraint packet embedded error control (DC-PEEC) protocol for the wireless link-layer. DC-PEEC ensures reliable and rapid delivery of video packets by employing various channel codes to minimize fluctuations in throughput and provide timely arrival of video. In addition to theoretically analyzing DC-PEEC, the performance of the proposed scheme is analyzed by simulating real-time video communication over ""real"" channel traces collected on 802.11b WLANs using the H.264/AVC JM14.0 video codec. The experimental results demonstrate performance gains of 5-10 dB for different real-time video scenarios.",2009,0, 950,Mining with Noise Knowledge: Error Aware Data Mining,"Real-world data are dirty, and therefore, noise handling is a defining characteristic for data mining research and applications. This talk will review existing research efforts on data cleansing and classifier ensembling in dealing with random noise, and then present our recent research on an error aware data mining design to process structured noise. This error aware data mining framework makes use of error information (such as noise level, noise distribution, and data corruption rules) to improve data mining results. Experimental comparisons on real-world datasets will demonstrate the effectiveness of this design.",2007,0, 951,Error distribution of range measurements in Wireless Sensor Networks (WSNs),"Previous work predominantly assumes that the pdf of range measurements exhibits Gaussian characteristics. The objective of this paper is to scrutinize this assumption for different environments. Experiments are performed in both outdoor and indoor environments with Line-of-Sight (LOS) and Non-Line-of-Sight (NLOS) conditions using IEEE 802.15.4 compliant devices.
These devices consist of a low-cost 2.45 GHz chipset and use the Time-of-Flight (TOF) measurement technique for range estimation. Our results and analysis are based on four different statistical Goodness-of-Fit (GOF) tests, i.e., a graphical technique, linear correlation coefficient, Anderson-Darling, and Chi-squared, for investigating the Gaussianity of range measurements.",2010,0, 952,A Rapid Fault Injection Approach for Measuring SEU Sensitivity in Complex Processors,"Processors are very common components in current digital systems, and assessing their reliability is an essential task during the design process. In this paper a new fault injection solution to measure SEU sensitivity in processors is presented. It consists of a hardware-implemented module that performs fault injection through the available JTAG-based On-Chip Debugger (OCD). It is widely applicable to different processors since JTAG is a widespread standard interface and OCDs are usually available in current processors. The hardware implementation avoids the communication between the target system and the software debugging tool. The method has been applied to a complex processor, the ARM7TDMI. Results illustrate that the approach is a fast, efficient and cost-effective solution.",2007,0, 953,Looking for Product Line Feature Models Defects: Towards a Systematic Classification of Verification Criteria,"Product line models (PLM) are important artifacts in product line engineering. Due to their size and complexity, it is difficult to detect defects in PLMs. The challenge is however important: any error in a PLM will inevitably impact configuration, generating issues such as incorrect product models, inconsistent architectures, poor reuse, difficulty in customizing products, etc. Surveys on feature-based PLM verification approaches show that there are many verification criteria, that these criteria are defined in different ways, and that different ways of working are proposed to look for defects. The goal of this paper is to systematize PLM verification. Based on our literature review, we propose a list of 23 verification criteria that we think cover those available in the literature.",2009,0, 954,Using redundancies to find errors,"Programmers generally attempt to perform useful work. If they performed an action, it was because they believed it served some purpose. Redundant operations violate this belief. However, in the past, redundant operations have typically been regarded as minor cosmetic problems rather than serious errors. This paper demonstrates that, in fact, many redundancies are as serious as traditional hard errors (such as race conditions or pointer dereferences). We experimentally test this idea by writing and applying five redundancy checkers to a number of large open source projects, finding many errors. We then show that, even when redundancies are harmless, they strongly correlate with the presence of traditional hard errors. Finally, we show how flagging redundant operations gives a way to detect mistakes and omissions in specifications. For example, a locking specification that binds shared variables to their protecting locks can use redundancies to detect missing bindings by flagging critical sections that include no shared state.",2003,0, 955,The Effect of the Number of Defects on Estimates Produced by Capture-Recapture Models,"Project managers use inspection data as input to capture-recapture (CR) models to estimate the total number of faults present in a software artifact.
The CR models use the number of faults found during an inspection and the overlap of faults among inspectors to calculate the estimate. A common belief is that CR models underestimate the number of faults but their performance can be improved with more input data. This paper investigates the minimum number of faults that has to be present in an artifact before the CR method can be used. The result shows that the minimum number of faults varies from ten to twenty-three faults for different CR estimators.",2008,0, 956,Fault diagnosis for transformer based on fuzzy entropy,"Power transformers are key equipment in the power system, so it is valuable to discover incipient faults in a timely and accurate manner. Code deficiencies exist in the gas ratio method of the IEC/IEEE standard for power transformer fault diagnosis. A model based on fuzzy entropy for power transformer fault diagnosis is put forward, which expands the coding bounds of the original IEC three-ratio method. At the same time, the method has some fault-tolerance ability to a certain degree. It also gives fault probabilities and handles lost or false power transformer fault symptoms. This shows the validity of the method for power transformer fault diagnosis by dissolved gas-in-oil analysis.",2007,0, 957,Joint evaluation of performance and robustness of a COTS DBMS through fault-injection,"Presents and discusses observed failure modes of a commercial off-the-shelf (COTS) database management system (DBMS) under the presence of transient operational faults induced by SWIFI (software-implemented fault injection). The Transaction Processing Performance Council (TPC) standard TPC-C benchmark and its associated environment is used, together with fault-injection technology, building a framework that discloses both dependability and performance figures. Over 1600 faults were injected in the database server of a client/server computing environment built on the Oracle 8.1.5 database engine and Windows NT running on COTS machines with Intel Pentium processors. A macroscopic view on the impact of faults revealed that: (1) a large majority of the faults caused no observable abnormal impact in the database server (in 96% of hardware faults and 80% of software faults, the database server behaved normally); (2) software faults are more prone to letting the database server hang or to causing abnormal terminations; (3) up to 51% of software faults led to observable failures in the client processes",2000,0, 958,Condition monitoring and fault diagnosis of electrical motors-a review,"Recently, research has picked up a fervent pace in the area of fault diagnosis of electrical machines. The manufacturers and users of these drives are now keen to include diagnostic features in the software to improve salability and reliability. Apart from locating specific harmonic components in the line current (popularly known as motor current signature analysis), other signals, such as speed, torque, noise, vibration, etc., are also explored for their frequency contents. Sometimes, altogether different techniques, such as thermal measurements, chemical analysis, etc., are also employed to find out the nature and the degree of the fault. In addition, human involvement in the actual fault detection decision making is slowly being replaced by automated tools, such as expert systems, neural networks and fuzzy-logic-based systems, to name a few. It is indeed evident that this area is vast in scope.
Hence, keeping in mind the need for future research, a review paper describing the different types of faults, the signatures they generate and their diagnostic schemes will not be entirely out of place. In particular, such a review helps to avoid repetition of past work and gives a bird's eye view to a new researcher in this area.",2005,0, 959,A fault-tolerant MPLS-based control and communication network for the Lucent LambdaRouter,"Recently, the importance of out-of-band signaling has been realized for next-generation intelligent optical transport networks (OTNs). This next-generation OTN will be capable of providing services like real-time point-and-click provisioning of optical channels, automatic protection and restoration, network topology auto-discovery, and bandwidth management. However, for successful deployment and usage of these services, reliability of the out-of-band signaling network, referred to as the control and communication network (CCN), is critical. Specifically, it is desired that the signaling network transparently recover from any single failure event and continue providing communication between OTN nodes. In this paper, we propose a fault-tolerant, scalable, and cost-effective architecture for the IP-based control plane of next-generation OTNs. The architecture is based on multiprotocol label switching (MPLS) protocols. The essence is to provide diverse paths in the network so that any failure in the OTN, as well as the control network, can be recovered transparently. These diverse paths employ a packet dual-feed and select mechanism for fast recovery from failures. The architecture requires diverse paths between neighboring OTN nodes, instead of all pairs of nodes, enabling the control plane to seamlessly scale with the size of the OTN network. Moreover, the architecture allows the control plane topology to be independent of the OTN topology. This gives operators considerable flexibility in the design of the CCN in a cost-effective manner by using virtual private networks (VPNs) through public data networks, or by avoiding unnecessary OTN links. We also present an implementation of the proposed architecture that will be used as the highly reliable IP-based CCN in OTNs consisting of the Lucent LambdaRouter, an optical cross-connect product. Although in this paper the architecture is discussed in the context of providing a highly reliable IP-centric CCN for optical cross-connect-based (OXC-based) OTNs, it can be extended to a general purpose fault-tolerant CCN for any next-generation OTN.",2002,0, 960,Detecting defects on planar circuits by using non-contacting magnetic probe,"Recently, research on non-contacting measurement based on the magnetic coupling theorem has mostly chosen CPW (coplanar waveguide) loop-type circuits as probes. This approach has the advantages of low cost, easy fabrication and simple design. While moving the probe, different relative positions between the planar circuit and the magnetic probe cause different coupling strengths. Variation of the resonance frequency due to changing magnetic coupling and electric coupling from the metal strip outline can be observed. The relation between the planar circuit and the magnetic probe is analyzed by full-wave EM simulation and some simple measurements. Furthermore, an LC equivalent circuit has also been built for analysis.
Finally, the possibility of quick defect detection by sweeping the circuits at a specific frequency is discussed.",2010,0, 961,A control-theoretic energy management for fault-tolerant hard real-time systems,"Recently, the tradeoff between low energy consumption and high fault-tolerance has attracted a lot of attention as a key issue in the design of real-time embedded systems. Dynamic Voltage Scaling (DVS) is known as one of the most effective low energy techniques for real-time systems. It has been observed that the use of control-theoretic methods can improve the effectiveness of DVS-enabled systems. In this paper, we have investigated reducing the energy consumption of fault-tolerant hard real-time systems using feedback control theory. Our proposed feedback-based DVS method makes the system capable of selecting the proper frequency and voltage settings in order to reduce the energy consumption while guaranteeing hard real-time requirements in the presence of unpredictable workload fluctuations and faults. In the proposed method, the available slack-time is exploited by a feedback-based DVS at runtime to reduce the energy consumption. Furthermore, some slack-time is reserved for re-execution in case of faults. Simulation results show that, as compared with traditional DVS methods without fault-tolerance, our proposed approach not only significantly reduces energy consumption, but also satisfies hard real-time constraints in the presence of faults. The transition overhead (both time and energy), caused by changing the system supply voltage, is also taken into account in our simulation experiments.",2010,0, 962,Developing auto recipe management system for LCD panel auto defect detecting inspection machine,"Recently, there has been intensive price competition among mass LCD panel makers. To decrease labor costs, many manufacturing processes such as auto defect detection are being automated. However, to maintain optimal performance, users must pay attention to hardware settings and many recipe parameters. These efforts must be managed constantly as new models are developed. In this paper, we introduce an automated defect finding algorithm generally used on periodic pattern images, and suggest auto recipe management algorithms and a system that help users devote less effort to maintenance. Three algorithms are developed - auto pitch calculation, auto threshold control, and auto light source intensity control. To verify the performance of the auto recipe management algorithms, simulation and machine tests are executed with several color filter and TFT pattern images. Through the tests, we verified the performance of the developed algorithms and obtained successful results.",2008,0, 963,MTF measurement and a phantom study for scatter correction in CBCT using primary modulation,"Recently, we proposed a scatter correction method for X-ray imaging using primary modulation. A primary modulator with spatially variant attenuating materials is inserted between the X-ray source and the object to make the scatter and part of the primary distributions strongly separated in the Fourier domain. Linear filtering and demodulation techniques suffice to extract and correct the scatter for this modified system. The method has been verified by computer simulations and preliminary experimental results. In this work, we look into a hybrid method using both primary modulation and an anti-scatter grid.
The reconstructed image resolution using the proposed approach is evaluated by MTF measurements, and the scatter correction performance is also investigated by experiments on a human chest phantom. The results using the proposed hybrid method are compared with those using an anti-scatter grid only. Experiments with scatter inherently suppressed using a narrowly opened collimator and an anti-scatter grid (a slot-scan geometry) were also carried out. The comparison shows that the filtering in the proposed algorithm does not impair the image resolution, and the primary modulation method can effectively suppress the scatter artifacts. In the central region of interest, the reconstruction error relative to the image obtained using a slot-scan geometry is reduced from 18.10% to 0.48% for the human chest phantom, if the primary modulation method is used.",2006,0, 964,Design and error analysis of reconfigurable turn-milling machine tool,"The reconfigurable turn-milling machine tool is the cooperative product of reconfigurable machine tool technology and the turn-milling process. With the quick reconfiguration of different modules, the machine tool can adjust to produce a part family at a lower cost and with higher efficiency. The method and process of designing a turn-milling RMT is discussed in this paper. Using the commercial off-the-shelf (COTS) method, some possible configurations of the machine tool were developed. Some definitions of the RMT workspace were given to guide the reconfiguration of the machine tools. Also, this research has applied screw theory to perform error analysis and error compensation of turn-milling reconfigurable machine tools.",2009,0, 965,Towards the fault tolerant software: fuzzy extension of crisp equivalence voters,"Redundancy, on which fault tolerance is based, can be achieved through hardware, software, information and time. With respect to different version outputs from redundant software versions, voting strategies are separated into two classes. Voting strategies are either based on output classification, partitioning of the outputs, or on convergence functions. The traditional equivalence relation does not enable gradual comparisons below the fixed threshold. The fuzzy extension of the classical numerical equivalence relation, proposed in the paper, overcomes those potential problems. Test examples are graphically illustrated",2001,0, 966,Satellite recovery control strategy based on attitude error information fusion,"The reentry braking maneuver is the most important TT&C operation of recovery satellites. After finishing its appointed missions, a recovery satellite needs to reenter the atmosphere, land on the ground, and be recovered. During the recovery process, it is necessary for the satellite to perform a series of maneuvers or control operations, such as adjusting body attitude, separating the recovery capsule from the orbit capsule, igniting the braking rocket, etc. The ignition attitude information of the braking rocket has a close relationship with the length of the reentry trajectory and the size of the landing area. So, attitude error information is the key factor in determining the recovery control strategy of the satellite.
In order to shorten the reentry trajectory length, increase TT&C coverage, increase recovery control accuracy and narrow the landing area size, a new method for making recovery control strategies for such satellites is proposed based on attitude error information fusion.",2007,0, 967,Adaptive fault tolerant systems: reflective design and validation,"Reflection has been used with some success, for quite a few years now, for dealing with separation of concerns and transparency of fault-tolerance mechanisms for the application. Nevertheless, it has also raised some concerns regarding the control of fine-grain information such as thread control or other deep aspects of the platform. We propose here the use of a new concept, called multi-level reflection, first for solving these issues, but also for introducing more adaptation into fault-tolerant reflective architectures. We also discuss some essential validation issues of reflective systems, which are still a challenge for future research.",2003,0, 968,Sub-picture: ROI coding and unequal error protection,"Region-of-interest coding and unequal error protection are two important tools in video communication systems to improve the received visual quality. One common property of the two techniques is that unequal coding or transmission is applied to improve the quality of the most important parts of images. The proposed sub-picture coding technique facilitates both region-of-interest coding and unequal error protection by partitioning images into regions of interest and separating the corresponding coded data units from each other. Simulation results show that the overall subjective quality is considerably improved compared to the conventional coding schemes.",2002,0, 969,A Fault Diagnosis Method in Satellite Networks,"A satellite network is a time-varying network whose topology changes periodically. Common network fault diagnosis models and methods are not suitable for it. This paper is based on a three-level management architecture: the central management station manages the sub-stations and satellites, and the sub-stations manage the satellite agents. When the satellite network is running, if a satellite cannot respond to network management demands, intra-domain or inter-domain cooperation is activated. The suspected faulty satellite can be diagnosed through cooperation among the fault diagnosis agents in the satellites, and the diagnosis task is decomposed. The simulation results show that, under low fault frequency, the new method put forward in this paper can be effectively used in satellite networks with short cooperation time and low throughput.",2010,0, 970,Rescue: a microarchitecture for testability and defect tolerance,"Scaling feature size improves processor performance but increases each device's susceptibility to defects (i.e., hard errors). As a result, fabrication technology must improve significantly to maintain yields. Redundancy techniques in memory have been successful at improving yield in the presence of defects. Apart from core sparing, which disables faulty cores in a chip multiprocessor, little has been done to target the core logic. While previous work has proposed that either inherent or added redundancy in the core logic can be used to tolerate defects, the key issues of realistic testing and fault isolation have been ignored. This paper is the first to consider testability and fault isolation in designing modern high-performance, defect-tolerant microarchitectures.
We define intra-cycle logic independence (ICI) as the condition needed for conventional scan test to isolate faults quickly to the microarchitectural-block granularity. We propose logic transformations to redesign conventional superscalar microarchitecture to comply with ICI. We call our novel, testable, and defect-tolerant microarchitecture Rescue.",2005,0, 971,Responsive Fault-Tolerant Computing in the Era of Terascale Integration State of Art Report,"Scaling in the hardware integration process results in IC-process geometry reductions, lower operating voltages and increased clock speeds. This paper first surveys the reliability obstacles these developments give rise to and then points out that computing systems can no longer be safely assumed to fail only by crashing. Yet this assumption is at the core of primary-backup replication, which the literature presents as the appropriate, and hence the most widely used, strategy for time-critical fault-tolerant applications. The paper then observes that building computing nodes with an announced crash failure mode is a promising way forward to deal with the emerging reliability challenges. Work carried out to assure such a failure mode has also been briefly surveyed.",2008,0, 972,Diagnose compound scan chain and system logic defects,"Scan based diagnosis can be of great help to guide physical failure analysis, which is critical for the success of silicon debug and yield ramp up. In practice, diagnosis becomes more difficult if scan chain defects and system logic defects co-exist on one die; these are called compound defects in this paper. We first describe the challenges in diagnosing this type of compound defect. A novel diagnosis flow is proposed to diagnose compound defects in scan chains and system logic. The diagnosis methodology was successfully applied in industrial designs.",2007,0, 973,Emission-based scatter correction in SPECT imaging,"Scatter correction in single photon emission computed tomography (SPECT) has focused on either the multiple-window acquisition technique or the scatter modeling technique in iterative image reconstruction. We propose a technique that uses only the emission data for scatter correction in SPECT. We assume that the scatter data can be approximated by convolving the primary data with a scatter kernel followed by normalization using the scatter-to-primary ratio (SPR). Since the emission data is the superposition of the primary data and the scatter data, the convolution normalization process approximately results in the sum of the scatter data and a convolved version of the scatter data with the kernel. By applying a proper scaling factor, we can make the estimation approximately equal to or less than the scatter data anywhere in the projection domain. Phantom and patient cardiac SPECT studies show that using the proposed emission-based scatter estimation can effectively reduce the scatter-introduced background in the reconstructed images. Additionally, the computational time for scatter correction is negligible as compared to no scatter correction in iterative image reconstruction.",2010,0, 974,Assessing the impact of active guidance for defect detection: a replicated experiment,"Scenario-based reading (SBR) techniques have been proposed as an alternative to checklists to support inspectors throughout the reading process in the form of operational scenarios. Many studies have been performed to compare these techniques regarding their impact on inspector performance.
However, most of the existing studies have compared generic checklists to a set of specific reading scenarios, thus confounding the effects of two SBR key factors: separation of concerns and active guidance. In previous work we preliminarily conducted a repeated case study at the University of Kaiserslautern to evaluate the impact of active guidance on inspection performance. Specifically, we compared reading scenarios and focused checklists, which were both characterized as being perspective-based. The only difference between the reading techniques was the active guidance provided by the reading scenarios. We have now replicated the initial study with a controlled experiment using as subjects 43 graduate students in computer science at the University of Bari. We did not find evidence that active guidance in reading techniques affects the effectiveness or the efficiency of defect detection. However, inspectors showed a better acceptance of focused checklists than reading scenarios.",2004,0, 975,A New Wafer Level Latent Defect Screening Methodology for Highly Reliable DRAM Using a Response Surface Method,"Screening latent defects in a wafer test process is a very important task in both reducing memory manufacturing cost and enhancing the reliability of emerging package products such as SIP, MCP, and WSP. In terms of the package assembly cost, these package products are required to adopt the KGD (known good die) quality level. However, KGD requires a long burn-in time, added testing time, and high-cost equipment. To alleviate these problems, this paper presents a statistical wafer burn-in methodology for latent defect screening in the wafer test process. The newly proposed methodology consists of a defect-based wafer burn-in (DB-WBI) stress method based on DRAM operation characteristics and a statistical stress optimization method using RSM (response surface method) in the DRAM manufacturing test process. Experimental data shows that package test yields in the immature fabrication process improved by up to 6%. In addition, experimental results show that the proposed methodology can guarantee reliability requirements with a shortened package burn-in time. In conclusion, this methodology realizes a simplified manufacturing test process supporting time to market with high reliability.",2008,0, 976,A system architecture assisting user trial-and-error process in in-silico drug design,"Screening on computers, or in-silico screening, has the potential to dramatically reduce the total cost and time required for the whole drug design process. Recently, a variety of in-silico screening systems has been reported in the literature. Nonetheless, scientists still have much difficulty in benefiting from such systems. The reason is that scientists' trial-and-error consideration, whose purpose is to precisely describe molecules on a computer, prior to the in-silico screening stage takes a long time. We present a flexible user-support environment that assists in scientists' trial-and-error consideration with a ""trial set"" concept. In this environment, scientists utilize the trial set to easily complete a sequence of tasks accompanied by parameter changes. The experiment in this paper shows that the first prototype system featuring this trial set works effectively. Furthermore, we describe the future plan of the system.",2004,0, 977,An Analysis of Microarchitecture Vulnerability to Soft Errors on Simultaneous Multithreaded Architectures,"Semiconductor transient faults (i.e.
soft errors) have become an increasingly important threat to microprocessor reliability. Simultaneous multithreaded (SMT) architectures exploit thread-level parallelism to improve overall processor throughput. A great amount of research has been conducted in the past to investigate performance and power issues of SMT architectures. Nevertheless, the effect of multithreaded execution on a microarchitecture's vulnerability to soft errors remains largely unexplored. To address this issue, we have developed a microarchitecture level soft error vulnerability analysis framework for SMT architectures. Using a mixed set of SPEC CPU 2000 benchmarks, we quantify the impact of multithreading on a wide range of microarchitecture structures. We examine how the baseline SMT microarchitecture reliability profile varies with workload behavior, the number of threads and fetch policies. Our experimental results show that the overall vulnerability rises in multithreading architectures, while each individual thread shows less vulnerability. When considering both performance and reliability, SMT outperforms superscalar architectures. The SMT reliability and its tradeoff with performance vary across different fetch policies. With a detailed analysis of the experimental results, we point out a set of potential opportunities to reduce SMT microarchitecture vulnerability, which can serve as guidance for exploiting thread-aware reliability optimization techniques in the near future. To our knowledge, this paper presents the first effort to characterize microarchitecture vulnerability to soft errors on SMT processors",2007,0, 978,A Unified Environment for Fault Injection at Any Design Level Based on Emulation,"Sensitivity of electronic circuits to radiation effects is an increasing concern in modern designs. As technology scales down, Single Event Upsets (SEUs) become more frequent and probable, affecting not only space applications, but also applications at the earth's surface, like automotive applications. Fault injection is a method widely used to evaluate the SEU sensitivity of digital circuits. Among the existing fault injection techniques, those based on FPGA emulation have proven to be the fastest ones. In this paper a unified emulation environment which combines two fault injection techniques based on FPGA emulation is proposed. The new emulation environment provides both a high-speed tool for quick fault detection and a medium-speed tool for in-depth analysis of SEU propagation. The experiments presented here show that the two techniques can be successfully applied in a complementary manner.",2007,0, 979,Optimal Cluster-Cluster Design for Sensor Network with Guaranteed Capacity and Fault Tolerance,"Sensor networks have recently gained a lot of attention from the research community. To ensure scalability, sensor networks are often partitioned into clusters, each managed by a cluster head. Since sensors self-organize in the form of clusters within a hierarchical wireless sensor network, it is necessary for a sensor node to perform target tracking in cooperation with a set of sensors that belong to another cluster. The increased flexibility allows for efficient and optimized use of sensor nodes. While most of the previous research focused on the optimal communication of sensors in one cluster, very little attention has been paid to the efficiency of cooperation among the clusters.
This paper proposes a heuristic algorithm for designing an optimal structure across clusters to allow inter-cluster communication flow and resource sharing under reliability constraints. Such a guarantee simultaneously provides fault tolerance against node failures and high capacity through multi-path routing.",2007,0, 980,A Fault-Tolerant Strategy for Improving the Reliability of Service Composition,"Service composition is an important means of integrating individual Web services to create new value-added systems that satisfy complex demands. Since Web services exist in heterogeneous environments on the Internet, the study of how to guarantee the reliability of service composition in a distributed, dynamic and complex environment becomes more and more important. This paper proposes a service composition net (SCN) and a fault-tolerant strategy to improve the reliability of service composition. The strategy consists of a static strategy, a dynamic strategy and an exception handling mechanism, which can be used to dynamically adjust component services to achieve good reliability as well as good overall performance. SCN is adopted to model different components of service composition. The fault detection and fault recovery mechanisms are also considered. Based on the constructed model, theories of Petri nets help prove the consistency of processing states and the effectiveness of the strategy. A case study of an Export Service illustrates the feasibility of the proposed method.",2010,0, 981,Recursive Evaluation of Fault Tolerance Mechanisms for SLA Management,"Service level agreements (SLAs) have been introduced into the grid in order to build a basis for its commercial uptake. The challenge for Grid providers in agreeing and operating SLA-bound jobs is to ensure their fulfillment even in the case of failures. Hence, fault-tolerance mechanisms are an essential means of the provider's SLA management. The high utilization of commercially operated clusters leads to scenarios in which a job migration typically affects other scheduled jobs. The effects result from the unavailability of enough free resources which would be needed to catch all resource outages. Consequently, before initiating a migration, its effects on other jobs have to be compared and the initiation of fault-tolerance (FT) mechanisms has to be evaluated recursively. This paper presents a measurement of the benefit of initiating an FT mechanism, the recursive evaluation, and a termination condition. Performing such an impact evaluation of an initiated chain of FT mechanisms is often more profitable than performing a single FT mechanism, and accordingly this is important for Grid commercialization.",2008,0, 982,Error monitoring for optical metropolitan network services,"Service providers rely on performance monitoring capabilities not only to ensure integrity of their network but also to support service-level agreements with their customers. The depth of monitoring is directly tied to the technology and protocol used in the transport layer of the network. Next-generation services based on enterprise-centric, non-SONET/SDH protocols, such as Gigabit Ethernet and Fibre Channel, as well as managed protocol-independent wavelength transport, have created a number of challenges for service providers because of the differences in how error monitoring is performed.
In this article we describe and compare protocol-dependent and protocol-independent error monitoring techniques that apply to these service offerings",2002,0, 983,A Fault Taxonomy for Service-Oriented Architecture,"Service-oriented architecture (SOA) is a popular design paradigm for distributed systems today, but the high adaptivity and complexity of SOA implementations may also introduce additional sources of faults. We first describe typical steps in SOA to understand possible faults. Then, we provide a corresponding fault taxonomy according to the process of service invocation. Finally, we present possible benefits of our taxonomy for dependability enhancement in SOA-based systems.",2007,0, 984,An Engineering Process for Autonomous Fault Management in Service-Oriented Systems,"Service-Oriented Computing reveals features which are not commonly found in conventional computing paradigms: loose coupling, dynamism, black-box nature, evolvability, and heterogeneity. These features make diagnosing and healing faults found in deployed services and service-related elements more challenging than managing conventional systems. Hence, service-oriented systems management often results in problems of increased cost/effort, decreased effectiveness, and irresolvable service faults. Applying key disciplines of autonomic computing to services management would effectively resolve these problems and automate the task. This paper presents a process for managing faults in an autonomous manner. We define a whole life-cycle process for managing faults in SOA and the essential artifacts of each phase. Then, we present instructions for carrying out each phase. In particular, we propose detailed methods for fault diagnosis and for updating the reasoning knowledgebase via a learning mechanism. The process is not limited to providing a theoretical basis for service management; it can be practically applied with current service-oriented architecture standards.",2010,0, 985,Classification-Error Cost Minimization Strategy: DCMS,"Several classification applications such as intrusion detection, biometric recognition, etc. have different costs associated with different classification errors. In such scenarios, the goal is to minimize the cost incurred, and not the classification error rate itself. This paper proposes a Cost Minimization Strategy, dCMS, which, when applied to classifiers, provides a boost in performance by reducing the cost incurred due to classification errors. dCMS is classifier-type independent; however, it exploits the statistical properties of the trained classifier. It does not require classifiers to be retrained, which is particularly advantageous in scenarios where the costs vary dynamically. Convincing results are provided which indicate the statistically significant reduction in cost incurred by applying dCMS, in a diverse set of classification scenarios with datasets and classifiers of varying complexities.",2007,0, 986,Delay defect characteristics and testing strategies,"Several factors influence production delay testing and corresponding DFT techniques: defect sources, design styles, ability to monitor process characteristics, test generation time, available test time, and tester memory. We present an overview of delay defect characteristics and the impact of delay defects on IC quality.
We also discuss a practical delay-testing strategy in terms of test pattern generation, test application speed, DFT, and test cost.",2003,0, 987,A novel error concealment scheme for intra frames of H.264 video,"Several methods have been proposed for error concealment of H.264 video, either in the spatial domain or the temporal domain. For H.264 video the macroblock is the basic block unit and it tends to be lost as a whole in an error-prone channel. A single concealment approach cannot achieve acceptable performance for a block of size 16×16. The scheme proposed here is a combination of spatial concealment and temporal concealment, and it has been demonstrated that it can significantly improve the quality of the video. A fast-DCT based spatial domain interpolation approach [Z. Alkachouh and M.G. Bellanger, 2000] and a mean interpolation method are employed here for spatial domain concealment. For the temporal domain, direct copy is used after a determination operation.",2005,0, 988,A Research of MPLS-Based Network Fault Recovery,"Research on the use of new real-time connection-oriented services, like streaming technologies and mission-critical transaction-oriented services, and on more reliable networks has become an inevitable trend. MPLS is a next generation backbone architecture; it can speed up packet forwarding to the destination by label switching. However, if there exists no backup LSP when the primary LSP fails, MPLS frames cannot be forwarded to the destination. Therefore, fault recovery has become an important research area in MPLS Traffic Engineering. At present, two famous methods, Makam and Haskin, belong to Protection Switching, and other methods basically come into being on the basis of them. But these two famous methods both have their disadvantages. In order to solve their drawbacks, this thesis tries to do some exploration on the MPLS-based recovery model. The new model in this thesis uses the Reverse Backup Path to solve the problem that the path for wrapping data back is too long, and uses the simulation tool NS-2 to do some experiments. Finally, the simulation experiments show that the new method of MPLS-based recovery has less packet disorder and much lower delay and packet losses than the two famous methods.",2010,0, 989,Comparative investigation of diagnostic media for induction motors: a case of rotor cage faults,"Results of a comparative experimental investigation of various media for noninvasive diagnosis of rotor faults in induction motors are presented. Stator voltages and currents in an induction motor were measured, recorded, and employed for computation of the partial and total input powers and of the estimated torque. Waveforms of the current, partial powers pAB and pCB, total power, and estimated torque were subsequently analyzed using the fast Fourier transform. Several rotor cage faults of increasing severity were studied with various load levels. The partial input power pCB was observed to exhibit the highest sensitivity to rotor faults. This medium is also the most reliable, as it includes a multiplicity of fault-induced spectral components",2000,0, 990,Prediction-error based reversible watermarking,"Reversible watermarking has become a highly desirable subset of fragile watermarking for sensitive digital imagery in application domains such as military and medical because of the ability to embed data with zero loss of host information. This reversibility enables the recovery of the original host content upon verification of the authenticity of the received content.
We propose a new reversible watermarking algorithm. The algorithm exploits the correlation inherent among the neighboring pixels in an image region using a predictor. The prediction-error at each location is calculated and, depending on the amount of information to be embedded, locations are selected for embedding. Data embedding is done by expanding the prediction-error values. A compressed location map of the embedded locations is also embedded along with the information bits. Our algorithm exploits the redundancy in the image to achieve very high data embedding rates while keeping the resulting distortion low.",2004,0, 991,Partial discharge fault location and diagnosis on HV power cables,"Risk assessment with a large population of service-aged distribution class cables demands field-applicable partial discharge measurement techniques. Recent experience with lightweight portable instruments applied with resonant test sets as well as with 0.1 Hz high voltage sources is presented. Application of correlation functions to compensate for the influence of high frequency properties of power cables, such as attenuation or dispersion, is used to improve location accuracy. Additionally, characteristics of phase-resolved partial discharge patterns to assess the cause and potential risk of the discharge activity are discussed. Furthermore, methods and applications of hand-held instruments to fine-locate the site of discharge activity are presented",2000,0, 992,Use of invariant properties to evaluate the results of fault-injection-based robustness testing of protocol implementations,"Robustness testing has as its main objective to determine how a system behaves in the presence of unexpected inputs or stressful environmental conditions. An approach commonly used for that purpose is fault injection, in which faults are deliberately injected into a system to observe its behavior. One main limitation of this approach is results evaluation: a system is considered robust if it does not crash or hang during testing. This is not enough, because a system can still continue to execute but exhibit wrong behavior. To overcome this limitation, we propose a passive approach for robustness testing, in which the system under test is instrumented for fault injection during runtime, as well as for monitoring its behavior. At the end, the readouts collected are analyzed to determine whether the observed behavior under faults is consistent with properties based on a finite state model of the system. We illustrate the approach using an implementation of the wireless application protocol (WAP). The approach was implemented using off-the-shelf tools; results obtained thus far are presented.",2008,0, 993,FPGA On-Board Computer design based on hierarchical fault tolerance,"Safety is a crucial requirement in the On-Board Computer (OBC) design of a satellite, especially for the new type of OBC that takes an FPGA as its central processor. Accordingly, this paper proposes an FPGA OBC design and adds a hierarchical fault-tolerance concept to enhance the reliability of the OBC system. The fault-tolerant architecture can be divided into three hierarchical ranks, comprising single-CPU reconfiguration, component unit transfer and dual-CPU subrogation. One of the above fault management modes will be chosen to deal with problems according to the in-orbit error situation. In the worst cases, all three modes may be used.
The last part of the paper gives the functional verification approach under development for the hierarchical fault tolerant OBC design.",2008,0, 994,Taming coincidental correctness: Coverage refinement with context patterns to improve fault localization,"Recent techniques for fault localization leverage code coverage to address the high cost problem of debugging. These techniques exploit the correlations between program failures and the coverage of program entities as the clue in locating faults. Experimental evidence shows that the effectiveness of these techniques can be affected adversely by coincidental correctness, which occurs when a fault is executed but no failure is detected. In this paper, we propose an approach to address this problem. We refine code coverage of test runs using control- and data-flow patterns prescribed by different fault types. We conjecture that this extra information, which we call context patterns, can strengthen the correlations between program failures and the coverage of faulty program entities, making it easier for fault localization techniques to locate the faults. To evaluate the proposed approach, we have conducted a mutation analysis on three real world programs and cross-validated the results with real faults. The experimental results consistently show that coverage refinement is effective in easing the coincidental correctness problem in fault localization techniques.",2009,0, 995,Fault Tolerance by Quartile Method in Wireless Sensor and Actor Networks,"Recent technological advances have led to the emergence of wireless sensor and actor networks (WSAN), in which sensors gather the information for an event and actors perform the appropriate actions. Since sensors are prone to failure due to energy depletion, hardware failure, and communication link errors, designing an efficient fault tolerance mechanism becomes an important issue in WSAN. However, according to our survey of the literature, most research focuses on communication link fault tolerance without considering sensing fault tolerance. In this situation, an actor may perform an incorrect action upon receiving erroneous sensing data. To solve this issue, fault tolerance by quartile method (FTQM) is proposed in this paper. FTQM not only determines the correct data range but also sifts out the correct sensors by data discreteness. Therefore, actors can perform the appropriate actions in FTQM. Moreover, FTQM can also be integrated with a communication link fault tolerance mechanism. The simulation results demonstrate that FTQM has a better predicted rate of correct data, detected tolerance rate of temperature, and detected temperature compared with the traditional sensing fault tolerance mechanism. Moreover, FTQM has better performance when the real correct data rate and the threshold value of failure are varied.",2010,0, 996,Data replication strategies for fault tolerance and availability on commodity clusters,"Recent work has shown the advantages of using persistent memory for transaction processing. In particular, the Vista transaction system uses recoverable memory to avoid disk I/O, thus improving performance by several orders of magnitude. In such a system, however, the data is safe when a node fails, but unavailable until the node recovers, because the data is kept in only one memory. In contrast, our work uses data replication to provide both reliability and data availability while still maintaining very high transaction throughput.
We investigate four possible designs for a primary-backup system, using a cluster of commodity servers connected by a write-through capable system area network (SAN). We show that logging approaches outperform mirroring approaches, even when communicating more data, because of their better locality. Finally, we show that the best logging approach also scales well to small shared-memory multiprocessors",2000,0, 997,Throughput and Delay Analysis of IEEE 802.11e Block ACK with Channel Errors,"Recently, along with many emerging applications and services over Wireless LANs (WLANs), the demands for higher-speed WLANs have been growing drastically. However, it is well known that IEEE 802.11 Medium Access Control (MAC) has a high overhead. As a solution to improve the system efficiency, the new IEEE 802.11e MAC introduces the Block ACK scheme. In this paper, we mathematically analyze both the throughput and delay performance of the 802.11e Block ACK scheme over a noisy channel, considering the Block ACK protection scheme. Then, the numerical results are verified with ns-2 simulations.",2007,0, 998,Fault-tolerance of functional programs based on the parallel graph reduction,"Recently, parallel computing has been applied to many systems. Functional programming is suitable for parallel programming because of referential transparency and is applied to symbol processing systems and parallel database systems. Programs written in a functional language can be regarded as graphs and are processed in terms of reduction of the corresponding graph. The paper proposes fault tolerance of functional programming based on graph reduction. The proposed method stores the received graph as a message log, and an erroneous task is recovered by using the checkpoint and the stored graph. Computer simulations reveal that the time overhead of the proposed method is small. If the checkpoint interval is 30 seconds and the number of tasks is 3, for example, the time overhead is less than 10%",2001,0, 999,Software-based transparent and comprehensive control-flow error detection,"Shrinking microprocessor feature size and growing transistor density may increase the soft-error rates to unacceptable levels in the near future. While reliable systems typically employ hardware techniques to address soft-errors, software-based techniques can provide a less expensive and more flexible alternative. This paper presents a control-flow error classification and proposes two new software-based comprehensive control-flow error detection techniques. The new techniques are better than the previous ones in the sense that they detect errors in all the branch-error categories. We implemented the techniques in our dynamic binary translator so that the techniques can be applied to existing x86 binaries transparently. We compared our new techniques with the previous ones and we show that our methods cover more errors while having similar performance overhead.",2006,0, 1000,A March-CL test for interconnection faults of SOC,"Shrinking feature sizes, high working frequencies, and the rising number of IP cores integrated in an SOC make the interconnection test problem critical. A March-CL test for interconnection faults of SOC is proposed in this article. According to the method, eight test patterns are used to detect stuck and delay faults of interconnection between IP cores. The IP connected by the interconnection under test (IUT) is wrapped and compliant with IEEE 1500. Short test time and low area overhead are achieved with the method.
Moreover, a modified wrapper cell structure with simple control logic is adopted for detecting delay faults in the March-CL test. Finally, the March-CL test is applied to the ITC'02 benchmarks, and the result proves that the method covers 100% of stuck, bridge, and delay faults in synchronous interconnection test.",2008,0, 1001,Passive and Active Combined Attacks: Combining Fault Attacks and Side Channel Analysis,"Side-channel attacks have been deeply studied for years to ensure the tamper resistance of embedded implementations. Analyses are most of the time focused either on passive attacks (side-channel attacks) or on active attacks (fault attacks). In this article, a combination of both attacks is presented. It is named PACA for Passive and Active Combined Attacks. This new class of attacks allows us to recover the secret key with only one curve of leakages. Practical results on a secure implementation of RSA have been obtained and are presented here. Finally, a new kind of infective methodology is defined and countermeasures to counteract this type of analysis are introduced.",2007,0, 1002,SDG Multiple Fault Diagnosis by Fuzzy Logic and Real-Time Bidirectional Inference,"Significant research has been done in the past 30 years to use signed directed graphs (SDG) for process fault diagnosis. However, multiple fault diagnosis is still a difficult problem because the number of combinations grows exponentially with the number of faults. The method of real-time inverse inference is suitable for multiple fault diagnosis. However, the choice of thresholds is made based on experience, so improper thresholds may lead to missed or wrong diagnoses. In addition, the compensatory response and inverse response in the SDG model usually hamper inverse inference, and this also leads to missed or wrong diagnoses. In this work, SDG multiple fault diagnosis by fuzzy logic and real-time bidirectional inference is proposed. Fuzzy logic is used to determine the states of nodes in the SDG model, and a bidirectional inference strategy based on assumption and verification is used to overcome the influence of compensatory response and inverse response. The poor resolution of SDG-based fault diagnosis is overcome by arranging the causes in decreasing order according to the indexes calculated by fuzzy logic. The implementation of the multiple fault diagnosis method is done using the integrated SDG modeling, inference and post-processing software platform. Its application is illustrated on an atmospheric distillation tower. The result shows this method provides fast, reliable and accurate multiple fault diagnosis.",2009,0, 1003,Electrical Test Structures for the Characterisation of Optical Proximity Correction,"Simple electrical test structures have been designed that will allow the characterisation of corner serif forms of optical proximity correction. The structures measure the resistance of a short length of conducting track with a right angled corner. Varying amounts of OPC can be applied to the outer and inner corners of the feature and the effect on the resistance of the track measured. These structures have been simulated and the results are presented in this paper. In addition a preliminary test mask has been fabricated which has test structures suitable for on-mask electrical measurement. Measurement results from these structures are also presented. Furthermore structures have been characterised using an optical microscope, a dedicated optical mask metrology system, an AFM scanner and finally a FIB system.
In the future the test mask will be used to print the structures using a step and scan lithography tool so that they can be measured on-wafer. Correlation of the mask and wafer results will provide a great deal of information about the effects of OPC at the CAD level and the impact on the final printed features.",2007,0, 1004,On the use of RTP for monitoring and fault isolation in IPTV,"Since the first operational IPTV networks have been deployed, service providers and operators have struggled to make their subscribers happy and satisfied with their services. To keep them as customers in the long term, they have been looking for ways to identify impairments to the perceived quality of experience. It is now well understood that this can only be achieved if the service providers have virtual eyes throughout their networks. In this article we provide an overview of the Real-Time Transport Protocol and its application to IPTV. We describe the monitoring and reporting features offered by RTP, and emphasize how they can be used to enhance subscriber QoE.",2010,0, 1005,Error Resilient Video Coding Using B Pictures in H.264,"Since the quality of compressed video is vulnerable to errors, video transmission over the unreliable Internet is very challenging today. Multi-hypothesis motion-compensated prediction (MHMCP) has been shown to have error resilience capability for video transmission, where each macroblock is predicted by a linear combination of multiple signals (hypotheses). B picture prediction is a special case of MHMCP. In H.264/AVC, the prediction of B pictures is generalized such that both of the two predictions can be selected from the past pictures or from the subsequent pictures. The multiple reference picture framework in H.264/AVC also allows previously decoded B pictures to be used as references for B picture coding. In this paper, we will discuss the error resilience characteristics of the generalized B pictures in H.264/AVC. Three prediction patterns of B pictures are analyzed in terms of their error-suppressing abilities. Both theoretical models (picture level error propagation) and simulation results are given for the comparison.",2009,0, 1006,Fault detection and isolation in fluid power systems using a parametric estimation method,"Since the states or the parameters of a monitored process are closer to the process faults in terms of signal flow, observer-based state and system parameter estimation theories are well developed in fault detection and isolation (FDI) for linear time invariant (LTI) systems. However, for systems where explicit models are difficult to derive, the functional redundancy based FDI strategy derived for LTI systems cannot be directly implemented. In this paper, a parametric estimation method for FDI is employed for an electro-hydraulic system. Based on a set of criteria, an auto-regressive model with exogenous input (ARX) is first selected to approximate the dynamic behavior of the system. Next, the relationship between the system's supply pressure and the coefficients of the ARX model is studied through experiments. It is shown that direct threshold checking on estimated model coefficients can be employed as an FDI strategy to detect and isolate faults originating from incorrect supply pump pressure.",2002,0, 1007,Fault tolerant model for data dissemination in wireless sensor networks,"Reliable event detection at the sink is based on collective information provided by source nodes and not on any individual report.
Reliable data gathering and transmission are important in wireless sensor networks. While node redundancy in WSNs increases fault tolerance, no guarantees on reliability levels can be given. Furthermore, frequent node or other physical failures within WSNs impact the observed reliability over time and mean that reliability models do not work properly in data transmission protocols. Many frameworks for modeling the reliability of data transport protocols in WSNs currently exist but do not ensure reliable data transmission. The existence of such a framework would simplify evaluation and comparison of protocols. This paper formulates the problem of data transport in a WSN as a set of operations with a reliability block diagram. The operations aim at filtering the raw data to streamline its reliable transport towards the sink. Based on this formulation we systematically define a reliability framework and compare the reliability of existing data transmission protocols (ESRT, RMST) with this new framework.",2008,0, 1008,Forward Error Correction for File Delivery in DVB-H,"Reliable filecasting to vehicular users in wireless broadcast networks is a very challenging task, as an error-free transmission of the files is required. Reliability can be achieved by means of forward error correction (FEC) and post-delivery file repair mechanisms. In this paper, we describe and compare the standardized FEC schemes in DVB-H (digital video broadcast-handheld) at the link and application layer, and evaluate their performance for file delivery to vehicular users. We show that by using application layer FEC it is possible to efficiently correct the errors that appear due to temporary shadowing and fast fading, by taking advantage of the time and space diversity introduced by the bursty character of DVB-H transmissions and terminal mobility. Our results may constitute a valuable guide for operators when dimensioning their network capacity and service bandwidths, especially in situations where vehicular users represent a large part of the customer base.",2007,0, 1009,Implications of Rent's Rule for NoC Design and Its Fault-Tolerance,"Rent's rule is a powerful tool for exploring VLSI design and technology scaling issues. This paper applies the principles of Rent's rule to the analysis of networks-on-chip (NoC). In particular, a bandwidth version of Rent's rule is derived, and its implications for future NoC scaling examined. Hop-length distributions for Rent's and other traffic models are then applied to analyse NoC router activity. For fault-tolerant design, a new type of router is proposed based on this analysis, and it is evaluated for mutability and its impact on congestion by further use of the hop-length distributions. It is shown that the choice of traffic model has a significant impact on scaling behaviour, design and fault-tolerant analysis",2007,0, 1010,Towards a Model of Fault Tolerance Technique Selection in Static and Dynamic Agent-Based Inter-Organizational Workflow Management Systems,"Research in workflow management systems design references the mobile agent computing paradigm, where agents have been shown to increase the total capacity of a workflow system through the decoupling of execution management from a statically designated workflow engine, although coordinating fault tolerance mechanisms has been shown to be a downside due to increased overall execution times.
To address this issue, we develop a model for comparing the effects of two fault tolerance techniques: local and remote checkpointing. The model enables an examination of fault tolerance coordination impacts on execution time while concomitantly taking into account the dynamic nature of a workflow environment. A proposed use for the model includes providing for selecting and configuring agent-based fault tolerance approaches based on changes in environmental variables, an approach that allows the owners of a workflow management system to reap the scaling efficiency benefits of the mobile agent paradigm without being forced to make trade-offs in execution performance.",2005,0, 1011,Scatter and cross-talk corrections in simultaneous Tc-99m/I-123 brain SPECT using constrained factor analysis and artificial neural networks,"Simultaneous imaging of Tc-99m and I-123 would have a high clinical potential in the assessment of brain perfusion (Tc-99m) and neurotransmission (I-123) but is hindered by cross-talk between the two radionuclides. Monte Carlo simulations of 15 different dual-isotope studies were performed using a digital brain phantom. Several physiologic Tc-99m and I-123 uptake patterns were modeled in the brain structures. Two methods were considered to correct for cross-talk from both scattered and unscattered photons: constrained spectral factor analysis (SFA) and artificial neural networks (ANN). The accuracy and precision of reconstructed pixel values within several brain structures were compared to those obtained with an energy windowing method (WSA). In I-123 images, mean bias was close to 10% in all structures for SFA and ANN and between 14% (in the caudate nucleus) and 25% (in the cerebellum) for WSA. Tc-99m activity was overestimated by 35% in the cortex and 53% in the caudate nucleus with WSA, but by less than 9% in all structures with SFA and ANN. SFA and ANN performed well even in the presence of high-energy I-123 photons. The accuracy was greatly improved by incorporating the contamination into the SFA model or into the learning phase for ANN. SFA and ANN are promising approaches to correct for cross-talk in simultaneous Tc-99m/I-123 SPECT",2006,0, 1012,A taxonomy of software security defects for SST,"Software security test (SST) is a useful way to validate software system security attributes. Defect-based testing technologies are more effective than traditional specification testing technologies, and more and more researchers are paying attention to these testing methods. Before testing, an organized list of actual defects is especially essential. But at present the existing suitable taxonomies are mostly for software designers or tool-builders, and do not adequately represent security defects that are found in modern software. In our work, we have coalesced previous efforts to categorize security errors as well as problem reports in order to create a security defect taxonomy. We correlate this taxonomy with available information about the current Top 10 most dangerous software errors, which comes from CWE, SANS and other authoritative vulnerability enumerations. We suggest that this taxonomy is suitable for software security testers and outline possible areas of future research.",2010,0, 1013,Self-healing strategies for component integration faults,"Software systems increasingly integrate Off-The-Shelf (OTS) components.
However, due to the lack of knowledge about the reused OTS components, this integration is fragile and can cause many failures in the field that result in dramatic consequences for users and service providers, e.g. loss of data, functionality, money and reputation. As a consequence, dynamic and automatic fixing of integration problems in systems that include OTS components can be extremely beneficial to increase their reliability and mitigate these risks. In this paper, we present a technique for enhancing component-based systems with capabilities to self-heal common integration faults by using a predetermined set of healing strategies. The set of faults that can be healed has been determined from the analysis of the most frequent integration bugs experienced by users according to data in bug repositories available on the Internet. An implementation based on AOP techniques shows the viability of this technique to heal faults in real case studies.",2008,0, 1014,Bug Hunt: Making Early Software Testing Lessons Engaging and Affordable,"Software testing efforts account for a large part of software development costs. However, as educators, we struggle to properly prepare students to perform software testing activities. This struggle is caused by multiple factors: (1) it is challenging to effectively incorporate software testing into an already over-packed curriculum, (2) ad-hoc efforts to teach testing generally happen too late in the students' career, after bad habits have already been developed, and (3) these efforts lack the necessary institutional consistency and support to be effective. To address these challenges we created Bug Hunt, a web-based tutorial to engage students in learning software testing strategies. In this paper we describe the most interesting aspects of the tutorial including the lessons and feedback mechanisms, and the facilities for instructors to configure the tutorial and obtain automatic student assessment. We also present the lessons learned after two years of deployment.",2007,0, 1015,Improving Fault Detection Capability by Selectively Retaining Test Cases during Test Suite Reduction,"Software testing is a critical part of software development. As new test cases are generated over time due to software modifications, test suite sizes may grow significantly. Because of time and resource constraints for testing, test suite minimization techniques are needed to remove those test cases from a suite that, due to code modifications over time, have become redundant with respect to the coverage of testing requirements for which they were generated. Prior work has shown that test suite minimization with respect to a given testing criterion can significantly diminish the fault detection effectiveness (FDE) of suites. We present a new approach for test suite reduction that attempts to use additional coverage information of test cases to selectively keep some additional test cases in the reduced suites that are redundant with respect to the testing criteria used for suite minimization, with the goal of improving the FDE retention of the reduced suites. We implemented our approach by modifying an existing heuristic for test suite minimization. Our experiments show that our approach can significantly improve the FDE of reduced test suites without severely affecting the extent of suite size reduction",2007,0, 1016,Verifying provisions for post-transaction user input error correction through static program analysis,"Software testing is a time-consuming and error-prone process.
Automated software verification is an important key to improving software testing. This paper presents a novel approach for the automated approximate verification of the provisions of transactions for correcting effects that result from executing database transactions with wrong user inputs. Such a provision is essential in any database application. The approach verifies the provision by analyzing the source code of transactions in a database application. It is based on patterns that in all likelihood exist between the control flow graph of a transaction and the control flow graphs of transactions for correcting some post-transaction user input errors of the former transaction. We have validated the patterns statistically.",2002,0, 1017,Superfit Combinational Elusive Bug Detection,"Software that has been well tested and analyzed may fail unpredictably when a certain combination of conditions occurs. In bounded exhaustive testing (BET) all combinations are tested on reduced versions of a problem/application, with the idea that bugs associated with combinations for full versions of a program may also show up when combinations are tested for the reduced version. In previous work, a class-oriented JUnit framework approach to BET was introduced, along with the idea of a BET test pattern. In this paper we consider the application of BET to system testing, using an extension of the FIT (framework for integrated testing) framework called SuperFIT. This approach is described along with a simple example of the application of a SuperFIT test generation tool.",2008,0, 1018,Using fault model relaxation to diagnose real scan chain defects,"Software-based scan chain fault diagnosis is typically composed of two steps. First, scan chain flush patterns are used to identify faulty chains and fault models. This is followed by chain diagnosis using scan patterns in the second step. In this paper, we target chain diagnosis for one special category of chain faults: intermittent scan chain faults. It is shown that these faults may not be modeled correctly in the first step. Hence, a novel diagnosis methodology based on scan chain fault model relaxation is proposed.",2005,0, 1019,Impact of solid-state fault current limiters on protection equipment in transmission and distribution systems,"Solid-state fault current limiters (SSFCLs) offer a number of benefits when incorporated within transmission and distribution systems. SSFCLs can limit the magnitude of a fault current seen by a system using different methods, such as inserting a large impedance in the current path or controlling the voltage applied to the fault. However, these two methods can introduce a few problems when SSFCLs are used in a system along with other protection equipment such as protective relays and sensors. An experiment was designed and implemented to evaluate the behavior of the protective relays in a mimic distribution system with an SSFCL. This paper introduces the details of the experiment, and the results show that the distorted current and voltage waveforms resulting from the action of the SSFCL disturb the protective equipment.",2010,0, 1020,Systematic t-Unidirectional Error-Detecting Codes over Zm,"Some new classes of systematic t-unidirectional error-detecting codes over Zm are designed. It is shown that the constructed codes can detect two errors using two check digits. Furthermore, the constructed codes can detect up to m^(r-2) + r - 2 errors using r ≥ 3 check digits.
A bound on the maximum number of detectable errors using r check digits is also given.",2007,0, 1021,Fault tolerance tradeoffs in moving from decentralized to centralized embedded systems,"Some safety-critical distributed embedded systems may need to use centralized components to achieve certain dependability properties. The difficulty in combining centralized and distributed architectures is achieving the potential benefits of centralization without giving up properties that motivated the use of a distributed approach in the first place. This paper examines the impact on fault tolerance of adding selected centralized components to distributed embedded systems, and possible approaches to choosing an appropriate configuration. We consider the proposed use of a star topology with centralized bus guardians in the time-triggered architecture. We model systems with different levels of centralized control in their star couplers, and compare fault tolerance properties in the presence of star-coupler faults. We demonstrate that buffering entire frames in the star coupler could lead to failures in startup and integration. We also show that constraining buffer size imposes restrictions on frame size and clock rates.",2004,0, 1022,Experimental Study of Discriminant Method with Application to Fault-Prone Module Detection,"Some techniques have been applied to improving software quality by classifying software modules into fault-prone or non-fault-prone categories. This can help developers focus on high-risk fault-prone modules. In this paper, a distribution-based Bayesian quadratic discriminant analysis (D-BQDA) technique is experimentally investigated to identify software fault-prone modules. Experiments with software metrics data from two real projects indicate that this technique can classify software modules into a proper class with a lower misclassification rate and a higher efficiency.",2008,0, 1023,A Comparative Evaluation of Scatter Correction Techniques in High-Resolution Detectors Based on PSPMTs and Scintillator Arrays,"Single photon emission computed tomography images suffer from low contrast as a result of photon scatter. The standard method for excluding the scatter component in pixelized scintillators is the application of an energy window around the central photopeak channel of each crystal cell, but small angle scattered photons still appear in the photopeak window, and they are included in the reconstructed images. A number of scatter correction techniques have been proposed in order to estimate the scatter component, but they have not yet been applied in pixelized scintillators, where most groups use the standard one-photopeak window for scatter correction. In this paper, the authors have assessed three subtraction techniques that use a different approach in order to calculate the scatter component and subtract it from the photopeak image: the dual energy window subtraction technique, the convolution subtraction technique, and a deconvolution technique. All these techniques are compared to the standard method",2006,0, 1024,SEU data and fault tolerance analysis of a LEON 3FT processor,"Single-bit per word error protection is implemented on this LEON 3 fault tolerant processor. SEU data is reviewed and fault analysis is examined based on processor operation and operational environment.",2009,0, 1025,Reducing the soft-error rate of a high-performance microprocessor,"Single-bit upsets from transient faults have emerged as a key challenge in microprocessor design.
Soft errors will be an increasing burden for microprocessor designers as the number of on-chip transistors continues to grow exponentially. Unlike traditional approaches, which focus on detecting and recovering from faults, this article introduces techniques to reduce the probability that a fault will cause a declared error. The first approach reduces the time instructions sit in vulnerable storage structures. The second avoids declaring errors on benign faults. Applying these techniques to a microprocessor instruction queue significantly reduces its error rate with only minor performance degradation",2004,0, 1026,Slant Correction of Vehicle License Plate Integrates Principal Component Analysis Based on Color-Pair Feature Pixels and Radon Transformation,"Slant correction plays an important role in the pretreatment stage of vehicle license plate (VLP) recognition. To reduce and avoid interference from the dirt, noise and frame of the VLP, as well as to simplify the computation load, a method of slant correction of VLPs that integrates PCA (Principal Component Analysis) based on color-pair feature pixels and Radon transformation is presented in this paper. The method consists of three steps. The first step is to obtain the color-pair feature pixels of the VLP image. The second step seeks the approximate slant angle of the plate by principal component analysis of the color-pair feature pixels. The final step is to obtain the further exact slant angle by Radon transformation. The approach is implemented in software, and the experimental results demonstrate that this method is more precise and efficient than principal component analysis or Radon transformation alone.",2008,0, 1027,Transient fault detection via simultaneous multithreading,"Smaller feature sizes, reduced voltage levels, higher transistor counts, and reduced noise margins make future generations of microprocessors increasingly prone to transient hardware faults. Most commercial fault-tolerant computers use fully replicated hardware components to detect microprocessor faults. The components are lockstepped (cycle-by-cycle synchronized) to ensure that, in each cycle, they perform the same operation on the same inputs, producing the same outputs in the absence of faults. Unfortunately, for a given hardware budget, full replication reduces performance by statically partitioning resources among redundant operations. We demonstrate that a Simultaneous and Redundantly Threaded (SRT) processor, derived from a Simultaneous Multithreaded (SMT) processor, provides transient fault coverage with significantly higher performance. An SRT processor provides transient fault coverage by running identical copies of the same program simultaneously as independent threads. An SRT processor provides higher performance because it dynamically schedules its hardware resources among the redundant copies. However, dynamic scheduling makes it difficult to implement lockstepping, because corresponding instructions from redundant threads may not execute in the same cycle or in the same order. This paper makes four contributions to the design of SRT processors. First, we introduce the concept of the sphere of replication, which abstracts both the physical redundancy of a lockstepped system and the logical redundancy of an SRT processor. This framework aids in identifying the scope of fault coverage and the input and output values requiring special handling.
Second, we identify two viable spheres of replication in an SRT processor, and show that one of them provides fault detection while checking only committed stores and uncached loads. Third, we identify the need for consistent replication of load values, and propose and evaluate two new mechanisms for satisfying this requirement. Finally, we propose and evaluate two mechanisms, slack fetch and branch outcome queue, that enhance the performance of an SRT processor by allowing one thread to prefetch cache misses and branch results for the other thread. Our results with 11 SPEC95 benchmarks show that an SRT processor can outperform an equivalently sized, on-chip, hardware-replicated solution by 16% on average, with maximum benefit of up to 29%.",2000,0, 1028,SMS4 Algorithm Algebra Fault Attack,"The SMS4 algorithm has a block length and a key length of 128 bits. This article presents a byte-oriented algebraic fault attack on the SMS4 algorithm; with only one faulty encryption, the 128-bit key can be recovered.",2010,0, 1029,Defect location in traditional vs. Web applications - an empirical investigation,"So far, few attempts have been carried out in the literature to understand the specific nature of Web bugs and their distribution among the tiers of applications' architecture. In this paper we present an experimental investigation conducted with five pairs of homologous applications (Web and traditional) and 780 real bugs taken from SourceForge, aimed at studying the distributions of bugs in Web and traditional applications. The investigation follows a rigorous experimental procedure and it was conducted in the context of three bachelor theses. The study results, although preliminary, provide clear-cut empirical evidence that the presentation layer in Web applications is more defect-prone when compared to analogous traditional applications.",2009,0, 1030,"Facing More than Moore, is magnetic microscopy the new Swiss knife for 3D defect localization in SiP?","So far, most of the defects at system level are assembly related. This trend obviously concerns Systems in Package, stacked dies, Package-on-Package devices, passive integration, and the integration of logic, power, wireless, analog, sensors and actuators in the same packaged device... All these Systems in Package defy our failure localization tools. For the first time, we have to face non-transparent materials and massive 3D structures with a mandatory long working distance and a need for relatively high spatial resolution. Magnetic microscopy has the ability to do this as long as we are able to adapt its principle to the mandatory 3D sensitivity and resolution. We have developed a global 3D approach based on simulation in order to target µm resolution at long working distance.",2010,0, 1031,A Queueing-Theory-Based Fault Detection Mechanism for SOA-Based Applications,"SOA has become more and more popular, but fault tolerance is not yet supported in most SOA-based applications. Although fault tolerance is a grand challenge for enterprise computing, we can partially resolve this problem by focusing on some of its aspects. This paper focuses on fault detection and puts forward a queueing-theory-based fault detection mechanism to detect the services that fail to satisfy performance requirements.
This paper also gives a reference service model and a reference architecture of the fault-tolerance control center of the Enterprise Services Bus for SOA-based applications.",2007,0,4613 1032,Evaluation of soft-bit error sequence generators at the output of the decoding process,"Soft decision decoding algorithms are widely used in modern wireless systems; convolutional and turbo codes are usually adopted as the inner scheme thanks to their capability to correct symbol errors at low SNR values. Such algorithms can achieve high coding gains using soft decoding, and modern digital hardware technology enables efficient and low cost practical implementations. We apply the experience gained in previous work, concerning the simulation of bit error processes (Costamagna, E. et al., Proc. IEEE, vol.90, p.842-59, 2002), to implement soft-bit generative models based on hidden Markov chains and chaotic attractors. Both the input and the output of the demodulation process of a GSM-GPRS and a 3GPP UMTS transceiver are observed, developing our earlier analysis (Costamagna et al., IEEE 58th VTC, 2003), and the quality of the soft-bit sequences generated for the input is evaluated by comparing the sequences obtained at the output of the demodulator when simulated or target sequences are supplied at the input. Moreover, the deep significance of some statistical features exhibited by the sequences in order to describe their error burst behavior is briefly discussed.",2004,0, 1033,The Instruction Scheduling for Soft Errors Based on Data Flow Analysis,"Soft errors are emerging as a new challenge in computer applications. To mitigate the effects of soft errors, a variety of techniques have been proposed in the past. They can be mainly classified into two types: hardware-based and software-based. Hardware-based methods are expensive, since they require replicated hardware modules or developing custom hardware equipment. Although software-based methods do not incur the high economic costs, they usually utilize the strategies of data duplication and time redundancy for tolerating soft errors, which provoke memory overhead and performance degradation. In this paper, we propose a compiler optimization approach that can enhance the reliability of programs without extra costs. The basic idea is to use instruction scheduling to decrease the total valid area that is vulnerable to soft errors during the execution process. Based on the result of data flow analysis, the concrete algorithm for basic block scheduling is described in a dynamic programming fashion. The experimental results of fault injection indicate that the average reliability of the benchmark programs has been improved by 2% without palpable overhead.",2009,0, 1034,On Reducing Circuit Malfunctions Caused by Soft Errors,"Soft errors due to radiation are expected to increase in nanoelectronic circuits. Methods to reduce system failures due to soft errors include the use of redundancy and making circuit elements robust such that soft errors do not upset signal values. Recent works have noted that electronic circuits have partial intrinsic immunity to soft errors since single event upsets on a large percentage of signal lines do not cause errors on circuit outputs. Using ISCAS-89 benchmark circuits we present experimental evidence that the partial immunity to single event upsets is in most cases due to redundancy in the circuits and thus immunity to soft errors may not be available in irredundant circuits.
Thus goals on immunity to soft errors may not be achievable in highly optimized circuits without adding circuit redundancy and/or relaxing the requirements on system failures due to soft errors.",2008,0, 1035,Compiler-directed selective data protection against soft errors,"Soft errors in electronic devices are a growing concern for many embedded systems from diverse domains. Chip vendors are already working with system customers on ways to guard against the effects of soft errors. While error code based protection mechanisms for memories such as ECC are important, indiscriminately applying them to all data can have serious memory space and energy overheads. This paper demonstrates how an optimizing compiler can be useful in deciding which data elements need to be protected, based on user-specified annotations. The proposed idea makes use of a variant of forward slicing.",2005,0, 1036,On the Characterization and Optimization of On-Chip Cache Reliability against Soft Errors,"Soft errors induced by energetic particle strikes in on-chip cache memories have become an increasing challenge in designing new generation reliable microprocessors. Previous efforts have exploited information redundancy via parity/ECC codings or cacheline duplication for information integrity in on-chip cache memories. Due to various performance, area/size, and energy constraints in various target systems, many existing unoptimized protection schemes may eventually prove significantly inadequate and ineffective. In this paper, we propose a new framework for conducting comprehensive studies and characterization of the reliability behavior of cache memories, in order to provide insight into cache vulnerability to soft errors as well as design guidance to architects for highly efficient reliable on-chip cache memory design. Our work is based on the development of new lifetime models for data and tag arrays residing in both the data and instruction caches. Those models facilitate the characterization of cache vulnerability of stored items at various lifetime phases. We then exemplify this design methodology by proposing reliability schemes targeting specific vulnerable phases. Benchmarking is carried out to showcase the effectiveness of our approach.",2009,0, 1037,Self-Adaptive Data Caches for Soft-Error Reliability,"Soft-error induced reliability problems have become a major challenge in designing new generation microprocessors. Due to the on-chip caches' dominant share in die area and transistor budget, protecting them against soft errors is of paramount importance. Recent research has focused on the design of cost-effective reliable data caches in terms of performance, energy, and area overheads, based on the assumption of fixed error rates. However, for systems in operating environments that vary with time or location, those schemes will be either insufficient or overdesigned for the changing error rates. In this paper, we explore the design of a self-adaptive reliable data cache that dynamically adapts its employed reliability schemes to the changing operating environments, thus maintaining a target reliability. The proposed data cache is implemented with three levels of error protection schemes, a monitoring mechanism, and a control component that decides whether to upgrade, downgrade, or keep the current protection level based on the feedback from the monitor.
Our experimental evaluation using a set of SPEC CPU2000 benchmarks shows that our self-adaptive data cache achieves similar reliability to a cache protected by the most reliable scheme, while simultaneously minimizing the performance and power overheads.",2008,0, 1038,Defect frequency and design patterns: an empirical study of industrial code,"Software ""design patterns"" seek to package proven solutions to design problems in a form that makes it possible to find, adapt, and reuse them. A common claim is that a design based on properly applied patterns will have fewer defects than more ad hoc solutions. This case study analyzes the weekly evolution and maintenance of a large commercial product (C++, 500,000 LOC) over three years, comparing defect rates for classes that participated in selected design patterns to the code at large. We found that there are significant differences in defect rates among the patterns, ranging from 63 percent to 154 percent of the average rate. We developed a new set of tools able to extract design pattern information at a rate of 3×10^6 lines of code per hour, with relatively high precision. Based on a qualitative analysis of the code and the nature of the patterns, we conclude that the Observer and Singleton patterns are correlated with larger code structures and, so, can serve as indicators of code that requires special attention. Conversely, code designed with the Factory pattern is more compact and possibly less closely coupled and, consequently, has lower defect numbers. The Template Method pattern was used in both simple and complex situations, leading to no clear tendency.",2004,0, 1039,Using software invariants for dynamic detection of transient errors,"Software-based error detection techniques usually imply modification of the algorithms to be hardened, and almost certainly also demand a huge memory footprint and/or execution time overhead. In the software engineering field, program invariants have been proposed as a means to check program correctness during the development cycle. In this work we discuss the use of software invariant verification as a low-cost alternative to detect soft errors after the execution of a given algorithm. A clear advantage is that this approach does not require any change in the algorithm to be hardened, and in case its computational cost and memory overhead are proven to be much smaller than duplication for a given algorithm, it may become a feasible option for hardening that algorithm against soft errors. The results of fault injection experiments performed with different algorithms are analyzed and some guidelines for future research concerning this technique are proposed.",2009,0, 1040,Classification of Software Defect Detected by Black-Box Testing: An Empirical Study,"Software defects detected by black-box testing (called black-box defects) are very numerous due to the wide use of black-box testing, but we could not find a defect classification specifically applicable to them among existing defect classifications. In this paper, we present a new defect classification scheme named ODC-BD (Orthogonal Defect Classification for Black-box Defect), and we list the detailed values of every attribute in ODC-BD, especially the 300 detailed black-box defect types. We aim to help black-box defect analyzers and black-box testers improve their analysis and testing efficiency. The classification study is based on 1860 black-box defects collected from 39 industry projects and 2 open source projects.
Furthermore, two empirical studies are included to validate the use of our ODC-BD. The results show that our ODC-BD can improve the efficiency of black-box testing and black-box defect analysis.",2010,0, 1041,The Real Cost of Software Errors,"Software is no longer creeping into every aspect of our lives: it's already there. In fact, failing to recognize just how much everything we do depends on software functioning correctly makes modern society vulnerable to software errors.",2009,0, 1042,Fault Tree Analysis of Software-Controlled Component Systems Based on Second-Order Probabilities,"Software is still mostly regarded as a black box in the development process, and its safety-related quality ensured primarily by process measures. For systems whose lion's share of service is delivered by (embedded) software, process-centred methods are seen to be no longer sufficient. Recent safety norms (for example, ISO 26262) thus prescribe the use of safety models for both hardware and software. However, failure rates or probabilities for software are difficult to justify. Only if developers take good design decisions from the outset will they achieve safety goals efficiently. To support safety-oriented navigation of the design space and to bridge the existing gap between qualitative analyses for software and quantitative ones for hardware, we propose a fault-tree-based approach to the safety analysis of software-controlled systems. Assigning intervals instead of fixed values to events and using Monte-Carlo sampling, probability mass functions of failure probabilities are derived. Further analysis of these PMFs leads to estimates of system quality that enable safety managers to make an optimal choice between design alternatives and to target cost-efficient solutions in every phase of the design process.",2009,0, 1043,Pavlov's Bugs: Matching Repair Policies with Rewards,"Software maintenance engineers devote a significant amount of work to repairing user-identified errors. But user, maintainer, and manager perceptions of an error's importance can vary, and bug-repair assignment policies can adversely affect those perceptions.",2009,0, 1044,The challenge of accurate software project status reporting: a two-stage model incorporating status errors and reporting bias,"Software project managers perceive and report project status. Recognizing that their status perceptions might be wrong and that they may not faithfully report what they believe leads to a natural question: how different is true software project status from reported status? Here, the authors construct a two-stage model which accounts for project manager errors in perception and bias that might be applied before reporting status to executives. They call the combined effect of errors in perception and bias project status distortion. The probabilistic model has roots in information theory and uses discrete project status from traffic light reporting. The true statuses of projects of varying risk were elicited from a panel of five experts and formed the model input. The same experts estimated the frequency with which project managers make status errors, while the authors created different bias scenarios in order to investigate the impact of different bias levels. The true status estimates, error estimates, and bias levels allow calculation of perceived and reported status.
The results indicate that at the early stage of the development process most software projects are already in trouble, that project managers are overly optimistic in their perceptions, and that executives receive status reports very different from reality, depending on the risk level of the project and the amount of bias applied by the project manager. Key findings suggest that executives should be skeptical of favorable status reports and that for higher risk projects executives should concentrate on decreasing bias if they are to improve the accuracy of project reporting.",2002,0,6392 1045,Considering Both Failure Detection and Fault Correction Activities in Software Reliability Modeling,"Software reliability is widely recognized as one of the most significant aspects of software quality and is often determined by the number of uncorrected software faults in the system. In practice, fault correction prediction is essential, because the correction process consumes a heavy amount of time and resources, for predicting whether reliability goals have been achieved. Therefore, in this paper we discuss a general framework for the modeling of the failure detection and fault correction process. Under this general framework, we not only verify the existing non-homogeneous Poisson process (NHPP) models but also derive several new NHPP models. In addition, we show that these approaches cover a number of well-known models under different conditions. Finally, numerical examples are shown to illustrate the results of the integration of the detection and correction processes",2006,0, 1046,A foundation for adaptive fault tolerance in software,"Software requirements often change during the operational lifetime of deployed systems. To accommodate requirements not conceived during design time, the system must be able to adapt its functionality and behavior. The paper examines a formal model for reconfigurable software processes that permits adaptive fault tolerance by adding or removing specific fault tolerance techniques during runtime. A distributed software-implemented fault tolerance (SIFT) environment for managing user applications has been implemented using ARMOR processes that conform to the formal model of reconfigurability. Because ARMOR processes are reconfigurable, they can tailor the fault tolerance services that they provide to themselves and to the user applications. We describe two fault tolerance techniques, microcheckpointing and assertion checking, that have been incorporated into ARMOR processes via reconfigurations of the original ARMOR design. Experimental evaluations of the SIFT environment on a testbed cluster at the Jet Propulsion Laboratory demonstrate the effectiveness of these two fault tolerance techniques in limiting data error propagation among the ARMOR processes. These experiments validate the concept of using an underlying reconfigurable process architecture as the basis for implementing replaceable error detection and recovery services.",2003,0, 1047,Experimental Performance Comparison of Rate-based and Store-and-Forward Transmission Mechanisms over Error-Prone Cislunar Communication Links,"Some work has been done in evaluating the performance of the rate-based SCPS-pure rate control and the store-and-forward CFDP protocol. In this paper, we present an experimental performance comparison between SCPS-pure rate control and the CFDP in the deferred mode over a simulated error-prone, long-delayed, point-to-point cislunar communication link.
The focus of this work is to see which protocol is more effective in coping with a long cislunar-link delay, especially when hybridized with a high bit-error-rate (BER), and which one has a throughput performance advantage over the other. The comparison results show that SCPS-pure rate control has a better performance than CFDP-TCP and Linux-TCP over both symmetric and asymmetric cislunar communication channels with all BERs, and its throughput advantage is higher than 3000 bytes/s over both symmetric and asymmetric channels at all levels of BER. The advantage is more significant over the asymmetric cislunar channel with a high BER around 10^-5. The rate-based SCPS-pure rate control is more effective in coping with a long cislunar link delay, especially when hybridized with a high BER. CFDP-TCP and Linux-TCP do not have a significant performance difference in any case.",2007,0, 1048,Developing a low-cost high-quality software tool for dynamic fault-tree analysis,"Sophisticated modeling and analysis methods are being developed in academic and industrial research labs for reliability engineering and other domains. The evaluation and evolution of such methods based on use in practice is critical to research progress, but few such methods see widespread use. A critical impediment to disseminating new methods is the inability to produce, at a reasonable cost, supporting software tools that have the usability and dependability characteristics that industrial users require, and the evolvability to accommodate software change as the underlying analysis methods are refined and enhanced. The difficulty of software development thus emerges as a key impediment to advances in engineering modeling and analysis. This paper presents an approach to tool development that attacks these problems. Progress requires synergistic, interdisciplinary collaborations between application-domain and software-engineering researchers. The authors have pursued such an approach in developing Galileo: a fault tree modeling and analysis tool. These innovations are described in two dimensions: (1) the Galileo core reliability modeling and analysis function; and (2) the authors' work on software engineering for high-quality, low-cost modeling and analysis tools",2000,0, 1049,A Mobile Based Application for Detecting Fault by Sound Analysis in Car Engines Using Triangular Window and Wavelet Transform,"Sound energy carries an enormous amount of information, which can provide assistance in various applications like navigation, communication, recognition, medical diagnosis, detection and therapy, analysis and design of structures, and many more. This paper presents a method for detecting faults in car engines just by recording their sounds through window-based applications in mobile phones and processing them in the mobile phone using our algorithm. The algorithm preprocesses the sound to remove any noise present; the noise-free car engine sound is then passed through a triangular window to acquire the multiresolution coefficients of the discrete wavelet transform and to compare the resulting plots. The wavelet transform analysis is done, the plots are scrupulously analyzed, and the faults can then be detected by referring to the plots.
With a little know-how, non-technical persons can now detect abnormalities in car engines just by using this automation.",2010,0,
1050,A Diagnostic Tree Approach for Fault Cause Identification in the Attitude Control Subsystem of Satellites,"Space and Earth observation programs demand stringent guarantees ensuring smooth and reliable operations of space vehicles and satellites. Due to unforeseen circumstances and naturally occurring faults, it is desired that a fault-diagnosis system be capable of detecting, isolating, identifying, or classifying faults in the system. Unfortunately, none of the existing fault-diagnosis methodologies alone can meet all the requirements of an ideal fault-diagnosis system due to the variety of fault types, their severity, and handling mechanisms. However, it is possible to overcome these shortcomings through the integration of different existing fault-diagnosis methodologies. In this paper, a novel learning-based, diagnostic-tree approach is proposed which complements and strengthens existing efficient fault detection mechanisms with an additional ability to classify different types of faults to effectively determine potential fault causes in a subsystem of a satellite. This extra capability serves as a semiautomatic diagnostic decision support aid to expert human operators at ground stations and enables them to determine fault causes and to take quick and efficient recovery/reconfiguration actions. The developed diagnosis/analysis procedure exploits a qualitative technique denoted as diagnostic tree (DX-tree) analysis as a diagnostic tool for fault cause analysis in the attitude control subsystem (ACS) of a satellite. DX-trees constructed by our proposed machine-learning-based automatic tree synthesis algorithm are demonstrated to be able to determine both known and unforeseen combinations of events leading to different fault scenarios generated through synthetic attitude control subsystem data of a satellite. Though the immediate application of our proposed approach would be at ground stations, the proposed technique has potential for being integrated with causal model-based diagnosis and recovery techniques for future autonomous space vehicle missions.",2009,0,
1051,Improved microwave circuit design using multipoint-response-correction space mapping and trust regions,"Space mapping (SM) technology includes so-called output SM that ensures exact matching between the surrogate and the fine model at the current design. Output SM exploits the fine model data at a single design and is not able to align the models' sensitivity. Here, a multipoint response correction is introduced that generalizes the concept of output SM. By using a design-variable-dependent correction term and exploiting all available fine model information, the proposed technique provides an exact match between the surrogate and the fine model at several designs. This not only retains the benefits of output SM but also enhances sensitivity matching between the two models, which results in improved performance of the SM optimization process. Verification using two microwave design problems is provided.",2010,0,
1052,Applying the Naïve Bayes Classifier to Assist Users in Detecting Speech Recognition Errors,"Speech recognition (SR) is a technology that can improve accessibility to computer systems for people with physical disabilities or situation-induced disabilities. The wide adoption of SR technology, however, is hampered by the difficulty in correcting system errors.
HCI researchers have attempted to improve the error correction process by employing multi-modal or speech-based interfaces. There has been limited success in applying raw confidence scores (indicators of the system's confidence in an output) to facilitate anchor specification in the navigation process. This paper applies a machine learning technique, in particular the Naïve Bayes classifier, to assist in detecting dictation errors. In order to improve the generalizability of the classifiers, input features were obtained from generic SR output. Evaluation on speech corpora showed that the performance of the Naïve Bayes classifier was better than that of raw confidence scores.",2005,0,
1053,Analysis of the forming defects of the trapezoidal inner-gear spinning,"Spin-forming of cup-shaped thin-walled trapezoidal inner-gears is a new near-net forming technology in the gear manufacturing field. Experiments and FEM simulations show that the quality of the workpiece is greatly influenced by the forming process method. Defects such as non-uniform distribution of tooth height along the axial and tangential directions and a wave-shaped opening end of the workpiece occur easily. The mechanism of these defects during spinning is analyzed; it is mainly caused by partial thickening (in the gear-tooth area) and thinning (in the tooth-groove area). Improved forming methods, such as stagger spin-forming, positive-reverse rotation of the main spindle and adopting a restraint ring at the opening end of the blank, are put forward. The forming methods are simulated with MSC.Marc and also investigated by experiments. The results show that the improved process methods overcome the forming defects effectively.",2009,0,
1054,A rapid prototyping system for error-resilient multi-processor systems-on-chip,"Static and dynamic variations, which have a negative impact on the reliability of microelectronic systems, increase with smaller CMOS technology. Thus, further downscaling is only profitable if the costs in terms of area, energy and delay for reliability remain within limits. Therefore, the traditional worst-case design methodology will become infeasible. Future architectures have to be error-resilient, i.e., the hardware architecture has to autonomously tolerate transient errors. In this paper, we present an FPGA-based rapid prototyping system for multi-processor systems-on-chip composed of autonomous hardware units for error-resilient processing and interconnect. This platform allows the fast architectural exploration of various error protection techniques under different failure rates on the microarchitectural level while keeping track of the system behavior. We demonstrate its applicability on a concrete wireless communication system.",2010,0,
1055,Towards SOA-Based Code Defect Analysis,"Static code analysis is the analysis of software that is performed to acquire information concerning the dynamic behavior of programs built from that software, without actually executing the programs. Currently, most analysis techniques are implemented as independent tools, or as plugins for an integrated development environment (IDE, e.g., Eclipse). However, in this paper, we introduce a new way to release the analyzing ability: a Web Service based approach, which can integrate multiple analysis tools and provides the analysis capability by way of a standard Web Service interface. The user can benefit from the code analysis capability by simply submitting the code to be analyzed to the service, without downloading any analysis tools, and then obtain the merged result.
The experiment shows that the proposed approach is feasible and efficient.",2008,0,
1056,Towards identification of latent defects: Yield mining using defect characteristic model and clustering,"Statistical yield modeling is used to calculate the probability of a die containing a latent defect based on its spatial relationship with other dies in its surrounding neighborhood. Previous research implements a blanket application of predictive yield mining on devices and assumes that a spatial relationship exists between killer defects screened at probe test and latent defects screened at package-level burn-in. This research investigates the use of the defect characteristic models as yield models to screen latent defects while taking into account wafers with defect clusters. It goes on to evaluate the pre-selected yield models in economic terms and interprets the results as either a predictive or descriptive yield model to describe and identify latent defects.",2009,0,
1057,Combining a spread spectrum technique with error-correction code to design an immune stegosystem,"Steganography is the art and science of hiding the fact that a communication is taking place. It embeds the secret file (text, audio or image) in another carrier file. Text-in-image steganography is considered in this work. The proposed stegosystem uses a spread-spectrum technique applied in the spatial domain together with error-correction coding. These are used to increase the security and robustness of the system. Random location selection within the cover image pixels is also proposed in the work. An improvement in robustness has been achieved at the expense of reduced hiding capacity. The imperceptibility of the stego image is assessed using the peak signal-to-noise ratio (PSNR) measure. Attacks in the form of lossy compression and additive noise are considered. The performance of the proposed system has shown good immunity to moderate levels of channel noise and lossy compression ratios.",2008,0,
1058,Depth correction: Methods for approximating depth information in web camera depth maps,"Stereo web cameras can be used to obtain depth information about a scene. The maps produced using this procedure are prone to missing depth values, more so than when using a commercial stereo web camera. Post-processing can be used to find approximations for unknown depth values. In this paper we describe four techniques for approximating missing depth values and compare their effectiveness. A large median filter that considers only known pixel values achieved promising results, showing an increase in the proportion of valued pixels and a decrease in the number of connected regions. A large median filter and the use of a close operator both decreased the number of connected regions, although they both increased the proportion of unknown pixels.",2010,0,
1059,Stable fault-tolerant adaptive fuzzy/neural control for a turbine engine,"Stimulated by the growing demand for improving the reliability and performance of systems, fault-tolerant control has been receiving significant attention, since its goal is to detect the occurrence of faults and achieve satisfactory system performance in the presence of faults. To develop an intelligent fault-tolerant control system, we begin by constructing a design model of the system using a hierarchical learning structure in the form of Takagi-Sugeno fuzzy systems.
Afterwards, the fault-tolerant control scheme is designed based on stable adaptive fuzzy/neural control, where its online learning capabilities are used to capture the unknown dynamics caused by faults. Finally, the effectiveness of the proposed methods has been studied by extensive analysis of system zero dynamics and asymptotic tracking abilities for both indirect and direct adaptive control cases, and by component-level model simulation of the General Electric XTE46 turbine engine.",2001,0,
1060,Stress wave analysis of turbine engine faults,"Stress Wave Analysis (SWAN) provides real-time measurement of friction and mechanical shock in operating machinery. This high-frequency acoustic sensing technology filters out background levels of vibration and audible noise, and provides a graphic representation of machine health. By measuring shock and friction events, the SWAN technique is able to detect wear and damage at the earliest stages and is able to track the progression of a defect throughout the failure process. This is possible because, as the damage progresses, the energy content of friction and shock events increases. This 'stress wave energy' is then measured and tracked against normal machine operating conditions. This paper describes testing that was conducted on several types of aircraft and industrial gas turbine engines to demonstrate SWAN's ability to accurately detect a broad range of discrepant conditions and characterize the severity of damage.",2000,0,
1061,Effective data sharing system for fault tolerant Structural Health Monitoring system,"A Structural Health Monitoring (SHM) system is a promising technology for determining the health condition of a structure and localizing its damage. SHM systems are widely used, especially in gigantic structures, because of their strong safety requirements. Much research on SHM systems has been conducted. However, conventional SHM systems have not taken into account accidental collapse caused by earthquakes. As a result, sensor nodes, network links, and the center server may fail while processing or storing the data gathered in the building. These failures may cause the loss of important data that contains useful information for post-analyzing the collapse, including when, where and why the structure was damaged and collapsed. This information has a great deal of potential for preventing similar damage or collapse, and it would make a great contribution to accelerating future structural research. To solve the problem of data loss under damage, a data sharing system for SHM is proposed. This system shares data among sensor nodes. The proposed system carries out the following three processes: node search, backup node selection and backup data transfer. It is important to conduct these processes under the condition of limited resources, since sensor nodes are small and not powerful. Backup node selection is an important process of this system, and this paper mainly focuses on this selection. Round-trip time, the size of free memory space of backup nodes and the value of displacement caused by vibration are used for the selection. In this paper, the selection method based on displacement was examined. An experiment was conducted using practical sensor nodes and acceleration data.
From the result of this experiment, the proposed system could share data using the displacement-based selection method.",2010,0,
1062,Detection of structural defects in pipes using time reversal of guided waves,"Structural health monitoring of buried pipelines is of vital importance as infrastructure ages. Ultrasonic guided waves are a popular method for inspecting buried pipes, due to their potential for long propagation distances. Unfortunately, the large number of wave modes present in a pipeline, and the effects of dispersion, make analysis of the received signals difficult. We plan to use Time Reversal Acoustics to compensate for these complex signals and improve performance for the detection of faults in a pipeline. We will present theoretical performance results for conventional and Time Reversal detectors, verified with simulations conducted in PZFlex. Time Reversal shows potential for reducing the power requirements of a fault detection system.",2009,0,
1063,Defect prevention and detection in software for automated test equipment,"Software for automated test equipment can be tedious and monotonous, making it just as error-prone as other types of software. Active defect prevention and detection are important for test applications. Incomplete or unclear requirements, a cryptic syntax, variability in syntax or structure, and changing requirements are among the problems encountered in test applications for one tester. These issues increase the probability of error introduction during test application development. This paper describes a test application development tool designed to address these issues for the PT3800 tester, a continuity and insulation resistance tester. The tool was designed with powerful built-in defect prevention and detection capabilities. A reduction in rework and a two-fold increase in productivity are the results. The defect prevention and detection capabilities are described along with lessons learned and their applicability to other test equipment software.",2007,0,6724
1064,Software Defect Content Estimation: A Bayesian Approach,"Software inspection is a method to detect errors in software artefacts early in the development cycle. At the end of the inspection process the inspectors need to make a decision whether the inspected artefact is of sufficient quality or not. Several methods have been proposed to assist in making this decision, such as capture-recapture methods and Bayesian approaches. In this study these methods have been analyzed and compared, and a new Bayesian approach for software inspection is proposed. All of the estimation models rely on an underlying assumption that the inspectors are independent. However, this assumption of independence is not necessarily true in a practical sense, as most inspection teams interact with each other and share their findings. We therefore studied a new Bayesian model for defect estimation in which the inspectors share their findings, and compared it with Bayesian models in the literature in which inspectors examine the artefact independently. The simulations were carried out under realistic software conditions with a small number of difficult defects and a few inspectors.
The models were evaluated on the basis of decision accuracy (DA) and median relative error, and our results suggest that the dependent-inspector assumption improves the decision accuracy over the previous Bayesian model and the CR models.",2006,0,
1065,A study of applying the bounded Generalized Pareto distribution to the analysis of software fault distribution,"Software is currently a key part of many safety-critical applications. The main problem facing the computer industry is how to develop software with (ultra) high reliability on time and assure its quality. In the past, some researchers reported that the Pareto distribution (PD) and the Weibull distribution (WD) models can be used for software reliability estimation and fault distribution modeling. In this paper we propose a modified PD model to predict and assess the software fault distribution. That is, we suggest using a special form of the Generalized Pareto distribution (GPD) model, named the bounded Generalized Pareto distribution (BGPD) model. We show that the BGPD model eliminates several modeling issues that arise in the PD model, and perform detailed comparisons based on real software fault data. Experimental results show that the proposed BGPD model fits the actual fault data very well. In the end, we conclude that the distribution of faults in a large software system can be well described by the Pareto principle.",2010,0,
1066,Error-Resilient H.264/AVC Video Transmission Using Two-Way Decodable Variable Length Data Block,"Standard video coders utilize variable length coding (VLC) to obtain more data compression in addition to what lossy coding has achieved, at the expense of making the compressed bitstream very vulnerable to channel errors. Even a 1-bit error in the bitstream may cause the subsequent bitstream to be either erroneously decoded or completely undecodable, and this could further result in error propagation. To mitigate this phenomenon, a new VLC coding scheme is proposed in this paper, called the two-way decodable variable length data block (TDVLDB), which allows the compressed bitstream to be bidirectionally decodable without exploiting data partitioning. The proposed TDVLDB scheme is able to effectively recover more uncorrupted data from corrupted packets. Furthermore, it is able to correct some, if not all, channel errors of a finite-length burst error. To effectively identify the location of the first actual error incurred within the current slice, a bitstream similarity measurement (BSM) algorithm is proposed. Note that the proposed TDVLDB scheme is generic in the sense that it can be exploited in any image or video coding framework as long as it involves the use of VLC and requires error-resilience capability. In this paper, the proposed TDVLDB is incorporated into the H.264/advanced video coding (AVC) coder to evaluate its error-resilience performance in terms of rate-distortion coding efficiency. Compared with baseline H.264/AVC coding, the TDVLDB-incorporated H.264/AVC-based coding scheme has demonstrated significant objective and subjective video quality improvements when the bitstream is transmitted over error-prone channels.",2010,0,
1067,A New Integrated Spatio-Temporal Framework for Video Error Concealment,"Temporal error concealment (TEC) algorithms do not produce good results in the presence of cut scenes, camera panning or object occlusion in videos due to the lack of a suitable reference frame.
Under such circumstances, spatial EC (SEC) may perform better. Thus, EC algorithms should decide adaptively between TEC and SEC. In this paper, we propose an integrated spatio-temporal EC framework which extends our previous work in TEC and SEC. The proposed algorithm decides adaptively on the best EC mode and produces average PSNR improvements of up to 4.59 dB over a coding-mode-based EC algorithm. Most importantly, our algorithm produces results of much better perceptual quality.",2008,0,
1068,Testing Ternary Content Addressable Memories With Comparison Faults Using March-Like Tests,"Ternary content addressable memory (TCAM) plays an important role in various applications due to its fast lookup operation. This paper proposes several comparison fault models (i.e., faults that cause the Compare operation to fail) for TCAMs based on electrical defects, such as shorts between two circuit nodes and transistor stuck-open and stuck-on faults. Two March-like tests for detecting comparison faults are also proposed. The first March-like test requires 4N Write operations, 3N Erase operations, and 4N+2B Compare operations to cover 100% of targeted comparison faults for an N×B-bit TCAM with Hit output only. The second March-like test requires 2N Write operations, 2N Erase operations, and 4N+2B Compare operations to cover 100% of targeted comparison faults for an N×B-bit TCAM with Hit and Priority Address Encoder outputs. Compared with previous work, the proposed tests have lower time complexity for typical TCAMs; they can be used to test TCAMs with different comparator structures; and their time complexities are independent of the number of stuck-on faults. Also, they can cover delay faults in comparison circuits.",2007,0,
1069,Test coverage and post-verification defects: A multiple case study,"Test coverage is a promising measure of test effectiveness, and development organizations are interested in cost-effective levels of coverage that provide sufficient fault removal with contained testing effort. We have conducted a multiple-case study on two dissimilar industrial software projects to investigate whether test coverage reflects test effectiveness and to find the relationship between test effort and the level of test coverage. We find that in both projects the increase in test coverage is associated with a decrease in field-reported problems when adjusted for the number of prerelease changes. A qualitative investigation revealed several potential explanations, including code complexity, developer experience, the type of functionality, and remote development teams. All these factors were related to the level of coverage and quality, with coverage having an effect even after these adjustments. We also find that the test effort increases exponentially with test coverage, but the reduction in field problems increases linearly with test coverage. This suggests that for most projects the optimal levels of coverage are likely to be well short of 100%.",2009,0,
1070,FedEx - a fast bridging fault extractor,"Test pattern generation and diagnosis algorithms that target realistic bridging faults must be provided with a realistic fault list. In this work we describe FedEx, a bridging fault extractor that extracts a circuit from the mask layout and identifies the two-node bridges that can occur, their locations, layers, and relative probability of occurrence.
Our experimental results show that FedEx is memory-efficient and fast.",2001,0,
1071,Analyzing and diagnosing interconnect faults in bus-structured systems,"Testing multimodule systems presents several challenges, particularly when systems use submicron technology. The authors propose strategies to diagnose interconnect faults in bus-structured systems using several models. We propose several methods and strategies for diagnosis using different fault models, including those applicable to submicron technology. Besides defining new features, such as the logical extent of faults, we also propose a reduction strategy that permits 100% fault detection and identification (including fault location).",2002,0,
1072,Using Formal Verification to Reduce Test Space of Fault-Tolerant Programs,"Testing object-oriented programs is still a hard task, despite many studies on criteria to better cover the test space. Test criteria establish requirements one wants to achieve in testing programs to help in finding software defects. On the other hand, program verification guarantees that a program preserves its specification, but its application is not very straightforward in many cases. Both program testing and verification are expensive tasks and could be used to complement each other. This paper presents a new approach to automate and integrate testing and program verification for fault-tolerant systems. In this approach we show how to assess information from program verification in order to reduce the test space with respect to exception definition/use testing criteria. As properties of exception-handling mechanisms are checked using a model checker (Java PathFinder), programs are traced. Information from these traces can be used to determine how much of the testing criteria has been covered, reducing the remaining program test space.",2008,0,
1073,Fault Detection Likelihood of Test Sequence Length,"Testing of graphical user interfaces is important due to its potential to reveal faults in the operation and performance of the system under consideration. Most existing test approaches generate test cases as sequences of events of different lengths. The cost of the test process depends on the number and total length of those test sequences. One of the problems encountered is the determination of the test sequence length. A widely accepted hypothesis is that the longer the test sequences, the higher the chances of detecting faults. However, there is no evidence that an increase in test sequence length really affects fault detection. This paper introduces a reliability-theoretic approach to analyze the problem in the light of real-life case studies. Based on a reliability growth model, the expected number of additional faults that will be detected when increasing the length of test sequences is predicted.",2010,0,
1074,An Efficient Test Pattern Selection Method for Improving Defect Coverage with Reduced Test Data Volume and Test Application Time,"Testing using n-detection test sets, in which a fault is detected by n (n > 1) input patterns, is being increasingly advocated to increase defect coverage. However, the data volume for an n-detection test set is often too large, resulting in high testing time and tester memory requirements. Test set selection is necessary to ensure that the most effective patterns are chosen from large test sets in a high-volume production testing environment. Test selection is also useful in a time-constrained wafer-sort environment.
The authors use a probabilistic fault model and the theory of output deviations for test set selection - the metric of output deviation is used to rank candidate test patterns without resorting to fault grading. To demonstrate the quality of the selected patterns, experimental results are presented for resistive bridging faults and non-feedback zero-resistance bridging faults in the ISCAS benchmark circuits. Our results show that for the same test length, patterns selected on the basis of output deviations are more effective than patterns selected using several other methods.",2006,0,
1075,Lightweight Fault-Tolerance Mechanism for Distributed Mobile Agent-Based Monitoring,"Thanks to the asynchronous and dynamic nature of mobile agents, a number of mobile agent-based monitoring mechanisms have been actively developed to monitor large-scale and dynamic distributed networked systems adaptively and efficiently. Among them, some mechanisms attempt to adapt to dynamic changes in various aspects such as network traffic patterns, resource addition and deletion, network topology and so on. However, failures of domain managers are very critical to providing correct, real-time and efficient monitoring functionality in a large-scale mobile agent-based distributed monitoring system. In this paper, we present a novel fault-tolerance mechanism with the following advantageous features, appropriate for large-scale and dynamic hierarchical mobile agent-based monitoring organizations. It supports fast failure detection with low failure-free overhead by having each domain manager transmit heartbeat messages to its immediate higher-level manager. Also, it minimizes the number of non-faulty monitoring managers affected by failures of domain managers. Moreover, it allows consistent failure detection actions to be performed continuously in case of agent creation, migration and termination, and is able to execute consistent takeover actions even under concurrent failures of domain managers.",2009,0,
1076,Azimuth-track level compensation to reduce blind-pointing errors of the Deep Space Network antennas,"The 34-meter antennas of the NASA Deep Space Network are wheel-and-track antennas. The latter term refers to a set of wheels at the base of the structure, which roll on a circular steel track supported by a concrete foundation ring. The track is assumed flat; however, its level varies due to manufacturing imperfections, structural loads, non-uniformity of the soil, and temperature variations. It is specified that the deviations of the azimuth-track level shall not exceed 0.5 mm. During tracking, this amplitude of deviations causes deformations of the antenna structure, resulting in pointing errors of 2 mdeg, which exceed the required accuracy for 32-GHz (Ka-band) tracking. However, structural deformations caused by the azimuth-track unevenness are repeatable; therefore, a look-up table can be created to improve the blind-pointing accuracy. This paper presents the process for creating the look-up table, describes the instrumentation necessary for determining the pointing errors, and describes the processing of inclinometer data. It derives algorithms for pointing-error estimation and for azimuth-axis tilt estimation using the inclinometer data. It compares the error corrections based on the created look-up table and actual measurements of pointing errors using the conical scanning (conscan) technique.
This comparison shows a satisfactory convergence that justifies the implementation of the approach in forthcoming NASA missions.",2000,0,
1077,Cascaded H-bridge Multilevel Inverter Drives Operating under Faulty Condition with AI-Based Fault Diagnosis and Reconfiguration,"The ability of cascaded H-bridge multilevel inverter drives (MLIDs) to operate under faulty conditions, together with an AI-based fault diagnosis and reconfiguration system, is proposed in this paper. The output phase voltages of an MLID can be used as valuable information for diagnosing faults and their locations. It is difficult to diagnose an MLID system using a mathematical model because MLID systems consist of many switching devices and their system complexity has a nonlinear factor. Therefore, a neural network (NN) classification is applied to the fault diagnosis of an MLID system. Multilayer perceptron (MLP) networks are used to identify the type and location of occurring faults. Principal component analysis (PCA) is utilized in the feature extraction process to reduce the NN input size. A lower-dimensional input space will also usually reduce the time necessary to train an NN, and the reduced noise may improve the mapping performance. A genetic algorithm (GA) is also applied to select the valuable principal components to train the NN. A reconfiguration technique is also proposed. The proposed system is validated with simulation and experimental results. The proposed fault diagnostic system requires about 6 cycles (~100 ms at 60 Hz) to clear an open-circuit fault and about 9 cycles (~150 ms at 60 Hz) to clear a short-circuit fault. The experimental and simulation results are in good agreement with each other, and the results show that the proposed system performs satisfactorily in detecting the fault type and fault location and in reconfiguration.",2007,0,
1078,Autonomous cooperation technique to achieve fault tolerance in service oriented community system,"The advancement of mobile telecommunication and wireless technologies is required to provide local but familiar services in daily life, a need that has not been satisfied through global services on the Internet. In the retail business under an evolving market, users request access to unknown but appropriate services based on their preferences and situations, and retailers need to be aware of the current requirements of the majority of consumers in specific local trade areas. Because of the transience of the requirements of the users in their trade areas, the services need to be temporary and time-limited. Therefore, the service areas need to become narrower over time. The concept of the service oriented community has been proposed to satisfy both the users' and the retailers' requirements. It consists of members in a specified area based on services, and they cooperate with each other in order to obtain mutual benefits. For the realization of the service oriented community, the systems require flexibility for the effective provision of services and fault tolerance for stable service. In the service oriented community system, the Time Distance has been introduced as an efficient measure of the distance between the users and the retailers. The Time Distance Oriented Service System architecture has been proposed to satisfy these requirements for flexible and stable services, where the nodes autonomously distribute services and reduce the service area based on the time distance.
Here, an autonomous cooperation technique for achieving fault tolerance is proposed in order to satisfy the requirement of high service availability.",2002,0,
1079,On Fault Isolation by Functional and Hardware Redundancy,"The aim of the work is to exploit some aspects of functional and hardware redundancy in fault detection and isolation tasks, using back-propagation neural networks as functional approximation devices that serve as residual generators, which are then evaluated by means of rule-based strategies. The implementation procedure is carried out with the facilities supplied by a FOUNDATION™ Fieldbus-compliant tool, which manages databases, neural network structures and training algorithms under the mentioned standard.",2006,0,
1080,"Comparative study of two modelling implementation methods of a wound-rotor induction machine: Simulation, fault diagnosis and validation","The aim of this paper is to compare two modeling methods for a wound-rotor induction machine in order to simulate it in both healthy and faulty modes. The circuit-oriented approach, which represents the machine model as a rotating transformer, will be compared to the discrete event method. First, the wound-rotor induction machine model will be described using the classical equations in the abc reference frame. The circuit-oriented method is based on a representation with only resistances, inductances and controlled voltage sources; it has been implemented using the MATLAB/Simulink software. With the discrete event method, the classical abc reference frame model is based on ordinary differential equations which are translated into a block scheme to be interpreted as a coupling of basic models based on a discrete event system specification. Finally, the performance of the two methods has been verified by comparison between simulation and experiments on a 5.5 kW, 220 V/380 V, 50 Hz, 8-pole wound-rotor induction machine working in both healthy and faulty modes under different load conditions.",2009,0,
1081,Inter-turn short circuit fault detection of wound rotor induction machines using Bispectral analysis,"The aim of this paper is to develop a technique for detecting an inter-turn short circuit in a wound-rotor induction machine working as a generator by using bispectral analysis. Bispectral analysis is able to provide more information than power spectrum analysis. In the present investigation, the rotor and stator currents of the machine operating in both healthy and faulty modes are thoroughly analyzed. In the first stage, the bispectral analysis is applied to signals generated by a wound-rotor induction machine model developed in the MATLAB/Simulink environment. This model allows the simulation of the inter-turn short circuit in the stator of the machine in any system with control circuits and/or connections to the grid by means of power electronics converters. Then, the same analysis has been performed using experiments on a 5.5 kW, 220 V/380 V, 50 Hz, 8-pole wound-rotor induction machine working in both healthy and faulty modes at different load conditions. Finally, a comparison between simulation and experimental results is given. Very promising results are obtained and presented.
The results and the analysis indicate that the bispectrum can be successfully applied to the analysis of machine asymmetry faults and stator winding faults.",2010,0,
1082,Fault detection algorithm based on a discrete-time observer residual generator - a GPS application,"The aim of this paper is to present a design methodology for an observer-based fault detection filter. This approach allows the computation of a filter gain such that both fault detection and disturbance attenuation requirements are fulfilled. The developments are based on H2 and H∞ optimization-type methods in the discrete-time framework. The existence conditions for the observer-based detection filter are expressed in terms of the feasibility of some matrix inequalities. An important advantage of the described approach is the possibility of extending the results to other classes of systems (such as sampled-data systems, stochastic systems with Markovian jumps corrupted by multiplicative white noise, etc.) for which disturbance attenuation results, expressed in terms of the feasibility of appropriate Linear Matrix Inequality (LMI) systems, are available. In order to illustrate the derived theoretical results, a case study concerning the problem of Global Positioning System (GPS) integrity monitoring is considered.",2007,0,
1083,Comparison of modelling methods and of diagnostic of asynchronous motor in case of defects,"The aim of this paper is to provide the reader with a basis for comparing several methods for the modelling and diagnosis of a faulted induction machine. In the case of high-precision modelling, the paper presents several methods used to simulate the faulted induction machine, such as the extended Park model, the analytical three-phase model, the coupled magnetic circuit model, the reluctance network method and the finite element method. Also, some compact modelling approaches are described, such as the electrical and thermal behavior models used for the parameter estimation diagnosis method, and the pattern recognition approach. These different modelling and diagnosis methods are tested in the case of the broken rotor bar fault.",2004,0,
1084,From planning of complex bone deformities correction to computer aided operation,"The aim of this project is to develop a new approach for the correction of bone deformities using a computer-aided surgery tool. We use measurement-based planning of corrections on CT data and an ultrasound-based navigation system for the real-time support of surgical operations. We implement a single-cut, close-and-open wedge osteotomy correction of complex 3D deformities. In the discussed method, virtual reality and image processing techniques allow planning and support of the operation on a personal computer. The designed software prototype significantly simplifies the process of osteotomy planning and allows surgeon training and assistance. The program reduces the total time of operation planning and significantly increases the accuracy of the correction.",2001,0,
1085,Final Prediction Error of Autoregressive Model as a New Feature in the Analysis of Heart Rate Variability,"The aim of this study is to offer a new heart rate variability (HRV) index that increases the accuracy of discriminating patients with congestive heart failure (CHF) from the control group. For this purpose, final prediction errors (FPE), which show the goodness of fit of the autoregressive (AR) model, are calculated for model orders from 1 to 100.
Although the optimal AR model order and FPE values are widely used in the literature, they have not been used as possible HRV indices. In this study, we used the FPE as an HRV feature for discriminating patients with CHF from normal subjects and made a comparison with other common HRV indices. As a result, we showed that the FPE of the AR model is a potentially significant HRV feature.",2007,0,
1086,A novel test system for automated surface characterization of performance relevant defects,"Summary form only given. Higher areal storage densities increase the relevance of smaller media defects, which might play a major role in HDD performance. In particular, the decrease in the distance of the write and read elements to the spinning media (a decrease from 20 nm to 5 nm is expected in the next 5 years) requires very careful control of defect heights. Furthermore, anti-ferromagnetically coupled media may cause new types of defects, which cannot be detected by any topographically sensitive measurement technique. We have developed a non-destructive, clean-room compatible measurement tool which allows us to test the entire surface with respect to the magnetic and topographic properties of the magnetic storage media, as well as to characterize specific areas on a nm² scale by means of scanning probe microscopy (SPM). SPM techniques are already used for defect characterization, but they suffer from low throughput, are often destructive, and defect areas cannot be re-tested with exactly the same recording parameters. We have combined a spin stand tester (SST) and an SPM into a single unit. A positioning procedure allows us to automatically position specific areas which have been detected by the SST for SPM characterization.",2002,0,
1087,Fault-aware job scheduling for BlueGene/L systems,"Summary form only given. Large-scale systems like BlueGene/L are susceptible to a number of software and hardware failures that can affect system performance. We evaluate the effectiveness of a previously developed job scheduling algorithm for BlueGene/L in the presence of faults. We have developed two new job-scheduling algorithms that consider failures while scheduling jobs. We have also evaluated the impact of these algorithms on average bounded slowdown, average response time and system utilization, considering different levels of proactive failure prediction and prevention techniques reported in the literature. Our simulation studies show that the use of these new algorithms with even trivial fault prediction confidence or accuracy levels (as low as 10%) can significantly improve the performance of the BlueGene/L system.",2004,0,
1088,A fault-tolerant approach to network security,"Summary form only given. The increasing use of the Internet, especially for internal and business-to-business applications, has resulted in the need for increased security for all networked systems to avoid unauthorized access and use. A failure of network security can effectively close the business; its availability is vital to operations. Vital functions such as firewalls and VPNs must remain in operation without loss of time for failover, without loss of data, and must be able to be placed even at remote locations where support personnel may not be readily available. Network firewalls are the first, and often the only, line of defense against an attack. However, the firewall can be a double-edged sword.
In operation, the firewall protects the network from everything from Denial of Service attacks to the entry of known viruses and unauthorized intrusion. If the firewall fails, there are generally only two options: leave the network open to all, or shut down access for everyone. The default condition is to close everything off, but this can be as disastrous as leaving the network open. Due to the importance of the firewall, most leading firewall software provides some method of establishing a form of failover redundancy for high availability. Yet in most cases this means some form of clustering using a secondary system as a backup, with specialty software to detect and respond to a failure of the primary firewall. Such a clustered approach introduces additional complexity when establishing and configuring the firewall, and additional complexity when upgrading. It also adds dramatically to the cost, not only in the hardware for the firewall, but in additional software copies and in the expertise for clustering support software required to establish and maintain the cluster. The approach we discuss examines the creation of network security based on a hardware approach to fault tolerance. This approach dramatically reduces the system complexity, simultaneously eliminating the need for special clustering software and special expertise for configuring the system for the kind of continuous availability that is the objective of the network security application. In addition, because the hardware approach is designed in from the inception of the system, there are additional advantages. The fault tolerance is not an afterthought, but rather the purpose of the hardware, meaning that the system can be made to function very smoothly with very little administration. Failure of a part of the system is seamlessly recovered by the redundant elements, without loss of data in memory or loss of state for the system. In sum, this paper discusses the ability to create network security that reaches the standard of being continuously available, what is often referred to as the ""Holy Grail of reliability"": 99.999% uptime.",2001,0,
1089,A fault tolerant topology control in wireless sensor networks,"Summary form only given. Topology control of wireless sensor networks (WSNs) is a key design challenge in terms of extending the lifetime of the network. The paper presents a fault-tolerant topology control that adds necessary redundant nodes to the network's simple communication backbone, which results in a higher vertex-connectivity degree. It provides not only fault tolerance against unreliable node failure, but also support for upper-level protocols. The paper also identifies several factors and synchronization methods which may affect the redundant node selection. A simulation study shows the improvement of network lifetime with a desired vertex-connectivity degree.",2005,0,
1090,Analysis of the Timing System Error of the Constellation Automatic Navigation,"The system integrated clock (SIC) plays an important role in implementing high-accuracy constellation automatic time synchronization and information exchange. In the establishment of the SIC, error and noise are unavoidably introduced. In this paper, various error sources in the process are analyzed first, and then an error-reduction method under the model of two-way plus common-view time comparison is put forward and analyzed.
Theoretical research and simulation experiments show that the constellation time synchronization error is below 10 seconds.",2007,0,
1091,Online fault detection in a hardware/software co-design environment: system partitioning,"System reliability aspects are receiving a lot of attention in the design of systems for critical application fields. Often these issues are approached at low abstraction levels, toward the end of the design process, introducing significant overheads. By introducing fault detection requirements at the system level, when a hardware/software co-design process is to be carried out, it is possible to evaluate the overheads and benefits of different solutions. The traditional partitioning phase has been modified in order to take the reliability issues into account when selecting, among the several identified reliable solutions, the one that best responds to the user's requirements. The paper presents the partitioning for a co-design flow aimed at providing fault detection properties to the final system, selecting the hardware tasks and the software tasks for implementing both the system functionality and the checking capabilities.",2001,0,
1092,Adaptive Lossy Error Protection architecture in H.264 video transmission,"Systematic Lossy Error Protection (SLEP) is a robust error-resilient mechanism which uses Wyner-Ziv coding to protect the video bitstream. In this paper, we propose a low-overhead adaptive lossy error protection (ALEP) mechanism that provides a good trade-off between error resilience and decoded video quality. The proposed method can generate appropriate redundant slices to provide proper error correction capability for varying channel conditions. In our simulation results, the proposed method maintains good video quality at low packet loss rates compared to the original SLEP and still provides sufficient error correction capability at high packet loss rates. It achieves 2-3 dB PSNR improvement at a 5% packet loss rate for various video sequences in our simulations.",2009,0,
1093,The design and use of persistent memory on the DNCP hardware fault-tolerant platform,"Systems that are designed to recover from system failure due to software faults of the operating system and/or application typically require a means of persistently storing a subset of the state of the application. Disk drives are most often used as this persistent storage, but at a performance cost incurred repeatedly during normal execution as well as again at recovery time. Academic work has pioneered the concept of using a region of conventional memory, protecting it, making it persist across operating system crashes and reboots, and making it as reliable as a disk. This can be used in place of a disk to alleviate the performance penalties noted above. This paper describes a project to take these concepts and apply them in a RAM disk-based realization of persistent memory (PM) as part of the Lucent DNCP (Distributed Network Control Platform) hardware fault-tolerant platform, implemented for the HP-UX operating system, focusing on its use by a main-memory database (MMDB) system.
While we found that the reduction in recovery time was small relative to the reboot time, we achieved a nearly 40% reduction in execution time for an MMDB benchmark run on the PM as opposed to its normal use of a disk for achieving recoverability.",2001,0,
1094,Fault Injection-based Reliability Evaluation of SoPCs,"Systems-on-programmable-chip (SoPCs) include processors, memories and programmable logic that allow multiple application requirements, such as high performance, reconfigurability and low cost, to be met. Due to these characteristics, they are also becoming very attractive for safety-critical applications. However, the issue of assessing the reliability they can provide and debugging the possible safety-related mechanisms they embed is still open. In this paper, we present a new fault-injection approach for evaluating the impact of transient faults in SoPCs. Fault-injection experiments are reported on a case study consisting of a Web server implemented on a Xilinx Virtex-II FPGA embedding a PowerPC 405 and running the whole TCP/IP stack.",2006,0,
1095,Fault Propagation in Tabular Expression-Based Specifications,"Tabular expressions have been used in industry for many years to precisely document software in a readable notation. In this paper, we propose a fault-based testing technique that traces the propagation of faults from the expression in each cell of a tabular expression to the output of the program under test. The technique has been formalized in the form of abstract test case constraints, also represented by tabular expressions, so that it can be easily applied and automated.",2008,0,
1096,Influence of Nonconsecutive Bar Breakages in Motor Current Signature Analysis for the Diagnosis of Rotor Faults in Induction Motors,"Studies of rotor asymmetries in squirrel-cage induction motors have traditionally focused on analyses of the effects of the breakage of adjacent bars on the magnetic field and current spectrum. However, major motor manufacturers have reported cases where damaged bars are randomly distributed around the rotor perimeter of large HV machines. In some of these cases, the motors were being monitored under maintenance programs based on motor current signature analysis (MCSA), and the degree of degradation found in the rotor was much greater than that predicted by analysis of their current spectra. For this reason, a complete study was carried out, comprising a theoretical analysis, as well as simulations and tests, to investigate the influence that the number and location of faulty bars have on the traditional MCSA diagnosis procedure. From the theoretical analysis, based on the application of the fault-current approach and space-vector theory, a very simple method is deduced, which enables the left sideband amplitude to be calculated for any double bar breakage, per unit of the sideband amplitude corresponding to a single breakage. The proposed methodology is generalized for the estimation of the sideband amplitude in the case of multiple bar breakages and validated by simulation using a finite-element-based model, as well as by laboratory tests.",2010,0,
1097,Investigation of Fault-Tolerant Adaptive Filtering for Noisy ECG Signals,"Studies show that electrocardiogram (ECG) computer programs perform at least as well as human observers in ECG measurement and coding, and can replace the cardiologist in epidemiological studies and clinical trials (Kors and Herpen, 2001).
However, in order to also replace the cardiologist in clinical settings, such as for out-patients, better systems are required to reduce ambient noise while maintaining signal sensitivity. Therefore, the objective of this work was to develop an adaptive filter to remove the contaminating signal in order to better obtain and interpret the electrocardiogram (ECG) data. To achieve reliability, real-time computing systems must be fault-tolerant. This paper proposes a fault-tolerant adaptive filter for noise cancellation of ECG signals. A comparison of the performance and reliability of non-fault-tolerant and fault-tolerant adaptive filters is performed. Experimental results showed that the fault-tolerant adaptive filter not only successfully extracts the ECG signals, but is also very reliable.",2007,0,
1098,Substation fault analysis requirements,"Substation automation has a critical role in power systems. Substations are responsible for many protection, control and monitoring functions that allow robust routing of power from generators to loads through a complex network of transmission lines. With the latest technology developments, many intelligent electronic devices (IEDs) available in substations today are capable of performing enhanced functionalities beyond their basic function. This brings an opportunity for adding new functionalities that go well beyond what the traditional substation automation solutions have provided. This presentation summarizes requirements for automated fault analysis functions that may be performed in substations in the future. Particular focus is on requirements for implementing a new concept of merging operational and non-operational data with the goal of improving fault analysis. The requirements are aimed at expanding the substation automation role in automated fault analysis towards better serving many utility groups: operations, protection and asset management.",2009,0,1099
1099,Substation fault analysis requirements,"Substation automation has a critical role in power systems. Substations are responsible for protection, control and monitoring functions that allow robust routing of power from generators to loads through a complex network of transmission lines. With the latest technology developments, many intelligent electronic devices (IEDs) available in substations today are capable of performing enhanced functionalities beyond their basic function. This brings an opportunity for adding new functionalities that go well beyond what the traditional substation automation solutions have provided.",2010,0,
1100,SUDS: An Infrastructure for Creating Bug Detection Tools,"SUDS is a powerful infrastructure for creating dynamic bug detection tools. It contains phases for both static analysis and dynamic instrumentation, allowing users to create tools that take advantage of both paradigms. The results of static analysis phases can be used to improve the quality of dynamic bug detection tools created with SUDS and could be expanded to find defects statically. The instrumentation engine is designed in a manner that allows users to create their own correctness models quickly but is flexible enough to support construction of a wide range of different tools.
The effectiveness of SUDS is demonstrated by showing that it is capable of finding bugs and that performance is improved when static analysis is used to eliminate unnecessary instrumentation.,2007,0, 1101,Conformal method to eliminate the ADI-FDTD staircasing errors,"The alternating-direction-implicit finite-difference time-domain (ADI-FDTD) method is an unconditionally stable method and allows the time step to be increased beyond the Courant-Friedrich-Levy (CFL) stability condition. This method is potentially very useful for modeling electrically small but complex features often encountered in applications. As in the regular FDTD method, however, the spatial discretization in the ADI-FDTD method is only first-order accurate for discontinuous media; several researchers have shown that the errors can be very high when the regular ADI-FDTD method is applied to such discontinuous media. On the other hand, the conformal FDTD method has recently emerged as an efficient FDTD method with higher order accuracy. In this work, a second-order accurate ADI-FDTD method using the conformal approximation of spatial derivatives is proposed. This new scheme, called the ADI-CFDTD method, retains the second-order accuracy in both temporal and spatial discretizations even for discontinuous media with metallic structures, and is unconditionally stable. 2D and 3D examples demonstrate the efficacy of this method and its application in EMC problems.",2006,0, 1102,Study for Amplitude Error and Space/Time Orthogonal Error of Two-Phase Time-Grating Sensor,"The amplitude error, space orthogonal error and time orthogonal error of the time grating sensor are the main factors affecting its accuracy, and they need to be eliminated. First, the influence of the three kinds of errors on the sensor measurement error is discussed in detail, and this influence is simulated with Matlab. Second, error detection methods are studied, and an orthogonal location test method is proposed that can realize detection of all three errors. Third, a new compensation method for the time orthogonal error and space orthogonal error is studied, which can eliminate both kinds of orthogonal error by constructing a new signal. The method, simulated with Matlab, can completely eliminate the two kinds of orthogonal error. Finally, experiments show that this method of detecting and correcting errors, such as the amplitude error, space orthogonal error and time orthogonal error, can effectively reduce errors and greatly improve the sensor accuracy.",2010,0, 1103,Starting Synchrophasor measurements in Egypt: A pilot project using fault recorders,"The analysis of the large system blackouts of recent years has pointed out the need for the function of real-time wide area monitoring, protection and control (RT WAM PAC). Phasor measurements (PM) have proved to be the vital source of data necessary for many applications in power system RT WAM PAC. The paper explains this new technology, the reasons behind its use and points out the role it plays in the overall view of RT WAM PAC. Recent applications of synchrophasor projects all over the world are highlighted. Generic and specific technical requirements for synchrophasor measurements are explained.
The paper also illustrates, using an actual recorded case from the fault recording system in the Egyptian Power Network, how the synchronized voltage phasors at different locations within the system could be presented to the system operator during different operating states, using a software program designed specially for this purpose. A pilot project for PM in Egypt using the existing disturbance recorders is presented, showing the benefits which could be gained from such a project.",2008,0, 1104,Autonomous Fault Emulation: A New FPGA-Based Acceleration System for Hardness Evaluation,"The appearance of nanometer technologies has produced a significant increase of integrated circuit sensitivity to radiation, making the occurrence of soft errors much more frequent, not only in applications working in harsh environments, like aerospace circuits, but also for applications working at the earth's surface. Therefore, hardened circuits are currently demanded in many applications where fault tolerance was not a concern in the very near past. To this purpose, efficient hardness evaluation solutions are required to deal with the increasing size and complexity of modern VLSI circuits. In this paper, a very fast and cost effective solution for SEU sensitivity evaluation is presented. The proposed approach uses FPGA emulation in an autonomous manner to fully exploit the FPGA emulation speed. Three different techniques to implement it are proposed and analyzed. Experimental results show that the proposed Autonomous Emulation approach can reach execution rates higher than one million faults per second, providing a performance improvement of two orders of magnitude with respect to previous approaches. These rates make it possible to run very large fault injection campaigns that were not possible in the past",2007,0, 1105,RECASTER: synthesis of fault-tolerant embedded systems based on dynamically reconfigurable FPGAs,"Summary form only given. We present a fault-tolerant embedded system design methodology that uses dynamically reconfigurable FPGAs as spares for several dedicated hardware components. The key advantage is the reduction of area or cost as compared to dedicated spares. During normal operation, each FPGA is dynamically reconfigured with system tasks and can replace any of them if a fault is detected. For a specified coverage, i.e., system tasks that might be affected by a single fault, our algorithm allocates a set of FPGAs and determines schedules, including task executions and FPGA reconfigurations, that provide the required redundancy while satisfying deadlines and minimizing either area or cost. For each task requiring simple redundancy, the algorithm also determines a schedule in which an FPGA replaces this task. Our experimental results indicate that, with a smaller area and cost, this collective redundancy based on dynamically reconfigurable FPGAs allows system recovery from a larger number of single faults, each affecting one task, as compared to the conventional spare approach.",2004,0, 1106,Characterization of printed solder paste excess and bridge related defects,"Surface mount technology (SMT) involves the printing of solder paste on to printed circuit board (PCB) interconnection pads prior to component placement and reflow soldering. This paper focuses on the solder paste deposition process. With an approximate cause ratio of 50-70% of post assembly defects, solder paste deposition represents the most significant cause initiator of the three sub-processes.
Paradigmatic cause models, and associated design rules and effects data, are extrapolated from academic and industrial literature and formulated into physical models that identify and integrate the process into three discrete solder paste deposition events - i.e. (i) stencil / PCB alignment, (ii) print stroke / aperture filling and (iii) stencil separation / paste transfer. The project's industrial partners are producers of safety-critical products and have recognised the in-service reliability benefits of electro-mechanical interface elimination when multiple smaller circuit designs are assimilated into one larger printed circuit assembly (PCA). However, increased solder paste deposition related defect rates have been reported with larger PCAs and, therefore, print process physical models need to account for size related phenomena.",2008,0, 1107,Enforcing integrability by error correction using l1-minimization,"Surface reconstruction from gradient fields is an important final step in several applications involving gradient manipulations and estimation. Typically, the resulting gradient field is non-integrable due to linear/non-linear gradient manipulations, or due to presence of noise/outliers in gradient estimation. In this paper, we analyze integrability as error correction, inspired from recent work in compressed sensing, particularly l0-l1 equivalence. We propose to obtain the surface by finding the gradient field which best fits the corrupted gradient field in the l1 sense. We present an exhaustive analysis of the properties of the l1 solution for gradient field integration using linear algebra and graph analogy. We consider three cases: (a) noise, but no outliers (b) no noise but outliers and (c) presence of both noise and outliers in the given gradient field. We show that the l1 solution performs as well as least squares in the absence of outliers. While previous l0-l1 equivalence work has focused on the number of errors (outliers), we show that the location of errors is equally important for gradient field integration. We characterize the l1 solution both in terms of location and number of outliers, and outline scenarios where the l1 solution is equivalent to the l0 solution. We also show that when the l1 solution is not able to remove outliers, the property of local error confinement holds: i.e., the errors do not propagate to the entire surface as in least squares. We compare with previous techniques and show that the l1 solution performs well across all scenarios without the need for any tunable parameter adjustments.",2009,0, 1108,Error probability of satellite communication system in the presence of transmitting ground station HPA nonlinearity,"Taking the uplink and downlink cochannel interference and noise into account, we determine the error probability in detecting a binary phase-shift keying (BPSK) signal transmitted over a satellite system containing two high power amplifiers (HPA). The first one is the constituent part of the transmitting ground station and the second one is the constituent part of the satellite station.
The emphasis is placed on determining the system performance degradation imposed by the influence of the nonlinear characteristic of the HPA in the transmitting ground station in combination with the negative influences of the uplink and downlink cochannel interference, as well as the nonlinear characteristic of the HPA in the satellite station",2001,0, 1109,Design of quasi-cyclic Tanner codes with low error floors,"Tanner codes represent a broad class of graph-based coding schemes, including low-density parity-check (LDPC) and turbo codes. Whereas many different classes of LDPC and turbo codes have been proposed and studied in the past decade, very little work has been performed on the broader class of Tanner codes. In this paper we propose a design technique which leads to efficiently encodable quasi-cyclic Tanner codes based on both Hamming and single parity check (SPC) nodes. These codes are characterized by fast message-passing decoders and can be encoded using shift-register-based circuits. The resulting schemes exhibit excellent performance in both the error floor and waterfall regions on the additive white Gaussian noise channel.",2006,0, 1110,Tracing the Defect Location (TDL),"TDL is a system that converts an analog process to a digital one. During the process of building the substrate, many defects can be generated by both human and environmental causes. To reduce redundant processes such as manual handling, TDL can substitute a logical process for physical operation.",2007,0, 1111,Point-of-Care Support for Error-Free Medication Process,"Technological advances and critical needs have led to medication administration devices and tools designed to prevent and reduce medication errors. They include smart medication carts, robots and dispensers for professionals and smart dispensers for naive users. This paper describes architectures and interfaces of these devices. It also discusses the support environment that is needed to increase their effectiveness and the missing standards that will enable their integration into the medication process tool chain.",2007,0, 1112,Design and implementation of equipments fault management system based on telecom enterprise data center,"The telecom enterprise data center is in charge of local data switching equipment, which plays an important role in network interconnection, switching, and route selection. Whether this equipment runs well is therefore closely related to the operation of the local telecom network, but the equipment requires frequent software version updates and maintenance and suffers hardware failures, all of which demand fast queries on the devices; an information system can solve this problem. The designed system function modules include basic information management, software version management, operation records management and fault records management, as well as Excel file data processing. Well-designed database software helps to establish such a system. At the end of the paper, the system's deficiencies are pointed out.",2010,0, 1113,Novel method for detection of transformer winding faults using Sweep Frequency Response Analysis,"Sweep frequency response analysis (SFRA) is an established tool for determining core deformations, bulk winding movement relative to each other, open circuits and short circuit turns, residual magnetism, deformation within the main transformer winding, axial shift of winding, etc. This test is carried out on the transformer without actually opening it and is an off-line test.
This paper explains the fundamental studies of SFRA measurement on basic electrical circuits, which can be extended for studying the mechanical integrity of a transformer after a short circuit fault, transportation, etc.",2007,0, 1114,Modeling and Measuring Error Contributions in Stepwise Synthesized Josephson Sine Waves,"Synthesizing Josephson stepwise approximated sine waves to produce an AC quantum standard is limited by errors in the waveform transitions between quantized voltages. Many parameters have an impact on the shape of the transients. A simple model with a single equivalent time constant can be used to evaluate the influence of these parameters. Measurements of the transients allow establishment of the value of the equivalent time constant and prediction of the response to variations of the parameters. We have experimentally confirmed the influence of changes in the bias current used for the quantized voltages and in the microwave power applied. Under usual operating conditions, the model predicts that increasing the number of samples per period nonlinearly reduces the difference between measured and ideal root-mean-square (rms) values. This behavior was confirmed through measurements with thermal converters.",2009,0, 1115,A Service-Oriented Fault-Tolerant Environment for Telecom Operation Support Systems,"The complexity that telecommunications companies are faced with in their Operations Support Systems (OSSs) is especially apparent in their billing systems. These systems are required to handle large volumes of data located on different legacy systems operating on heterogeneous platforms. These systems are often integrated using service oriented architecture (SOA). The successful completion of billing processes is essential to the operation of a telecommunications company, as it ensures the smooth inflow of income from services rendered. When a billing process fails, it is generally the result of one of two types of faults - availability faults or reliability faults. Unless the system has been designed with fault-tolerance in mind, failures constitute an extremely costly and time consuming occurrence. This paper proposes a Service Oriented Fault Tolerant Environment (SOFTEN) that seeks to address this problem directly. The SOFTEN described here is composed of three key modules: a Fault Tolerant Proxy (FT Proxy), Fault Tolerant Services (FT Services), and an Adaptable Service Bus (ASB). The FT Proxy allows developers to define strategies and responses for dealing with the occurrence of both availability and reliability faults. FT Services are fault handling mechanisms that enact the procedures established in the FT Proxy. The ASB provides intelligent routing and data transformation between the billing process and the billing services. Further, the paper presents results that demonstrate the effectiveness of the solution, both in increasing reliability and in reducing calculation time for completion of the billing process. An implementation of this system has been operational at the ChungHwa Telecom Company in Taiwan since May 2008 and provides complete support for its billing application. As a result, the billing process cycle time has been reduced from 10-16 days to 3-4 days, which cleared the way for further growth of the business.",2008,0, 1116,Performability Analysis of a Fault Tolerant Sectorized Cellular Network,"The concept of performability takes into account both the availability and performance of a system.
The issue of performability is very important for cellular networks due to their inherent characteristics, such as scarce radio resources, high failure & outage rates and, above all, the anytime, anywhere service expectation of the subscribers. In this work, first an algorithm is proposed for providing coverage to the users of a failed cell/sector till it is repaired. The algorithm distributes the impact of a cell failure over the whole network such that fairness is maintained in the system despite the failure; thus it provides fault tolerance to the system. Then the performability of the cellular system is studied with the proposed algorithm",2006,0, 1117,Intercircuit and cross-country fault detection and classification using artificial neural network,"The conductor geometry in double circuit lines makes them prone to multi-circuit faults like earthed and unearthed intercircuit faults and cross country faults. The probability of inter-circuit faults is increased when multiple lines are mounted on the same tower. Mutual coupling is also present during un-earthed intercircuit faults. The phase-to-phase inter-circuit fault (without earth connection) provokes the presence of zero-sequence current, which is detected by ground distance relays. The consequences of intercircuit faults are often not considered in conventional relay design philosophies. In this paper both earthed and un-earthed intercircuit faults are investigated. An artificial neural network based technique has been employed for detection and faulty phase identification (classification) of intercircuit and cross-country faults. The study is carried out on a MATLAB platform and results of the ANN based fault detector/classifier are presented and discussed. The simulated test results show that this technique detects and identifies the faulted phase correctly and quickly over a wide range of power system operating conditions, which is a striking benefit of the ANN based technique.",2010,0, 1118,Recovery of fault-tolerant real-time scheduling algorithm for tolerating multiple transient faults,"The consequences of missing a deadline of hard real-time system tasks may be catastrophic. Moreover, in case of faults, a deadline can be missed if the time taken for recovery is not taken into account during the phase when tasks are submitted or accepted to the system. However, when faults occur, tasks may miss deadlines even if fault tolerance is employed, because when an erroneous task with a larger execution time executes up to the end of its total execution time even if the error is detected early, this unnecessary execution of the erroneous task provides no additional slack time in the schedule to mitigate the effect of the error by running an additional copy of the same task without missing a deadline. In this paper, a recovery mechanism is proposed to augment the fault-tolerant real-time scheduling algorithm RM-FT, which achieves node level fault tolerance (NLFT) using the temporal error masking (TEM) technique based on the rate monotonic (RM) scheduling algorithm. Several hardware and software error detection mechanisms (EDM), e.g. watchdog processors or executable assertions, can detect an error before an erroneous task finishes its full execution, and can immediately stop execution.
In this paper, using the advantage of such early detection by EDM, a recovery algorithm RM-FT-RECOVERY is proposed to find an upper bound, denoted by Edm Bound, on the execution time of the tasks, and a mechanism is developed to provide additional slack time to a fault-tolerant real-time schedule so that additional task copies can be scheduled when an error occurs.",2007,0, 1119,Modeling of hydraulic systems tailored to diagnostic fault detection systems,"The consistent and reliable operation of hydraulic componentry is paramount for many systems. From spacecraft to the most basic automotive bottle jack, an undetected failure can have significant consequences if not noticed in time. Many hydraulic systems have diagnostics capabilities but these are normally very limited in scope. They can detect events only related to specifically monitored components or general system failures. Typically these diagnostic systems are designed after the fact and tuned to meet goals. When hydraulic systems are designed, simulation models are frequently used to gain some idea of the finished system's performance. Rarely is the simulation model designed to accommodate and optimize a diagnostic capability. The number and placement of diagnostic sensors can have a significant effect on the ability of a diagnostic system to resolve faults early in their evolution cycle. This paper describes a technique developed at the Penn State Applied Research Laboratory's Systems Operations and Automation Department for the design of hydraulic simulation models that pre-incorporate advanced fault diagnostic design features. This technique is being applied to a US Army M1120 heavy expanded mobility tactical truck (HEMTT) load handling system (LHS) supply vehicle that utilizes a palletized hydraulic loading system. The test vehicle has a hydraulic system that was fully instrumented with sensors for this work. This paper addresses a piece of the diagnostic puzzle that has until now not been looked at: namely, what additional features in a simulation model allow for optimal placement and monitoring of the hydraulic system. The researchers have found that this typically leads to readdressing or removing certain engineering assumptions that are typically made when designing simulation models. When this is done, the models are more flexible when it comes to diagnostic implementation",2006,0, 1120,Error detection and correction in VLSI systems by online testing and retrying,"The conventional dual-module redundant (DMR) structure with comparison is a kind of self-testing structure used in some commercial general-purpose computers, such as the IBM 4341 processor and the IBM S/390 G3 and G4. One of its drawbacks is that it does not have error recovery ability. Retrying finds applications because of its low hardware overhead. Micro-rollback and instruction retry, as in the VAX 8600, are effective methods for rapid error recovery in which the error detection latency is only a few clock cycles.",2003,0, 1121,Neuro-Fuzzy Fabric Defect Detection and Classification for Knitting Machine,"The conventional method in textile industries for fault detection is human inspection. In this method, the classification of produced fabrics and online identification of faults in the production line are done by human vision. The goal of this study is to replace the current human visual inspection methods with an automated visual inspection system that provides high accuracy of defect detection and classification.
The proposed system consists of a computer based vision system (for capturing an image), image processing tools (for adjusting the image and extracting the features) and a system for detecting and classifying fabric defects. Two types of features are extracted (tonal and texture). Two types of systems are used. The first system is based on fuzzy clustering using fuzzy c-means clustering (FCM). The second system is an adaptive neural-fuzzy inference system (ANFIS). Experimental results show that the proposed systems are capable of detecting and classifying fabric defects, and that using both tonal and texture features gives a higher detection and classification rate than using either one individually",2006,0, 1122,Progressive PCA modeling for enhanced fault diagnosis in a batch process,"The conventional process monitoring procedure using principal component analysis (PCA) can show which variable is highly related to the fault by looking at the contribution plots for the monitoring statistics, SPE (squared prediction errors) and T2. However, this procedure is not able to determine if the variable is just affected by the fault or is the cause of the fault. In addition, it is not able to show fault propagation through the process variables during the process time. The proposed progressive PCA modeling procedure can identify all variables related to the fault through progressively removing the identified variables and PCA modeling with the remaining variables. It can also provide timing information on when abnormal behaviors are observed for the identified variables by using time series SPE plots with control limits estimated by a weighted chi-squared distribution. Based on the timing information, it is able to build a flow chart showing the fault propagation paths. The proposed method is demonstrated on a benchmark fed-batch penicillin process simulator.",2010,0, 1123,Increasing Software Engineering Efficiency Through Defect Tracking Integration,"The creation of a collaboration system to facilitate updates to the defect tracking system, by extracting the defect status and resolution from the software engineers' code comments, will eliminate duplicated software engineer workload and inaccurate updates to the defect tracking system that result in lost or inaccurate metrics. This collaboration system will result in significantly improved productivity, traceability, and process compliance. Typically software quality assurance specialists will test the software and log defects within a defect tracking system. Software engineers will then correct the defect in the Integrated Development Environment (IDE), and update the status of the defect in the defect tracking system. Software engineers are required to enter the same information in the code comments, version control, and defect tracking system, causing wasted software engineer time and lost opportunities to effectively capture historical metrics.",2006,0, 1124,Selection of currents patterns using SVMs for locating faults in radial power systems,"The current regulatory framework demands better quality of service from electric utilities, using the SAIDI and SAIFI indexes as measures of non-scheduled power outages. In this paper, a selection of characteristics from protective relays using a set of support vector machines (SVM) is proposed in order to solve the fault location problem in power distribution systems. To show the effectiveness of the proposed methodology, a large scale power distribution system was used.
Results show lower validation errors, which are associated with high performance. Opportune fault location in power distribution systems plays an important role in service improvement.",2008,0, 1125,Fault diagnosis and debugging of SOPC based on Rough Sets Theory,"As embedded systems develop, they become more and more complex. Thus, it is more difficult for designers and developers to diagnose and debug an embedded system. As one of the most advanced technologies, SOPC is a hardware and software integrated embedded system. In this paper, a fault diagnosis and debugging system based on Rough Sets Theory is presented to help identify error causes. It can quickly diagnose system errors. Finally, a case is presented to verify the efficiency and effectiveness of the method.",2010,0, 1126,Fault Location in Distribution Systems by Means of a Statistical Model,"The enhancement of power distribution system reliability requires a great investment, but not all utilities are in a position to assume it. Therefore, any strategy that allows the improvement of reliability should be reflected directly in the decrease of the duration and frequency of interruptions. In this paper an alternative solution to the problems of continuity associated with fault location is presented. A methodology of a statistical nature is proposed using mixtures of distributions. With this approach a statistical model is obtained from the extraction of characteristic patterns from the signals registered by measurement equipment, along with the parameters and topology of the network during an event. The purpose of this methodology is to offer an economical, easily implemented alternative for the development of strategies oriented to improving reliability by decreasing the attention and recovery times of the system",2006,0, 1127,Determination of BGA structural defects and solder joint defects by 3D X-ray laminography,"The equipment and software for X-ray laminography have advanced rapidly in recent years. The latest systems can distinguish features as small as 5 μm in diameter and identify their locations to within 10 μm in three dimensions within an IC package. Details of complex closely spaced structures can be extracted readily with recently developed microlaminographs. At this resolution, the method is termed microlaminography. In this paper, the technology, methodology and results from a microlaminography system developed for failure analysis in IC packaging are presented. The copper traces in the built up layers of a BGA substrate were extracted and analysed individually. Bond-wire shorts in the plane of the solder resist in a lot of BGA assemblies were located and identified with subsequent verification by destructive physical analysis (DPA). 3D reconstructions of individual solder balls within an assembly were created and examined for defects. These analyses could not have been done by normal 2D X-ray; formerly only DPA could extract such information",2001,0, 1128,Weighted motion vectors of double candidate blocks based temporal error concealment method for video transmission over wireless network,"The error concealment (EC) method for video transmission over wireless networks recommended by H.264 cannot reconstruct the corrupted image effectively, which is caused partly by the inaccurately predicted motion vector (MV). Thus a temporal error concealment (TEC) method based on weighted motion vectors of double candidate blocks in the previous frame is proposed in this paper.
The proposed method conceals a corrupted block by the MV given by weighting the motion vectors of double candidate blocks according to their different accuracy. Moreover, to improve the accuracy of a candidate block and its MV when searching for a matched block in the reference frame, the neighboring blocks of the current corrupted block were assigned different reliability coefficients in terms of their different contributions to recovering the corrupted block. The performance, on several standard video sequences, of the proposed TEC method has been compared with that obtained by the EC method in H.264. Experimental results show that the proposed method obviously outperforms the method in H.264 in both PSNR and visual quality.",2010,0, 1129,Soft-error resilience of the IBM POWER6 processor,"The error detection and correction capability of the IBM POWER6 processor enables high tolerance to single-event upsets. The soft-error resilience was tested with proton beam- and neutron beam-induced fault injection. Additionally, statistical fault injection was performed on a hardware-emulated POWER6 processor simulation model. The error resiliency is described in terms of the proportion of latch upset events that result in vanished errors, corrected errors, checkstops, and incorrect architected states.",2008,0, 1130,Error Detection Reliability of LTE CRC Coding,"The error detection performance of CRC coding in LTE with general two-level early stopping algorithms for turbo decoding is investigated. Analytical models for the probability of block error and undetected block error at the code block and transport block levels were developed. Simulations were used to verify the model for shorter CRC lengths and block sizes. The analytical models show that the setting of 24-bit CRCs in LTE allows low-complexity, robust and reliable early stopping algorithms to reduce turbo decoding complexity.",2008,0, 1131,Research on generating detector algorithm in fault detection,"The essence of fault diagnosis is pattern recognition of fault characteristics. A non-dimensional parameter immune detector is constructed as the fault recognition detector, using non-dimensional parameters that are highly sensitive to faults combined with the negative selection mechanism of the artificial immune system. The two parts of immune vaccination and immune learning produce an excellent detector that diagnoses faults directly. The immune response part performs fault identification, with the k-nearest neighbor classification method used to diagnose the fault. The design frame graph of the algorithm and the concrete implementation approach are given in detail. In the end, simulation examples show that the detector produced by the algorithm is valid for fault detection.",2010,0, 1132,Strong replica consistency for fault-tolerant CORBA applications,"The Eternal system provides transparent fault tolerance for CORBA applications, without requiring modifications to the application or to the object request broker (ORB), and without requiring special skills of the CORBA application programmers. Eternal maintains strong replica consistency as replicas of objects perform operations, and even as they fail and are recovered. Eternal implements the new fault-tolerant CORBA standard",2001,0, 1133,Eternal: fault tolerance and live upgrades for distributed object systems,"The Eternal system supports distributed object applications that must operate continuously, without interruption of service, despite faults and despite upgrades to the hardware and the software.
Based on the CORBA distributed object computing standard, the Eternal system replicates objects, invisibly and consistently, so that if one replica of an object fails, or is being upgraded, another replica is still available to provide continuous service. Through the use of interceptors, Eternal renders the object replication transparent to the application and also to the CORBA ORB. Consequently, Eternal is able to provide fault tolerance, and live hardware and software upgrades, for existing unmodified CORBA application programs, using unmodified commercial-off-the-shelf ORBs",2000,0, 1134,A Global Simulation of Microwave Emission: Error Structures Based on Output From ECMWF's Operational Integrated Forecast System,"The European Centre for Medium-range Weather Forecasts (ECMWF) will use brightness temperatures from the soil moisture and ocean salinity mission to analyze root zone soil moisture through a variational data assimilation system. The first guess is obtained from numerical weather prediction (NWP) model fields, an auxiliary database, and a land surface microwave emission model. In this paper, we present the community microwave emission model and investigate the first-guess errors in L-band brightness temperatures. An error propagation study is performed on errors introduced through: (1) uncertainties in the parameterizations of the radiative transfer model; (2) auxiliary geophysical quantities for the radiative transfer computations; and (3) an imperfect NWP model. It is found that the vegetation and dielectric models introduce uncertainties with a difference of up to 25 K between models. However, the biggest error in brightness temperature is likely related to the use of an auxiliary vegetation database, which results in differences of -20 to +20 K in our simulations. These potential errors are in many regions higher than the variance in brightness temperatures related to an imperfect NWP model.",2008,0, 1135,System Level Approaches for Mitigation of Long Duration Transient Faults in Future Technologies,"The evolution of the technology in search of smaller and faster devices brings along the need for a new paradigm in the design of circuits tolerant to soft errors. The current assumption of transient pulses shorter than the cycle time of the circuit will no longer be true, thereby precluding the use of most of the mitigation techniques proposed so far. With transient fault durations spanning more than one clock cycle of operation, new fault tolerance solutions, working at the system level, with low area and performance overheads, must be devised. In this paper we propose the first steps in the direction of using low cost verification schemes at the algorithmic level, applied to general purpose matrix multiplication applications. Experimental results obtained with two different implementations of checker circuits using the proposed technique are presented and discussed.",2007,0, 1136,Multi-Agent Fault Diagnosis in Manufacturing Systems Using Soft Computing,"The expeditious and accurate diagnosis of faults in manufacturing systems is essential in order to avoid expensive downtime. Many artificial intelligence approaches to automated fault diagnosis use techniques that are too computationally complex to achieve a diagnosis in real-time or are too inflexible for dynamic systems. Other approaches use either structural or symptom-based reasoning.
Functional approaches are unable to provide real-time response due to their computational complexity, whereas symptom-based approaches are only able to handle situations specifically coded in rules. Current hybrid approaches that combine the two methods are too structured in their approach to switching between the reasoning methods and, therefore, fail to provide the flexible, rapid response of human experts. This paper presents a robust, extensible approach to fault diagnosis that allows unstructured switching between reasoning methods using multiple fuzzy intelligent agents that examine the problem domain from a variety of perspectives.",2007,0, 1137,A novel approach to faulted-phase selection using current traveling waves and wavelet analysis,"The early traveling wave faulted-phase selectors, due to the lack of effective tools to process transient signals, had to directly use instantaneous values of signals, so they could not overcome adverse influences such as noise disturbance. Fortunately, wavelet analysis, with its time-frequency localization ability and the wavelet transform modulus maxima (WTMM) concept, is well suited to treating the singularity of fault-generated traveling waves in EHV/UHV transmission lines. This paper presents a novel approach to fast and accurate phase-selection, which uses the WTMM of the initial modal current traveling waves according to the fault characteristic relations deduced from the boundary conditions of various types of faults. The criterion is explicit in its characteristics and physical concepts, and is easy to realize. A large number of EMTP simulations demonstrated the new faulted-phase selection algorithm.",2002,0, 1138,The Sorcerer's Apprentice Guide to Fault Attacks,"The effect of faults on electronic systems has been studied since the 1970s, when it was noticed that radioactive particles caused errors in chips. This led to further research on the effect of charged particles on silicon, motivated by the aerospace industry, which was becoming concerned about the effect of faults in airborne electronic systems. Since then various mechanisms for fault creation and propagation have been discovered and researched. This paper covers the various methods that can be used to induce faults in semiconductors and exploit such errors maliciously. Several examples of attacks stemming from the exploiting of faults are explained. Finally, a series of countermeasures to thwart these attacks is described.",2006,0, 1139,A binary Particle Swarm Optimization approach to fault diagnosis in parallel and distributed systems,"The efficient diagnosis of hardware and software faults in parallel and distributed systems remains a challenge in today's most prolific decentralized environments. System-level fault diagnosis is concerned with the identification of all faulty components among a set of hundreds (or even thousands) of interconnected units, usually by thoroughly examining a collection of test outcomes carried out by the nodes under a specific test model. This task has non-polynomial complexity and can be posed as a combinatorial optimization problem. Here, we apply a binary version of the Particle Swarm Optimization meta-heuristic approach to solve the system-level fault diagnosis problem (BPSO-FD) under the invalidation and comparison diagnosis models.
Our method is computationally simpler than those already published in the literature and, according to our empirical results, BPSO-FD quickly and reliably identifies the true ensemble of faulty units and scales well for large parallel and distributed systems.",2010,0, 1140,On the Integration of Mobility in a Fault-Tolerant e-Health Web Information System,"The objective of the e-health domain is to assist citizens and manage their health. It concerns many actors, such as patients, doctors, hospitals and administrations. Current and forthcoming generations of applications will be web based and will integrate more and more mobile devices. In such an application domain, dependability is a key notion. This paper presents, through a case study, how we can develop an application that controls insulin injection and is embedded in a mobile device belonging to an e-health Web Information System (WIS). In order to ensure the dependability of the control system, we show how to use coordinated atomic actions (CAA). To implement our design, we explain how to use a development framework that we have built to implement CAA, which originally was not tailored for mobile fault-tolerant applications. Thus, in this paper, we also explain how we have adapted and used CAA-DRIP for mobile devices.",2007,0, 1141,Application of multi-agents for fault detection and reconfiguration of power distribution systems,"The electric power system has become a very complicated network at present because of re-structuring and the penetration of distributed generation and storage. A single fault can lead to massive cascading effects affecting power supply and power quality. An overall systematic solution for these issues could be obtained by an artificial intelligence mechanism called the multi-agent system. This paper presents a multi-agent system model for fault detection and reconfiguration based on graph theory and mathematical programming. The multi-agent models are simulated in the Java Agent Development Framework and Matlab and are applied to a power system model designed in the commercial software, the Distributed Engineering Workstation. The circuit that is used to model the power distribution system is a simplified model of the Circuit of the Future, developed by Southern California Edison. Possible fault cases were tested and a few critical test scenarios are presented in this paper. The results obtained were promising and were as expected.",2009,0, 1142,Electrical Network Models for Chassis Fault Current Analysis,"The electrical power distribution system for International Space Station (ISS) payloads is designed with the single-point-ground (SPG) at the power source. The system design requires a minimum isolation resistance of one mega-ohm between the power circuits and the chassis ground at electrical loads to prevent current from flowing in ground references. Under fault conditions, solid state devices protect electrical wires by opening the circuits to interrupt the power supply in the event of over-current to meet the electrical design and safety requirements. Due to the voltage potential difference between the electrical load power return and the SPG, nominal return current from power loads may flow in the reverse direction to the fault through the wire designed for the nominal current load. This presents a concern that the fault current might exceed the electrical wire rating. Excessive fault current may jeopardize crew safety and equipment integrity.
Therefore, the ISS payload safety review panel requires an accurate assessment of chassis fault current. The original electrical power distribution design approach adopted a simplified calculation which divides the nominal return current based on the ratio of the fault resistance and the resistance of the power return wire. This approach treats each fault path as independent from other loads on the same power bus and, therefore, the interaction between nominal system loads and the faulted circuit is ignored. The ISS payload engineering integration organization developed an improved design process by accurately modeling power return-wire to chassis short-circuit scenarios to predict the fault current of each chassis fault path. The development of electrical network models and their applications are discussed. This methodology offers the flexibility of rapid model reconfiguration to accommodate different electrical load types and variations in fault resistance and source resistance to cover a wide range of scenarios. This analysis provides essential design information for the set points of the solid-state protection devices. This is an improved design process for any aerospace electrical power system with SPG where the voltage differential between power load return and power source is significant, i.e., greater than a few hundred millivolts",2005,0, 1143,Fault Ride Through operation of a DFIG wind farm connected through VSC HVDC,"The electromechanical transients during a deloading of a DFIG turbine and the Fault Ride Through (FRT) capability of a DFIG wind farm connected through HVDC transmission lines are discussed. The electromechanical oscillations during a deloading operation of a DFIG wind turbine generator are simulated using BLADED software. Then power reduction control during a fault was achieved by reducing the power from the wind farm as a whole and by deloading the individual wind generator. A new power blocking technique applied at the offshore converter station was used to reduce the wind farm power output. Simultaneous control of the wind farm and wind turbine power outputs enabled a smooth power reduction during the fault.",2010,0, 1144,Implementing Network Partition-Aware Fault-Tolerant CORBA Systems,"The current standard for fault-tolerance in the Common Object Request Broker Architecture (CORBA) does not support network partitioning. However, distributed systems, and those deployed on wide area networks in particular, are susceptible to network partitions. The contribution of this paper is the description of the design and implementation of a CORBA fault-tolerance add-on for partitionable environments. Our solution can be applied to an off-the-shelf Object Request Broker, without having access to the ORB's source code and with minimal changes to existing CORBA applications. The system distinguishes itself from existing solutions in the way different replication and reconciliation strategies can be implemented easily. Furthermore, we provide a novel replication and reconciliation protocol that increases the availability of systems, by allowing operations in all partitions to continue",2007,0, 1145,Timing-based delay test for screening small delay defects,"The delay fault test pattern set generated by timing unaware commercial ATPG tools mostly affects very short paths, thereby increasing the escape chance of smaller delay defects. These small delay defects might be activated on longer paths during functional operation and cause a timing failure.
This paper presents an improved pattern generation technique for the transition fault model, which provides higher coverage of small delay defects that lie along long paths, using a commercial no-timing ATPG tool. The proposed technique pre-processes the scan flip-flops based on their least slack path and the detectable delay defect size. A new delay defect size metric based on the affected path length and the required increase in test frequency is developed. We then perform pattern generation and apply a novel pattern selection technique to screen test patterns affecting longer paths. Using this technique provides the opportunity to use existing timing unaware ATPG tools as slack based ATPG. The resulting pattern set improves the defect screening capability for small delay defects",2006,0, 1146,Model-based fault detection for the DELFI-N3XT Attitude Determination System,"The Delfi-n3Xt nanosatellite is the second Dutch university satellite currently being developed at the Delft University of Technology. In its design, the Attitude Determination System (ADS) will be pivotal for optimal power point tracking to adequately provide the energy needed for normal operation and charging of the batteries. In this paper we explore a fault detection mechanism for the ADS based on the Unscented Kalman Filter (UKF) state estimator, which has been successfully integrated into the simulation and modelling environment. The UKF provides a more computationally efficient estimator than traditional Kalman filter variants. Faults introduced in the system include changes in the noise model and stuck-at-0 faults, resulting in disturbances in the output of the filter. Parameters of the filter are varied and the behaviour of the resulting residuals is analyzed to evaluate its effectiveness in the detection of these errors.",2010,0, 1147,Invisible barcode with optimized error correction,"The demand to connect paper media and the digital world is increasing, and an ""electronic clipping system"", which overprints an invisible barcode onto analog media, is an example of connecting these worlds. However, depending on the combination of the target paper media, the ink used in the media, and the invisible ink, we cannot always correctly decode the information. In this paper, we introduce a new 2D barcode encoding and decoding method which overcomes such problems. In our method, we divide the 2D code data area into several blocks, and do not use for encoding those blocks where many of the bits have high probabilities of bit errors. The decoder determines the skipped blocks from the image itself. The experimental results proved that our method greatly improved the probability of correct information decoding for the invisible barcode.",2008,0, 1148,Aspect-Oriented Programing Techniques to support Distribution, Fault Tolerance, and Load Balancing in the CORBA-LC Component Model,"The design and implementation of distributed High Performance Computing (HPC) applications is becoming harder as the scale and number of distributed resources and applications grow. Programming abstractions, libraries and frameworks are needed to better overcome that complexity. Moreover, when Quality of Service (QoS) requirements such as load balancing, efficient resource usage and fault tolerance have to be met, the resulting code is harder to develop, maintain, and reuse, as the code for providing the QoS requirements normally gets mixed with the functionality code.
Component Technology, on the other hand, allows better modularity and reusability of applications and even better support for the development of distributed applications, as those applications can be partitioned in terms of components installed and running (deployed) on the different hosts participating in the system. Components also have requirements in the form of the aforementioned non-functional aspects. In our approach, the code for ensuring these aspects can be automatically generated based on the requirements stated by components and applications, thus relieving the component implementer of having to deal with these non-functional aspects. In this paper we present the characteristics and the convenience of the generated code for dealing with load balancing, distribution, and fault-tolerance aspects in the context of CORBA-LC. CORBA-LC is a lightweight distributed reflective component model based on CORBA that imposes a peer network model in which the whole network acts as a repository for managing and assigning the whole set of resources: components, CPU cycles, memory, etc.",2007,0, 1149,Issues on the Design of Efficient Fail-Safe Fault Tolerance,"The design of a fault-tolerant program is known to be an inherently difficult task. Decisions taken during the design process will invariably have an impact on the efficiency of the resulting fault-tolerant program. In this paper, we focus on two such decisions, namely (i) the class of faults the program is to tolerate, and (ii) the variables that can be read and written. The impact these design issues have on the overall fault tolerance of the system needs to be well understood, failure of which can lead to costly redesigns. For the case of understanding the impact of fault classes on the efficiency of fail-safe fault tolerance, we show that, under the assumption of a general fault model, it is impossible to preserve the original behavior of the fault-intolerant program. For the second problem of read and write constraints of variables, we again show that it is impossible to preserve the original behavior of the fault-intolerant program. We analyze the reasons that lead to these impossibility results, and suggest possible ways of circumventing them.",2009,0, 1150,Outboard flaps control system based on the avionics deterministic fault-tolerant data bus LTTP,"The avionics data bus systems of the last four decades were dominated by federated systems. In the European Integrated Project DECOS (Dependable Embedded Components and Systems, partially funded by EC-IST-FP6-511764), in Subproject 6 (SP6) - application demonstrator domain aerospace, the attempt was to develop a new technology approach for an integrated system based on time-triggered communication architectures. The new technologies in flight control electronics systems selected for the DECOS SP6 test-bench consist of the following: layered time-triggered data bus architectures (LTTP), digital actuator control electronics (ACE), signal processor based motor control electronics (NICE) and a centralized system control unit (SCU) embedded in an integrated modular avionics (IMA) cabinet. These electronics were used to drive two mechanically synchronized rotary actuators with DC brushless motors.
The purpose of the present paper is to describe the new technologies implemented in DECOS SP6, with an emphasis on the experience gained and the efforts deployed at Liebherr-Aerospace Lindenberg in developing, manufacturing and integrating all components into a high-lift actuation system test-rig.",2008,0, 1151,Burst-Error Analysis of Dual-Hop Fading Channels Based on the Second-Order Channel Statistics,"The burst-error (BE) rate of dual-hop fading channels under a fixed fade threshold is estimated based on the level crossing rate (LCR) and average fade duration (AFD). The LCR and AFD of the equivalent signal-to-noise ratio (SNR) are first derived for dual-hop Nakagami-m and Weibull fading channels with a fixed-gain amplify-and-forward (AF) relay, where closed-form lower and upper bounds are derived for the LCR and AFD of the Nakagami-m fading channels. Numerical results from theoretical evaluations and Monte Carlo simulations are illustrated to validate the analysis and to compare the performance of the two fading channels.",2010,0, 1152,Research on the intelligent agent of distributed fault diagnose system,"The characteristics and composition of fault diagnosis intelligent agent technology are discussed, as are the intelligent agent model and the structures of the monitoring agent, analysis agent and diagnosis agent of distributed fault diagnosis. Moreover, the class definition of agent diagnosis knowledge and the class definition of diagnosis rules are expounded with examples. The tasks of a fault diagnosis system can be decomposed and assigned easily with intelligent agent technology, which can improve the diagnosis ability of a distributed remote fault diagnosis expert system and can well resolve the inconsistency between the universality and adaptability of diagnosis software",2006,0, 1153,An Investigation of Non-Uniform Error Cost Function Design in Automatic Speech Recognition,"The classical Bayes decision theory [3] is the foundation of statistical pattern recognition. In [4], we have addressed the issue of non-uniform error criteria in statistical pattern recognition, and generalized the Bayes decision theory for pattern recognition tasks where errors over different classes have varying degrees of significance. We further introduced the weighted minimum classification error (MCE) method for a practical design of a statistical pattern recognition system to achieve empirical optimality when non-uniform error criteria are prescribed. However, one key issue in the weighted MCE method, the methodology of building a suitable non-uniform error cost function given the user's requirements, has not been addressed yet. In this paper, we propose some viable techniques for the design of the non-uniform error cost function in the context of automatic speech recognition (ASR) according to different training scenarios. The experimental results on the TIDIGITS database [8] are presented to demonstrate the effectiveness of our methodologies.",2008,0, 1154,Single Fault Models for Timed FSMs,"The classification and detection of single timing faults in timed FSMs are introduced. A graph augmentation method is used to formulate the detection models for timing faults.
It is shown that, by using our graph augmentation models, a faulty IUT ends up in a different state than the intended one, hence enabling the detection of these single timing faults",2005,0, 1155,A fast color correction method based on image analysis,"The colors in a digital image depend not only on the surface properties of the objects present in the scene depicted, but also on the lighting conditions and the characteristics of the capturing device. The estimation of scene colorimetry from raw data is still an open issue, especially for digital images that have been acquired by digital image-capturing devices in unknown conditions. A color correction method is developed and tested in this paper. In order to increase the running speed of color correction, we introduce a simple statistical analysis to detect the color distortions. An approach for recognizing the dominant color objects based on block-features and region-features is presented for improving accuracy and reliability. All functions are run in the decorrelated color space. The results of many experiments show that it achieves better performance than the other existing methods.",2004,0, 1156,"Comparison between random and pseudo-random generation for BIST of delay, stuck-at and bridging faults","The combination of higher quality requirements and the sensitivity of high performance circuits to delay defects has led to an increasing emphasis on delay testing of VLSI circuits. As delay testing using external testers requires expensive ATE, built-in self test (BIST) is an alternative technique that can significantly reduce the test cost. The generation of test patterns in this case is usually pseudo-random (produced from an LFSR), and it has been proven that Single Input Change (SIC) test sequences are more effective than classical Multiple Input Change (MIC) test sequences when a high robust delay fault coverage is targeted. In this paper, we first question the use of a pseudo-random generation to produce effective delay test pairs. We demonstrate that using truly random test pairs (produced from a software generation) to test path delay faults in a given circuit produces higher delay fault coverage than that obtained with pseudo-random test pairs obtained from a classical primitive LFSR. Next, we show that the same conclusion can be drawn when stuck-at or bridging fault coverage is targeted rather than delay fault coverage. A modified hardware TPG structure allowing the generation of truly random test patterns is introduced at the end of the paper",2000,0, 1157,Study of fault diagnosis approach based on rules of deep knowledge representation of signed directed graph,"The fault diagnosis method using a signed directed graph (SDG), a qualitative model of the system, is useful for real-time diagnosis of failures that occur in a process. First, the SDGs of the systems and components are established and simplified according to the fault modes to be diagnosed; at the same time, the SDGs are described in rule form to shorten the calculation time, and the diagnosis rules are then expanded with expert knowledge to construct the diagnosis rule bank of the system. Second, the fault modes can be preliminarily diagnosed by using the constructed rules. The modes that cannot be distinguished are then diagnosed by adding adequate quantitative information.
The case studies show that the problem of misoperation autodiagnosis during computer simulation training can be solved effectively, and that the SDG diagnosis method has good completeness, fine resolution and detailed explanation in actual industrial processes",2005,0, 1158,Research on fault diagnosis and forecast system of forest harvester based on CAN-bus information,"The fault diagnosis of forest harvesters is being updated with the application of CAN-bus technology. Considering the CAN technology currently utilized on forest harvesters, the complexity of the fault information and the difficulty of diagnosis, a USB-CAN intelligent interface card was designed in this paper. Based on the interface card, Microsoft Visual C++ 6.0 is utilized to build the fault diagnosis system with a BP neural network and a Kalman filter. Fault diagnosis and forecasting for the main systems of the forest harvester are carried out online after receiving, filtering and denoising the signal from the CAN bus. The experiments show that Kalman filtering performs well in removing noise from the complex fault signal, and that the trained BP neural networks effectively implement the non-linear mapping from fault phenomenon to fault position on the forest harvester.",2010,0, 1159,Intelligent Fault Diagnosis Using Entropy-Based Rough Decision Tree Method,"Fault diagnosis of large complex systems is a difficult problem due to the complex structure of the system and the presence of high dimensional fault datasets. To solve this problem, integrating the minimize entropy principle approach (META), rough sets theory and the C4.5 algorithm, an entropy-based rough decision tree method is proposed to extract fault diagnosis rules. The diagnosis example of a 4153 diesel demonstrated that the solution can reduce the cost and raise the efficiency of the diagnosis method, and verified its feasibility for engineering application.",2007,0, 1160,On small current grounding system in single-phase grounding fault line selection based on DSP,"The fault mechanism of the small current grounding system in single-phase grounding is discussed, and the principle of fault line selection based on the transient component of the faulty phase current is indicated. Through the design of hardware and software, it is proved that the amplitude summation of the zero-sequence transient current differs markedly between the fault line and the healthy lines in the feature band. The analysis of the system simulation shows that the spectral area of the zero-sequence transient current on the fault line is the largest. Therefore, the DSP-based application of the small current grounding system to single-phase grounding fault line selection is an advanced novel method which can distinguish between the fault line and healthy lines effectively and accurately.",2010,0, 1161,Model-checking for validation of a fault protection system,"The Fault Protection (FP) system of a spacecraft is a critical component for its operation. The system diagnoses problems with the health of the spacecraft, and directs actions to resolve those problems. It therefore warrants a high degree of assurance as to its correctness. In this paper, we describe the use of model checking to help validate key requirements of such a FP system. The particular system we deal with is that of a generic FP engine ""networked"" to the rest of the spacecraft.
Its design is specified with a high degree of rigor, using state machine diagrams to define both the FP engine and the spacecraft-specific responses that the engine directs. We describe the way we have modeled the FP engine and its operating environment so as to validate key requirements of its operation, and the influence of the above design characteristics on this effort",2001,0, 1162,The application of digital filter in the analysis of fault signal of forest harvester,"The fault signals, received by the host computer of the forecast system of the forest harvester through the CAN-bus, usually contain various kinds of noise and interference which come from the signal sources, the sensors and external interferences. For precise measurement and control, the noise and interference in the measured signals must be eliminated. Therefore, commonly used digital filters have been applied according to the characteristics of the fault signals updated in all sub-nodes. Furthermore, considering the complex features of the fault information of the forest harvester, a filter based on Kalman filtering theory has been established for filtering and noise reduction of the fault signals, with a programming implementation in Visual C++. The simulation results show that the Kalman filter clearly suppresses white noise. To further improve the filtering effect, a composite digital filter has been composed of several different kinds of filters, which is even more effective on complicated digital signals.",2010,0, 1163,Fault-Prone Filtering: Detection of Fault-Prone Modules Using Spam Filtering Technique,"The detection of fault-prone modules in source code is of importance for the assurance of software quality. Most previous fault-prone detection approaches have been based on software metrics. Such approaches, however, have difficulties in collecting the metrics and constructing mathematical models based on the metrics. In order to mitigate such difficulties, we propose a novel approach for detecting fault-prone modules using a spam filtering technique. With the increasing need for spam e-mail detection, spam filtering has progressed into a convenient and effective technique for text mining. In our approach, fault-prone modules are detected by treating the source code modules as text files and applying them to the spam filter directly. In order to show the usefulness of our approach, we conducted an experiment using the source code repository of a Java-based open source development. The result of the experiment shows that our approach can classify more than 70% of software modules correctly.",2007,0, 1164,Adaline for fault detection in Electrical High Voltage transmission line,"The application of neural networks to power systems has been extensively reported. Neural network based protection techniques have been proposed by a number of authors. However, almost all the studies have so far employed the back-propagation neural network structure with supervised learning. This paper presents an online method for fault identification in Electrical High Voltage (EHV) transmission lines. This approach utilizes a linear adaptive neuron, which is called Adaline. The Adaline neural network is generally used for prediction and identification problems and is rarely used for power system protection.
Using current signals, the Adaline process has a strong tracking capability and is fast due to its simple construction, which makes it more suitable for implementation. Our Adaline approach is compared with a multilayer perceptron in order to see the influence of the fault resistance on the fault detection time.",2010,0, 1165,Modeling and estimation of the spatial distribution of elevation error in high resolution DEMs from stereo-image processing,"The authors examine the spatial variability of elevation error in high-resolution DEMs derived from stereo-image processing. DEM error models are developed by examining the correlation between various parameters and the observed DEM error. The DEM vertical errors are obtained from a database containing more than 51,000 points of known elevation. The error models are shown to have strong correlation with the magnitude of the random DEM vertical error. In addition, the models capture the full dynamic range of the observed error and are able to predict the overall error in four different high-resolution DEMs to within 5%. Moreover, the final model is used to estimate the vertical error at every point in a DEM containing ~20 million elevations",2000,0, 1166,Development of a Dynamic Operation Permission Agent for Preventing Commission Errors of Operators,"The authors proposed the concept of a dynamic operation permission system. The main idea of dynamic operation permission is to prevent only evident commission errors and to let operators behave as they like as long as they follow operation manuals and various operation rules. The operation permission is granted based on the viewpoints of (1) adherence to the operations specified in operation manuals and (2) the suitability of an operation for the plant behavior. This study implements the first part of the dynamic operation permission system as an agent composed of an operation database and two software agents in a distributed cooperative environment. The performance of the dynamic operation permission agent is validated by several example scenarios of an abnormal situation, a steam generator tube rupture accident, in a pressurized water reactor plant.",2007,0, 1167,Low-cost flexible software fault tolerance for distributed computing,"The authors revisit the problem of software fault tolerance in distributed systems. In particular, we propose an extension of a message-driven confidence-driven (MDCD) protocol we have developed for error containment and recovery in a particular type of distributed embedded system. More specifically, we augment the original MDCD protocol by introducing the method of ""fine-grained confidence adjustment,"" which enables us to remove the architectural restrictions. The dynamic nature of the MDCD approach gives it a number of desirable characteristics. First, this approach does not impose any restrictions on interactions among application software components or require costly message-exchange based process coordination/synchronization. Second, the algorithms allow redundancies to be applied only to low-confidence or critical interacting software components in a distributed system, permitting flexible realization of software fault tolerance. Finally, the dynamic error containment and recovery mechanisms are transparent to the application and ready to be implemented by generic middleware.",2001,0, 1168,FTOS: Model-driven development of fault-tolerant automation systems,"The design of fault-tolerant automation systems is a complex task.
These systems must not only satisfy real-time requirements but must also deliver the specified functionality in the presence of both software and hardware faults. To achieve fault-tolerance, systems have to use redundancy. This redundancy is usually achieved by replicating hardware units and executing the application within a distributed system. Model-based design tools promise to reduce the complexity of the design process by raising the abstraction level. However, most of the existing tools focus only on functional aspects. Code realizing extra-functional requirements such as fault-tolerance mechanisms, communication, and scheduling is not targeted. Yet this type of code makes up the majority of the code of a fault-tolerant real-time system. This paper presents FTOS, a model-based development tool for the design of fault-tolerant automation systems that focuses on code generation for extra-functional requirements and therefore complements existing tools.",2010,0, 1169,Numerically reliable methods for optimal design of fault detection filters,"The design problem of fault detection and isolation filters is formulated as a model matching problem and solved using an H2- or H∞-norm optimization approach. A systematic procedure is proposed to choose appropriate filter specifications which guarantee the existence of proper and stable solutions of the model matching problem. This selection is an integral part of numerically reliable computational methods for the design of H2- or H∞-optimal fault detection filters. The proposed design approach is completely general, being applicable to both continuous- and discrete-time systems, and can easily handle even unstable and/or improper systems.",2005,0, 1170,Tamper detection with self-correction hybrid spatial-DCT domains image authentication technique,"The development of effective image authentication techniques is of remarkably growing interest. Some recently developed fragile, semi-fragile/robust or hybrid watermarking algorithms not only verify the authenticity of the watermarked image but also provide self-reconstruction capabilities. However, several algorithms have been reported as vulnerable to various attacks, especially blind pattern matching attacks, with insufficient security. In this paper, we propose a new blind dual-domain self-embedding watermarking scheme with more secure embedding of the image's block fragile signatures and robust approximations, and more reliable local alteration detection with auto-correction capabilities, while surviving normal content-preserving image operations.",2008,0, 1171,Adaptive fault recovery for networked reconfigurable systems,"The device-level size and complexity of reconfigurable architectures make fault tolerance an important concern in system design. In this paper, we introduce a fully automated fault recovery system for networked systems which contain FPGAs (field programmable gate arrays). If a fault is detected that cannot be addressed locally, fault information is transferred to a reconfiguration server. Following design recompilation to avoid the fault, a new FPGA configuration is returned to the remote system and computation is reinitiated. To illustrate the benefit of this approach, we have implemented a complete fault recovery system, which requires no manual intervention. An important part of the system is a timing-driven incremental router for Xilinx Virtex devices.
This router is directly interfaced to Xilinx JBits and uses no CAD tools from the standard Xilinx Alliance tool flow. Our completed system has been applied to three benchmark designs and exhibits complete fault recovery in up to 12x less time than the standard incremental Xilinx PAR flow.",2003,0, 1172,Fault diagnosis of power distribution systems using a multi-agent approach,"The distribution system of a public electricity supply company is monitored, controlled and managed from one or more control centres. The distribution system consists of networks from 132 kV down to the 240/415 V supplied to customers' premises. At times of extreme weather conditions, when major disturbances such as storms, blizzards and lightning occur, many faults can be generated simultaneously, and the control engineers can be rapidly inundated with a large number of telemetry messages, damage reports and customers' telephone calls reporting loss of supply. The task of fault diagnosis on the distribution networks operating at 11 kV and lower voltages is very different from that at the higher voltage levels because very little telemetered data is available and knowledge of a fault occurrence often depends on customers complaining of loss of supply. Unfortunately, the connectivity of customers to the network is not normally available on geographic information systems (GIS). A multi-agent system for fault diagnosis of electricity distribution networks has been developed as an aid to control engineers (CE).",2004,0, 1173,On the proposition of an EMI-based fault injection approach,"The following paper describes a new approach to performing physical fault injection in electronic systems. The approach is built around a gigahertz transverse electromagnetic (GTEM) cell, which is employed in a controlled process to inject faults in the system under test (SUT). The assumed fault models are delay faults (provoked by signal propagation delay increase in SUT critical paths, thus resulting in de-synchronization between the computed data to be latched and the clock signal) and bit-flips (i.e., corruption of static data stored in memory elements).",2005,0, 1174,Practical sensor fault torelant control system,"The fundamental purpose of an FTCS scheme is to ensure that faults do not result in system failure and to achieve the best performance possible, even at a reduced level of system performance. In this paper we propose a fault tolerant control design consisting of two parts: a nominal performance controller and a fault detection element to provide fault compensating signals to the feedback loop. The nominal controller can have any given structure that satisfies the performance specification. When a sensor failure is detected, the controller structure is augmented by signals from the plant model to compensate for the fault. A temperature control system is used as a case study. Results of simulation and real-time implementation for the temperature control system concur and demonstrate the applicability of the proposed FTCS scheme.",2008,0, 1175,Fault ride-through of fully enclosed squirrel-cage induction generators for wind farms in Thailand,"The increasing amount of wind power generation in Thailand's power system requires stability analysis considering the interaction between wind farms and transmission systems. Dynamics introduced by dispersed wind generators at the distribution level can usually be neglected.
However, large wind farms have a considerable influence on power system dynamics and must definitely be considered when analyzing power system dynamics. For this purpose, a detailed dynamic model of a fully enclosed squirrel-cage induction generator with gearbox (full-scale power electronics converters) for a 2.3 MW wind turbine has been implemented using the modeling environment of the simulation software DIgSILENT PowerFactory. For investigating grid compatibility aspects of this wind generator concept, a model of a 96.6 MW wind farm, with typical layout, based on 42 wind turbines of the 2.3 MW class has been analyzed. This paper focuses on transient stability and fault ride-through capability when the grid voltage has dropped to a very low value.",2010,0,8331 1176,Impact of Combined Heat and Power (CHP) generation on the fault current level in urban distribution networks (UDN),"The increasing demand on the urban distribution network (UDN) imposed by distributed generation (DG), such as renewable sources and Combined Heat and Power (CHP), will impact the operation of the UDN in a number of areas including fault current level and voltage stability. In general, most DG connections are small CHP plants employing a reciprocating engine or gas turbine as a prime mover, directly coupled to a synchronous generator with electrical output up to 1 MVA and a generating voltage of 0.415 kV; they are mainly connected to low voltage busbars of 0.415 kV and in some cases to 10.5 kV busbars through a transformer. In general, all newly connected CHP plants cause some increase in fault level. Connection of CHP in significant volumes, which would most likely occur in the UDN, would lead to fault level issues, as the UDN tends to have the lowest fault level headroom. The aim of this paper is to present the consequences and operating limitations of connecting CHP to the UDN. In order to calculate the fault current at a network bus, a simple Thevenin model is used for the UDN. The application of the methodology is demonstrated and results are obtained using the ERAC power system analysis software on a 13-busbar network resembling part of a typical UDN where continuity of power supply is very important. A discussion is also included on potential measures available to handle the increase in fault current, and on the cost and responsibility of upgrading equipment in the network. The analysis estimates the fault current on each busbar and the Average Current Fault (ACF) due to the addition of the CHP generation plants.",2010,0, 1177,Model checking a fault-tolerant startup algorithm: from design exploration to exhaustive fault simulation,"The increasing performance of modern model-checking tools offers high potential for the computer-aided design of fault-tolerant algorithms. Instead of relying on human imagination to generate taxing failure scenarios to probe a fault-tolerant algorithm during development, we define the fault behavior of a faulty process at its interfaces to the remaining system and use model checking to automatically examine all possible failure scenarios. We call this approach ""exhaustive fault simulation"". In this paper we illustrate exhaustive fault simulation using a new startup algorithm for the time-triggered architecture (TTA) and show that this approach is fast enough to be deployed in the design loop.
We use the SAL toolset from SRI for our experiments and describe an approach to modeling and analyzing fault-tolerant algorithms that exploits the capabilities of tools such as this.",2004,0, 1178,Model-based fault Diagnosis for IEEE 802.11 wireless LANs,"The increasingly deployed IEEE 802.11 wireless LANs (WLANs) challenge traditional network management systems because of the shared open medium and the varying channel conditions. There needs to be an automated tool that can help diagnose both malicious security faults and benign performance faults. It is often difficult, however, to identify the root causes since the manifesting anomalies from network measurements are highly interrelated. In this paper we present a novel approach, called MOdel-based self-Diagnosis (MODI), for fault detection and localization. Our solution consists of the Structural and Behavioral Model (SBM) that is constructed using both structural causality from wireless protocol specifications and behavioral statistics from network measurements. We use logic-based backward reasoning to automate fault detection and localization based on SBM, by comparing observed network measurements with expected network behaviors and by tracing back causality structures. The reasoning algorithm and the model description are decoupled so an SBM model can be easily updated for varying WLAN configurations and changing network conditions. Compared to previous work, the contribution of this paper is the architecture and the algorithm of the diagnosis core, rather than the WLAN measurement techniques. We built and deployed MODI-embedded wireless APs that can both detect security attacks and troubleshoot performance problems. These MODI-enabled APs can also cooperate to diagnose cross-AP problems, such as those caused by device mobility. The evaluation results demonstrate that the proposed model-based diagnosis is fast and effective with little overhead.",2009,0, 1179,Fault-Tolerant Real-Time Scheduling Algorithm for Tolerating Multiple Transient Faults,"The influence of computer systems on human life is increasing, and with it the need for reliable, robust and real-time computer services. Avoidance of any catastrophic consequences due to faults in such systems is one of the main objectives. This paper presents a fault-tolerant real-time scheduling algorithm, RM-FT, obtained by extending rate monotonic (RM) scheduling for real-time systems. The main approach is to employ the temporal error masking (TEM) technique to achieve node-level fault tolerance (NLFT) within the least common multiple of the periods of a set of pre-emptively scheduled periodic tasks with at most f transient faults.",2006,0, 1180,Effects of Defects on the In-plane Dynamic Energy Absorption of Metal Honeycombs,"The in-plane dynamic energy absorption of metal honeycombs with defects consisting of missing cells is analyzed using the explicit dynamic finite element method. Two types of structural defects (a single defect located in the center of the model and a double defect) are first introduced. Then the influence of the defects and the impact velocities on the energy absorption abilities of metal honeycombs is investigated. The results show that single, isolated defects reduce the absorbed energy of cellular materials. The separation distance between two defects has little effect on the dynamic energy absorption, while the size of the single defect has great influence on it.
These results will provide useful guidance for the safety evaluation and the dynamic energy absorption design of metal honeycombs.",2010,0, 1181,An experiment family to investigate the defect detection effect of tool-support for requirements inspection,"The inspection of software products can help to find defects early in the development process and to gather valuable information on product quality. An inspection is rather resource intensive and involves several tedious tasks like navigating, sorting, or checking. Tool support is thus expected to increase effectiveness and efficiency. However, little empirical work is available that directly compares paper-based (i.e., manual) and tool-based software inspections. Existing reports on tool support for inspection generally tend to focus on code inspections while little can be found on requirements or design inspection. We report on an experiment family: two experiments on paper-based inspection and a third experiment to empirically investigate the effect of tool support regarding defect detection effectiveness and inspection effort in an academic environment with 40 subjects. Main results of the experiment family are: (a) the effectiveness is similar for manual and tool-supported inspections; (b) the inspection effort and defect overlap decreased significantly with tool support, while (c) efficiency increased considerably with tool support.",2003,0, 1182,Reconfigurable control and fault identification system,"The integration of health management and reconfigurable flight control is demonstrated in the NAVAIR/Boeing Reconfigurable Control and Fault Identification System (RCFIS) Dual Use Science and Technology program (Contract N00421-003-0123). The major research results include: 1) expansion of diagnostic capability by detecting levels of actuator degradation that can be used to reduce could-not-duplicates and false alarms in the current BIT systems, 2) fusion of system and component level health assessment results, 3) the ability to perform structured tests of control system actuator components during flight when loads and temperature environment are present to enable a more accurate health assessment with no impact on the flight trajectory or ride quality, 4) modification of actuator controls in order to make the best use of the degraded actuator's remaining capabilities and/or extend the life of the component and 5) modification of system level controls to compensate for the degraded component/subsystem capability or damage.",2004,0, 1183,High-level vulnerability over space and time to insidious soft errors,"The integrity of computational results is being increasingly threatened by soft errors, especially for computations that are large-scale or performed under harsh conditions. Existing methods for soft error estimation do not clearly characterize the vulnerability associated with a particular result. 1) We propose a metric which captures the intrinsic vulnerability over space and time (VST) to soft errors that corrupt computational results. The method of VST estimation bridges the gap between the inherently low-level faults and the high-level computational failures that they eventually cause. 2) We define a model of an insidious soft error and try to clear up confusion around the concept of silent data corruption. 3) We present experimental results from three vulnerability studies involving floating-point addition, CORDIC, and FFT computations.
The results show that traditional vulnerability metrics can be confounded by seemingly reliable but inefficient implementations which actually incur high vulnerability per computation. The VST method characterizes vulnerability accurately, provides a figure-of-merit for comparing alternative implementations of an algorithm, and in some cases uncovers pronounced and unexpected fluctuations in vulnerability.",2008,0, 1184,DSP-Based Sensorless Electric Motor Fault Diagnosis Tools for Electric and Hybrid Electric Vehicle Powertrain Applications,"The integrity of electric motors in work and passenger vehicles can best be maintained by frequently monitoring their condition. In this paper, a signal processing-based motor fault diagnosis scheme is presented in detail. The practicability and reliability of the proposed algorithm are tested on rotor asymmetry detection at zero speed, i.e., at startup and idle modes in the case of a vehicle. Regular rotor asymmetry tests are done when the motor is running at a certain speed under load with a stationary current signal assumption. It is quite challenging to obtain these regular test conditions for long-enough periods of time during daily vehicle operations. In addition, automobile vibrations cause nonuniform air-gap motor operation, which directly affects the inductances of electric motors and results in a noisy current spectrum. Therefore, it is challenging to apply conventional rotor fault-detection methods while examining the condition of electric motors as part of the hybrid electric vehicle (HEV) powertrain. The proposed method overcomes the aforementioned problems by simply testing the rotor asymmetry at zero speed. This test can be achieved at startup or repeated during idle modes where the speed of the vehicle is zero. The proposed method can be implemented at no cost using the readily available electric motor inverter sensors and microprocessing unit. Induction motor fault signatures are experimentally tested online by employing the drive-embedded master processor (TMS320F2812 DSP) to prove the effectiveness of the proposed method.",2009,0, 1185,Automatic delay correction method for IP block-based design of VLSI dedicated digital signal processing systems: theoretical foundations and implementation,"The Intellectual Property (IP)-based design of high-throughput dedicated digital signal processing (DSP) systems is obviously an important issue for improving not only design productivity, but also design at a high level of abstraction. However, in some cases, a synthesizable register transfer level (RTL) model obtained by automatic assembly of RTL IPs can be wrong due to delays induced by implementation constraints. In this paper, we present the formalization of the problem and propose an approach called the automatic delay correction method (ADCM) to solve the problem without inserting extra interface circuitry. The approach automatically inserts control structures to manage delays induced by the use of RTL IPs. It also inserts a control structure to coordinate the execution of parallel clocked IPs. The delays may be managed by registers or by counters included in the control structure. A formal theory of ADCM is developed to guide the implementation and guarantee optimal solutions in latency and area.
Through experiments with a synthetic example and three real-world high-throughput DSP circuits, we also show the effectiveness of our approach.",2006,0, 1186,Research on key techniques for intelligent prediction of fault in safe running of complex electromechanical equipment,"Intelligent fault prediction is an important and difficult modern technique; it can assure the safe running of key equipment and improve its working condition. Focusing mainly on large and complex electromechanical equipment, this work researches and discloses new methods and techniques for intelligent fault prediction based on data mining. Based on the characteristics of the fault data of electromechanical systems, the acquisition methods for fault prediction based on data mining are found out; the flow of fault prediction based on data mining is set up; and the fault prediction system and the related engineering application software are built based on data mining. The tests show that the key techniques for intelligent fault prediction in the safe running of complex electromechanical equipment can improve the efficiency of fault prediction for large electromechanical equipment.",2010,0, 1187,Influence of the AC system faults on HVDC system and recommendations for improvement,"The interaction between AC and DC systems in a long distance bulk power transmission system is very complicated. In this paper, taking some cases in the China Southern Power Grid as examples, the main functions of the DC control system operating during AC faults are discussed. During AC faults, the protection and monitoring system for the DC converter and the protection systems for converter transformers and auxiliary transformers may work incorrectly; the reasons for these cases are analyzed and recommendations are given to remedy the defects. Experience from these cases will help us to improve the ability of operation and maintenance to ensure safety, and provides useful references for the design of HVDC and the coordination of the AC/DC system in China.",2009,0, 1188,Nanopillar Formation via Defect Activation and Coulomb Explosion Initiated by a 355 nm Nd:YAG Laser Beam,"The interaction of nanosecond laser pulses in the ultraviolet wavelength range with the semiconductor SiC was investigated. Under low energy fluence, an array of highly orientated nanoparticles on the surface of SiC was formed via defect activation and Coulomb explosion using 355 nm UV laser irradiation. Under high energy fluence, surface modification and ablation could occur.",2004,0, 1189,Watermarking Scheme Based on Discrete Wavelet Transform and Error-Correction Codes,"Interest in digital image watermarking rises rapidly each year. This paper describes the influence of error correction codes on digital watermarking systems which work in the frequency domain. A new watermarking scheme is introduced. The scheme is based on a well-known watermarking scheme that uses the discrete wavelet transform (DWT). The proposed method has been tested with Bose-Chaudhuri-Hocquenghem (BCH) codes with different parameters.
All the results obtained and a comparison of the proposed and the basic well-known watermarking methods are given in this paper.",2009,0, 1190,A fault tolerance mechanism for network intrusion detection system based on intelligent agents (NIDIA),"The intrusion detection system (IDS) has as its objective to identify individuals that try to use a system in an unauthorized way or those that have authorization but abuse their privileges. To accomplish its function, the IDS must, in some way, guarantee reliability and availability for its own application, so that it can keep its services running even in the presence of faults, especially faults caused by malicious agents. This paper proposes an adaptive fault tolerance mechanism for a network intrusion detection system based on intelligent agents. We propose the creation of a society of agents that monitors a system to collect information related to agents and hosts. Using the information which is collected, it is possible to detect which agents are still active, which agents should be replicated and which strategy should be used. The process of replication depends on each type of agent and its importance to the system at different moments of processing. We use some agents as sentinels for monitoring, thus allowing us to accomplish important tasks such as load balancing, migration, and detection of malicious agents, to guarantee the security of the IDS itself",2006,0, 1191,Upgrade of the ICRF fault and control systems on alcator C-Mod,"The Ion Cyclotron RF Transmitter System (ICRF) at Alcator C-Mod comprises four separate transmitters, each capable of driving 2 MW of power into plasma loads. Four separate transmission lines guide RF power into three antennas, each mounted in a separate horizontal port, in the C-Mod Tokamak. Protection for the antennas, matching elements and transmission line is accomplished by two unique but interdependent subsystems encompassed by the ICRF Fault System. The Antenna Protection System evaluates antenna phasing and voltage, sets fault thresholds, generates fault signals, and passes fault information to the Master Fault Processor. During operation, the Master Fault Processor is responsible for detecting hazards along the transmission line, generating faults, processing faults from the Antenna Protection System, terminating RF drive and extinguishing faults within 10 μs. In addition, the system controls various delays and sets the boundaries for RF retries. The ICRF Control System provides amplitude regulation for all antennas and phase control for a four-strap antenna. We are modifying some of the fault processing components and control elements of these systems in an effort to improve reliability and serviceability, and increase flexibility. This upgrade will reduce wired interconnections, add remote features to improve access to key operating parameters, improve RF isolation with new switching components, simplify phase control, and expand the RF regulation system to an active control regime whereby plasma parameters may become direct feedback elements for RF regulation. Details of the proposed upgrade to the system will be presented, and implementation of any new technological tools will be discussed.",2009,0, 1192,Eye gaze correction with stereovision for video-teleconferencing,"The lack of eye contact in desktop video teleconferencing substantially reduces the effectiveness of video contents.
While expensive and bulky hardware is available on the market to correct eye gaze, researchers have been trying to provide a practical software-based solution to bring video-teleconferencing one step closer to the mass market. This paper presents a novel approach: based on stereo analysis combined with rich domain knowledge (a personalized face model), we synthesize, using graphics hardware, a virtual video that maintains eye contact. A 3D stereo head tracker with a personalized face model is used to compute initial correspondences across two views. More correspondences are then added through template and feature matching. Finally, all the correspondence information is fused together for view synthesis using view morphing techniques. The combined methods greatly enhance the accuracy and robustness of the synthesized views. Our current system is able to generate an eye-gaze corrected video stream at five frames per second on a commodity 1 GHz PC.",2004,0, 1193,On the computation of the linear complexity and the k-error linear complexity of binary sequences with period a power of two,"The linear Games-Chan algorithm for computing the linear complexity c(s) of a binary sequence s of period ℓ=2^n requires knowledge of the full sequence, while the quadratic Berlekamp-Massey algorithm requires knowledge of only 2c(s) terms. We show that we can modify the Games-Chan algorithm so that it computes the complexity in linear time knowing only 2c(s) terms. The algorithms of Stamp-Martin and Lauder-Paterson can also be modified, without loss of efficiency, to compute analogs of the k-error linear complexity for finite binary sequences viewed as initial segments of infinite sequences with period a power of two. We also develop an algorithm which, given a constant c and an infinite binary sequence s with period ℓ=2^n, computes the minimum number k of errors (and an associated error sequence) needed over a period of s for bringing the linear complexity of s below c. The algorithm has a time and space bit complexity of O(ℓ). We apply our algorithm to decoding and encoding binary repeated-root cyclic codes of length ℓ in linear, O(ℓ), time and space. A previous decoding algorithm proposed by Lauder and Paterson has O(ℓ(log ℓ)^2) complexity",2005,0, 1194,The Method of Location Error Detection and Correcting in Smart Home Environments,"A location awareness system is an essential factor in the development of low-cost sensor networks for use in smart home environments and ubiquitous networking. This paper addresses the problem of tracking a moving intelligent robot using the Pharos indoor location system. Our motivating applications include intelligent robot navigation, where location sensors provide position information to a moving robot. All of these applications require the position of a device moving at human speeds to be tracked. Ubiquitous indoor environments often contain substantial amounts of metal and other such reflective materials that affect the propagation of radio frequency signals in non-trivial ways, causing severe multi-path effects, dead-spots, noise, and interference. In this paper, we address the location error detection and correction problems of a sensor network platform with RF and ultrasonic sensors for a location tracking system. In particular, we present a novel method of detecting and correcting location errors using LQI that is particularly well suited to support context-aware smart home computing.
It is achieved by considering the speed variance of the moving object and by correcting the interference errors. The proposed scheme applies this error correction method to the mobile object. We achieved a 5% efficiency improvement compared with three existing schemes (listener inference algorithms). The system, called Pharos, aims to combine the advantages of real-time tracking systems implemented in distributed environments, regardless of whether the wireless sensor network is infrastructure-based or infrastructure-less.",2006,0, 1195,An Improved Spatial Error Concealment Algorithm Based on H.264,"Packet losses are inevitable when video is transported over error-prone networks. Error concealment methods can reduce the quality degradation of the received video by masking the effects of such errors. This paper presents a novel spatial error concealment algorithm based on the directional entropy in the available neighboring Macro Blocks (MBs), which can adaptively switch between the weighted pixel average (WPA) adopted in H.264 and an improved directional interpolation algorithm to recover the lost MBs. In this work, the proposed algorithm was evaluated on the H.264 reference software JM8.6. The illustrative examples demonstrate that the proposed method can achieve better Peak Signal-to-Noise Ratio (PSNR) performance and visual quality, compared with WPA and the conventional directional interpolation algorithm respectively.",2009,0, 1196,Plan-based replication for fault-tolerant multi-agent systems,"The growing importance of multi-agent applications and the need for a higher quality of service in these systems justify the increasing interest in fault-tolerant multi-agent systems. In this article, we propose an original method for providing dependability in multi-agent systems through replication. Our method is different from other works because our research focuses on building an automatic, adaptive and predictive replication policy where critical agents are replicated to avoid failures. This policy is determined by taking into account the criticality of the plans of the agents, which contain the collective and individual behaviors of the agents in the application. The set of replication strategies applied at a given moment to an agent is then fine-tuned gradually by the replication system so as to reflect the dynamicity of the multi-agent system",2006,0, 1197,A highly selective super-wide bandpass filter by cascading HMSIW with asymmetric defected ground structure,"The half mode substrate integrated waveguide (HMSIW) possesses the highpass characteristic of the SIW but at nearly half the size. A recently proposed asymmetric defected ground structure (ADGS), composed of two square-headed slots connected by a rectangular slot transversely under a microstrip line, exhibits quasi-elliptic-function band-reject characteristics around 3 GHz with high selectivity. Based on the circuit model, the structure of the ADGS is modified to perform well at about 16 GHz. By combining the HMSIW and the modified ADGS, a super-wide bandpass filter operating at about 8-16 GHz with high selectivity at both the upper and lower bands is proposed.
Both simulated and measured results are presented to demonstrate the validity of the proposed wideband filter.",2010,0, 1198,Configurable fault-tolerant processor (CFTP) for spacecraft onboard processing,"The harsh radiation environment of space, the propensity for SEUs to perturb the operations of silicon-based electronics, the rapid development of microprocessor capabilities and hence software applications, and the high cost (dollars and time) to develop and prove a system, require flexible, reliable, low cost, rapidly developed system solutions. A reconfigurable triple-modular-redundant (TMR) system-on-a-chip (SOC) utilizing field-programmable gate arrays (FPGAs) provides a practical solution for space-based systems. The configurable fault-tolerant processor (CFTP) is such a system, designed specifically for the purpose of testing and evaluating, on orbit, the reliability of instantiated TMR soft-core microprocessors, the ability to reconfigure the system to support any onboard processor function, and the means for detecting and correcting SEU-induced configuration faults. The CFTP utilizes commercial off-the-shelf (COTS) technology to investigate a low-cost, flexible alternative to processor hardware architecture, with a total-ionizing-dose (TID) tolerant FPGA as the basis for a SOC. The flexibility of a configurable processor, based on FPGA technology, enables on-orbit upgrades, reconfigurations, and modifications to the soft-core architecture in order to support dynamic mission requirements. Single event upsets (SEU) to the data stored in the FPGA-based soft-core processors are detected and corrected by the TMR architecture. SEUs affecting the FPGA configuration itself are corrected by background ""scrubbing"" of the configuration. The CFTP payload consists of a printed circuit board (PCB) of 5.3 inches × 7.3 inches utilizing a slightly modified PC/104 bus interface. The initial FPGA configuration is an instantiation of a TMR processor, with included error detection and correction (EDAC) and memory controller circuitry. The PCB is designed with requisite supporting circuitry including a configuration controller FPGA, SDRAM, and flash memory in order to allow the greatest variety of possible configurations. The CFTP is currently manifested as a space test program (STP) experimental payload on the Naval Postgraduate School's NPSAT1 and the United States Naval Academy's MidSTAR-1 satellites, which was launched into low earth orbit in March 2003.",2004,0, 1199,Fault diagnosis based on granular matrix-SDG and its application,"A hierarchical fault diagnosis method based on the granular matrix and the Signed Directed Graph (SDG) is presented in this paper. Granular Computing (GrC) theory can be introduced into SDG-based fault diagnosis to optimize the decision table. The rules of fault diagnosis are reasoned out by searching the associated paths of the SDG model. The redundant nodes of the failure diagnosis rules are reduced by the attribute reduction algorithm based on the granular matrix, which simplifies the failure diagnosis solution, avoids deploying redundant sensors, and decreases the complexity of configuring the sensor network.
Compared with traditional SDG-based failure diagnosis, the designed scheme and an experimental example of a hot nitric acid cooling failure diagnosis system show that the hierarchical fault diagnosis based on the granular matrix and SDG presented in this paper is not only feasible and effective, but also valuable in practice.",2009,0, 1200,A study on dynamic error of the measurement machine with low stiffness,"Higher measuring accuracy and velocity of the coordinate measuring machine are required to shorten cycle times in the modern manufacturing industry. However, the increase in measuring accuracy and velocity of CMM applications is limited by the low stiffness components, which cause complicated deformation and dynamic errors. According to the structural characteristics of a special-structure coordinate measuring machine, a non-rigid body model is established and the error compensation formula is derived. Furthermore, the dynamic model of this CMM is established by Lagrange energy theory. Based on the models of this CMM, a systematic analysis method for the measurement error is proposed. In this paper, the measurement errors of the CMM are studied in an integrated theoretical and experimental approach. We use a laser interferometer to measure the measurement errors of this CMM under static deformation, and the dynamic errors are measured under different motion parameters of the CMM. The experimental results show that the measuring errors caused by the limited stiffness of the components of the measuring machine cannot be neglected. By software compensation, the measurement error can be reduced from 8.9 μm to 2 μm. The outcome of the dynamic error experiments reflects the influence of dynamic errors under different motion parameters. Hence, the results are useful for studying how to decrease and restrain dynamic errors and for gaining a clearer and deeper knowledge of dynamic error compensation.",2009,0, 1201,Study and Realizing of Method of AC Locating Fault in Distribution System,"The idea of the AC fault locating method in a distribution system is to inject an AC signal into the fault phase after a single line-to-ground fault has happened, and then to trace the fault along the transmission line with a handheld AC signal detector using the dichotomy method until the fault is located. The frequency of the injected AC signal used in this study is 60 Hz. Compared with the injected S-signal technique, this method is called the low frequency AC signal injection method. In this paper, the hardware of the signal source and the software design are introduced. A pulse managed by the SCM is used in the control section of the hardware, and the application of the PWM control technique in this hardware is discussed; as for the software design, the PWM signal is generated by coding based on the relation between the injected signal and the PWM waveforms. The high frequency PWM signal drives a pair of switches in the inverter source, and after filtering the output terminal obtains a highly stable injected AC signal with constant frequency and adjustable voltage, based on which the signal detector can detect the required signal easily. The proposed signal source device reduces the difficulty of detecting high-impedance grounds and improves the accuracy and reliability of fault location.
This technique, which enables ground fault detection and is convenient for engineers to operate, reduces the fault locating time and improves its efficiency; its validity is proved by simulation and analysis.",2010,0, 1202,A problem-specific fault-tolerance mechanism for asynchronous, distributed systems,"The idle computers on a local area, campus area, or even wide area network represent a significant computational resource, one that is, however, also unreliable, heterogeneous, and opportunistic. We describe an algorithm that allows branch-and-bound problems to be solved in such environments. In designing this algorithm, we faced two challenges: (1) scalability, to effectively exploit the variably sized pools of resources available, and (2) fault tolerance, to ensure the reliability of services. We achieve scalability through a fully decentralized algorithm, in which the dynamically available resources are managed through a membership protocol. We guarantee fault tolerance in the sense that the loss of up to all but one resource will not affect the quality of the solution. For propagating information reliably, we use epidemic communication for both the membership protocol and the fault-tolerance mechanism. We have developed a simulation framework that allows us to evaluate design alternatives. Results obtained in this framework suggest that our techniques can execute scalably and reliably",2000,0, 1203,Space-Time Correlation Based Fault Correction of Wireless Sensor Networks,"The nodes within a wireless sensor network (WSN) have low reliability and are prone to acting abnormally and producing erroneous data. To address this problem, we propose a distributed fault correction algorithm, which makes use of the correlation among data of adjacent nodes and the correlation between current data and historical data on a single node. The algorithm can correct measurement errors every time the nodes take measurements. Simulation results show that the algorithm corrects a large share of errors while introducing very few, and remains effective for nodes near the event region border, where many existing algorithms fail.",2008,0, 1204,Dependency-aware fault diagnosis with metric-correlation models in enterprise software systems,"The normal operation of enterprise software systems can be modeled by stable correlations between various system metrics; errors are detected when some of these correlations fail to hold. The typical approach to diagnosis (i.e., pinpointing the faulty component) based on the correlation models is to use the Jaccard coefficient or some variant thereof, without reference to system structure, dependency data, or prior fault data. In this paper we demonstrate the intrinsic limitations of this approach, and propose a solution that mitigates these limitations. We assume knowledge of dependencies between components in the system, and take this information into account when analyzing the correlation models. We also propose the use of the Tanimoto coefficient instead of the Jaccard coefficient to assign anomaly scores to components. We evaluate our new algorithm with a Trade6-based test-bed.
We show that we can find the faulty component within the top-3 components with the highest anomaly scores in four out of nine cases, while the prior method can do so in only one.",2010,0,
1205,Resource Allocation for Error Resilient Video Coding Over AWGN Using Optimization Approach,"The number of slices for error resilient video coding is jointly optimized with 802.11a-like media access control and the physical layers with automatic repeat request and rate-compatible punctured convolutional codes over an additive white Gaussian noise channel, as well as channel time allocation for time division multiple access. For error resilient video coding, the relation between the number of slices and coding efficiency is analyzed and formulated as a mathematical model. It is applied to the joint optimization problem, and the problem is solved by a convex optimization method such as the primal-dual decomposition method. We compare the performance of a video communication system which uses the optimal number of slices with one that codes a picture as one slice. From numerical examples, the end-to-end distortion of the utility functions can be significantly reduced with the optimal number of slices per picture, especially at low signal-to-noise ratios.",2008,0,
1206,Fault detection of plasma etchers using optical emission spectra,"The objective of this paper is to investigate the suitability of using optical emission spectroscopy (OES) for the fault detection and classification of plasma etchers. The OES sensor system used in this study can collect spectra at up to 512 different wavelengths. Multiple scans of the spectra are taken from a wafer, and the spectral data are available for multiple wafers. As a result, the amount of OES data is typically large. This poses a difficulty in extracting relevant information for fault detection and classification. In this paper, we propose the use of multiway principal component analysis (PCA) to analyze the sensitivity of the multiple scans within a wafer with respect to typical faults such as etch stop, which is a fault that occurs when the polymer deposition rate is larger than the etch rate. Several PCA-based schemes are tested for the purpose of fault detection and wavelength selection. A sphere criterion is proposed for wavelength selection and compared with an existing method in the literature. To construct the final monitoring model, the OES data of selected wavelengths are properly scaled to calculate fault detection indices. Reduction in the number of wavelengths implies reduced cost for implementing the fault detection system. All experiments are conducted on an Applied Materials 5300 oxide etcher at Advanced Micro Devices (AMD) in Austin, TX",2000,0,
1207,Dependability analysis of fault-tolerant multiprocessor systems by probabilistic simulation,"The objective of this research is to develop a new approach for evaluating the dependability of fault-tolerant computer systems. Dependability has traditionally been evaluated through combinatorial and Markov modelling. These analytical techniques have several limitations, which can restrict their applicability. Simulation avoids many of the limitations, allowing for more precise representation of system attributes than is feasible with analytical modelling. However, the computational demands of simulating a system in detail, at a low abstraction level, currently prohibit evaluation of high-level dependability metrics such as reliability and availability.
The new approach abstracts a system at the architectural level, and employs life testing through simulated fault-injection to accurately and efficiently measure dependability. The simulation models needed to implement this approach are derived, in part, from the published results of computer performance studies and low-level fault-injection experiments. The developed probabilistic models of processor, memory and fault-tolerant mechanisms take into account such properties of real systems as error propagation, different failure modes, event dependency and concurrency. They have been integrated with a workload model and statistical analysis module into a generalised software tool. The effectiveness of such an approach was demonstrated through the analysis of several multiprocessor architectures",2001,0,
1208,Fault tolerance analysis of odd-even transposition sorting networks with single pass and multiple passes,"The odd-even transposition sorting networks have a simple and reliable VLSI implementation, and also have good fault tolerance properties. In this paper, a formal proof is presented for a well-known conjecture which states that the odd-even transposition sorting networks are one-fault tolerant with respect to a stuck-at-X fault at any internal comparator. Also, new simulation results are reported for a new mode of operation, sorting via multiple passes through the networks. Under this mode, the simulation results reveal that the odd-even transposition networks are k-fault tolerant with respect to stuck-at-X, stuck-at-H, and stuck-at-T faults at any set of internal comparators.",2003,0,
1209,FuSE - a hardware accelerated HDL fault injection tool,"The ongoing miniaturization of digital circuits makes them more and more susceptible to faults, which also complicates the design of fault tolerant systems. In this context, fault injection plays an important role in the process of fault tolerance validation. As a result, many fault injection tools have emerged during the last decade. However, these tools only operate on specific domains and can therefore be referred to as hardware- or software-based, simulation- or emulation-based techniques. In this paper we present FuSE, a single fault injection tool which covers multiple domains as well as different fault injection purposes. FuSE has been designed for use with the SEmulator®, an FPGA-based hardware accelerator. The created tool set has been fully automated for the fault injection process and only requires a VHDL description and a test bench of the circuit under test. FuSE can then perform fault injection experiments with a diagnostic resolution that is known from simulation-based approaches, but at a speed that even handles long-running experiments with ease.",2009,0,
1210,Error handling for the CDF online silicon vertex tracker,"The online silicon vertex tracker (SVT) is composed of 104 VME 9U digital boards (of eight different types). Since the data output from the SVT (a few MB/s) is a small fraction of the input data (200 MB/s), it is extremely difficult to track possible internal errors by using only the output stream. For this reason, several diagnostic tools have been implemented: local error registers, error bits propagated through the data streams, and the Spy Buffer system. Data flowing through each input and output stream of every board are continuously copied to memory banks named spy buffers, which act as built-in logic state analyzers hooked continuously to internal data streams.
The contents of all buffers can be frozen at any time (e.g., on error detection) to take a snapshot of all data flowing through each SVT board. The spy buffers are coordinated at system level by the Spy Control Board. The architecture, design, and implementation of this system are described",2001,0,
1211,Finding faults [data security],"The only way to attack strong cryptographic implementations is to attack the infrastructure upon which they are built. This infrastructure is most often the underlying operating system or middleware, but attacks can also be mounted directly against the hardware upon which the cryptographic implementation is being run. This issue's Crypto Corner describes some of the methods used to induce faults in systems and explains how such faults can be exploited to reveal secret information.",2005,0,
1212,Satellite images geometric correction based on non-parametric algorithms and self-extracted GCPs,"The geometric correction of high resolution satellite images can be carried out through generic non-parametric models that relate image to terrain coordinates. Traditional approaches to image geocoding rely on the measurement of a sufficient number of GCPs in both the ground and the image reference systems. Non-parametric models require a large number of GCPs well distributed over the whole scene, but GCP identification and collection is a highly time-consuming operation and not always a simple task. The authors have developed two procedures for geometric correction, based respectively on the rational function model (RFM) and on a new neural network approach (MLP, multilayer perceptron), and a procedure for automatic ground control point (GCP) extraction (AGE, automatic GCPs extraction) by means of a multi-resolution least squares matching technique. This paper concerns a new orthorectification procedure based on the sequential application of the AGE, MLP and RFM algorithms for georeferencing high resolution satellite images. Tests have been carried out on Eros-A1 satellite images, using as reference maps available aerial orthoimages at a map scale of 1:10000. A case study is presented",2004,0,
1213,Research on Analyse and Compensation Approach Aimed at CNC Machine Geometrical and Kinematic Errors,"Geometrical and kinematic errors are regarded as the prime causes of CNC machine contour error, and they confine further improvement of machining precision. In the paper, the main error sources that produce geometrical and kinematic errors in a CNC machine are researched in detail, and an analysis approach aimed at the main error sources is put forward which adopts the ""arc interpolation motion - arc image method"". Furthermore, a compensation approach aimed at error sources such as the mismatch of position loop servo gains and the loss of perpendicularity between orthogonal axes is developed in CNC software. Finally, the analysis and compensation approach is tested on a CNC experiment table. The experimental results reveal that the developed analysis and compensation approach aimed at the main error sources can enhance machine contour precision greatly. Consequently, the research is helpful for improving and maintaining the high precision of CNC machines over the long term.",2009,0,
1214,RFOH: A New Fault Tolerant Job Scheduler in Grid Computing,"The goal of grid computing is to aggregate the power of widely distributed resources. Considering that the probability of failure is great in such systems, fault tolerance has become a crucial area in computational grids.
In this paper, we propose a new strategy named RFOH for fault tolerant job scheduling in computational grids. This strategy maintains the history of fault occurrence of resources in the Grid Information Server (GIS). Whenever a resource broker has jobs to schedule, it uses this information in a genetic algorithm and finds a near-optimal solution for the problem. Further, it increases the percentage of jobs executed within the specified deadline. The experimental results show that we can have a combination of user satisfaction and reliability. Using checkpoint techniques, the proposed strategy can make grid scheduling more reliable and efficient.",2010,0,
1215,The Measure of human error: Direct and indirect performance shaping factors,"The goal of performance shaping factors (PSFs) is to provide measures to account for human performance. PSFs fall into two categories: direct and indirect measures of human performance. While some PSFs such as time to complete a task are directly measurable, other PSFs, such as fitness for duty, can only be measured indirectly through other measures and PSFs, such as through fatigue measures. This paper explores the role of direct and indirect measures in human reliability analysis (HRA) and the implications that measurement theory has on analyses and applications using PSFs. The paper concludes with suggestions for maximizing the reliability and validity of PSFs.",2007,0,
1216,Topology error identification for the NEPTUNE power system using an artificial neural network,"The goal of the North Eastern Pacific Time-Series Undersea Networked Experiment (NEPTUNE) is to construct a cabled observatory on the floor of the Pacific Ocean, encompassing the Juan de Fuca Tectonic Plate. The power system associated with the proposed observatory is unlike conventional terrestrial power systems in many ways due to the unique operating conditions of cabled observatories. The unique operating conditions of the system require hardware and software applications that are not found in terrestrial power systems. This paper builds upon earlier work and describes a method for topology error identification in the NEPTUNE system that utilizes an artificial neural network (ANN) to determine single contingency topology errors.",2004,0,
1217,BOND: An interposition agents based fault injector for Windows NT,"The goal of this paper is to present BOND, a software fault injection tool able to simulate abnormal behavior of a computer system running the Windows NT 4.0 operating system. The fault injector is based on interposition techniques, which guarantees a low impact on the execution of the target program, and allows the injection of commercial off-the-shelf (COTS) software programs. BOND allows performing both statistical and deterministic fault injection experiments, trading off between overhead and precision of the obtained results. Moreover, the tool is capable of injecting faults into different locations, at any level of the application context (code and data sections, stack, heap, processor's registers, system calls, ...). A complete set of experimental results on different application programs demonstrates the effectiveness and the flexibility of the tool",2000,0,
1218,Error Detection and Correction for Speech Recognition using Airport Layout Information: Concept and Evaluation,"The goal of this research is to integrate a voice recognition system (VRS) into an electronic flight bag (EFB) for the specification of taxi-routes.
To verify route feasibility, resolve certain ambiguities and provide limited 'automatic repair' to a route that could not be fully interpreted, information regarding the possible route structures on the airport is used. The information from a speaker-independent voice recognition system is matched with information on the known possible routes on an airport. This provides an error detection and correction capability. A study was performed to evaluate the improvements that can be obtained",2006,0,
1219,Error-related potential recorded by EEG in the context of a p300 mind speller brain-computer interface,The Mind Speller is a Brain-Computer Interface (BCI) which enables subjects to spell text on a computer screen by detecting P300 Event-Related Potentials in their electroencephalograms (EEG). This BCI application is of particular interest for disabled patients who have lost all means of verbal and motor communication. Error-related Potentials (ErrP) in the EEG are generated by the subject's perception of an error. We report on the possibility of using this ErrP for improving the performance of our Mind Speller. We tested 6 subjects and recorded several typing sessions for each of them. Responses to correct and incorrect performances of the BCI are recorded and compared. The shape of the received ErrP is compared to other studies. The detection of this ErrP and its integration in the Mind Speller are discussed.,2010,0,
1220,Modeling by groups for faults tolerance based on multi agent systems,"Mobile ad hoc networks are distributed environments characterized by high mobility and limited battery resources. In these networks, mobile nodes are subject to many errors. In this paper, we present our approach of modeling by groups for fault tolerance based on multi-agent systems (MAS), which predicts problems and provides decisions concerning critical nodes. Our work contributes to the resolution of two points. First, we propose an algorithm for modeling by groups in wireless ad hoc networks. Secondly, we study fault tolerance through the prediction of disconnections and partitions in the network; accordingly, we provide an approach which distributes information efficiently in the network by selecting some objects of the network to hold duplicates of the information.",2010,0,
1221,Design and evaluation of a fault-tolerant mobile-agent system,"Mobile agents create a new paradigm for data exchange and resource sharing in rapidly growing and continually changing computer networks. In a distributed system, failures can occur in any software or hardware component. A mobile agent can get lost when its hosting server crashes during execution, or it can get dropped in a congested network. Therefore, survivability and fault tolerance are vital issues for deploying mobile-agent systems. This fault tolerance approach deploys three kinds of cooperating agents to detect server and agent failures and recover services in mobile-agent systems. An actual agent is a common mobile agent that performs specific computations for its owner. Witness agents monitor the actual agent and detect whether it's lost. A probe recovers the failed actual agent and the witness agents. A peer-to-peer message-passing mechanism stands between each actual agent and its witness agents to perform failure detection and recovery through time-bounded information exchange; a log records the actual agent's actions. When failures occur, the system performs rollback recovery to abort uncommitted actions.
Moreover, our method uses checkpointed data to recover the lost actual agent.",2004,0,
1222,Modeling and simulation of the power transformer faults and related protective relay behavior,The modeling of power transformer faults and its application to performance evaluation of a commercial digital power transformer relay are the objectives of this study. A new method to build an EMTP/ATP power transformer model is proposed in this paper. Detailed modeling of the transformer relay is also discussed. The transient waveforms generated by ATP under different operating conditions are utilized to evaluate the performance of the transformer relay. The computer simulation results presented in this paper are consistent with the laboratory test results obtained using an analog power system model,2000,0,
1223,Run-time fault detection in monitor based concurrent programming,"The monitor concept provides a structured and flexible high-level programming construct to control concurrent accesses to shared resources. It has been widely used in concurrent programming environments for implicitly ensuring mutual exclusion and explicitly achieving process synchronization. This paper proposes an extension to the monitor construct for detecting run-time errors in monitor operations. Monitors are studied and classified according to their functional characteristics. A taxonomy of concurrency control faults over a monitor is then defined. The concepts of a monitor event sequence and a monitor state sequence provide a uniform approach to history information recording and fault detection. Rules for detecting various types of faults are defined. Based on these rules, fault detection algorithms are developed. A prototypical implementation of the proposed monitor construct with run-time fault detection mechanisms has been developed in Java. We briefly report our experience with and evaluation of our robust monitor prototype.",2001,0,
1224,An extended reliability growth model for managing and assessing corrective actions,"The most widely used traditional reliability growth tracking model and reliability growth projection model are both included as IEC International Standard and US ANSI National Standard models. These traditional models address reliability growth based on failure modes surfaced during the test. With the tracking model, all corrective actions are incorporated during test; this is called test-fix-test. With the projection model, all corrective actions are delayed until the end of test; this is called test-find-test. However, the most common approach for development-testing programs includes some corrective actions during testing and some delayed fixes incorporated at the end of test, that is, test-fix-find-test. This paper presents an extended model that addresses this practical situation and allows for preemptive corrective actions.",2004,0,
1225,Implementation a novel control-protective scheme on laboratorial HVDC system to distinguish between transient and steady state faults,"The nature of HVDC system faults with respect to their transient or steady-state behavior mainly depends on two parameters. The first is the location of the fault, and the other is the condition that causes the fault to occur. This paper mainly reviews transient and steady-state faults in HVDC systems and the control system response to these faults. Our new control-protective method has settings that permit the operator to change the HVDC system response by varying these settings.
Using this method, the HVDC system may have different responses when a fault occurs, and we can then select the most appropriate one. All simulations are performed in MATLAB-SIMULINK. Experimental studies are done on our laboratory system, a 500 V, 20 A HVDC system, constructed in S.H.L at IUST, Iran.",2007,0,
1226,Effect of QT interval correction during autonomic blockade in combination with changes in posture,"The measurement of the QT interval on the ECG is a marker for malignant ventricular arrhythmias. When QT is measured, it must be corrected to become independent of heart rate (HR) and become a comparable measure of repolarization between different conditions. The objective of this work was to evaluate different types of QT interval correction (Bazett, Individual and Hodges) for different QT/RR relations. This was accomplished with selective blockade of the sympathetic and vagal autonomic systems by using combinations of postural changes and drugs. When comparing the vagal condition (supine) versus the sympathetic condition (standing + atropine), a significant shortening of 43 ms (P<0.006) was observed, whereas when comparing the sympathetic (standing) versus the vagal condition (supine + propranolol), a significant lengthening of 23 ms (P<0.005) was observed. The Individual correction method achieved the lowest correlation between the QT and RR intervals, making QT more independent of HR and ANS status",2005,0,
1227,Safing and fault protection for the MESSENGER mission to Mercury,"The MErcury Surface, Space ENvironment, GEochemistry, and Ranging (MESSENGER) mission is a NASA Discovery-class, deep-space mission to orbit the planet Mercury. Its purpose is to map the planet surface using various scientific instruments and explore the interior of the planet using measurements from instruments such as a magnetometer and observation of planetary libration. This paper discusses the architecture and implementation of the methods by which faults in the MESSENGER spacecraft are detected and the effects of those faults mitigated. The responsibility of the redundant Fault Protection Processors (FPPs) is to detect faults and take autonomous corrective actions that will keep the spacecraft healthy and safe.",2002,0,
1228,Heating of cables due to fault currents,"The metal screen of a new type of medium- and high-voltage single-core underground cable is composed of helically applied copper wires embedded in a semiconductive layer. The distribution of current and power among the thin wires is non-uniform in a three-phase system, which can lead to their non-uniform heating, especially during fault conditions. The paper investigates this problem with 2D finite element models. The simulations are based on the coupling of the electromagnetic and thermal fields.",2010,0,
1229,Defect recognition of optical fiber fusion based on wavelet packet technique,"The importance of optical fiber fusion defect recognition based on the wavelet packet technique is introduced. The optical fiber fusion point is detected using the UltraPAC system and, with the defect features in mind, a method is discussed for analyzing and extracting defect eigenvalues using wavelet packet analysis and for pattern recognition using a wavelet neural network. This method can extract the correlated information that reflects the defect features from the detected ultrasonic signals and analyze it. A network model is constructed to realize the qualitative recognition of defects.
The experimental results show that wavelet packet analysis makes full use of the time-domain and frequency-domain information of the defect echo signal, partitions the frequency bands at multiple levels, further analyzes the high-frequency part that is not subdivided by multi-resolution analysis, and chooses the relevant frequency bands to match the signal spectrum. Thus, the time-frequency resolution is raised, and the good local amplification property of the wavelet neural network together with the learning characteristics of multi-resolution analysis can achieve a higher accuracy rate in the qualitative classification of fusion defects.",2010,0,
1230,Fault diagnosis technology based on wavelet analysis and resonance demodulation,"An impulse signal is contained in the fault signals of some pivotal components such as gears and axletrees. Extracting weak impact information is an important method for diagnosing equipment. A mathematical model of the technology of resonant demodulation is put forward in this paper. The model provides the theoretical basis for how to use the technology to extract the weak impulse signal from the normal low-frequency vibration signal; at the same time, another method, in which wavelet analysis is used to extract the weak impulse information, is introduced too. Simulation and practical application show that both wavelet analysis and demodulation are effective in extracting the weak impulse from the mechanical faults caused by gears and axletrees.",2004,0,
1231,Automatic Identification of Faults in Power Systems Using Neural Network Technique,"The main objective of this paper is to present the results obtained from the application of artificial neural networks and statistical tools in the automatic identification and classification of faults in electric power distribution systems. The techniques developed to treat the proposed problem use, in an integrated way, several approaches that can contribute to a successful fault detection process, aiming for it to be carried out in a reliable and safe way. The compilation of the results obtained from practical experiments accomplished on a pilot radial distribution feeder demonstrates that the developed techniques provide accurate results, identifying and classifying efficiently the several occurrences of faults observed in the feeder.",2007,0,
1232,Fault Tolerant Control on an Electric Vehicle,"The main purpose of this paper concerns fault tolerant control applied on an electric vehicle known as RobuCar, which is a 4×4 electric vehicle with four electromechanical wheel systems. Fault tolerant control (FTC) is intended to continue system operation in the presence of several faults in order to maintain the security of the system. The active fault tolerant control (AFTC) approach needs a fault detection and isolation (FDI) algorithm to detect and identify the fault with minimum false alarms, missed alarms and time delay. Only healthy components are used to reconfigure the control law of the system. RobuCar is particularly adapted to receive a fault tolerant module since it is composed of redundant components.",2006,0,
1233,Algorithms for bounded-error correlation of high dimensional data in microarray experiments,"The problem of clustering continuous valued data has been well studied in the literature.
Its application to microarray analysis relies on such algorithms as k-means, dimensionality reduction techniques, and graph-based approaches for building dendrograms of sample data. In contrast, similar problems for discrete-attributed data are relatively unexplored. An instance of analysis of discrete-attributed data arises in detecting co-regulated samples in microarrays. In this paper, we present an algorithm and a software framework, PROXIMUS, for error-bounded clustering of high-dimensional discrete attributed datasets in the context of extracting co-regulated samples from microarray data. We show that PROXIMUS delivers outstanding performance in extracting accurate patterns of gene-expression.",2003,0,
1234,Automatic installation of software-based fault tolerance algorithms in programs generated by GCC compiler,"The problem of designing radiation-tolerant devices working in application-critical systems becomes very important, especially if human life depends on the reliability of control mechanisms. One possible solution to this problem is pure software protection methods. They constitute a distinct category of techniques to detect transient faults and correct the corresponding errors. Software fault tolerance schemes are cheaper to implement since they can be used with standard, commercial off-the-shelf (COTS) components. Additionally, they do not require any hardware modification. In this paper, the author proposes a new implementation mechanism for software-based fault protection algorithms applied automatically during application compilation.",2010,0,
1235,LMI-based Lipschitz Observer Design with Application in Fault Diagnosis,The problem of fault detection and diagnosis in the class of nonlinear Lipschitz systems is considered. An observer-based approach offering extra degrees of freedom over classical Lipschitz observers is introduced. This freedom is used for the sensor fault diagnosis problem with the objective of making the residual converge to the fault vector and thus achieve detection and estimation at the same time. The use of appropriate weightings to solve this problem in a standard convex optimization framework is also demonstrated. An LMI design procedure solvable using commercially available software is presented,2006,0,
1236,Respiratory Motion Correction in 3-D PET Data With Advanced Optical Flow Algorithms,"The problem of motion is well known in positron emission tomography (PET) studies. The PET images are formed over an extended period of time. As the patients cannot hold their breath during the PET acquisition, spatial blurring and motion artifacts are the natural result. These may lead to incorrect quantification of the radioactive uptake. We present a solution to this problem by respiratory-gating the PET data and correcting the PET images for motion with optical flow algorithms. The algorithm is based on the combined local and global optical flow algorithm with modifications to allow for discontinuity preservation across organ boundaries and for application to 3-D volume sets. The superiority of the algorithm over previous work is demonstrated on software phantom and real patient data.",2008,0,
1237,Multisensor track-to-track association for tracks with dependent errors,"The problem of track-to-track association has been considered until recently in the literature only for pairwise associations. In view of the extensive recent interest in multisensor data fusion, the need to associate simultaneously multiple tracks has arisen.
This is due primarily to bandwidth constraints in real systems, where it is not feasible to transmit detailed measurement information to a fusion center but, in many cases, only local tracks. As is known in the literature, tracks of the same target obtained from independent sensors are still dependent due to the common process noise. This paper derives the likelihood function for the track-to-track association problem from multiple sources, which forms the basis for the cost function used in a multidimensional assignment algorithm that can solve such a large-scale problem in which many sensors track many targets. While a recent work derived the likelihood function under the assumption that the track errors are independent, the present paper incorporates the (unavoidable) dependence of these errors.",2004,0,
1238,"Faults in processor control subsystems: testing correctness and performance faults in the data prefetching unit","The processor control subsystems have for a long time been recognized as a bottleneck in the process of achieving complete fault coverage through various functional test propagation approaches. The difficult-to-test corner cases are further accentuated in fault-resilient control subsystems, as no functional effect is incurred as a result of the fault, even though performance suffers. We investigate the construction of software programs, capable of providing full fault coverage at minimal hardware cost, for one such fault-resilient subsystem in processor architecture: the data prefetching unit. Experimental results confirm the efficacy of the proposed method",2001,0,
1239,Fault-Driven Re-Scheduling For Improving System-level Fault Resilience,"The productivity of HPC systems is determined not only by their performance, but also by their reliability. The conventional method to limit the impact of failures is checkpointing. However, existing research shows that such a reactive fault tolerance approach can only improve system productivity marginally. Leveraging the recent progress made in the field of failure prediction, we propose fault-driven rescheduling (FARS) to improve system resilience to failures, and investigate the feasibility and effectiveness of utilizing failure prediction to dynamically adjust the placement of active jobs (e.g., running jobs) in response to failure predictions. In particular, a rescheduling algorithm is designed to enable effective job adjustment by evaluating the performance impact of potential failures and of rescheduling on user jobs. The proposed FARS complements existing research on fault-aware scheduling by allowing user jobs to avoid imminent failures at runtime. We evaluate FARS by using actual workloads and failure events collected from production HPC systems. Our preliminary results show the potential of FARS for improving system resilience to failures.",2007,0,
1240,"Design and Implementation of Inference Engine in Safety Risk Assessment Expert System in Petrochemical Industry Based on Fault Tree","Projects in the petrochemical industry are complex and risky. For this reason, we established a safety risk assessment (SRA) expert system based on the fault tree in the petrochemical industry. In this paper, we studied the design and implementation of the inference engine in the SRA expert system. We adopted the method of fault tree analysis (FTA) to acquire expert knowledge, and the established fault tree serves as the basis of inference.
The knowledge in the petrochemical industry was divided into shallow knowledge and deep knowledge, and the knowledge representation (KR) method adopted in this paper is production rules combined with frames. On the basis of good representation and organization of knowledge, we adopted an inference control strategy of forward reasoning combined with depth-first search, which improves the validity and accuracy of the SRA expert system to a certain extent.",2010,0,
1241,An application of genetic algorithms to the geometric correction of HypSEO hyperspectral data,"The paper describes a method to correct geometric distortions of raw images based on the optimization of a satellite geometric model. The geometric model depends on a wide set of parameters such as detector offset and orientation in the focal plane and the coefficients of the polynomials fitting orbital and attitude data. The optimization of the model is achieved in two steps: at first, images obtained by different channels in the focal plane are coregistered with respect to each other. Then, the image of a reference channel is geolocated with the aid of ground control points (GCP).",2002,0,
1242,Dynamic interlocking for fault-level limiting,"The purpose of dynamic interlocking for fault-level limitation is to replace hardwired interlocks with more flexible computer-based systems. An early version of dynamic interlocking was implemented at Drax power station in the UK a number of years ago using a purpose-built microprocessor system, running software with the network model hard-coded into it. This dynamic interlocking system could not be applied to another power system without redesigning both the hardware and software. To overcome the dedicated nature of the first system, a new dynamic interlocking system was built and subsequently installed in Heysham 2 nuclear power station in the UK. The software operates on-line and in real-time. The core software for this new system was general purpose so that it could be applied to any power system. To configure the software for any new network, the user has only to enter the appropriate network data through a graphical user interface. The paper describes the core software that was subsequently installed in Heysham 2 nuclear power station, including practical experience of its operation. The work was completed in 1985 and implemented on specialised hardware. Despite this, the core software is even more relevant today than at that time because advances in hardware technology have created the opportunity to construct a universally applicable dynamic interlocking system. The advent of circuit breakers with built-in microprocessor systems and communication interfaces, as well as the power of SCADA systems, has created the opportunity to install dynamic interlocking systems on many plants using the generalised software described here and standard hardware. Such systems have the potential to reduce installation costs, greatly improve the flexibility of plant operation and reduce maintenance costs",2001,0,
1243,Development of a Testbench for Validation of DMT and DT2 Fault-Tolerant Architectures on SOI PowerPC7448,The purpose of TAFT fault tolerance studies conducted at CNES is to prepare the space community for the significant evolution linked to the usage of COTS components for developing spacecraft supercomputers. CNES has patented the DMT and DT2 fault-tolerant architectures with 'light' features.
The development of a DMT/DT2 testbench based on a PowerPC7448 microprocessor from e2v is presented in this paper.,2008,0,
1244,Human Factors in Large-Scale Biometric Systems: A Study of the Human Factors Related to Errors in Semiautomatic Fingerprint Biometrics,"The purpose of this paper is to demonstrate the importance of considering human factors in large-scale, complex biometric systems. A team of 19 board-certified latent print examiners conducted 1620 latent fingerprint image formatting tasks, 1797 encoding tasks, and 146 388 side-by-side comparison tasks of latent prints with potential matching tenprint candidates. Examiner feedback from ten encoding mistakes and 13 comparison mistakes provided significant data demonstrating that relating the Department of Defense (DoD) Human Factors Classification System (HFACS) to semiautomatic fingerprint biometrics deserves consideration. The increase in match rate of 10% and 7%, respectively, for encoding and comparison when verifications are conducted on these tasks provides striking evidence of the risks involved if large-scale biometric system integrators or owners design and operate biometric systems based solely on single human examiner conclusions without ample consideration of the error recovery mechanism of second-examiner involvement for low-quality data.",2010,0,
1245,A Statistical Approach for Estimating the Correlation between Lightning and Faults in Power Distribution Systems,"The paper deals with the subject of the source-identification of transient voltage disturbances in distribution system buses. In particular, a statistical procedure is proposed for the evaluation of the probability that a lightning flash detected by a lightning location system (LLS) could cause a fault and, therefore, relay interventions, generally associated with voltage dips. The proposed procedure is based on the coordinated use of the information provided by the LLS and the availability of an advanced simulation tool for the accurate simulation of lightning-induced voltages on complex power systems, namely the LIOV-EMTP code. The uncertainty levels of the stroke location and of the peak current estimations provided by the LLS are discussed and their influence on the lightning-fault correlation is analyzed",2006,0,
1246,A system for incipient fault detection and fault diagnosis based on MCSA,"The paper describes a system for automated detection of incipient faults in induction machines. The system is based on the Motor Current Signature Analysis (MCSA) method and is intended for application in a thermal electric power plant in southern Brazil. First, the mechanism of fault evolution is introduced and clarified regarding the most common induction motor faults: stator winding short-circuits, broken and cracked rotor bars and eccentricity faults. The influence of the load condition on the fault indicator is discussed based on practical cases, obtained through fault simulations using a prototype. The main theoretical and conceptual aspects of the developed system are presented, including the signal acquisition and conditioning as well as the database which stores the motor signals acquired over a time period. Some results from the practical use of the system are shown to illustrate the system capabilities.",2010,0,
1247,An automated system for incipient fault detection and diagnosis in induction motors based on MCSA,"The paper describes a system for automated detection of incipient faults in induction machines.
The system is based on the Motor Current Signature Analysis (MCSA) method and is intended for application in a thermal electric power plant in southern Brazil. First, the mechanism of fault evolution is introduced and clarified regarding the most common induction motor faults: stator winding short-circuits, broken and cracked rotor bars and eccentricity faults. The influence of the load condition on the fault indicator is discussed based on practical cases, obtained through fault simulations using a prototype. The main theoretical and conceptual aspects of the developed system are presented, including the signal acquisition and conditioning as well as the database which stores the motor signals acquired over a time period. Some results from the practical use of the system are shown to illustrate the system capabilities.",2010,0,
1248,Prototype-based minimum error classifier for handwritten digits recognition,The paper describes an application of the prototype-based minimum error classifier (PBMEC) to the offline recognition of handwritten digits. The PBMEC uses a set of prototypes to represent each digit along with an L-norm of distances as the decoding scheme. Optimization of the system is based on the minimum classification error (MCE) criterion. We introduce a new clustering criterion adapted to the PBMEC structure that minimizes an L-norm-based distortion measure. The new clustering algorithm can generate a smaller number of prototypes than the standard k-means with no loss in accuracy. It is also shown that the PBMEC trained with MCE can achieve over 42% improvement from the baseline k-means process and requires only 28 Kb of storage to match the performance of a 1.46 Mb k-NN classifier.,2004,0,
1249,Impact of advancements of numerical distance relays over setting and testing the ground faults characteristics,"The paper describes different techniques for testing the characteristics of modern microprocessor impedance relays and their influence on developing a valid test plan. First, the historical development of distance relays is briefly described. Then, the need for automated testing of the relays is explained. A practical application in testing phase-to-ground fault characteristics is presented. First, the concept of loop impedance calculation is described. Challenges experienced and solutions provided for automatic testing of the impedance relays are shown. The paper emphasizes the importance of correct selection of the zero sequence compensation factors and shows the impact on relay operation. Finally, the results are summarized and conclusions are drawn.",2004,0,
1250,"Automotive signal fault diagnostics - part I: signal fault analysis, signal segmentation, feature extraction and quasi-optimal feature selection","The paper describes our research in vehicle signal fault diagnosis. A modern vehicle has embedded sensors, controllers and computer modules that collect a large number of different signals. These signals, ranging from simple binary modes to extremely complex spark timing signals, interact with each other either directly or indirectly. Modern vehicle fault diagnostics very much depend upon the input from vehicle signal diagnostics. Modeling vehicle engine diagnostics as a signal fault diagnostic problem requires a good understanding of signal behaviors relating to various vehicle faults.
Two important tasks in vehicle signal diagnostics are to find which signal features are related to various vehicle faults, and how these features can be effectively extracted from signals. We present our research results in signal faulty behavior analysis, automatic signal segmentation, feature extraction and selection of important features. These research results have been incorporated in a novel vehicle fault diagnostic system, which is described in another paper (see Yi Lu Murphey et al., ibid., p.1076-98).",2003,0,
1251,An empirical study of software reuse vs. defect-density and stability,"The paper describes results of an empirical study, where some hypotheses about the impact of reuse on defect-density and stability, and about the impact of component size on defects and defect-density in the context of reuse are assessed, using historical data (data mining) on defects, modification rate, and software size of a large-scale telecom system developed by Ericsson. The analysis showed that reused components have lower defect-density than non-reused ones. Reused components have more defects of the highest severity than the total distribution, but fewer defects after delivery, which shows that these are given higher priority to fix. The number of defects increases with component size for non-reused components, but not for reused components. Reused components were less modified (more stable) than non-reused ones between successive releases, even if reused components must incorporate evolving requirements from several application products. The study furthermore revealed inconsistencies and weaknesses in the existing defect reporting system, by analyzing data that was hardly treated systematically before.",2004,0,
1252,Impact of advancements on automated relay testing over checking sensitive earth fault characteristic,"The paper describes the impact of advancements in automatic testing of protective relays and some of the unique problems associated with automatic testing. First, the increased need for advanced automatic testing for commissioning protective relays is explained. Next, a detailed application of a unique problem associated with the automatic testing of a directional earth fault characteristic is presented. Finally, the results are summarized and conclusions are drawn.",2004,0,
1253,Information system of Relay protection for fault record analysis using WEB technology,"The paper describes the information system of the Relay protection department of the transmission system operator HEP, Area of Rijeka, used for remote access to relay protection equipment and for archiving the fault records captured in the power system. The system described facilitates analysis of power system events and relay protection operation using WEB technology.",2008,0,
1254,Recovery in fault-tolerant distributed microcontrollers,"The paper describes the use of fault tolerance in a microcontroller node to be used in a network of embedded processors. It is primarily motivated by long-life space applications where radiation-induced transient errors will be a frequent occurrence, and a few chip failures may be expected before a mission is completed. A testbed has been constructed, and a real-time executive has been developed and tested in it. Preliminary fault-insertion testing has been started. Due to interconnection constraints for latchup circumvention and other reasons, we have chosen a design that is not Byzantine resilient.
Even though inconsistent signaling may occur occasionally, multiple recovery actions must converge to a successful testing and restart of the system to regain correct functionality.",2001,0,
1255,Experimental Validation of Fault Injection Analyses by the FLIPPER Tool,The paper discusses the experimental validation of fault injection analyses performed with the FLIPPER tool. Failure probabilities obtained by fault injection were compared against failure probabilities obtained from accelerated proton testing of a benchmark design provided by the European Space Agency.,2010,0,4815
1256,The effects of Gaussian weighting errors in hybrid SC/MRC combiners,"The paper examines the impact of Gaussian distributed weighting errors (in the channel gain estimates used for coherent combination) on the statistics of the output of the hybrid selection/maximal-ratio (SC/MRC) receiver as well as the degradation of the average symbol error rate (ASER) performance from the ideal case. New expressions for the probability density function (PDF), cumulative distribution function (CDF) and moment generating function (MGF) of the coherent hybrid SC/MRC combiner output signal-to-noise ratio (SNR) are derived. The MGF is then used to derive exact closed-form ASER formulas for binary and M-ary modulations employing a nonideal hybrid SC/MRC receiver in Rayleigh fading. Results for both SC and MRC are obtained as limiting cases. The effect of the weighting errors on the outage rate of error probability and the average combined SNR is also investigated. These analytical results provide some insights into the trade-off between diversity gain and combination losses with the increasing order of diversity branches in an energy-sharing communication system",2000,0,
1257,An evolutionary RBF networks based on RPCL and its application in fault diagnosis,"The performance of an RBF neural network strongly depends on the network structure and parameters. Therefore, an algorithm that can automatically select the network configuration will be very beneficial. This paper presents a novel method whereby the configuration of an RBF network can be learned by a hybrid training scheme. An appropriate number of neurons in the hidden layer and their coarse centers are obtained by a modified version of Rival Penalized Competitive Learning (RPCL). Then, a genetic algorithm is applied to determine the optimal parameters, including refined cluster centers, the variances of the radial basis functions and the weights. The application of the proposed evolutionary RPCL-RBF network is discussed, and simulation results illustrate that the proposed method is effective.",2009,0,
1258,Fault tolerant routing in mobile ad hoc networks,"The performance of ad hoc routing protocols will significantly degrade if there are malfunctioning nodes in the network. Fault tolerant routing protocols address this problem by exploiting network redundancy through multipath routing. Designing an effective and efficient fault tolerant routing protocol is inherently hard, because the problem is NP-complete and precise path information is unavailable. This paper solves this problem by presenting an end-to-end estimation-based fault tolerant routing algorithm, E2FT. E2FT deploys two complementary processes: route estimation and route selection. Through end-to-end performance measurement, the route estimation process gives improving estimation results via iterations. Based on these estimation results, the route selection process decides a multipath route for packet delivery.
The route selection is refined progressively with the increasingly accurate estimation results using ""confirmation"" and ""dropping"" procedures. Through theoretical analysis and simulation, we show that E2FT can achieve a high packet delivery rate with acceptable overhead.",2003,0,
1259,"Performance of cellular CDMA with cell site antenna arrays, Rayleigh fading, and power control error","The performance of code-division multiple-access (CDMA) systems is affected by multiple factors such as large-scale fading, small-scale fading, and cochannel interference (CCI). Most of the published research on the performance analysis of CDMA systems accounts for subsets of these factors. In this work, it is attempted to provide a comprehensive analysis which joins several of the most important factors affecting the performance of CDMA systems. In particular, new analytical expressions are developed for the outage and bit-error probability of CDMA systems. These expressions account for adverse effects such as path loss, large-scale fading (shadowing), small-scale fading (Rayleigh fading), and CCI, as well as for correcting mechanisms such as power control (compensates for path loss and shadowing), spatial diversity (mitigates against Rayleigh fading), and voice activity gating (reduces CCI). The new expressions may be used as convenient analysis tools that complement computer simulations. Of particular interest are tradeoffs revealed among system parameters, such as maximum allowed power control error versus the number of antennas used for spatial diversity",2000,0,
1260,Error Rate Estimation of Finite-Length Low-Density Parity-Check Codes on Binary Symmetric Channels Using Cycle Enumeration,"The performance of low-density parity-check (LDPC) codes decoded by hard-decision iterative decoding algorithms can be accurately estimated if the weight J and the number |Ej| of the smallest error patterns that cannot be corrected by the decoder are known. To obtain J and |Ej|, one would need to perform the direct enumeration of error patterns with weight i ≤ J. The complexity of enumeration increases exponentially with J, essentially as n^J, where n is the code block length. In this paper, we approximate J and |Ej| by enumerating and testing the error patterns that are subsets of short cycles in the code's Tanner graph. This reduces the computational complexity by several orders of magnitude compared to direct enumeration, making it possible to estimate the error rates for almost any practical LDPC code. To obtain the error rate estimates, we propose an algorithm that progressively improves the estimates as larger cycles are enumerated. Through a number of examples, we demonstrate that the proposed method can accurately estimate both the bit error rate (BER) and the frame error rate (FER) of regular and irregular LDPC codes decoded by a variety of hard-decision iterative decoding algorithms.",2007,0,
1261,A novel mathematical morphology filter for the accurate fault location in power transmission lines,"The performance of the Multi-resolution Morphology Gradient (MMG)-based fault location scheme in power transmission lines will deteriorate when noise is imposed on the transient signals. In this paper, a novel mathematical morphology filter, Dierion, is proposed to effectively reduce the noise in the transient signals and improve the performance of accurate fault location using MMG.
The proposed filter with parameters (m, k) extracts the kth element of the sorted input signal according to the structuring element, whose length is m. The dilation and erosion filters, the two basic operators in mathematical morphology (MM), are formulated in a unified mathematical framework defined by the Dierion filter, based on the assumption that the structuring element is symmetric with respect to its origin. The general statistical characteristics of the proposed morphological filter are discussed in detail. The results show that the Dierion filter with parameters (m, (m + 1)/2), namely the median filter, performs better in noise reduction than the dilation and erosion filters. The efficiency of the Dierion filter's noise reduction is verified by a variety of simulations. Therefore, the MMG protection scheme is improved significantly to be noise tolerant.",2009,0,
1262,Panel statement: why progress in (composite) fault tolerant real-time systems has been slow (-er than expected... & what can we do about it?),"The pervasiveness of computers in our current IT driven society (transportation, e-commerce, e-transactions, communication, process control) also implies our growing dependency on their ""correct"" functionality. In many a case, the real value of these systems, and also our usage of them, rests in part on the dependency (real or perceived) we are consequently willing to place on the provisioning of the services, i.e., the implicit or explicit assurance of trust we put in the sustained delivery of desired services. Some systems are considered safety-critical (flight/reactor control, etc.), though others are accorded varied degrees of criticality. Nevertheless, our expectancy extends to obtaining the proper services when the system is fault-free and especially when it encounters perturbations (design or operational), e.g., electromagnetic interference or a lightning strike for an aircraft. Consequently, it is important to qualitatively and quantitatively associate some measures of trust in the system's ability to ""actually"" deliver the desired services in the presence of faults. These are often termed ""dependability"" measures for a system, with a plethora of fault-tolerance (FT) strategies to help achieve desired levels of dependability. As before, dependability entails the sustained delivery of services, be they service-critical or cost-critical, regardless of the perturbations encountered during their operation.",2004,0,
1263,Sensor placement strategy for in-situ bearing defect detection,"The placement of sensors is of critical importance to achieving high quality measurement for machine condition monitoring and fault diagnosis. This paper investigates sensor placement strategy for detecting structural defects of a ball bearing. Based on an analytical study of signal propagation from the defect location to the sensors, numerical simulations using a finite element algorithm were conducted to validate the signal strength at several representative sensor locations. The results were then experimentally verified through actual measurements. The study has shown that to achieve a high signal-to-noise ratio, the sensors need to be placed as closely as possible to the bearing, where signals due to structural defects are generated.
The study has provided a theoretical framework for designing sensor-embedded bearings with built-in diagnostic capabilities",2000,0, 1264,"Influence of random, pile-up and scatter corrections in the quantification properties of small-animal PET scanners","The potential of PET imaging for pre-clinical studies will be fully realized only if repeatable, reliable and accurate quantitative analysis can be performed. The characteristic blurring of PET images due to positron range and non co-linearity, as well as the random, pile-up and scatter contributions that may be significant for fully 3D PET acquisitions of small animals, make their quantitative analysis difficult. In this work, calibration curves of specific activity versus specific counts in the image are determined for 3D-OSEM reconstructions from a commercially available small-animal PET scanner. Both linear and non-linear calibration curves are compared, and the effect of corrections for random and scatter contributions is studied. To assess the improvement in the calibration procedure when scatter and random corrections are considered, actual data from a rat tumor pre- and post-cancer therapy are analyzed. The results show that applying random and scatter corrections can increase the sensitivity of PET images to changes in the biological response of tumors by more than 15%, compared to uncorrected reconstructions.",2007,0, 1265,Unequal Error Protection (UEP) for Wavelet-Based Wireless 3D Mesh Transmission,"The recent popularity of networked graphics applications such as distributed military simulators and online games has increased the need to transmit large 3D meshes and textures over wireless networks. To speed up large mesh transmission over low-bandwidth wireless links, we use a wavelet-based technique that aggressively compresses large meshes and enables progressive (piece-wise) transmission. Using wavelets, a server only needs to send the full connectivity information of a small base mesh along with wavelet coefficients that refine it, saving memory and bandwidth. To mitigate packet losses caused by high wireless error rates, we propose a novel forward error correction (FEC) scheme based on unequal error protection (UEP). UEP adds more error correction bits to regions of the mesh that have more details. Our work uses UEP to make wavelet-encoded meshes more resilient to wireless errors. Experimental results show that our proposed UEP scheme is more error-resilient than no error protection (NEP) and equal error protection (EEP) as the packet loss rate increases, achieving 50% lower relative error and maintaining the decoded mesh structure. Our scheme can be integrated into future mobile devices and shall be useful in application areas such as military simulators on mobile devices.",2009,0, 1266,Disjoint-Paths and Fault-Tolerant Routing on Recursive Dual-Net,"The recursive dual-net is a newly proposed interconnection network for massively parallel computers. The recursive dual-net is based on a recursive dual-construction of a base network. A k-level dual-construction for k > 0 creates a network containing (2n0)^(2^k) nodes with node-degree d0 + k, where n0 and d0 are the number of nodes and the node-degree of the base network, respectively. The recursive dual-net is node and edge symmetric and can contain a huge number of nodes with small node-degree and short diameter. Disjoint-paths routing and fault-tolerant routing are fundamental and critical issues for the performance of an interconnection network.
In this paper, we propose efficient algorithms for disjoint-paths and fault-tolerant routing on the recursive dual-net.",2009,0, 1267,The influence of modeling error on dynamic compensation of sensors,"The relationship between a sensor's modeling error and the widening factor of its working frequency band was analyzed quantitatively, and the corresponding spectra were obtained. These offer a theoretical basis for estimating the feasibility and the practical engineering effect of the compensation method used to improve a sensor's dynamic characteristics, and prove that the compensation method has good reliability. Conclusions are drawn clearly: for a first-order system, the compensation effect is conspicuous, which only relates to the modeling error but is independent of the time constant; for a second-order system, the compensation effect is determined by the modeling error and the damping ratio. Within a wide range, if the damping ratio is larger, the modeling precision required to obtain the same compensation effect can be lower. Through quantitative analysis, we found that the modeling precision in dynamic compensation need not be as high as is currently taken for granted.",2004,0, 1268,"RHIC insertion region, shunt power supply current errors",The Relativistic Heavy Ion Collider (RHIC) was commissioned in 1999 and 2000. RHIC requires power supplies to supply currents to highly inductive superconducting magnets. The RHIC Insertion Region contains many shunt power supplies to trim the current of different magnet elements in a large superconducting magnet circuit. Power supply current error measurements were performed during the commissioning of RHIC. Models of these power supply systems were produced to predict and improve these power supply current errors using the circuit analysis program MicroCap V by Spectrum Software (TM). Results of the power supply current errors are presented from the models and from the measurements performed during the commissioning of RHIC,2001,0, 1269,Reliability analysis of protective relays in fault information processing system in China,"The reliability indices of protective relays are first put forward in this paper. A Markov probability model is then established to evaluate the reliability of relay protection. With the state space analytical method, all the steady state probabilities and state transition probabilities can be calculated utilizing the data stored in the fault information processing system. We can get an equation that represents the influence of routine test intervals on relay unavailability. Based on this, the optimum routine test interval for protective relays can be determined. This paper also proposes an efficient method of processing large amounts of information by the fault information processing system and evaluating the reliability of protective relays with it, and the corresponding software package is also developed. The application of it to an actual power system in China proves the method to be correct and effective",2006,0, 1270,The reliable platform service: a property-based fault tolerant service architecture,"The reliable platform is a fault tolerant architecture designed to provide a structured but flexible framework for the delivery of dependable services for highly critical applications such as X-by-wire systems. The approach is based on defining a structured hierarchy of critical fault tolerant services with corresponding properties that can be explicitly specified and verified.
The architecture also incorporates a comprehensive error model that is inclusive of symmetric and asymmetric (i.e. Byzantine) errors of both a permanent and transient nature. Advanced features include the use of hybrid error recovery algorithms, and node/process level synchronization strategies. The system is capable of managing diverse processes at different levels of severity and with varied failure semantics. The system is dynamically reconfigurable based on error containment regions and online diagnosis protocols.",2005,0, 1271,Evaluation of risk in canal irrigation systems due to non-maintenance using fuzzy fault tree approach,"The safety and performance of many existing irrigation systems could be improved by performing preventive maintenance activities. Canal irrigation systems are modeled in terms of condition and performance measures that can be directly correlated with particular canal system maintenance activities. There are two categories of scheduled maintenance activity in irrigation maintenance systems: maintenance may be targeted towards restoring deliveries (""restorative maintenance""), or towards reducing the risk of failures (""preventative maintenance""). This paper covers the latter kind of maintenance scheduling by 'risk analysis'. The purpose of this risk analysis is to forecast the impact of preventative maintenance on deliveries from the main channel systems. After gaining experience from a preliminary risk analysis based on a questionnaire survey and the failure history of past records, a 'fuzzy fault tree (FFT)' method is developed for rapid risk assessment, which is needed by the irrigation system manager/engineer. 'Risk analysis' of irrigation systems from the point of view of maintenance is studied and applied to the Tirunelveli Channel Systems located in India. The effectiveness is calculated for each preventative maintenance task, and the tasks are ranked according to the effectiveness/cost ratio.",2003,0, 1272,Design of Fault Diagnosis Instrument for Speed Control System Based on Virtual Instrument,"The operating status of the Electronic Speed Controller and the engine of the cannon need to be evaluated and checked in ordinary maintenance, examination and repair. A useful fault diagnosis system for the Electronic Speed Controller was designed using LabVIEW. The system is composed of a magneto-electric spin speed sensor, a junction box, a PWM driver module, a PCI1802 multi-function card and an Apolo530 industrial control portable computer. It can be used to measure and simulate the signal of the magneto-electric spin speed sensor or the executor, and also to diagnose faults of the sensor, the adjusting board and the executor of the Electronic Speed Controller. The reliability and convenience of the system were demonstrated in a simulated running experiment.",2010,0, 1273,Optimal placement of sensor in gearbox fault diagnosis based on VPSO,"The optimal layout of acceleration sensors and the application of the particle swarm optimization (PSO) algorithm to solve the fitness problems of such optimization are discussed in this paper.
Based on finite element modeling of the gearbox and the results of modal analysis, the particle swarm optimization with adaptive velocity (VPSO) algorithm is applied, with two kinds of fitness functions as evaluation goals, to optimize the positioning of the gearbox sensor layout; the optimization results are then analyzed.",2010,0, 1274,The Orion GN&C data-driven flight software architecture for automated sequencing and fault recovery,"The Orion Crew Exploration Vehicle (CEV) is being designed to include capabilities that allow significantly more automation than either the Space Shuttle or the International Space Station (ISS). In particular, the vehicle flight software has requirements to accommodate increasingly automated missions throughout all phases of flight. This paper presents the Guidance, Navigation & Control (GN&C) flight software architecture designed to provide an evolvable automation capability that sequences through software modes and configurations. This software architecture is required to maintain flexibility to address the maturation of operational concepts over time, permit ground and crew operators to gain trust in the system, and provide capabilities for human override of the automation in `off-nominal' situations. To allow for mission flexibility and reconfigurability and to reduce the recertification expense over the life of the program, a data-driven approach is used to load the mission event plan as well as the flight software artifacts associated with the GN&C subsystem. The flight software schema for automated mission sequencing is presented with a concept of operations for interactions with ground and crew members. This data is managed through a prototype database of GN&C level sequencing data, which tracks mission specific parameters to aid in the scheduling of GN&C activities. A prototype architecture for fault detection, isolation and recovery interactions with the automation software is presented as part of the upcoming design maturation to respond with appropriate GN&C and vehicle-level actions in `off-nominal' scenarios.",2010,0, 1275,Requirements and concepts for an inter-organizational fault management architecture,"The outsourcing of IT services to dedicated providers has been a successful and sometimes painful step for many enterprises in the past decade. More recently, IT service providers themselves have also started to either outsource service parts or to deliver those services in a non-hierarchical cooperation with other providers. However, splitting a service into several service parts, which have to be implemented, operated, and maintained by different providers, is a non-trivial task. One key aspect of such inter-organizational cooperation is fault management, because it is crucial to locate and solve problems, which reduce the quality of service, quickly and reliably. In this paper we present the results of a thorough use case based requirements analysis, which is part of our ongoing work to specify an architecture for inter-organizational fault management.",2010,0, 1276,Developing Fault Injection Environment for Complex Experiments,"The paper addresses the problem of creating a comprehensive fault injection environment, which integrates and improves various simulation and supplementary functions. This is illustrated with experimental results.",2008,0, 1277,Evaluation of transient fault susceptibility in microprocessor systems,The paper addresses the problem of evaluating transient fault impact on COTS microprocessor systems.
We present the problem of fault effect propagation from the low logic level to the software and application levels. Such an analysis is needed to optimize error detection and correction mechanisms at hardware and software levels. For this purpose we use sophisticated fault injectors. The usefulness of the presented approach was proved in many experiments described in the paper. It may support hardware/software co-design.,2004,0, 1278,The properties of the method of frequency errors correction of input circuits within the range of nonlinear operation,"The paper concerns the method of the correction of frequency errors of input circuits, elaborated for the correction of module errors and phase errors of linear input circuits. The method is implemented in the spectral domain in a programmed manner. The author presents the findings of experimental research which show the efficiency of the method in measurements of the rms value of a nonsinusoidal current within the range of linear and nonlinear processing characteristics of a current transformer.",2010,0, 1279,Communication strategy and fault-tolerance abilities development in bio-inspired hardware systems with FPGA-based artificial cell network,"The paper deals, through computer-aided modeling, numerical simulation and experimental research, with bio-inspired digital systems, in order to implement VLSI hardware which exhibits the abilities of living organisms, such as evolution capabilities, self-healing and fault-tolerance. The theoretical background of the work is founded on the basic concepts of cellular embryology. In the first stage of the research, a new model for an FPGA-based artificial cell is proposed and developed. A new communication strategy inside the cell networks is also presented, in order to reproduce with high fidelity the complex phenomena and interaction rules in bio-inspired hardware systems. In the next steps, the fault-tolerance and self-healing phenomena between these cells in a bi-dimensional structure are carefully analyzed and simulated. The final purpose is to design a bio-inspired hardware system (embryonic machine) with programmable FPGA arrays, in order to study and experiment with basic properties of living organisms.",2008,0, 1280,CAN generator and error injector,"The paper deals with the possibilities available for testing industrial distributed systems (field buses) and their components. It especially focuses on tests of the handling of erroneous states caused by external electromagnetic disturbances. Although the common approach is the same for any field bus standard, each of them has some specific attributes that have to be taken into account. The above-mentioned common approach is therefore illustrated by an example based on the CAN (Controller Area Network) standard. A special instrument (test generator) was developed to meet all the specific test requirements; its design, features, parameters and usage are presented as well. The most important attribute of the presented approach is that it provides exactly defined and repeatable test conditions.",2002,0, 1281,Faults diagnosis for power transformer based on support vector machine,"The power transformer is a very important piece of equipment in a power system, and it is necessary to carry out fault diagnosis for it. The support vector machine is a machine learning algorithm based on statistical learning theory, which can achieve good classification results with few learning samples. A new power transformer fault diagnosis method based on support vector machine is presented in this paper.
The method has many advantages for transformer fault diagnosis, such as a simple algorithm, good classification performance and high efficiency. The fault diagnosis method has been validated with extensive practical fault data from power transformers. Compared with the traditional three-ratio method in experiments, this method achieves a higher diagnostic accuracy. This shows that the method is feasible and well suited for power transformer fault diagnosis.",2010,0, 1282,Numerical Simulation of Multi-path Ultrasonic Flowmeter: Ultrasonic Path Error Analysis,"The precision of an ultrasonic flowmeter mainly depends on the adaptability of the ultrasonic path to the velocity profile of the pipe, especially in special cases such as an ultrasonic flowmeter located near a pipe elbow. In this paper, the flow fields of some typical pipes were numerically calculated with commercial CFD software, and the velocity profiles in different sections of the pipe were investigated. Based on the simulated velocity profiles, the flow rate in a special section of the double-bend pipe was measured using different ultrasonic path configuration methods, such as Gaussian-path, Jacobian-path and diametral-path. According to the error of the measured flow rate, it is easy to obtain the optimal layout of the ultrasonic paths for a particular velocity profile. It is shown that numerical simulation of the flow field is useful for the optimization of the ultrasonic paths of the flowmeter.",2010,0, 1283,Time and frequency domain analyses based expert system for impulse fault diagnosis in transformers,"The presence of insulation failure in the transformer winding is detected using the voltage and current oscillograms recorded during the impulse test. Fault diagnosis in transformers has several parameters such as the severity of the fault, the kind of fault and the location of the fault. Detection of major faults involving a large section of the coils has never been a big issue, and several visual and computational methods have already been proposed by several researchers. The present paper describes an expert system based on a re-confirmative method for the diagnosis of minor insulation failures involving a small number of turns in transformers during impulse tests. The proposed expert system imitates the performance of experienced testing personnel. To identify and locate a fault, an inference engine is developed to perform deductive reasoning based on the rules in the knowledge base and different statistical techniques. The expert system includes both time-domain and frequency-domain analyses for fault diagnosis. The basic aim of the expert system is to provide a non-expert with the necessary information and interaction in order to perform fault diagnosis in a friendly windowed environment. The rules for fault diagnosis have been so designed that these are valid for the range of power transformers used in practice up to a voltage level of 33 kV. The fault diagnosis algorithm has been tested using experimental results obtained for a 3 MVA transformer and simulation results obtained for 5 and 7 MVA transformers",2002,0, 1284,A Resilient Backpropagation Neural Network based Phase Correction System for Automatic Digital AC Bridges,The present paper describes the development of an ANN based phase correction system which has been employed in conjunction with a real automatic digital ac bridge. The proposed ANN-based phase corrector has been developed using backpropagation learning employing resilient backpropagation (popularly known as RPROP).
Significant improvements have been obtained in the proposed phase correction system for measuring impedance and are reported in the paper,2004,0, 1285,An approach to ultrasonic fault machinery monitoring by using the wigner-ville and choi-williams distributions,"The present work shows that the Wigner-Ville and Choi-Williams distributions are useful for determining the proper operation of rotating axes driven by motors and speed controllers. The bearing diagnosis obtained by analyzing real sound samples using phased-array ultrasonic technology will be discussed to highlight the importance of the proposed tools for industrial machinery monitoring.",2008,0, 1286,Patient motion tracking in the presence of measurement errors,"The primary aim of computer-integrated surgical systems is to provide physicians with superior surgical tools for better patient outcome. Robotic technology is capable of both minimally invasive surgery and microsurgery, offering remarkable advantages for the surgeon and the patient. Current systems allow for sub-millimeter intraoperative spatial positioning, however certain limitations still remain. Measurement noise and unintended changes in the operating room environment can result in major errors. Positioning errors are a significant danger to patients in procedures involving robots and other automated devices. We have developed a new robotic system at the Johns Hopkins University to support cranial drilling in neurosurgery procedures. The robot provides advanced visualization and safety features. The generic algorithm described in this paper allows for automated compensation of patient motion through optical tracking and Kalman filtering. When applied to the neurosurgery setup, preliminary results show that it is possible to identify patient motion within 700 ms, and apply the appropriate compensation with an average of 1.24 mm positioning error after 2 s of setup time.",2009,0, 1287,Defect detection and classification using a SQUID based multiple frequency eddy current NDE system,"The probability of detection (POD) of hidden fatigue defects in riveted multilayer joints, e.g. aircraft fuselage, can be improved by using sophisticated eddy-current systems which provide more information than conventional NDE equipment. In order to collect this information, sensor arrays or multi-frequency excitation schemes can be used. We have performed simulations and measurements with an eddy current NDE system based on a SQUID magnetometer. To distinguish between signals caused by material defects and those caused by structures in the sample, such as bolts or rivets, a high signal-to-noise ratio is required. Our system provides a large analog dynamic range of more than 140 dB/Hz in an unshielded environment, a digital dynamic range of the ADC of more than 25 bits (>150 dB) and multiple frequency excitation. A large number of stacked aluminum samples resembling aircraft fuselage were measured, containing titanium rivets and hidden defects at different depths, in order to obtain sufficient statistical information for classification of the defect geometry. We report on flaw reconstruction using adapted feature extraction and neural network techniques",2001,0, 1288,Complete set of the special process equipment for the defect-free production of reticles,"The paper presents an integrated solution to the problem of developing a set of equipment for the defect-free production of reticles and photomasks. The integrated approach to the equipment design makes it possible to obtain certain advantages, disclosed below.
Accordingly, the paper highlights the following main issues: (-) Practical realization of these advantages in the special process equipment developed by the KBTEM-OMO enterprise of PLANAR. (-) Advantages in the development of a complete set of the special process equipment. Without taking into account technical and chemical processes, this complete set includes three component parts: (-) Multi-beam laser pattern generator; (-) Die-to-Database reticle inspection system; (-) Laser reticle repair system.",2007,0, 1289,Generic invariant-based static analysis tool for detection of runtime errors in Java programs,"The paper presents an invariant-based generic tool to statically analyze Java programs in order to detect potential errors (bugs). We briefly discuss the supporting theoretical framework and highlight the results of the tool. It can automatically detect potential bugs such as illegal dereferences and array bounds violations, and report them before the program is executed. For a Java class, invariants related to the category of error under examination are automatically generated and used to assess the validity of variable usage in the implementation of this class. The tool provides a practical and extensible generic mechanism for error detection to help industry practitioners who work with an object oriented language such as Java. The presented mechanism is capable of addressing error detection for a variety of error categories that cannot be caught by flow-based static analysis tools",2000,0, 1290,Nonlinear equivalent circuit model of a traction transformer for winding internal fault diagnostic purposes,The paper presents the development of an equivalent circuit model of a traction transformer intended for studies into the diagnostics of interturn faults. The model accounts for the nonlinear B-H characteristic of the transformer core. The derivation of the model is based on Lagrange's energy method. The values of model parameters were evaluated by means of 3D finite element model computations. The simulation studies based on the proposed equivalent circuit suggest that interturn faults can be detected by evaluating transformer losses.,2008,0, 1291,Re-configuration of task in flight critical system Error detection and control,"The paper presents the error detection and control for control metrics of the re-configuration algorithm in an embedded avionics application with extensive checks and validation. This is carried out in real time for decision-making. The success of the re-configurable algorithm is based on the integrity of the data from multiple sources. Hence, the integrity checks of these sources need to be controlled and maintained. Integrity checks, as part of the error detection and control mechanism, are implemented using the Hamming code with error detection and error handling capabilities. The paper presents the experimental simulation studies on both the Xilinx platform and VxWorks with a target. The control parameters used in the re-configuration algorithm are treated with phase conditions of flight, data sampling and averaging before they are applied to the decision-making process. The integrity and error control/detection are quite critical, particularly for the validation of the control parameters used for re-configuration in the algorithm, and hence the error detection and control scheme is designed and simulated using the Xilinx FPGA platform.
The paper presents the algorithm in brief, data sampling techniques based on multiple thresholds, identification of phases in flight, and error detection/control mechanisms for data integrity and validity. The experimental and simulation studies related to the above areas are detailed with results.",2009,0, 1292,Data Mining Using Rough Sets and Orthogonal Signal Correction-Orthogonal Partial Least Squares Analysis,"The paper puts forward data mining using rough sets and orthogonal signal correction-orthogonal partial least squares analysis (RS-OSC-OPLS/O2PLS): first, dimensionality reduction and de-noising with rough sets and orthogonal signal correction; second, data mining using orthogonal partial least squares analysis. The method was proved to be feasible and effective after being tested with data from crowds of 13 nationalities.",2010,0, 1293,Development of an ATP-based fault simulator for underground power cables,The paper reports the development of an ATP-based fault simulator for underground power cables. Emphasis is given to the design philosophy and a detailed description of the user interface which has been developed. Case studies are also included to illustrate the applications of the software,2000,0, 1294,Application of EJB Component in Grid Faults Diagnosis System,"The paper, combining the basic principle and architecture of grid technology as well as their combined application in a grid fault diagnosis expert system, puts forward an application scheme for EJB components in a grid fault diagnosis system. The paper designs the grid service layer based on OGSA, and discusses the application methods of the grid for a fault diagnosis expert system. The expert-system-based fault diagnosis grid framework and the key technology of DGESA are also researched in this paper.",2009,0, 1295,Research on the quantification of defects in injection molded parts based on digital image processing,"The parameters of the plastic injection molding process that affect warpage, whose values can be obtained from a CAE system, can be optimized using intelligent algorithms such as the artificial neural network (ANN) and the support vector machine (SVM). However, the quantification of defects such as weld lines traditionally still relies on subjective judgment. This paper proposes a method to quantify the weld line defect based on digital image processing; the molding parameters are then optimized by a support vector machine to minimize the weld lines.",2009,0, 1296,On the Distribution of Software Faults,"The Pareto principle is often used to describe how faults in large software systems are distributed over modules. A recent paper by Andersson and Runeson again confirmed the Pareto principle of fault distribution. In this paper, we show that the distribution of software faults can be more precisely described as the Weibull distribution.",2008,0, 1297,Improving instantaneous atmospheric corrections for reflected GPS signals L1/L2 observation techniques with an integrated GPS receiver,"The paper is intended to describe the relationships between reflected GPS signals and environmental factors. No discussion of control would be complete without illuminating the software dimension. An altitude iteration loop in the least squares search for the receiver position was modified and enhanced by supplying 4 or more pseudo-ranges and ephemerides and by zooming on the plot to detect the search pattern. [1] The software is developed using MATLAB tools, which can display results of the reflected surface area altitude in real time.
For the reflected signal application, it has been shown that a ground-based GPS receiver system cannot smoothly detect or receive reflected signals when the GPS signals are weak and discontinuously reflected. [2] This problem negatively impacts and reduces the application of reflected signals. The design of an integrated receiver system permits smooth reception of the signals reflected by the water or ground. A rapid-resolution altimetry method was developed by using the surface elevation spectrum with the elevation angle and line-of-sight theory in the reflecting geometry, together with atmospheric corrections with an ambiguity solution.",2007,0, 1298,A Practical Coder of Fully Scalable Video over Error-Prone Network,"The paper is mainly devoted to exploring practical scalable video coding technologies over error-prone wireless and mobile networks with fluctuating bandwidth and various terminals. The new coder supports full spatial, SNR and temporal scalability in a flexible and simple manner. Instead of the commonly used but complex MCTF-based scheme, the new coder smartly integrates several optimized and improved technologies, i.e. the so-called Hierarchical-B-Picture-Like structure, DWT, successive multi-level quantization, and SPIHT. The proposed coder can produce an embedded bitstream satisfying the various bandwidth, resolution and SNR requirements of terminals over an error-prone network. The experiments show that the proposed coder is practical and flexible and works well over error-prone networks.",2009,0, 1299,A service based approach to decentralized diagnosis and fault tolerant control,"The paper presents a hierarchical architecture for fault tolerant control of mechatronic systems. In the architecture, both the diagnosis and the reconfiguration are completely decentralized according to the structure of the control system. This is achieved by using a purely service oriented view of the system including both hardware and software. The service view with no cyclic dependencies is further used to obtain Bayesian networks for modeling the system.",2010,0, 1300,Integrated FPGA based ASIC design on error code correction counter for UPS telecommunication,"The paper presents a hybrid error code correction counter suitable for UPS telecommunication with three different signal specifications. Based on the FPGA implementation, the required functional blocks can be partitioned and designed as follows: pulse combination, serial to parallel data transfer, one frame latching, combination logic to serial pulse generation, MPU based one second pulse generation, and asynchronous counter with asynchronous clear. Through systematic integration as described in this paper, the error code correction counter can be successfully designed. It is believed that the associated implementation technique will be applicable to the research and development of tester technology for UPS telecommunication.",2001,0, 1301,Spatial error concealment based on directional decision and intra prediction,"The paper presents a novel spatial error concealment algorithm based on directional decision and intra prediction. Unlike previous approaches that simultaneously recover the pixels inside a missing macroblock (MB), we propose to recover them 4×4 block by 4×4 block. Each missing 16×16 MB in an intra frame is first divided into 16 blocks, each of size 4×4, then recovered block by block using Intra_4×4 prediction.
Previously-recovered blocks can be used in the recovery process afterwards. The principal advantage of this approach is the improved capability of recovering MBs with edges and the lower computational complexity. The proposed algorithm has been tested on the H.264/AVC reference software JM7.2. Experimental results demonstrate the advantage of the proposed method.",2005,0, 1302,Testing the H.264 error-resilience on wireless ad-hoc networks,"The purpose of this paper is to provide a framework to tune and evaluate the performance of the H.264 codec in 802.11b wireless ad-hoc networks. The codec's error-resilience features are measured under stress conditions typical to these networks, and the most critical parameters are presented. We present solutions concerning the random packet loss problem and show how to quickly recover from packet loss bursts.",2003,0, 1303,Parameter Estimations in Linear Regression Models with AR(2) Errors in Which the Parameters Have a Special Relationship,"The purpose of this paper is to study parameter estimation in linear regression models with AR(2) errors εt = φ1εt-1 + φ2εt-2 + at, t = 1, 2, ..., n, in which the parameters have the special relationship φ2 = φ1^2. Using the properties of the variance-covariance matrix Σ, this kind of model is transformed into a standard linear regression model without autocorrelated errors, and the method of cycle generalized least squares (CGLS) is applied to estimate the parameters. Simulation results show that the efficiency of the CGLS method is superior to that of the generalized least squares (GLS) method under the mean square error criterion.",2009,0, 1304,Optimization of Rb-82 PET acquisition and reconstruction protocols for myocardial perfusion defect detection,"The purpose of this study is to optimize the dynamic Rb-82 myocardial perfusion (MP) PET acquisition/reconstruction protocols for maximum perfusion defect detection using realistic simulation data and task-based evaluation. Time activity curves (TACs) of different organs at both rest and stress conditions were extracted from dynamic Rb-82 PET images of 5 normal patients. Combined SimSET-GATE Monte Carlo simulation was used to generate nearly noise-free (NNF) MP PET data from a time series of 3D NCAT phantoms with organ activities modeling different pre-scan delay times (PDTs) and total acquisition times (TATs). Poisson noise was added to the NNF projections and the OS-EM algorithm was applied to generate noisy reconstructed images. The channelized Hotelling observer (CHO) with 32×32 spatial templates corresponding to 4 octave-wide frequency channels was used to evaluate the images. The area under the ROC curve (AUC) was calculated from the CHO rating data as an index for image quality in terms of MP defect detection. The 0.5 cycle/cm Butterworth post-filtering on OS-EM (with 21 subsets) reconstructed images generates the highest AUC values, while those from iteration numbers 1 to 4 do not show different AUC values. The optimized PDTs for both rest and stress conditions are found to be close to the cross points of the left ventricular chamber and myocardium TACs, which may promote individualized PDT for patient data processing and image reconstruction.
Shortening the TATs by 3 minutes from the clinically employed acquisition time does not affect the MP defect detection significantly for both rest and stress studies.",2008,0, 1305,Wavelet packet analysis applied in detection of low-voltage DC arc fault,"The randomness and instantaneity of low-voltage DC arc faults make them difficult to detect with time-domain or frequency-domain methods. This paper proposes a method based on wavelet packet analysis, which has localization characteristics, to detect low-voltage DC arc faults. The effectiveness of this method has been proved by simulation analysis with the MATLAB software and by arc simulation experiments.",2009,0, 1306,Fusion of soft and hard computing for fault diagnosis in manufacturing systems,"The rapid diagnosis of faults in computerized manufacturing systems is crucial to reduce expensive downtime. Many hard computing approaches use either symptom-based or functional reasoning. Symptom-based approaches are unable to handle exceptions, while functional approaches are computationally expensive and thus unable to produce a real-time response. Current hybrid approaches which combine the two hard computing methods are too structured in their approach to switching between reasoning methods and thus fail to provide rapid response comparable to humans. This paper presents a robust, extensible approach to fault diagnosis that combines these hard computing methods with the soft computing of agents using fuzzy logic. This fusion of hard and soft computing methods allows unstructured switching between reasoning methods by utilizing multiple intelligent agents which examine the problem domain from a variety of perspectives.",2003,0, 1307,Fault Diagnosis Expert System Based on Integration of Fault-Tree and Neural Network,"The traditional fault diagnosis expert system is dependent on knowledge acquisition from experts. Knowledge acquisition is recognized as the ""bottleneck"" problem of expert systems. In addition, there are also some limitations in adaptive capacity, learning ability and real-time performance. An artificial neural network, with good fault tolerance and associative memory function as well as very strong self-adaptive and self-learning ability, can make up for the limitations of the traditional expert system. This paper constructs a new expert system by integrating the artificial neural network with the fault tree. Besides the fault tree and the neural network, this article mainly introduces the system model for fault diagnosis of the fire control computer and sensor subsystem, and the method and process of fault diagnosis. In this expert system, we use object-oriented production rules to represent the knowledge, which effectively solves the bottleneck problem of diagnostic knowledge acquisition. The inferential process begins with the abnormal event and finally finds all of the possible faults and the faulty component. For some possible faulty components, which have a large number of fault samples, the neural network model can be used for diagnosis. The training network of fault samples employs the BP neural network.
Finally, simulation training results show that the fault diagnosis expert system based on the combination of fault tree and neural network is rational and effective in fault diagnosis of the fire control system, perfectly realizes the combination of new and old knowledge, and can grasp the state of systems dynamically.",2009,0, 1308,Vibration fault diagnosis of hydro-turbine generating unit based on rough 1-v-1 multiclass support vector machine,"The traditional vibration fault diagnosis classifier of the hydro-turbine generating unit (HGU) can't reflect the uncertain information in fault pattern recognition. To overcome the above problem, a novel classifier based on rough set (RS) and 1-v-1 multiclass support vector machine (SVM) is introduced. In this method, the basic ideas of RS (upper approximation, lower approximation and boundary region) are used to describe the positive region, negative region and margin of the SVM. By using the 1-v-1 method, the multiclass classification of SVM is realized. Then the descriptions of the upper approximation, lower approximation and boundary region of the multiclass case are determined. At last, the rules of the classifier are acquired. The results show that the proposed classifier has high classification reliability, more concise rules, and a lower memory space requirement in the operation stage, and can reflect the uncertain information of fault diagnosis.",2010,0, 1309,Seeded fault testing in support of mechanical systems prognostic development,"The US Navy is evolving a strategy to develop and demonstrate prognostics and health management for propulsion and mechanical systems. How this overall strategy has evolved and its current status are presented. The SH-60 program was initiated as the first proof-of-concept effort to develop, demonstrate, and integrate available and advanced mechanical diagnostic technologies for propulsion and power drive system monitoring. Included in these technologies were various rule based and model based analysis techniques which were applied to demonstrate and validate various levels of diagnostic and prognostic capabilities. These are discussed and updated. Using past and recent ""seeded fault"" tests as case examples, various diagnostic methods were used to identify the faults, and various means of applying prognostics and health management are discussed. The most recent examples of ""seeded faults"" and related tests are also discussed as case studies, demonstrating various degrees of diagnostic, prognostic and health management capabilities. The current philosophy and thinking with respect to prognostics for mechanical systems are embellished, using examples from past and more recent ""seeded fault"" databases. Of particular interest are discussions of tests focused on demonstrating the understanding of fault-to-failure progression characteristics and the physics of failures. Accomplishments are discussed and additional needed testing and demonstration requirements are defined. Finally, status and future planned efforts for the USN Helicopter Transmission Test Facility (HTTF) and other related test resources are presented.",2002,0, 1310,Estimation of the output error statistics of space-time equalization in an antenna array EGPRS receiver with soft-decision decoding,"The use of antenna arrays can help combat cochannel interference (CCI) in wireless cellular systems.
In this paper, we consider an enhanced general packet radio service diversity receiver based on least squares spatio-temporal equalization and soft-decision decoding in the presence of decision feedback and/or asynchronous CCI. We compare known and novel estimators of the error mean and variance at the output of the deterministic space-time equalizer. The collected simulation data indicate that the estimation of the error mean and variance is critical to the performance of soft-in/hard-out Viterbi decoding in the presence of nonstationary input disturbance. Moreover, the use of short-term error statistics provides receiver performance gains of up to 15-20 dB in terms of signal-to-interference ratio, with respect to the use of burst statistics based on the training sequence midamble and tentative decisions on the payload symbols.",2005,0, 1311,A fault-tolerant modular control approach to multi-robot perimeter patrol,"The use of large scale multi-robot systems is motivated by a number of desirable features, such as scalability, fault tolerance, robustness and lower cost with respect to more complex and specialized agents. This work is focused on a behavior-based approach to the problem of multi-robot border patrolling, in the framework of the Null-Space-based Behavioral control (NSB); it is based on two previous works of the same authors, where the feasibility of the approach is demonstrated. Namely, a few aspects of the approach, not yet tackled in previous works, are investigated: its robustness to faults of individual agents, its capability of managing large numbers of robots, and the possibility of adding new tasks in the framework of the multi-robot patrolling problem. Along these directions, our approach has been validated in simulation with a large number of robots and sudden faults, as well as experimentally on a team composed of three Pioneer 2-DX robots.",2009,0, 1312,Application of weighted least squares to OSL vector error correction,"The use of least squares has been beneficial in providing more accurate one-port calibrations. Commercially available ECAL units have taken advantage of this method. A more generalized application of a least squares method, the weighted least squares method, provides benefit when the models for the calibration standards are not all trusted equally. The use of weighted least squares provides a method of discounting the effect of calibration standards as the model accuracy degrades, instead of abruptly dropping the use of the standard outside of a specified frequency range, thus avoiding discontinuities in subsequent measurements. A detailed view of a 1.85 mm calibration kit demonstrates the improved results possible using a weighted least squares method. The paper includes the description of a proximity function that enhances the results when the responses of two or more standards begin to cluster. Additional enhancements due to data-based models over traditional polynomial models are also presented.",2003,0, 1313,Minimising Loss-Induced Errors in Real Time Wireless Sensing by Avoiding Data Dependency,"The use of local processing to reduce data transmission rates, and thereby power and bandwidth requirements, is common in wireless sensor networks. Achieving the minimum possible data rate, however, is not always the optimal choice when the effects of packet loss on overall measurement error are considered.
This paper presents a case study from the area of wireless inertial motion capture, in which the best distributed processing strategy is shown to be that which minimises inter-packet data dependency, rather than overall data rate. Drawing on this result and further analysis, we identify questions that should be raised during the design process in order to understand the effects of packet loss on distributed signal processing tasks, decide whether the errors are acceptable for an application, and choose appropriate techniques to mitigate them if necessary.",2009,0, 1314,"Gate-Sizing-Based Single Test for Bridge Defects in Multivoltage Designs","The use of multiple voltage settings for dynamic power management is an effective design technique. Recent research has shown that testing for resistive bridging faults in such designs requires more than one voltage setting for 100% fault coverage; however, switching between several supply voltage settings has a detrimental impact on the overall cost of test. This paper proposes an effective gate sizing technique for reducing the test cost of multi-Vdd designs with bridge defects. Using synthesized ISCAS and ITC benchmarks and a parametric fault model, experimental results show that for all the circuits, the proposed technique achieves single Vdd test, without affecting the fault coverage of the original test. In addition, the proposed technique performs better in terms of timing, area, and power than the recently proposed test point insertion technique. This is the first reported work that achieves single Vdd test for resistive bridge defects, without compromising fault coverage in multi-Vdd designs.",2010,0, 1315,Application of Aircraft Fuel Fault Diagnostic Expert System Based on Fuzzy Neural Network,"Theories of expert systems and fuzzy artificial neural networks (ANN) are applied to solve the problem of fault diagnosis in the aircraft fuel system. A multilayer neural network model of the aircraft fuel system is put forward, and the integrated aircraft fuel fault diagnostic expert system, which solves the problems of knowledge representation and knowledge acquisition of the traditional expert system, is realized. The hardware-in-loop simulation results show that the expert system diagnoses faults in accessories rapidly and accurately, and it is proved that the expert system is significant and helpful for further development in aircraft fuel fault diagnosis.",2009,0, 1316,A novel self-routing reconfigurable fault-tolerant cell array,"There are a number of examples of reconfigurable, fault-tolerant hardware consisting of cells that have the same hardware structure. The arithmetic or logical functions of the cells can be configured to implement specific functions. The interconnection of these configured cells then allows the system to implement any complex task. If some cells become faulty, the function and routing of other cells have to be reconfigured to restore the normal system function. This paper presents a self-routing, reconfigurable and fault-tolerant cell array. If one or more cells are faulty, spare cells can replace the faulty cells automatically and the rerouting can also be achieved automatically. The cell array achieves fault tolerance without the aid of external software or hardware.",2007,0, 1317,A Fault and Mobility Tolerant Location Server for Large-scale Ad-hoc Networks,"There are many essential applications for quorum systems in ad-hoc networks, such as that of location servers in large-scale networks.
Existing research proposes many approaches to the problems, many of which are incomplete, cumbersome, or incur significant cost. We describe and analyse a self-organising quorum system that creates an emergent intelligence to minimise overhead and maximise survivability. We compare our quorum system with ones proposed in the literature in terms of delivery success and find that it performs favourably.",2007,0, 1318,Error Analysis and Compensation of roll Measuring Device for Roll Grinder NC,"There are many parameters that cause roll measurement error in the NC roll grinder. The factors affecting probe error in the roll two-point measurement approach are analyzed, and mathematical models and compensation methods for the probe static error and dynamic error are proposed in this paper. Based on the original control system and hardware of the roll grinder, we designed a new type of measuring device and an error compensation software system. The experiment shows that the probe compensation operation can greatly improve the measurement accuracy.",2007,0, 1319,A low-tech solution to avoid the severe impact of transient errors on the IP interconnect,"There are many sources of failure within a system-on-chip (SoC), so it is important to look beyond the processor core at other components that affect the reliable operation of the SoC, such as the fabric included in every one that connects the IP together. We use ARM's AMBA 3 AXI bus matrix to demonstrate that the impact of errors on the IP interconnect can be severe: possibly causing deadlock or memory corruption. We consider the detection of 1-bit transient faults without changing the IP that connects to the bus matrix or the AMBA 3 standard, and without adding extra latency, while keeping the performance and area overhead low. We explore what can be done under these constraints and propose a combination of techniques for a low-tech solution to detect these rare events.",2009,0, 1320,Prediction by Samples From the Past With Error Estimates Covering Discontinuous Signals,"There are several reasons why the classical sampling theorem is rather impractical for real life signal processing. First, the sinc-kernel is not very suitable for fast and efficient computation; it decays much too slowly. Second, in practice only a finite number N of sampled values are available, so that the representation of a signal f by the finite sum would entail a truncation error which decreases rather slowly for N → ∞, due to the first drawback. Third, band-limitation is a definite restriction, due to the nonconformity of band and time-limited signals. Further, the samples needed extend from the entire past to the full future, relative to some time t = t0. This paper presents an approach to overcome these difficulties. The sinc-function is replaced by certain simple linear combinations of shifted B-splines, and only a finite number of samples from the past need be available. This deterministic approach can be used to process arbitrary, not necessarily bandlimited nor differentiable signals, and even not necessarily continuous signals. Best possible error estimates in terms of an L^p-average modulus of smoothness are presented. Several typical examples exhibiting the various problems involved are worked out in detail.",2010,0, 1321,Towards Software Quality Economics for Defect-Detection Techniques,"There are various ways to evaluate defect-detection techniques. However, for a comprehensive evaluation the only possibility is to reduce all influencing factors to costs.
There are already some models and metrics for the cost of quality that can be used in that context. The existing metrics for the effectiveness and efficiency of defect-detection techniques and experiences with them are combined with cost metrics to allow a more fine-grained estimation of costs and a comprehensive evaluation of defect-detection techniques. The current model is most suitable for directly comparing concrete applications of different techniques",2005,0, 1322,A Leader Management Scheme for Fault-Tolerant Multiple Subnets,"There have been a number of Ethernet-based fault-tolerant schemes which provide fast fault detection and recovery in a subnet environment. However, they are not scalable, since Ethernet frames cannot be transmitted outside a subnet. Our SAFE (scalable autonomous fault-tolerant Ethernet) scheme divides the whole network into several subnets and manages leader nodes in each subnet. Leader nodes communicate with each other for inter-subnet fault recovery. In this paper, we study a fault-tolerant leader management scheme for a multiple-subnet network. When one of the leader nodes fails, another node quickly takes over the task of the previous leader. The proposed scheme performs in an autonomous way, and the network can operate continuously without interruption.",2009,0, 1323,A SCADA system for on-line battery early faults precaution,"There have been many papers written about methods used to determine the health of batteries in critical standby applications. This paper intends to produce a critical on-line analysis of the trace resistance method and the trace voltage method through a SCADA system application. This management system can be proposed to find early faults in the batteries. Results indicated that the SCADA system with on-line battery monitoring is highly appealing to the manufacturing industry for UPS power management quality.",2009,0, 1324,Towards measurement of testability of concurrent object-oriented programs using fault insertion: a preliminary investigation,"There is a lack of methods and techniques for measuring the testability of concurrent object-oriented programs. Current theory and practice for testing sequential programs do not usually apply to concurrent systems. An approach towards measuring the testability of concurrent Java programs is proposed in this paper. The key idea is to take a program and insert faults that are related to the concurrency aspects. The approach is based on two methods: (1) mutation of the program based on keywords, and (2) creation of conflict graphs based on static analysis of the code.",2002,0, 1325,A fault tolerance procedure for P2P online games,"There is a need to analyze the stability problem of Peer-to-Peer (P2P) online games in depth, considering the frequency and scope of player departure and the variation of players' usual behavior, e.g. actions, at runtime. An overlay multicast network, also known as an Application Layer Multicast (ALM) tree, used for intra-zone communication in P2P MMOGs, with a low impact on node departure is preferred; this is an open research area to explore. This paper extends the cluster concept that brings stability to overlays and devises a fault tolerance solution to handle the problem of routing game states.
The presented approach can not only provide fault tolerance to the overlay network but also preserve the required end-to-end distance in terms of hops, which is an important factor for interactive applications like online games.",2010,0, 1326,An improved atmospheric correction algorithm for hyperspectral remotely sensed imagery,"There is an increased trend toward quantitative estimation of land surface variables from hyperspectral remote sensing. One challenging issue is retrieving surface reflectance spectra from observed radiance through atmospheric correction, most methods for which are intended to correct water vapor and other absorbing gases. In this letter, methods for correcting both aerosols and water vapor are explored. We first apply the cluster matching technique developed earlier for Landsat-7 ETM+ imagery to Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) data, then improve its aerosol estimation and incorporate a new method for estimating column water vapor content using the neural network technique. The improved algorithm is then used to correct Hyperion imagery. Case studies using AVIRIS and Hyperion images demonstrate that both the original and improved methods are very effective in removing heterogeneous atmospheric effects and recovering surface reflectance spectra.",2004,0, 1327,The research of printing's image defect inspection based on machine vision,"Printing image defects may occur during the printing process. It is absolutely necessary for printing companies to inspect printings for defects. Traditional printing defect inspection is usually completed manually, with unstable inspection quality and low inspection efficiency. This paper proposes a method of printing image defect inspection based on machine vision and sets up an experimental system. Common printing defects such as filar defects, zonal defects, punctate defects and block defects are inspected in the experiment, which proves the practicability of the method and its value for extended application.",2009,0, 1328,Research and Development of the Thermal Error Compensation Embedded in CNC System,"Thermally-induced errors are the major contributor to the inaccuracy of high-precision machine tools. Limited by the CNC system's closed-loop structure, present thermal error compensations are difficult to embed in the CNC system and can only act as an additional unit outside the CNC system. This paper designs and develops an embedded error compensation module based on an open CNC system with a field bus. The presented embedded error compensation module can exchange data with the CNC system main control module in real time, share the system resources and provide on-line temperature measurement and thermal model calculation. The position-dependent thermal error can then be compensated by correcting the coordinate commands during the CNC system interpolation process. An experiment was carried out to verify the performance of the compensation system.",2010,0, 1329,Directional aspects of balance corrections in man,This article attempts to highlight the new insights into balance control that have been gained by using multidirectional perturbations and to demonstrate how this new focus enables a better understanding of how the central nervous system (CNS) malfunctions in patients with balance disorders. Multidirectional perturbations have proven to be a valuable tool to better understand dynamic postural control in normal and balance-deficient populations.
The advantages gained by using multidirectional perturbations to exert joint displacement profiles at different levels and in different directions have allowed greater insight into how passive joint characteristics and active muscle synergies are triggered and shaped by peripheral and central sensory systems to elicit directionally specific postural responses to avoid a fall.,2003,0, 1330,Transient fault-tolerance through algorithms,"This article describes how single-version enhanced processing logic or algorithms can be very effective in achieving dependable computing through hardware transient fault tolerance (FT) in an application system. Transients often cause soft errors in a processing system, resulting in mission failure. Errors in program flow, instruction codes, and application data are often caused by electrical fast transients. However, firmware and software fixes can have an important role in designing an ESD- or EMP-resistant system and are more cost effective than hardware. This technique is useful for detecting and recovering from transient hardware faults or random bit errors in memory while an application is in execution. The proposed single-version software fix is a practical, useful, and economic tool for both offline and online memory scrubbing of an application system without using conventional N versions of software (NVS) and hardware redundancy in an application like a frequency measurement system",2006,0, 1331,Real-time fault diagnosis [robot fault diagnosis],"This article presents a number of complementary algorithms for detecting faults on-board operating robots, where a fault is defined as a deviation from expected behavior. The algorithms focus on faults that cannot directly be detected from current sensor values but require inference from a sequence of time-varying sensor values. Each algorithm provides an independent improvement over the basic approach. These improvements are not mutually exclusive, and the algorithms may be combined to suit the application domain. All the approaches presented require dynamic models representing the behavior of each of the fault and operational states. These models can be built from analytical models of the robot dynamics, data from simulation, or from the real robot. All the approaches presented detect faults from a finite number of known fault conditions, although there may potentially be a very large number of these faults.",2004,0, 1332,Fault Tolerance for Manufacturing Components,"This article proposes a multiagent system for industrial production elements that transfers the concept of fault tolerance to the manufacturing levels of the organisation, acting automatically under open protocols when there is degradation or failure of any of the components, ensuring that normal operation is resumed within a delimited time. The main characteristics of this system are the drastic reduction in recovery times, the support for the significant heterogeneity existing in these scenarios and their high level of automation, while practically dispensing with the intervention of system administrators.",2006,0, 1333,Novel bandstop filter using dual-U shape defected microstrip structure,"This article introduces a new defected microstrip structure (DMS) which is used to design filters. Its resonant characteristics can be adjusted by changing the dimensions of the structure unit. Finally, a novel bandstop filter based on this structure is designed, fabricated and tested.
Both simulated and measured results show that the proposed filter has a good performance.",2010,0, 1334,Neural networks based on fuzzy clustering and its applications in electrical equipment's fault diagnosis,"This article puts forward a sample processing method using fuzzy clustering, studies the application of the fuzzy competition classification method in extracting contradictory samples, then advances a neural network diagnosis method based on fuzzy clustering, and finally carries out simulation research. The calculation results show that all the above-mentioned methods are quite practical.",2005,0, 1335,DTS - A Software Defects Testing System,"This demo presents DTS (software defects testing system), a tool to catch defects in source code using static testing techniques. In DTS, various defect patterns are defined using defect pattern state machines and tested by a unified testing framework. Since DTS externalizes all the defect patterns it checks, defect patterns can be added, subtracted, or altered without having to modify the tool itself. Moreover, typical interval computation is expanded and applied in DTS to reduce false positives and compute the states of the defect state machines. In order to validate its usefulness, we perform some experiments on a suite of open source software whose results are briefly presented in the last part of the demo.",2008,0, 1336,Novel Dual-Band Filter Incorporating Defected SIR and Microstrip SIR,"This letter presents a novel approach to design a dual-band bandpass filter by using a defected stepped impedance resonator (DSIR) and a microstrip stepped impedance resonator (MSIR). A pair of MSIRs on the upper plane forms a cross coupled filtering passage, and a pair of DSIRs at the lower plane constructs a linear phase filtering passage. Both of them are fed by a common T-shaped microstrip feed line with source-load coupling. Then they are directly combined to construct a compact dual-band filter with two passbands centering at 2.35 GHz and 3.15 GHz, respectively. The measurement results agree well with the full-wave electromagnetic designed responses.",2008,0, 1337,An Improved Decoding Algorithm for DVC Over Multipath Error Prone Wireless Channels,This letter presents an improved decoding algorithm for distributed video coding (DVC) for enhanced performance over error prone wireless channels with multipath fading. The effects of the channel errors on both Wyner-Ziv and key frame bit streams are considered and novel noise models are proposed together with the associated modifications to the decoding algorithm. Simulations are performed using a W-CDMA wireless channel and the results are analyzed to determine the effect of each individual modification. State-of-the-art H.264/AVC is also considered under similar conditions for the comparison. The simulation results show that the proposed modifications provide a significant improvement in the DVC codec performance under unfavorable channel conditions.,2009,0, 1338,Evaluating the word error rate of channel codes via the squared radius distribution of decision regions,"This letter proposes an efficient approach to evaluate the word error rate (WER) of channel codes with the distribution of the squared radius of decision regions. The squared radius can be measured via simulations and then the WER for any signal to noise ratio (SNR) can be evaluated. In addition, by approximating the distribution of the squared radius as a Gamma distribution, only the mean and variance of the squared radius need to be measured.
For long codes, the WER can be further simplified to a single term of the incomplete gamma function or the Gaussian Q function. Moreover, for a code family which has a threshold SNR, the squared radius can be used to estimate the threshold SNR.",2008,0, 1339,Impact of Transmission Network Reinforcement on Improvement of Power System Voltage Stability and Solving the Dynamic Delayed Voltage Recovery and Motor Stalling Problem After System Faults in the Saudi Electricity in the Western Region,"The Saudi Electricity Company power system in the Western Region of the Kingdom (SEC-WR) is unique in its load pattern, growth trends and type, generation resources and network configuration. The power system of the SEC-WR has faced and is facing high load growth. The high load increase gives rise to a very high loading of the transmission system elements, mainly power transformers and cables. The Western Region load is mainly composed of air conditioning (AC) load during the high load season. In case of faults, this nature of load induces delayed voltage recovery following fault clearing on the transmission system. The sustained low voltage following transmission line faults could cause customer interruptions and possibly equipment damage. The integrity of the transmission system may also be affected. The transient stability of the system may be affected. This may also influence the stability of the generating units in the system. The existing dynamic model of the SEC-WR system is described. The response of the model to actual faults is compared with actual records obtained from the dynamic system monitor (DSM) installed in several locations in the SEC-WR system. The solution of the delayed voltage recovery problem after system faults may be achieved by reinforcement of the system, adding static VAr compensators (SVC) to provide dynamic reactive power support to the system, reducing the fault clearing time and by under-voltage load shedding. This paper analyzes and discusses the first alternative, the system reinforcement",2006,0, 1340,Providing Security in Intelligent Agent-Based Grids by Means of Error Correction,"The security of the existing ambient intelligence model presents a particular challenge because this architecture combines different networks, being prone to vulnerabilities and attacks. Specific techniques need to be studied in order to provide a fault-tolerant system. One way to address this goal is to apply different techniques in order to correct the errors introduced into the systems by a spectrum of sources. This paper is a natural continuation of the previous work, offering a method of detecting and correcting errors.",2009,0, 1341,Selective ground-fault protection using an adaptive algorithm model in neutral ungrounded power systems,"Selective ground-fault protection is greatly valued for the safe and reliable operation of power systems. In order to eliminate the effect of the zero-sequence current transformer's phase characteristics on selective ground-fault protection devices, this paper proposes a new adaptive principle of selective ground-fault protection, and gives an algorithm model of the action criterion based on the half-wave Fourier algorithm. The simulation results show that this criterion will possess very good selectivity to ground faults",2000,0, 1342,The uniform formula of single phase earth - fault distance relay with compensation,"The single-phase earth fault is the most common fault type on high voltage transmission lines.
However, the sensitivity of single-phase distance relays cannot satisfy the requirements of the power system. In order to improve the sensitivity of distance relays protecting against single-phase faults with large earth resistance, a uniform formula is provided in this paper to express most of the single-phase distance relays. More than ten kinds of widely used relays can be expressed by the uniform formula by giving different coefficients. This simplifies the expressions of conventional relays. The uniform formula is helpful for analyzing the performance of these relays. It also provides an approach to create new relays with better performance or to study the related principles in depth. Two new relays based on the uniform formula are proposed in this paper. Simulations show that the new relays have better performance than the old ones. Finally, the fundamentals of the choice of polarization voltage are discussed.",2003,0, 1343,Error Analysis of Scheduling Sleeping Nodes in Wireless Sensor Networks,"The sleeping technique is one of the most popular ways to save the energy of battery-powered sensor nodes. Much existing research on the sleeping technique is based on pre-knowledge of sensor node deployment, e.g., a known probability distribution of sensor nodes in the target sensing field. Thus, whether or not a scheduling sleeping scheme has good performance mostly depends upon the pre-knowledge of sensor node deployment. In this paper, we show the discrepancy of scheme performances, including energy consumption and network lifetime, based on inaccurate pre-knowledge of sensor deployment. Through the analytical studies, we conclude that the discrepancy is very large and cannot be neglected. We hence propose a distribution-free approach to study energy consumption. In our approach, no assumption of a probability distribution of sensor node deployment is needed. The proposed approach has yielded good estimation of network energy consumption.",2009,0, 1344,Research of Software Defect Prediction Model Based on Gray Theory,"The software testing process is an important stage in the software life cycle to improve software quality. A large amount of software data and information can be obtained from it. Based on analyzing the sources and types of software defects, this paper describes several uses of the defect data and explains how to estimate the defect density of the software by using the defect data collected in practical work, and how to use the GM model to predict and assess software reliability.",2009,0, 1345,Performance analysis of HTS fault current limiter combined with a ZnO varistor against transient overvoltages,"Superconducting technology is nowadays an innovation in the field of electrical power supply. Using HTS in a fault current limiter (FCL) represents a new category of electrical equipment and a novel configuration of the electrical network. In fact, high temperature superconductors (HTS) make a relatively sharp transition to a highly resistive state when the critical current density is exceeded, and this effect has suggested their use for resistive fault current limiters. The FCL is an important element in order to reduce system impedance, which permits an increase of power transmission. Furthermore, it allows additional meshing of a power system, which increases the power availability. The most significant features of SCFCLs required by the power system operating conditions are the limiting impedance, the trigger current level and the recovery time.
In this paper, a model of an HTS FCL using a ZnO varistor is proposed. The effectiveness of this model is investigated through simulation results in MATLAB/Simulink software. In addition, the limiting feature of the HTS FCL and the protective role of the ZnO varistor against transient overvoltages are illustrated",2006,0, 1346,Monitoring of photovoltaic systems for performance evaluation and fault identification,"The sustainability of standalone photovoltaic systems depends on the follow-up of the systems installed in the field. To support this follow-up, procedures were developed for monitoring a similar system in a laboratory. The standalone photovoltaic system installed in the Research Center in Intelligent Energy of the Group of Studies in Energy-CPEI GREEN PUC Minas is similar to the systems installed by Companhia Energetica de Minas Gerais-CEMIG in the schools of isolated communities, within the Solar Light program. A simulation of the system was implemented and the aims were to optimize the project and carry out a comparative study with the monitoring results. The procedure for assembly of the monitoring facility consisted of the installation of the voltage and current sensors, installation of the irradiance and temperature sensors, installation of the acquisition boards and development of the monitoring program. The results presented here will allow the development of a program of preventive maintenance of the photovoltaic systems installed by CEMIG.",2004,0, 1347,Error handling for the CDF Silicon Vertex Tracker,"The SVT online tracker for the CDF upgrade reconstructs two-dimensional tracks using information from the Silicon Vertex detector (SVXII) and the Central Outer Tracker (COT). The SVT has an event rate of 100 kHz and a latency time of 10 μs. The system is composed of 104 VME 9U digital boards (of 8 different types) and it is implemented as a data driven architecture. Each board runs on its own 30 MHz clock. Since the data output from the SVT (few Mbytes/sec) are a small fraction of the input data (200 Mbytes/sec), it is extremely difficult to track possible internal errors by using only the output stream. For this reason several diagnostic tools have been implemented: local error registers, error bits propagated through the data streams and the Spy Buffer system. Data flowing through each input and output stream of every board are continuously copied to memory banks named Spy Buffers which act as built-in logic state analyzers hooked continuously to internal data streams. The contents of all buffers can be frozen at any time (e.g. on error detection) to take a snapshot of all data flowing through each SVT board. The Spy Buffers are coordinated at system level by the Spy Control Board. The architecture, design and implementation of this system are described",2000,0, 1348,Fault identification of double circuit lines,"The technique for identifying fault types on single circuit lines, suggested by Ferrero et al. (1993), is modified to be suitable for protecting double circuit lines. The modified technique is based on estimating the Fourier coefficients of double circuit line currents using the recursive least square identifier. The sequence components are obtained by a linear transformation of the fundamental frequency phase currents of each circuit. The fuzzy logic inference system processes these components to identify the fault type.
The computer simulation test results indicate the possibility of using the suggested technique as an effective tool for high speed digital relaying",2001,0, 1349,Behavioral Fault Model for Neural Networks,"The term neural network (NN) originally referred to a network of interconnected neurons which are basic building blocks of the nervous system. Fault tolerance is known as an inherent feature of artificial neural networks (ANNs). Wide attention has been given to the problem of fault-tolerance in the VLSI implementation domain and not enough attention has been paid to the intrinsic capacity to survive faults. In this work we focus on the impact of faults on the neural computation in order to show that neural paradigms cannot be considered intrinsically fault-tolerant. A high abstraction level (corresponding to the neural graph) error model is introduced in this paper. We propose a fault model and present an analysis of the usability of our method for fault masking. Simulation results show that, with this new fault model, faults with a less significant contribution are masked in the output.",2009,0, 1350,On test application time and defect detection capabilities of test sets for scan designs,"The test application time of test sets for scan designs can be reduced (without reducing the fault coverage) by removing some scan operations, and increasing the lengths of the primary input sequences applied between scan operations. In this paper, we study the effects of such a compaction procedure on the ability of a test set to detect defects. Defect detection is measured by the number of times the test set detects each stuck-at fault, which was shown to be related to the defect coverage of the test set. We also propose a compaction procedure that affects the numbers of detections of stuck-at faults in a controlled way",2000,0, 1351,Study on Monitoring and Fault Diagnosis for Ignition System of Engines,"This paper introduces test technology for the ignition system of automobile engines and proposes a model of a distributed monitoring and diagnosis system. The overall structure of this monitoring and diagnosis system is introduced. Moreover, a method for pertinent knowledge acquisition, analysis and organization is discussed. Finally, typical applications of the distributor centrifugal mechanism model and the ignition coil model using the monitoring and fault diagnosis system are provided.",2009,0, 1352,Nonlinear Discrete-time feedback error learning with PI Controller for AC servo motor,The theme of this paper is to show the efficiency of nonlinear discrete-time feedback error learning (NDTFEL) in real applications and how to improve it. A PI controller is chosen to improve the stability and reliability of the system by increasing its robustness. The simulations show how NDTFEL works on error rejection. The results also show how the PI controller improves the stability of NDTFEL.,2008,0, 1353,Application of the Elman Network and Synthetically Relational Analysis to Fault Diagnosis for Suction Fan,The theory of the Elman network and gray relational analysis was introduced; a method based on the synthetically relational analysis was presented and applied to fault diagnosis of a suction fan. The same testing fault mode was diagnosed by both the Elman network method and the synthetically relational analysis, respectively.
The result shows that the fault diagnosis based on the synthetically relational analysis is feasible and simple.,2008,0, 1354,The PSTR/SNS scheme for real-time fault tolerance via active object replication and network surveillance,"The TMO (Time-triggered Message-triggered Object) scheme was formulated as a major extension of the conventional object structuring schemes with the idealistic goal of facilitating general-form design and timeliness-guaranteed design of complex real-time application systems. Recently, as a new scheme for realizing TMO-structured distributed and parallel computer systems that are capable of both hardware and software fault tolerance, we have formulated and demonstrated the PSTR (Primary-Shadow TMO Replication) scheme. An important new extension of the PSTR scheme discussed in this paper is an integration of the PSTR scheme and a network surveillance (NS) scheme. This extension results in a significant improvement in the fault coverage and recovery time bound achieved. The NS scheme adopted is a recently-developed scheme that is effective in a wide range of point-to-point networks, and it is called the SNS (Supervisor-based Network Surveillance) scheme. The integration of the PSTR scheme and the SNS scheme is called the PSTR/SNS scheme. The recovery time bound of the PSTR/SNS scheme is analyzed on the basis of an implementation model that can be easily adapted to various commercial operating system kernels",2000,0, 1355,RMS bounds and sample size considerations for error estimation in linear discriminant analysis,"The validity of a classifier depends on the precision of the error estimator used to estimate its true error. This paper considers the necessary sample size to achieve a given validity measure, namely RMS, for resubstitution and leave-one-out error estimators in the context of LDA. It provides bounds for the RMS between the true error and both the resubstitution and leave-one-out error estimators in terms of sample size and dimensionality. These bounds can be used to determine the minimum sample size in order to obtain a desired estimation accuracy, relative to RMS. To show how these results can be used in practice, a microarray classification problem is presented.",2010,0, 1356,Two corpuses of spreadsheet errors,"The widespread presence of errors in spreadsheets is now well-established. Quite a few methodological and software approaches have been suggested as ways to reduce spreadsheet errors. However, these approaches are always tailored to particular types of errors. Are such errors, in fact, widespread? A tool that focuses on rare errors is not very appealing. In other fields of error analysis, especially linguistics, it has proven useful to collect corpora (systematic samples) of errors. This paper presents two corpora of errors seen in spreadsheet experiments. Hopefully, these corpora will help us assess the claims of spreadsheet reduction approaches and should guide theory creation and testing.",2000,0, 1357,Evaluation of a single accelerometer based biofeedback system for real-time correction of neck posture in computer users,"The worldwide adoption of computers is closely linked to increased prevalence of neck and shoulder pain. Many ergonomic interventions are available; however, the lifetime prevalence of neck pain is still estimated as high as 80%. This paper introduces a biofeedback system using a novel single accelerometer placement. This system allows the user to react and correct for movement into a position of bad posture.
The addition of visual information provides artificial proprioceptive information on the cranial-vertebral angle. Six subjects were tested for 5 hours with and without biofeedback. All subjects had a significant decrease in the percentage of time spent in bad posture when using biofeedback.",2009,0, 1358,Monitoring Crustal Deformation along the Xianshuihe Fault in the Eastern Tibetan Margin Area with Envisat ScanSAR Interferometry,"The Xianshuihe fault of Sichuan province, southwest China, is a highly active strike-slip fault approximately 350 km long. Previous studies have described up to 10-17 mm/yr of left-lateral slip on the Xianshuihe fault during the last decades, as indicated from geological criteria and GPS observations. Satellite InSAR observation is an alternative technique effectively used to measure the crustal deformation on the active fault. However, the conventional strip mode SAR imagery (i.e. ENVISAT/ASAR IM image) usually covers a 100 km wide stripe and will therefore not be sufficient for such a large-scale deformation study. In this paper, we present ScanSAR interferometry applied to ENVISAT/ASAR WSM data to produce an approximately 400×400 km2 deformation map over the Xianshuihe fault zone and the Garze-Yushu fault zone. The preliminary stacked deformation results of six WS-WS interferograms indicated a northeast-southwest deformation trend nearly perpendicular to the two fault zones and demonstrated the potential to monitor such wide-stretched deformation using ENVISAT WSS data.",2008,0, 1359,Autonomous Fault Protection Orbit Domain Modeling In Aerobraking,"The Spacecraft Imbedded Distributed Error Response (SPIDER) Fault Protection architecture used on the Mars Reconnaissance Orbiter (MRO) incrementally developed capabilities based upon heritage spacecraft. The primary driving factors behind the improvements for this mission stemmed from several key concerns and development goals/requirements. Due to decreased risk tolerance at the program level, most hardware was cross-strapped on the spacecraft and additional autonomous responses were required to ensure the safety of the spacecraft and to provide for a more robust handling of the system during high-risk events. Numerous interplanetary spacecraft missions have demonstrated the need to reduce maintenance effort during spacecraft operations. Aerobraking, the process of using atmospheric drag to dissipate orbital energy to achieve the desired science orbit following orbit insertion, is a delicate process. The operations team is constrained by orbit geometry, 2-way light time delays, Martian weather, and a highly dynamic environment in which each pass through the atmosphere affects the subsequent orbit timing in a non-deterministic way. In the event of a fault on past missions, the operations team has required both constant contact with the spacecraft to diagnose the failure, and prompt issuance of the necessary ground commands to ensure spacecraft safety. In order to increase the autonomous response capability during Aerobraking and reduce the response time to faults, MRO developed a new set of capabilities, called the Navigation Performance Monitor (NPM). NPM, a member of the performance layer of SPIDER Fault Protection software, provides autonomous orbit domain modeling so the spacecraft may configure itself correctly for events within each orbit.
During Aerobraking for MRO, NPM was called into action and successfully provided orbit modeling data to the Safe Mode software in response to two system faults; this demonstrated the robustness of our approach.",2007,0, 1360,The Analysis and Comparison of Tracking Error between SSE-50 ETF and SSE-180 ETF,"The SSE-50 ETF and SSE-180 ETF are very similar in both index construction and index characteristics, so this thesis selects the SSE-50 ETF and SSE-180 ETF as the research objects, then analyzes the tracking errors of the SSE-50 ETF and SSE-180 ETF empirically in three ways, and finally discusses the reasons for the different tracking errors between the SSE-50 ETF and SSE-180 ETF.",2010,0, 1361,High-performance speed measurement by suppression of systematic resolver and encoder errors,"The subject of this paper is a method which suppresses systematic errors of resolvers and optical encoders with sinusoidal line signals. The proposed method does not require any additional hardware and the computational efforts are minimal. Since this method does not cause any time delay, the dynamics of the speed control are not affected. By means of this new scheme, dynamic and smooth running characteristics of drive systems are improved considerably.",2004,0, 1362,Fault-tree based evaluation of tiered autonomic systems,"The success of self configuration in autonomic distributed systems depends on the availability of the management components and their interconnections. This paper describes a fault tree model to evaluate the availability of a tiered autonomic system that considers the failures of management components. The model predictions will guide the selection and placement of the management components to meet the service-level availability requirements at substantially lower cost and time.",2008,0, 1363,A newly developed web-based fault locating technology for transmission lines and its experience in the field,"This paper describes a newly developed fault locating (FL) system using Internet technology. The method using current and voltage data at one terminal for FL has been applied, and good results have been obtained. However, this method has a theoretical error in multi-terminal transmission lines because of the influence of the fault current from the terminal without FL equipment. It is necessary for precise fault location to gather the data from all of the terminals connected to the faulted line. Internet technology has progressed remarkably and enabled us to gather the data easily from distributed multi-terminals. The proposed FL system uses current, voltage data and status information of power apparatus of multi-terminals via Internet communication. By using these data, the fault-locating accuracy of the proposed FL system improves remarkably. The network for transmitting information is constituted by general network devices. To perform fault location, a general PC is used. Moreover, for the acquisition of real-time information from the power system, compact terminal devices are used. These devices have micro web servers for realizing the data transmission over HTTP. This combination of Internet technology and real time processing technology realized low-cost and high-performance FL systems. The proposed system has been in practical use in Chubu Electric Power Co. Inc.,
and excellent results have been obtained.",2002,0, 1364,Short Paper: Data Mining-based Fault Prediction and Detection on the Grid,"This paper describes a novel approach to fault detection and prediction on the grid based on data mining techniques. Data mining techniques are here applied as a means to effectively process the significant amount of captured data from grid sites, services, workflows and activities. The paper provides a first evaluation of the proposed techniques in terms of their ability to utilize relevant information and to meet the fault tolerance requirements. This approach is an intelligent, distributed framework for fault detection and prediction of anomalies and failed activities using resource- and workflow-based information. We use fault predictions to improve the performance of the workflow execution by avoiding potential faults of activities",2006,0, 1365,Software-implemented fault-tolerance and separate recovery strategies enhance maintainability [substation automation],"This paper describes a novel approach to software-implemented fault tolerance for distributed applications. This new approach can be used to enhance the flexibility and maintainability of the target applications in a cost-effective way. This is achieved through a framework-approach including: (1) a library of fault tolerance functions; (2) a middleware application coordinating these functions; and (3) a language for the expression of nonfunctional services, including configuration, error recovery and fault injection. This framework-approach increases the availability and reliability of the application at a justifiable cost, also thanks to the re-usability of the components in different target systems. This framework-approach further increases the maintainability due to the separation of the functional behavior from the recovery strategies that are executed when an error is detected, because the modifications to functional and nonfunctional behavior are, to some extent, independent, and hence less complex to deal with. The resulting tool matches well, e.g., with current industrial requirements for embedded distributed systems, calling for adaptable and reusable software components. The integration of this approach in an automation system of a substation for electricity distribution is reported as a case study. This case study shows in particular the ability of the configuration-and-recovery language ARIEL to allow adaptability to changes in the environment. This framework-approach is also useful in the context of distributed automation systems that are interconnected via a nondedicated network",2002,0, 1366,"Application of a radio frequency antenna technique for monitoring the speed, torque and internal fault conditions of A.C. and D.C. electric motors","This paper describes a novel radio frequency (R.F.) measurement technique which can be used to measure the speed of rotation and the torque developed by a mechanically commutated DC electric motor. It also has the potential to detect fault conditions within the motor. The measuring instrument is a specifically designed radio antenna which detects the flux variations and R.F. signals associated with the partial discharges or arcing events present during commutation.
Basic commutation and the factors influencing the partial discharge or arcing activity are described, followed by examples of the signals received and measurement results.",2004,0, 1367,Real-Time Model-Based Fault Detection and Diagnosis for Alternators and Induction Motors,"This paper describes real-time model-based fault detection and diagnosis software. The electric machines diagnosis system (EMDS) covers field winding shorted-turn faults in alternators and stator winding shorted-turn faults in induction motors. The EMDS has a modular architecture. The modules include: acquisition and data treatment; well-known parameter estimation algorithms, such as recursive least squares (RLS) and the extended Kalman filter (EKF); dynamic models for fault simulation; fault detection and identification tools, such as M.L.P. and S.O.M. neural networks and the fuzzy C-means (FCM) technique. The modules working together detect possible faulty conditions of various machines working in parallel through routing. Fast, safe and efficient data manipulation requires high database management system (DBMS) performance. In our experiment, the EMDS real-time operation demonstrated that the proposed system could efficiently and effectively detect abnormal conditions, resulting in lower-cost maintenance for the company.",2007,0, 1368,Identification of network parameter errors,"This paper describes a simple yet effective method for identifying incorrect parameters associated with the power network model. The proposed method has the desired property of distinguishing between bad analog measurements and incorrect network parameters, even when they appear simultaneously. This is accomplished without expanding the state or the measurement vectors. There is also no need to a priori specify a suspect parameter set. All these features are verified via simulations that are carried out using different-size test systems for various possible cases. Implementation of the method involves minor changes in the weighted least-squares state estimation code; hence, it can be easily integrated into existing state estimators as an added feature.",2006,0, 1369,The exterminators [software bugs],"This paper describes a sound methodology developed at Praxis High Integrity Systems for detecting and exterminating bugs during all stages of a software project. To develop software, the London-based software house uses mathematically based techniques, known as formal methods, which require that programmers begin their work not by writing code but rather by stringing together special symbols that represent the program's logic. Like a mathematical theorem, these symbol strings can be checked to verify that they form logically correct statements. Once the programmer has checked that the program doesn't have logical flaws, it's a relatively simple matter to convert those symbols into programming code. With an average of less than one error in every 10,000 lines of delivered code, Praxis claims a bug rate that is at least 50 times better than the industry standard.",2005,0, 1370,Operating characteristics of the Permanent-Magnet-Biased Saturation Based Fault Current Limiter,"This paper describes a topological configuration of a fault current limiter consisting of a permanent magnet and a saturable core. The operating characteristics of this permanent-magnet-biased saturation based fault current limiter (PMFCL) are simulated in detail by means of the finite element method (FEM) with ANSOFT.
Firstly, the relationship between the total harmonic distortion of the transient limiting current and its peak value is analyzed. Secondly, the paper presents the flux density variation with different source voltages as well as the magnetic flux density distribution under different operation conditions. Finally, the fault current limiting characteristics influenced by the source voltages and the number of turns of the coil are investigated. The research results present an analytical basis and a calculation reference for the parameter optimization of this type of PMFCL.",2008,0, 1371,Tunable Transient Filters for Soft Error Rate Reduction in Combinational Circuits,"This paper describes a tunable transient filter (TTF) design for soft error rate reduction in combinational logic circuits. TTFs can be inserted into combinational circuits to suppress propagated single-event upsets (SEUs) before they can be captured in latches/flip-flops. TTFs are tuned by adjusting the maximum width of the propagated SEU that can be suppressed. TTFs require 6-14 transistors, making them an attractive cost-effective option to reduce the soft error rate in combinational circuits. A global optimization approach based on geometric programming that integrates TTF insertion with dual-VDD and gate sizing is described. Simulation results for the 70 nm process technology indicate that a 17-48X reduction in the soft error rate can be achieved with this approach.",2008,0, 1372,Defect Data Analysis Based on Extended Association Rule Mining,"This paper describes an empirical study to reveal rules associated with defect correction effort. We defined defect correction effort as a quantitative (ratio scale) variable, and extended conventional (nominal scale based) association rule mining to directly handle such quantitative variables. An extended rule describes the statistical characteristic of a ratio or interval scale variable in the consequent part of the rule by its mean value and standard deviation so that conditions producing distinctive statistics can be discovered. As an analysis target, we collected various attributes of about 1,200 defects found in a typical medium-scale, multi-vendor (distance development) information system development project in Japan. Our findings based on extracted rules include: (1) Defects detected in coding/unit testing were easily corrected (less than 7% of mean effort) when they were related to data output or validation of input data. (2) Nevertheless, they sometimes required much more effort (lift of standard deviation was 5.845) in cases of low reproducibility. (3) Defects introduced in coding/unit testing often required large correction effort (mean was 12.596 staff-hours and standard deviation was 25.716) when they were related to data handling. From these findings, we confirmed that we need to pay attention to types of defects having large mean effort as well as those having large standard deviation of effort since such defects sometimes cause excess effort.",2007,0, 1373,Ionospheric corrections from a prototype operational assimilation and forecast system,"This paper describes an operational system, sponsored by the US Air Force, for generating and distributing near real-time three-dimensional ionospheric electron densities and corresponding GPS propagation delays. The core ionospheric model solves plasma dynamics and composition equations governing evolution of density, velocity and temperature for ion species on a fixed grid in magnetic coordinates.
It uses a realistic model of the Earth's magnetic field and solar indices obtained in real time from NOAA's Space Environment Center. At the present time the model computes real-time ion and electron densities at a grid of more than one million points. Higher resolutions are anticipated in the future. While the core model is capable of delivering realistic results, its accuracy can be significantly improved by employing a special set of numerical techniques known as data assimilation. These techniques originated and are currently used for numerical weather forecasting. The core ionospheric model is constantly fed real-time observational data from a network of reference GPS ground stations. This improves both the nowcast and the forecast of electron densities. Web-based access to the system is provided to early users for validation and exploration purposes at: http://fusionnumerics.com/ionosphere.",2004,0, 1374,Detecting errors in the ATLAS TDAQ system: A neural networks and support vector machines approach,"This paper describes how neural networks and support vector machines can be used to detect errors in a large scale distributed system, specifically the ATLAS Trigger and Data AcQuisition (TDAQ) system. By collecting, analysing and preprocessing some of the data available in the system it is possible to recognize and/or predict error situations arising in the system. This can be done without detailed knowledge of the system or of the data available. Hence the presented methods could be used in similar systems without significant changes. The TDAQ system, and in particular the main components related to this work, is described together with the test setup used. We simulate a number of error situations in the system and simultaneously gather both performance measures and error messages from the system. The data are then preprocessed and neural networks and support vector machines are applied to try to detect the error situations, achieving classification accuracy ranging from 88% to 100% for the neural networks and from 90.8% to 100% for the support vector machines approach.",2009,0, 1375,Development of fault detection and reporting for non-central maintenance aircraft,"This paper describes how real-time faults can be automatically detected in Boeing 737 airplanes without significant hardware or software modifications, or potentially expensive system re-certification, by employing a novel approach to Airplane Conditioning and Monitoring System (ACMS) usage. The ACMS is a function of the Digital Flight Data Acquisition Unit (DFDAU), which also collects aircraft parameters and transmits them to the Flight Data Recorder (FDR). The DFDAU receives digital and analog data from various airplane subsystems, which is also available to the ACMS. Exploiting customized ACMS software allows airline operators to specify collection and processing of various aircraft parameters for flight data monitoring, maintenance, and operational efficiency trending. Employing a rigorous systems engineering approach with detailed signal analysis, fault detection algorithms are created for software implementation within the ACMS to support ground-based reporting systems. To date, over 160 algorithms are in development based upon the existing Fault Reporting and Fault Isolation Manual (FRM/FIM) structure and availability of system signals for individual faults.
Following successful field-testing and implementation, 737 airplane customers have access to a state of fault detection automation not previously available on aircraft without central maintenance monitoring.",2010,0, 1376,A Fault-Tolerant Design of a Microcontroller-Based Monitoring System for a Mine Drainage Pump,"This paper describes a hybrid-redundant design of monitoring systems with proprietary industrial-control computer boards. Input channels use three-mode redundancy, host controllers use two-mode redundancy and output channels use warm redundancy. Experiments and analysis of system reliability by the Markov model indicate that the reliability of normal operation for 5 years is above 0.9.",2008,0, 1377,Estimation of Dip Frequency from Fault Statistics - Including Three-Phase Characteristics,"This paper describes methods for estimating the voltage dip frequency, due to faults, experienced by customers. The input to the calculation consists of fault statistics from network operators. The calculations are based on the method of fault positions, extended with a three-phase classification of voltage dips",2006,0, 1378,Icon based error concealment for JPEG and JPEG 2000 images,"This paper describes methods to recover the useful data in JPEG and JPEG 2000 compressed images and to estimate data for those portions of the image where correct data cannot be recovered. These techniques are designed to handle the loss of hundreds of bytes in the file. No use is made of restart markers or other optional error detection features of JPEG and JPEG 2000, but an uncorrupted low resolution version of the image, such as an icon, is assumed to be available. These icons are typically present in Exif or JFIF format JPEG files.",2003,0, 1379,Mars Polar Lander fault identification using model-based testing,"This paper describes the application of the Test Automation Framework (TAF) on the Mars Polar Lander (MPL) software. The premature shutdown of the descent engine on the MPL spacecraft is believed to be the most likely cause for the mission failure. It is believed that the engine shutdown occurred when the three landing legs were extended into their deployed position. This event created an unanticipated transient touchdown indication from the legs, causing the software to inadvertently shut down the descent engines prior to reaching the surface of Mars. This spurious indication should have been ignored by the touchdown monitor (TDM) software, but due to a design flaw, was actually ""latched,"" thus causing the premature engine shutdown. The TAF approach was used to model the TDM software requirements. The associated TAF tools generated tests that identified a potential TDM fault",2001,0,1380 1380,Mars Polar Lander fault identification using model-based testing,"This paper describes the application of the Test Automation Framework on the Mars Polar Lander (MPL) software. The premature shutdown of the descent engine on the MPL spacecraft is believed to be the most likely cause for the mission failure. It is believed that the engine shutdown occurred when the three landing legs were extended into their deployed position. This event created an unanticipated transient touchdown indication from the legs, causing the software to inadvertently shut down the descent engines prior to reaching the surface of Mars. This spurious indication should have been ignored by the Touchdown Monitor (TDM) software, but due to a design flaw, was actually stored in a program variable, thus causing the premature engine shutdown.
The TAF approach was used to model the TDM software requirements. The associated TAF tools generated tests that identified a TDM fault that is the most likely cause of the mission failure.",2002,0, 1381,The Application of Time Series Seasonal Multiplicative Model and GARCH Error Amending Model on Forecasting the Monthly Peak Load,"This paper describes the basic principle of the seasonal multiplicative model in time series and the GARCH model, applies the former to forecast the monthly peak load, and then uses the latter to amend the forecasting error. Using real data from a regional power grid, the results show the forecasting precision and the effect of the error-modifying model.",2009,0, 1382,Minos - the design and implementation of an embedded real-time operating system with a perspective of fault tolerance,"This paper describes the design and implementation of a small real time operating system (OS) called Minos and its application in an onboard active safety project for general aviation. The focus of the operating system is predictability, stability, safety and simplicity. We introduce fault tolerance aspects in software by the concept of a very fast reboot procedure and by an error correcting flight data memory (FDM). In addition, fault tolerance is supported by custom designed hardware.",2008,0, 1383,Detection and location of low-level arcing faults in metal-clad electrical apparatus,"This paper describes the development and testing of a microprocessor based system for detecting and locating switchgear system low-level arcing faults. Four parameters of the arcing phenomena are monitored for detecting arcs. This makes the performance more reliable than the techniques that use fewer phenomena. Some results that show the performance of the developed system are presented in the paper. The cases include arcs at different locations inside and outside a transformer, and operation during normal loads",2001,0, 1384,High level net models: a tool for permutation mapping and fault detection in multistage interconnection network,"This paper aims at structuring the detection of different types of stuck-at faults for a wide range of multistage interconnection networks (MINs). The results reported so far in this respect have been mainly based on direct combinatorial analysis of the concerned networks with very little consideration towards the modelling aspects. Graphical representation coupled with well-defined semantics allowing formal analysis has already established the Petri net as an effective tool for modelling dynamic systems. However, the existing variants of high level nets had certain limitations in modelling the dynamic behaviour of mapping a permutation through the MIN and further analysis of the same. This has inspired the authors to propose a couple of new high level net models, called MP-net and S-net in their earlier works. The S-net model uses tokens to hold and propagate information apart from controlling the firing of events. It uses two different types of places and transitions each, as has been defined subsequently. In this paper, we have concentrated on the detection of faults in MINs using this S-net model",2000,0, 1385,Architecture of a fault tolerant system for real time embedded applications,"This paper aims to present a generalized fault tolerant architecture for time critical embedded systems where microprocessors/microcontrollers are used as basic processing elements.
If such a system fails at any instant of time, a standby mechanism is required to take over the responsibility of task handling automatically, without affecting the processing of the tasks. The case of a digital telephony system may be cited as an example. If such a system becomes faulty and a standby unit takes the processing responsibility instantaneously or within a maximum allowable limit of time, then no call will be dropped. The proposed architecture is based on a redundancy approach and a shadow of the main memory of a unit is kept so that the standby unit can access current data and status instantly. By introducing this type of approach, restoration of processing can be achieved within a short interval of time. A monitoring logic is incorporated to monitor the health of the system continuously. It is assumed that this monitoring logic will work properly even if the other portions of the system fail.",2002,0, 1386,Technical loss calculation by distribution system segment with corrections from measurements,"This paper aims to present a methodology for the calculation of technical losses per segment of a power distribution system. One of the most important data items is the billed energy of each customer. After this calculation, the energy supplied by each feeder is obtained. This calculated energy is then compared with the measured energy. As a result of this comparison, it is possible to correct the technical losses calculated previously, considering the flow of the non-billed energy (theft and fraud) through the network. Consequently, a new value for the technical losses and for the non-technical losses is obtained. This paper presents the methodology for the calculation of technical losses with correction through measurements, as well as the results obtained.",2009,0, 1387,Control of a full-converter Permanent Magnet Synchronous Wind Generator with Neutral Point Clamped converters during a network fault,"This paper analyses the behaviour of a full-converter wind generation system with a back-to-back conversion structure using Neutral Point Clamped (NPC) converters during network faults. A Permanent Magnet Synchronous Generator (PMSG) is used as the generator. The main problems of this structure during voltage sags are, firstly, the accumulation of power in the DC-link due to the reduction of power delivered to the grid, and secondly, the control of the neutral point voltage. Three control strategies are proposed with the purpose of optimizing operation under network fault conditions. The characteristics of those strategies with regard to DC-link voltage control, torque variations required of the PMSG and neutral point voltage control are also discussed.",2010,0, 1388,Minimisation and prediction of the error dynamic range in finite wordlength FIR based architectures: application to the 2-D orthogonal DWT,"This paper analyses the effects of fixed-point arithmetic in FIR filter based architectures, based on the roundoff statistical noise model. A novel approach, which allows minimising the architecture's error dynamic range and predicting its value according to the wordlength fractional precision, is suggested. This permits a hardware designer to preset the fractional precision for the sought implementation accuracy.
The efficiency of this approach is demonstrated through the 2D orthogonal Daubechies-8 DWT family.",2003,0,
1389,Fault-tolerance for PastryGrid middleware,"This paper analyses the performance of a decentralized and fault-tolerant software layer for Desktop Grid resource management. The grid middleware under consideration is named PastryGrid. Its main design principle is to eliminate the need for a centralized server, and therefore to remove the single point of failure and bottleneck of existing Desktop Grids. PastryGrid (based on Pastry) supports the execution of distributed applications with precedence between tasks in a decentralized way. Indeed, each node can play alternatively the role of client or server. Our main contribution is to propose a fault tolerant mechanism for the PastryGrid middleware. Since the management of PastryGrid is distributed over the participants without a central manager, its control becomes a challenging problem, especially when dealing with faults. The experimental results on the Grid'5000 testbed demonstrate that our decentralized fault-tolerant system is robust because it supports high fault rates.",2010,0,
1390,Distortion analysis and error concealment for multi-view video transmission,"This paper analyzes the distortion in decoded multi-view video caused by random packet losses in the underlying transmission network and proposes an error concealment method for multi-view video. Taking into account the interdependent coding among views, which is an important component of multi-view video coding, a distortion estimation formula is developed by mathematical derivation. Additionally, an error concealment scheme for multi-view stereo video is proposed to enhance the quality of the multi-view video over packet-switching networks.",2010,0,
1391,A Genetic Algorithm Based Method of Fault Maintenance in Software-Intensive System,"This paper analyzes the problem of fault maintenance in software-intensive systems. The problem is described and modeled by a genetic algorithm (GA) based method. After a complete investigation of the elements involved in the problem, the control parameters of the GA are determined. The feasibility and the degree of accuracy of the GA based method are tested with an example at the end.",2009,0,
1392,A study of effect of element pointing error on power density in near field of reflector arrays,"This paper addresses the issue of acceptable power density levels in arrays of small antennas to be used for uplink transmission functions at NASA/JPL's deep space network (DSN). This problem is of interest due to safety considerations as well as interference with receivers on nearby flying objects such as helicopters or other systems. Specifically, array element pointing errors can produce hot spots due to coherent combining of the radiation from individual elements in the near and mid field regions of the antenna arrays, which may be larger than expected and could become unacceptable. Therefore care must be taken that such situations do not occur. In this paper the interference problem is formulated and limits are obtained to prevent the peak addition of signals, which could be 6 dB higher than that achieved by individual elements.
Ample plots and examples are provided to clarify the concept and provide guidelines for pointing error specifications",2005,0,
1393,The Fault Tolerant Parallel Algorithm: the Parallel Recomputing Based Failure Recovery,"This paper addresses the issue of fault tolerance in parallel computing, and proposes a new method named parallel recomputing. This method achieves fault recovery automatically by using surviving processes to recompute the workload of failed processes in parallel. The paper firstly defines the fault tolerant parallel algorithm (FTPA) as the parallel algorithm which tolerates failures by parallel recomputing. Furthermore, the paper proposes the inter-process definition-use relationship analysis method, based on conventional definition-use analysis, for revealing the relationship of variables in different processes. Under the guidance of this new method, principles of fault tolerant parallel algorithm design are given. Finally, the authors present the design of FTPAs for matrix-matrix multiplication and the NPB kernels, and evaluate them by experiments on a cluster system. The experimental results show that the overhead of FTPA is less than the overhead of checkpointing.",2007,0,
1394,Fault-tolerant gait generation for locked joint failures,"This paper addresses the issue of tolerating a locked joint failure in gait planning for hexapod robots which have symmetric structures and legs in the form of an articulated arm with three revolute joints. A locked joint failure is one for which a joint cannot move and is locked in place. If a failed joint is locked, the workspace of the resulting leg is constrained, but hexapod walking machines have the ability to continue static walking. A fault-tolerant tripod gait strategy is proposed. In particular, a periodic gait is proposed as a special form of the proposed algorithm, and its existence and efficiency are analytically proven. A case study on applying the proposed scheme to the standard tripod gait verifies its applicability and capability.",2003,0,
1395,Design of supervisory control scheme for fault tolerant control of telerobotic system in operational space,"This paper addresses the issues of developing a fault tolerant control system for telerobotic systems in operational space. First, the characteristics of operational faults and the relevant sensor signals are studied and classified. Second, the framework for FDI and the associated supervisory control scheme are proposed to effectively integrate the FDI approaches into telerobotic systems.",2003,0,
1396,Improving Multilabel Analysis of Music Titles: A Large-Scale Validation of the Correction Approach,"This paper addresses the problem of automatically extracting perceptive information from acoustic signals, in a supervised classification context. Global labels, i.e., atomic information describing a music title in its entirety, such as its genre, mood, main instruments, or type of vocals, are entered by humans. Classifiers are trained to map audio features to these labels. However, the performances of these classifiers on individual labels are rarely satisfactory. In the case where we have to predict several labels simultaneously, we introduce a correction scheme to improve these performances. In this scheme, an instance of the classifier fusion paradigm, an extra layer of classifiers is built to exploit redundancies between labels and correct some of the errors coming from the individual acoustic classifiers.
We describe a series of experiments aiming at validating this approach on a large-scale database of music and metadata (about 30,000 titles and 600 labels per title). The experiments show that the approach brings statistically significant improvements.",2009,0,
1397,Fault-tolerant output tracking control for a flexible air-breathing hypersonic vehicle,"This paper addresses the problem of guaranteed cost fault-tolerant output tracking control against actuator faults for a flexible air-breathing hypersonic vehicle. Firstly, using the parameters of the trim condition, a linearized model is established around the trim point for a nonlinear, dynamically coupled simulation model. Secondly, the control objective and models of actuator faults are presented. Thirdly, the performance analysis condition is proposed in the frame of convex optimization problems via the Lyapunov functional approach. Then, the standard controller and the fault-tolerant controller are designed such that the resulting closed-loop system is asymptotically stable and satisfies a prescribed performance cost, respectively. Finally, the simulation results are given to show the effectiveness of the proposed control method, which is verified by excellent reference altitude and velocity tracking performance.",2010,0,
1398,Test program synthesis for path delay faults in microprocessor cores,"This paper addresses the problem of testing path delay faults in a microprocessor core using its instruction set. We propose to self-test a processor core by running an automatically synthesized test program which can achieve a high path delay fault coverage. This paper discusses the method and the prototype software framework for synthesizing such a test program. Based on the processor's instruction set architecture, micro-architecture, and RTL netlist, as well as the gate-level netlist on which the path delay faults are modeled, the method generates deterministic tests (in the form of instruction sequences) by cleverly combining structural and instruction-level test generation techniques. The experimental results for two microprocessors indicate that the test instruction sequences can be successfully generated for a high percentage of testable path delay faults",2000,0,
1399,Experimentally evaluating an automatic approach for generating safety-critical software with respect to transient errors,"This paper deals with a software modification strategy allowing on-line detection of transient errors. Being based on a set of rules for introducing redundancy in the high-level code, the method can be completely automated, and is therefore particularly suited for low-cost safety-critical microprocessor-based applications. Experimental results are presented and discussed, demonstrating the effectiveness of the approach in terms of fault detection capabilities",2000,0,
1400,Investigation on vector control of three-phase synchronous machines under supply fault condition,"This paper deals with modeling and control of a three-phase permanent magnet synchronous machine (PMSM) with sinusoidal back electromotive forces (emf) under supply fault conditions. The overall system is modeled thanks to the energetic macroscopic representation (EMR) formalism, considering that a fusible element opens the phase circuit. Using systematic inversion rules of the EMR, a maximum control structure (MCS) is deduced. Based on the analysis of the degrees of freedom (DOF) of the drive, two control strategies for constant-torque operation under supply fault conditions are inferred.
One method balances the perturbations generated by the supply fault, and the other consists in modifying the current controller's structure. These methods are validated through simulations with Matlab software",2006,0,
1401,The application of synthetic focusing for imaging crack-like defects in pipelines using guided waves,"This paper deals with quantifying the performance of a technique for detection, location, and sizing of circumferential crack-like defects in pipelines using synthetically focused guided waves. The system employs a circumferential array of piezoelectric transducer elements. A torsional probing guided wave is excited using the array, which subsequently interacts with the reflecting features of the pipe, such as defects or weld caps. The recorded backscattered signals are synthetically focused to every point of interest in the pipe wall, to form an image of the reflecting features of the pipe. The defect image amplitude is used to estimate the defect depth, and the full width at half maximum of the defect image circumferential profile is used to estimate the circumferential extent of the defect. The imaging system is tested with data from finite element simulations and from laboratory experiments. It is found that reliable sizing of circumferential cracks in finite element simulations and experiments can be achieved if the circumferential extent of the defect is greater than 1.5λS, where λS is the shear wavelength at the frequency of inspection. This result is theoretically valid for any pipe size, any axial defect location, and any inspection frequency. Amplitude gains of around 18 dB over an unfocused system have been observed experimentally in an 8-inch pipe with a 9 dB SNR improvement.",2009,0,
1402,A New Iterative Approach to the Corrective Security-Constrained Optimal Power Flow Problem,"This paper deals with techniques to solve the corrective security-constrained optimal power flow (CSCOPF) problem. To this end, we propose a new iterative approach that comprises four modules: a CSCOPF which considers only a subset of potentially binding contingencies among the postulated contingencies, a (steady-state) security analysis (SSSA), a contingency filtering (CF) technique, and an OPF variant to check post-contingency state feasibility when taking into account post-contingency corrective actions. We compare the performance of our approach and its possible variants with classical CSCOPF approaches such as the direct approach and Benders decomposition (BD), on three systems of 60, 118, and 1203 buses.",2008,0,
1403,D-Q modeling and control of a single-phase three-level boost rectifier with power factor correction and neutral-point voltage balancing,"This paper deals with the analysis, design and operation of a control system for a single-phase three-level rectifier with a neutral-point-clamped (NPC) topology. Usually the desired operating conditions for this type of converter are: unity displacement factor, output DC voltage regulation and neutral point voltage balancing. A d-q reference frame has been used in this work to model the rectifier behaviour in order to exploit the results obtained in the field of three-phase converters. In this way, a space vector modulation PWM method has been used, with the possibility of using redundant switching states to achieve charge balancing of the capacitors. The time assignment of each redundant switching state is accomplished by utilizing a closed-loop control system.
Validity of the modeling and control strategies is confirmed by the transient and steady state simulation and experimental results.",2002,0,
1404,Fault-Tolerant PM Motors in Automotive Applications,"This paper deals with the design of permanent magnet motors with intrinsic fault-tolerant capability. Redundant solutions and innovative motor configurations are presented. Considerations on motor operation in the presence of faults are also reported. Maximum three-phase short-circuit currents and braking torque are investigated. Finally, hints to be used in the design of the motor are given",2005,0,
1405,Dynamic security corrective control by UPFCs,This paper deals with the development of a nonlinear programming methodology for evaluating corrective actions to improve the dynamic security of power systems when transient instability is detected. Remedial actions are implemented by exploiting the fast response of unified power flow controllers (UPFCs). The algorithm is implemented and tested on the Italian grid,2001,0,
1406,An improved error concealment strategy driven by scene motion properties for H.264/AVC decoders,"This paper deals with the possibility of improving the concealment effectiveness of an H.264 decoder, by means of the integration of a scene change detector. This way, the selected recovery strategy is driven by the detection of a change in the scene, rather than by the coding features of each frame. The scene detection algorithm under evaluation has been chosen from the technical literature, but a deep analysis of its performance, over a wide range of video sequences having different motion properties, has allowed the suggestion of simple but effective modifications, which provide better results in terms of final perceived video quality.",2006,0,
1407,Testing of embedded system using fault modeling,"This paper deals with the problem of testing methods for embedded systems. The problem addressed in this paper is about methods for testing whether implementations of systems are correct. The problem of testing a system for its correctness deals with the design of test cases. Test cases for correctness of the system with respect to specifications are large in number, which makes the testing of embedded systems infeasible. Robust testing of embedded systems will help in solving this problem. Robust testing can be done by considering specifications as a finite state machine.",2009,0,
1408,DSP implementation of the multiple reference frames theory for the diagnosis of stator faults in a DTC induction motor drive,"This paper deals with the use of a new diagnostic technique, based on the multiple reference frame theory, for the diagnosis of stator winding faults in a direct torque controlled (DTC) induction motor drive. The theoretical aspects underlying the use of this diagnostic technique are presented, but a major emphasis is given to the integration of the diagnostic system into the DSP board containing the control algorithm. Taking advantage of the sensors already built into the drive for control purposes, it is possible to implement this diagnostic system at no additional cost, thus giving added value to the drive itself.
Experimental results show the effectiveness of the proposed technique to diagnose stator faults and demonstrate the possibility of its integration in the DSP board.",2003,0,8469
1409,Error rates for Nakagami-m fading multichannel reception of binary and M-ary signals,"This paper derives new closed-form formulas for the error probabilities of single and multichannel communications in Rayleigh and Nakagami-m (1960) fading. Closed-form solutions to three generic trigonometric integrals are presented as part of the main result, providing a unified method for the derivation of exact closed-form average symbol-error probability expressions for binary and M-ary signals with L independent channel diversity reception. Both selection-diversity and maximal-ratio combining (MRC) techniques are considered. The results are generally applicable for arbitrary two-dimensional signal constellations that have polygonal decision regions operating in slow Nakagami-m fading environments with a positive integer fading severity index. MRC with generically correlated fading is also considered. The new expressions are applicable in many cases of practical interest. The closed-form expressions derived for a single channel reception case can be extended to provide an approximation for the error rates of binary and M-ary signals that employ an equal-gain combining diversity receiver",2001,0,
1410,BIST and fault insertion re-use in telecom systems,"This paper describes a comprehensive off-line system level test strategy developed for several complex, high-speed telecom chips. These chips feature a large number of clocks, mixed analog and digital logic and large high speed RAMs. The test strategy was based on the principles of hierarchical test, DFT reuse, BIST and at-speed test, and was implemented using DFT solutions provided by Logic Vision. The paper further specifies the requirements for integration of the DFT solutions within the ASIC vendor's flow. The final DFT implementation, which includes scan, at-speed Logic BIST, RAM BIST, PLL BIST, a user-defined IEEE 1149.1 TAP and Boundary Scan with fault insertion capability, is presented. The system-level test architecture, focusing on an in-house developed TAP Master, is given. Cost savings in system SW development and improved system quality monitoring are highlighted as major driving forces in support of this strategy. Finally, some practical issues for consideration when planning system test based on BIST are presented",2001,0,
1411,Zerotree pattern coding of motion picture residues for error-resilient transmission of video sequences,"This paper describes a compression scheme for difference-image residues in video coding. Structured spatial patterns are used to map residue pixel values into a quadtree structure, which is then coded in significance order with the SPIHT algorithm. Thus the wavelet coefficient values of standard zerotree coding are replaced by untransformed (but carefully positioned) residue pixel values. The new zerotree pattern coding method compresses as well as zerotree wavelet coding and much better than DCT coding (as in MPEG) over error-free channels. Over noisy channels, zerotree pattern coding provides built-in error resilience, allowing transmission of residue data without error control overhead.
A simple postprocessing technique provides additional error concealment.",2000,0,
1412,Control and Fault Diagnosis of a Winding Machine Based on a LTV Model,"This paper describes a controller design and also a sensor fault diagnosis based on an LTV model for a winding machine. It shows that, according to an experimental identification approach, an LTV model is ideally suited to improve the control of web tension. Moreover, based on this model, an innovative bank of interpolated LTI Kalman filters is synthesised to allow sensor fault detection and isolation (FDI). The advanced control law has been successfully applied to a winding machine. Also, sensor faults have been generated in software to improve the accuracy of the FDI algorithm on the winding machine",2005,0,
1413,Defect distribution for wearable system design,"This paper describes a design process for custom wearable systems produced in an academic setting. A set of 245 wearable design defects from two distinct periods separated by six years is presented. These data identify aspects of the process requiring significant developer effort, which we show using an orthogonal defect classification scheme. A comparison of defect attribute distributions across the two separate design periods is given. The results show that growing electronic complexity is increasing the number of defects caused by designer error, and that more defects are being observed in earlier phases of the design process.",2002,0,
1414,Error estimation of the heat conductivity coefficient determination when sample heated with the radiant flux,"This paper describes a method for determining the heat conductivity coefficient based on solving the quasi-inverse multivariate problem of heat conductivity in a cylinder, which is heated with radiant flux from its face and radiates freely from all sides into the medium at constant temperature. Radiant heating of the sample can be carried out by different methods, e.g. by means of an optical furnace, a laser or a high-temperature furnace. To determine the heat conductivity coefficient and estimate its error under radiant flux, a mathematical simulation was carried out. At first the direct problem of heat conductivity was solved. The calculation resulted in determination of the total heat flow departing from the lower surface of the sample. The accuracy of the solution is assured by the convergence of the heat balance. Then the inverse problem was solved, by means of which the heat conductivity coefficient was determined from the known density of the incoming heat flow and the total heat flow.",2003,0,
1415,Building a reliable internet core using soft error prone electronics,"This paper describes a methodology for building a reliable internet core router that considers the vulnerability of its electronic components to single event upset (SEU). It begins with a set of meaningful system level metrics that can be related to product reliability requirements. A specification is then defined that can be effectively used during the system architecture, silicon and software design process. The system can then be modeled at an early stage to support design decisions and trade-offs related to potentially costly mitigation strategies.
The design loop is closed with an accelerated measurement technique using neutron beam irradiation to confirm that the final product meets the specification.",2008,0,
1416,Practical applications of automated fault analysis,"This paper describes a new concept of automated fault analysis where fault transients and changes in power system equipment contacts are processed online. This allows faster confirmation of correct equipment operation and detection of unexpected equipment operations, as well as increased accuracy of fault location and analysis. The paper discusses two independent utility examples that illustrate automating some aspect of the fault analysis process. One approach is the substation level analysis, where local digital fault recorder (DFR) data is processed at the substation to obtain accurate fault location and analysis. Another approach is DFR data analysis at the master station location, where all DFR data files from remote locations are concentrated and processed",2000,0,
1417,Document retrieval system tolerant of segmentation errors of document images,"This paper describes a new document retrieval method that is tolerant of OCR segmentation errors in document images. To overcome the segmentation and recognition errors that most OCR-based retrieval systems suffer from, the proposed method consists of two processing phases. First, the OCR engine generates multiple character-segmentation and recognition hypotheses. Then the retrieval engine extracts keywords from the recognition hypotheses by using lexicon-driven dynamic programming (DP) matching. We have applied this method to both handwritten and printed document images and have demonstrated its effectiveness in reducing false drops and false alarms.",2004,0,
1418,A new method in reducing the overcurrent protection response times at high fault currents to protect equipment from extended stress,"This paper describes a new method for protecting power equipment from extended stresses during high fault current conditions. This was achieved using a universal protection device with a software platform that can facilitate designing time-current characteristic (TCC) curves of different shapes, all in the same hardware. When combined, recloser control and relay response times are faster and more accurate than with conventional means. Reduced device response times are achieved by combining different overcurrent TCCs. A coordination example is presented for a typical distribution system loop scheme containing new multifunction relays and reclosers. The advantages of using integrated device functions over standard overcurrent relays and recloser controls are illustrated. A comparative analysis is presented to quantify the reduction in let-thru I²t values and equipment stress that can be realized using this method during high fault current conditions",2001,0,
1419,Fault Diagnosis of Mine Hoist Braking System Based on Wavelet Packet and Support Vector Machine,"This paper concerns mine hoist braking system fault diagnosis with the combination of wavelet packet and support vector machine. It is motivated by the scarcity of fault samples in mine hoists, which are systems requiring very high safety. A novel approach is presented in order to diagnose a blocked piston in the cylinder, a typical fault of mine hoist braking systems.
This method mainly consists of three steps: (1) apply a three-level wavelet packet to decompose and reconstruct the brake distance-time signal and extract fault feature vectors; (2) set up training samples; (3) establish an SVM fault classifier to complete the fault diagnosis. Experimental results show that the SVM method can effectively accomplish the diagnosis of the blocked-piston fault of the braking system and has high adaptability for fault diagnosis with a small number of samples.",2006,0,
1420,Fault-Tolerant Supervisory Control of Discrete Event Systems Modeled by Bounded Petri Nets,"This paper considers bounded Petri nets with both controllable and uncontrollable transitions, and addresses the synthesis of a fault-tolerant supervisor in a setting where the control specifications are described via arbitrary forbidden markings. When determining the supervisor, we handle uncontrollable transitions by analyzing the reverse net and by obtaining a set of weakly forbidden markings, based on which we determine the maximally permissive control policy. We implement the supervisor by encoding the system state information into two monitor places in a way that allows us to both determine the online control policy efficiently and identify/correct single place faults (i.e., faults that corrupt the number of tokens in a single place of the Petri net, including the monitor places). The overall method need not perform reachability analysis, has low complexity requirements for online computation, and can be generalized to monitor-based control schemes that are tolerant to any number of faults.",2007,0,
1421,Scalable image and video transmission using irregular repeat-accumulate codes with fast algorithm for optimal unequal error protection,"This paper considers designing and applying punctured irregular repeat-accumulate (IRA) codes for scalable image and video transmission over binary symmetric channels. IRA codes of different rates are obtained by puncturing the parity bits of a mother IRA code, which uses a systematic encoder. One of the main ideas presented here is the design of the mother code such that the entire set of higher rate codes obtained by puncturing are good. To find a good unequal error protection for embedded bit streams, we employ the fast joint source-channel coding algorithm in Hamzaoui et al. to minimize the expected end-to-end distortion. We test with two scalable image coders (SPIHT and JPEG-2000) and two scalable video coders (3-D SPIHT and H.26L-based PFGS). Simulations show better results with IRA codes than those reported in Banister et al. with JPEG-2000 and turbo codes. The IRA codes proposed here also have lower decoding complexity than the turbo codes used by Banister et al.",2004,0,
1422,Frisch scheme identification for Errors-in-Variables systems,"This paper considers the problem of identification of dynamic Errors-in-Variables (EIV) systems. Some fatal errors of the well-known Frisch Scheme for EIV identification are presented, and on this basis an improved recursive algorithm is proposed. The new algorithm can estimate both the system parameters and the noise variance with higher accuracy and computational efficiency. Simulations illustrate the theoretical results.",2010,0,
1423,Fault tolerance in the WebCom metacomputer,"This paper addresses fault tolerance in the WebCom metacomputer. WebCom's computation platform is dynamically reconfigurable and volunteer-based.
Since its constituent machines may join and leave unpredictably, fault survival and efficient fault recovery are of paramount importance. A fault tolerance mechanism is outlined, which relies on a fast and efficient processor replacement procedure. It is shown that the characteristics of this procedure, together with the hierarchical and referentially transparent nature of WebCom executions, can be used to limit the effect of a fault to its immediate neighbourhood",2001,0,
1424,Intelligent multi-agent approach to fault location and diagnosis on railway 10kv automatic blocking and continuous power lines,"This paper discusses intelligent multi-agent technology, and proposes an intelligent multi-agent based accurate fault location and fault diagnosis system applied to 10 kV automatic blocking and continuous power transmission lines along the railway. Agents are software processes capable of searching for information in the networks, interacting with pieces of equipment and performing tasks on behalf of their owner (device). Moreover, they are autonomous and cooperative. Intelligent agents also have the capability to learn as the power supply network topology or environment changes. The system architecture is proposed, and the features of each agent are described. Analysis brings forth the merits of this fault location and diagnosis system.",2002,0,
1425,Domain Crossing Errors: Limitations on Single Device Triple-Modular Redundancy Circuits in Xilinx FPGAs,"This paper discusses the limitations of single-FPGA triple-modular redundancy in the presence of multiple-bit upsets on Xilinx Virtex-II devices. This paper presents results from both fault injection and accelerated testing. From this study we have found that the configurable logic block's routing network is vulnerable to domain crossing errors, or TMR defeats, by even 2-bit multiple-bit upsets.",2007,0,
1426,A simultaneous stabilization approach to (passive) fault tolerant control,"This paper discusses the problem of designing fault tolerant compensators that stabilize a given system both in the nominal situation and in the situation where one of the sensors or one of the actuators has failed. It is shown that such compensators always exist, provided that the system is detectable from each output and that it is stabilizable. The proof of this result is constructive. A family of second order systems is described that requires fault tolerant compensators of arbitrarily high order.",2004,0,
1427,Dynamic code deployment enables fault diagnosis in the connected home,"This paper discusses the role of the OSGi service platform in support of dynamic code deployment within home networks and its use to enable the operation of an extensible fault diagnosis capability. This capability takes the form of a prototype agent based system that can identify faults in software and hardware components by operating on a largely autonomous basis.",2008,0,
1428,Studies on the Internal Fault Simulations of a High-Voltage Cable-Wound Generator,"This paper discusses the setup of a mathematical model of the powerformer, a new type of salient-pole synchronous machine, for analyzing internal phase and ground faults in stator windings. The method employs a direct-phase representation considering the cable capacitance. To effectively implement the internal fault simulation, the magnetic axis locations of fault parts are arranged appropriately.
Moreover, all machine windings are assumed to be sinusoidally distributed in space and the system is magnetically linear. With the above-mentioned assumptions, the current-equivalent equations, voltage-equivalent equations, and the rotor-motion equations are formed and combined to implement the fault simulations. Simulation results showing the fault currents during a single-phase-to-ground fault, a two-phase-to-ground fault, and a phase-to-phase fault are presented here. With the data generated by this internal fault simulation model, the protection scheme used for the powerformer can be validated and improved accordingly.",2007,0,
1429,Some observations on prediction error minimization,"This paper does not introduce any new methodology. Its primary aim is to present both real data analysis and associated simulation results that question an over-reliance on the prediction error minimisation approach to the identification and estimation of linear transfer function models from time series data.",2010,0,
1430,Neural network detection and identification of actuator faults in a pneumatic process control valve,"This paper establishes a scheme for detection and identification of actuator faults in a pneumatic process control valve using neural networks. First, experimental performance parameters related to the valve step responses, including dead time, rise time, overshoot, and the steady-state error, are obtained directly from a commercially available software package for a variety of faulty operating conditions. Acquiring training data in this way has eliminated the need for additional instrumentation of the valve. Next, the experimentally determined performance parameters are used to train a multilayer perceptron network to detect and identify incorrect supply pressure, actuator vent blockage and diaphragm leakage faults. The scheme presented here is novel in that it demonstrates that a pattern recognition approach to fault detection and identification, for pneumatic process control valves, using features of the valve step response alone, is possible.",2001,0,
1431,Practical considerations in making CORBA services fault-tolerant,"This paper examines the CORBA Naming, Event, Notification, Trading, Time and Security Services, with the objective of identifying the issues that must be addressed in order to make these services fault-tolerant. The reliability considerations for each of these services involve strategies for replicating the service objects, and for keeping the states of the replicas consistent. Of particular interest are the sources of non-determinism in each of these services, along with the means for addressing the non-deterministic behavior in the interests of ensuring strong fault tolerance",2002,0,
1432,VNA error model conversion for N-port calibration comparison,"This paper examines the extended 12-term error model commonly used in commercial multiport vector network analyzers, introduces a generalized multiport error model, and applies this error model for the purposes of general N-port comparison of calibrations. These tools have been implemented in a commercially available calibration and measurement software product. Previous work demonstrated the utility of these tools in the estimation of calibration error associated with ignoring coupling and for evaluating measurement system repeatability.
Equations are presented for bidirectional conversion between an extended 16-term-like error model and the extended 12-term model as well as for calculation of DUT-specific and worst-case multiport calibration comparison error bounds.",2007,0,
1433,How to avoid the generation of loops in the construction of fault trees,"This paper examines the question of the generation of loops in the construction of fault trees (FTs) starting from the printouts of a hazard operability study (HazOp) analysis. Some examples of the formation of a loop are presented and the rules for its elimination are illustrated. The easiest way is to apply simple syntax rules in the graphic construction of an FT. This, however, presupposes prior recognition by the analyst of the initiation of a loop. The safest way, therefore, is to eliminate the problem at its root by deriving the FT straight from the modules of a well-structured HazOp procedure, such as recursive operability analysis",2002,0,
1434,A method for the measurement and analysis of SAR signal random phase error,"This paper explains the importance of phase error measurement and analysis through research on the impact of SAR phase error on linear frequency-modulated imaging, and introduces a method for the measurement and analysis of SAR phase error. The intrapulse phase error can be analyzed and measured correctly via the application of a Tektronix Real-Time Spectrum Analyzer RSA6114A together with radar analysis software, or SignalVu software paired with a Tektronix oscilloscope.",2009,0,
1435,Queuing Models for Field Defect Resolution Process,"This paper explores a novel application of queuing theory to the corrective software maintenance problem to support quantitative balancing between resources and responsiveness. Initially, we provide a detailed description of the states a defect traverses from find to fix and a definition and justification of mean time to resolution as a useful process metric. We consider the effect of queuing system structures, priority levels and priority disciplines on the differential mean times to resolution of defects of different severities. We find that modeling the defect resolution capacity of a software engineering group as n identical M/M/1 servers provides a flexible and realistic approximation to the queuing behavior of four different organizations. We consider three queuing disciplines. Though purely preemptive and non-preemptive priority disciplines may be suited for other groups, our data was best fit by a mixed discipline, one in which only the most severe defects preempt ongoing service activities of lesser severities. We provide two examples of the utility of such a model: given the reasonable assumption that the most severe defects have the highest impact on reliability, we find that the reduction of the resolution time for these defects must come from changes reducing the service time. On the other hand the effect of additional engineering resources on the resolution time of less severe defects is easily computed and can be significant",2006,0,
1436,Performance analysis of BPSK and QPSK using error correcting code through AWGN,"This paper highlights the performance analysis of BPSK and QPSK using error correcting codes. To calculate the bit error rate, different types of error correcting codes were used through an Additive White Gaussian Noise (AWGN) channel. Bose-Chaudhuri-Hocquenghem (BCH), cyclic and Hamming codes were used as the encoder/decoder techniques.
Basically, the performance was determined in terms of bit error rate (BER) and signal energy to noise power density ratio (Eb/No). BPSK and QPSK were also compared in terms of the symbol error-correcting capability t, where the performance is expected to improve as the value of t increases. All simulations were done using MATLAB R2007b software. In general, BCH codes demonstrate better performance than Hamming and cyclic codes for both BPSK and QPSK.",2010,0,
1437,Achieving Fault Tolerance in Data Aggregation in Wireless Sensor Networks,"This paper identifies faulty sensor(s) in a polynomial-based data aggregation scenario, TREG, proposed in our earlier work. In TREG, function approximation is performed over the entire range of data and only coefficients of a polynomial (P) are passed instead of aggregated data. Performing further mathematical operations on the calculated P can identify the maximum (max) and minimum (min) values of the sensed attribute and their locations. Therefore, if any sensor reports a data value outside the [min, max] range, it can be identified as a faulty sensor. We achieve the following goals: (1) uncorrelated readings from a specific sensor help in detecting a faulty sensor, (2) faulty sensors are detected near the source and isolated, preventing them from affecting the accuracy of the overall aggregated data and reducing the overall delay. Results show that a faulty sensor can be detected with an average accuracy of 94%. With an increase in node density, accuracy in faulty sensor detection improves as more nodes are able to report the information to their nearest tree node.",2007,0,
1438,Performance Analysis of Digital Flight Control Systems With Rollback Error Recovery Subject to Simulated Neutron-Induced Upsets,"This paper introduces a class of stochastic hybrid models for the analysis of closed-loop control systems implemented with NASA's Recoverable Computer System (RCS). Such systems have been proposed to ensure reliable control performance in harsh environments. The stochastic hybrid model consists of a stochastic finite-state automaton driven by a Markov input process, which in turn drives a switched linear discrete-time dynamical system. Stability and output tracking performance are analyzed using an extension of the existing theory for Markov jump-linear systems. The theory is then applied to predict the tracking error performance of a Boeing 737 at cruising altitude and in closed-loop with an RCS subject to neutron-induced single-event upsets. The results are validated using experimental data obtained from a simulated neutron environment in NASA's SAFETI Laboratory.",2008,0,
1439,Fault Detection and Isolation in Aircraft Systems Using Stochastic Nonlinear Modelling of Flight Data Dependencies,"This paper introduces a fault detection and isolation (FDI) scheme for aircraft systems based on the modelling of the relationships among flight variables. The modelling is performed by means of pooled nonlinear autoregressive with exogenous excitation (NARX) representations. During the system's operation in healthy mode, these relationships are valid. Hence, a scheme using statistical hypothesis testing is designed to detect changes in these relationships as a result of fault occurrence.
The FDI scheme's performance and robustness are assessed with flights conducted under various external flight conditions (turbulence)",2006,0,
1440,Selecting a restoration technique to minimize OCR error,"This paper introduces a learning problem related to the task of converting printed documents to ASCII text files. The goal of the learning procedure is to produce a function that maps documents to restoration techniques in such a way that on average the restored documents have minimum optical character recognition error. We derive a general form for the optimal function and use it to motivate the development of a nonparametric method based on nearest neighbors. We also develop a direct method of solution based on empirical error minimization for which we prove a finite sample bound on estimation error that is independent of distribution. We show that this empirical error minimization problem is an extension of the empirical optimization problem for traditional M-class classification with general loss function and prove computational hardness for this problem. We then derive a simple iterative algorithm called generalized multiclass ratchet (GMR) and prove that it produces an optimal function asymptotically (with probability 1). To obtain the GMR algorithm we introduce a new data map that extends Kesler's construction for the multiclass problem and then apply an algorithm called Ratchet to this mapped data, where Ratchet is a modification of the Pocket algorithm. Finally, we apply these methods to a collection of documents and report on the experimental results.",2003,0,
1441,A content adaptive fast PDE algorithm for motion estimation based on matching error prediction,"This paper introduces a new fast motion estimation method based on estimating the block matching error (i.e., sum of absolute differences (SAD)) between blocks, which can eliminate an impossible candidate block much earlier than a conventional partial distortion elimination (PDE) scheme. The basic idea of the proposed scheme is to predict the total SAD of a candidate block using its partial SAD. In particular, in order to improve prediction accuracy and computational efficiency, a sub-sample-based block matching approach and a selective pixel-based approach are employed. In order to evaluate the proposed scheme, several baseline approaches are described and compared. The experimental results show that the proposed algorithm can reduce the computations by about 44% for motion estimation at the cost of 0.0005 dB quality degradation versus the general PDE algorithm.",2010,0,
1442,Deriving accurate ASIC cell fault models for VITAL compliant VHDL simulation,"This paper introduces a system for deriving accurate, technology specific fault models using analog defect simulation. It is implemented by a new software tool that provides a push-button solution for the tedious task of obtaining accurate ASIC cell defect to fault mappings. After completion of the cell defect analysis, the tool generates VITAL compliant, defect-injectable, VHDL cell models. These provide an efficient means to conduct accurate fault simulation of ASIC standard cell designs",2001,0,
1443,A Systematic Robot Fault-Tolerance Approach,"This paper introduces a systematic approach to suggesting proper fault-tolerance techniques for robots. We classified general robot faults into six types, and developed a UML profile to organize and model a dependency structure of existing fault-tolerance techniques.
In future work, we will organize fault-tolerance techniques in the form of the UML profile and determine the relationships between each fault and the techniques.",2009,0,
1444,Modular fault recovery in timed discrete-event systems: application to a manufacturing cell,"This paper extends the previous results of the authors on fault recovery to timed discrete-event systems (TDES), and discusses the application of the proposed methodology to a manufacturing cell. It is assumed that the plant can be modelled as a TDES, the faults are permanent, and that a diagnosis system is available that detects and isolates faults with a bounded delay (expressed in clock ticks). Thus, the combination of the plant and the diagnosis system, as the system to be controlled, has three modes: normal, transient and recovery. Initially, the plant is in the normal mode. Once a fault occurs, the system enters the transient mode. After the fault is detected and isolated by the diagnosis system, the system enters the recovery mode. This framework does not depend on the diagnosis technique used, as long as lower and upper bounds for the diagnosis delay are available. A modular switching supervisory scheme is proposed to satisfy the system specifications. The design consists of a normal-transient supervisor, and multiple recovery supervisors, each for recovery from a particular failure mode. The issues of the nonblocking property of the system under supervision, and also supervisor admissibility (controllability), in particular coerciveness, are studied. The proposed approach is applied to a manufacturing cell consisting of two machines and two conveyors. A modular switching supervisor is designed to ensure the specifications in the normal mode are met. In cases of failure, the supervisor sends appropriate recovery commands so that the cell can complete its production cycle",2005,0,
1445,Reconfiguration of carrier-based modulation strategy for fault tolerant multilevel inverters,"This paper focuses on the fault tolerance of multilevel inverters with redundant switching states. The failure situations of the multilevel inverters are classified into two kinds according to the relationship between the output voltage level and the switching states. When some of the power devices fail, the gate signals are reconfigured according to the failure types. The method is discussed for the phase disposition PWM strategy (PDPWM) and the phase shifted PWM strategy (PSPWM) in the paper, and it can be extended easily to other carrier-based PWM strategies. The normal line-to-line voltage will be achieved with the proposed method when a device failure occurs. At the same time, the circuit structures are the same as the generally used ones and the voltage stress of the devices does not increase. The simulation and experimental results are included in the paper to verify the proposed method",2005,0,6778
1446,Fault-tolerant Control for Maglev Suspension System Based on Simultaneous Stabilization,"This paper focuses on the problem of fault tolerant control for a maglev suspension system based on simultaneous stabilization theory. Given two plants, which are linear models of the suspension system before and after the electromagnet failure respectively, we seek a single compensator that stabilizes both of them simultaneously. A systematic and simple linear control system design method for highly nonlinear plants is adopted.
The simulation for the single magnet model shows that the designed fault tolerant controller has better static and dynamic performance compared with a conventional PID controller. The method may provide valuable experience for research on maglev suspension control systems with more failures.",2007,0,
1447,A Systematic Approach of Improving a Fault Tolerant Fuel Cell Bus Powertrain Safety Level,"This paper focuses on the safety issue of a fuel cell bus powertrain system. It demonstrates a systematic way of evaluating and refining the safety level to increase confidence in a fuel cell bus powertrain system, and a specific approach of safety level evaluation is used. By functionally summing up the potential faults which may appear and cause economic losses or occupant injury, the deficiencies of the powertrain are located. Based on the occurrence, the severity and the detection possibility, numerical expressions of the fault properties are computed. By combining these evaluation results, a general safety level can be obtained, which directly reflects what should be adjusted or redesigned. Continuous improvement, including the passive method of hardware redundancy and the active methods of fault diagnosis, detection and fault tolerant control in the electric control system, is carried out, leading to a much more reliable system; the updated safety level shows the enhanced confidence in the fuel cell powertrain system. In order to get a more practical control system, a hardware-in-the-loop test bench is set up to test the performance of the controllers, the core of the powertrain system, under different working conditions, especially those in which faults appear. The actual execution of the approach shows that it is an objective analysis method for the innovative powertrain system and can help to improve the system reliability of the fuel cell bus systematically.",2006,0,
1448,The Design of Long-Distance Steam Turbine Generator Rotor Windings Inter-Turn Short Circuit Fault Diagnose System Based on VB.Net,"This paper presents an inter-turn short-circuit fault diagnosis system for steam turbine generator rotor windings, based on the fault criterion for on-line identification of rotor inter-turn short circuits. The framework of the diagnosis system is introduced, and then the long-distance rotor inter-turn short-circuit diagnosis system, which is based on the B/S model, is built by combining the Web services of VB.NET with SQL Server.",2008,0,
1449,Statements versus Predicates in Spectral Bug Localization,"This paper investigates the relationship between the use of predicate-based and statement-based program spectra for bug localization. Branch and path spectra are also considered. Although statement and predicate spectra can be based on the same raw data, the way the data is aggregated results in different information being lost. We propose a simple and cheap modification to the statement-based approach which retains strictly more information. This allows us to compare statement and predicate 'metrics' (functions used to rank the statements, predicates or paths). We show that improved bug localization performance is possible using single-bug models and benchmarks.",2010,0,
1450,Soft-error classification and impact analysis on real-time operating systems,"This paper investigates the sensitivity of real-time systems running applications under operating systems that are subject to soft errors.
We consider applications using different real-time operating system services: scheduling, time and memory management, intertask communication and synchronization. We report results of a detailed analysis regarding the impact of soft errors on real-time operating system cores, taking into account the application timing constraints. Our results show the extent to which soft errors occurring in a real-time operating system's kernel impact its reliability",2006,0,
1451,Analysis of Software Reliability Modeling Considering Testing Compression Factor and Failure-to-Fault Relationship,"This paper is an attempt to relax and improve the assumptions regarding software reliability modeling. To approximate reality much more closely, we take into account the concepts of testing compression factor and the quantified ratio of faults to failures in the modeling. Numerical examples based on real failure data show that the proposed framework has a fairly good prediction capability. Further, we also address the optimal software release time problem and conduct a detailed sensitivity analysis through the proposed model.",2010,0,
1452,Defect prevention through defect prediction: a case study at Infosys,"This paper is an experience report of a software process model which will help in preventing defects through defect prediction. The paper gives a vivid description of how the model aligns itself to business goals and also achieves various quality and productivity goals by predicting the number and type of defects well in advance, with corresponding preventive action taken to reduce the occurrence of defects. Data have been collected from the case study of a live project in INFOSYS Technologies Limited, India. A project team always aims at zero-defect software, or a quality product with as few defects as possible. To deliver defect-free software, it is imperative that during development the maximum number of defects are captured and fixed before delivery to the customer. In other words, our process model should help us detect the maximum number of defects possible through various Quality Control activities. Also, the process model should be able to predict defects and should help us to detect them quite early. Defects can be reduced in two ways: (i) by detecting them at each and every stage in the project life cycle, or (ii) by preventing them from occurring",2001,0,
1453,Variance error quantifications that are exact for finite model order,"This paper is concerned with the frequency domain quantification of noise induced errors in dynamic system estimates. Preceding seminal work on this problem provides general expressions that are approximations whose accuracy increases with observed data length and model order. In the interests of improved accuracy, this paper provides new expressions whose accuracy depends only on data length. They are therefore 'exact' for arbitrarily small true model order and apply to the general cases of output-error and Box-Jenkins model structures.",2003,0,
1454,A low-cost motion tracker and its error analysis,"This paper develops a physical model of an inertial/magnetic measurement unit by effectively integrating an accelerometer, a magnetometer, and two gyroscopes for low-g motion tracking applications. The proposed model breaks down the errors contributed by individual components, then determines error elimination methods based on sensor behavior and characteristics, and finally constructs a feedback loop for continuous self-calibration.
Measurement errors are reduced by adopting a systematic design methodology: 1) tilt errors are minimized through a careful selection of A/D converter resolution and by compensating for sensor bias and scale factor; 2) heading errors are reduced by cancelling out nearby ferrous distortions and applying tilt compensation to the magnetometer; 3) errors from gyroscope measurements are eliminated via the least squares algorithm and continuous corrections using orientation data at the steady-state position. Preliminary tests for low-g motion sensing show that, with the above-mentioned methods, the motion tracker can achieve accuracy better than ±0.5° in tilt and better than ±1° in yaw angle measurement.",2008,0, 1455,"Fault detection, isolation, and recovery using spline tools and differential flatness with application to a magnetic levitation system","This paper discusses fault detection and isolation for continuous-time systems using B-splines and the notion of differential flatness. The idea is that, from the system's flat outputs (which are obtained directly from measurement or from an observer), we algebraically produce every other measured signal, including the inputs. The corresponding signals are then compared. In the nominal condition, a measured signal and its counterpart derived from the flat outputs are similar up to noise and the filter's bandwidth. In the occurrence of faults, they are different. We then use this information to signify a fault and to compensate for it. The techniques used to produce signals from the flat outputs, and to filter out the noise, are based on a B-spline parametrisation.",2010,0, 1456,Automated fault analysis: From requirements to implementation,"This paper discusses the implementation of systems for automated processing and analysis of data recorded in substations during faults. Substation fault data are captured by various intelligent electronic devices (IEDs) such as digital fault recorders, digital protective relays, circuit breaker recorders, etc. Equipment in today's substations is provided by various vendors and comes in different vintages. System protection engineers, for example, have to deal with different IED product versions, diverse data collection and viewing software, and a variety of proprietary and non-proprietary data formats. This paper illustrates how to bring together internal needs and requirements defined by a given utility, available standards and recommendations developed by professional organizations, as well as externally imposed requirements, for example, standards defined by the North American Electric Reliability Corporation (NERC).",2009,0, 1457,Test limitations of parametric faults in analog circuits,"This paper investigates the detectability of parameter faults in linear, time-invariant, analog circuits and sheds new light on a number of very important test attributes. We show that there are inherent limitations with regard to analog fault detectability. It is shown that many parameter faults are undetectable irrespective of which test methodology is used to catch them. It is also shown that, in many cases, the detectable minimum-size parameter fault is considerably larger than the normal parameter drift. Sometimes the minimum-size detectable fault is two to five times the parameter drift. We show that one of the fault-masking conditions in analog circuits, commonly believed to be true, is, in fact, untrue. We illustrate this with a simple counterexample.
We also show that, in analog circuits, it is possible for a fault-free parameter to mask an otherwise detectable parametric fault. We define the small-size parameter fault coverage, and describe ways to calculate or estimate it. This figure of merit is especially suitable for characterizing test efficiency in the presence of small-size parameter faults. We further show that circuit specification requirements may be translated into parameter tolerance requirements. By doing so, a test for parametric faults can, indirectly, address circuit specification compliance. The test limitations of parametric faults in analog circuits are illustrated using numerous examples.",2003,0, 1458,Fault containment and error detection in the time-triggered architecture,"This paper investigates the fault-containment and error-detection mechanisms of distributed safety-critical time-triggered systems. The following critical failure modes of a fault-containment region are introduced and analyzed in detail: babbling idiot failures, masquerading failures, slightly-off-specification (SOS) failures, crash/omission (CO) failures, and massive transient disturbances. After a short description of the two time-triggered protocols TTP/C and FlexRay, this paper shows how these two protocols handle the listed failure modes at the architecture level.",2003,0, 1459,Design and implementation of one-terminal fault location system based on impedance-traveling wave assembled algorithm,"This paper introduces the design and implementation of a one-terminal fault location system for transmission lines based on an impedance-traveling wave assembled algorithm, which combines the measurements of the impedance method and the traveling wave method in a one-terminal fault location system for transmission lines. The assembled algorithm is described first. It benefits from the complementarity of both methods: the impedance method guarantees reliability, while the traveling wave method improves accuracy. The paper then describes how to implement the algorithm in a power system, covering the hardware and software of the fault location system: the main structure, the fault location device, the programming flows in the industrial computer and the high-speed card, data processing, and so on.",2009,0, 1460,Optimize defect detection techniques through empirical software engineering method,"This paper introduces twelve defect detection techniques and describes a non-controlled experiment on defect detection techniques to address the uncertainty of how to test embedded software and find defects effectively. In this non-controlled experiment, three common testing techniques were applied to a large-scale embedded system. This study is intended to evaluate different defect detection techniques that are actually used by software engineers, using an empirical software engineering method. The objective of empirical software engineering is to improve software development processes and quality. This can be done by evaluating, comparing and controlling defect detection methods. This study is also intended to find the best method to reduce defects and increase the defect detection rate in a large-scale embedded system, since defect detection is considered one of the most costly processes in the software development cycle.",2005,0, 1461,Dynamic Error Recovery in The ATLAS TDAQ System,"This paper describes the new dynamic recovery mechanisms in the ATLAS Trigger and Data AcQuisition (TDAQ) system.
The purpose is to minimize the impact that certain errors and failures have on the system. The new recovery mechanisms are capable of analysing and recovering from a variety of errors, both software and hardware, without stopping the data gathering operations. They incorporate an expert system to perform the analysis of the errors and to decide what measures are needed. Due to the wide array of sub-systems, there is also a need to optimize the way similar errors are handled for the different sub-systems. The main focus of the paper is the design and implementation of the new recovery mechanisms and how expert knowledge is gathered from the different sub-systems and implemented in the recovery procedures.",2007,0, 1462,"Stator winding fault diagnosis in three-phase synchronous and asynchronous motors, by the extended Park's vector approach","This paper describes the use of the extended Park's vector approach (EPVA) for diagnosing the occurrence of stator winding faults in operating three-phase synchronous and asynchronous motors. The major theoretical principles related to the EPVA are presented, and it is shown how stator winding faults can be effectively diagnosed by the use of this noninvasive approach. Experimental results, obtained in the laboratory, corroborate that these faults can be detected, in the EPVA signature, by the identification of a spectral component at twice the fundamental supply frequency. On-site tests, conducted in a power generation plant and in a cement mill, demonstrate the effectiveness of the EPVA in the detection of these faults in large industrial motors, rated up to 5 MW.",2001,0, 1463,Fault Detection of Railway Vehicles Using Multiple Model Approach,"This paper describes an estimation algorithm for fault detection in railway vehicles. The algorithm is formulated based on the interacting multiple-model (IMM) algorithm. The IMM algorithm, which chooses the most probable model from a number of models, is applied to fault detection. In the IMM method, changes in the system's structure and parameters are called modes. We provide several suspension failure modes and sensor failure modes for fault detection. The mode probabilities and the states of the vehicle suspension are estimated based on a Kalman filter (KF). The algorithm is evaluated in simulation examples. Simulation results show that the algorithm is effective for on-board fault detection of railway vehicle suspensions.",2006,0, 1464,Six-phase brushless DC motor for fault tolerant electric power steering systems,"This paper is focused on the development of a multiphase fault-tolerant BLDC machine, oriented towards the stator windings, the number of phases and the commutation sequences. As a first step, a six-phase BLDC machine is modeled and simulated, using JMAG-Studio software, for the no-load regime. The induced emfs, cogging torque, and magnetic field density map and distribution are processed. The simulation results, in terms of induced emfs, make it possible to develop, in the second step, the commutation sequences needed to obtain optimal torque quality.",2007,0, 1465,The simulation of single-phase earthed fault in neutral ineffective grounding system based on MATLAB,"This paper mainly uses MATLAB software to simulate a neutral ineffectively earthed power system. It focuses on how to establish the Simulink model and obtain results for the different earthed-fault conditions that affect the zero-sequence current and the three-phase voltages.
By comparing the zero-sequence currents, the faulted line can be selected.",2010,0, 1466,Requirement Error Abstraction and Classification: A Control Group Replicated Study,"This paper is the second in a series of empirical studies about requirement error abstraction and classification as a quality improvement approach. The requirement error abstraction and classification method supports the developers' effort in efficiently identifying the root cause of requirements faults. By uncovering the source of faults, the developers can locate and remove additional related faults that may have been overlooked, thereby improving the quality and reliability of the resulting system. This study is a replication of an earlier study that adds a control group to address a major validity threat. The approach studied includes a process for abstracting errors from faults and provides a requirement error taxonomy for organizing those errors. A unique aspect of this work is the use of research from human cognition to improve the process. The results of the replication are presented and compared with the results from the original study. Overall, the results from this study indicate that the error abstraction and classification approach improves the effectiveness and efficiency of inspectors. The requirement error taxonomy is viewed favorably and provides useful insights into the source of faults. In addition, human cognition research is shown to be an important factor that affects the performance of the inspectors. This study also provides additional evidence to motivate further research.",2007,0, 1467,A case for fault tolerance and performance enhancement using chip multi-processors,"This paper makes a case for using multi-core processors to simultaneously achieve transient-fault tolerance and performance enhancement. Our approach is extended from a recent latency-tolerance proposal, dual-core execution (DCE). In DCE, a program is executed twice in two processors, named the front and back processors. The front processor pre-processes instructions in a very fast yet highly accurate way, and the back processor re-executes the instruction stream retired from the front processor. The front processor runs faster as it has no correctness constraints, whereas its results, including timely prefetching and prompt branch misprediction resolution, help the back processor make faster progress. In this paper, we propose to entrust the speculative results of the front processor and use them to check the un-speculative results of the back processor. A discrepancy, either due to a transient fault or a misspeculation, is then handled with the existing misspeculation recovery mechanism. In this way, both transient-fault tolerance and performance improvement can be delivered simultaneously with little hardware overhead.",2006,0, 1468,AccMon: Automatically Detecting Memory-Related Bugs via Program Counter-Based Invariants,"This paper makes two contributions to architectural support for software debugging. First, it proposes a novel statistics-based, on-the-fly bug detection method called PC-based invariant detection. The idea is based on the observation that, in most programs, a given memory location is typically accessed by only a few instructions. Therefore, by capturing the invariant of the set of PCs that normally access a given variable, we can detect accesses by outlier instructions, which are often caused by memory corruption, buffer overflow, stack smashing or other memory-related bugs.
Since this method is statistics-based, it can detect bugs that do not violate any programming rules and that, therefore, are likely to be missed by many existing tools. The second contribution is a novel architectural extension called the Check Look-aside Buffer (CLB). The CLB uses a Bloom filter to reduce monitoring overheads in the recently-proposed iWatcher architectural framework for software debugging. The CLB significantly reduces the overhead of PC-based invariant debugging. We demonstrate a PC-based invariant detection tool called AccMon that leverages architectural, run-time system and compiler support. Our experimental results with seven buggy applications and a total of ten bugs show that AccMon can detect all ten bugs with few false alarms (0 for five applications and 2-8 for two applications) and with low overhead (0.24-2.88 times). Several existing tools evaluated, including Purify, CCured and value-based invariant detection tools, fail to detect some of the bugs. In addition, Purify's overhead is one order of magnitude higher than AccMon's. Finally, we show that the CLB is very effective at reducing overhead.",2004,0, 1469,Pedagogic data as a basis for Web service fault models,"This paper outlines our method for deriving fault models for use with our WS-FIT tool that can be used to assess the dependability of SOA. Since one of the major issues with extracting these heuristic rules and fault models is the availability of software systems, we examine the use of systems constructed through pedagogic activities to provide one source of information.",2005,0, 1470,The analysis for roundness error of revolution body based on hypothesized instrument,"This paper presents an analysis system for the roundness error of a revolution body based on a virtual instrument. Since the signal can be corrupted by outside interference, the paper uses a wavelet denoising algorithm and the 53H algorithm, implemented with LabVIEW and MATLAB, to perform error separation. Following the minimum zone method for roundness error, the paper adopts real-number coding to evaluate the roundness error with an improved genetic algorithm, which carries out directed mutation and adaptively changes the mutation step size according to the trend of the search points' fitness values. The method achieves high precision and has good robustness.",2010,0, 1471,Poisonedwater: an adaptive approach to reducing the reputation ranking error in P2P networks,"This paper preliminarily proposes a reputation ranking algorithm called ""Poisonedwater"" to resist the front peer attack - peers that gain high reputation values by always cooperating with other peers and then promote their malicious friends by passing most of their reputation values to those malicious peers. Specifically, we introduce the notion of Poisoned Water (PW), which iteratively floods from identified malicious peers in the reverse direction of the incoming trust links towards other peers. Furthermore, we propose the concept of a spreading factor (SF) that is logistically correlated to each peer's PW level. Then, we design the new reputation ranking algorithm, seamlessly integrated with peers' recommendation ability (represented as SF), to infer a more accurate reputation ranking for each peer.
Simulation results show that, in comparison with EigenTrust, Poisonedwater can significantly reduce the ranking error ratio by up to 20% when P2P systems contain many malicious peers and front peers.",2009,0, 1472,Application of SANTAD in network monitoring and fault detection of FTTH-PON,"This paper presents a new approach to monitoring network performance and detecting any occurrence of faults in a fiber-to-the-home passive optical network (FTTH-PON) using an efficient monitoring system named Smart Access Network Testing, Analyzing and Database (SANTAD). An optical time domain reflectometer (OTDR) with a 1625 nm laser source is located at the central office (CO) and connected to a remote personal computer (PC) via an Ethernet TCP/IP connection. A tapper circuit is designed to allow the OTDR testing signal to bypass the optical splitter in a conventional FTTH-PON when emitted in the downstream direction. The key idea is to accumulate all OTDR measurements to be displayed on a PC screen for centralized monitoring and advanced data analysis. The main advantage of this work is to improve the survivability and efficiency of FTTH-PON while reducing the hands-on workload as well as maintenance cost and time. The proposed system was implemented in an FTTH-PON network testbed, and the experimental results showed that the proposed system achieves high accuracy in addressing the exact failure location.",2009,0, 1473,A 32-bit COTS-based fault-tolerant embedded system,"This paper presents a 32-bit fault-tolerant (FT) embedded system based on commercial off-the-shelf (COTS) processors. This embedded system uses two 32-bit Pentium processors with a master/checker (M/C) configuration and an external watchdog processor (WDP) for implementing a behavioral-based error detection scheme called committed instructions counting (CIC). The experimental evaluation was performed using both power-supply disturbance (PSD) and software-implemented fault injection (SWIFI) methods. A total of 9000 faults have been injected into the embedded system to measure the coverage of the error detection mechanisms, i.e., the checker processor and the CIC scheme. The results show that the M/C configuration is not enough for this system and that the CIC scheme can cover the limitations of the M/C configuration.",2005,0, 1474,On-Line Defect Detection in Web Offset Printing,"This paper presents a detailed description of a vision system developed to detect and locate the non-uniformities that appear on a web offset printing machine. Specifically, the system is capable of monitoring high-speed web offset printing in a real-time environment and alerting the operator of any events (e.g., structural faults, color variations, missing characters, ink splashes, streaks, etc.) that disrupt the uniformity of the web offset printing. Such events are thought to affect crucial printing properties, resulting in non-uniformities in printing and impacting its quality and printability. This paper describes the vision system in terms of its hardware modules, as well as the image processing algorithms that it utilizes to scan the color images and locate areas of defect on the printing web. Basically, the system utilizes high-speed image scanning algorithms to detect edges and boundaries using linear and non-linear filters of dynamic size, threshold and transformation for further analysis.
In addition to being tested in a laboratory environment, a prototype of this system was constructed and deployed to a web printing system, where its performance was evaluated under realistic conditions. The system was installed on a Flexo gravure print-press machine for testing, and it was found that the vision system was able to successfully monitor and detect non-uniformities.",2003,0, 1475,SIFU!-a didactic stuck-at fault simulator,"This paper presents a didactic simulator for stuck-at (sa) faults in logic circuits. The tool has a set of features that helps users understand the concepts of single and multiple stuck-at faults, whether these faults are testable or not, and how to generate test vectors in order to test the detectable fault subset. An interface was developed to allow the editing of a circuit, the injection of faults and the fault simulation. The tool performs two simulations concurrently, one for the original circuit and another for the faulty circuit considering the injected faults. When the two simulations differ for a given input vector, the tool shows the error (detection of the fault) graphically.",2003,0, 1476,Fault ride-through capability improvement of wind farms using doubly fed induction generator,"This paper presents a doubly fed induction generator in a sample distribution network. The grid consists of two distributed generation (DG) units. DG1 is a 5 MVA synchronous generator with its excitation and governor systems. DG2 is a doubly fed induction generator (DFIG) wind turbine, which encompasses a wound rotor induction generator and a partial-scale power electronic converter acting as grid-side and rotor-side converters. The simulation is performed in the PSCAD/EMTDC software. The objective is to keep the DFIG connected to the grid in abnormal conditions, which is called fault ride-through capability. An active diode bridge crowbar switch is proposed, and the effects of its parameters on the dynamic response of the DFIG are investigated. Simulation results show that well-chosen crowbar parameters maintain the stability of the DFIG and improve its fault ride-through capability.",2008,0, 1477,A Family of Electronic Ballasts Integrating Power Factor Correction and Power Control Stages to Supply HPS Lamps,"This paper presents a family of high power factor electronic ballasts applied to public lighting systems. A flyback, buck-boost, boost or SEPIC converter is employed in the power factor correction stage, integrated with the power control stage through a single active switch. The use of a half-bridge inverter to supply the lamp becomes possible through the employment of a flyback converter in the lamp power control stage. The lamp is supplied with a low-frequency square voltage waveform in order to guarantee safe lamp operation with regard to the acoustic resonance phenomenon. The presented solutions for supplying HPS lamps have the advantages of low cost and simplicity. The shared-switch characteristics are analyzed and discussed in this work. A comparative analysis among the presented electronic ballasts is performed.",2006,0, 1478,FPGA based real time fuzzy fault detection algorithm,"This paper presents a fast fuzzy algorithm, implemented using a field-programmable gate array (FPGA), to detect stator-related faults in induction motors. An Altera Cyclone III FPGA was employed for developing the proposed method. The fuzzy algorithm was coded entirely in the VHSIC hardware description language (VHDL). The fuzzy system has three inputs and one output.
The magnitudes of the three-phase current signals were given to the fuzzy system, and stator-related faults were detected. The validity of the proposed method was tested on an experimental data set and was also compared to the results of the MATLAB Fuzzy Logic Toolbox. The experimental results show that the proposed method gives very efficient and reliable results for fault detection in induction motors.",2010,0, 1479,Fault restoration algorithm using fast tracing technique based on the tree-structured database for the distribution automation system,"This paper presents a fast tracing algorithm that adopts the LC/RS (left child/right sibling) tree structure for the database. In a distribution automation system (DAS), fast tracing of network connectivity is a vital issue for application to large distribution systems. In this study, a tree-structured database has been adopted that uses nondirectional rather than directional data. The tree-structured database speeds up the tracing algorithm through its systematic search engine. The features of the proposed algorithm are (i) fast network tracing, (ii) convenience in system data management, and (iii) convenient and fast modification of system data due to network changes.",2000,0, 1480,Errors on Space Software Requirements: A Field Study and Application Scenarios,"This paper presents a field study on real errors found in space software requirements documents. The goal is to understand and characterize the most frequent types of requirement problems in this critical application domain. To classify the software requirement errors analyzed, we initially used a well-known existing taxonomy that was later extended in order to allow a more thorough analysis. The results of the study show a high rate of requirement errors (9.5 errors per 100 requirements), which is surprising if we consider that the focus of the work is critical embedded software. Besides the characterization of the most frequent types of errors, the paper also proposes a set of operators that define how to inject realistic errors in requirement documents. This may be used in several scenarios, including: evaluating and training reviewers, estimating the number of requirement errors in real specifications, defining checklists for quick requirement verification, and defining benchmarks for requirements specifications.",2010,0, 1481,A general method for calculating error probabilities over fading channels,"This paper presents a general method for calculating the average error rates and outage performance of a broad class of coherent, differentially coherent and noncoherent communication systems, with or without diversity reception, in a myriad of fading environments. Unlike the moment generating function (MGF) technique, the proposed characteristic function (CHF) method based on Parseval's theorem enables us to unify the average error rate analysis of different modulation formats and all commonly used predetection diversity techniques (i.e., maximal-ratio combining (MR), equal-gain combining (EG), selection diversity (SD), switched diversity (SW) and hybrid diversity systems) in a single common framework. The CHF method also lends itself to the averaging of the conditional error probability (CEP) involving the complementary incomplete Gamma function and the confluent hypergeometric function over fading amplitudes, which heretofore resisted a simple solution.
As an aside, we show previous results to be special instances of our unified framework.",2000,0, 1482,A Fault-Tolerant Communication Algorithm of ARINC429 Based on Hybrid Redundancy,"This paper presents a hybrid redundancy algorithm to improve the fault tolerance of ARINC429, which combines hardware redundancy, using dual channels in the physical layer, with information redundancy, using a cyclic redundancy check (CRC) in the data link layer. The concept of failover delay is introduced to analyze the error detection capability of this algorithm. The algorithm is validated through fault injection, which proves that it can reduce the failover delay to 18 ms in a practical system and thoroughly prevent the undetected errors with an even number of error bits that parity checking would allow.",2009,0, 1483,Approach to Fault Identification for Electronic Products Using Mahalanobis Distance,"This paper presents a Mahalanobis distance (MD) based diagnostic approach that employs a probabilistic approach to establish thresholds to classify a product as being healthy or unhealthy. A technique for detecting trends and bias in system health is presented by constructing a control chart for the MD value. The performance parameters' residuals, which are the differences between the estimated values (from an empirical model) and the observed values (from health monitoring), are used to isolate parameters that exhibit faults. To aid in the qualification of a product against a specific known fault, we suggest that a fault-specific threshold MD value be defined by minimizing an error function. A case study on notebook computers is presented to demonstrate the applicability of this proposed diagnostic approach.",2010,0, 1484,Optimal Method for Electronic System Partition in Order to Implement the Fault Protective Redundancy,This paper presents a method for identifying the optimal level of system partitioning in order to maximize reliability for a given level of protective redundancy in a digital system,2006,0, 1485,Preprocessing Correction for Micronucleus Image Detection Affected by Contemporaneous Alterations,"This paper presents a method that detects and corrects alterations of (1) exposure, (2) defocus, and (3) Gaussian noise affecting, contemporaneously, the images that are acquired in flow cytometer measurement devices. These alterations reduce image quality and interfere with correct micronucleus (MN) detection in a lymphocyte. The objectives of the proposed correction are as follows: (1) to correctly process the image with the pattern-matching algorithm in order to detect the MN in human lymphocytes; (2) to minimize doubtful detections; and (3) to enhance the confidence that, in rejected images, MNs cannot be detected. Numerical and experimental tests confirm the validity of the proposed correction method and permit the evaluation of the upper and lower bounds of the admissible variation range of each alteration.",2007,0,6571 1486,Application of non-parametric statistics of the parametric response for defect diagnosis,"This paper presents a method using only the rank of the measurements to separate a part's elevated response to parametric tests from its non-elevated response. The effectiveness of the proposed method is verified on a 130nm ASIC. Good-die responses are correlated for the same parametric tests at different conditions such as temperature, voltage, and/or other stress.
Nonparametric correlation methods are used to calculate the intra-die correlation. When the intra-die correlation is found to be low, the elevated vectors that lower the correlation are extracted and input to IDDQ-based diagnostic tools. Monte-Carlo simulations are described to obtain confidence bounds on the correlation for good-die test responses.",2009,0, 1487,Automatic Diagnosis of Defects of Rolling Element Bearings Based on Computational Intelligence Techniques,"This paper presents a method, based on classification techniques, for automatic detection and diagnosis of defects of rolling element bearings. We used vibration signals recorded by four accelerometers on a mechanical device including rolling element bearings: the signals were collected both with all faultless bearings and after substituting one faultless bearing with an artificially damaged one. We considered four defects and, for one of them, three severity levels. In all the experiments performed on the vibration signals represented in the frequency domain, we achieved a classification accuracy higher than 99%, thus proving the high sensitivity of our method to different types of defects and to different degrees of fault severity. We also assessed the degree of robustness of our method to noise by analyzing how the classification performance varies with the signal-to-noise ratio, using statistical classifiers and neural networks. We achieved very good levels of robustness.",2009,0, 1488,Characterization of wavelet-based image coding systems for algorithmic fault detection,"This paper presents a methodology for characterizing the behaviour of wavelet-based image coding systems in the presence of faults. This is a preliminary step in the development of efficient concurrent error detection techniques for such systems. The faulty behaviour of complex signal processing systems is better described at the algorithmic level (i.e., checking the accomplishment of a given functional property by large blocks of data) rather than using the 'classical' approach at the structural (i.e., building block) level. Therefore, the issues related to algorithmic fault detection are addressed. Two different platforms for error characterization are presented and their main characteristics are discussed. Experimental results are presented that prove the suitability of the proposed methodology for the target application.",2005,0, 1489,Fuzzy mapping of human heuristics for defect classification in gas pipelines using ultrasonic NDE,"This paper presents a methodology for classifying the common defects in steel pipelines for transporting petroleum and gas. Usually, the nondestructive evaluation (NDE) experts in industry judge the defect type by mere observation which, based on experience, may or may not be correct. The proposed methodology attempts to map this heuristic understanding from the shape of the defect waveforms (A-scans) obtained using ultrasonic sensors, with the help of fuzzy logic and fuzzy set associations. As such, a subset of features was selected for a set of commonly occurring defects, and a fuzzy inference system was then generated using heuristic rules to classify the defect. The initial tests have shown a success rate of over 90%, which is promising for further investigation.",2007,0, 1490,A Multi-agent System for Complex Vehicle Fault Diagnostics and Health Monitoring,"This paper presents a multi-agent system (MAS_VFD&HM) developed for complex vehicle fault diagnosis and health monitoring.
The MAS_VFD&HM consists of signal diagnostic agents, special case agents, and a vehicle diagnostic/monitoring agent. A signal agent is responsible for the fault diagnosis or monitoring of one particular signal, using either a single signal or multiple signals depending on the complexity of the signal faults. Special case agents are those trained to detect specific component faults. All these agents are autonomous and report their results to the vehicle system agent. A computational framework is presented for agent learning and agent operation. The proposed MAS_VFD&HM is scalable and versatile, and has the capability of dealing with complex problems such as multiple faults in a vehicle system. Although our focus was on automotive diagnostics, the proposed MAS_VFD&HM is applicable to complex engineering diagnostic problems beyond vehicles.",2010,0, 1491,New method for current and voltage measuring offset correction in an induction motor sensorless drive,"This paper presents a new algorithm for electromagnetic torque and flux estimation in a sensorless drive when uncompensated dc offsets of the current and/or voltage sensors are present. The novel feature of the offset error correction algorithm is that it attempts not to eliminate the consequences of the problem but to identify its source. The algorithm uses the first harmonic of the estimated torque and the dc value of the estimated stator flux to identify the source and value of the current and/or voltage offset error. The identified values can be used for offset cancellation, which improves the estimation process.",2010,0, 1492,New Method for Discrimination of Transformers Internal Faults from Magnetizing Inrush Currents Using Wavelet Transform,"This paper presents a new algorithm for transformer differential protection, based on pattern recognition of the instantaneous differential currents. A decision logic based on the wavelet transform has been devised using features extracted from the differential currents due to internal faults and inrush currents. In this logic, the diagnosis criterion is based on the time difference of the amplitudes of the wavelet coefficients over a specified frequency band. The proposed algorithm is evaluated using various simulated inrush and internal fault current cases on a power transformer that has been modeled using Electromagnetic Transients Program software. The results of the evaluation study show that the proposed wavelet-based differential protection scheme can discriminate internal faults from inrush currents in less than 5 ms.",2008,0, 1493,Virtual instrumentation and its application in diagnosis of faults in power transformers,"This paper presents a new approach to detect and localize winding insulation failures and to investigate the feasibility of identifying them. The diagnosis is based on the time-frequency analysis of signals recorded during lightning impulse tests. The virtual instrument is implemented with an acquisition board inserted into a PC and with software developed with LabVIEW tools, which samples the voltage and current signals and furnishes the extent of the insulation failure. The acquired signal is decomposed using multiresolution signal decomposition techniques to detect and localize the time instant of occurrence of the fault.",2000,0, 1494,Combined fuzzy-logic wavelet-based fault classification technique for power system relaying,"This paper presents a new approach to real-time fault classification in power transmission systems using a fuzzy-logic-based multicriteria method.
Only the three line currents are utilized to detect fault types such as LG, LL, and LLG, and then to identify the faulted line. An online wavelet-based preprocessor stage is used with a data window of ten samples (based on a 4.5-kHz sampling rate and 50-Hz power frequency). The multicriteria algorithm is developed based on fuzzy sets for the decision-making part of the scheme. Computer simulation has been conducted using EMTP programs. The results indicate that this approach can be used as an effective tool for high-speed digital relaying, as correct detection is achieved in less than half a cycle and the computational burden is much lighter than that of recently postulated fault classification techniques.",2004,0, 1495,Accurate CMOS bridge fault modeling with neural network-based VHDL saboteurs,"This paper presents a new bridge fault model that is based on a multiple layer feedforward neural network and implemented within the framework of a VHDL saboteur cell. Empirical evidence and experimental results show that it satisfies a prescribed set of bridge fault model criteria better than existing approaches. The new model computes exact bridged node voltages and propagation delay times with due attention to surrounding circuit elements. This is significant since, with the exception of full analog simulation, no other technique attempts to model the delay effects of bridge defects. Yet, compared to these analog simulations, the new approach is orders of magnitude faster and achieves reasonable accuracy; computing bridged node voltages with an average error near 0.006 volts and propagation delay times with an average error near 14 ps.",2001,0, 1496,Local/global fault diagnosis of event-driven controlled systems based on probabilistic inference,"This paper presents a new local/global fault diagnosis strategy for event-driven controlled systems such as the programmable logic controller (PLC). First of all, the controlled plant is decomposed into subsystems, and the global diagnosis is formulated using a Bayesian network (BN), which represents the causal relationship between faults and observations in the subsystems. Second, the local diagnoser is developed using the conventional timed Markov model (TMM), and the local diagnosis results are used to specify the conditional probability assigned to each arc in the BN. By exploiting the local/global diagnosis architecture, the computational burden of the diagnosis can be drastically reduced. As a result, large-scale diagnosis problems arising in practical situations can be solved. Finally, the usefulness of the proposed strategy is verified through experimental results from an automatic transfer line.",2007,0, 1497,Bayesian Calibration of a Lookup Table for ADC Error Correction,"This paper presents a new method for the correction of nonlinearity errors in analog-to-digital converters (ADCs). The method has been designed to allow self-calibration in systems where an internal signal can be generated, such as base stations for mobile communications. The method has been implemented and tested in simulation on the behavioral model of commercial ADCs and on a hardware setup composed of a data acquisition board and a distorting circuit.",2007,0, 1498,Thermal error modeling and compensation of spindles based on LS-SVM,"This paper presents a new modeling methodology for machine tool thermal error.
The method uses the least squares support vector machine (LS-SVM) model to track nonlinear, time-varying spindle thermal error under certain conditions. Experiments on spindle thermal deformation are conducted to evaluate the model performance in terms of estimation accuracy and robustness. The comparison indicates that the LS-SVM performs better than other modeling methods, such as multi-variable least squares regression analysis, in terms of model accuracy and robustness. Using the constructed thermal error model, the thermal deformation can be compensated. After compensation, the machine tool accuracy improves greatly.",2006,0, 1499,An integrated defect tracking model for product deployment in telecom services,"This paper presents a new reliability model for the time series of failures uncovered during testing of software products. The time series can be either an S-shaped curve or a slowing exponential. The model is statistically consistent; the variance of the estimated number of failures tends to zero as the test time increases. This model can be used for managing risks in the deployment of new products that support highly reliable telecommunication services.",2005,0, 1500,A new soft IP core for online-testing and fault-tolerant structures,"This paper presents a new soft IP core for constructing online-testing and fault-tolerant structures, designs some example structures with commonly-used combinational circuits as the circuits under test (CUTs), and analyzes the performance of these CL-ACL examples from an industrial perspective in terms of area cost, power consumption, and timing. The results show that CL-ACL can be used to balance area cost and reliability among online-testing and fault-tolerant structures, but its power consumption is rather high; the timing of a CL-ACL structure is determined by the CUT and must be set separately for different CUTs.",2007,0, 1501,Fault location of HV teed feeder based on synchronized voltage measurement and smooth support vector machines,"This paper presents a new technique for accurate fault location on an HV teed-feeder transmission line based on synchronized voltage measurements and smooth support vector machines (SSVM). The approach consists of detecting the faulted branch, classifying the fault type and determining the exact fault location. Post-fault measured voltage waveforms are collected from only two ends of the three-branch teed-feeder system. SSVM classification and regression are applied for training, testing and validation on the faulted-waveform data set, leading to the exact fault location on the system. Several fault conditions are analyzed, trained, tested and validated. The proposed technique is tested and found insensitive to variations of different parameters such as fault type, fault resistance and fault inception angle. The ATP-EMTP program is used to simulate fault data for a 275 kV teed-feeder transmission system.",2010,0, 1502,A new technique for automatic error correction in sigma-delta modulators,"This paper presents, for the first time, a new technique for automatic error correction in sigma-delta (ΣΔ) converters.
Especially intended for the detection and correction of integrator gain errors in cascaded continuous-time ΣΔ modulators, the technique can also be used to correct errors induced by finite opamp gain-bandwidth (GBW), and can be adapted to correct the same errors in discrete-time, switched-capacitor implementations.",2005,0, 1503,Speech Enhancement Based on Minimum Mean-Square Error Estimation and Supergaussian Priors,"This paper presents a class of minimum mean-square error (MMSE) estimators for enhancing short-time spectral coefficients of a noisy speech signal. In contrast to most of the presently used methods, we do not assume that the spectral coefficients of the noise or of the clean speech signal obey a (complex) Gaussian probability density. We derive analytical solutions to the problem of estimating discrete Fourier transform (DFT) coefficients in the MMSE sense when the prior probability density function of the clean speech DFT coefficients can be modeled by a complex Laplace or by a complex bilateral Gamma density. The probability density function of the noise DFT coefficients may be modeled either by a complex Gaussian or by a complex Laplacian density. Compared to algorithms based on the Gaussian assumption, such as the Wiener filter or the Ephraim and Malah (1984) MMSE short-time spectral amplitude estimator, the estimators based on these supergaussian densities deliver an improved signal-to-noise ratio.",2005,0, 1504,Combined Wavelet-SVM Technique for Fault Zone Detection in a Series Compensated Transmission Line,"This paper presents a combined wavelet-support vector machine (SVM) technique for fault zone identification in a series compensated transmission line. The proposed method uses the samples of the three line currents for one cycle duration to accomplish this task. Initially, the features of the line currents are extracted by first-level decomposition of the current samples using the discrete wavelet transform (DWT). Subsequently, the extracted features are applied as inputs to an SVM for determining the fault zone (whether the fault is before or after the series capacitor, as observed from the relay point). The feasibility of the proposed algorithm has been tested on a 300-km, 400-kV series compensated transmission line for all ten types of faults through detailed digital simulation using PSCAD/EMTDC. Upon testing on more than 25000 fault cases with varying fault resistance, fault inception angle, prefault power transfer level, percentage compensation level, and source impedances, the performance of the developed method has been found to be quite promising.",2008,0, 1505,Fault-Tolerant Operating Strategies Applied to Three-Phase Induction-Motor Drives,"This paper presents a comparative analysis of several fault-tolerant operating strategies, applied to three-phase induction-motor drives, that are intended to compensate for inverter faults. The results presented show the advantages and drawbacks of several fault-tolerant drive structures under different control techniques, such as field-oriented control and direct torque control.
Experimental results concerning the performance of the three-phase induction motor, based on the analysis of some key parameters, such as induction-motor efficiency, motor power factor, and harmonic distortion of both motor line currents and phase voltages, are presented.",2006,0, 1506,A compensation method of the errors in palletizing work cell on the conversion from an off-line generated program to a real job program,"This paper presents a compensation method for the considerable pose errors that arise when a simulated program is used for a real palletizing work cell. Since most robot poses are nowadays generated by off-line programming (OLP) software, it is very important to compensate for these errors. To show the severity of the errors, they are first examined without any compensation. To compensate for the errors, a palletizing-application-oriented method is presented here. It extracts the errors between the simulation and the real cell and builds a transformation matrix for the compensation. The errors are minimized by the proposed geometric method. Also, each position after the compensation is checked for collision-free operation using an algorithm in the controller. Finally, this method is applied to a real palletizing application and its validity is shown.",2010,0, 1507,Simulation of single-phase nonlinear and hysteretic transformer with internal faults,"This paper presents a complete scheme for the simulation of single-phase transformers with internal faults. Traditional methods may not be effective in displaying the hysteretic characteristics of the transformer core. The approach in this paper uses the transmission line method (TLM), incorporating the Jiles-Atherton model to introduce nonlinear hysteresis. A small 25 kVA 11 kV/220 V power transformer with a turn-to-earth fault is simulated by applying this scheme. Comparison of the results shows their consistency, and total harmonic distortion (THD) analysis is used to investigate the nonlinear hysteretic features of the terminal voltages and currents when a fault occurs.",2006,0, 1508,Robust Register Caching: An Energy-Efficient Circuit-Level Technique to Combat Soft Errors in Embedded Processors,"This paper presents a cost-efficient technique to jointly use circuit- and architecture-level techniques to protect an embedded processor's register file against soft errors. The basic idea behind the proposed technique is robust register caching (RRC), which creates a cache of the most vulnerable registers within the register file in a small and highly robust cache memory built from circuit-level single-event-upset-protected memory cells. To guarantee that the most vulnerable registers are always stored in the robust register cache, the average number of read operations during a register's lifetime is used as a metric to guide the cache replacement policy. A register is vulnerable to soft errors when it holds a value that will be used in subsequent cycles. Consequently, while a register value is stored in the register cache, it is robust against single- and multiple-bit upsets. To minimize the power overhead of the RRC, the clock-gating technique is efficiently exploited by the main register file, resulting in significantly reduced power consumption. The RRC was experimentally evaluated using the LEON processor for two benchmarks, namely, the MiBench embedded benchmark suite and the SPEC CPU2006 general-purpose benchmark.
Our experimental results show that if the cache size is selected appropriately, the architectural vulnerability factor (AVF) of the register file is significantly reduced while also offering the benefits of low power, area, and performance overheads.",2010,0, 1509,"A stamping technique to increase the error correction capacity of the (127,k,d) RS code","This paper presents a study of a stamping technique which allows an increase in the error-correction capacity of the (127,k,d) Reed-Solomon code. The decoding algorithm works on errors, on erasures, or both. Furthermore, error detection is still possible when the correction capacity of the RS code is exceeded. The error-correction capacity has been evaluated for different Hamming distances d for the (127,k,d) stamped RS code. The evaluation was done through simulations using binary files representing real data from diversified environments. The simulation results are in good agreement with analytical results.",2000,0, 1510,Fault detection of eccentricity by means of joint time-frequency analysis in PMSM under dynamic conditions,"This paper presents a study of eccentricity fault detection in permanent magnet synchronous machines (PMSM) under dynamic conditions. The fault simulation was performed by means of two-dimensional (2-D) finite element analysis (FEA). Joint time-frequency transforms, such as the Wigner-Ville distribution (WVD) and the Zhao-Atlas-Marks distribution, were proposed for signal analysis. The simulations carried out were compared with experimental results.",2007,0, 1511,Fault detection by means of Hilbert Huang Transform of the stator current in a PMSM with demagnetization,"This paper presents a study of the permanent magnet synchronous motor (PMSM) running under demagnetization. The simulation has been carried out by means of two-dimensional (2-D) finite element analysis (FEA), and the simulation results were compared with experimental results. The demagnetization fault is analyzed by means of the decomposition of stator currents obtained at different speeds. The Hilbert-Huang transform (HHT) is used as the processing tool. This transformation represents time-dependent series in a two-dimensional (2-D) time-frequency domain by extracting instantaneous frequency components within the signal through an empirical mode decomposition (EMD) process.",2007,0,8617 1512,"A Systematic (48, 32) Code for Correcting Double Errors and Detecting Random Triple Errors","This paper presents a construction method for a systematic (48, 32) code and gives an illustration of the method. The systematic (48, 32) code, with k = 32 data bits and r = 16 check bits, can correct double errors and detect all random triple errors, and is suitable for application to computer memories and digital communication.",2008,0, 1513,Improved Algorithm for Detection of Self-clearing Transient Cable Faults,"This paper presents a transient cable fault detection technique for typical medium voltage cables. The subject of transient cable failures is first introduced, together with an illustration of the typical fault characteristic. A power system test model was developed to replicate the physical phenomena, and it is shown to correlate well with physical recordings. This model is very useful for validating the detection technique, as it allows easy variation of the fault duration.
Studies show that the technique is able to offer a number of significant advantages for distribution systems, especially as an early warning system.",2008,0, 1514,Efficient stimuli generators for detection of path delay faults,"This paper presents a way to construct accumulator-based test vector generators intended for efficient detection of path delay faults. Experiments conducted using our path delay fault simulator, GFault, show that our proposed generator can give as much as a 30× reduction in test time for circuits in the ISCAS85 benchmark suite compared to an accumulator-based pseudo-random generator.",2005,0, 1515,Real-time simulation of critical evolving fault condition on a 500 kV transmission network for testing of high performance protection relays,"This paper presents advanced digital simulation studies on a comprehensive EHV transmission line system with the aim of validating high-performance numeric distance relays. A 500 kV analogue transmission line system specially designed for the evaluation of power system protection and control systems and devices is first introduced. A real-time digital simulator (RTDS) system used to model and simulate the analogue system is then introduced, which is the main part of the paper. Special emphasis is placed on the utilisation of the newly developed advanced feature of the RTDS to suit the purposes of this application. Extensive simulation studies proved that the digital modelling and simulation is in close agreement with its analogue counterpart.",2000,0, 1516,Adaptive error control scheme for multimedia applications in integrated terrestrial-satellite wireless networks,"This paper presents an adaptive error control (AEC) scheme for multimedia applications in integrated terrestrial-satellite wireless networks. The AEC protocol supports both real-time and non-real-time applications. In the AEC protocol, we propose new adaptive FEC (AFEC) and hybrid ARQ (HARQ) schemes for real-time and non-real-time traffic, respectively. Throughput performance for non-real-time applications shows that the proposed AEC protocol outperforms hybrid ARQ (HARQ) protocols with the same code used. For real-time applications, the AEC protocol outperforms static FEC (SFEC) protocols with respect to packet miss probability.",2000,0, 1517,Adaptive error protection for Scalable Video Coding extension of H.264/AVC,"This paper presents an adaptive error protection method which provides different packet correction capacities using only one Reed-Solomon code. The proposed method can be applied separately to each data part in a bit stream. The adaptation of the error correction capacity works on the fly and is based only on the way the data are interleaved. In this work, the error protection is applied unequally to data units in the Network Abstraction Layer (NAL) of the Scalable Video Coding (SVC) extension of H.264/AVC. Simulation results show that the video quality increases by 6 dB on average with a total overhead of about 9%. The advantages of our method are its simplicity and its flexibility of application. Therefore, it is suitable for real-time streaming applications.",2008,0, 1518,System-level analysis of soft error rates and mitigation trade-off explorations,"This paper presents a novel system-level analysis of soft error rates (SER) based on the Transaction Level Model (TLM) of a targeted System-On-a-Chip (SoC). This analysis runs 1000× faster than conventional SoC analysis using a gate-level model.
Moreover, it allows accurate prediction in the early design phase of a SoC, when only limited application details are available. Preliminary validation results from accelerated SER tests on the physical system have shown that the analysis can predict the SER with a reasonable accuracy (within 5x of the results from tests on physical systems). This system-level analysis is particularly suitable for handling the black-box models for industrial semiconductor IP libraries. Based on this system-level analysis, we also propose a soft error (SE) mitigation solution using selective protection of the SRAM of a SoC. This solution provides a series of trade-offs between the system dependability and cost (in terms of silicon area).",2010,0, 1519,Transient stability prediction algorithm based on post-fault recovery voltage measurements,This paper presents a novel technique for predicting the transient stability status of a power system following a large disturbance. The prediction is based on the synchronously measured samples of the fundamental frequency voltage magnitudes at each generation station. The voltage samples taken immediately after clearing the faults are input to a support vector machine classifier to identify the transient stability condition. The classifier is trained using examples of the post-fault recovery voltage measurements (inputs) and the corresponding stability status (output) determined using a power angle-based stability index. Studies with the New England 39-bus system indicate that the proposed algorithm can correctly recognize when the power system is approaching transient instability.,2009,0, 1520,System unbalance effect on faulted distribution systems: A numerical study,"This paper presents a numerical study of the system unbalance effect on system voltage and current calculation during asymmetrical faults. Two different system operational conditions are tested considering a voltage imbalance at the substation bus: balanced system loads and voltage imbalance equal to 3.65% due to system loads. Five fault impedances are simulated. Two fault analysis methods, the symmetrical components and phase component algorithms, are implemented and analyzed based on numerical simulations of asymmetrical faults on the IEEE 13 bus test feeder. The results show the accuracy of each fault analysis method and how the errors may vary according to the system pre-fault unbalance operation condition and the fault impedance.",2010,0, 1521,A rapid system prototyping platform for error control coding in optical CDMA networks,"This paper presents a rapid system prototyping platform for error-control codes (e.g. turbo and turbo product), which are to be used for optical CDMA transmission. The platform is based on System Generator from Xilinx, a visual design tool based on the Matlab/Simulink environment, and enables a ""push of a button"" transition from specification to implementation. Components of the platform (a library of communication modules, debugging and emulation tools), design methodology of the platform and evaluation of some example communication systems are presented.",2005,0, 1522,Robust adaptive error control,"This paper presents a robust adaptive type-I hybrid ARQ scheme that can adapt to a slowly-varying wireless channel. The proposed system can improve system throughput substantially compared to a conventional type-I hybrid ARQ scheme. Most previous research on adaptive ARQ has assumed that the return channel used for acknowledgements is error free.
This simplifying assumption is often necessary for initial performance evaluation but is unrealistic in practice. In this paper, we extend an existing adaptive ARQ scheme by making it robust to channel errors in both the forward and feedback channels. We also propose a solution to maintain code synchronization in the presence of packet errors and describe an implementation that combines the adaptive ARQ scheme with a SACK protocol that allows bi-directional transmission and piggybacked acknowledgements. In addition, we have built a wireless test platform for mobile radio in an attempt to accurately measure the adaptive error control performance not only as a standalone entity, but also as a part of a complete protocol stack",2000,0, 1523,A Robust Fault Protection Strategy for a COTS-Based Spacecraft,"This paper presents a robust fault protection strategy for a low-cost single-string spacecraft that makes extensive use of COTS components. These components include commercial processors and microcontrollers that would traditionally be considered inappropriate for use in space. By crafting an avionics architecture that employs multiple distributed processors, and coupling this with an appropriate fault protection strategy, even a single-string COTS-based spacecraft can be made reasonably robust. The fault protection strategy is designed to trap faults at the highest possible level while preserving the maximum amount of spacecraft functionality, and can autonomously isolate and correct minor faults without ground intervention. For more serious faults, the vehicle is always placed in a safe configuration until the ground can diagnose the anomaly and recover the spacecraft. This paper will show how a multi-tiered fault protection strategy can be used to mitigate the risk of flying COTS components that were never intended for use in the space environment.",2007,0, 1524,Implementation of a sensor fault reconstruction scheme on an inverted pendulum,"This paper presents a robust sensor fault reconstruction scheme, using an unknown input observer, applied to an inverted pendulum. The scheme is adapted from existing work in the literature. A suitable interface between the pendulum and a computer enabled the application. Very good results were obtained.",2004,0, 1525,Fault Diagnosis of Parts of Electronic Embedded System Based on Fuzzy Fusion Approach,"This paper presents a novel approach that employs fuzzy fusion for fault detection and localization in parts of electronic embedded systems. The elaborated method considers the characteristics of the diagnosed object to establish the system structure of fuzzy fusion diagnosis, realizes a feature-level data fusion algorithm based on the fuzzy integral, establishes the fuzzy phenomenon subsets and membership functions corresponding to different fault types, and then utilizes fuzzy reasoning to accomplish a fuzzy synthesized judgment of the fault type and achieve a fused fuzzy reasoning diagnosis result. The method is proved to be effective for fault location by an example. It makes the diagnostic information more definite and improves the accuracy of diagnosis.",2009,0, 1526,Microprocessor fault-tolerance via on-the-fly partial reconfiguration,"This paper presents a novel approach that exploits FPGA dynamic partial reconfiguration to improve the fault tolerance of complex microprocessor-based systems, with no need to statically reserve area to host redundant components.
The proposed method not only improves the survivability of the system by allowing the online replacement of defective key parts of the processor, but also provides graceful performance degradation by executing in software the tasks that were executed in hardware before a fault and the subsequent reconfiguration happened. The advantage of the proposed approach is that, thanks to a hardware hypervisor, the CPU is totally unaware of the reconfiguration happening in real-time, and there is no dependency on the CPU to perform it. As a proof of concept, a design using this idea has been developed, using the LEON3 open-source processor, synthesized on a Virtex 4 FPGA.",2010,0, 1527,A Low Power JPEG2000 Encoder With Iterative and Fault Tolerant Error Concealment,"This paper presents a novel approach to reduce power in multimedia devices. Specifically, we focus on JPEG2000 as a case study. This paper indicates that by utilizing the in-built error resiliency of multimedia content, and the disjoint nature of the encoding and decoding processes, ultra low power architectures that are hardware fault tolerant can be conceived. These architectures utilize aggressive voltage scaling to conserve power at the encoder side while incurring extra processing requirements at the decoder to blindly detect and correct for encoder hardware induced errors. Simulations indicate a reduction of up to 35% in encoder power depending on the choice of technology for a 65-nm CMOS process.",2009,0, 1528,Full bridge Flyback isolated current rectifier with power factor correction,"This paper presents a single-stage isolated current rectifier with power factor correction, based on full bridge and flyback topologies. The proposed converter can operate as a step-down or a step-up converter, according to the input and output voltage levels. This paper presents the theoretical analysis of the converter as well as experimental results based on a 3.5 kW prototype.",2009,0, 1529,An observer-based mechanical sensor failure fault tolerant controller structure in PMSM drive,This paper presents a specific controller architecture devoted to obtaining a Permanent Magnet Synchronous Motor drive robust to mechanical sensor failure. In order to increase the reliability, which is a key issue in industrial and transportation applications (electric or hybrid ground vehicles or aerospace actuators), two virtual sensors (a two-stage Extended Kalman Filter and a back-emf adaptive observer) and a voting algorithm are combined with the actual sensor to build a fault tolerant controller. The observers are evaluated off-line with experimental data and the robustness against parameter variation is tested through simulation results. The feasibility of the fault tolerant controller is proved through simulation of the PMSM drive.,2009,0, 1530,An efficient spatial domain error concealment method for H.264 video,"This paper presents an efficient spatial domain error concealment method for the forthcoming video coding standard H.264. In H.264, a frame is divided into 4×4 blocks during the encoding procedure. For natural image signals, the blocks are smoothly connected with each other. Based on this property, a linear smoothness constraint equation that describes the connection of the lost block and its neighboring blocks can be constructed. By solving this equation, the coefficients of the lost block can be recovered. Because the reconstructed high frequency coefficients may be affected by noise, the recovered center pixel may have an obvious error.
To eliminate the error, we use the recovered pixels that are on the boundaries of the lost block and the average pixel difference to interpolate the center pixels. The implementation is simple and is suitable for real-time video applications. Experimental results show that our method has better recovery results than the conventional approach.",2003,0, 1531,An efficient placement and routing technique for fault-tolerant distributed embedded computing,"This paper presents an efficient technique for placement and routing of sensors/actuators and processing units in a grid network. The driver application that we present is a medical jacket which requires an extremely high level of robustness and fault tolerance. The power consumption of such a jacket is another key technological constraint. Our proposed interconnection network is a mesh of wires. A jacket made of fabric and wires would be susceptible to accidental damage via tears. By modeling the tears, we evaluate the probability of having failures on every segment of wires in our mesh interconnection network. Then we study two problems of placement and routing in the sensor networks such that the fault tolerance is maximized while the power consumption is minimized. We develop efficient integer linear programming (ILP) formulations to address these problems and perform both placement and routing simultaneously. This ensures that the solution is a lower bound for both problems. We evaluate the effectiveness of our proposed techniques on a variety of benchmarks.",2005,0, 1532,Error analysis and simulator in cylindrical near field antenna measurement systems,"This paper presents an error estimator to analyze the most important errors in a cylindrical near-field measurement system and the effect of these errors on the calculation of the far-field radiation pattern. This study has been performed to improve the design of a cylindrical near-field system for L-band RADAR antennas and to evaluate the error budget of the measurement of those antennas. The antenna under test (AUT) is modeled with an array of vertical dipoles, and the simulator performs a number of virtual acquisitions with random or deterministic errors in order to evaluate the effect of each source of error on the measured electrical parameter. The results achieved are the variation in the main parameters of the antenna: directivity, side lobe level, beam width and beam pointing. This paper describes the sources of error considered, the error simulator implemented, and some results for the L-band RADAR antennas in a 17-meter (vertical dimension) cylindrical near-field system. The paper establishes the methodology implemented for the analysis of the sources of error shown in (S. Burgos et al., 2006).",2007,0, 1533,The framework of a web-enabled defect tracking system,"This paper presents an evaluation and investigation of issues in implementing a defect management system; a tool used to understand and predict software product quality and software process efficiency. The scope is to simplify the process of defect tracking through a web-enabled application. The system will enable project management, development, quality assurance and software engineers to track and manage problems, specifically defects, in the context of a software project. A collaborative function is essential as this will enable users to communicate in real-time mode.
This system makes key defect tracking coordination and information available regardless of geographical and time factors.",2004,0, 1534,On the emulation of software faults by software fault injection,"This paper presents an experimental study on the emulation of software faults by fault injection. In a first experiment, a set of real software faults has been compared with faults injected by a SWIFI tool (Xception) to evaluate the accuracy of the injected faults. Results revealed the limitations of Xception (and other SWIFI tools) in the emulation of different classes of software faults (about 44% of the software faults cannot be emulated). The use of field data about real faults was discussed and software metrics were suggested as an alternative to guide the injection process when field data is not available. In a second experiment, a set of rules for the injection of errors meant to emulate classes of software faults was evaluated. The fault triggers used seem to be the cause of the observed strong impact of the faults on the target system and on the program results. The results also show the influence on the fault emulation of aspects such as code size, complexity of data structures, and recursive versus sequential execution",2000,0, 1535,Immune-Inspired Adaptable Error Detection for Automated Teller Machines,"This paper presents an immune-inspired adaptable error detection (AED) framework for automated teller machines (ATMs). This framework has two levels: one is local to a single ATM, while the other is network-wide. The framework employs vaccination and adaptability analogies of the immune system. For discriminating between normal and erroneous states, an immune-inspired one-class supervised algorithm was employed, which supports continual learning and adaptation. The effectiveness of the proposed approach was confirmed in terms of classification performance and impact on availability. The overall results are encouraging as the downtime of ATMs can be reduced by anticipating the occurrence of failures before they actually occur.",2007,0, 1536,Multiobjective Optimization for HTS Fault-Current Limiters Based on Normalized Simulated Annealing,"This paper presents an improved simulated annealing (SA) algorithm for multiobjective optimization, which is a positive approach in the design of high-temperature superconducting (HTS) fault-current limiters (SFCLs). The main goal of this paper is to achieve an effective and feasible approach in the structural design of HTS FCLs by means of multiobjective decision-making techniques, based on normalized SA. The combination of electrical and thermal models of a purpose-designed resistive-type HTS FCL is defined as a component in PSCAD/EMTDC simulations from which the proposed method will be used to optimize the selective parameters of the SFCL. The above requires advanced numerical techniques for simulation studies by PSCAD on a sample distribution system for determining a global optimum HTS FCL, by considering individual parameters and accounting for the constraints, which is the main motivation for initiating this paper.",2009,0, 1537,Sub-cycle detection of incipient cable splice faults to prevent cable damage,"This paper presents an innovative method for sub-cycle detection of incipient cable failures caused by self-clearing faults occurring in cable splices due to insulation breakdown. Because of their short duration, conventional overcurrent protection will not detect these types of faults.
The protection scheme described in this paper has been integrated into a universal relay platform. It is fast enough to operate for sub-cycle faults and has the logic to differentiate them from other types of faults. Imminent cable failure can be detected",2000,0, 1538,Multiagent-based monitoring-fault diagnosis-control integrated system for chemical process,"This paper presents an integrated system which has monitoring, fault diagnosis and control functions for chemical processes. The system is based on multiagent technology. In this paper, three agents are developed to build the integrated system: the monitoring agent, the fault diagnosis agent and the control agent. They can work together in an integrated framework. The difficulty lies in solving the coordination tasks in communication, interaction and cooperation and finally obtaining a good solution. This multiagent system can supply optimal decisions for the enterprise and it will be the foundation of complete automation in the future.",2005,0, 1539,Electromagnetic analyses and an equivalent circuit model of microstrip patch antenna with rectangular defected ground plane,"This paper presents an investigation of both the electromagnetic wave model and the equivalent circuit model for a microstrip patch antenna (MPA) with a rectangular defected ground plane structure (RDGS). The objective of the proposed design is to reduce the antenna size from the Wi-Fi wireless band to the Bluetooth band. The first part of the paper describes the electromagnetic wave modeling using the High Frequency Structure Simulator (HFSS) software to study the effect of RDGS on the antenna resonant frequency. An optimization tool together with curve fitting was then used to formulate an approximate equation that describes the identified trends from the simulations. The second part is focused on estimating, using the Advanced Design System (ADS) software, the parameters of an equivalent circuit model for the MPA with RDGS. The developed equivalent circuit consists of lumped elements for both the MPA and the RDGS structure and also includes a representation of the electrical and the magnetic coupling between the rectangular grounded slot and the MPA. Optimum values of the equivalent circuit elements were determined, and the overall simulation results were confirmed experimentally.",2009,0, 1540,Research of UAV fault and state forecast technology based on particle filter,"This paper presents a UAV fault and state prediction approach which is based on the particle filter. In the UAV system, on account of its dynamic environment, mechanical complexity and other factors, it is difficult to avoid all potential faults. So, in order to detect potential faults early, fault forecasting is necessary so as to avoid enormous losses. As the input and output response model of a UAV system is nonlinear and multi-parameter, it is necessary to find an appropriate way of fault prediction for system maintenance and real-time command. Particle filters are sequential Monte Carlo methods based on point mass (or 'particle') representations of probability densities, which can be applied to any state-space model. Their ability to deal with nonlinear and non-Gaussian statistics makes them suitable for application to UAV fault prediction. The UAV is an extremely complex system; two important aspects of monitoring are focused on in this paper: 1) engine condition monitoring and fault prediction; 2) UAV flight track forecast.
The experimental results indicate the effectiveness of this approach.",2009,0, 1541,Emulation of software faults by educated mutations at machine-code level,"This paper proposes a new technique to emulate software faults by educated mutations introduced at the machine-code level and presents an experimental study on the accuracy of the injected faults. The proposed method consists of finding key programming structures at the machine-code level where high-level software faults can be emulated. The main advantage of emulating software faults at the machine-code level is that software faults can be injected even when the source code of the target application is not available, which is very important for the evaluation of COTS components or for the validation of software fault tolerance techniques in COTS-based systems. The technique was evaluated using several real programs and different types of faults and, additionally, it includes our study on the key aspects that may impact the technique's accuracy. The portability of the technique is also addressed. The results show that classes of faults such as assignment, checking, interface, and simple algorithm faults can be directly emulated using this technique.",2002,0, 1542,A Template Model for Defect Simulation for Evaluating Nondestructive Testing in X-Radiography,"This paper proposes a new template model for the simulation of casting defects which are classified, according to shape, into three main types: the single defect with circular or elliptical shape, shrinkage defects with stochastic discontinuities, and the cavity or sponge shrinkage defects. For effective simulation, different nesting stencil plates are designed to reflect the characteristics of different casting defects. These include intensity, orientation, size, and shape. The proposed approach also uses geometric diffusion to demonstrate the production of simulated defects with effective shading and contrast when compared to their background. In order to evaluate the effectiveness of the proposed approach, the simulated casting defects are superposed on real radioscopic images of casting pieces and compared with real defects by extensive visual inspection. On the other hand, in order to verify the similarity of the simulated defects and the real defects, we have used our defect inspection algorithm to recognize both real and simulated defects in the same image. The experimental results show that the proposed defect simulation approach can produce a large range of simulated casting defects, which can be utilized as sample images to tune the parameters of casting inspection algorithms.",2009,0, 1543,Error-resilient packet video coding using harmonic frame-expansions and temporal prediction,"This paper proposes a novel error-resilient packet video codec through the use of harmonic frame-expansions. It provides robustness to packet video in erasure channels in an integrated framework without requiring inter/intra coding modes. In the proposed method, spatial redundancy is added by applying a frame-expansion to DCT coefficients while temporal redundancy is added by filtering the motion compensated reference blocks. The frame expansion provides excellent error-resilience at low frequencies, while filtering of motion-reference blocks prevents the propagation of errors at high frequencies. Mathematical expressions of the PSNR for packet video streams with arbitrary losses are derived using piecewise linear approximations.
Preliminary simulation results have shown that the proposed codec generates high-quality video at low bit rates with significant packet losses, and has much less visual artifact than conventional codecs with similar PSNRs.",2005,0, 1544,Fault detection and classification in transmission lines based on wavelet transform and ANN,"This paper proposes a novel method for transmission-line fault detection and classification using oscillographic data. The fault detection and its clearing time are determined based on a set of rules obtained from the current waveform analysis in time and wavelet domains. The method is able to single out faults from other power-quality disturbances, such as voltage sags and oscillatory transients, which are common in power systems operation. An artificial neural network classifies the fault from the voltage and current waveforms pattern recognition in the time domain. The method has been used for fault detection and classification from real oscillographic data of a Brazilian utility company with excellent results",2006,0, 1545,A novel UIO-based approach for fault detection and isolation in finite frequency domain,"This paper proposes a novel unknown input observer (UIO) design approach to detect and isolate the actuator faults in finite frequency domain. Instead of completely rejecting the disturbance or noise of traditional UIOs, we exclude certain controlled inputs from the residuals. Meanwhile, knowledge on the disturbance/noise, such as frequency and statistical characteristics, is embedded into the design to enhance the performance of fault detection and isolation. One significant advantage of our approach lies in the number of UIOs, which is dramatically reduced when considering multiple faults. All the design conditions are formulated in terms of LMI or BMI problems, which enables other design specifications to be considered in addition to the robustness and sensitivity requirements.",2009,0, 1546,Characterization of operating systems behavior in the presence of faulty drivers through software fault emulation,"This paper proposes a practical way to evaluate the behavior of commercial-off-the-shelf (COTS) operating systems in the presence of faulty device drivers. The proposed method is based on the emulation of software faults in target device drivers and the observation of the behavior of the system and of a workload regarding a comprehensive set of failure modes analyzed according to different dimensions. The emulation of software faults itself is done through the injection at machine-code level of selected mutations that represent the code produced when typical programming errors are made in the high-level language code. An important aspect of the proposed methodology is the use of simple and established practices to evaluate operating systems failure modes, thus allowing its use as a dependability benchmarking technique. The generalization of the methodology to any software system built of discrete and identifiable components is also discussed.",2002,0, 1547,Optimized erasure-resilient streaming of SVC using unequal error protection,"This paper proposes a real-time optimized error resilient rate allocation methodology for streaming video encoded using the recently developed scalable extension of H.264/AVC. The proposed methodology takes into account the channel requirements and the end-device specifications, and jointly extracts and protects parts of the scalable video stream in order to achieve an optimized end-to-end quality.
Moreover, the level of applied erasure protection is dynamically adapted to the importance of the transmitted information. A novel signaling mechanism for forwarding these different protection levels to the decoder is presented. This mechanism introduces a low bit rate overhead compared to previously proposed RTP formats. The experimental results show that unequal error protection slightly outperforms equal error protection in terms of achieved quality when the channel conditions are exactly known. Additionally, the experiments demonstrate that the proposed protection methodology yields graceful degradation in the presence of channel mismatches.",2008,0, 1548,Region-Based Color Correction of Images,"This paper proposes a region-based color correction method that corrects color-shifted images based on a reference image with a standard background. When correcting images without foreground objects, i.e., objects not in the reference image, both the reference image and the color-shifted image are segmented into several regions. A color correction process based on principal component analysis (PCA) is performed between the corresponding regions on the reference image and color-shifted image. The key concept of the color correction process is to align the color distribution of regions on the color-shifted image with the corresponding ones on the reference image by an Affine transform. The proposed color correction is also applicable to those images with foreground objects. In the experiments, the effectiveness of the proposed method is verified by testing some simple images and real-world images with and without foreground objects. Particularly, the proposed method effectively improves the skin-color area identification in a face detector, which is an important issue in many computer vision applications",2005,0, 1549,Reversible Data Hiding for Audio Based on Prediction Error Expansion,"This paper proposes a reversible data hiding method for digital audio using the prediction error expansion technique. Firstly, the prediction error of the original audio is obtained by applying an integer coefficient predictor. Secondly, a location map is set up to record the expandability of all audio samples, and then it is compressed by lossless compression coding and taken as a part of the secret information. Finally, the reconstructed secret information is embedded into the audio using the prediction error expansion technique. After extracting the embedded information, the original audio can be perfectly restored. Experimental results show that the proposed algorithm can achieve high embedding capacity while keeping good quality of the stego-audio.",2008,0, 1550,LMI-based aircraft engine sensor fault diagnosis using a bank of robust H∞ filters,"This paper proposes a sensor diagnostic approach based on a bank of H∞ filters for aircraft gas turbine engines, taking into account real-time and anti-disturbance requirements to run on an aircraft on-board computer. First of all, by defining an H∞ performance index to describe the disturbance attenuation performance of a dynamic system, the robust H∞ filter design problem has been formulated in the linear matrix inequality (LMI) framework and been efficiently solved using off-the-shelf software. Then we make use of a bank of H∞ filters for the sensor fault detection and isolation based on the logic status of a group of residuals. Finally, an illustrative simulation has been given to verify the effectiveness of the proposed design method.
The benefit of this paper is to bridge the gap between H∞ filter theory and engineering applications for reliable diagnostics of aircraft engines.",2010,0, 1551,Fault detection and isolation for mobile robots in a multi-robot team,"This paper presents fault detection and isolation (FDI) of dead reckoning and an external sensor (laser range sensor, LRS) on mobile robots in a multi-robot team. Each robot in the team monitors the states (fault or fault-free) of the in-vehicle sensors by comparing velocity estimates obtained using its own two velocity providers: dead reckoning and laser-based scan matching. When the robot detects a fault, it compares two position estimates and identifies the faulty component (dead reckoning or LRS). The position estimates by each robot are obtained based on laser scan images captured by its own LRS and a leader robot's LRS. Each robot in a multi-robot team is equipped with a small number of sensors; however, there exists a large number of sensors in the entire team. Our FDI is thus designed based on a hardware redundancy approach. It requires neither fault models nor dynamic models. Experimental results validate the effectiveness of our FDI method.",2009,0, 1552,"GosSkip, an Efficient, Fault-Tolerant and Self Organizing Overlay Using Gossip-based Construction and Skip-Lists Principles","This paper presents GosSkip, a self organizing and fully distributed overlay that provides a scalable support to data storage and retrieval in dynamic environments. The structure of GosSkip, while initially possibly chaotic, eventually matches a perfect set of Skip-list-like structures, where no hash is used on data attributes, thus preserving semantic locality and permitting range queries. The use of epidemic-based protocols is the key to scalability, fairness and good behavior of the protocol under churn, while preserving the simplicity of the approach and maintaining O(log(N)) state per peer and O(log(N)) routing costs. In addition, we propose a simple and efficient mechanism to exploit the presence of multiple data items on a single physical node. GosSkip's behavior in both a static and a dynamic scenario is further conveyed by experiments with an actual implementation and real traces of a peer-to-peer workload",2006,0, 1553,Model-Implemented Fault Injection for Hardware Fault Simulation,"This paper presents how model-implemented fault injection can be utilized to simulate the effect of hardware-related faults in embedded systems. A fault injection environment has been developed to enable comparison of experiments at model level and hardware level using Simulink and an Infineon microcontroller, respectively. Experiments at model level, leading to safety requirement violations, are automatically repeated at hardware level to compare the fault effects. Artifacts in a Simulink model (e.g. block output ports) are automatically mapped to memory addresses obtained from a linker generated map file. Thus, the same variable can be manipulated by the fault injection environment at both model and hardware level.
For the automotive application evaluated, experiments show that the effects of data errors at model level and hardware level are similar, excluding the experiments leading to exceptions.",2010,0, 1554,LIFTING: A Flexible Open-Source Fault Simulator,"This paper presents LIFTING (LIRMM fault simulator), an open-source simulator able to perform both logic and fault simulations for single/multiple stuck-at faults and single event upsets (SEUs) on digital circuits described in Verilog. Compared to existing tools, LIFTING provides several features for the analysis of the fault simulation results, meaningful for research purposes. Moreover, as an open-source tool, it can be customized to meet any user requirements. Experimental results show how LIFTING has been exploited in research fields. Finally, execution time for large circuit simulations is comparable to that of commercial tools.",2008,0, 1555,Distributed Induction Generators: 3-Phase Bolted Short-Circuit Fault Currents,"This paper presents methods for estimation of the value of the 3-phase bolted short-circuit current of induction distributed generators. It justifies a new, quick, and fairly simple way for approximation of the value of peak currents during the first several cycles of the fault. During the study, numerical simulations of the fault have been performed and compared to the results from the estimation. The analytical and numerical results have been validated by a scaled-down experimental setup with a 2 kW induction generator.",2007,0, 1556,Multibit Error-Correction Methods for Latency-Constrained Flash Memory Systems,"This paper presents multibit error-correction schemes for NOR Flash used specifically for execute-in-place applications. As architectures advance to accommodate more bits/cell and geometries decrease to structures that are smaller than 32 nm, single-bit error-correction codes (ECCs) are unable to compensate for the increasing array bit error rates, making it imperative to use 2-b ECC. However, 2-b ECC algorithms are complex and add a timing overhead on the memory read access time. This paper proposes low-latency multibit ECC schemes. Starting with the binary Bose-Chaudhuri-Hocquenghem (BCH) codes, an optimized scheme is introduced which combines a multibit error-correcting BCH code with Hamming codes in a hierarchical manner to give an average latency as low as that of the single-bit correcting Hamming decoder. A Hamming algorithm with 2-b error-correcting capacity for very small block sizes (< 1 B) is another low-latency multibit ECC algorithm that is discussed. The viability of these methods and algorithms with respect to latency and die area is proved vis-à-vis software and hardware implementations.",2010,0, 1557,Fault Diagnosis With Convolutional Compactors,"This paper presents new nonadaptive fault-diagnosis techniques for scan-based designs. They guarantee accurate and time-efficient identification of failing scan cells based on results of convolutional compaction of test responses. The essence of the method is to use a branch-and-bound algorithm to narrow the set of scan cells down to certain sites that are most likely to capture faulty signals. This search is guided by a number of heuristics and self-learned information used to accelerate the diagnosis process for the subsequent test patterns. A variety of experimental results for benchmark circuits, industrial designs, and real fail logs confirm the feasibility of the proposed approach even in the presence of unknown states.
The scheme remains consistent with a single test session scenario and allows high-volume in-production diagnosis.",2007,0, 1558,An On-Line UPS System With Power Factor Correction and Electric Isolation Using BIFRED Converter,"This paper presents the design considerations and performance analysis of an on-line, low-cost, high-performance, single-phase uninterruptible power supply (UPS) system based on a boost integrated flyback rectifier/energy storage dc/dc (BIFRED) converter. The system consists of an isolated ac/dc BIFRED converter, a bidirectional dc/dc converter, and a dc/ac inverter. It provides input power factor correction, electric isolation of the input from the output, low battery voltage, and control simplicity. Unlike conventional UPS topologies, the electrical isolation is provided using a high frequency transformer that results in a smaller size and lower cost. Detailed circuit operation and analysis, as well as simulation and experimental results, are presented. A novel digital control technique is also presented for UPS inverter control. This controller follows the reference current and voltage of the inverter with a delay of two and four sampling periods, respectively.",2008,0,6856 1559,Tracking control for piecewise linear systems using an error space approach: A case-study in sheet control,"This paper presents the design of tracking controllers for piecewise linear systems, with application to sheet control in a printer paper path. The approach that we will take is based upon an error space approach, which is derived from linear systems theory. We will show that due to the discontinuity in the piecewise linear system, the resulting model in error space consists of both flow conditions, describing the dynamics in each regime, and jump conditions, describing the error dynamics at the switching boundaries. Two types of controllers are proposed that result in either full or partial linearization of the closed-loop error dynamics. To show the effectiveness of the control design approach in practice, the sheet controllers are implemented on an experimental paper path setup.",2008,0, 1560,Developments in directional power line protection using fault transients,"This paper presents the development of a new directional power line protection using fault transients. In the presented technique, the relay installed at the substation busbar is responsible for the detection of fault direction with respect to the busbar. The transient signals captured from each power line connected to the busbar are analyzed and compared to determine the fault direction. Simulation results show that this scheme is suitable for all types of faults occurring at different positions on the power lines. It also provides higher sensitivity as compared with traditional protection schemes.",2002,0, 1561,Influence on ANNs fault tolerance of binary errors introduced during training,This paper presents the effect of binary errors on artificial neural networks during the training phase and the fault tolerance of ANNs. The tested network implements a problem (recognition of digits and letters) and is trained in the presence of binary errors.
A significant improvement is obtained with an increase in the number of perturbations during the training phase and in the recognition rate during the generalization phase.,2004,0, 1562,Fault localization in medium voltage networks with compensated and isolated star-point grounding,"This paper presents the experimental investigation of a fault localization algorithm in a network with compensated or isolated star-point grounding. The paper first discusses the theoretical background of the method, and then the conclusions drawn from the discussion were tested using a complex network structure (NETOMAC) with the support of models developed in MATLAB. Finally, the complete method is proposed and its usability confirmed.",2010,0, 1563,Fault monitoring and control of PEM fuel cell as backup power for UPS applications,"This paper presents the expert fault monitoring and intelligent comprehensive control of a proton exchange membrane (PEM) fuel cell (PEMFC) as a backup power source for an uninterruptible power supply (UPS) system. The failure status can be shown on the screen of a micro-computer (or a PC) linked with the UPS through an RS-232 and on the control panel of the UPS through LED indicator lights. The proposed intelligent comprehensive monitors and controllers of the PEMFC and UPS can supply high-quality power with flexible conversion functions, leading to the establishment of reliable power management for UPS applications. Finally, a suitable strategy and technique of fault monitoring and control for a UPS hybrid system with backup PEMFC and battery is implemented. The performances of the monitors and controllers are evaluated by experimental results, showing that the developed UPS system with backup PEMFC and battery power sources is suitable for industry applications.",2009,0, 1564,A fault diagnosis mechanism for a proactive maintenance scheme for wireless systems,"This paper presents the fault diagnosis mechanism for a proactive maintenance scheme for wireless systems. Its objective is to reduce the high operational costs encountered in the wireless industry by decreasing maintenance costs and system downtime. The fault diagnosis mechanism is based on the symbol frequency. An analytical method to calculate the symbol frequency is presented. The on-line monitoring system, based on the aforementioned fault diagnosis mechanism, is used to identify performance degradation, as well as its possible sources, so as to ensure that maintenance occurs only when necessary.",2008,0, 1565,A Two-terminal Fault Location Approach Based on Unsynchronized Phasors,"This paper presents the fundamentals of a two-terminal impedance-based fault location algorithm for transmission lines which works with unsynchronized phasors. The proposed approach is iterative and takes into account a distributed line model. At each iteration, the voltage magnitudes calculated from the data of the sending and remote ends are approximated by two straight lines, and the fault location estimate is then defined as the intersection point of these two lines. The process ends when the difference between two successive fault location estimates becomes smaller than a tolerance stipulated by the user.
Since the search process is based on voltage magnitudes, synchronism is not required between the measurements obtained at each transmission line terminal.",2006,0, 1566,Automatic detection and correction of purple fringing using the gradient information and desaturation,"This paper proposes a method to automatically detect and correct purple fringing, one of the color artifacts due to characteristics of charge coupled device sensors in a digital camera. The proposed method consists of two steps. In the first step, we detect purple fringed regions that satisfy specific properties: hue characteristics around highlight regions with large gradient magnitudes. In the second step, color correction of the purple fringed regions is made by desaturating the pixels in the detected regions. The proposed method is able to detect purple fringe artifacts more precisely than Kang's method. It can be used as a post-processing step in a digital camera.",2008,0, 1567,Total Occlusion Correction using Invariant Wavelet Features,"This paper proposes a method which utilizes invariant wavelet features for correcting total occlusion in video surveillance applications. The proposed method extracts invariant wavelet features from the pre-occlusion spatial image of disappearing objects. When new objects are detected during occlusion, their extracted invariant wavelet features are compared to those of lost objects to check for reappearance. When reappearance occurs, the proposed method rebuilds the correct correspondence map between pre-occlusion and post-occlusion objects to continue to track the ones that were lost during total occlusion. Our results show that the proposed method is more robust than referenced methods, especially when objects change or reverse their motion direction during occlusion.",2007,0, 1568,A new approach to design cascade fault diagnosis observers for flexible spacecraft,"This paper proposes a new gyro and star sensor fault diagnosis architecture that designs two groups of cascade H∞ optimal fault observers for a spacecraft with flexible appendages and schemes a compensation PD controller to achieve fault tolerant control. The basic idea of the approach is to identify the gyro fault to good effect first and then to make a further diagnosis for the star sensor based on the former. The H∞ optimal fault observer in design has robustness with respect to flexible uncertainties and diagnosis uncertainties. The compensation PD controller ensures that attitude errors are maintained within a small set around the equilibrium point for a long time even though gyro and star sensor faults occur simultaneously. Finally, simulation results demonstrate the effectiveness and feasibility of the proposed control algorithm.",2010,0, 1569,Design and implementation of a fault diagnosis system for transmission and subtransmission networks,"This paper proposes a new intelligent diagnostic system for on-line fault diagnosis of power systems using information of relays and circuit breakers. This diagnostic system consists of three parts: an interfacing hardware, a navigation software and an intelligent core. The interfacing hardware samples the protective elements of the power system. By means of this data, the intelligent core detects the occurrence of a fault and determines the fault features, such as the type and location of the fault. The navigation software manages the diagnostic system. The software controls the interfacing hardware and provides the required data to the intelligent core.
Moreover, this software is the user interface of the fault diagnostic system. The proposed approach has been examined on a practical power system (Semnan Regional Electric Company) with real and simulated events. The obtained results confirm the validity of the developed approach.",2003,0, 1570,A New Model-Based Technique for the Diagnosis of Rotor Faults in RFOC Induction Motor Drives,"This paper proposes a new model-based diagnostic technique, the so-called virtual current technique (VCT), for the diagnosis of rotor faults in direct rotor field oriented controlled (DRFOC) induction motor drives. By measuring the oscillations at twice the slip frequency found in the rotor flux of the machine, and by conjugating this information with the knowledge of some motor parameters, as well as the parameters of the flux and current controllers, it is possible to generate a virtual magnetizing current which, after normalization, allows the detection and quantification of the extent of the fault. The proposed method allows one to overcome the major difficulties usually found in the diagnosis of rotor faults in closed-loop drives by providing information about the condition of the machine in a way that is independent of the working conditions of the drive such as the load level, reference speed, and bandwidth of the control loops. Although the VCT was primarily developed for traction drives used in railway applications, it can be incorporated in any DRFOC drive at almost no additional cost. Several simulation results, obtained with different types of DRFOC drives, as well as experimental results obtained in the laboratory, demonstrate the effectiveness of this new diagnostic approach.",2008,0, 1571,A new intelligent fast Petri-net model for fault section estimation of distribution systems,"This paper proposes a new Petri net (PN) knowledge representation scheme to quickly estimate the fault section of distribution systems when a fault occurs. Based on the practical guidelines and the heuristic rules obtained by interacting with the dispatchers of distribution systems, a PN model is first built to represent the related knowledge about the task of substation fault diagnosis. The PN model built is then transformed into matrix forms, which are relied on to infer the result of fault diagnosis through simple matrix operations. Due to its graphic representation of the heuristic rules and parallel rule-firing manner via matrix operations, the human expertise on fault diagnosis can be advantageously expressed and exploited by means of the PN knowledge-representing approach. The system is demonstrated on a practical system of the Chung-Hsiao substation at Tainan City, Taiwan. Flexibility and effectiveness of the PN model have been validated for the fault section estimation",2000,0, 1572,Simulation and Fault Detection of Short Circuit Winding in a Permanent Magnet Synchronous Machine (PMSM) by means of Fourier and Wavelet Transform,This paper presents permanent magnet synchronous machines with short-circuited turns. Two-dimensional (2-D) finite element analysis (FEA) is used for simulation of the PMSM under fault. The PMSM is working at nominal condition and a speed change of 5000 rpm. Relationships between stator-current-induced harmonics were investigated by means of Fourier (FFT) and discrete wavelet transforms (DWT).
The simulation is also compared with experimental results.,2008,0, 1573,Atomic clock error modeling for GNSS software platform,"This paper presents a satellite atomic clock error model for generating the satellite clock error in a GNSS software platform for the GPS L1 C/A code signal, L2 civil signal, L5 signal, and Galileo E1, E5a signals. Schemes to simulate the clock error through the phase error characteristics of the atomic clock are given. Rubidium, cesium and hydrogen maser clocks are considered for GPS or Galileo satellites and a quartz oscillator is considered for the user receiver. The clock error consists of five typical noises: white noise on phase, flicker noise on phase, white noise on frequency, flicker noise on frequency, and random walk on frequency. The analysis software for the Allan variance, the Hadamard variance and the PSD (power spectral density) of the clock error is also included. The clock error model is implemented and its validity is analyzed by comparing with theoretical ones.",2008,0, 1574,Experiments with ABIST test methodology applied to path delay fault testing,This paper presents SIC-based test stimuli with the Arithmetic Built-In Self-Test (ABIST) concept in order to detect path delay faults. The presented generator with ABIST stimuli is quite useful for detecting the K-longest path-delay faults of the microprocessor. This paper extends the work of Gjermundnes and presents its application and validation on the Intel 8051 microprocessor. The experimental results of this work with the given test case microprocessor allow us to validate that the proposed test method is effective by the obtained fault coverage.,2010,0, 1575,SDG-based hazop and fault diagnosis analysis to the inversion of synthetic ammonia,"This paper presents some practical applications of signed directed graphs (SDGs) to computer-aided hazard and operability study (HAZOP) and fault diagnosis, based on an analysis of the SDG theory. The SDG is modeled for the inversion of synthetic ammonia, which is highly dangerous in the process industry, and HAZOP and fault diagnosis based on the SDG model are presented. A new reasoning method, whereby inverse inference is combined with forward inference, is presented to implement SDG fault diagnosis based on a breadth-first algorithm with consistency rules. Compared with conventional inference engines, this new method can better avoid qualitative spuriousness and combination explosion, and can deal with unobservable nodes in SDGs more effectively. Experimental results show the validity and advantages of the new SDG method.",2007,0, 1576,Linear Feedback Controller for D-Statcom in DPG Fault Application,This paper presents the application of a linear feedback controller on a distribution static synchronous compensator (D-STATCOM) for mitigating double phase-to-ground (DPG) faults in the distribution system. The pole placement technique is applied by shifting the existing poles to new pole locations for a fast response compared to the conventional controller. This type of inverter control is very useful in the D-STATCOM application, which controls the inverter to inject unbalanced current or voltage or both for mitigating the fault condition that occurs in the distribution system. The controller will respond to the lines that are affected by the fault and restore the system to its normal conditions.
The simulation and design of the controller are done using MATLAB software with the Power Blockset Toolbox and SIMULINK.,2006,0, 1577,Weather radar equation correction for frequency agile and phased array radars,"This paper presents the derivation of a correction to the Probert-Jones weather radar equation for use with advanced frequency agile, phased array radars. It is shown that two additional terms are required to account for frequency hopping and electronic beam pointing. The corrected weather radar equation provides a basis for accurate and efficient computation of a reflectivity estimate from the weather signal data samples. Lastly, an understanding of calibration requirements for these advanced weather radars is shown to follow naturally from the theoretical framework.",2007,0, 1578,An Application Program and Error Sharing Agent Running on Ubiquitous Networks,"This paper presents the design and implementation of an application program and error sharing agent for a collaborative multimedia distance education system which is running on RCSM (reconfigurable context sensitive middleware) for ubiquitous networks. RCSM provides standardized communication protocols to interoperate an application with others under dynamically changing situations. It describes a hybrid software architecture that is running on situation-aware middleware for a web-based distance education system which has an object with various information for each session, and it also supports multicasting with this information.",2006,0, 1579,Cursive word skew/slant corrections based on Radon transform,"This paper presents two fast and robust algorithms for word skew and slant corrections based on the Radon transform. For the skew correction, we maximize a global measure which is defined by the Radon transform of the image and its gradient to estimate the slope. For the slant correction, the Radon transform is used to estimate the long strokes, and the word slant is measured by the average angle of these long strokes. Compared with the previous methods, these two algorithms do not require the setting of parameters heuristically. Moreover, the algorithms perform well on words of short length, where the traditional methods usually fail.",2005,0, 1580,Systematic and Adaptive Characterization Approach for Behavior Modeling and Correction of Dynamic Nonlinear Transmitters,"This paper proposes a comprehensive and systematic characterization methodology that is suitable for the forward and reverse behavior modeling of wireless transmitters (Txs) driven by wideband-modulated signals. This characterization approach can be implemented in adaptive radio systems since it does not require particular signal or training sequences. The importance of the nature of the driving signal and its average power on the behavior of radio-frequency Txs is experimentally investigated. Critical issues related to the proposed characterization approach are analytically studied. This includes a new delay-estimation method that achieves good accuracy with low computational complexity. In addition, the receiver linear calibration and its noise budget are investigated. To demonstrate the accuracy and robustness of the proposed method, a full characterization (including the memoryless nonlinearity and the memory effects) of a 100-W Tx driven by a multicarrier wideband code-division multiple-access signal is carried out, and its forward and reverse models are identified.
Cascading the identified reverse model derived using the proposed methodology and the Tx prototype leads to excellent compensation of the static nonlinearities and the memory effects exhibited by the latter. Critical issues in implementing this approach are also discussed.",2007,0, 1581,Design of the Autonomous Fault Manager for learning and estimating home network faults,"This paper proposes the design of a software autonomous fault manager (AFM) for learning and estimating faults generated in a home network. Most existing research employs a rule-based fault processing mechanism, but those works depend on the static characteristics of rules for a specific home environment. Therefore, we focus on a fault estimating and learning mechanism that autonomously produces a fault diagnosis rule and predicts an expected fault pattern in mutually different home environments. For this, the proposed AFM extracts the home network information with a set of training data using 5W1H (Who, What, When, Where, Why, How) based contexts to autonomously produce a new fault diagnosis rule. The fault pattern with high correlations can then be predicted for the current home network operation pattern.",2009,0, 1582,A New Mesh Simplification Algorithm Based on Quadric Error Metrics,"This paper proposes a mesh simplification algorithm based on the quadric error metric. Most simplification algorithms use geometric distance as their simplification criterion; the distance metric is very efficient for measuring geometric error, but it is difficult to distinguish important shape features such as a high-curvature region even though it has a small distance metric. Curvature is a good simplification criterion for preserving the shape of the original model: if the curvature of a vertex is larger, it better represents the geometric features of the model. Besides curvature, the size of the incident edges around a vertex can also reflect the geometric features: if the edge lengths adjoining the vertex are larger, the vertex affects a larger area of the model surface. We consider both the local curvature and the size of the incident edges around the vertex on the basis of the quadric error metrics, which reflects changes on the model surface and maintains many important geometric features even after large-scale simplification.",2008,0, 1583,Fault location in power networks using graph theory,This paper proposes a method for analyzing the vulnerability of a power system using network theory. It locates faults by combining the travelling-wave methodology with the network topology to first isolate the faulty link and then locate the fault distance. The algorithm is verified on a test power network using the Alternate Transients Program/Electromagnetic Transients Program (ATP/EMTP) and Matlab. The time stamps recorded are combined with the network topology to isolate the faulty link and calculate the fault distance.,2010,0, 1584,Effective Software Bug Localization Using Spectral Frequency Weighting Function,"This paper presents an approach to bug localization using a frequency weighting function. In an existing approach, only binary execution-count information from test executions is used: whether each program statement is executed or not executed by a particular test, indicated by 1 and 0 respectively. In our proposed approach, the frequency execution count of each program statement executed by a respective test is used.
We evaluate several well-known spectral metrics using our proposed approach and the existing approach (using binary information of execution count) on two test suites: the Siemens Test Suite and the Unix datasets. We show that the bug localization performance is improved by using our proposed approach. We conduct a statistical test and show that the improvement in bug localization performance of our approach (using frequency execution count) over the existing approach (using binary information of execution count) is statistically significant.",2010,0, 1585,Automatic red-eye detection and correction system for mobile services,"This paper presents an automatic red-eye detection and correction system. In order to detect red eye, we propose to use two information sources: the color around the eyes and the eye's round shape. The color information comes from the red, highlight, and skin masks. The eye shape comes from our proposed real-time 2-dimensional grouping algorithm called ARTS. Our correction method produces a natural-looking result. We designed the hardware using Verilog HDL, and successfully built and tested it using an FPGA device, a USB interface board, and a two-megapixel CMOS sensor. With a TSMC 0.25-μm ASIC library, the gate count was 325,167 gates, and the maximum data arrival time corresponded to 41.8 MHz.",2008,0, 1586,Trajectory zero phase error tracking control using comparing coefficients method,This paper presents studies on trajectory zero phase error tracking control without factorisation of the zeros polynomial, where the controller parameters are determined using the comparing-coefficients method. The controller was applied to two types of third-order non-minimum-phase plants. The first plant has a zero outside and far from the unit circle; the other has a zero outside but near the unit circle. Simulation and experimental results are presented to discuss its tracking performance.,2009,0, 1587,Transient-fault recovery for chip multiprocessors,"To address the increasing susceptibility of commodity chip multiprocessors (CMPs) to transient faults, we propose Chip-level Redundantly Threaded multiprocessor with Recovery (CRTR). CRTR extends the previously-proposed CRT for transient-fault detection in CMPs, and the previously-proposed SRTR for transient-fault recovery in SMT. All these schemes achieve fault tolerance by executing and comparing two copies, called leading and trailing threads, of a given application. Previous recovery schemes for SMT do not perform well on CMPs. In a CMP, the leading and trailing threads execute on different processors to achieve load balancing and reduce the probability of a fault corrupting both threads; whereas in an SMT, both threads execute on the same processor. The interprocessor communication required to compare the threads introduces latency and bandwidth problems not present in an SMT. To hide interprocessor latency, CRTR executes the leading thread ahead of the trailing thread by maintaining a long slack, enabled by asymmetric commit. CRTR commits the leading thread before checking and the trailing thread after checking, so that the trailing thread state may be used for recovery. Previous recovery schemes commit both threads after checking, making a long slack suboptimal. To tackle interprocessor bandwidth, CRTR not only increases the bandwidth supply by pipelining the communication paths, but also reduces the bandwidth demand.
By reasoning that faults propagate through dependences, the previously-proposed dependence-based checking elision (DBCE) exploits (true) register dependence chains so that only the value of the last instruction in a chain is checked. However, instructions that mask operand bits may mask faults and limit the use of dependence chains. We propose death- and dependence-based checking elision (DDBCE), which chains a masking instruction only if the source operand of the instruction dies after the instruction. Register deaths ensure that masked faults do not corrupt later computation. Using SPEC2000, we show that CRTR incurs negligible performance loss compared to CRT for interprocessor (one-way) latency as high as 30 cycles, and that the bandwidth requirements of CRT and CRTR with DDBCE are 5.2 and 7.1 bytes/cycle, respectively.",2003,0, 1588,The Design and Implementation of Checkpoint/Restart Process Fault Tolerance for Open MPI,"To be able to fully exploit ever larger computing platforms, modern HPC applications and system software must be able to tolerate inevitable faults. Historically, MPI implementations that incorporated fault tolerance capabilities have been limited by lack of modularity, scalability and usability. This paper presents the design and implementation of an infrastructure to support checkpoint/restart fault tolerance in the Open MPI project. We identify the general capabilities required for distributed checkpoint/restart and realize these capabilities as extensible frameworks within Open MPI's modular component architecture. Our design features an abstract interface for providing and accessing fault tolerance services without sacrificing performance, robustness, or flexibility. Although our implementation includes support for some initial checkpoint/restart mechanisms, the framework is meant to be extensible and to encourage experimentation of alternative techniques within a production quality MPI implementation.",2007,0, 1589,Random defect limited yield using a deterministic model,"To be successful in the competitive semiconductor industry, reducing cost per die is necessary and always challenging. It is important to produce better die per wafer by minimizing the cycle time to detect and fix yield problems associated with the technology. Yield, or wafer sort yield (number of good chips/wafer), can be separated into three components: random defect limited yield, systematic yield, and repeating yield loss. Random defect limited yield is caused by defects. Process equipment and byproducts primarily cause defects. Defects, usually randomly distributed, can also be localized to one or multiple die on a wafer. In-line QC inspection tools can detect most defects. Systematic yield losses are process-related problems that can affect all die on a wafer, some die on a wafer, or die by region on a wafer. Systematic yield losses are not detectable by in-line QC defect inspection tools. Repeating yield loss is due to reticle defects. Reticle defects occur on the same die within a reticle field, e.g., a repeating defect can be caused by contamination on the stepper lens, by contamination on the pellicle that protects the reticle, or by contamination on the reticle itself. Reticle defects are sometimes detectable by in-line QC inspection tools. The focus of this paper is on the calculations and results of random defect limited yield (DLY) using the deterministic yield model. This model is used to prioritize defect problems, and to drive yield improvements.
Examples are used to illustrate the benefits and strengths of the deterministic model. We also discuss the methodology, assumptions, and limitations of this model.",2001,0, 1590,Application of Uni-Directional Microphone Array for Identifying English Pronunciation Errors,"To identify the English pronunciation errors made by Chinese learners, this paper utilizes uni-directional microphones to construct a superdirective beamformer for capturing high quality input speech, and integrates the techniques of anti-model and confidence measure into the speech recognizer for accurate identification of the speaker's pronunciation errors. As to the beamformer, although designing a superdirective beamformer using omni-directional microphones is widely reported, little work details the design using uni-directional microphones. We integrate the transfer function of the uni-directional microphones into the signal model of the beamformer, and derive the expression of the superdirective beamformer under the diffuse noise assumption. As to the speech recognizer, an anti-model for each phone is trained from the training data excluding the tokens of that phone. By integrating these anti-models into the recognition network, the recognizer can align the user's speech with the prompted text more accurately. A confidence measure is utilized to judge whether a segment of an utterance is mispronounced. To justify the proposed techniques, simulated noises are injected into utterances provided by Chinese undergraduates at Nankai University. Recognition results are compared with the judgment made by several English linguistic experts. Experimental results show the effectiveness of the proposed beamforming algorithm and the recognition techniques.",2009,0, 1591,An effective video temporal error concealment method,"To overcome the problem that an inaccurate boundary matching algorithm misses the best motion vector and degrades the error concealment effect, this paper proposes a novel and effective temporal error concealment algorithm. Using the effective temporal-spatial correlation of motion vectors, it adaptively constructs a limited candidate motion vector set from the motion vectors of neighboring macroblocks and extrapolated motion vectors; on that basis, it selects the best motion vector from the candidate set using the boundary matching algorithm to conceal a corrupted or lost block. The simulation results show that the proposed method reduces misses of the best motion vector and obtains a satisfactory error concealment effect.",2009,0, 1592,Using Lightweight Transactions and Snapshots for Fault-Tolerant Services Based on Shared Storage Bricks,"To satisfy current and future application needs in a cost effective manner, storage systems are evolving from monolithic disk arrays to networked storage architectures based on commodity components. So far, this architectural transition has mostly been envisioned as a way to scale capacity and performance. In this work we examine how the block-level interface exported by such networked storage systems can be extended to deal with reliability. Our goals are: (a) At the design level, to examine how strong reliability semantics can be offered at the block level; (b) At the implementation level, to examine the mechanisms required and how they may be provided in a modular and configurable manner. We first discuss how transactional-type semantics may be offered at the block level.
We present a system design that uses the concept of atomic update intervals combined with existing block-level locking and snapshot mechanisms, in contrast to the more common journaling techniques. We discuss in detail the design of the associated mechanisms and the trade-offs and challenges when dividing the required functionality between the file-system and the block-level storage. Our approach is based on a unified and thus non-redundant set of mechanisms for providing reliability both at the block and file level. Our design and implementation effectively provide a tunable, lightweight transactions mechanism to higher system and application layers. Finally, we describe how the associated protocols can be implemented in a modular way in a prototype storage system we are currently building. As our system is currently being implemented, we do not present performance results.",2006,0, 1593,Software-Based Hardware Fault Tolerance for Many-Core Architectures,"This presentation will point out the new opportunities and challenges for applying software-based hardware fault tolerance to emerging many-core architectures. The paper will discuss the tradeoff between the application of these techniques and the classical hardware-based fault tolerance in terms of fault coverage, overhead, and performance.",2009,0, 1594,From experimental assessment of fault-tolerant systems to dependability benchmarking,This short contribution first describes the role of fault injection among the dependability assessment methods that are a pertinent approach to the definition and development of dependability benchmarks. Specific problems and challenges faced by dependability benchmarking are then identified and some relevant advances are discussed.,2002,0, 1595,An improved neural network algorithm for classifying the transmission line faults,"This study introduces a new concept of an artificial intelligence-based algorithm for classifying the faults in power system networks. This classification identifies the exact type and zone of the fault. The algorithm is based on a unique type of neural network specially developed to deal with a large set of high-dimensional input data. An improvement of the algorithm is proposed by implementing various steps of input signal preprocessing, through the selection of parameters for analog filtering, and values for the data window and sampling frequency. In addition, an advanced technique for classification of the test patterns is discussed and the main advantages compared to the previously used nearest-neighbor classifier are shown.",2002,0, 1596,Initial Evaluation of the Fracture Behavior of Piezoelectric Single Crystals Due to Artificial Surface Defects,"This study is part of a new research program to develop fundamental understanding of the fracture and fatigue behavior of piezoelectric single crystals through the combination of computational and experimental approaches. In this work we present 1) experimental results on the creation of artificial surface defects in piezoelectric single crystals using a focused ion beam (FIB) system and 2) initial observations on the crystal's fracture behavior under an electrical field. The major advantage of using a FIB is that one can control the size, shape, and orientation of artificial defects precisely, allowing realistic surface defects, e.g., half-penny-shaped, 100 μm long, <1 μm wide, and 50 μm deep.
We have demonstrated that multiple artificial defects with varying inclination angles relative to the specimen's crystallographic orientation can be machined in a few hours. In this paper, we report the experimental details of the FIB milling, typical defect shape, and initial results on the effects of high electric field on the fracture behavior of single crystals.",2006,0, 1597,Assessment and elimination of errors due to electrode displacements in elliptical and square models in EIT,"This study modifies a Tikhonov regularized ""maximum a posteriori"" algorithm proposed for reconstructing both the conductivity changes and electrode positioning variations in EIT and uses this algorithm for reconstructing images of 2D elliptical and square models, instead of the simple circular model used in previous works. This algorithm was proposed by C. Gomez for compensating the errors due to electrode movements in image reconstruction. The Jacobian matrix is constructed via perturbation of both the conductivity and the electrode positioning. The prior image matrix should incorporate some kind of augmented inter-electrode positioning correlations to impose a smoothness constraint on both the conductivity change distribution and the electrode movement. For each model, the conductivity change image is reconstructed in three cases: a) with no electrode displacement using the standard algorithm, b) with electrode displacement using the standard algorithm, and c) with electrode displacement using the proposed algorithm. In all models, a comparison among the three cases has been made. Also, the results obtained from each model have been compared with the other models in similar cases. The results obtained in this study will be useful for investigating the ellipticity effects of organs being imaged in clinical applications. Moreover, the effects of model deviation from circular form on reconstructed images can be used in special industrial applications.",2010,0, 1598,Fault diagnosis techniques: Application to the thermoforming process,"This study presents two fault detection and isolation techniques for thermoforming reheat process actuators. The actuators are the top and bottom oven heating bands. The second technique presented, called the generalized Kalman filter scheme, is more innovative and showed better fault isolation compared to the first technique of this study, the direct Kalman filter scheme (DKFS).",2003,0, 1599,Ground distance relaying algorithm for high resistance fault,"This study proposes a new fault impedance estimation algorithm for phase-to-ground faults for ground distance relaying based on the negative-, zero- and comprehensive negative-zero-sequence current components. The principle is based on the assumption that the fault path is purely resistive, so that the phase angles of the fault-point voltage and the fault-path current are equal; this is used to construct the fault impedance estimation equations for the ground distance relay, which can eliminate the effects of fault path resistance, load current and power swing. PSCAD software simulations show the accuracy of the proposed algorithm.",2010,0, 1600,Fault-tolerant decision fusion via collaborative sensor fault detection in wireless sensor networks,"This work addresses fault-tolerant distributed decision fusion in the presence of sensor faults when local sensors sequentially send their local decisions to a fusion center. This work also proposes a collaborative sensor fault detection (CSFD) scheme for eliminating unreliable local decisions when performing distributed decision fusion.
In particular, the concept of pseudo-sensor faults is presented. Due to the difficulty in determining pseudo-sensor faults in real time based on the minimization of the error probability, an upper bound is established on the fusion error probability, where distributions of local decisions are not necessarily identical. Given a pre-designed fusion rule under the assumption of identical local decision rules in fault-free environments, this bound can then characterize the fusion error probability when local decisions are no longer identical due to sensor faults. Hence, a criterion is proposed based on this error bound to determine a set of pseudo-faulty nodes at each time. Once the fusion center identifies the pseudo-faulty nodes, all corresponding local decisions are removed from the computation of likelihood ratios adopted to make the final decision. To enhance the efficiency of the proposed search criterion, another, less complex criterion is developed by integrating the Kullback-Leibler distance. Simulation results indicate that this less complex criterion provides even better fault-tolerance capability in some situations when sensor faults significantly deviate from normally operating sensors. Performance evaluation results also indicate that the fault-tolerance capability of the proposed approach employing a CSFD scheme is superior to conventional decision fusion.",2008,0, 1601,Automatic detection of surface defects on rolled steel using Computer Vision and Artificial Neural Networks,"This work addresses the problem of automated visual inspection of surface defects on rolled steel, by using Computer Vision and Artificial Neural Networks. In recent years, the increasing throughput in the steel industry has made visual inspection a critical production bottleneck. In this scenario, to assure a high rolled steel quality, novel sensor-based technologies have been developed. Unlike most common techniques, which are frequently based on manual estimations that lead to significant time and financial constraints, we present an automatic system based on (i) image analysis techniques, such as the Hough Transform, to classify three defects with well-defined geometric shapes: welding, clamp and identification hole; and (ii) two well-known feature extraction techniques, Principal Component Analysis and Self-Organizing Maps, to classify three defects with complex shapes, specifically oxidation, exfoliation and waveform defect. To demonstrate the effectiveness of our system, we tested it on challenging real-world datasets, acquired in a rolling mill of the steel company ArcelorMittal. The system was successfully validated, achieving an overall accuracy of 87% and demonstrating its high potential to be applied in real scenarios.",2010,0, 1602,Software Faults Diagnosis in Complex OTS Based Safety Critical Systems,"This work addresses the problem of software fault diagnosis in complex safety critical software systems. The transient manifestations of software faults represent a challenging issue since they hamper a complete knowledge of the system fault model at design/development time. By taking into account existing diagnosis techniques, the paper proposes a novel diagnosis approach, which combines the detection and location processes. More specifically, detection and location modules have been designed to deal with partial knowledge about the system fault model. To this aim, they are tuned during system execution in order to improve diagnosis during system lifetime.
A diagnosis engine has been realized to diagnose software faults in a real-world middleware platform for safety critical applications. Preliminary experimental campaigns have been conducted to evaluate the proposed approach.",2008,0, 1603,Distributed Hierarchical Fault Management Paradigms of Satellite Network,"This work addresses fault management of satellite networks. According to the intrinsic characteristics of satellite networks, as well as the long-delay features and automatic operation requirements of satellite-ground communication, this paper gives a distributed hierarchical fault management paradigm based on cluster management, combined with the concept of clusters in satellite network management. Through an analysis of performance indexes for several network management paradigms, this paper presents the theoretical basis for the paradigm adopted for network fault management, and gives a performance analysis of the fault management mechanism of the distributed hierarchical fault paradigm through simulation experiments.",2009,0, 1604,An offset self-correction sample and hold circuit for precise applications in low voltage CMOS,"This work describes a new topology for low-voltage CMOS sample-and-hold circuits with self-correction of the offset voltage caused by mismatches in the differential input pair of the operational amplifier. The charge injection of the NMOS switches is an important factor and it is minimized in this topology. The results were obtained using the ACCUSIM II simulator on the AMS CMOS 0.8 μm CYE process, and they reveal the circuit has a reduced error of just 0.03% at the output.",2002,0,5236 1605,Error resilient video coding for wireless channels,"This work describes how the introduction of redundancy by means of coding constraints known at the decoder side improves the error resilience of a video communication scheme. The decoder can exploit the a priori knowledge of the constraints to verify the integrity of the decoded bitstream. If the transmission errors cause the violation of the coding constraints, the decoder detects an error and recovers the synchronization with the bitstream. The improved detection capability results in better localization of the decoded frame areas affected by transmission errors, so allowing more accurate error concealment. The adoption of coding constraints does not require any feedback channel between the decoder and the coder since the decoder can be notified of the constraints during the initial negotiation phase. The syntax of the coder output is not affected and the coded bitstream remains standard compliant. Therefore, this technique is quite general and it is well suited to many transport schemes, either packet switched or circuit switched. Furthermore, it can be easily integrated with different error resilience tools. The improvement achievable in the decoded image quality using the coding constraints is here shown in the case of H.264 coded video.",2004,0, 1606,A system for basic-level network fault management based on the GSM short message service (SMS),"This work describes the design and implementation of a distributed system which uses the wireless GSM short message service (SMS) data transfer technology to give network managers access to services on remote network equipment via a cellular phone. Existing work has been restricted to simple network monitoring, i.e.
one-way transmission of alert messages from a network monitoring application to designated mobile phones, through a GSM modem connected to the computer hosting the monitoring application. In our work, we develop a multi-user, multi-session, two-way network management system via SMS. First, we present the SMS mechanism and discuss its features and restrictions. Then, we analyze the requirements of our system, and present a distributed architecture design for it. Next we present in detail our implementation of the proposed system, based on Java servlet technology. Finally, we report our experiences with using it, and propose directions for future work.",2001,0, 1607,Integration of the universal earth-fault indicator into the distribution network SCADA system,"This work describes the possibilities for earth-fault detection in compensated neutral networks and presents one practical solution: a universal earth-fault indicator (UEI) developed to detect earth faults in compensated neutral cable networks, with the fault indicators connected to the distribution network SCADA system by radio signals. The universal earth-fault indicator is realized as a directional zero-sequence protection device based on comparing the time when the zero-sequence current and zero-sequence voltage occur with a given time. A block diagram of the universal earth-fault indicator, its working algorithms, etc., are presented. A pilot project, ""integration of the fault indicators into the distribution network SCADA system"", carried out at the Jelgava cable distribution networks in 2003, is reviewed.",2005,0, 1608,Exploring FPGA structures for evolving fault tolerant hardware,"This work explores different types of FPGA (field programmable gate array) structures for evolving fault tolerant hardware. A three-tier model for providing fault tolerance to the digital circuits evolved on FPGAs is proposed. This model combines the process level redundancy provided by the GA (genetic algorithm) based evolution techniques and the structural level redundancy supported by the FPGA architectures. Simulations using the ISCAS'89 benchmark circuits have been carried out to study the effect of granularity on the time taken for the evolution process, the dimensionality of the evolution and the number of solutions that need to be evolved for fault coverage. The effect of using a divide and conquer approach to reduce the time taken for evolution has been studied, proving that this is a feasible approach even for complex circuits.",2003,0, 1609,Fault isolation for device drivers,"This work explores the principles and practice of isolating low-level device drivers in order to improve OS dependability. In particular, we explore the operations drivers can perform and how fault propagation in the event a bug is triggered can be prevented. We have prototyped our ideas in an open-source multiserver OS (MINIX 3) that isolates drivers by strictly enforcing least authority and iteratively refined our isolation techniques using a pragmatic approach based on extensive software-implemented fault-injection (SWIFI) testing. In the end, out of 3,400,000 common faults injected randomly into 4 different Ethernet drivers using both programmed I/O and DMA, no fault was able to break our protection mechanisms and crash the OS.
In total, we experienced only one hang, but this appears to be caused by buggy hardware.",2009,0, 1610,An immune inspired fault diagnosis system for analog circuits using wavelet signatures,This work focuses on fault diagnosis of electronic analog circuits. A fault diagnosis system for analog circuits based on wavelet decomposition and artificial immune systems is proposed. It is capable of detecting and identifying faulty components in analog circuits by analyzing the circuit's impulse response. The use of wavelet decomposition for preprocessing of the impulse response drastically reduces the size of the detector used by the Real-valued Negative Selection Algorithm (RNSA). Results have demonstrated that the proposed system is able to detect and identify faults in a Sallen-Key bandpass filter circuit.,2004,0, 1611,H.264 video coding for low bit rate error prone channels: An application to TETRA systems,"This work investigates an H.264 coding scheme for video transmission over links characterized by heavy packet losses and low available bitrate. The H.264 resilient coding tools such as Flexible Macroblock Ordering, Redundant Slices and Arbitrary Slice Ordering are here tuned in order to adapt the application layer coding parameters to the physical layer's characteristics. Due to the limited bandwidth, the tools are differentiated on a Region Of Interest. Moreover, the Redundant Slices tool is integrated with suitable application level interleaving to counteract the bursty nature of the errors. The performances of the coding scheme choices are assessed on a TETRA communication channel, which is quite challenging due to both limited bandwidth and severe error conditions. However, the illustrated codec design criteria can be adopted in different low bit-rate, error prone channels.",2006,0, 1612,GA-Based Job Scheduling Strategies for Fault Tolerant Grid Systems,"This work mainly aims at the design of genetic algorithm-based scheduling strategies considering four different fault tolerance techniques in the grid environment: retry, migration, checkpointing, and replication. We also take into account the risk relationship between jobs and nodes to improve the system reliability in the scheduling algorithm. According to the simulation results, we find that the fault-tolerant algorithms perform better than the risky algorithm in makespan, average turnaround time, and job failure rate. The checkpoint algorithm has the best performance of all the algorithms. On the other hand, the retry algorithm is recommended for systems where job sizes are usually smaller, because of its simplicity. Finally, the replication algorithm is not suitable for the grid since it imposes too much overhead.",2008,0, 1613,Hybrid Error Concealment with Automatic Error Detection for Transmitted MPEG-2 Video Streams over Wireless Communication Network,"This work presents a complete error concealment system for overcoming visible distortions in video sequences which are transmitted over a lossy communication network. The system we propose provides an error concealment solution from the point at which the decoder receives the transmitted sequence until it is presented to viewers, without human interference. The system is composed of an automatic error detection algorithm and a decision tree error concealment algorithm. The performance of the detection algorithm is estimated, along with a performance evaluation of the decision tree algorithm by comparing it to three other error concealment methods.
The results are evaluated using two quality measures. We show that our error concealment method achieves the highest quality compared to the other methods for most of the conducted tests.",2006,0, 1614,Simulation based system level fault insertion using co-verification tools,"This work presents a simulation-based fault insertion environment, which allows faults to be ""injected"" into a Verilog model of the hardware. A co-verification platform is used to allow real, system level software to be executed in the simulation environment. A fault manager is used to keep track of the faults that are inserted onto the hardware and to monitor diagnostic messages to determine whether the software is able to detect, diagnose and/or cope with the injected fault. Examples are provided to demonstrate the capabilities of this approach as well as the resource requirements (time, system, human). Other benefits and issues of this approach are also discussed.",2004,0, 1615,A General Framework to Analyze the Fault-Tolerance of Unstructured P2P Systems,"This work presents a study on the fault-tolerance of unstructured P2P overlays, modeled as complex networks. A framework is proposed to derive the peers' degree distribution, once the P2P system is described through the evolution laws characterizing the distributed protocol and the attachment and failure rates. From the degree distribution, estimations may be derived on the mean number of m-neighbors, as well as the diameter of the net. We analyze three different P2P distributed protocols. The analytical tool is compared with results coming from simulation. Outcomes confirm that the approach can be employed to dynamically tune the peers' attachment rate and maintain the desired topology of the P2P network.",2010,0, 1616,Optimal bit allocation for maximum absolute error distortion in the application of JPEG2000 part 2,"This paper proposes a strategy to deal with bit rate allocation for the optimization of the maximum absolute error (MAE), or l-infinity, distortion metric in 3D data compression using JPEG2000. Part 2 of this standard has the capability to compress 3D data by treating data as separate 2D slices; these slices could be taken directly from the data or from the data after it has undergone a decorrelation transform (KLT) in one direction. To perform bit rate allocation we use a mixed model approximation to the MAE rate-distortion curve, which is used in an optimization algorithm. To solve the problem of MAE-related bit rate allocation in the KLT domain, we theoretically derive an upper bound for MAE based on the basis vectors of the KLT; we also develop an algorithm for optimizing this upper bound, and we illustrate how the minimization of this upper bound can decrease the actual MAE.",2004,0, 1617,Exploiting advanced fault localization methods for yield & reliability learning on SoCs,This paper proposes advances in fault localization methods suited to yield and reliability learning in VLSI CMOS technologies. Industrial methodologies and tools will be discussed and the experimental results obtained through their implementation will be presented.,2009,0, 1618,Instruction-based delay fault self-testing of processor cores,"This paper proposes an efficient methodology for delay fault testing of a processor core using its instruction set. These test vectors can be applied in the functional mode of operation; hence, self-testing of the processor core becomes possible. The path delay fault model is used.
The proposed approach uses a graph theoretic model (represented as an Instruction Execution Graph) of the datapath and a finite state machine model of the controller for the elimination of functionally untestable paths at an early stage without looking into the circuit details, and for the extraction of constraints for the paths that can potentially be tested. The Parwan processor is used to demonstrate the effectiveness of our method.",2004,0, 1619,Application of Entropy-Based Markov Chains Data Fusion Technique in Fault Diagnosis,"This paper proposes an entropy-based Markov chain (EMC) fusion technique to solve the problem of incomplete sample sets in the fault diagnosis field. Firstly, the concept of a probability Petri net is defined; it can calculate the probability of fault occurrence from the incidence matrix based on the complementary information. Secondly, a probability Petri net diagnostic model is designed from diagnostic rules obtained by the Skowron default rule generation method after the sample set is reduced by rough set theory. In order to simplify the framework of the diagnostic model, the Petri net model is designed in distributed form. Finally, depending on the diagnosis of the distributed diagnostic model, the EMC technique is used to obtain a consensus output if the places that represent faults in the model have several tokens. The diagnostic result is the consensus output with the maximum posterior probability after normalization. The design is illustrated by an example of rotating machinery fault diagnosis, and its availability is proved by a test sample set.",2008,0, 1620,Error tolerant DNA self-assembly by link-fracturing,"This paper proposes and evaluates link fracturing as an approach for error tolerance in self-assembly by utilizing a DNA chain as a link between two blocks of molecules. Through the use of restriction enzymes, link fracturing breaks the connecting DNA chain between two blocks if an incorrect assembly has occurred due to the erroneous growth of tiles. Two error tolerant techniques are proposed by fracturing of the DNA chain links, namely 1-link and 2-link. Using the tool Xgrow, simulations under the Kinetic Tile Assembly Model (KTAM) are performed. Results show that 2-link fracturing achieves an improvement in error rate as compared to a normal assembly; moreover this is accomplished with little overhead in assembly size and execution complexity. The 1-link method shows 100% error free growth with moderate overhead as compared to normal growth and other existing error tolerant methods.",2009,0, 1621,Control of an antiresonance hybrid capacitor system for power factor correction,"This paper proposes control of a three-phase antiresonance hybrid shunt-capacitor system for power factor correction. In general, shunt capacitors connected in series with reactors should be designed carefully before installation in order to avoid series and/or parallel harmonic resonance between the capacitors and line inductances. However, the system parameters are dynamically changed according to the power system configurations and loads. Consequently, harmonic resonance might occur after the capacitors have been installed. The main objective of the proposed hybrid capacitor system is to compensate for reactive power without any harmonic resonance. The hybrid system is a combination of three-phase shunt capacitors connected in series with a small-rating three-phase inverter without any matching transformer. The inverter is used to improve the characteristic of the capacitors.
As a result, no harmonic resonance occurs under any system condition. In this paper, simulation results verify the viability and the effectiveness of the proposed three-phase antiresonance hybrid shunt-capacitor system for reactive power compensation.",2004,0, 1622,Detection and Classification of Rolling-Element Bearing Faults using Support Vector Machines,"This paper proposes the development of support vector machines (SVMs) for detection and classification of rolling-element bearing faults. The training of the SVMs is carried out using the sequential minimal optimization (SMO) algorithm. In this paper, a mechanism for selecting adequate training parameters is proposed. This proposal makes the classification procedure fast and effective. Various scenarios are examined using two sets of vibration data, and the results are compared with those available in the literature that are relevant to this investigation.",2005,0, 1623,Integration of internal and external clock synchronization by the combination of clock-state and clock-rate correction in fault-tolerant distributed systems,"This paper proposes the integration of internal and external clock synchronization by a combination of a fault-tolerant distributed algorithm for clock state correction with a central algorithm for clock rate correction. By means of hardware and simulation experiments it is shown that this combination improves the precision of the global time base in a distributed single-cluster system while reducing the need for high-quality oscillators. Simulation results have shown that the rate-correction algorithm contributes not only to the internal clock synchronization of a single-cluster system, but can also be used for external clock synchronization of a multi-cluster system with a reference clock. Therefore, deployment of the rate-correction algorithm integrates internal and external clock synchronization in one mechanism. Experimental results show that a failure in the clock rate correction does not hinder the distributed fault-tolerant clock state synchronization algorithm, since the state correction operates independently from the rate correction. The paper introduces new algorithms and presents experimental results on the achieved improvements in the precision measured in a time-triggered system. Results of simulation experiments of the new algorithms in single-cluster and multi-cluster configurations are also presented.",2004,0, 1624,SVMs Interpolation Based Edge Correction Scheme for Color Filter Array,"To address the problem of blurring and visible artifacts around the edge regions of color filter array (CFA) interpolated images, a support vector machine (SVM) interpolation-based edge correction scheme is proposed. In this scheme, a simple CFA interpolation method is used, and support vector regression (SVR) is trained to rectify the color values at the edges of the resulting image. This scheme can produce visually pleasing full-color images and obtain better PSNR results than other conventional CFA interpolation algorithms. The correction is concentrated on the edge region, because human perception mainly focuses on edge regions. The scheme can reduce the training time effectively, and can be combined freely with other CFA interpolation algorithms.
Simulation studies indicate that the proposed algorithm is effective.",2009,0, 1625,Research of Real-time Forecast Technology in Fault of Missile Guidance System Based on Grey Model and Data Fusion,"To address fault forecasting in missile guidance systems, a new fault forecast method is presented, in which grey system theory and multi-sensor data fusion are used. Grey Model (GM) forecasting is invalid when the data sequence is a zero-mean random process; to overcome this drawback, an improved GM method is presented. The simulation results show that the fault forecast method has better performance in missile guidance systems.",2007,0, 1626,A Fault Grading Methodology for Software-Based Self-Test Programs in Systems-on-Chip,"Today, electronic devices are increasingly employed in different fields, including safety- and mission-critical applications, where the quality of the product is an essential requirement. In the automotive field, the Software-Based Self-Test is a dependability technique currently demanded by industrial standards. This paper presents an approach employed by STMicroelectronics for evaluating, or grading, the effectiveness of a Software-Based Self-Test procedure used for on-line testing of automotive microcontrollers to be included in safety-critical vehicle parts, such as airbags and steering systems.",2010,0, 1627,Concurrent error detection in block ciphers,"Today, encryption is widely used to incorporate privacy in data communications. Hardware implementations of encryption algorithms are fast enough to cope with the high throughput required in modern transmission channels. However, faults may occur in such circuits that can cause errors in encrypted text. A new technique is proposed to concurrently detect errors in block ciphers. It introduces very low area overhead in the system. In addition, a new encoding scheme is presented that has higher detection capabilities than other common error detection codes, when applied to encryption systems. Experiments conducted with widely used encryption algorithms (DES, RC5, IDEA and SKIPJACK) demonstrate the advantages of the proposed technique.",2000,0, 1628,A HW/SW Architecture to Reduce the Effects of Soft-Errors in Real-Time Operating System Services,"Today, real-time applications with stringent critical constraints are increasingly placed and run in environments with a real-time operating system (RTOS). The services provided by RTOSs are severely exposed to faults that affect both the functionality and timing of tasks running on the RTOS. In this paper, we propose an architecture for real-time operating systems which provides services that are more robust in terms of soft errors. We evaluate and analyze the robustness of services under soft errors in two architectures: SW-RTOS and HW/SW-RTOS. According to experimental results, the proposed HW/SW-RTOS architecture gives more robust services regarding soft errors versus purely software-based RTOSs.",2007,0, 1629,Error Control Mechanisms using CODEC,"Today's communication systems are susceptible to many types of errors and interferences. There are different types of error detection and correction mechanisms proposed by many authors. One of the important methods of error control is by means of encoding/decoding techniques at the source/destination.
In this context, we propose an evaluation process for a convolutional encoder and decoder (CODEC), for various constraint lengths and different generator polynomials, for given messages considering burst errors and distributed errors in the received messages. The 1/2 rate and hard-decision coding are considered in this paper. Performance evaluation in terms of the error correction capability of the CODEC is shown through simulation using two burst error recovery techniques.",2009,0, 1630,Memory Address Scrambling Revealed Using Fault Attacks,"Today's trend in the smart card industry is to move from ROM+EEPROM chips to Flash-only products. Recent publications have illustrated the vulnerability of Floating Gate memories to UV and heat radiation. In this paper, we explain how, by using low-cost means, such a vulnerability can be used to modify specific data within an EEPROM memory even in the presence of a given type of counter-measure. Using simple means, we devise a fault injection tool that consistently causes predictable modifications of the targeted memories' contents by flipping '1's to '0's. By mastering the location of those modifications, we illustrate how we can reverse-engineer a simple address scrambling mechanism in a white box analysis of a given EEPROM. Such an approach can be used to test the security of Floating Gate memories used in security devices like smart cards. We also explain how to prevent such attacks and we propose some counter-measures that can be implemented either on the hardware level by chip designers or on the software level in the Operating System interacting with those memories.",2010,0, 1631,The computation model of code error distortion based on the rate-distortion theory,"This paper starts from the end-to-end error distortion, constructs statistical models of the source encoding distortion, channel error distortion, intra-frame encoded macroblock error diffusion, and inter-frame encoded macroblock error diffusion, and brings forward a rate-distortion robust encoding control algorithm model with low computational cost. This method not only acquires the optimal control point on the rate-distortion curve from a global perspective under a code error environment, but also adaptively adjusts the bit allocation and the quantization parameter under a given network bandwidth. Furthermore, it can implement intra-frame macroblock refresh at the same time and play an important role in counteracting channel code errors to minimize the total distortion under a certain bit rate according to the current network packet loss probability, by which it establishes a joint source-channel rate-distortion model and applies the rate-distortion optimized solution method to optimally allocate the bit rate between source coding and channel coding. This method can provide a valuable reference for robust video coding, transmission and resource allocation in wireless environments.",2005,0, 1632,Stator Windings Fault Diagnostics of Induction Machines Operated From Inverters and Soft-Starters Using High-Frequency Negative-Sequence Currents,"This paper studies the application of high-frequency voltage excitation-based stator winding diagnostic methods to three-phase ac machines operated from power converters that create the necessary high-frequency excitation as part of their normal operation.
This paper focuses on two specific operating modes: 1) machines operated from inverters in the overmodulation region and 2) machines operated from soft-starters during startup. In both cases, high-frequency (in the range of a hundred hertz) voltage components at well-defined frequencies are created. The negative-sequence currents induced from these high-frequency voltages are shown to contain accurate information on the level of asymmetry (fault) in the machine. This information is significantly richer than exists in other modes of operation, i.e., inverters working in the linear modulation region or soft-starters in the steady state, and provides interesting opportunities to complement other diagnostic methods.",2009,0, 1633,Fault tolerant H∞ control for a class of nonlinear discrete-time systems: Using sum of squares optimization,"This paper studies the fault tolerant control (FTC) problem for a class of nonlinear discrete-time systems with a guaranteed H∞ performance objective in the presence of actuator faults. The mode of faults under consideration is a typical aberration of actuator effectiveness. The novelty of this paper is that the effect of the nonlinear terms is described as an index in order to transform the FTC design problem into a semi-definite programming (SDP) problem. The proposed optimization approach is to find a zero optimum for this index. Combined with the H∞ performance index, the conceived multi-objective optimization problem is solved using the sum of squares (SOS) method in a reliable and efficient way. A numerical example is included to verify the applicability of this new approach for nonlinear FTC synthesis.",2008,0, 1634,An analysis on the effect of transmission errors in real-time H.264-MVC Bit-streams,"This paper studies the quality of transmitted multi-view video when the corrupted packets are not discarded by the underlying protocols of the decoder. It assumes a wireless channel where the errors can be significant and implements solutions within the current H.264-MVC to reduce their impact on the video quality perceived by the user. The results show that transmission errors drastically reduce the quality of the reconstructed 3D video and confirm that a new type of error propagation between views exists. Furthermore, employing the Context Adaptive Variable Length Coding (CAVLC) entropy encoder, coding and transmitting the video streams in smaller packets, and having a small cyclic-Intra coded period, all improve the error resilience of the system.",2010,0, 1635,Distance correction system for localization based on linear regression and smoothing in ambient intelligence display,"This paper suggests a method for correcting the distance between an ambient intelligence display and a user based on linear regression and smoothing, by which the distance information of a user who approaches the display can be accurately output even in an unanticipated condition, using a passive infrared (PIR) sensor and an ultrasonic device. The developed system consists of an ambient intelligence display, an ultrasonic transmitter, and a sensor gateway. Each module communicates with the others through RF (radio frequency) communication. The ambient intelligence display includes an ultrasonic receiver and a PIR sensor for motion detection. In particular, this system dynamically selects and processes algorithms such as smoothing or linear regression for current input data processing through a judgment process that is determined using the previous reliable data stored in a queue.
In addition, we implemented GUI software in Java for real-time location tracking and an ambient intelligence display.",2008,0, 1636,A Survey of Mobile Agent-Based Fault-Tolerant Technology,This paper surveys the state of the art of agent-based fault tolerance techniques. Existing mobile agent-based fault-tolerant techniques are identified that prevent mobile agents from being blocked by a failure.,2005,0, 1637,Fault tolerant amplifier system using evolvable hardware,"This paper proposes the use of evolvable hardware (EHW) for providing fault tolerance to an amplifier system in a signal-conditioning environment. The system has to maintain a given gain despite the presence of faults, without direct human intervention. The hardware setup includes a reconfigurable system-on-chip device and an external computer where a genetic algorithm is running. For detecting a gain fault, we propose a software-based built-in self-test strategy that establishes the actual values of gain achievable by the system. The performance evaluation of the proposed fault tolerance strategy is made by adopting two different types of fault models. The fault simulation results show that the technique is robust and that the genetic algorithm finds the target gain with low error.",2010,0, 1638,Checkpointing virtual machines against transient errors,"This paper proposes VM-Checkpoint, a lightweight software mechanism for high-frequency checkpointing and rapid recovery of virtual machines. VM-Checkpoint minimizes checkpoint overhead and speeds up recovery by saving incremental checkpoints in volatile memory and by employing copy-on-write, dirty-page prediction, and in-place recovery. In our approach, knowledge of fault/error latency is used to explicitly address checkpoint corruption, a critical problem, especially when checkpoint frequency is high. We designed and implemented VM-Checkpoint in the Xen VMM. The evaluation results demonstrate that VM-Checkpoint incurs an average of 6.3% execution-time overhead for 50ms checkpoint intervals when executing the SPEC CINT 2006 benchmark.",2010,0, 1639,M³ filter with error ellipse control in mode probabilities calculation,"This paper provides the Multiple Maneuver Model (M³) filter with error ellipse control in the mode probabilities calculation. The conventional M³ filter has the problem that mode probabilities oscillate in long-range radars with a long sampling period. This oscillation is caused by extremely low likelihood functions in the mode probabilities calculation. In order to overcome the oscillation problem, the proposed M³ filter monitors quadratic forms of residuals and controls the size of error ellipses in the mode probabilities calculation. The results of a Monte Carlo simulation show that the proposed M³ filter is capable of reducing the oscillation. The tracking quality of the proposed M³ filter is also improved compared with that of the conventional M³ filter.",2003,0, 1640,Characterization of Physical Defects and Fault Analysis of Molecular and Nanoscaled Integrated Circuits,"This paper reports a concept for designing defect-tolerant molecular integrated circuits (MICs). The results are applicable to conventional ICs which utilize solid-state devices. By enhancing photolithography and other CMOS processes, advancing materials and optimizing devices, some device and circuit performance metrics were improved. Unfortunately, some key performance characteristics and capabilities were significantly degraded.
The performance tradeoffs and effects of the equivalent cell size reduction are well known. The defects and faults at the device and circuit levels must be accommodated. It is illustrated that, in general, the defects and faults can be accommodated.",2008,0, 1641,Application of the Sensitivity Analysis to the Optimal Design of the Microstrip Low-Pass Filter With Defected Ground Structure,"This paper shows how sensitivity analysis is applied for easier design and practical application of a planar half-wavelength low-pass filter (LPF) using a defected ground structure (DGS). Typically, it is difficult to deploy planar half-wavelength low-pass filters when high power durability is required because of the very narrow line-widths of the high impedance transmission line. Here, we propose a new configuration for the high impedance microstrip line using a DGS to allow broader line width and high power handling capability. The sensitivity of the scattering parameters was calculated using the self-adjoint sensitivity formula in order to determine the proposed filter's dimensions. The paper also highlights the validity of the proposed LPF optimization with its measured performance.",2009,0, 1642,Rotor fault detection of electrical machines by low frequency magnetic stray field analysis,"This paper shows the reliability of fault detection on electrical machines by analysis of the low frequency magnetic stray field. It is based on our own experience with the magnetic discretion of naval electrical propulsion machines. We try to apply the techniques developed in previous works on the subject to fault detection. In this paper we focus on rotor faults in a synchronous generator (eccentricity and short-circuits in the rotor). Two kinds of study are performed. The first one is numerical. Firstly, an adapted finite element method is used to compute the stray field around the device. However, this approach is difficult to apply to fault detection and is not well adapted. A new model, simpler and faster, is developed. Results are compared for both models. The second one is experimental and is carried out using a laboratory machine representative of a real high power generator and fluxgate magnetometers located around the device. Both studies show good agreement and demonstrate the reliability of the approach.",2005,0, 1643,Towards analyzing the fault-tolerant operation of server-CAN,"This work-in-progress (WIP) paper presents server-CAN and highlights its operation and possible vulnerabilities from a fault tolerance point of view. The paper extends earlier work on server-CAN by investigating the behaviour of server-CAN in faulty conditions. Different types of faults are described, and their impact on server-CAN is discussed, which is the subject of ongoing research",2005,0, 1644,Manipulator kinematic error model in a calibration process through quaternion-vector pairs,Three-dimensional modeling of rotations and translations in kinematic calibration is most commonly performed using homogeneous transformations. The paper presents a technique for the calibration of robots based on the quaternion-vector pair approach for the identification of geometrical errors.,2002,0, 1645,3D OCT eye movement correction based on particle filtering,"Three-dimensional optical coherence tomography (OCT) is a new ophthalmic imaging technique offering more detailed quantitative analysis of the retinal structure.
Eye movement during 3D OCT scanning, however, creates significant spatial distortions that may adversely affect image interpretation and analysis. Current software solutions must use additional reference images or B-scans to correct eye movement in a certain direction. The proposed particle filtering algorithm is an independent 3D alignment approach, which does not rely on any reference image. 3D OCT data is considered as a dynamic system, while the location of each A-scan is represented by the state space. A particle set is generated to approximate the probability density of the state. The state of the system is updated frame by frame to detect A-scan movement. Seventy-four 3D OCT images with eye movement were tested and subjectively evaluated by comparing them with the original images. All the images were improved after z-alignment, while 81.1% of the images were improved after x-alignment. The proposed algorithm is an efficient way to align 3D OCT volume data and correct the eye movement without using references.",2010,0, 1646,Optimization of triangulations based on serial fault data,"Three-dimensional reconstructions based on serial fault data can be divided into boundary contour splicing and end contour closure. In boundary contour splicing, the Delaunay triangulation algorithm can generate long, narrow triangles or radial shapes, while with end contour closure, the Delaunay triangulation based on the determination of the convex-concave vertices tends to generate long, narrow triangles and triangles whose sizes differ greatly, and in some cases fails. This paper presents a Delaunay triangulation algorithm based on the shortest distance first principle for boundary contour splicing and an improved algorithm which combines Delaunay triangulation based on the determination of convex-concave vertices with interpolation for end contour closure. The results show that the algorithms retain their original advantages while increasing triangulation effectiveness and enhancing their universality.",2009,0, 1647,Fault analysis of a PM brushless DC Motor using finite element method,"Three-phase trapezoidal back-EMF permanent magnet (PM) machines are used in many applications where reliability and fault tolerance are important requirements. Knowledge of the machine transient processes under various fault conditions is the key issue in evaluating the impact of a machine fault on the entire electromechanical system. The machine electrical and mechanical quantities whose transient behaviors are of importance under fault conditions include the voltages and currents of the coils and phases, the electromagnetic torque, and the rotor speed. Experimental tests based on real machines for such a purpose are impractical due to their high cost and difficulty. Computer simulation based on the finite element method has shown its effectiveness in fault studies in this paper. Before the finite element model was used to perform simulations under fault conditions, it was validated by test data under normal conditions. Three types of fault conditions (single-phase open circuit fault, phase-to-phase terminal short-circuit, and internal turn-to-turn short-circuit) have been studied.",2005,0, 1648,An Approach to Tilt Correction of Vehicle License Plate,"Tilt correction is a very important step in the automatic vehicle license plate recognition process.
Using principal component analysis (PCA), the image character coordinates are arranged into a two-dimensional covariance matrix, which is then centered. Then the feature vector and the rotation angle alpha are computed. The whole image is rotated by alpha and horizontal tilt correction of the image is performed. In the vertical tilt correction stage, two correction methods, namely the PCA method and the line fitting method using K-means clustering (LFMUKC), are proposed to compute the vertical tilt angle theta. A shear transform is applied to the rotated image and the final corrected image is obtained. The experimental results show that this approach can be implemented easily and can quickly and accurately obtain the tilt angle. It provides a new effective way for tilt correction of vehicle license plates.",2007,0, 1649,3D shape from multi-camera views by error projection minimization,"Traditional shape from silhouette methods compute the 3D shape as the intersection of the back-projected silhouettes in the 3D space, the so-called visual hull. However, silhouettes that have been obtained with background subtraction techniques often present misdetection errors (produced by false negatives or occlusions) which produce incomplete 3D shapes. Our approach deals with misdetections and noise in the silhouettes. We recover the voxel occupancy which describes the 3D shape by minimizing an energy based on an approximation of the error between the shape's 2D projections and the silhouettes. The energy also includes regularization and takes into account the visibility of the voxels in each view in order to handle self-occlusions.",2009,0, 1650,Novel frequency-domain-based technique to detect stator interturn faults in induction machines using stator-induced voltages after switch-off,"Traditionally, for medium- and high-voltage motors and generators, condition-based monitoring of stator faults is performed by measuring partial discharge activities. For low-voltage machines, negative-sequence impedance or currents are measured for the same. Such diagnostic schemes should be carefully implemented as supply voltage unbalance, manufacturing-related asymmetry, etc., also produce negative-sequence voltages. A few approaches based on motor current signature analysis have already been proposed to detect stator interturn faults. However, little or no physical insight was provided to explain the occurrence of certain harmonics in the line current or the influence of voltage unbalance on these harmonics. Also, in at least one of these papers, a large portion of the stator winding was shorted to emulate the faults. The method proposed in this paper monitors certain rotor-slot-related harmonics at the terminal voltage of the machine, once it is switched off. In the absence of supply voltage, issues such as voltage unbalance and time harmonics do not influence the measurements except as initial conditions, which is a very desirable feature when the machine is fed from an adjustable-speed drive. Satisfactory simulation and experimental results have been obtained with only about 1.5% (5/324) of the total number of turns shorted",2002,0, 1651,Infrared imaging trajectory correction fuze based GPS and accurate exploding-point control technology,"Trajectory correction is a crucial method for improving the shooting precision of ammunition, and implementing trajectory modification through the fuze is a vigorously developing technology abroad.
Based on research into smart ammunition, a new infrared imaging trajectory correction fuze aided by GPS and precision exploding-point control technology is proposed in this paper. During ammunition flight, the target location data detected by the infrared imaging detector are transmitted to the fire control system and command center over a wireless communication network; the fuze working mode and setting parameters, decided by the fire control computer according to the target characteristics and environmental parameters, are then transmitted by remote control information setting. At the same time, the real-time position data of the ammunition received by the GPS receiver are given to the precision exploding-point control system; the flight attitude of the ammunition, continuously calculated by the control system, is compared with theoretical attitude data at specific points of the theoretical trajectory, and the trajectory is corrected by firing a gas generator at the appropriate time. The control and detonation orders for the aimed warhead are issued so that it strikes the target accurately at the intended point. In addition, the circuit for remote fuze information setting, the GPS receiver, the precision exploding-point control system, and the system software design are detailed. The results indicate that the infrared imaging fuze used for trajectory correction can improve shooting precision and damage efficiency.",2008,0, 1652,Improved Technique for Fault Detection Sensitivity in Transformer Maintenance Test,"Transformer windings might be shifted because of short-circuit current, aging or impact during transportation. The shift modifies the dielectric space between the layers of the windings and may cause an insulation breakdown. Since transformers are expensive to replace, it is vital that their condition is determined accurately without having to dismantle the apparatus to inspect it visually. Generally, transformer testing is performed for maintenance purposes by either the low voltage impulse (LVI) test or the sweep frequency response analysis (SFRA) test. Both methods have been adopted in industrial applications. Nonetheless, they have drawbacks, including a limited frequency range for the LVI test and time-consuming measurements for the SFRA test. To obtain better signature analysis and to increase the detection sensitivity in the transformer maintenance test, this paper suggests a new input signal using a random pulse sequence (RPS) in the transfer function analysis. The results of the RPS test are compared against the LVI and SFRA tests to complete the assessments.",2007,0, 1653,Scalable Program Comprehension for Analyzing Complex Defects,"We describe the query-model-refine (QMR) approach for retrieving and modeling information for program comprehension. The QMR approach allows the flexibility to design and execute application-specific problem-solving strategies to suit particular program comprehension goals. The QMR approach has been used for building a number of program comprehension tools for different applications: interactive automatic parallelization, business rule analysis, auditing safety-critical control software, and defect analysis. This presentation will be about a program comprehension tool called Atlas and we will show its use for analyzing complex defects.",2008,0, 1654,A Minimax Chebyshev Estimator for Bounded Error Estimation,"We develop a nonlinear minimax estimator for the classical linear regression model assuming that the true parameter vector lies in an intersection of ellipsoids.
We seek an estimate that minimizes the worst-case estimation error over the given parameter set. Since this problem is intractable, we approximate it using semidefinite relaxation, and refer to the resulting estimate as the relaxed Chebyshev center (RCC). We show that the RCC is unique and feasible, meaning it is consistent with the prior information. We then prove that the constrained least-squares (CLS) estimate for this problem can also be obtained as a relaxation of the Chebyshev center that is looser than the RCC. Finally, we demonstrate through simulations that the RCC can significantly improve the estimation error over the CLS method.",2008,0, 1655,Error probability analysis of TAS/MRC-based scheme for wireless networks [point-to-point link example],"We develop a framework to analyze the symbol-error probability (SEP) for the scheme integrating transmit antenna selection (TAS) with maximal-ratio combining (MRC) used in wireless networks. Applying this scheme, the transmitter always selects an optimal antenna out of all possible antennas based on channel state information (CSI) feedback. Over a flat-fading Rayleigh channel, we develop closed-form SEP expressions for a set of commonly used constellations when assuming perfect and delayed CSI feedback, respectively. We also derive the Chernoff bounds of the SEPs for both perfect and delayed feedback. Our analyses show that while the antenna diversity improves the system performance, the feedback delay can significantly impact the SEP of TAS/MRC schemes.",2005,0, 1656,"A case study: validation of guidance control software requirements for completeness, consistency and fault tolerance","We discuss a case study performed for validating a natural language (NL) based software requirements specification (SRS) in terms of completeness, consistency, and fault-tolerance. A partial verification of the Guidance and Control Software (GCS) Specification is provided as a result of analysis using three modeling formalisms. Zed was applied first to detect and remove ambiguity from the GCS partial SRS. Next, Statecharts and Activity-charts were constructed to visualize the Zed description and make it executable. The executable model was used for the specification testing and fault injection to probe how the system would perform under normal and abnormal conditions. Finally, a Stochastic Activity Networks (SANs) model was built to analyze how fault coverage impacts the overall performability of the system. In this way, the integrity of the SRS was assessed. We discuss the significance of this approach and propose approaches for improving performability/fault tolerance",2001,0, 1657,Error Control for IPTV over xDSL Networks,"We discuss the necessity of error control for supporting IPTV over imperfect access networks. In particular, we consider typical DSL environments, and examine the physical-layer impairments and error-mitigation techniques. For these networks, we evaluate the performance of two different application-layer Forward Error Correction (FEC) methods. An overview of hybrid error-control methods and recent developments in standardization is also presented.",2008,0, 1658,On concurrent error detection with bounded latency in FSMs,"We discuss the problem of concurrent error detection (CED) with bounded latency in finite state machines (FSMs). The objective of this approach is to reduce the overhead of CED, albeit at the cost of introducing a small latency in the detection of errors.
In order to ensure no loss of error detection capabilities as compared to CED without latency, an upper bound is imposed on the introduced latency. We examine the necessary conditions for performing CED with bounded latency, based on which we extend a parity-based method to permit bounded latency. We formulate the problem of minimizing the number of required parity bits as an integer program and we propose an algorithm based on linear program relaxation and randomized rounding to solve it. Experimental results indicate that allowing a small bounded latency reduces the hardware cost of the CED circuitry.",2004,0, 1659,Instruction-Level Impact Comparison of RT- vs. Gate-Level Faults in a Modern Microprocessor Controller,"We discuss the results of an extensive fault simulation study involving the control logic of a modern alpha-like microprocessor. In this comparative study, faults are injected in both the RT- and the gate-level descriptions of the design and are simulated under an actual workload of the microprocessor, which is executing SPEC2000 benchmarks. The objective of this study is to analyze and contrast the impact of RT- and gate-level faults on the instruction execution flow of the microprocessor. The key observation is a pronounced consistency in the type and frequency of instruction level errors (ILEs) arising due to RT- vs. gate-level faults. The motivation for this work stems from the need to understand the relative importance of low-level faults based on their instruction-level impact, in order to appropriately allocate error detection and/or correction resources. Hence, the consistency revealed through this study implies that such decisions can be made equally effectively based on RT-level fault simulation results as with their far more computationally expensive gate-level equivalents.",2009,0, 1660,The performance of checkpointing and replication schemes for fault tolerant mobile agent systems,"We evaluate the performance of checkpointing and replication schemes for fault tolerant mobile agent systems. For the quantitative comparison, we have implemented an experimental system on top of the Mole mobile agent system and also built a simulation system to include various failure cases. Our experiment aims to gain insight into the behavior of agents under the two schemes and provide a guideline for fault tolerant system design. The experimental results show that the checkpointing scheme has very stable performance; for the replication scheme, some controllable system parameter values should be chosen carefully to achieve the desired performance.",2002,0, 1661,Performance comparison of centralized versus distributed error recovery for reliable multicast,"We examine the impact of the loss recovery mechanism on the performance of a reliable multicast protocol. Approaches for loss recovery in reliable multicast can be divided into two major classes: centralized (source-based) recovery and distributed recovery. For both classes we consider the state of the art: for centralized recovery, an integrated transport layer scheme using parity multicast for error recovery (hybrid ARQ type 2) as well as timer-based feedback suppression, and for distributed recovery, a scheme with local data multicast retransmission and feedback processing in a local neighborhood. We also evaluate the benefits of combining the two approaches into distributed error recovery (DER) with local retransmissions using a type 2 hybrid ARQ scheme.
The schemes are evaluated for up to 10^6 receivers under different loss scenarios with respect to network bandwidth usage and completion time of a reliable transfer. We show that using DER with type 2 hybrid ARQ gives the best performance in terms of bandwidth and latency. For networks where local retransmission is not possible, we show that a centralized protocol based on type 2 hybrid ARQ comes close to the performance of a protocol with local retransmissions",2000,0, 1662,Fault-tolerant high-performance matrix multiplication: theory and practice,"We extend the theory and practice regarding algorithmic fault-tolerant matrix-matrix multiplication, C=AB, in a number of ways. First, we propose low-overhead methods for detecting errors introduced not only in C but also in A and/or B. Second, we show that, theoretically, these methods will detect all errors as long as only one entry is corrupted. Third, we propose a low-overhead roll-back approach to correct errors once detected. Finally, we give a high-performance implementation of matrix-matrix multiplication that incorporates these error detection and correction methods. Empirical results demonstrate that these methods work well in practice while imposing an acceptable level of overhead relative to high-performance implementations without fault-tolerance.",2001,0, 1663,Constraint Based Automated Synthesis of Nonmasking and Stabilizing Fault-Tolerance,"We focus on constraint based automated addition of nonmasking and stabilizing fault-tolerance to hierarchical programs. We specify legitimate states of the program in terms of constraints that should be satisfied in those states. To deal with faults that may violate these constraints, we add recovery actions while ensuring interference freedom among the recovery actions added for satisfying different constraints. Since the constraint based approach is well known to be applicable in the manual design of nonmasking fault tolerance, we expect our approach to have a significant benefit in the automation of fault tolerant programs. We illustrate our algorithms with three case studies: stabilizing mutual exclusion, stabilizing diffusing computation, and a data dissemination problem in sensor networks. With experimental results, we show that the complexity of synthesis is reasonable and that it can be reduced using the structure of the hierarchical systems. To our knowledge, this is the first instance where automated synthesis has been successfully used in synthesizing programs that are correct under fairness assumptions. Moreover, in two of the case studies considered in this paper, the structure of the recovery paths is too complex to permit existing heuristic based approaches for adding recovery.",2009,0, 1664,Reducing human error in simulation in General Motors,"We focus on the steps taken to minimize human error in simulation modeling in General Motors. While errors are costly and undesirable in any field, they are especially harmful in simulation, which has been struggling to gain acceptance in the business world for a long time. The solution discussed can be summarized as ""enter the data once and use the best tool for the job"".",2003,0, 1665,Correctness of a Fault-Tolerant Real-Time Scheduler and its Hardware Implementation,"We formalize the correctness of a fault-tolerant scheduler in a time-triggered architecture. Where previous research elaborated on real-time protocol correctness, we extend this work to gate-level hardware.
This requires a sophisticated analysis of analog bit-level synchronization and transmission. Our case-study is a concrete automotive bus controller (ABC), inspired by the FlexRay standard. For a set of interconnected ABCs, vulnerable to sudden failure, we prove at gate-level that all operating ABCs are synchronized tightly enough such that messages are broadcast correctly. This includes formal arguments for startup, failures, and reintegration of nodes at arbitrary times. To the best of our knowledge, this is the first effort tackling fault-tolerant scheduling correctness at gate-level.",2008,0, 1666,Residual error models for the SOLT and SOLR VNA calibration algorithms,"Uncertainty calculation of vector network analyzers (VNAs) using the SOLT or SOLR calibration algorithms is often performed using residual directivity, match and tracking. In the literature the uncertainty equations are often stated without a derivation from a proper model equation. In this paper we derive the model equations for both the SOLT and SOLR calibrations; the two cases do not result in the same model equation. The results are also compared to the commonly used expressions for uncertainty in the EA guidelines for VNA evaluation. For one-port measurements our results confirm the expressions in the EA guide, but for two-ports there are significant differences. The symbolically derived model equations are verified using numerical simulations.",2007,0, 1667,"Particle Swarm Optimization Based GM(1,2) Method on Day-Ahead Electricity Price Forecasting with Predicted Error Improvement","In a deregulated environment, accurate electricity price forecasting is a crucial issue for all participants. Experience shows that it is very difficult for a single forecasting model to improve forecasting accuracy due to the complicated factors affecting electricity prices. A particle swarm optimization (PSO) based GM(1,2) method for day-ahead electricity price forecasting with predicted error improvement is proposed, in which the moving average method is used to process the raw series, the PSO based GM(1,2) model is applied to the processed series, and time series analysis is used to further improve the predicted errors. A numerical example based on historical data from the PJM market shows that the method can reflect the characteristics of electricity prices better and that the forecasting accuracy is improved considerably compared with the conventional GM(1,2) model. The forecasted prices are accurate enough to be used by market participants to prepare their bidding strategies.",2010,0, 1668,A real-time fault diagnosis system for UPS based on FFT frequency analysis,"A UPS provides emergency power when utility power is not available, so the reliability of a UPS is even more important than that of inverter drive systems. In this paper, a fault diagnosis system for UPS is proposed using FFT frequency analysis of the inverter-side output current of the UPS under linear and nonlinear load conditions. A software PLL for precise synchronization of one-period sampling and a double buffer memory for real-time processing are proposed.
Experimental results show an increase of even harmonics, including a dc offset, under fault conditions such as increased resistance and delayed or misfired IGBT turn-on, and prove the feasibility of a UPS fault diagnosis system if the criteria for fault decisions are well defined.",2010,0, 1669,The computer-aided rolling bearing faults diagnosis system,"Using the decomposition and reconstruction method of wavelet analysis, the impact components caused by ball bearing surface faults were extracted. We developed a computer based fault diagnostic system to determine the characteristic frequencies of the ball bearing by dynamic testing. The system has been successfully applied in ball bearing fault diagnosis. Results show that the system increases the efficiency and accuracy compared with the traditional method.",2010,0, 1670,A Novel Visual Programming Method Designed for Error Rate Reduction,"Using present programming environments, it is effortless to design intuitive visual programs. However, this only implements visualization of the programming result: the programming process itself is not fully visualized, so such environments are not visual programming languages in the true sense. According to past research, visual programming languages have mostly been designed for special areas or for education, making them narrowly targeted. A programming language made up of a series of graphic expressions is called a visual programming language. The goal of our research is therefore to construct a visual programming language that facilitates programming with graphic components, is easy for both programmers and non-programmers to comprehend, and, most importantly, can reduce the error rate caused by traditional textual input and reduce the workload of lexical analysis and semantic parsing.",2008,0, 1671,"A Monte-Carlo Simulation Package, Multiple Comparison Corrections and Power Estimation Incorporating Secondary Supportive Evidence","Various approaches have been proposed to account for the family-wise type-I errors in neuroimaging studies. This study introduces new global features as alternatives to address the multiple-comparison issue. These global features can serve as alternative brain indices whose theoretical type-I error calculations are unknown. A Monte-Carlo simulation package was used to calculate the family-wise type-I error of the newly introduced global features, as well as the conventional multiple comparison corrected p-values related to the height of the statistic (and cluster size) of interest in situations where random field theory based p-values might be valid. In addition, this package was designed to perform statistical power analyses, taking multiple comparisons into consideration for the conventional statistics and the new global features. The behaviors of the global index type-I error thresholds as a function of the degrees of freedom (D) of the t-distribution were investigated. Data from an oxygen-15 water PET study of right hand movement was used to illustrate the use of the global features and their type-I error and statistical power. With this PET example, we showed the superior statistical power of some global indices in cases where there were moderate changes over a relatively large brain volume.
We believe that the global features and the calculation of type-I errors/statistical powers by the computer simulation package provide researchers with alternative ways to account for multiple comparisons in neuroimaging studies.",2007,0, 1672,Evaluation of fault-tolerant policies using simulation,"Various mechanisms for fault-tolerance (FT) are used today in order to reduce the impact of failures on application execution. In the case of system failure, standard FT mechanisms are checkpoint/restart (for reactive FT) and migration (for pro-active FT). However, each of these mechanisms creates an overhead on application execution, an overhead that becomes critical on large-scale systems, where previous studies have shown that applications may spend more time checkpointing state than performing useful work. In order to decrease this overhead, researchers try to both optimize existing FT mechanisms and implement new FT policies, for instance combining reactive and pro-active approaches in order to decrease the number of checkpoints that must be performed during the application's execution. However, currently no solutions exist which enable the evaluation of these FT approaches through simulation; instead, experiments must be done using real platforms. This increases complexity and limits experimentation into alternate solutions. This paper presents a simulation framework that evaluates different FT mechanisms and policies. The framework uses system failure logs for the simulation with a default behavior based on logs taken from the ASCI White at Lawrence Livermore National Laboratory. We evaluate the accuracy of our simulator by comparing simulated results with those taken from experiments done on a 32-node compute cluster. Therefore such a simulator can be used to develop new FT policies and/or to tune existing policies.",2007,0, 1673,On the relationships of faults for Boolean specification based testing,"Various methods of generating test cases based on Boolean specifications have previously been proposed. These methods are fault-based in the sense that test cases are aimed at detecting particular types of faults. Empirical results suggest that these methods are good at detecting particular types of faults. However, there is no information on the ability of these test cases to detect other types of faults. The paper summarizes the relationships of faults in a Boolean expression in the form of a hierarchy. A test case that detects the faults at the lower level of the hierarchy will always detect the faults at the upper level of the hierarchy. The hierarchy helps us to better understand the relationships of faults in a Boolean expression, and hence to select fault-detecting test cases in a more systematic and efficient manner",2001,0, 1674,A Compression Error and Optimize Compression Algorithm for Vector Data,"Vector data compression plays an important role in research areas such as terrain environment simulation, cartographic generalization, GIS and digital entertainment. It can increase the storage capacity of mobile devices and improve the transmission efficiency of vector data on networks. In this study, a new compression error is proposed by analyzing the existing compression errors of vector data. For single-entity and multi-entity vector data, a compression method based on dynamic programming is given.
For multi-entity vector data compression in particular, a method is used which combines compression ratio and compression error in a weighted average to distribute the compression nodes. Experimental results show that this compression method can better reflect the behavior of vector data and has higher compression efficiency.",2009,0, 1675,A process to reduce reproducibility error in VNA measurements,"Vector Network Analyzers have proven to be useful for characterizing the electrical properties of passive interconnects and determining their ability to transmit high speed signals. It is highly desirable to have a measurement process that is both accurate and precise. Because of the complexity of the measurement, there are many potential factors that could affect its precision. For example, when taking probed measurements, operators typically use different methods to align the probes, which often introduce subtle variations in measurements. Additionally, calibration algorithms (or procedures) may have slight differences that introduce errors as well. This paper will present a method to identify the largest sources of variation that impact the precision of the measurement. The method is based on an extension of the analysis of variance (ANOVA) so that it can be applied to complex variables.",2010,0, 1676,Lazy verification in fault-tolerant distributed storage systems,"Verification of write operations is a crucial component of Byzantine fault-tolerant consistency protocols for storage. Lazy verification shifts this work out of the critical path of client operations. This shift enables the system to amortize verification effort over multiple operations, to perform verification during otherwise idle time, and to have only a subset of storage-nodes perform verification. This paper introduces lazy verification and describes implementation techniques for exploiting its potential. Measurements of lazy verification in a Byzantine fault-tolerant distributed storage system show that the cost of verification can be hidden from both the client read and write operation in workloads with idle periods. Furthermore, in workloads without idle periods, lazy verification amortizes the cost of verification over many versions and so provides a factor of four higher write bandwidth when compared to performing verification during each write operation.",2005,0, 1677,Fault Detection Framework for Video Surveillance Systems,"We consider cameras whose outputs do not reflect true scenes as faulty cameras. To build a fault detection video surveillance system without using additional hardware devices, we use the video outputs from cameras to do self-checking. We study two categories of faults: spatial faults and temporal faults, and reduce the two sub-problems into graph theoretical problems on two graphs (the surveillance sharing graph (SSG) and the surveillance partitioning graph (SPG)). We prove a theoretical upper bound for the spatial fault detection, and develop two algorithms for detecting the two types of faults respectively. Then, we integrate both in a framework, which is capable of the following: given the outputs from a video surveillance system, it isolates cameras which are faulty or suspected to be faulty. It gives warnings with fault types, locations, and detection confidence.
Our experiments confirm the effectiveness of the framework's methodologies.",2008,0, 1678,Finite automata approximations with error bounds for systems with quantized actuation and measurement: a case study,"We consider stable, discrete time, first order LTI systems with finite input alphabets and quantized outputs. We propose an algorithm for generating deterministic finite state machine approximations of these systems with computable bounds on approximation error, and we describe the conditions under which the bounds are valid.",2004,0, 1679,A random sets framework for error analysis in estimating geometric transformations: a first order analysis,"We consider the problem of estimating the geometric deformation of an object with respect to some reference observation of it. Existing solutions, set in the standard coordinate system imposed by the measurement system, lead to high-dimensional, non-convex optimization problems. In earlier work, we proposed a novel framework that employs a set of non-linear functionals to replace this originally high-dimensional problem with an equivalent problem that is linear in the unknown transformation parameters. The non-linearity of the employed functionals implies that using standard methods for analyzing the estimation errors is complicated, and is tractable only under a high SNR assumption. In this paper we present an entirely different approach for deriving the statistics of the estimator. The basic principle of this novel approach is based on the understanding that since our goal is to estimate the geometric transformation, the appropriate noise model for the problem is a model that explicitly relates the presence of noise and the measures of the geometric entities in the observed image. This approach naturally leads to very efficient estimation procedures and alleviates the need for restrictive assumptions made in previous work.",2008,0, 1680,ASFALT: A Simple Fault-Tolerant Signature-based Localization Technique for Emergency Sensor Networks,"We consider the problem of robust node deployment and fault-tolerant localization in wireless sensor networks for emergency and first response applications. Signature-based localization algorithms are a popular choice for use in such applications due to the non-uniform nature of the sensor node deployment. However, random destruction/disablement of sensor nodes in such networks adversely affects the deployment strategy as well as the accuracy of the corresponding signature-based localization algorithm. In this paper, we first model the phenomenon of sensor node destruction as a non-homogeneous Poisson process and derive a robust and efficient strategy for sensor node deployment based on this model. Next, we outline a protocol, called Group Selection Protocol, that complements current signature-based algorithms by reducing localization errors even when some nodes in a group are destroyed. Finally, we propose a novel yet simple localization technique, ASFALT, that improves the efficiency of the localization process by combining the simplicity of range-based schemes with the robustness of signature-based ones. Simulation experiments are conducted to verify the performance of the proposed algorithms.",2007,0, 1681,Fault-tolerant network reliability and importance analysis using binary decision diagrams,"We consider the two-terminal reliability and link importance analysis of fault tolerant network systems in this paper.
Two practical issues, imperfect coverage (IPC) and common-cause failures (CCF), which have generally been ignored by existing network models, are incorporated. The methodology is to separate the consideration of both IPC and CCF from the combinatorics of the solution and then solve the reduced problems using binary decision diagrams (BDD). The application and advantages of the proposed separable approach are illustrated using a concrete analysis of an example network system. Due to the consideration of IPC and CCF, our approach can evaluate a wider class of practical network systems as compared with existing network approaches. Due to the nature of the BDD and the separation of IPC and CCF from the solution combinatorics, our approach has low computational complexity and is easy to implement. Systems without IPC or CCF appear as special cases of our approach.",2004,0, 1682,Robust radiometric terrain correction for SAR image comparisons,"We demonstrate a robust technique for radiometric terrain correction, whereby terrain-induced modulations of the radiometry of SAR imagery are modelled and corrected. The resulting normalized images may be more easily compared with other data sets acquired at different incidence angles, even opposing look directions. We begin by reviewing the radar equation, pointing out simplifications often made to reduce the complexity of calculating the backscatter coefficient, normalized either by ground area (sigma0), or illuminated area projected into the look direction (gamma0). The integral over the illuminated area is often approximated by a scale factor modelling a simple planar slope, departing only slightly from ""ideal"" flat terrain: for gamma0, the radar brightness (beta0) is normalized via modulation with the tangent of the local incidence angle. We quantify the radiometric errors introduced by ignoring terrain variations, comparing results based on (a) a robust radar image simulation-based approach properly modelling variations in local illuminated area, and (b) an ellipsoidal Earth assumption. A second simplification often made in solving for backscatter using the radar equation is the assumption that the local antenna gain does not vary significantly from a simple model draping the antenna gain pattern (AGP) across an Earth ellipsoid, returning the local antenna gain as a function of slant range alone. In reality, the AGP is draped across the Earth's rolling terrain; retrieval of properly calibrated backscatter values should model these variations and compensate for them: although smaller than the errors caused by not properly modelling variations in local illuminated area, they can be significant. We use well-calibrated and annotated ENVISAT ASAR images acquired over Switzerland to show how robust radiometric terrain correction, incorporating models for the variations of local illuminated area with terrain, enables calibrated mixing of imagery acquired at differing incidence angles. Only robust retrieval of backscatter values enables such inter-mode comparisons - a capability that significantly reduces the required revisit time for monitoring changes to the radar backscatter. In conclusion, we describe a technique for combining a set of terrain-geocoded and radiometrically calibrated images derived from ascending and descending passes and multiple incidence angles to create composite radar backscatter maps. At each point, the contribution of each image to the composite is weighted according to its local resolution.
The resulting composite image manifests relatively uniform high ground resolution, even in highly mountainous terrain.",2004,0, 1683,Active Data Selection for Sensor Networks with Faults and Changepoints,"We describe a Bayesian formalism for the intelligent selection of observations from sensor networks that may intermittently undergo faults or changepoints. Such active data selection is performed with the goal of taking as few observations as necessary in order to maintain a reasonable level of uncertainty about the variables of interest. The presence of faults/changepoints is not always obvious and therefore our algorithm must first detect their occurrence. Having done so, our selection of observations must be appropriately altered. Faults corrupt our observations, reducing their impact; changepoints (abrupt changes in the characteristics of data) may require the transition to an entirely different sampling schedule. Our solution is to employ a Gaussian process formalism that allows for sequential time-series prediction about variables of interest along with a decision theoretic approach to the problem of selecting observations.",2010,0, 1684,Software-implemented fault detection for high-performance space applications,"We describe and test a software approach to overcoming radiation-induced errors in spaceborne applications running on commercial off-the-shelf components. The approach uses checksum methods to validate results returned by a numerical subroutine operating subject to unpredictable errors in data. We can treat subroutines that return results satisfying a necessary condition having a linear form; the checksum tests compliance with this condition. We discuss the theory and practice of setting numerical tolerances to separate errors caused by a fault from those inherent in finite-precision numerical calculations. We test both the general effectiveness of the linear fault tolerant schemes we propose, and the correct behavior of our parallel implementation of them",2000,0, 1685,Adapting to dynamic registration errors using level of error (LOE) filtering,"We describe our initial work on generating augmented reality (AR) displays in the face of dynamically changing errors in the pose (position and orientation) of both the user and objects in the world. Dealing with this problem is particularly important in mobile AR environments, where the tracking accuracy of the user's head can change frequently and dramatically as she moves between areas with radically different tracking systems, such as in and out of buildings. We introduce the notion of level of error filtering, analogous to level of detail culling in 3D graphics systems, to help programmers build interfaces that automatically adapt to changing registration errors",2000,0, 1686,A Motion-Based Selective Error Protection Method for Scalable Video Over Error-Prone Channel,"Video transmission over unreliable networks introduces new challenges in video coding. Due to the predictive coding techniques, the effect of channel errors on the decoded video can be extremely severe when the compressed video is transmitted over an error-prone channel. In this paper, the problem of scalable video transmission over error-prone channels is addressed. It is proposed to selectively add forward error correction (FEC) codes to partial information of the compressed bit-stream based on the motion activity of the input video.
In addition, unequal error protection (UEP) is applied to the selected data of different temporal layers in a group of pictures (GOP), where the channel rates are optimally allocated. The experimental results show that our proposed method performs well, with an improvement of up to 1.2 dB.",2007,0, 1687,Neural vision sensors for surface defect detection,"Vision sensors are built from a camera and intelligent hardware and/or software. Steadily decreasing microelectronic costs have spawned a large number of vision sensor applications, such as surface defect detection. A constructive method for defect detection entails a mixture of mathematical and intelligent modules. Such a heterogeneous modular system can be realized in many ways. In this paper we discuss a packet-switched implementation on a macro-enriched field-programmable gate-array.",2004,0, 1688,Semantic-oriented error correction for spoken query processing,"Voice input is often required in many new application environments such as telephone-based information retrieval, car navigation systems, and user-friendly interfaces, but the low success rate of speech recognition makes it difficult to extend its application to new fields. Popular approaches to increasing recognition accuracy by post-processing of the recognition results have been researched, but previous approaches to post error correction were mainly lexical-oriented. We suggest a new semantic-oriented approach to correct both semantic and lexical errors, which is also more accurate, especially for domain-specific speech error correction. Through extensive experiments using a speech-driven in-vehicle telematics information application, we demonstrate the superior performance of our approach and some advantages over previous lexical-oriented approaches.",2003,0, 1689,Fault-Tolerant Overlay Protocol Network,"Voice over Internet Protocol (VoIP) and other time critical communications require a level of availability much higher than that of the typical transport network supporting traditional data communications. These critical command and control channels must continue to operate and remain available in the presence of an attack or other network disruption. Even disruptions of short duration can severely damage, degrade, or drop a VoIP connection. Routing protocols in use today can dynamically adjust for a changing network topology. However, they generally cannot converge quickly enough to continue an existing voice connection. As packet switching technologies continue to erode traditional circuit switching applications, some methodology or protocol must be developed that can support these traditional requirements over a packet-based infrastructure. We propose the use of a modified overlay tunneling network and associated routing protocols called the fault tolerant overlay protocol (FTOP) network. This network is entirely logical; the supporting routing protocol may be greatly simplified due to the overlay's ability to appear fully connected. Therefore, ensuring confidentiality and availability is much simpler using traditional cryptographic isolation and VPN technologies. Empirical results show that for substrate networks, convergence time may be as high as six to ten minutes. However, the FTOP overlay network has been shown to converge in a fraction of a second, yielding an observed two-order-of-magnitude convergence time improvement.
This unique ability enhances availability of critical network services, allowing operation in the face of substrate network disruption caused by malicious attack or other failure",2006,0, 1690,Ride-through alternatives of Adjustable Speed Drives (ASD's) during fault conditions,Voltage unbalance or sag conditions generated by power system faults can have a significant negative impact on ASDs (Adjustable Speed Drives). A practical ride-through scheme based on a supercapacitor/battery as an energy storage device for ASDs is presented in this paper. The energy storage module is connected to support the DC-link voltage during power system faults. The performance of ASDs under normal and fault conditions is first simulated in MATLAB Simulink and then experimentally verified. Data acquisition (DAQ) boards from National Instruments along with LabVIEW software have been used to record the observed waveforms.,2010,0, 1691,Evaluation of the volumetric erosion of spherical electrical contacts using the defect removal method,"Volumetric erosion is regarded as a significant index for studying the erosion process of electrical switching contacts. Three-dimensional (3-D) surface measurement techniques provide an approach to investigate the geometric characteristics and volumetric erosion of electrical contacts. This paper presents a concrete data-processing procedure for evaluating the volumetric erosion of spherical electrical contacts from 3-D surface measurement data using the defect removal method (DRM). The DRM outlined by McBride is an algorithm for evaluating the underlying form (prior to erosion) parameters of surfaces with localized erosion, allowing the erosion characteristics themselves to be isolated. In this paper, a number of spherical electrical contacts that had undergone various electrical operations were measured using a 3-D surface profiler, the underlying form parameters of the eroded contacts were evaluated using the DRM, and then the volumetric erosions were isolated and calculated. The analysis of the correlations between the volumetric erosion and the number of switching cycles of electrical operation that the contacts had undergone showed a more accurate and reliable volumetric erosion evaluation using the DRM than that without using the DRM",2006,0, 1692,Research on Fault-tolerant Mechanism of Integrating Water-Domain Oriented Computing Resources,"The water domain grid platform, a grid platform based on cycle stealing technology, is used to harness idle computing resources in one or several labs at one or several sites for its low cost and high performance. Volatility is the key challenge of this kind of platform, and a fault is generated when a computing node leaves the platform. How to make these volatile nodes work together without being influenced by generated faults is therefore a key issue. This paper presents a fault tolerance architecture aiming at minimizing generated faults. Once faults are generated, other idle computing nodes in the platform can continue executing the unfinished task immediately.
Finally, some experiments based on this platform show that the framework performs well in dealing with fault tolerance in the water-domain-oriented integrated computing resource platform.",2009,0, 1693,Concurrent error detection in wavelet lifting transforms,"Wavelet transforms, central to multiresolution signal analysis and important in the JPEG2000 image compression standard, are quite susceptible to computer-induced errors because of their pipelined structure and multirate processing requirements. Such errors emanate from computer hardware, software bugs, or radiation effects from the surrounding environment. Implementations use lifting schemes, which employ update and prediction estimation stages, and can spread a single numerical error caused by failures to many output transform coefficients without any features to warn data users. We propose an efficient method to detect the arithmetic errors using weighted sums of the wavelet coefficients at the output compared with an equivalent parity value derived from the input data. Two parity values may straddle a complete multistage transform or several values may be used, each pair covering a single stage. There is greater error-detecting capability at only a slight increase in complexity when parity pairs are interspersed between stages. With the parity weighting design scheme, a single error introduced at a lifting section can be detected. The parity computation operation is properly viewed as an inner product between weighting values and the data, motivating the use of dual space functionals related to the error gain matrices. The parity weighting values are generated by a combination of dual space functionals. An iterative procedure for evaluating the design of the parity weights has been incorporated in Matlab code and simulation results are presented.",2004,0, 1694,Lightweight Fault-Tolerance for Peer-to-Peer Middleware,"We address the problem of providing transparent, lightweight, fault-tolerance mechanisms for generic peer-to-peer middleware systems. The main idea is to use the peer-to-peer overlay to provide for fault-tolerance rather than support it higher up in the middleware architecture, e.g. in the form of services. To evaluate our approach we have implemented a fault-tolerant middleware prototype that uses a hierarchical peer-to-peer overlay in which the leaf peers connect to sensors that provide data streams. Clients connect to the root of the overlay and request streams that are routed upwards through intermediate peers in the overlay up to the client. We report encouraging preliminary results for latency, jitter and resource consumption for both the non-faulty and faulty cases.",2010,0, 1695,MV distribution and neighborhood availability based error concealment order for video stream,"Transmission of compressed video over error prone channels may result in packet losses or errors, which can significantly degrade the image quality. Aiming at this problem, this paper proposes an adaptive concealment order based on the well-known boundary matching algorithm. The concealment order is carefully chosen according to a lost MB's priority, which is formulated considering two factors: the MV (motion vector) distribution in the lost MB's area and its available neighborhood information.
Compared with Chen's work, the experiments show that our proposal consistently achieves better video recovery performance under channels with different packet loss rates.",2009,0, 1696,An spatial error propagation reduction based temporal error concealment for 1Seg Video broadcasting,"Transmission of compressed video over error prone channels may result in packet losses or errors, which can significantly degrade the image quality. Error concealment (EC) is used to recover the lost data using the redundancy in videos. 1Seg is a recently widely used mobile TV service, which is one of the services of ISDB-T (Integrated Services of Digital Broadcasting-terrestrial) in Japan and Brazil. In 1Seg, errors are drastically increased and lost areas are contiguous. Therefore the errors in earlier concealed MBs (macro blocks) may propagate to the MBs later to be concealed inside the same frame (spatial domain), so-called spatial error propagation (SEP). Aiming at SEP reduction, this paper proposes a SEP reduction based EC (SEP-EC). In SEP-EC, besides the mismatch distortion in the current MB, the potential propagated mismatch distortion in the following MBs to be concealed is also minimized. Compared with previous work, SEP-EC achieves better image recovery in both subjective and objective observation. The experiments under 1Seg broadcasting simulation confirmed this.",2009,0, 1697,Video Error Concealment Using Spatio-Temporal Boundary Matching,"Transmission of videos in error prone environments may lead to video corruption or loss. Therefore, error concealment at the decoder side has to be applied. Commonly, error concealment techniques make use of the surrounding correctly received image data or motion information for concealment. In this paper, a novel spatio-temporal boundary matching algorithm (STBMA) exploiting both spatial and temporal information to reconstruct the lost motion vectors (MV) is proposed, and a new side smoothness measurement is also introduced. By using the motion vector that is found by the proposed algorithm, the lost macro block (MB) can be recovered. Compared with the well-known boundary matching algorithm (BMA), the proposed algorithm is able to achieve higher PSNR as well as better visual quality.",2009,0, 1698,The NanoBox project: exploring fabrics of self-correcting logic blocks for high defect rate molecular device technologies,"Trends indicate that emerging process technologies, including molecular computing, experience an increase in the number of noise-induced errors and device defects. In this paper, we introduce the NanoBox, a logic lookup table bit string. In this way, we contain and self-correct errors within the lookup table, thereby presenting a robust logic block to higher levels of logic design. We explore five different NanoBox coding techniques. We also examine the cost of implementing two different circuit blocks using a homogeneous fabric of NanoBox logic elements: 1) a floating point control unit from the IBM Power4 microprocessor and 2) a four-instruction ALU. In this initial investigation, our results are not meant to draw definitive conclusions about any specific NanoBox implementation, but rather to spur discussion and explore the feasibility of fine-grained error correction techniques in molecular computing systems.",2004,0, 1699,Design and realization of fault diagnostic system for trunking base station based on LabVIEW and Visual Basic,"The trunking communication system is a sort of specialized command and control system.
It is widely used in rescue work and public safety emergencies. The peculiar occasions of its application require a more efficient fault diagnostic system. The rapid development of virtual instruments provides a new approach to the fault diagnosis of communication equipment. In this paper, the construction and the realization of a fault diagnostic system for a trunking base station are discussed and described in detail. The VI, which contains a signal acquisition unit and a signal processing unit, is developed in LabVIEW, and the fault diagnosing unit is developed in Visual Basic. The fault diagnosing unit analyzes the physical parameters and the waveform provided by the VI. Finally, the fault diagnostic result is given, and the scheme for resolving faults is provided. The analysis shows that this kind of fault diagnostic method can be used in other communication equipment.",2010,0, 1700,Influence of quantization on the bit-error performance of turbo-decoders,"Turbo-codes are under consideration for third generation mobile communication systems. For any implementation of a turbo-decoder, a fixed-point representation is mandatory. Usually, the bit-width of a fixed-point implementation has to be traded off versus its decoding performance. Past approaches towards a fixed-point representation show a degradation in performance. In this paper we apply a novel quantization methodology on turbo-decoders with (Max-)Log-MAP component decoders and present simulation results for both an AWGN and a Rayleigh fading channel model. The new quantization scheme leads to fixed-point implementations that do not degrade the bit-error performance. Under certain conditions it can even be slightly improved",2000,0, 1701,Spatial and Temporal Error Concealment Techniques for Video Transmission Over Noisy Channels,"Two novel error concealment techniques are proposed for video transmission over noisy channels in this work. First, we present a spatial error concealment method to compensate for a lost macroblock in intra-coded frames, in which no useful temporal information is available. Based on selective directional interpolation, our method can recover both smooth and edge areas efficiently. Second, we examine a dynamic mode-weighted error concealment method for replenishing missing pixels in a lost macroblock of inter-coded frames. Our method adopts a decoder-based error tracking model and combines several concealment modes adaptively to minimize the mean square error of each pixel. The method is capable of concealing lost packets as well as reducing the error propagation effect. Extensive simulations have been performed to demonstrate the performance of the proposed methods in error-prone environments",2006,0, 1702,Analyzing and Modeling Open Source Software Bug Report Data,"We analyzed the major differences between closed and open source software development from a software reliability perspective. We examined real-world bug report data from six open source software projects, as well as the relationship between the release cycles and the stability characteristics of the bug report data of open source software projects. We then modeled the bug report data using nonparametric techniques.
The experimental results suggest that generalized additive models and exponential smoothing approaches are suitable for the estimation of software reliability, at least for some of the open source projects.",2008,0, 1703,Detecting Software Faults in Distributed Systems,"We are concerned with the problem of detecting faults in distributed software, rapidly and accurately. We assume that the software is characterized by events or attributes, which determine operational modes; some of these modes may be identified as failures. We assume that these events are known and that their probabilistic structure, in their chronological evolution, is also known, for a finite set of different operational modes. We propose and analyze a sequential algorithm that detects changes in operational modes rapidly and reliably. Furthermore, a threshold operational parameter of the algorithm effectively controls the induced tradeoff between speed, correct detection, and false detection.",2009,0, 1704,Intrinsic Spatial Resolution and Parallax Correction Using Depth-Encoding PET Detector Modules Based on Position-Sensitive APD Readout,"We are developing PET detectors with depth of interaction (DOI) capability based on lutetium oxyorthosilicate (LSO) scintillator arrays coupled at both ends to position-sensitive avalanche photodiodes (PSAPDs). The detector module consists of a 5×5 block of LSO crystals (1.5×1.5×20 mm3) coupled to 8×8 mm2 PSAPDs at both ends. We present the intrinsic spatial resolution for two complete modules and the radial resolution component obtained with and without parallax correction using DOI information. DOI resolution was measured to be ~3 mm. Intrinsic spatial resolution (FWHM) averaged 1.15 mm for opposing crystal pairs. The detectors were then rotated and moved closer together, consistent with a 16 cm diameter cylindrical geometry scanner. Resolution was measured at positions corresponding to a radial offset of 4.7 cm and 7 cm, with the annihilation photons incident on the detector surface at 30° and 52°, respectively. At an angle of 52°, the intrinsic spatial resolution was degraded to 7.7 mm. By incorporating DOI information, the measured spatial resolution (FWHM) was improved to 1.8 mm. This demonstrates that approximately 90% of the resolution degradation due to the parallax error can be removed using these depth-encoding detectors for a field of view with a diameter that is 87.5% of the detector separation",2006,0, 1705,The check-pointed and error-recoverable MPI JAVA library of agent teamwork grid computing middleware,"We are implementing a fault-tolerant mpiJava API on top of the AgentTeamwork grid-computing middleware system. Our mpiJava implementation consists of the mpiJava API, the GridTcp socket library, and the user program wrapper, each providing a user with the standard mpiJava functions, facilitating message-recording/error-recovering socket connections, and monitoring a user process. This paper presents the application framework, mpiJava implementation, and communication performance in AgentTeamwork.",2005,0, 1706,Understanding earthquake fault systems using QuakeSim analysis and data assimilation tools,"We are using the QuakeSim environment to model interacting fault systems. One goal of QuakeSim is to prepare for the large volumes of data that spaceborne missions such as DESDynI will produce.
QuakeSim has the ability to ingest distributed heterogeneous data in the form of InSAR, GPS, seismicity, and fault data into various earthquake modeling applications, automating the analysis when possible. Virtual California simulates interacting faults in California. We can compare output from long time-history Virtual California runs with the current state of strain and the strain history in California. In addition to spaceborne data, we will begin assimilating data from UAVSAR airborne flights over the San Francisco Bay Area, the Transverse Ranges, and the Salton Trough. Results of the models are important for understanding future earthquake risk and for providing decision support following earthquakes. Improved models require this sensor web of different data sources, and a modeling environment for understanding the combined data.",2009,0, 1707,The effect of registration error on tracking distant augmented objects,"We conducted a user study of the effect of registration error on performance of tracking distant objects in augmented reality. Categorizing error by types that are often used as specifications, we hoped to derive some insight into the ability of users to tolerate noise, latency, and orientation error. We used measurements from actual systems to derive the parameter settings. We expected all three errors to influence users' ability to perform the task correctly and the precision with which they performed the task. We found that high latency had a negative impact on both performance and response time. While noise consistently interacted with the other variables, and orientation error increased user error, the differences between ""high"" and ""low"" amounts were smaller than we expected. Results of users' subjective rankings of these three categories of error were surprisingly mixed. Users believed noise was the most detrimental, though statistical analysis of performance refuted this belief. We interpret the results and draw insights for system design.",2008,0, 1708,Guaranteed error correction rate for a simple concatenated coding scheme with single-trial decoding,"We consider a concatenated coding scheme using a single inner code, a single outer code, and a fixed single-trial decoding strategy that maximizes the number of errors guaranteed to be corrected in a concatenated codeword. For this scheme, we investigate whether maximizing the guaranteed error correction rate, i.e., the number of correctable errors per transmitted symbol, necessitates pushing the code rate to zero. We show that this is not always the case for a given inner or outer code. Furthermore, to maximize the guaranteed error correction rate over all inner and outer codes of fixed dimensions and alphabets, the code rate of one (but not both) of these two codes should be pushed to zero",2000,0, 1709,Fault tolerance for arithmetic and logic unit,"Very large scale integration (VLSI) technology has evolved to a level where large systems, previously implemented as printed circuit boards with discrete components, are integrated into a single integrated circuit (IC). But aggressive new chip design technologies frequently adversely affect chip reliability during functional operation. Reliability is of critical importance in situations where a computer malfunction could have catastrophic results.
Reliability is used to describe systems in which repair is not feasible (as in computers on board satellites), in which the computer serves a critical function and cannot be lost even for the duration of a replacement (as in flight control computers on an aircraft), or in which the repair is prohibitively expensive. The use of concurrent error detection schemes in order to achieve the high reliability requirements of modern computer systems is becoming an important design technique. The present paper describes the implementation of error-detection mechanisms for detecting faults within the arithmetic logic unit (ALU). The Boolean unit of the ALU uses duplication of hardware with comparison as the error detection mechanism. The arithmetic unit of the ALU uses residue codes as the error detection mechanism. If a fault is detected in the ALU, it is replaced with the spare ALU, which makes error correction possible. We will compare this fault tolerance mechanism with the current fault-tolerance mechanisms (triple redundancy with a single voting scheme and triple modular redundancy with a triplicated voting mechanism).",2009,0, 1710,Analysis of Packet Loss for Compressed Video: Effect of Burst Losses and Correlation Between Error Frames,"Video communication is often afflicted by various forms of losses, such as packet loss over the Internet. This paper examines the question of whether the packet loss pattern, and in particular, the burst length, is important for accurately estimating the expected mean-squared error distortion resulting from packet loss of compressed video. We focus on the challenging case of low-bit-rate video where each P-frame typically fits within a single packet. Specifically, we: 1) verify that the loss pattern does have a significant effect on the resulting distortion; 2) explain why a loss pattern, for example a burst loss, generally produces a larger distortion than an equal number of isolated losses; and 3) propose a model that accurately estimates the expected distortion by explicitly accounting for the loss pattern, inter-frame error propagation, and the correlation between error frames. The accuracy of the proposed model is validated with H.264/AVC coded video and previous frame concealment, where for most sequences the total distortion is predicted to within ±0.3 dB for burst loss of length two packets, as compared to prior models which underestimate the distortion by about 1.5 dB. Furthermore, as the burst length increases, our prediction is within ±0.7 dB, while prior models degrade and underestimate the distortion by over 3 dB. The proposed model works well for video-telephony-type sequences with low to medium motion. We also present a simple illustrative example of how knowledge of the effect of burst loss can be used to adapt the schedule of video streaming to provide improved performance for a burst loss channel, without requiring an increase in bit rate.",2008,0, 1711,Network-adaptive Selection of Transport Error Control (NASTE) for Video Streaming over WLAN,"Video streaming over wireless networks is inherently vulnerable to burst packet losses caused by dynamic wireless channel variations with time-varying fading and interference. To alleviate this limitation, especially in the transport layer, error control schemes based on FEC (forward error correction), ARQ (automatic repeat request), interleaving, and their hybrid are essential.
However, each error control mode shows different performance according to the target application requirement and the channel status. Thus, in this paper, we propose a network-adaptive selection of transport error control (NASTE), where transport-layer error control modes are dynamically selected and applied. First, an effective embedded (software-based) realization of error control modes is proposed to support the flexible mode switching for NASTE. We then present a practical yet effective mode switching algorithm by linking channel/application monitoring, table-guided mode switching decision, and subsequent fine-tuning of error-control mode parameters. Finally, we have implemented a prototype video streaming system with NASTE support and verified it over an IEEE 802.11g WLAN (wireless LAN) environment. The experimental results indicate that the proposed mechanism can enhance the overall transport performance by recovering packet losses and thus improves the quality of video streaming over WLAN.",2007,0, 1712,Error concealment scheme implemented in H.264/AVC,"Video transmission over noisy channels, like wireless channels, leads to errors in the video. The effect of information loss is worse in the case of compressed video transmission. With growing interest in compressed video transmission over such environments, error concealment is becoming more important. In this paper we describe an error concealment scheme which uses weighted pixel averaging to obtain each pixel value of a lost macroblock in intra-coded pictures, as well as the method used for error concealment in inter-coded pictures. This method uses a boundary matching approach. We focus on performance evaluation of the error concealment technique implemented in the JM reference software, whereby we used the extended profile in the encoder.",2008,0, 1713,DEM Error Retrieval by Analyzing Time Series of Differential Interferograms,"Two-pass differential synthetic aperture radar interferometry processing has been successfully used by the scientific community to derive velocity fields. Nevertheless, a precise digital elevation model (DEM) is necessary to remove the topographic component from the interferograms. This letter presents a novel method to detect and retrieve DEM errors by analyzing time series of differential interferograms. The principle of the method is based on the comparison of fringe patterns with the perpendicular baseline. First, a mathematical description of the algorithm is exposed. Then, the algorithm is applied to a series of four one-day European Remote Sensing 1 and 2 satellite (ERS-1/2) interferograms.",2009,0, 1714,Use of Code Error and Beat Frequency Test Method to Identify Single Event Upset Sensitive Circuits in a 1 GHz Analog to Digital Converter,"Typical test methods for characterizing the single event upset performance of an analog to digital converter (ADC) have involved holding the input at static values. As a result, output error signatures are seen for only a few input voltages and output codes. A test method using an input beat frequency and output code error detection allows an ADC to be characterized with a dynamic input at a high frequency. With this method, the impact of an ion strike can be seen over the full code range of the output.
The error signatures from this testing can provide clues to which area of the ADC is sensitive to an ion strike.",2008,0,5242 1715,Defect characterization using an ultrasonic array to measure the scattering coefficient matrix,"Ultrasonic nondestructive evaluation is used for detection, characterization, and sizing of defects. The accurate sizing of defects that are of similar or less size than the ultrasonic wavelength is of particular importance in assessing structural integrity. In this paper, we demonstrate how measurement of the scattering coefficient matrix of a cracklike defect can be used to obtain its size, shape, and orientation. The scattering coefficient matrix describes the far field amplitude of scattered signals from a scatterer as a function of incident and scattering angles. A finite element (FE) modeling procedure is described that predicts the scattering coefficient matrix of various cracklike defects. Experimental results are presented using a commercial 64-element, 5 MHz array on 2 aluminum test samples that contain several machined slots and through thickness circular holes. To minimize the interference from the reflections of neighboring defects, a subarray approach is used to focus ultrasound on each target defect in turn and extract its scattering coefficient matrices. A circular hole and a fine slot can be clearly distinguished by their different scattering coefficient matrices over a specific range of incident angles and scattering angles. The orientation angles of slots directly below the array are deduced from the measured scattering coefficient matrix to an accuracy of a few degrees, and their lengths are determined with an error of 10%.",2008,0, 1716,P3C-6 Role of Ultrasonic Velocity Estimation Errors in Assessing Inflammatory Response and Vascular Risk,"Ultrasound has great potential for accurate estimation of velocity gradients near blood vessel walls for measuring wall shear stress (WSS) at high spatial resolution. Arterial sites of low and oscillating WSS promote inflammatory responses that increase the risk of developing atherosclerotic plaques. We implemented broadband coded excitation techniques on a commercial scanner to estimate WSS with high spatial and temporal resolution. Ultrasonic measurement errors were quantified over the shear range of 0.3-1.5 Pa, where errors slowly increase with WSS. Expression of cellular adhesion molecules (CAM) associated with atherosclerosis development was also investigated over a similar range of shear stress (0-1.6 Pa) to study the impact of registering shear-mediated CAM expression to incorrect WSS estimates. Ultrasonic measurement errors generated the largest uncertainties in assessing endothelial cell function in the shear range where the sensitivity of CAM expression was high. For VCAM-1, errors near WSS = 0.4 Pa were most important, while for E-selectin errors near WSS = 0.8 Pa were greatest. These data help to guide the design of new ultrasonic techniques for monitoring vascular shear stress in patients particularly at the potential sites of early atherogenesis",2006,0, 1717,Transient fault models and AVF estimation revisited,"Transient faults (also known as soft-errors) resulting from high-energy particle strikes on silicon are typically modeled as single bit-flips in memory arrays. Most Architectural Vulnerability Factor (AVF) analyses assume this model. 
However, accelerated radiation tests on static random access memory (SRAM) arrays built using modern technologies show evidence of clustered upsets resulting from single particle strikes. In this paper, these observations are used to define a scalable fault model capable of representing fault multiplicities. Applying this model, a probabilistic framework for incorporating vulnerability of SRAM arrays to different fault multiplicities into AVF is proposed. An experimental fault injection setup using a detailed microarchitecture simulation running generic benchmarks was used to demonstrate vulnerability characterization in light of the new fault model. Further, rigorous fault injection is used to demonstrate that conventional methods of AVF estimation overestimate vulnerability by up to 7× for some structures.",2010,0, 1718,Using Process-Level Redundancy to Exploit Multiple Cores for Transient Fault Tolerance,"Transient faults are emerging as a critical concern in the reliability of general-purpose microprocessors. As architectural trends point towards multi-threaded multi-core designs, there is substantial interest in adapting such parallel hardware resources for transient fault tolerance. This paper proposes a software-based multi-core alternative for transient fault tolerance using process-level redundancy (PLR). PLR creates a set of redundant processes per application process and systematically compares the processes to guarantee correct execution. Redundancy at the process level allows the operating system to freely schedule the processes across all available hardware resources. PLR's software-centric approach to transient fault tolerance shifts the focus from ensuring correct hardware execution to ensuring correct software execution. As a result, PLR ignores many benign faults that do not propagate to affect program correctness. A real PLR prototype for running single-threaded applications is presented and evaluated for fault coverage and performance. On a 4-way SMP machine, PLR provides improved performance over existing software transient fault tolerance techniques with 16.9% overhead for fault detection on a set of optimized SPEC2000 binaries.",2007,0, 1719,Automatic selection of recognition errors by respeaking the intended text,"We investigate how to automatically align spoken corrections with an initial speech recognition result. Such automatic alignment would enable one-step voice-only correction in which users simply respeak their intended text. We present three new models for automatically aligning corrections: a 1-best model, a word confusion network model, and a revision model. The revision model allows users to alter what they intended to write even when the initial recognition was completely correct. We evaluate our models with data gathered from two user studies. We show that providing just a single correct word of context dramatically improves alignment success from 65% to 84%. We find that a majority of users provide such context without being explicitly instructed to do so. We find that the revision model is superior when users modify words in their initial recognition, improving alignment success from 73% to 83%. We show how our models can easily incorporate prior information about correction location and we show that such information aids alignment success.
Last, we observe that users speak their intended text faster and with fewer re-recordings than if they are forced to speak misrecognized text.",2009,0, 1720,Measuring experimental error in microprocessor simulation,"We measure the experimental error that arises from the use of non-validated simulators in computer architecture research, with the goal of increasing the rigor of simulation-based studies. We describe the methodology that we used to validate a microprocessor simulator against a Compaq DS-10L workstation, which contains an Alpha 21264 processor. Our evaluation suite consists of a set of 21 microbenchmarks that stress different aspects of the 21264 microarchitecture. Using the microbenchmark suite as the set of workloads, we describe how we reduced our simulator error to an arithmetic mean of 2%, and include details about the specific aspects of the pipeline that required extra care to reduce the error. We show how these low-level optimizations reduce average error from 40% to less than 20% on macrobenchmarks drawn from the SPEC2000 suite. Finally, we examine the degree to which performance optimizations are stable across different simulators, showing that researchers would draw different conclusions, in some cases, if using validated simulators",2001,0, 1721,FDTD Study of Defect Modes in Two-Dimensional Silver Metallo-Dielectric Photonic Crystal,We model a defective silver metallo-dielectric photonic crystal by the finite-difference time-domain (FDTD) method. The Drude model with parameters fit to empirical data was used.,2007,0, 1722,An Evolving Model of Software Bug Reports,"We model software bug reports as a topological network called the reporter network. By statistical analysis we find that the reporter network displays a number of features (scale-free, small-world, etc.) shared by other complex networks. In order to understand the origins of these features, an evolving complex network model is proposed for the first time. The experimental results show that the model is able to reproduce many of the statistical properties of the reporter network. Moreover, the scaling exponents of the power-law distribution are calculated analytically. The calculation results agree well with simulation results.",2009,0, 1723,Fault Analysis and Parameter Identification of Permanent-Magnet Motors by the Finite-Element Method,"We performed a time-stepping finite-element-method (FEM) analysis to study a rotor surface-mounted permanent-magnet synchronous machine with an insulation failure inter-turn fault. We used FEM for the magnetic field study and for determining the machine parameters under various fault conditions. We studied the effect of machine pole number and number of faulted turns on machine parameters. Finally, we used the FEM machine model for studying permanent-magnet machine behavior under different fault conditions.",2009,0, 1724,Characterizing and predicting which bugs get fixed: an empirical study of Microsoft Windows,"We performed an empirical study to characterize factors that affect which bugs get fixed in Windows Vista and Windows 7, focusing on factors related to bug report edits and relationships between people involved in handling the bug. We found that bugs reported by people with better reputations were more likely to get fixed, as were bugs handled by people on the same team and working in geographical proximity. We reinforce these quantitative results with survey feedback from 358 Microsoft employees who were involved in Windows bugs.
Survey respondents also mentioned additional qualitative influences on bug fixing, such as the importance of seniority and interpersonal skills of the bug reporter. Informed by these findings, we built a statistical model to predict the probability that a new bug will be fixed (the first known one, to the best of our knowledge). We trained it on Windows Vista bugs and got a precision of 68% and recall of 64% when predicting Windows 7 bug fixes. Engineers could use such a model to prioritize bugs during triage, to estimate developer workloads, and to decide which bugs should be closed or migrated to future product versions.",2010,0, 1725,How a cyber-physical system can efficiently obtain a snapshot of physical information even in the presence of sensor faults,We present a distributed algorithm for cyber-physical systems to obtain a snapshot of sensor data. The snapshot is an approximate representation of sensor data; it is an interpolation as a function of space coordinates. The new algorithm exploits a prioritized medium access control (MAC) protocol to efficiently transmit information of the sensor data. It scales to a very large number of sensors and it is able to operate in the presence of sensor faults.,2008,0, 1726,A fault-tolerant technique for scheduling periodic tasks in real-time systems,"We present a heuristic for producing a fault-tolerant schedule of given periodic tasks in distributed real-time systems. Tasks are divided into two classes according to their task utilization. In order to recover from faults, a hybrid scheme based on space redundancy and time redundancy is used. We use a very simple and fast heuristic to provide fault tolerance and reduce time overhead in case of transient faults in distributed real-time systems. We show that our approach can improve processor utilization.",2004,0, 1727,Optimizing testing efficiency with error-prone path identification and genetic algorithms,"We present a method for optimizing software testing efficiency by identifying the most error prone path clusters in a program. We do this by developing variable length genetic algorithms that optimize and select the software path clusters which are weighted with sources of error indexes. Although various methods have been applied to detecting and reducing errors in a whole system, there is little research into partitioning a system into smaller error prone domains for testing. Exhaustive software testing is rarely possible because it becomes intractable for even medium sized software. Typically only parts of a program can be tested, but these parts are not necessarily the most error prone. Therefore, we are developing a more selective approach to testing by focusing on those parts that are most likely to contain faults, so that the most error prone paths can be tested first. By identifying the most error prone paths, the testing efficiency can be increased.",2004,0, 1728,A model-based objective evaluation of eye movement correction in EEG recordings,"We present a method to quantitatively and objectively compare algorithms for correction of eye movement artifacts in a simulated ongoing electroencephalographic signal (EEG). A realistic model of the human head is used, together with eye tracker data, to generate a data set in which potentials of ocular and cerebral origin are simulated. This approach bypasses the common problem of brain-potential contaminated electro-oculographic signals (EOGs), when monitoring or simulating eye movements. 
The data are simulated for five different EEG electrode configurations combined with four different EOG electrode configurations. In order to objectively compare correction performance for six algorithms, listed in Table III, we determine the signal to noise ratio of the EEG before and after artifact correction. A score indicating correction performance is derived, and for each EEG configuration the optimal correction algorithm and the optimal number of EOG electrodes are determined. In general, the second-order blind identification correction algorithm in combination with 6 EOG electrodes performs best for all EEG configurations evaluated on the simulated data.",2006,0, 1729,Low Cost Correction of OCR Errors Using Learning in a Multi-Engine Environment,We propose a low cost method for the correction of the output of OCR engines through the use of human labor. The method employs an error estimator neural network that learns to assess the error probability of every word from ground truth data. The error estimator uses features computed from the outputs of multiple OCR engines. The output probability error estimate is used to decide which words are inspected by humans. The error estimator is trained to optimize the area under the word error ROC leading to an improved efficiency of the human correction process. A significant reduction in cost is achieved by clustering similar words together during the correction process. We also show how active learning techniques are used to further improve the efficiency of the error estimator.,2009,0, 1730,Iterative refinement of range images with anisotropic error distribution,"We propose a method which refines the range measurement of range finders by computing correspondences of vertices of multiple range images acquired from various viewpoints. Our method assumes that a range image acquired by a laser rangefinder has anisotropic error distribution which is parallel to the ray direction. Thus, we find the corresponding points of range images along with the ray direction. We iteratively converge range images to minimize the distance of corresponding points. We demonstrate the effectiveness of our method by presenting the experimental results of artificial and real range data. Also, we show that our method refines a 3D shape more accurately as opposed to that achieved by using the Gaussian filter.",2002,0, 1731,A Fast QC Method for Testing Contact Hole Roughness by Defect Review SEM Image Analysis,"We propose a new and fast method for monitoring contact hole roughness (CHR), which can be a major yield-loss factor for advanced SRAMs. The method, defect-review scanning electron microscopy (SEM) image processing, can monitor CHR 100 times faster than the conventional method by critical dimension (CD)-SEM. The speed can facilitate faster identification of process countermeasures by, for example, making detailed monitoring of CHR variation within a wafer practicable. Results for CHR obtained by both new and conventional methods show similar trends for differences in process conditions. Also, we experimentally confirmed the new method's measuring variation of the rate of deformed contact holes.",2008,0, 1732,Static ownership inference for reasoning against concurrency errors,We propose a new approach for reasoning about concurrency in object-oriented programs. 
Central to our approach is static ownership inference analysis - we conjecture that this analysis has important application in reasoning against concurrency errors.,2009,0, 1733,Model-based fault localization in large-scale computing systems,"We propose a new fault localization technique for software bugs in large-scale computing systems. Our technique always collects per-process function call traces of a target system, and derives a concise execution model that reflects its normal function calling behaviors using the traces. To find the cause of a failure, we compare the derived model with the traces collected when the system failed, and compute a suspect score that quantifies how likely a particular part of call traces explains the failure. The execution model consists of a call probability of each function in the system that we estimate using the normal traces. Functions with low probabilities in the model give high anomaly scores when called upon a failure. Frequently-called functions in the model also give high scores when not called. Finally, we report the function call sequences ranked with the suspect scores to the human analyst, narrowing further manual localization down to a small part of the overall system. We have applied our proposed method to fault localization of a known non-deterministic bug in a distributed parallel job manager. Experimental results on a three-site, 78-node distributed environment demonstrate that our method quickly locates an anomalous event that is highly correlated with the bug, indicating the effectiveness of our approach.",2008,0, 1734,A mobile multicast protocol with error control for IP networks,"We propose a new protocol to achieve fault recovery of multicast applications in an IP internetwork with mobile participants. Our protocol uses the basic unicast routing capability of IETF Mobile IP as the foundation to leverage existing static host reliable IP multicast models to provide reliable multicast services for mobile hosts as well. We believe that the resulting scheme is simple, scalable, transparent, and independent of the underlying multicast routing facility. A key feature of our protocol is the use of a multicast forwarding agent (MFA) to address the scalability and reliability issues in the reliable mobile multicast applications. Our simulation results show the distinct performance advantages of our protocol using MFAs over two other approaches proposed for the mobile multicast service, namely mobile multicast protocol (MoM) and bi-directional tunneling, particularly as the number of mobile group members and home agents increases",2000,0, 1735,Measuring HMM similarity with the Bayes probability of error and its application to online handwriting recognition,"We propose a novel similarity measure for hidden Markov models (HMMs). This measure calculates the Bayes probability of error for HMM state correspondences and propagates it along the Viterbi path in a similar way to the HMM Viterbi scoring. It can be applied as a tool to interpret misclassifications, as a stop criterion in iterative HMM training or as a distance measure for HMM clustering. The similarity measure is evaluated in the context of online handwriting recognition on lower case character models which have been trained from the UNIPEN database. We compare the similarities with experimental classifications. The results show that similar and misclassified class pairs are highly correlated. 
The measure is not limited to handwriting recognition, but can be used in other applications that use HMM based methods",2001,0, 1736,A serial unequal error protection code system using trellis coded modulation and an adaptive equalizer for fading channels,"We propose a serial unequal error protection (UEP) scheme using trellis coded modulation and an adaptive equalizer for use in mobile fading channel communication environments. We propose two types of signal constellations, TRAP and RING, to realize unequal error protection and show their performance in fading channels using computer simulations.",2008,0, 1737,Adaptive unequal error control for video over the Internet,"We propose an adaptive unequal error protection scheme to help protect a video stream over the Internet. The data partition approach and the resynchronization marker are applied to generate two types of packets with different importance. Our scheme is very adaptive to network traffic conditions and can be easily implemented via software. With the proposed scheme, more protection is provided for the more important packets. The final quality can therefore be improved at a higher packet loss ratio.",2002,0, 1738,Construction of an Agent-based Fault-Tolerant Object Group Model,"We propose an agent-based fault-tolerant object group (AFTOG) model for achieving effective object management and reliable fault recovery. We define five kinds of agents as internal processing agent, registration agent, state handling agent, user interface agent, and service agent. The roles of the agents in the proposed model are not only to reduce the remote interactions between distributed objects but also to guarantee more effective service execution. Through the simulations, it is validated that the proposed model decreases the interactions of the object components and supports effective fault recovery, while providing more stable and reliable services.",2007,0, 1739,Tunable Infrared Semiconductor Lasers Based on Electromagnetically Induced Optical Defects,"We propose tunable midinfrared laser systems based on dynamic formation of electromagnetically induced optical defect sites. Such defects occur in a waveguide structure having a uniformly corrugated quantum well structure in the absence of any structural defect or phase slip. In the absence of a control laser field, such a corrugated structure causes a uniform perturbation of refractive index along the waveguide. However, when a relatively small region of such a waveguide structure is illuminated from the side by the control laser field, electromagnetically induced transparency occurs in this region while its refractive index corrugation is removed. We also show that such a coherently induced defect can be dynamically moved along the waveguide structure by just steering the control laser beam to illuminate different locations. We utilize the fact that the phase associated with this defect site can be adjusted via changing the length of the illuminated region to present a tunable distributed feedback laser where its lasing wavelength can be continuously varied within the stop-band. We study the case where two coherently induced defect sites happen along the waveguide structure and discuss the impacts of illumination of the whole waveguide structure with the control laser field. We show that the latter can either make the waveguide coherently transparent by destroying its refractive index perturbation and stop-band, or generate a gain-without-inversion grating. 
Formation of such a grating allows the waveguide to act as a tunable partly gain-coupled distributed feedback laser.",2007,0, 1740,Constellation space invariance of orthogonal space-time block codes with application to evaluation of the exact probability of error,"We prove a new and interesting property of space-time block codes (STBCs) that are based on generalized orthogonal designs. For flat block-fading channels, it is shown that the internal structure of the vector space of the input constellation remains invariant to the combined effect of the STBC and the channel, except for a scaling factor. The established constellation space invariance property is entirely due to the specific structure of the STBCs based on the generalized orthogonal designs. Using this property, we obtain simple exact expressions for the error probability of the maximum likelihood (ML) decoder in the general case when the channel, STBC, and input signal constellations are arbitrary. Such expressions are obtained in both the cases when the channel realization is deterministic (fixed) and random. In the latter case, simple expressions are derived for the average error probability.",2003,0, 1741,Advances in scatter correction for 3D PET/CT,"We report on several significant improvements to the implementation of image-based scatter correction for 3D PET and PET/CT. Among these advances are: a new algorithm to scale the estimated scatter sinogram to the measured data, thereby largely compensating for external scatter; the ability to handle CT image truncation during this scaling; the option to iterate the scatter calculation for improved accuracy; the use of ordered subset expectation maximization (OSEM) reconstruction for the estimated emission images from which the scatter contributions are simulated; reporting of data quality parameters such as scatter and randoms fractions, and noise equivalent count rate (NECR), for each patient bed position; and extensive quality control output. Scatter correction (2 iterations, OSEM) typically requires 15-45 sec per bed. Very good agreement between the estimated scatter and measured emission data for several typical clinical scans is reported for CPS Pico-3D and HiRez LSO PET/CT systems. The data characteristics extracted during scatter correction are useful for patient-specific count rate modeling and scan optimization",2004,0, 1742,Atomic-scale surface structures and structural defects of hydrothermal BaTiO3 nanoparticles revealed by HRTEM,"We report on the effects of reactive conditions on the formation of BT nanoparticles via a hydrothermal process, with a focus on their atomic-scale surface microstructures and structural defects characterized by HRTEM. The results showed that large Ba/Ti molar ratios in the precursors could lead to large particles with a cubic morphology. Smaller particles with a weaker agglomeration behavior were observed in the products synthesized via a solvothermal process using ethylene glycol (EG) as the reaction medium, in comparison to using either pure water or a water-EG mixed solution as the reactive medium. A terrace-ledge-kink surface structure and structural defects such as anti-phase boundaries (APBs) and edge dislocations with Burgers vector of 1/2d100 or 1/2d111 were observed at the edges of the nanoparticles. The {110} surfaces were found to be reconstructed and composed of the corners bound by the {100} mini-faces.
APBs near the edges of BT nanoparticles were formed by the intersection of two crystalline parts with displacement deviation from each other by 1/2d110.",2009,0, 1743,Application of CL/EBIC-SEM techniques for characterization of irradiation induced defects in triple junction solar cells,"We report the results of the characterization of irradiated InGaP2/GaAs/Ge multijunction (MJ) solar cells using the cathodoluminescence (CL) imaging/spectroscopy and electron beam induced current (EBIC) modes of scanning electron microscopy (SEM). These techniques were applied to verify the influence of irradiation damage on the optoelectronic properties of each subcell of the triple junction structure and to correlate illuminated (AM0, 1 sun, 25°C) current-voltage (IV) and quantum efficiency (QE) characteristics.",2010,0, 1744,Sensitivity Analysis on Bio-op Errors in DNA Computing,"We simulate the biological operations of a DNA computational algorithm for a combinatorial problem. We then perform sensitivity analysis in which we vary bio-op error rates to see which bio-ops affect the end result. Finally, we review three approaches to tune the algorithm in order to minimize significant error.",2009,0, 1745,Extended Hamming product codes analytical performance evaluation for low error rate applications,"We study product codes based on extended Hamming codes. We focus on their performance at low error rates, which are important for wireless multimedia applications. We present the basis and a complete set of techniques which allow one to analytically evaluate this performance without resorting to extremely long simulations. We present new theoretical results concerning the popular approximation where the bit error rate is nearly equal to the frame error rate times the ratio of the minimum distance to the codeword length. We prove that: 1) binary codes with a transitive automorphism group satisfy this approximation with equality; and 2) extended Hamming product codes belong to this class. Closed-form expressions for their dominant multiplicity values are derived. Analytical curves are plotted, discussed, and validated by comparison with iterative decoding. This analytical approach is then extended to both shortened and punctured codes, which are important for practical design. The first case is solved by applying the extended MacWilliams identity to the dual codes. For punctured codes, we present a new analytical approach for estimating their average performance using a ""random"" puncturer.",2004,0, 1746,Study on the detection and correction of software based on UML,"We have proposed an approach to the correction of anti-patterns; we believe that before attempting such corrections it is important to have confirmation from the developer. Assumptions are made when identifying or correcting certain anti-patterns; however, these assumptions that we recognize as the causes of the anti-pattern may be the behavior the developer actually intended with the design, or may not be the optimal correction for the anti-pattern. We propose these transformations as a guide for the improvement of the design; nevertheless the decision of applying the changes should be left to the user.",2010,0, 1747,Nano-scale fault tolerant machine learning for cognitive radio,"We introduce a machine learning based channel state classifier for cognitive radio, designed for nano-scale implementation. The system uses analog computation, and consists of cyclostationary feature extraction and a radial basis function network for classification.
The description of the system is partially abstract, but our design choices are motivated by domain knowledge and we believe the system will be feasible for future nanotechnology implementation. We describe an error model for the system, and simulate experimental performance and fault tolerance of the system in recognizing WLAN signals, under different levels of input noise and computational errors. The system performs well under the expected non-ideal manufacturing and operating conditions.",2008,0, 1748,Time domain phase noise correction for OFDM signals,"We introduce an algorithm for compensating for carrier phase noise in an OFDM communication system. Through the creation of a linearized parametric model for phase noise, we generate a least squares (LS) estimate of the transmitted symbol. Using digitized DVB-T RF signals created in a laboratory and a DVB-T compliant receiver model, simulation results are presented to evaluate the effectiveness of the algorithm in practical environments.",2002,0, 1749,Scheduling optional computations in fault-tolerant real-time systems,"We introduce an exact schedulability analysis for the optional computation model under a specified failure hypothesis. From this analysis, we propose a solution for determining, before run-time, the degree of fault tolerance allowed in the system. This analysis will allow the system designer to verify if all the tasks in the system meet their deadlines and to decide which optional parts must be discarded if some deadlines would be missed. The identification of feasible options that satisfy some optimality criteria requires the exploration of a potentially large combinatorial space of possible optional parts to discard. Since this complexity is too high to be considered practical in dynamic systems, two heuristic algorithms are proposed for selecting which tasks must be discarded and for guiding the process of searching for feasible options. The performance of the algorithms is measured quantitatively with simulations using synthetic task sets",2000,0, 1750,Evaluation of the accuracy and robustness of a motion correction algorithm for PET using a novel phantom approach,"We introduce the use of a novel physical phantom to quantify the performance of a motion-correction algorithm. The goal of the study was to assess a PET-PET image registration, the final output of which is a motion-corrected high-statistics PET image volume, a procedure called Reconstruct, Register and Average (RRA). Methods: A phantom was constructed using five ~2 mL Ge-68-filled spheres suspended in a water-filled tank via lightweight fishing line and driven by a periodic motion. Comparison of maximum and mean concentration and sphere volume was performed. Ground truth data were measured with no motion. With motion, five replicate datasets of 3-minute phase-gated data for each of 3 different periods of motion were acquired. Gated PET images were registered using a multi-resolution level-sets-based non-rigid registration (NRR). The NRR images were then averaged to form a motion-corrected, high-statistics image volume. Spheres from all images were segmented and compared across the imaging conditions. Results: The average center-of-mass range of motion was 7.35, 5.83 and 2.66 mm for the spheres over the three periods of 8, 6 and 4 seconds. The center-of-mass for all spheres in all conditions was corrected to within 1 mm on average using NRR as compared to the gated data.
For the RRA data, the sphere maximum activity concentration (MAC) was on average 40.2% higher (-4.0% to 116.7%) and sphere volume was on average 12.0% smaller (-8.2% to 28.1%) as compared to the un-gated data with motion. The RRA results for MAC were on average 70% more accurate and for sphere volume 80% more accurate as compared to the un-gated data. Conclusions: The results show that the novel phantom setup and analysis methods are a promising evaluation technique for the assessment of motion correction algorithms. Benefits include the ability to compare against ground truth data without motion but with control of the statistical data quality and background variability. Using a nonmoving object adjacent to the spheres in motion, the spatial extent of the motion correction algorithm was confirmed to be local to the induced motion and to not affect the stationary object. A further benefit of the assessment technique is the use of ground truth data.",2010,0, 1751,Semantic errors in SQL queries: a quite complete list,"We investigate classes of SQL queries which are syntactically correct, but certainly not intended, no matter for which task the query was written. For instance, queries that are contradictory, i.e. always return the empty set, are obviously not intended. However, current database management systems execute such queries without any warning. We give an extensive list of conditions that are strong indications of semantic errors. Of course, questions like the satisfiability are in general undecidable, but a significant subset of SQL queries can actually be checked. We believe that future database management systems will perform such checks and that the generated warnings will help to develop code with fewer bugs in less time.",2004,0, 1752,Corrective Maintenance Maturity Model: Problem Management,"We present our PhD thesis, in which we suggest a process model for handling software problems within corrective maintenance. Our model is called CM3: Problem Management.",2002,0, 1753,Design and implementation of color correction system for images captured by digital camera,"We present the design and implementation of a color correction system for images captured by digital camera. In general, photographing is affected by various factors such as objective camera settings and many environmental conditions as well as individual user's skill. So it is not unusual for common users to take unnatural photographs which have inaccurate colors. Although there have been considerable research efforts on color correction/reproduction, accurate handling of the color characteristics is still not a trivial task for common users because it often requires specialized devices, data formats, and professional knowledge of color science. Our goal in this paper is to develop an easy-to-use color correction system which is friendly to average digital camera users. The experimental results show that the color correction problem can be greatly simplified by using the proposed system. The accuracy and robustness of the proposed system are also verified on the experimental results of several indoor and outdoor images.",2008,0, 1754,Communication protocols for a fault-tolerant automated highway system,"We present the design and verification of inter-vehicle communication protocols for the operation of an automated highway system in the presence of faults. The protocols form part of a fault-tolerant control hierarchy proposed in earlier work.
Our goal here is to implement discrete-event supervisory controllers to stop the faulty vehicle or take it off the highway in a safe manner. Because these actions require cooperation among vehicles in the neighborhood of the faulty vehicle, the supervisory controllers are implemented by means of inter-vehicle communication protocols. The logical correctness of the proposed protocols is verified using automatic verification tools. We discuss the safety of the proposed design in terms of the possibility of collisions and highlight the problems associated with carrying out a complete safety analysis",2000,0, 1755,Principles of multi-level reflection for fault tolerant architectures,"We present the principles of multi-level reflection as an enabling technology for the design and implementation of adaptive fault tolerant systems. By exhibiting the structural and behavioral aspects of a software component, the reflection paradigm enables the design and implementation of appropriate non-functional mechanisms at a meta-level. The separation of concerns provided by reflective architectures makes reflection a perfect match for fault tolerance mechanisms. However, in order to provide the necessary and sufficient information for error detection and recovery, reflection must be applied to all system layers in an orthogonal manner. This is the main motivation behind the notion of multi-level reflection that is introduced. We describe the basic concepts of this new architectural paradigm, and illustrate them with concrete examples. We also discuss some practical work that has recently been carried out to start implementing the proposed framework.",2002,0, 1756,Using Web services for atmospheric correction of remote sensing data,"We present the technical implementation details for a prototype central atmospheric correction parameter server called NOMAD (Networked On-line Mapping of Atmospheric Data). Using a Web service, aerosol optical depth (AOD) values are transmitted via the network to a Web service aware atmospheric correction application. Information about the date and location of the image is extracted automatically from the image, allowing a best estimate of AOD at 500 nm to be retrieved from the central server database. A Web service approach was adopted to allow easy cross platform development in multiple software languages. Using the Web Services Definition Language (WSDL) description of the atmospheric correction parameter server Web service, application developers can easily make their atmospheric correction applications ""Web aware"". On the server side, researchers maintaining the central database are free to concentrate on providing the best quality data available.",2002,0, 1757,Effective and efficient localization of multiple faults using value replacement,"We previously presented a fault localization technique called value replacement that repeatedly alters the state of an executing program to locate a faulty statement [9]. The technique searches for program statements involving values that can be altered during runtime to cause the incorrect output of a failing run to become correct. We showed that highly effective fault localization results could be achieved by the technique on programs containing single faults. In the current work, we generalize value replacement so that it can also perform effectively in the presence of multiple faults. We improve scalability by describing two techniques that significantly improve the efficiency of value replacement.
In our experimental study, our generalized technique effectively isolates multiple simultaneous faults in time on the order of minutes in each case, whereas in [9], the technique had sometimes required time on the order of hours to isolate only single faults.",2009,0, 1758,A scalable fault tolerant approach to core election in an inter-domain multicast routing environment,"We propose a consensus protocol to rearrange the multicast core-based tree after a core failure. However, the applications of this protocol are not limited to core failure. It can be used in any situation where a common value should be fixed by consensus in an inter-domain routing environment. This protocol is a hierarchical (two-level) extension of the original proposal by Chandra and Toueg (1996). However, in order to use the algorithm in an inter-domain environment such as the Internet, we introduce randomization in our consensus protocol. Indeed, in this way we minimise the total number of rounds. A comparison with other consensus protocols shows that our solution gives a better algorithmic complexity. This solution also opens new perspectives in terms of core election in an inter-domain multicast routing environment, since the classical approaches do not scale, or are based on manual configurations",2000,0, 1759,A Safe Fault Tolerant Multi-view Approach for Vision-Based Protective Devices,"We present a new approach that realizes an image-based fault tolerant distance computation for a multi-view camera system which conservatively approximates the shortest distance between unknown objects and 3D volumes. Our method addresses the industrial application of vision-based protective devices which are used to detect intrusions of humans into areas of dangerous machinery, in order to prevent injuries. This requires hardware redundancy to compensate for hardware failures without loss of functionality and safety. By taking sensor failures into account during the fusion of distances from different cameras, this is realized implicitly, with the benefit of no additional hardware cost. In particular we employ multiple camera perspectives for safe and non-conservative occlusion handling of obstacles and formulate general system assumptions which are also appropriate for other applications like multi-view reconstruction methods.",2010,0, 1760,A new learning approach to design fault tolerant ANNs: finally a zero HW-SW overhead,"We present a new approach to design fault tolerant artificial neural networks (ANNs). Additionally, this approach allows estimating the final network reliability. This approach is based on the mutation analysis technique and is used during the training process of the ANN. The basic idea is to train the ANN in the presence of faults (a single-fault model is assumed). To do so, a set of faults is injected into the code describing the ANN. This procedure yields mutation versions of the original ANN code, which in turn are used to train the network in an iterative process with the designer until the moment when the ANN is no longer sensitive to the single faults injected. In other words, the network becomes tolerant to the considered set of faults.
A practical example where an ANN is used to recognize an electrocardiogram (ECG) and to measure ECG parameters illustrates the proposed methodology.",2002,0, 1761,A New Class of Highly Fault Tolerant Erasure Code for the Disk Array,"We present a new class of erasure codes of size n×n (n is a prime number) called T-code, a new family of simple, highly fault tolerant XOR-based erasure codes for storage systems (with fault tolerance up to 15). T-code is not maximum distance separable (MDS), but has many other advantages, such as high fault tolerance, simple computability, and high efficiency of coding and decoding. Because of its superior quality over many other erasure codes for storage systems, this new coding technology is well suited to RAID or dRAID systems.",2008,0, 1762,GOOFI: generic object-oriented fault injection tool,"We present a new fault injection tool called GOOFI (Generic Object-Oriented Fault Injection). GOOFI is designed to be adaptable to various target systems and different fault injection techniques. The tool is highly portable between different host platforms since it relies on the Java programming language and an SQL compatible database. The current version of the tool supports pre-runtime software implemented fault injection and scan-chain implemented fault injection.",2001,0, 1763,"TRAILS, a Toolkit for Efficient, Realistic and Evolving Models of Mobility, Faults and Obstacles in Wireless Networks","We present a new simulation toolkit called TRAILS (Toolkit for Realism and Adaptivity In Large-scale Simulations), which extends the ns-2 simulator by adding important functionality and optimizing certain critical simulator operations. The added features provide the tools to study wireless networks of high dynamics. TRAILS facilitates the implementation of advanced mobility patterns, obstacle presence and disaster scenarios, and failure injection that can dynamically change throughout the execution of the simulation. Moreover, we define a set of utilities that enhance the use of ns-2. This functionality is implemented in a simple and flexible architecture that follows design patterns and object oriented and generic programming principles, maintaining a proper balance between reusability, extendability and ease of use. We evaluate the performance of TRAILS and show that it offers significant speed-ups regarding the execution time of ns-2 in certain important, common wireless settings. Our results also show that this is achieved with minimum overhead in terms of memory usage.",2008,0, 1764,An approach for analysing the propagation of data errors in software,"We present a novel approach for analysing the propagation of data errors in software. The concept of error permeability is introduced as a basic measure upon which we define a set of related measures. These measures guide us in the process of analysing the vulnerability of software to find the modules that are most likely exposed to propagating errors. Based on the analysis performed with error permeability and its related measures, we describe how to select suitable locations for error detection mechanisms (EDMs) and error recovery mechanisms (ERMs). A method for experimental estimation of error permeability, based on fault injection, is described and the software of a real embedded control system analysed to show the type of results obtainable by the analysis framework.
The results show that the developed framework is very useful for analysing error propagation and software vulnerability and for deciding where to place EDMs and ERMs.",2001,0, 1765,A novel technique for coupling three dimensional mesh adaptation with an a posteriori error estimator,"We present a novel error estimation driven 3D unstructured mesh adaptation technique based on a posteriori error estimation techniques with upper and lower error bounds. In contrast to other work (Oden, 2002; Prudhomme et al., 2003) we present this approach in three dimensions using unstructured meshing techniques to enable automatic adaptation of 3D unstructured meshes without any user interaction. The motivation for this approach, its applicability and its usability are presented with real-world examples.",2005,0, 1766,Consistent detection of global predicates under a weak fault assumption,"We study the problem of detecting general global predicates in distributed systems where all application processes and at most tdeg are likely to happen during the lifetime of PALSAR, which will significantly reduce the accuracy of geophysical parameter recovery if uncorrected. Therefore, the estimation and correction of FR effects is a prerequisite for data quality and continuity. In this paper, methods for estimating FR are presented and analyzed. The first unambiguous detection of FR in SAR data is presented. A set of real data examples indicates the quality and sensitivity of FR estimation from PALSAR data, allowing the measurement of FR with high precision in areas where such measurements were previously inaccessible. In examples, we present the detection of kilometer-scale ionospheric disturbances, a spatial scale that is not detectable by ground-based GPS measurements. An FR prediction method is presented and validated. Approaches to correct for the estimated FR effects are applied, and their effectiveness is tested on real data.",2008,0, 1812,Adaptive Motion Vector Retrieval Schemes for H.264 Error Concealment,"With the ubiquitous application of Internet and wireless networks, H.264 video communication becomes more and more popular. However, due to its highly efficient predictive coding and variable length entropy coding, it is more sensitive to transmission errors. Error concealment (EC) is an approach that utilizes the spatial and temporal correlations to conceal the corrupted region. In this paper, we first propose a variable block size error concealment (VBSEC) scheme inspired by variable block size motion estimation (VBSME) in H.264. This scheme provides four EC modes and four sub-block partitions. The whole corrupted macro-block (MB) is divided into variable-size sub-blocks adaptively according to the actual motion. More precise motion vectors (MV) are predicted for each sub-block. Then an MV refinement (MVR) scheme is proposed to refine the MV of the heterogeneous sub-block by utilizing the three-step search (TSS) algorithm adaptively. Both VBSEC and MVR are based on our improved spatio-temporal boundary matching algorithm (STBMA). By utilizing these schemes, we can reconstruct the corrupted MB in the inter frame more accurately.
The experimental results show that our proposed scheme can obtain maximum PSNR gains of up to 1.82 dB and 1.52 dB compared with the boundary matching algorithm (BMA) adopted in the JM11.0 reference software and STBMA, respectively.",2008,0, 1813,Distributed event processing for fault management of Web Services,"Within service orientation (SO), web services (WS) are the de facto standard for implementing service-oriented systems. While consumers of WS want uninterrupted and reliable service, WS providers cannot always provide services at the expected level due to faults and failures in the system. As a result, the fault management of these systems is becoming crucial. This work presents a distributed event-driven architecture for fault management of Web Services. According to the architecture, managed WS report different events to event databases. From the event databases these events are sent to event processors. The event processors are distributed over the network. They process the events, detect fault scenarios in the event stream and manage faults in the WS.",2009,0, 1814,Polish N-Grams and Their Correction Process,"Word n-gram statistics collected from over 1 300 000 000 words are presented. Even though they were collected from various good sources, they contain several types of errors. The paper focuses on the process of partly supervised correction of the n-grams. Types of errors are described as well as our software allowing efficient and fast corrections.",2010,0, 1815,A Comparative Evaluation of Using Genetic Programming for Predicting Fault Count Data,"There have been a number of software reliability growth models (SRGMs) proposed in the literature. Due to several reasons, such as violation of models' assumptions and complexity of models, practitioners face difficulties in knowing which models to apply in practice. This paper presents a comparative evaluation of traditional models and the use of genetic programming (GP) for modeling software reliability growth based on weekly fault count data of three different industrial projects. The motivation for using a GP approach is its ability to evolve a model based entirely on prior data without the need to make underlying assumptions. The results show the strengths of using GP for predicting fault count data.",2008,1, 1816,A Replicated Quantitative Analysis of Fault Distributions in Complex Software Systems,"To contribute to the body of empirical research on fault distributions during development of complex software systems, a replication of a study of Fenton and Ohlsson is conducted. The hypotheses from the original study are investigated using data taken from an environment that differs in terms of system size, project duration, and programming language. We have investigated four sets of hypotheses on data from three successive telecommunications projects: 1) the Pareto principle, that is, a small number of modules contain a majority of the faults (in the replication, the Pareto principle is confirmed), 2) fault persistence between test phases (a high fault incidence in function testing is shown to imply the same in system testing, as well as prerelease versus postrelease fault incidence), 3) the relation between number of faults and lines of code (the size relation from the original study could be neither confirmed nor disproved in the replication), and 4) fault density similarities across test phases and projects (in the replication study, fault densities are confirmed to be similar across projects).
Through this replication study, we have contributed to what is known on fault distributions, which seem to be stable across environments.",2007,1, 1817,Based on CAN bus wind generating set online monitoring and fault diagnosis system,"The wind generating set online monitoring and fault diagnosis system, based on the CAN bus, consists of a sensor system, on-site data gathering and processing, a monitoring system, an analysis and diagnosis system, a network system, etc. The system combines signal processing, artificial intelligence, communications, DSP, databases, computer networks and so on; it decomposes the complete online monitoring and diagnosis task among computers on different levels, which coordinate with each other to complete the monitoring task together. By integrating expert diagnostic knowledge online, it realizes remote monitoring and diagnosis, so enterprises may log into the website of the remote diagnostic center to carry out analysis and diagnosis directly. Practice has proved that this system integrates data collection, performance analysis, fault diagnosis and artificial intelligence technologies in one information system and realizes running-status monitoring and fault diagnosis of the wind generating set. The system has a certain practicality and effectiveness.",2008,0, 1818,Simultaneous optimization for wind derivatives based on prediction errors,"Wind power has attracted much attention recently for various reasons, and the production of electricity with wind energy has been increasing rapidly for a few decades. In this work, we propose a new type of weather derivative based on the prediction errors for wind speeds, and estimate its hedge effect on wind power energy businesses. First, we investigate the correlation of prediction errors between the power output and the wind speed in a Japanese wind farm. Then we develop a methodology that optimally constructs a wind derivative based on the prediction errors using nonparametric regressions. A simultaneous optimization technique for the loss and payoff functions of wind derivatives is demonstrated based on the empirical data.",2008,0, 1819,Robust and extreme unequal error protection scheme for the transmission of scalable data over OFDM systems,"Wireless applications are subject to end-to-end quality of service (QoS) requirements. This paper presents a new resource allocation algorithm that allows the transmission of scalable multimedia data over a frequency selective channel with partial channel knowledge. The available resources are subject to payload and QoS constraints and the algorithm aims at maximizing the transmission robustness to channel estimation errors. The impact of this technique is evaluated for an MPEG-4 audio application.",2008,0, 1820,The Application of Wireless Sensor Networks in Machinery Fault Diagnosis,"The wireless sensor network is a thriving information collecting and processing technology, which is widely used in the military field, industry, environmental monitoring, etc. In a wireless sensor network made up of tens of thousands of battery-powered sensor nodes, data fusion technology can be used to reduce communication traffic in order to save energy. For large mechanical equipment, traditional wired sensors are commonly used for fault detection and diagnosis. Wireless sensor networks eliminate the wiring problem, which makes it possible to detect potential problems in mechanical equipment without affecting the normal production of enterprises.
In this paper, the application of wireless sensor networks in machinery fault diagnosis is studied, a data fusion model for machinery fault diagnosis in wireless sensor networks and a PCA neural data fusion algorithm are proposed, and the effectiveness of the method is demonstrated in an experiment.",2010,0, 1821,Fuzzy data fusion for fault detection in Wireless Sensor Networks,"Wireless Sensor Networks (WSN) can produce decisions that are unreliable due to the large inherent uncertainties in the areas in which they are deployed. It is vital for the applications where WSNs are deployed that accurate decisions can be made from the data produced. Fault detection is a vital pursuit; however, it is a challenging task. In this paper we present a fuzzy logic data fusion approach to fault detection within a Wireless Sensor Network using statistical process control and a clustered covariance method. Through the use of a fuzzy logic data fusion approach we have introduced a novel technique into this area to reduce uncertainty and false positives within the fault detection process.",2010,0, 1822,A Mobile Agent-Based Architecture for Fault Tolerance in Wireless Sensor Networks,"Wireless Sensor Networks (WSNs) are prone to failures as they are usually deployed in remote and unattended environments. To mitigate the effect of these failures, fault tolerance becomes imperative. Nonetheless, it remains a second-tier activity: it should not undermine the execution of the mission-oriented tasks of WSNs by overly taxing their resources. We define an architecture for fault tolerance in WSNs that is based on a federation of mobile agents used both for diagnostic intelligence and as a repair regimen, focusing on being lightweight in energy, communication and resources. Mobile agents are classified here as local, metropolitan, and global, providing fault tolerance at the node, network and functional levels. Interactions between mobile agents are inspired by the honey bee dance language, building on the semantics of error classification and their demographic distribution. Our quantitative modeling substantiates that the proposed fault tolerance framework mandates minimalist communication through contextualized bee-inspired interactions, achieving adaptive sensitivity and hysteresis-based stability",2010,0, 1823,Fault management in wireless sensor networks,"Wireless sensor networks (WSNs) have gradually emerged as one of the key growth areas for pervasive computing in the twenty-first century. Recent advances in WSN technologies have made possible the development of new wireless monitoring and environmental control applications. However, the nature of these applications and of potentially harsh environments also creates significant challenges for sensor networks to maintain a high quality of service. Therefore, efficient fault management and robust management architectures have become essential for WSNs. In this article, we address these challenges by surveying existing fault management approaches for WSNs. We divide the fault management process into three phases: fault detection, diagnosis, and recovery, and classify existing approaches according to these phases.
Finally, we outline future challenges for fault management in WSNs.",2007,0, 1824,A High Energy Efficiency Link Layer Adaptive Error Control Mechanism for Wireless Sensor Networks,"Wireless sensor networks (WSNs) require simple and facile error control schemes because of the low complexity and high energy efficiency requirements of sensor nodes. In this paper, we discuss ARQ, FEC, and Chase combining hybrid ARQ (HARQ) schemes using energy efficiency analysis for different communication distances and link layer frame lengths. We propose a high energy efficiency adaptive error control mechanism (AEC-RSSI). Our mathematical analysis shows that the AEC-RSSI mechanism achieves better data transmission performance compared with FEC, ARQ and HARQ, in terms of the overall energy efficiency of the communications in a WSN.",2010,0, 1825,Online drift correction in wireless sensor networks using spatio-temporal modeling,"Wireless sensor networks are deployed for the purpose of sensing and monitoring an area of interest. Sensors in the sensor network can suffer from both random and systematic bias problems. Even when the sensors are properly calibrated at the time of their deployment, they develop drift in their readings, leading to erroneous inferences being made by the network. The drift in this context is defined as a slow, unidirectional, long-term change in the sensor measurements. In this paper we present a novel algorithm for detecting and correcting sensor drifts by utilising the spatio-temporal correlation between neighbouring sensors. Based on the assumption that neighbouring sensors have correlated measurements and that the instantiation of drift in a sensor is uncorrelated with other sensors, each sensor runs a support vector regression algorithm on its neighbours' corrected readings to obtain a predicted value for its measurements. It then uses this predicted data to self-assess its measurement and detect and correct its drift using a Kalman filter. The algorithm is run recursively and is totally decentralized. We demonstrate using real data obtained from the Intel Berkeley Laboratory that our algorithm successfully suppresses drifts developed in sensors and thereby prolongs the effective lifetime of the network.",2008,0, 1826,Application of error control codes (ECC) in ultra-low power RF transceivers,"Wireless sensor networks provide the ability to gather and communicate critical environmental, industrial or security information to enable rapid responses to potential problems. The limited embedded battery lifetime requires ultra low power sensing, processing and communication systems. To achieve this goal, new approaches at the device, circuit, system and network level need to be pursued (Roundy et al., 2003). Adoption of error control codes (ECC) reduces the required transmit power for reliable communication, while increasing the processing energy of the encoding and decoding operations. This paper discusses the above trade-off for systems with and without standard ECC, such as convolutional and Reed Solomon codes. The comparison of the required energy per bit, based on several implemented decoders, shows that the adoption of an ECC with simple decoding structures (such as Reed Solomon) is quite energy efficient.
This has especially been observed for long distances.",2005,0, 1827,ECC-Cache: A Novel Low Power Scheme to Protect Large-Capacity L2 Caches from Transient Faults,"With dramatic scaling in the feature size of VLSI technology, the capacity of on-chip L2 caches increases rapidly, and how to guarantee the reliability of large-capacity L2 caches has become an important issue. However, increasing the reliability of the L2 cache tends to reduce its performance and increase power consumption. This paper presents ECC-Cache, a novel low power fault-tolerant architecture which divides the traditional method of detecting and correcting errors with a uniform coding scheme into two steps, and uses a hybrid method which protects clean data and dirty data in different ways to enhance the reliability of the L2 cache. This paper also compares the performance and power consumption of ECC-Cache with those of some other proposed schemes; experimental results show that ECC-Cache is effective in guaranteeing the reliability of large-capacity L2 caches while having little impact on system performance and power consumption. We find that ECC-Cache performs better than the uniform-ECC scheme adopted in some widely used commercial processors and some schemes proposed in other papers. Compared with a cache which has no protection, ECC-Cache only consumes 3% to 6% additional power and degrades performance by no more than 2%.",2009,0, 1828,Software Fault Protection with ARINC 653,"With flight software becoming ever more complex, assuming that it behaves perfectly is no longer realistic. At the same time Verification and Validation (V&V) is consuming up to 50% of flight software development costs. The adaptation of fault protection concepts to flight software is attractive, particularly in the context of the fault containment and health management capabilities of ARINC 653. We propose a proactive, unified, model-based approach in which the behavior of the software is monitored against a model of its expected behavior. We describe how that may be incorporated into the ARINC 653 health management architecture. We describe software capabilities that facilitate software fault protection. These capabilities include enhancements to the ARINC 653 application executive, tools for software instrumentation, and a temporal logic runtime monitoring framework for high-level specification and monitoring. We analyze the aspects of the software that should be modeled and the types of failure responses. We show how these concepts may be applied to the Mission Data System (MDS) flight software framework.",2007,0, 1829,Fault handling in embedded industrial measurement and control systems: issues and a case study,"With increasingly complex control systems used in a variety of commercial, aerospace, and military applications, system faults may occur during system operations. These faults inevitably result in abnormal operations and production shutdown or even disasters. Therefore, improving system reliability has become a major concern in safety-critical systems. This paper primarily addresses the issues in embedded fault-tolerant control system designs and presents a case study on vibration suppression in the aerospace industry. The design issues on embedded control systems such as component failures, sampling jitter and control delay, and network-induced delay and packet loss in network transmission are discussed, all of which may have damaging effects on the closed-loop system performance.
A case study on vibration control for a launch vehicle payload fairing using multiple embedded PZT actuators is also presented, where an adaptive actuator failure compensation scheme is successfully implemented. Fault-tolerant control turns out to be effective in creating more robust industrial measurement and control systems.",2003,0, 1830,Atmospheric correction of spectral imagery: evaluation of the FLAASH algorithm with AVIRIS data,"With its combination of good spatial and spectral resolution, visible to near infrared spectral imaging from aircraft or spacecraft is a highly valuable technology for remote sensing of the Earth's surface. Typically it is desirable to eliminate atmospheric effects on the imagery, a process known as atmospheric correction. We review the basic methodology of first-principles atmospheric correction and present results from the latest version of the FLAASH (fast line-of-sight atmospheric analysis of spectral hypercubes) algorithm. We show some comparisons of ground truth spectra with FLAASH-processed AVIRIS (airborne visible/infrared imaging spectrometer) data, including results obtained using different processing options, and with results from the ACORN (atmospheric correction now) algorithm that derive from an older MODTRAN4 spectral database.",2002,0, 1831,"System architecture for error-resilient, embedded JPEG2000 wireless delivery","With new, third generation mobile terminals, several multimedia-based applications will soon be available. The reliable transmission of audiovisual content will probably become one of the most requested services. On the other hand, when wireless delivery is addressed, it would be desirable to reach very high compression ratios while keeping good perceptual image quality. With these requirements the choice of JPEG2000 as the source encoding stage assures excellent results. However, a non-ideal wireless network could seriously affect the received image decoding, despite JPEG2000's error resilience capabilities. In this paper a flexible DSP/FPGA system with high error-resilience features is proposed. Experimental results show very promising visual quality, with a limited complexity overhead, even in the presence of a mean loss rate of 10%.",2002,0, 1832,Finite Element Analysis of Internal Winding Faults in Distribution Transformers,"With the appearance of deregulation, distribution transformer predictive maintenance is becoming more important for utilities to prevent forced outages with the consequential costs. To detect and diagnose a transformer internal fault requires a transformer model to simulate these faults. This paper presents finite element analysis of internal winding faults in a distribution transformer. The transformer with a turn-to-earth fault or a turn-to-turn fault is modeled using coupled electromagnetic and structural finite elements. The terminal behaviors of the transformer are studied by an indirect coupling of the finite element method and circuit simulation. The procedure was realized using commercially available software. The normal case and various faulty cases were simulated and the terminal behaviors of the transformer were studied and compared with field experimental results.
The comparison results validate the finite element model to simulate internal faults in a distribution transformer.",2001,0,1833 1833,Finite element analysis of internal winding faults in distribution transformers,"With the appearance of deregulation, distribution transformer predictive maintenance is becoming more important for utilities to prevent forced outages with the consequential costs. To detect and diagnose a transformer internal fault requires a transformer model to simulate these faults. This paper presents finite element analysis of internal winding faults in a distribution transformer. The transformer with a turn-to-earth fault or a turn-to-turn fault is modeled using coupled electromagnetic and structural finite elements. The terminal behaviors of the transformer are studied by an indirect coupling of the finite element method and circuit simulation. The procedure was realized using commercially available software. The normal case and various faulty cases were simulated and the terminal behaviors of the transformer were studied and compared with field experimental results. The comparison results validate the finite element model to simulate internal faults in a distribution transformer.",2001,0, 1834,Mitigating Soft Errors in System-on-Chip Design,"With the continuous downscaling of CMOS technologies, reliability has become a major bottleneck in the evolution of next-generation scaling. Technology trends such as transistor downsizing, the use of new materials and high performance computer architecture continue to increase the sensitivity of systems to soft errors. Today, as technologies move into the era of nanotechnologies and system-on-chip (SoC) designs are widely used in most applications, the issues of soft errors and reliability in complex SoC designs are set to become increasingly challenging. This paper gives a review of soft errors in SoC designs and then presents a fault tolerant solution.",2008,0, 1835,A fault location scheme based on spectrum characteristic of fault-generated high-frequency transient signals,"When a fault occurs on a transmission line, the fault-generated high-frequency signals contain a lot of information about the fault. This paper proposes a fault location scheme for transmission lines based on the spectrum characteristics of high-frequency components; the difference between two adjacent natural frequencies at one end of the transmission line is used to compute the fault distance. This method does not need to identify the arrival times of the first traveling wave and the traveling wave from the fault point, and it is insensitive to swings, fault type, fault resistance and system source parameters. The scheme performance was proven using power system computer aided design (PSCAD) simulations of a 500-kV power system considering critical fault cases.",2009,0, 1836,An industrial environment for high-level fault-tolerant structures insertion and validation,"When designing VLSI circuits, most of the effort is now performed at levels of abstraction higher than the gate level. Corresponding to this clear trend, there is a growing demand to tackle safety-critical issues directly at the RT-level. This paper presents a complete environment for considering safety issues at the RT level. The environment was implemented and tested by an industrial company for devising a sample safety-critical device.
Designers were permitted to assess the effects of transient faults, automatically add fault-tolerant structures, and validate the results working on the same circuit descriptions and acting in a coherent framework. The evaluation showed the effectiveness of the proposed environment.",2002,0, 1837,A model of soft error effects in generic IP processors,"When designing reliability-aware digital circuits, either hardware or software techniques may be adopted to provide a certain degree of detection of/tolerance to failures caused by either hardware faults or soft errors. These techniques are quite well established when working at a low abstraction level, whereas they are currently under investigation when moving to higher abstraction levels, in order to cope with the increasing complexity of the systems being designed. This paper presents a model of soft error effects to be adopted when defining software-only techniques to achieve fault detection capabilities. The work identifies, on a generic IP processor, the misbehaviors caused by soft errors, and classifies and analyzes them with respect to the possibility of detecting them by means of previously published approaches. An experimental validation of the proposed model is carried out on the Leon2 processor.",2005,0, 1838,Quantization errors in digital motor control systems,"When implementing a motor control drive scheme digitally, quantization errors always exist in the system. Two major sources of quantization errors are the analog-to-digital (A-D) conversion process and numerical calculation in the fixed-point computing device. Typically, the effects of quantization errors due to A-D conversion contribute less than those produced by numerical calculation. This paper studies the quantization errors in a sensorless direct vector control system for an induction motor using a 32-bit fixed-point digital signal processor (DSP) from Texas Instruments (TMS320x28xx series). The focus is on quantization errors produced by numerical computation in such a DSP. Both simulation and experiment are carried out within the DSP itself in three kinds of data formats: 16-bit fixed-point, 32-bit fixed-point, and floating-point. By comparing the results between floating-point and fixed-point implementations on one machine, numerical issues related to quantization errors can be verified and resolved. As a result, system performance and behavior can be affected by quantization errors with a 16-bit word length, while the effect is not significant with a 32-bit word length.",2004,0, 1839,Decision trees for error concealment in video decoding,"When macro-blocks are lost in a video decoder such as MPEG-2, the decoder can try to conceal the error by estimating or interpolating the missing area. Many different methods for this type of post-processing concealment have been proposed, operating in the spatial, frequency, or temporal domains, or some hybrid combination of them. In this paper, we show how the use of a decision tree that can adaptively choose among several different error concealment methods can outperform each single method. We also propose two promising new methods for temporal error concealment.",2003,0, 1840,On maximizing the fault coverage for a given test length limit in a synchronous sequential circuit,"When storage requirements or limits on test application time do not allow a complete (compact) test set to be used for a circuit, a partial test set that detects as many faults as possible is required. Motivated by this application, we address the following problem.
Given a test sequence T of length L for a synchronous sequential circuit and a length M < L, we need to find a subsequence TS of T of length at most M such that the fault coverage of TS is maximal. A similar problem was considered before for combinational and scan circuits, and solved by test ordering. Test ordering is not possible with the single test sequence considered here. We solve this problem by using a vector omission process that allows the length of the sequence T to be reduced while allowing controlled reductions in the number of detected faults. In this way, it is possible to obtain a sequence TS that has the desired length and a maximal fault coverage.",2003,0, 1841,Design and implementation of the real-time color image correction system,"When the color linear array CCD camera works, the output color image exhibits a dislocation phenomenon which seriously degrades the visual effect. At the same time, the dislocation phenomenon also causes great trouble for image processing. In order to resolve this problem, a real-time collection system with a PCI interface has been developed. The system not only has an image collection function but also a communication function, making control and data transfer easy to achieve. The hardware circuit module is designed in a modular fashion. The FPGA is the core of the whole hardware circuit module, responsible for data transmission, timing logic control and serial communication. A double-buffer structure ensures reliable data transmission. The PCI driver development and the PC software application are described in detail in the software module. The DMA mode and the event notification method are adopted in the driver. The RGB color correction algorithm is introduced in detail in the software application. The experimental results show that the system has met the requirement targets. It can achieve a maximum transfer rate of 40 MBps when running in the DMA mode. The system is stable and real-time when it works outdoors.",2010,0, 1842,Using Developer Information as a Factor for Fault Prediction,"We have been investigating different prediction models to identify which files of a large multi-release industrial software system are most likely to contain the largest numbers of faults in the next release. To make predictions we considered a number of different file characteristics and change information about the files, and have built fully-automatable models that do not require that the user have any statistical expertise. We now consider the effect of adding developer information as a prediction factor and assess the extent to which this affects the quality of the predictions.",2007,1, 1843,Error-controlled computation for termination of programs,"Whether a program can terminate or not has a direct impact on software safety. As false results can occur due to calculation errors in floating point numbers, the determined terminability can be false for a given loop program and any initial value on Rn. In this paper, a recursive algorithm is suggested for calculating the values of arithmetic expressions to arbitrary precision. Using the error-controlled computation method (ECC), we can determine whether the initial value is a terminating point or not.",2010,0, 1844,All digital ADC with linearity correction and temperature compensation,"While digital circuits benefit from high-density digital CMOS technology, the design of analog and mixed signal blocks in the same technology becomes more challenging at each new technology node.
AD converters are an example of such blocks. A possible solution is the implementation of ADCs in digital technology using logic gates as a voltage controlled oscillator. However, the limited linearity and temperature sensitivity are known issues. In this paper, a linearity correction technique that is also able to compensate for temperature effects is used. Results indicate the feasibility of the approach.",2010,0, 1845,Software Fault Localization Based on Testing Requirement and Program Slice,"A heuristic approach is proposed to locate a fault according to priority. For a given test case wt, fault localization has to proceed when its output is wrong. Firstly, four assistant test cases, one failed and three successful, are selected according to the largest cardinality of Req(wt,ti), which stands for the common testing requirements covered by both wt and ti. Then, a code prioritization methodology is put forward based on program slicing: dynamic slicing is applied to wt and execution slicing to the four assistant test cases. Dices are constructed with different priorities, where priority denotes the possibility of containing the bug and is evaluated according to occurrences in the selected slices. Thirdly, the key algorithm, comprising two procedures, refining and augmenting, performs fault localization based on priority. In the refining phase, the most suspicious code is checked step by step; in the augmenting phase, more code is gradually considered on the basis of direct data dependency. Finally, experimental studies are performed to illustrate the effectiveness of the technique.",2007,0, 1846,The development of holonic information coordination systems with security considerations and error-recovery capabilities,"A holonic manufacturing system (HMS), which is designed to realize an agile manufacturing enterprise, must be able to integrate the entire range of manufacturing activities from market demands, design, modeling, production, through delivery. These activities are implemented by several distributed sites. In order to effectively integrate these distributed sites, we adopt distributed object, mobile object, and object web technologies as well as the holon and holarchy concepts derived from studying social organizations and living organisms to develop a holonic information coordination system (HICS). The generic holon is first developed to achieve the properties of holon, error recovery and security certification. Communication holons are then generated by inheriting the generic holon. Finally, communication holons are used to establish the HICS. Thus, communication holons have the basic holonic attributes, such as intelligence, autonomy, and cooperation. Further, communication holons can handle information sharing, coordination among enterprises and data exchange in different data formats. It is believed that HICS can meet the future requirements of supply chain information integration for virtual enterprises.",2001,0, 1847,A hybrid error concealment method based on H.264 standard,"A hybrid error concealment method for H.264 compressed video is presented in this paper. The method can adequately exploit the spatial and temporal correlations of the video sequence to adaptively select spatial or temporal concealment according to the boundary match criterion. In order to further improve the reconstructed video quality, a maximum a posteriori (MAP) estimator is then employed to post-process the concealed areas.
The experimental results show that the proposed method can achieve excellent concealment performance with remarkably improved subjective quality of the reconstructed video",2006,0, 1848,A Study on the Non-Inductive Coils for Hybrid Fault Current Limiter Using Experiment and Numerical Analysis,"A hybrid fault current limiter (FCL) proposed by our group previously is composed of a superconducting coil, a fast switch and a bypass reactor. The superconducting coil wound with two kinds of HTS wire has zero impedance when normal current flows in the coil. However, the different quench characteristics of the HTS wires generate magnetic flux in the coil when fault current flows in the coil. As a result, the fast switch is opened by the repulsive force applied to the aluminum plate above the coil. In previous studies, our group verified the operating characteristics and feasibility of the fast switch. In this paper, a comparison of pancake-type and solenoid-type non-inductive coils wound with two kinds of HTS wire was performed using short-circuit tests and the finite element method. From these results, the short-circuit characteristics of a coil can be acquired and the magnitude of the repulsive force and magnetic field can be analysed.",2010,0, 1849,Study of hybrid intelligent fault diagnosis,"A hybrid intelligent fault diagnosis method is presented to address the diversity, uncertainty and complexity of device faults. This method integrates the respective advantages of fault trees, fuzzy theory, neural networks and genetic algorithms to form a hybrid approach and is applied to the fault diagnosis of a fan. Experiments show that this method is simple and effective. It can also be applied to the fault diagnosis of other complex systems and has a certain portability.",2010,0, 1850,Expert fault diagnosis using rule models with certainty factors for the leaching process,"A key point in the operation of the leaching process is to ensure the safe running of the process. The paper proposes an expert fault diagnosis strategy for the leaching process, which is based on rule models with certainty factors. A diagnosis procedure is first presented. Then, the rule models are constructed based on empirical knowledge, empirical data and statistical results on past fault countermeasures, and an expert reasoning method is used which employs the rule models and forward chaining. A real-world application is expounded finally",2000,0, 1851,Correction of myocardial artifact due to limited spatial resolution in PET,"A kind of artifact in myocardial positron emission tomography (PET) is the count loss due to the limited spatial resolution of PET, which is also known as the partial volume effect (PVE). A deconvolution method was developed to correct for the artifact, in which the counts were evaluated as a parameter in the convolution. One-dimensional (1D) and two-dimensional (2D) models were used to create the convolution equation. Counts, background, myocardial wall position and thickness were parameters of the equation. The method needs non-linear fitting by an iterative process. Computer-simulated myocardial images, PET images of phantoms and patients' myocardial PET images were used for evaluating the method. For simulated images, the corrected recovery coefficient is between 0.97 and 1.10 for the 2D model and is near 0.9 for the 1D model. The 2D model can be applied to myocardium as thin as 0.25 times the system spatial resolution, while the 1D model can only be applied down to 1.5 times of it.
For phantom images, the recovery coefficient of the 2D model is near 1.0 for different thicknesses of the myocardium. The iterative process converges for a wide range of myocardium sizes and noise levels. The 2D model allows exact correction for PVE even for very thin myocardium.",2005,0, 1852,Incorporating fault debugging activities into software reliability models: a simulation approach,"A large number of software reliability growth models have been proposed to analyse the reliability of a software application based on the failure data collected during the testing phase of the application. To ensure analytical tractability, most of these models are based on simplifying assumptions of instantaneous & perfect debugging. As a result, the estimates of the residual number of faults, failure rate, reliability, and optimal software release time obtained from these models tend to be optimistic. To obtain realistic estimates, it is desirable that the assumptions of instantaneous & perfect debugging be amended. In this paper we discuss the various policies according to which debugging may be conducted. We then describe a rate-based simulation framework to incorporate explicit debugging activities, which may be conducted according to the different debugging policies, into software reliability growth models. The simulation framework can also consider the possibility of imperfect debugging in conjunction with any of the debugging policies. Further, we also present a technique to compute the failure rate, and the reliability of the software, taking into consideration explicit debugging. An economic cost model to determine the optimal software release time in the presence of debugging activities is also described. We illustrate the potential of the simulation framework using two case studies.",2006,0, 1853,An exploration of software faults and failure behaviour in a large population of programs,"A large part of software engineering research suffers from a major problem: there are insufficient data to test software hypotheses, or to estimate parameters in models. To obtain statistically significant results, a large set of programs is needed, each set comprising many programs built to the same specification. We have gained access to such a large body of programs (written in C, C++, Java or Pascal) and in this paper we present the results of an exploratory analysis of around 29,000 C programs written to a common specification. The objectives of this study were to characterise the types of fault that are present in these programs; to characterise how programs are debugged during development; and to assess the effectiveness of diverse programming. The findings are discussed, together with the potential limitations on the realism of the findings.",2004,0, 1854,A Three-Phases Byzantine Fault Tolerance Mechanism for HLA-Based Simulation,"A large scale HLA-based simulation (federation) is composed of a large number of simulation components (federates), which may be developed by different participants and executed at different locations. Byzantine failures, caused by malicious attacks and software/hardware bugs, might happen to federates and propagate in the federation execution. In this paper, a three-phase (i.e., failure detection, failure location, and failure recovery) Byzantine Fault Tolerance (BFT) mechanism is proposed based on the decoupled federate architecture. By combining replication, checkpointing and message logging techniques, some redundant executions of federate replicas are avoided.
The BFT mechanism is implemented using both Barrier and No-Barrier federate replication structures. Protocols are also developed to remove the epidemic effect caused by Byzantine failures. As the experiment results show, the BFT mechanism using No-Barrier replication significantly outperforms that using Barrier replication in the case that federate replicas have different runtime performance.",2010,0, 1855,A multi-agent system-based intelligent identification system for power plant control and fault-diagnosis,"A large-scale power system is required to have a new control system to operate at a higher level of automation, flexibility, and robustness. In this paper, a multi-agent system based intelligent identification system (MAS-IIS) is presented for identification and fault-diagnosis methodologies that improve the performance of the plant over a wide range of operation. With the proposed architecture of a single agent and an organization of the multi-agent system, the MAS-IIS realizes on-line adaptive identifiers for control, and off-line identifiers for fault-diagnosis in real-time power plant operation. The proposed MAS-IIS is one of the functions in multi-agent system based intelligent control systems (MAS-ICSs), which have several functions that provide efficient ways to control locally and globally, and to accommodate and overcome the complexity of large-scale distributed systems",2006,0, 1856,A low complexity hierarchical QAM symbol bits allocation algorithm for unequal error protection of wireless video transmission,"A low complexity hierarchical quadrature amplitude modulation (QAM) symbol bits allocation algorithm for unequal error protection of video transmission over wireless channels is proposed in this paper. An unequal error protection (UEP) scheme using hierarchical QAM, which takes into consideration the non-uniformly distributed importance of the intracoded frame (I-frame) and predictive coded frames (P-frames) in a group of pictures (GOP), is first proposed. In order to optimally allocate the hierarchical QAM's high priority (HP), medium priority (MP) and low priority (LP) symbol bits to the H.264/AVC video, a generic solution for this optimal allocation is then proposed. Finally, a low complexity symbol bits allocation algorithm, namely a ranking search algorithm, which reduces the computational complexity of the proposed optimal symbol bits allocation algorithm, is proposed. Simulation results show that our proposed UEP scheme outperforms the classical equal error protection (EEP) scheme and also the previous UEP scheme.",2009,0, 1857,Adaptive Checkpoint Replication for Supporting the Fault Tolerance of Applications in the Grid,"A major challenge in a dynamic Grid with thousands of machines connected to each other is fault tolerance. The more resources and components involved, the more complicated and error-prone the system becomes. Migol is an adaptive Grid middleware, which addresses the fault tolerance of Grid applications and services by providing the capability to recover applications from checkpoint files automatically. A critical aspect for an automatic recovery is the availability of checkpoint files: if a resource becomes unavailable, it is very likely that the associated storage is also unreachable, e.g. due to a network partition. A strategy to increase the availability of checkpoints is replication. In this paper, we present the Checkpoint Replication Service.
A key feature of this service is the ability to automatically replicate and monitor checkpoints in the Grid.",2008,0, 1858,Regularized B-spline deformable registration for respiratory motion correction in PET images,"A major challenge in respiratory motion correction of gated PET images is their low signal to noise ratios (SNR). This particularly affects the accuracy of image registration. This paper presents an approach to overcoming this problem using a deformable registration algorithm which is regularized using a Markov random field (MRF). The deformation field is represented using B-splines and is assumed to form an MRF. A regularizer is then derived and introduced to the registration, which penalizes noisy deformation fields. Gated PET images are aligned using this registration algorithm and summed. Experiments with simulated data show that the regularizer effectively suppresses the noise in PET images, yielding satisfactory deformation fields. After motion correction, the PET images have significantly better image quality.",2008,0, 1859,Phoenix: making data-intensive grid applications fault-tolerant,"A major hurdle facing data intensive grid applications is the appropriate handling of failures that occur in the grid environment. Implementing the fault-tolerance transparently at the grid-middleware level would make different data intensive applications fault-tolerant without each having to pay a separate cost and reduce the time to a grid-based solution for many scientific problems. We analyzed the failures encountered by four real-life production data intensive applications: the NCSA image processing pipeline, the WCER video processing pipeline, the US-CMS pipeline and the BMRB BLAST pipeline. Taking the results of the analysis into account, we have designed and implemented Phoenix, a transparent middleware-level fault-tolerance layer that detects failures early, classifies failures into transient and permanent, and appropriately handles the transient failures. We applied our fault-tolerance layer to a prototype of the NCSA image processing pipeline, considerably improving its failure handling, and report on the insights gained in the process.",2004,0, 1860,A robust technique for motion correction in fMRI,"A major source of error in the analysis of functional Magnetic Resonance images is the presence of spurious activation arising on account of patient head movement at the time of image acquisition. This makes it imperative for the images to be subjected to motion correction through registration. A number of solutions to the problem currently do exist, though there is always the need for faster approaches which produce better estimates of the motion parameters. In this paper, we propose a signal model for fMRI images with possible relative movement between scans and show how the least trimmed squares estimator, which is well-known in the statistical literature for its robustness, can be used. Since data obtained from actual fMRI studies do not provide the ""correct"" values of the motion parameters with which the performances of various estimators may be compared, computer simulations are set up where these parameters may be controlled.
Our simulations indicate that the proposed method produces smaller error in the estimated motion parameters, when compared to the existing estimators, including another robust estimator.",2009,0, 1861,Active power security correction in power market using genetic algorithms,"A method for active power security correction in a power market based on genetic algorithms is proposed in this paper. The aim of active power security correction in a power market is to regulate the active power output of generators properly to alleviate overloads of transmission lines with the least increase of market purchase cost. The mathematical model of the problem is given, and the principle of genetic algorithms and their application procedures are explained briefly as well. Some improvements to the crossover operator, and to the crossover and mutation probability of the simple genetic algorithm (SGA), are presented, forming an improved genetic algorithm (IGA). The application of IGA to active power security correction in a power market is then demonstrated. The method is proved to be correct in theory and feasible in engineering practice, according to the results of active power security correction calculation for a 22-node network example.",2002,0, 1862,A Research on I.C. Engine Misfire Fault Diagnosis Based on Rough Sets Theory and Neural Network,"A method for diagnosis of misfire faults in internal combustion engines based on the exhaust density of HC, CO2 and O2 and the engine's work parameters is presented in this paper. Rough sets theory is used to simplify the attribute parameters reflecting the exhaust emission and conditions of the internal combustion engine, eliminating unnecessary properties. The engine's work parameters and exhaust emission with and without misfire faults are tested in experiments on a CA6100 engine. A diagnosis model which describes the relationship between the misfire degree and the internal combustion engine's exhaust emission and work parameters is established based on rough sets theory and an RBF neural network. The model reduces the sample size, optimizes the neural network and increases the diagnosis correctness. The model is also trained by test data and MATLAB software. The model has been used to diagnose internal combustion engine misfire faults, and the results illustrate that this diagnosis model is suitable. This system can reduce the input node number and overcome some shortcomings, such as an overly large neural network scale and a slow classification rate.",2010,0, 1863,Evaluation of the profile error of complex surface through particle swarm optimization (PSO),"A method for evaluating the profile error of complex surfaces, particularly the profile error of worms, with the Minimum Zone Method (MZM) by way of particle swarm optimization (PSO) is presented. According to the minimum zone condition, the mathematical model of the complex surface together with the optimal objective function is derived. The principle and implementation of PSO are explained and the PSO algorithm is introduced to search for the optimal solution. Experiments show that the PSO method is capable of evaluating nonlinear optimization problems such as the profile error of a complex surface and can obtain optimal solutions. The solutions agree well with the results given by the Least Square Method (LSM). Moreover, the PSO-based approach is simple and easy to implement in a computer.
It can be applied to deal with the measured data of the profile error of complex surfaces obtained by three coordinate measurement machines (CMM).",2010,0, 1864,A probabilistic approach to fault diagnosis of industrial systems,"A method for fault diagnosis of industrial systems is presented. Plant devices, sensors, actuators and diagnostic tests are described as stochastic finite-state machines. A formal composition rule of these models is given to obtain: 1) the set of admissible fault signatures; 2) their conditional probability given any fault; and 3) the conditional probability of a fault given a prescribed signature. The modularity and flexibility of this method make it suitable to deal with complex systems made up of a large number of components. The method is used in an industrial automotive application; specifically, the diagnosis of the throttle body and of the angular sensors measuring the throttle plate angle is described in detail.",2004,0, 1865,Avoiding Crosstalk Influence on Interconnect Delay Fault Testing,"A method for reliable measurement of interconnect delays is presented in the paper. The mode of test vector generation never induces crosstalk. That is why the delay measurement is reliable. Also, minimization of ground bounce noise and reduction of power consumption during the test are additional advantages. The presented method also allows localizing and identifying static faults of both stuck-at (SaX) and short types. The paper deals with the hardware that is necessary for implementing the method.",2007,0, 1866,Motion correction of PET images using realignment for intraframe movement,"A method is presented for the motion correction of PET images using realignment for intraframe movement. A newly introduced aspect of the method is that it corrects not only interframe but also intraframe movement, using currently used PET images (which are not corrected for intraframe movement) and motion tracking data. Although our method requires motion tracking data, it requires neither very short time image acquisition nor list-mode data acquisition. So, it is applicable to currently used PET images. In our method the following hypothesis is assumed: if no movement happens, there exists a linear model such that the counts of each voxel per unit time follow the model. Model parameters may depend on the voxels. In a simple example, our method successfully corrected the motion artifact and estimated the parameters accurately. It may give a simple and practical solution to the motion correction problems.",2003,0, 1867,A new approach to defining corrective control actions in case of infeasible operating situations,"A method that deals with power system infeasible operating situations is proposed in this paper. In case these situations occur, appropriate corrective actions must be efficiently obtained and quickly implemented by (a) quantifying the system's unsolvability degree (UD), and (b) determining a corrective control strategy to pull the system back into the feasible operating region. UD is determined through the smallest distance between the infeasible (unstable) operating point and the feasibility boundary in parameter (load) space. In this paper, control strategies can be obtained by either the proportionality method (PM) or the nonlinear programming based method (NLPM). Capacitor banks, tap changing transformers and load shedding are the usual controls available.
The idea of a local search of the available controls is used as a way of selecting the most effective controls and minimizing the computational burden. Simulations have been carried out for small to large systems, under contingency and heavy load situations. It is shown that the proposed method can be a very useful tool in operation planning studies, particularly for voltage stability analysis",2001,0, 1868,Towards fault-tolerant software architectures,"""Software engineering has produced no effective methods to eradicate latent software faults."" This sentence is, of course, a stereotype, but it is as true as a stereotype can get. And yet, it begs some questions. If it is not possible to construct a large software system without residual faults, is it at least possible to construct it to degrade gracefully if and when a latent fault is encountered? This paper presents the approach adopted on CAATS (Canadian Automated Air Traffic System), and argues that OO design and certain architectural properties are the enabling elements towards a true fault-tolerant software architecture",2001,0, 1869,To Send or Not to Send: An Empirical Assessment of Error Reporting Behavior,"This study examines user perceptions that play a critical role in driving error reporting system (ERS) usage intentions. Building on the technology acceptance model (TAM) and literature on donation behavior, we theorize that value compatibility, role transparency, process transparency, and work interruption influence ERS usefulness and ERS usage intentions. Results show that congruence with the user's value system, role transparency, and process transparency are important determinants of ERS usefulness. A direct effect of value compatibility, process transparency, and work interruption on intention to use the ERS is also observed. More importantly, our study elaborates on the applicability of TAM and its variables beyond their originally defined constraint. Prescriptive guidelines on effective promotion and design of the ERS are offered.",2008,0, 1870,Study on 3D Modeling Method of Faults Based on GTP Volume,"3D fault modeling is one of the key issues of a three-dimensional geosciences modeling system (3DGMS). Having summarized the present 3D modeling methods of faults, a new method, named partition recursive modeling of the rock pillar body (RPB), for 3D modeling of faults (3DMF) based on the generalized tri-prism (GTP) volume is put forward. This method takes borehole data as its original data source. The process for 3DMF includes: a) generating the constrained Delaunay TIN (CD-TIN) of the terrain according to the collar data of the boreholes and the outcrops of faults; b) constructing the limit triangles in each RPB; c) creating 3D fault models according to the limit triangles. The test shows that the RPB method can accurately describe complicated 3D fault systems involving normal faults and reverse faults.",2006,0, 1871,Column parity and row selection (CPRS): a BIST diagnosis technique for multiple errors in multiple scan chains,"A BIST diagnosis technique is presented to diagnose multiple errors in multiple scan chains. An LFSR randomly selects outputs of multiple scan chains every scan cycle. The column parity and row parity of the selected scan outputs are observed every scan cycle and every scan unload, respectively. Compared with other techniques, which diagnose no more than 15% of errors, CPRS correctly diagnoses all errors in the presence of 1% unknowns.
The cost of this technique is area overhead and one additional output observed every scan cycle",2005,0, 1872,Identifying security bug reports via text mining: An industrial case study,"A bug-tracking system such as Bugzilla contains bug reports (BRs) collected from various sources such as development teams, testing teams, and end users. When bug reporters submit bug reports to a bug-tracking system, the bug reporters need to label the bug reports as security bug reports (SBRs) or not, to indicate whether the involved bugs are security problems. These SBRs generally deserve higher priority in bug fixing than non-security bug reports (NSBRs). However, in the bug-reporting process, bug reporters often mislabel SBRs as NSBRs partly due to lack of security domain knowledge. This mislabeling could cause serious damage to software-system stakeholders due to the induced delay of identifying and fixing the involved security bugs. To address this important issue, we developed a new approach that applies text mining on natural-language descriptions of BRs to train a statistical model on already manually-labeled BRs to identify SBRs that are manually-mislabeled as NSBRs. Security engineers can use the model to automate the classification of BRs from large bug databases to reduce the time that they spend on searching for SBRs. We evaluated the model's predictions on a large Cisco software system with over ten million source lines of code. Among a sample of BRs that Cisco bug reporters manually labeled as NSBRs in bug reporting, our model successfully classified a high percentage (78%) of the SBRs as verified by Cisco security engineers, and predicted their classification as SBRs with a probability of at least 0.98.",2010,0, 1873,Feedforward correction of the pulsed circularly polarizing undulator at the Advanced Photon Source,"A circularly polarizing undulator capable of switching the polarization very rapidly was installed at the Advanced Photon Source. The net magnetic field perturbation is characterized in both planes by a transient orbit motion, which lasts about 30 ms, and a DC orbit shift. In addition, multipole magnetic moment errors are present. The correction system consists of small dipole and multipole correction magnets at the ends of the undulator, a multichannel arbitrary function generator (AFG) to program the corrector magnet current triggered on the polarization change event, low-level software to load and interpolate the AFG waveforms, and high-level software running on a workstation to determine the optimum AFG waveforms for the dipole correctors. We rely on the existing real-time feedback system to acquire the orbit transient and to automatically generate a close approximation of the required corrector waveforms. A choice of deterministic correction or trial-and-error manual adjustments of the waveforms is available in the high-level software.",2003,0, 1874,Modeling of Multi-Finger SiGe HBTs and the Error Metrics of the Large Signal Model Performances,"A compact large-signal model for multi-finger SiGe HBTs is proposed and experimentally validated. The model formulation leads to a simple parameter extraction procedure. Model development was carried out for a multi-finger SiGe HBT fabricated in a commercial process technology. It consisted of 72 fingers, each with a drawn emitter geometry of 2200.5 m.
Results show good fit between measured and simulated DC, S-parameter and LS characteristics",2006,0, 1875,A compact 3.5/5.5 GHz dual band-notched monopole antenna for application in UWB communication systems with defected ground structure,"A compact ultrawideband monopole antenna having dual band-notched characteristics with a defected ground structure (DGS) is proposed. Two symmetrical L-shaped slots are created on the ground plane to generate the UWB characteristics in the proposed antenna. To generate the notch at the 5.2/5.8 GHz band, a U-shaped slot is cut in the rectangular radiating element, which mitigates the potential interference with WLAN. To have another notch band simultaneously around 3.0/4.0 GHz, which is the operating band of WiMAX (3.3-3.6 GHz) and C-band (3.7-4.2 GHz), an inverted U-shaped element is printed on the opposite side of the substrate. By properly varying the dimensions of the U-shaped slot and the radiating element, not only two controllable notch resonances, but also a very wide bandwidth from 1.91 GHz to 3.91 GHz (152%) with two sharp notched bands covering all the 3.5/5.5 GHz WiMAX, 4 GHz C-band and 5.2/5.8 GHz WLAN, are achieved. The proposed antenna is properly optimized and simulated, providing broadband impedance matching, appropriate gain and stable radiation pattern characteristics.",2010,0, 1876,Experimental study of inter-laminar core fault detection techniques based on low flux core excitation,"A comparison between two inter-laminar stator core insulation failure detection techniques for generators based on low flux core excitation is presented in this paper. The two techniques under consideration are: 1) the iron core probe based method developed recently and 2) the existing air core probe based method. A qualitative comparison of the two techniques is presented along with an experimental comparison on a 120-MW generator stator core. The test results are compared in terms of fault detection sensitivity, signal to noise ratio, and ease of interpretation, which are the main requirements for stator core inspection. In addition to the comparison, the performance of the iron core probe technique for machines with short wedge depression depth is presented along with the recent improvements in the algorithm. It is shown that the main requirements for stator core inspection are significantly enhanced with the new iron core probe-based core fault detector.",2006,0, 1877,Improving Model-Based Gas Turbine Fault Diagnosis Using Multi-Operating Point Method,"A comprehensive gas turbine fault diagnosis system has been designed using a full nonlinear simulator developed in the Turbotec company for the V94.2 industrial gas turbine manufactured by Siemens AG. The methods used for detection and isolation of faulty components are gas path analysis (GPA) and the extended Kalman filter (EKF). In this paper, the main health parameter degradations, namely the efficiency and flow capacity of the compressor and turbine sections, are estimated and the responsible physical faults such as fouling and erosion are found. Two approaches are tested: the single-operating point and the multi-operating point.
Simulation results show good estimations for diagnosis of most of the important degradations in the compressor and turbine sections for the single-point approach and improved estimations for the multi-point approach.",2010,0, 1878,Research on active power factor correction based on PDC control,"A new digital control strategy of parallel duty cycle control (PDC) for power factor correction (PFC) is presented and analyzed in this paper. Based on this new control strategy, the duty cycle determination algorithm includes the current term and the voltage term, which can be calculated in parallel and requires only one multiplication and three addition or subtraction operations in digital implementation. So, as compared with conventional digital PFC control methods, the PDC control can achieve higher switching frequency, lower cost, lower calculation requirement and better performance. The simulation and experimental results are provided to verify these virtues as well.",2009,0, 1879,Reliable JPEG 2000 wireless imaging by means of error-correcting MQ coder,"A new error resilience tool is proposed for robust JPEG 2000 imaging over noisy channels. In particular, a modified encoder, based on an MQ arithmetic coder with forbidden symbol, is introduced, along with a maximum likelihood error-correcting MQ decoder. The proposed technique features error detection, error concealment and error correction capability, thus adding new useful functionalities to JPEG 2000. Experimental results show that this technique largely outperforms the standard JPEG 2000 error resilience tools for error concealment and hard/soft channel decoding.",2004,0, 1880,Fast high-level fault simulator,"A new fast fault simulation technique is presented for calculating fault propagation through high level primitives (HLPs). Reduced ordered ternary decision diagrams are used to describe HLPs. The technique is implemented in an HTDD fault simulator. The simulator is evaluated with some ITC99 benchmarks. Besides high efficiency (in comparison with existing fault simulators), it shows flexibility for the adoption of a wide range of fault models.",2004,0, 1881,Fault Identification in Transformers through a Fuzzy Discrete Event System Approach,"A new fault detection and identification (FDI) scheme for transformer faults is suggested in this paper. The new method is based on a fuzzy discrete event system (FDES) composed from the relations between a transformer's measured outputs and its faults. In the FDES, events and state membership functions take values between zero and one. All events occur at the same time with different membership degrees. The main advantage of the suggested scheme is that different types of incipient or abrupt faults of transformers can correctly be identified. Principal component analysis (PCA) is mainly used for fuzzy event generation purposes. The event-based FDES diagnoser involves fuzzy IF-THEN rules created by an artificial neural network (ANN) based on radial basis functions to identify incipient faults in transformers. It shows single or multiple faults and the occurrence degrees of these faults. The study is concluded by giving some examples about the distinguishability of single or multiple fault types in transformers.
The real-time laboratory experiments verify the effectiveness of the suggested method.",2007,0, 1882,Just-in-time statistical process control for flexible fault management,"A new fault detection and identification method is proposed to solve the problems that obstruct industrial applications of multivariate statistical process control (MSPC) techniques. The proposed method, referred to as just-in-time statistical process control (JIT-SPC), can realize flexible, adaptive, high-performance process monitoring. In addition, fault identification can be done through a contribution plot in the framework of JIT-SPC. The usefulness of JIT-SPC is demonstrated through a numerical example, which conventional methods cannot cope with, and a case study of the vinyl acetate monomer production plant.",2010,0, 1883,A study of the accurate fault location system for transmission line using multi-terminal signals,"A new fault location system using two-terminal or three-terminal currents and voltages is presented in this paper, which can eliminate the errors caused by fault resistance. In the first part, the mathematical model is explained. In the second part, the factors that cause the errors of fault location are analyzed in detail and the methods used to reduce the errors are proposed. In the third part, the hardware structure and functional flow chart are given as well. Finally, the digital simulation results are presented. The results show that the fault locator based on the principle can realize fault location more accurately than conventional ones",2001,0, 1884,Research on Neural Network Integration Fusion Method and Application on the Fault Diagnosis of Automotive Engine,"A new fusion model is proposed, which combines integrated BP neural network models and the D-S evidence reasoning model, to solve the problem of the low precision rate in automotive engine fault diagnosis by traditional expert systems. The method of this paper not only realizes feature-level fusion of all subjective observation data and expert experience on different parts of the engine, but also realizes the predominance compensation of different models. In simulation experiments comparing the two methods, the method proposed in the paper improves diagnosis precision by 7.1% over the expert system and reduces the time complexity.",2007,0, 1885,Model-based information extraction method tolerant of OCR errors for document images,"A new method for information extraction from document images is proposed in this paper as the basis for a document reader which can extract required keywords and their logical relationship from various printed documents. Such documents obtained from OCR results may have not only unknown words and compound words, but also incorrect words due to OCR errors. To cope with OCR errors, the proposed method adopts robust keyword matching which searches for a string pattern from two dimensional OCR results consisting of a set of possible character candidates. This keyword matching uses a keyword dictionary that includes incorrect words with typical OCR errors and segments of words to deal with the above difficulties. After keyword matching, a global document matching is carried out between keyword matching results in an input document and document models which consist of keyword models and their logical relationship.
This global matching determines the most suitable model for the input document and solves word segmentation problems accurately even if the document has unknown words, compound words, or incorrect words. Experimental results obtained for 100 documents show that the method is robust and effective for various document structures",2001,0, 1886,Rolling element bearings fault classification based on SVM and feature evaluation,"A new method of fault diagnosis based on support vector machine (SVM) and feature evaluation is presented. Feature evaluation based on a class separability criterion is discussed in this paper. A multi-fault SVM classifier based on binary classifiers is constructed for bearing faults. Compared with the artificial neural network based method, the SVM based method has desirable advantages. Experiments show that the algorithm is able to reliably recognize different fault categories. Therefore, it is a promising approach to fault diagnosis of rotating machinery.",2009,0, 1887,Roller bearings fault diagnosis based on LS-SVM,"A new method of roller bearings fault diagnosis based on least squares support vector machines (LS-SVM) is presented. A feature selection method based on the simulated annealing (SA) algorithm is discussed in this paper. An LS-SVM classifier is constructed for bearing faults. Compared with the Artificial Neural Network based method, the LS-SVM based method possesses desirable advantages. Experiments show that the presented method is able to reliably recognize different fault categories.",2009,0, 1888,Corrective control strategies in case of infeasible operating situations,"A new method that deals with power system infeasible operating situations is proposed in this paper. In case these situations occur, appropriate corrective actions must be efficiently obtained and quickly implemented. In order to accomplish this, it is necessary (a) to quantify the system's unsolvability degree (UD), and (b) to determine a corrective control strategy to pull the system back into the feasible operation region. UD is determined through the smallest distance between the infeasible (unstable) operating point and the feasibility boundary in parameter (load) space. In this paper the control strategies can be obtained by two methods, namely the proportionality method (PM) and the nonlinear programming based method (NLPM). Capacitor banks, tap changing transformers and load shedding are the usual controls available. Simulations have been carried out, for small to large systems, under contingency and heavy load situations, in order to show the efficiency of the proposed method. It can be a very useful tool in operation planning studies, particularly in voltage stability analysis.",2001,0, 1889,Signal decomposition and fault diagnosis of a SCARA robot based only on tip acceleration measurement,"A new online fault diagnosis method for a SCARA robot is presented in this paper. This approach is based on separating the tip acceleration signal into program related acceleration (PRA) and transmission related acceleration (TRA). Unlike existing methods mounting various sensors at every joint, the proposed technique applies only one accelerometer mounted at the tip of the robot. An advanced detrending algorithm has been developed to extract PRA and TRA signals from the measured acceleration signals. Based on dynamic analysis of the robot, the theoretical acceleration profile of the programmed motion is obtained, which is compared with the PRA signals measured during working.
Typical FFT-based spectrum analysis is applied for analyzing the detrended TRA signal for fault diagnosis. The effectiveness of the proposed approach has been verified with experiments conducted on a 4-DOF SCARA manipulator.",2009,0, 1890,"Scan Testing for Complete Coverage of Path Delay Faults with Reduced Test Data Volume, Test Application Time, and Hardware Cost","A new scan architecture, called enhanced scan forest, is proposed to detect path delay faults and reduce test stimulus data volume, test response data volume, and test application time. The enhanced scan forest architecture groups scan flip-flops together, where all scan flip-flops in the same group are assigned the same value for all test vectors. All scan flip-flops in the same group share the same hold latch, and the enhanced scan forest architecture makes the circuit work in the same way as a conventional enhanced scan design. The area overhead of the proposed enhanced scan forest is greatly reduced compared to that for enhanced scan design. A low-area-overhead zero-aliasing test response compactor is designed for path delay faults. Experimental results for the ISCAS benchmark circuits are presented to demonstrate the effectiveness of the proposed method.",2007,0, 1891,Research and Implementation of Fault-Tolerant Computer Interlocking System,"A new signal control system for railway stations, a fault-tolerant all-electronic computer interlocking control system, is proposed, in which the computer-based interlocking system layer is constituted through the implementation of electronic security units replacing the relays, and the all-electronic fault-tolerant control of the whole system is fulfilled through a two-out-of-three fault-tolerant computer system. Furthermore, the overall structure, function and fault-tolerant security design of the system are discussed in detail. The system can meet the requirements of high reliability, availability and real-time control, and it can also monitor the external equipment and its own equipment in real time. The system has been put into operation and runs stably and reliably.",2010,0, 1892,The Method of Error Controlling on the Vectorization of Dot Matrix Image,"A new vectorization method for dot matrix images based on error controlling is studied in this paper. The least-square algorithm is a main method of vectorization of dot matrix images. In the calculation of the fitting error by using the least-square algorithm, if the fitting error is in the range of the threshold value, the result of vectorization will be considered to be right. So how to determine the threshold value is the key. In this paper, the vectorization errors of an ideal bitmap under the least-square algorithm are analyzed and judgment formulas are put forward to evaluate the recognition result. In order to complete vectorization automatically, a new method for the vectorization of dot matrix images is presented: the least-square rolling recognition algorithm for double direction. It solves the problem of selecting a suitable dot group. The judgment formulas in this paper are scientific and accurate; the method of vectorization is rapid and highly efficient.",2008,0, 1893,LUT error modeling based on implicit cube-distance errors,"A new way of modeling FPGA functional faults by implicit cube-distance errors is proposed. The implicit fault model allows significant reduction in fault list size, which is of particular importance in the case of mistakes in functions implemented by LUTs.
Experiments are performed in a modified version of SIS where the quality of the model is tested by software emulation. Finally, methods of fault redundancy identification are also discussed",2005,0, 1894,A New Weak Fault Component Reactance Distance Relay Based on Voltage Amplitude Comparison,"A new weak fault component reactance distance relay is proposed in this paper. By adaptive setting of the compensated voltage, the scheme synthesizes the performance of the impedance distance relay and the reactance distance relay. The distance protection relay on the receiving end will misoperate when the fault resistance is larger than the critical resistance. So a new switching criterion is applied to eliminate this disadvantage. Based on that, the proposed scheme can detect faults with high fault resistance in the setting coverage, regardless of whether the relay is located at the receiving end or the sending end. Test results from simulation and experimental conditions show that the new scheme is successful in detecting internal faults. It has higher sensitivity and selectivity under different conditions than the traditional fault component protection schemes.",2008,0, 1895,Noise Makers Need to Know Where to be Silent Producing Schedules That Find Bugs,"A noise maker is a tool that seeds a concurrent program with conditional synchronization primitives, such as yield(), for the purpose of increasing the likelihood that a bug manifests itself. We introduce a novel fault model that classifies locations as ""good"", ""neutral"", or ""bad,"" based on the effect of a thread switch at the location. Using the model, we explore the terms under which an efficient search for real-life concurrent bugs can be conducted. We accordingly justify the use of probabilistic algorithms for this search and gain a deeper insight into the work done so far on noise-making. We validate our approach by experimenting with a set of programs taken from publicly available multi-threaded benchmarks. Our empirical evidence demonstrates that real-life behavior is similar to that derived from the model.",2006,0, 1896,Research on Errors Compensation of Novel Parallel Coordinate Measuring Machine,"A novel 5-UPS-PRPU PCMM (parallel coordinate measuring machine) mechanism that can perform three-dimensional translations and two rotations is proposed. In order to enhance the measuring precision of the PCMM, a method for its position and posture errors compensation is then put forward. According to the error model of the PCMM, its position and posture errors compensation model is derived. Then, through the control of the actuating limbs' errors of the PCMM, a real-time compensation for its position and posture errors can be realized. The correctness of the errors compensation model is verified by numerical calculation.",2010,0, 1897,A fault tolerant signal processing computer,"A fault tolerant computer has been designed for radiation environments which employs COTS components. The use of radiation-tolerant but not fully hardened COTS devices provides significantly higher performance than specialty, fully hardened parts. The computer architecture consists of multiple, redundant processing nodes, each containing levels of internal redundancy, and multiple point-to-point communication ports on a crossbar switch. The nodes are linked together via ports to form a distributed crossbar network with inherent fault tolerance. A key attribute of the architecture is the provision for selectable levels of error detection and recovery.
The trade-offs between performance and degree of fault tolerance can be dynamically adjusted to meet specific system needs and parts selection at any particular time",2000,0, 1898,Fault-tolerant algorithm based on multi-resolution of Haar wavelets in opportunistic networks,"A fault-tolerant algorithm based on multi-resolution Haar wavelets is proposed to deal with the problem of concentrated data packet loss in opportunistic networks. The basic idea of the algorithm can be described as follows. First, the signals are decomposed by wavelet transformation to get the low-frequency coefficients and high-frequency coefficients before they are sent. The ARQ mechanism is used to transfer the low-frequency coefficients. The high-frequency coefficients are transferred directly. Then, the signals are restored based on conservation of energy in the receiver. Theoretical analysis and simulation results show that the proposed algorithm can efficiently recover the original signal and maintain good fault-tolerant performance even when continuous packets are lost because of bad disturbance.",2010,0, 1899,Fault-tolerant wormhole routing in torus networks with overlapped block faults,"A fault-tolerant routing algorithm for torus networks that uses only three virtual channels is presented. The proposed algorithm is based on the block fault model, which is suitable for modelling faults at the board level in networks with grid structures. Messages are routed via shortest paths when there are no faults. However, if a message is blocked by a faulty block, the message will use a detour path to route around the faulty block. Previously at least six virtual channels were needed to achieve the same fault-tolerant ability. Simulation results using various workloads and fault patterns are presented.",2003,0, 1900,Fault-tolerant routing in meshes/tori using planarly constructed fault blocks,"A few faulty nodes can make an n-dimensional mesh or torus network unsafe for fault-tolerant routing methods based on the block fault model, where the whole system (n-dimensional space) forms a fault block. A new concept, called extended local safety information in meshes or tori, is proposed to guide fault-tolerant routing, and classifies fault-free nodes inside 2-dimensional planes. Many nodes globally marked as unsafe become locally enabled inside 2-dimensional planes. A fault-tolerant routing algorithm based on extended local safety information is proposed for k-ary n-dimensional meshes/tori. Our method does not need to disable any fault-free nodes, unlike many previous methods, and this enhances the computational power of the system and improves performance of the routing algorithm greatly. All fault blocks are constructed inside 2-dimensional planes rather than in the whole system. Extensive simulation results are presented and compared with the previous methods.",2005,0, 1901,Improved design and system approach of a three phase inductive HTS fault current limiter for a 12 kVA synchronous generator,"A further development of the high temperature superconducting (HTS) mini power plant (MPP) concept designed earlier by one of the authors is presented in the paper. HTS fault current limiters (FCL) will be inserted at the terminals of the synchronous generator. Joint operation of HTS generators (including fully superconducting generators) and HTS FCLs provides additional benefits, viz. a significant increase of the generator's unit power rating as well as of its dynamic stability, as shown in the paper.
A three-phase inductive HTS FCL designed and built for the protection of a generator is made up of three one-phase units, each containing YBCO rings as secondary ""windings."" A new design idea was applied for the primary winding to further reduce the leakage reactance of the FCL, resulting in low reactive power consumption. Simulations of the electromagnetic processes in the HTS FCL are shown. Theoretical studies on the joint operation of a fully superconducting generator and an HTS FCL are presented.",2003,0, 1902,The Application of CI-Based Virtual Instrument in Gear Fault Diagnosis,"A gear fault detection system, the Virtual Instrument Diagnostic System, is developed by combining the advantages of VC++ and MATLAB through a hybrid programming method. The interface is designed in VC++, and the calculation of test data, signal processing and graphical display are completed by MATLAB. After the conversion program, converted in VC++ from the *.m files, is completed by the interface software, a versatile multi-functional gear fault diagnosis software system is successfully designed. The software system possesses various functions including the input of gear vibration signals, signal processing, graphics display, fault detection and diagnosis, and monitoring; it can diagnose gear faults well and has considerable application prospects in the field of fault diagnosis.",2009,0, 1903,Fault location in transmission lines using one-terminal postfault voltage data,"A new algorithm for fault location in transmission lines, with fault distance calculation based on steady-state measured phasors in the local terminal, is presented. For the postfault period, only voltage phasors are required, avoiding possible errors due to current transformer saturation; the current phasors are required only in the prefault time, when saturation does not occur. The algorithm does not use simplifying hypotheses but requires system equivalent data at both line terminals and the fault classification, considering the fault resistance purely resistive. In order to verify the algorithm performance, a parametric analysis of variables that influence short-circuit conditions is developed, including an analysis of remote equivalent setting. The results show that the algorithm is very accurate, even in cases when the remote equivalent is not well fitted.",2004,0, 1904,Mobile TV using scalable video coding and layer-aware forward error correction,"A new approach to error protection for scalable media is presented for mobile TV applications. Mobile TV is typically characterized by a number of receiver capabilities and connection qualities. A broadcast service should preferably work for multiple receiver capabilities without the need for downscaling or transcoding at the battery-powered mobile devices. Moreover, a media quality that gracefully degrades with reception quality instead of a complete signal loss is also a desirable feature. The scalable video coding (SVC) extension of H.264/AVC offers an efficient way to support the aforementioned features. In mobile broadcast channels forward error correction (FEC) is used to overcome packet losses. This work proposes a layer-aware forward error correction (L-FEC) approach in combination with SVC. L-FEC increases robustness of the more important layers by generating protection across layers. L-FEC is integrated as an extension of a Raptor FEC implementation.
It is shown by experimental results that L-FEC outperforms traditional FEC and UEP protection schemes.",2008,0, 1905,Unknown input-proportional integral observer for singular systems: Application to fault detection,"A new approach to the observer design for descriptor continuous-time systems is proposed and its application to the fault diagnosis problem is illustrated. In this observer, the two features of disturbance decoupling and fault estimation are combined. Also, a more general framework for fault estimation is used. Some numerical examples and simulation results are shown to justify the effectiveness of the algorithm.",2010,0, 1906,Utilization of matched pulses to improve fault detection in wire networks,"A new concept for fault detection in wire networks, based on the properties of time reversal, is presented. The method, called the matched pulse approach (MP), proposes to adapt the testing signal to the analyzed network, instead of using a predefined signal, as opposed to existing reflectometry methods. Through mathematical study and numerical simulations, we show the benefits of this technique. A physical interpretation is also presented to better understand the proposed approach.",2009,0, 1907,Defect coverage of boundary-scan tests: what does it mean when a boundary-scan test passes?,"A new coverage definition and metric, called the 'PCOLA/SOQ' model, introduced in K. Hird et al. (2002), has great utility in allowing the test coverage of defects on boards to be measured and compared rationally. This paper discusses the general topic of measuring test coverage of boundary-scan tests within this framework. A conclusion is that boundary-scan tests offer a large amount of test coverage when boundary-scan is implemented on a board, even if that implementation is partial. However, coverage is not perfect for certain defects in the PCOLA/SOQ model, as shown by example.",2003,0, 1908,Wavelet transform based power transmission line fault location using GPS for accurate time synchronization,"A continuous and reliable electrical energy supply is the objective of any power system operation. A transmission line is the part of the power system where faults are most likely to happen. This paper describes the use of the wavelet transform for analyzing power system fault transients in order to determine the fault location. Synchronized sampling was made possible by precise time receivers based on a GPS time reference, and the sampled data were analyzed using the wavelet transform. This paper describes a fault location monitoring system and fault locating algorithm with GPS, a DSP processor, and a data acquisition board, and presents some experimental results and error analysis",2001,0, 1909,Low-cost on-line fault detection using control flow assertions,"A control flow fault occurs when a processor fetches and executes an incorrect next instruction. Executable assertions, i.e., special instructions that check some invariant properties of a program, provide a powerful and low-cost method for on-line detection of hardware-induced control flow faults. We propose a technique called ACFC (Assertions for Control Flow Checking) that assigns an execution parity to a basic block, and uses the parity bit to detect faults. Using a graph model of a program, we classify control flow faults into skip, re-execute and multi-path faults. We derive some necessary conditions for these faults to manifest themselves as execution parity errors.
To force a control flow fault to excite a parity error, the target program is instrumented with additional instructions. Special assertions are inserted to detect such parity errors. We have developed a preprocessor that takes a C program as input and inserts ACFC assertions automatically. We have implemented a software-based fault injection tool, SFIG, which takes advantage of the GNU debugger. Fault injection experiments show that ACFC incurs less performance overhead (around 47%) and memory overhead (around 30%) than previous techniques, with no significant loss in fault coverage.",2003,0, 1910,Analysis of a Simple Feedback Scheme for Error Correction over a Lossy Network,"A control theoretic analysis of a simple error correction scheme for lossy packet-switched networks is presented. Based on feedback information from the error correction process in the receiver, the sender adjusts the amount of redundancy using a so-called extremum-seeking controller, which does not rely on any accurate model of the network loss process. The closed-loop system is shown to converge to a limit cycle in a neighborhood of the optimal redundancy. The results are validated using packet-based simulations with data from wireless sensor network experiments.",2007,0, 1911,A complex fault-tolerant power system simulation,"A correct evaluation of the availability indices in the different vital areas of a complex power system needs a fault events simulation. For this goal, a high-level type of generalized stochastic Petri net (GSPN) is used, modeling the system behavior in each of the selected vital areas. The paper introduces a practical model to simulate the fault events evolution and to evaluate the fault indices, named logical explicit stochastic Petri nets (LESPN). The paper exemplifies the LESPN model for a simple fault-tolerant system and extends the results to a complex power system. A simple fault-tolerant multiprocessor system is used for a comparative study of the GSPN and LESPN models. The complex fault tolerant system presented in the paper is the isolated electric ship power system.",2005,0, 1912,Mode Identification of Hybrid Systems in the Presence of Fault,"A mode identification method for hybrid system diagnosis is proposed. The method is presented as a module of a quantitative health monitoring framework for hybrid systems. After fault occurrence, the fault is detected and isolated. The next step is fault parameter estimation, where the size of the fault is identified. Fault parameter estimation is based on data collected from the hybrid system while the system is faulty, and its dynamical model is partially unknown. A hybrid system's dynamics consists of continuous behavior and discrete states represented by modes. Fault parameter estimation requires knowledge of the monitored system's operating mode. The new method utilizes the partially known dynamical model to identify hybrid system modes in the presence of a single parametric fault.",2010,0, 1913,Tunable Bandstop Defected Ground Structure Resonator Using Reconfigurable Dumbbell-Shaped Coplanar Waveguide,"A modification of the conventional dumbbell-shaped coplanar waveguide defected ground structure (DGS) is proposed. This modification permits the continuous tuning of the rejected frequencies by using a reconfiguration technique and it allows the control of the DGS equivalent-circuit model.
The modified DGS possesses two-dimensional symmetry; hence, it has been studied under different symmetry conditions and the corresponding equivalent-circuit model in each case has been developed. Based upon this study, a tunable bandstop DGS resonator is proposed. A 19% tuning range centered at 3.7 and 7.4 GHz, respectively, is achieved. The equivalent-circuit model of the resonator is also developed. All proposed structures have been fabricated. Measurements as well as three-dimensional simulations are found to be in very good agreement with theoretical predictions",2006,0, 1914,Evaluation of several Efron bootstrap methods to estimate error measures for software metrics,"A narrow confidence interval of a sample statistic or a model parameter implies low variability of that statistic, and permits a strong conclusion to be made about the underlying population. Conversely, the analysis should be considered inconclusive if the confidence interval is wide. Efron's (1992) bootstrap statistical analysis appears to address the fact that many statistics used in software metrics analysis do not come with theoretical formulas to allow accuracy assessment. In this paper we present preliminary results on an empirical analysis of the reliability of several Efron nonparametric bootstrap methods in assessing the accuracy of sample statistics in the context of software metrics. In particular, we focus on the standard errors and 90% confidence intervals of five basic statistics as a tool to evaluate the bootstrap. It was found that confidence intervals for the mean and median were accurately estimated, while those for variance were grossly under-estimated and those for skewness and kurtosis grossly over-estimated.",2002,0, 1915,Evanescent Field Absorption Sensor in Aqueous Solutions using a Defected-core Photonic Crystal Fiber,A new absorption sensor is developed using a defected-core photonic crystal fiber with an enhanced evanescent field. Excellent linearity is obtained between absorbance and both concentration and length. The sensitivity is increased by more than sixty times compared with perpendicular-direction measurement.,2008,0, 1916,A system for 3D error visualization and assessment of digital elevation models,"A digital elevation model (DEM) can be created using a variety of interpolation or approximation methods, any of which may yield errors in the final result. We present DEMEV (DEM Error Viewer), a visualization system that displays a DEM and possible errors in 3D, along with its associated contour or sparse data and/or a comparison DEM. The system incorporates several error visualizations. One method compares the test DEM to source data and highlights discrepancies (difference error) beyond a user-variable threshold. A novel, vertical cutting tool can slice the DEM to create a profile view that shows the surface of the test and comparison DEMs simultaneously, allowing the user to discern small errors between the two files in minute detail. The cutting tool is semi-transparent so that the profile is seen in the context of the 3D surface. Another novel error visualization uses height classes to display possible problems with slope in a DEM computed from contours. Other features of the system include visualizations for local curvature and slope and a display of computed statistics such as RMSE, total squared curvature, etc., in addition to typical GIS tools. The system is designed as an error-visualization tool; the above functions are displayed and readily available on the user interface.
The system has been tested with USGS data files to show its efficacy.",2007,0, 1917,A DSP-based FFT-analyzer for the fault diagnosis of rotating machine based on vibration analysis,"A DSP-based measurement system dedicated to the vibration analysis of rotating machines was designed and realized. Vibration signals are acquired on-line and processed to obtain continuous monitoring of the machine status. In case of a fault, the system is capable of isolating the fault with high reliability. The paper describes in detail the approach followed to build up the fault and fault-free models, together with the chosen hardware and software solutions. A number of tests carried out on small-size three-phase asynchronous motors highlight high promptness in detecting faults, a low false alarm rate, and very good diagnostic performance",2001,0,5482 1918,Fault Diagnosis Expert System Based on VXI Bus for Communication Circuit Board,"A fault diagnosis expert system based on VXI is designed to perform automatic fault detection for a certain type of high-tech information electronic equipment and to improve the efficiency and accuracy of diagnosis. This paper mainly introduces the research on the algorithm and realization of the fault diagnosis expert system for the communication circuit board of such equipment, and example verification on the hardware platform is described as well. With this method, it is quicker and more convenient to locate faults on the circuit boards of this equipment. It is proved that this expert system can solve the problems of high cost and long intervals of maintenance and keep the equipment in a stable status.",2007,0, 1919,An Online Automatic Method for Correction of Piezoelectric Scanner,"A novel online auto-correction method is proposed in this paper, addressing the nonlinearity and hysteresis of the atomic force microscope (AFM) piezoelectric ceramic scanner. In this method, an automatic positioning algorithm is invoked to get the same characteristic points from the forward and backward scanning lines; these points are used to calculate the distortion factors to realize image correction. A mathematical derivation of the correction principle is given, and an automatic positioning algorithm based on successive template comparison is introduced. Finally, situations that may cause invalidation of the method are discussed and solutions are given. Practice proves that the method is effective and can be applied in AFM automation.",2009,0, 1920,Flyboost power factor correction cell and its applications in single-stage AC-DC converters,"A novel power factor correction (PFC) cell with a direct-power-transfer (DPT) concept, called flyboost, is presented. This PFC cell combines the power transfer characteristics of conventional flyback and boost topologies. Based on the proposed flyboost PFC cell, a new family of single-stage (S2) topologies is derived, and an example converter is experimentally verified with a 150 W/28 V prototype. While still achieving high power factor (above 0.97) and tight output regulation, the flyboost PFC cell helps improve the converter's efficiency by 3-5% over conventional converters.",2002,0, 1921,Enhanced fault ride-through scheme and coordinated reactive power control for DFIG,"A novel scheme for enhancing the fault ride-through (FRT) capability of the Doubly Fed Induction Generator (DFIG) is proposed in this paper. In addition, a coordinated reactive power control strategy for the grid side and rotor side converters of the DFIG is also proposed.
This coordinated control aids rapid recovery of the terminal voltage at the clearance of a severe grid fault. An extensive simulation study is carried out employing PSCAD/EMTDC software, and the results demonstrate the effectiveness of the proposed FRT scheme and reactive power control strategy.",2010,0, 1922,A Novel Self-Correction Differential Active Pixel Sensor,"A novel self-correction differential active pixel with redundancy is proposed to produce high-quality images in harsh environments. In order to avoid common-mode noise from the environment and interference between digital and analog signals, a couple of differential outputs are generated and transferred by splitting the photodiode and readout transistors into two half-size parts. The performance of the differential active pixel in a low-power-supply environment is also estimated. Simulation results indicate that the proposed pixel offers higher conversion gain than the conventional active pixel and works more stably in harsh and low-voltage environments.",2009,0, 1923,A novel self-organizing neural network for defect image classification,"A novel self-organizing neural network called the evolving tree is applied to the classification of defect images. The evolving tree resembles the self-organizing map (SOM) but it has several advantages over the SOM. Experiments present a comparison between a normal SOM, a supervised SOM, and the evolving tree algorithm for classification of defect images that are taken from a real web inspection system. The MPEG-7 standard feature descriptors are applied. The results show that the evolving tree provides better classification accuracies and reduced computational costs over the normal SOMs.",2004,0, 1924,Compact Microstrip Quasi-Elliptic Bandpass Filter Using Open-Loop Dumbbell Shaped Defected Ground Structure,"A novel square open-loop dumbbell-shaped defected ground structure (DGS) unit is proposed. This unit provides a quasi-elliptic bandpass characteristic, and the two transmission zeros near the passband edges can be controlled by the dimensions of the DGS. Two quasi-elliptic bandpass filters using one and two DGS units, centered at 1.5 GHz, were designed and implemented. Both the simulation and experimental results show that the DGS filter response is in good accordance with the ideal quasi-elliptic model. The prototype filter with two DGS units yields higher-order quasi-elliptic filtering and reports a measured insertion loss of 0.72 dB, matching of 34 dB, fractional bandwidth of 51.8%, and about 20 dB stopband attenuation up to 10 GHz",2006,0, 1925,A Novel Scheme to Identify Symmetrical Faults Occurring During Power Swings,"A novel, fast unblocking scheme for distance protection to identify symmetrical faults occurring during power swings has been proposed in this paper. First, it is demonstrated that the change rates of three-phase active power and reactive power are, respectively, cosine and sine functions of the phase difference between the two power systems during power swings. In this case, they cannot be lower than the threshold of 0.7 after they are normalized. However, they will level off to 0 when a three-phase fault occurs during power swings. Thereafter, the cross-blocking scheme is conceived on the basis of this analysis. By virtue of the algorithm based on instantaneous electrical quantities, the calculation of the active and reactive power is immune to variation of the system power frequency. As an integration-based criterion, it has high stability.
Finally, simulation results show that this scheme is of high reliability and fast time response. The longest time delay is up to 30 ms.",2008,0, 1926,"Network Dependability, Fault-tolerance, Reliability, Security, Survivability: A Framework for Comparative Analysis","A number of qualitative and quantitative terms are used to describe the performance of what has come to be known as information systems, networks or infrastructures. However, some of these terms either have overlapping meanings or contain ambiguities in their definitions, presenting problems to those who attempt a rigorous evaluation of the performance of such systems. The phenomenon arises because the various disciplines covered by the term information technology have developed their own distinct terminologies. This paper presents a systematic approach for determining common and complementary characteristics of five widely-used concepts: dependability, fault-tolerance, reliability, security, and survivability. The approach consists of comparing definitions, attributes, and evaluation measures for each of the five concepts and developing corresponding relations. Removing redundancies and clarifying ambiguities will help the mapping of broad user-specified requirements into objective performance parameters for analyzing and designing information infrastructures",2006,0, 1927,Induction generator model in phase coordinates for fault ride-through capability studies of wind turbines,"A phase coordinates induction generator model with time-varying electrical parameters, as influenced by magnetic saturation and rotor deep bar effects, is presented in this paper. The model exhibits a per-phase formulation, uses standard data sheets for characterization of the electrical parameters, and is developed in C-code and interfaced with Matlab/Simulink through an S-Function. Saturation uses a direct non-linear magnetizing inductance versus input voltage function. The deep-bar effect is evaluated using voltage-reliant correction factors to make both rotor resistance and leakage inductance dependent on the rotor speed. The investigation compares the behaviour of three models in the presence of external faults, namely the ""classical"" DQ model, a phase coordinates model with constant parameters and the proposed phase coordinates model with time-varying rotor and magnetic electrical parameters. Case studies are conducted in a representatively sized system and the results show the aptness of the proposed model over the other two models. This approach is also useful for supporting grid code requirements.",2007,0, 1928,An efficient postprocessor architecture for channel mismatch correction of time interleaved ADCs,"A pipelined post-processor architecture is proposed in this paper for digital background calibration of time-interleaved ADCs. An adaptive filter technique is used for correction of offset and gain mismatches between ADC channels. Only one calibration unit is used for calibrating all ADC channels, and an increase in the number of parallel channels in the time-interleaved ADC does not considerably affect the hardware required for the proposed postprocessor. FPGA synthesis of a 10-bit 4-channel processor shows a 55% reduction in hardware usage and 25% in power consumption over the conventional architecture.",2010,0, 1929,DSP implementation of predictive control strategy for power factor correction (PFC),"A predictive algorithm for digitally controlled PFC is presented in this paper.
Based on this algorithm, all of the duty cycles required to achieve unity power factor in one half line period are calculated in advance by the DSP. A boost converter controlled by these precalculated duty cycles can achieve a sinusoidal current waveform. Input voltage feed-forward compensation makes the output voltage insensitive to input voltage variation and guarantees sinusoidal input current even if the input voltage is distorted. A prototype of a boost PFC controlled by a DSP evaluation board was set up to implement the proposed predictive control strategy. Test results show that the proposed predictive strategy for PFC achieves unity power factor.",2004,0, 1930,A digital power factor correction (PFC) control strategy optimized for DSP,"A predictive algorithm for digitally controlled power factor correction (PFC) is presented in this paper. Based on this algorithm, all of the duty cycles required to achieve unity power factor in one half line period are calculated in advance by digital signal processors (DSP). A boost converter controlled by these precalculated duty cycles can achieve a sinusoidal current waveform. One main advantage is that the digital control PFC implementation based on this control strategy can operate at a high switching frequency which is not directly dependent on the processing speed of the DSP. Input voltage feed-forward compensation makes the output voltage insensitive to input voltage variation and guarantees sinusoidal input current even if the input voltage is distorted. A prototype of a boost PFC controlled by a DSP evaluation board was set up to implement the proposed predictive control strategy. Both the simulation and experimental results show that the proposed predictive strategy for PFC achieves near unity power factor.",2004,0, 1931,Design and Analysis of Three-Phase Reversible High-Power-Factor Correction Based on Predictive Current Controller,"A predictive current control for three-phase PWM rectifiers is proposed to simplify the design of the current loop with less memory and interrupt resource occupation in a digital signal processor. Together with the decomposition-matrix method, the total algorithm can be completed with only one timer and its underflow interrupt subroutine. The predictive current controller helps to design the voltage regulator and makes the input currents track the input voltages so well that high power factor is achieved. Finally, the experimental results are given to verify the proposed predictive current control.",2008,0, 1932,Electromagnetic bandgap filter with single-cell monolithic microwave integrated circuit-tuneable defect,"A prototype tuneable planar electromagnetic bandgap filter has been developed for implementing a voltage-controlled bandwidth, intended for adaptive spectral control. A tuneable single-cell defect that allows tuning of the bandwidth is introduced into the structure using monolithic microwave integrated circuit technology. A combination of three-dimensional electromagnetic modelling of the planar structure and equivalent circuit modelling techniques is presented to validate this work. The simulation and measured results are in very close agreement.",2010,0, 1933,An Adaptive Algorithm for Fault Tolerant Re-Routing in Wireless Sensor Networks,"A substantial amount of research on routing in sensor networks has focused upon methods for constructing the best route, or routes, from data source to sink before sending the data.
We propose an algorithm that works with this chosen route to increase the probability of data reaching the sink node in the presence of communication failures. This is done using an algorithm that watches radio activity to detect when faults occur and then takes actions at the point of failure to re-route the data through a different node without starting over on an alternative path from the source. We show that we are able to increase the percentage of data received at the source node without increasing the energy consumption of the network beyond a reasonable level",2007,0, 1934,Research on the Abrasive Water-Jet Cutting Machine Information Fusion Fault Diagnosis System Based on Fuzzy Neural Network,"A system structure for water jet cutting machine fault diagnosis based on multi-information fusion is presented, which takes the time-varying nature, redundancy and uncertainty of the multi-fault characteristic information into consideration. We make use of the neural network's good fault tolerance, strong generalization capability, and characteristics of self-organization, self-learning, and self-adaptation, and take advantage of multi-source information fusion technology to realize comprehensive processing of uncertain information. The characteristic-layer fusion model of water jet cutting machine fault diagnosis, which makes use of a fuzzy neural network to realize feature layer fusion and D-S evidence theory to complete decision layer fusion, has been established. The simulation results of water jet cutting machine fault diagnosis show that the method can effectively improve the diagnostic credibility and reduce diagnostic uncertainty.",2010,0, 1935,A Support System for Teaching Computer Programming Based on the Analysis of Compilation Errors,"A system was developed to support teaching computer programming to a group of students who have common questions and make common mistakes on practice computer programs. The system extrapolates the causes and syntaxes of students' compilation errors by analyzing the trends of past compilation errors and presents the extrapolated result to the teacher in real time. By using the system, a teacher can understand in real time students' programming mistakes when they are writing computer programs, and can appropriately teach computer programming to a group of students who have common problems",2006,0, 1936,Simulation Study on the Effect of Multiple Node Charge Collection on Error Cross-Section in CMOS Sequential Logic,"A technique for estimating the error cross-section of combinational circuits based on charge collection at multiple nodes is presented. Ordinarily, charge collection from an ion strike is assumed to occur only on a single node, but with decreasing feature sizes in nanometer technologies, charge sharing among devices is worsening, leading to charge collection on multiple nodes. When multiple SETs are considered, simulation results show a 380× increase in cross-section with a four-bit carry look-ahead generator, and a 27× increase with a four-bit arithmetic logic unit. Additionally, both circuits show a linear increase in cross-section as frequency increases.",2008,0, 1937,Fault location using wavelet packets,"A technique using wavelet packets is presented for accurate fault location on power lines with branches. It relies on detecting fault-generated transient traveling waves and identifies some waves reflected back from discontinuities and the fault point.
Wavelet packet analysis is used to decompose and reconstruct high-frequency fault signals. An eigenvector matrix consists of the local energies of the high-frequency content of the fault signal; the faulty section is determined by comparing the local energies in the eigenvector matrix with a given threshold. With the faulty section determined, the time ranges of the two reflected waves related to the fault point are found, and then the fault point is located. The paper shows the theoretical development of the algorithm, together with the results obtained using EMTP simulation software to model a simple 10 kV overhead line circuit.",2002,0, 1938,"Attacking ""bad actor"" and ""no fault found"" electronic boxes","The percentage of what are termed ""bad actor"" and no fault found (NFF) electronic boxes in military weapon systems is steadily growing. These are boxes that fail during operation, but test NFF during back shop testing, or that fail during back shop testing and then test NFF at the depot repair facility. During operation, an electronic box is stressed by various environmental conditions which are normally absent on a test bench. If there are cold or cracked solder joints, corroded or dirty connector contacts, loose crimp joints, hairline cracks in a ribbon cable trace, or other intermittent conditions, the intermittency can occur while the box is under stress conditions, yet seldom occur while the box is on a test bench at room temperature. Very little concerted effort is currently focused on detecting, isolating and repairing these intermittent problems. Virtually all testing activity simply tests the unit for normal operation, one function, one circuit, or one set of circuits at a time. If an intermittent circuit is not displaying its intermittent nature at the instant it is being tested, the intermittency remains undetected. A three-pronged effort is currently underway to attack and repair bad actor and NFF electronic boxes. The first is to collect detailed repair data to identify which boxes are bad actor and NFF units. The second is to collect test data to determine which units yield inconsistent test results between back shop testing and depot testing, and why. The third is to employ a system that detects and isolates electronic box intermittent circuits. This paper describes the success realized to date by employing each of the three techniques described above, and how they are now effectively being employed together to reduce maintenance costs and improve avionics reliability for the F-16 weapon system.",2005,0, 1939,Development of a feed-forward real-time compensation control system for movement error in CNC machining,"A theoretical model of a feed-forward compensation control system is constructed by the method of precision compensation. A feed-forward compensation hardware control system is designed with the MCS51 CPU as its core, together with the structure of the compensation data processing program. A mathematical model of the linear contour error components is established, from which the feed-forward compensation amount is determined by the algorithm. Simulation results on a CNC x-y experiment platform indicate that this design can effectively eliminate the phase lag and amplitude errors of the computer numerical control (CNC) system, and improve the general CNC machining accuracy of the part contour.",2010,0, 1940,Three-phase capacitor-clamped converter with fewer switches for use in power factor correction,"A three-phase capacitor-clamped converter for use in power factor correction and harmonics elimination is presented.
There are two legs in the proposed converter, one to achieve three-level pulse-width modulation (PWM) and one to constrain the line currents to be sinusoidal waves. Only eight power switches, instead of the 12 power switches used in a conventional three-phase flying capacitor converter, are used in the proposed converter. Three control loops (a DC link voltage control, a neutral-point voltage control and a line current control) are used in the control scheme to achieve DC bus voltage regulation, to balance the neutral-point voltage and to draw balanced sinusoidal line currents, respectively. A mathematical model of the converter is derived and a control scheme is presented. The proposed circuit topology can be applied to AC motor drives and power quality compensation. Computer simulations and experiments are presented that verify the effectiveness of the proposed control scheme.",2005,0, 1941,A fault-tolerant transactional agent model on distributed objects,"A transactional agent is a mobile agent that manipulates objects distributed on computers under some type of commitment condition. For example, a transactional agent commits only if at least one object could be successfully manipulated under the at-least-one commitment condition. Computers may stop due to faults, while networks are assumed to be reliable. In the client-server model, servers can be made fault-tolerant according to traditional replication and checkpointing technologies. However, an application program cannot be performed if a client computer is faulty. In the transactional agent model, an application program can be performed on another operational computer even if a computer is faulty. For example, a transactional agent can move to another operational computer if the destination computer to which the agent is to move is faulty. There are several kinds of faulty computers for a transactional agent: the current, destination, and sibling computers, where the agent now exists, will move to, and has visited, respectively. We discuss how the transactional agent can be tolerant of these types of faults. We show how a program reliably manipulating objects can be realized as a mobile agent in the presence of computer faults",2006,0, 1942,Quantitative VOI-based Analysis of Template-guided Attenuation Correction in 3D Brain PET,"A transmission template-guided attenuation correction method was recently proposed and validated in comparison to transmission-based attenuation correction using voxelwise SPM analysis of clinical data. In contrast to brain activation studies, brain PET research studies often involve absolute quantification. As the assessment was carried out by an SPM group analysis alone, validation as to how such quantification can be affected by the two methods needed to be performed to demonstrate how the proposed method performs individually, particularly for diagnostic applications or individual quantification. In this study, we assess the quantitative accuracy of this method using automated volume of interest (VOI)-based analysis by means of the BRASS software for automatic fitting and quantification of functional brain images. There is a very good correlation (R² = 0.91) between the atlas-guided and measured transmission-guided attenuation correction techniques, and the regression line agreed well with the line of identity (slope = 0.96). The mean absolute relative difference between the two methods for all VOIs across the whole population is 2.3% whereas the maximum difference is less than 7%.
No statistically significant differences could be verified for any of the regions. These encouraging results provide further confidence in the adequacy of the proposed approach, demonstrating its performance particularly for research studies or diagnostic applications involving quantification.",2006,0, 1943,Tropospheric correction for InSAR using interpolated ECMWF data and GPS Zenith Total Delay from the Southern California Integrated GPS Network,"A tropospheric correction method for Interferometric Synthetic Aperture Radar (InSAR) was developed using profiles from the European Centre for Medium-Range Weather Forecasts (ECMWF) and Zenith Total Delay (ZTD) from the Global Positioning System (GPS). The ECMWF data were interpolated onto a finer grid with the Stretched Boundary Layer Model (SBLM) using a Digital Elevation Model (DEM) with a horizontal resolution of 1 arcsecond. The output was converted into ZTD and combined with the GPS ZTD in order to achieve tropospheric correction maps utilizing both the high spatial resolution of the SBLM and the high accuracy of the GPS. These maps were evaluated for three InSAR images, with short temporal baselines (implying no surface deformation), from Envisat during 2006 on an area stretching northeast from the Los Angeles basin towards Death Valley. The RMS in the InSAR images was greatly reduced, by up to 32%, when using the tropospheric corrections. Two of the residuals showed a constant gradient over the area, suggesting a remaining orbit error. This error was reduced by reprocessing the troposphere-corrected InSAR images, with the result of an overall RMS reduction of 15-68%.",2010,0, 1944,An approach towards smart fault-tolerant sensors,"Acquisition and processing of sensor data has to cope with measurement uncertainties and complex failure modes. Additionally, multiple sensor types and modalities may be used to improve the reliability of environment perception. Our work aims at providing an architecture for fault-tolerant sensors and offering a uniform interface to the application. In the paper, we present our fault-tolerant virtual sensor concept that is based on combining model-based estimation and redundant sensor data. To illustrate and evaluate our concept we simulate a mobile robot in an instrumented environment which integrates several smart position sensors. By using a mathematical model to evaluate sensor data we achieve a more reliable position estimation. The paper presents results of the fusion process and discusses methods for generalization.",2009,0, 1945,An initial position correction and model instance selection method for AAM based face alignment,"Active appearance models (AAM) are very useful for extracting attention objects, e.g. faces, from images. Traditional improved methods of AAM-based face alignment always concentrate on fitting efficiency without any concrete analysis of the characteristics of the initial position and model instance; thus the accuracy and speed are both not ideal when the face has a certain degree of deflection. An initial position correction and model instance selection method based on facial feature detection and simple 3D pose estimation is proposed in this paper. The Adaboost algorithm was first applied to pre-detection of facial features in the images; then features that could not be detected or had been incompletely detected were extracted by facial skin properties in the YCbCr color space.
Finally, we calculated the coordinate of the nose tip and the deflection angle of the face according to the feature regions, then properly adjusted the AAM fitting center position and model instance, and introduced the linear algebra software ATLAS into the fitting process for matrix optimization. Simulation experiments on the IMM face database show that our method increased the fitting accuracy rate by about 43% and decreased the time consumption by about 76% compared with the standard AAM algorithm.",2008,0, 1946,A nested invocation suppression framework for active replication fault-tolerant CORBA,"Active replication is a common approach to building highly available and reliable distributed software applications. The redundant nested invocation (RNI) problem arises when servers in a replicated group issue nested invocations to another server group in response to a client invocation. Automatic suppression of RNI is always a desirable solution, yet it is usually a difficult design issue. In this research, we propose a new determinism reference model based on accomplishing the verification process in a more systematic manner. The proposed determinism reference model consists of four levels, namely: ideal determinism, isomorphic determinism, similar determinism, and non-determinism. We consider a class of multi-threading CORBA environments to demonstrate the power of the proposed determinism reference model.",2002,0, 1947,A nested invocation suppression mechanism for active replication fault-tolerant CORBA,"Active replication is a common approach to building highly available and reliable distributed software applications. The redundant nested invocation (RNI) problem arises when servers in a replicated group issue nested invocations to other server groups in response to a client invocation. Automatic suppression of RNI is always a desirable solution, yet it is usually a difficult design issue. If the system has multithreading (MT) support, the difficulties of implementation increase dramatically. Intuitively, designing a deterministic thread execution control mechanism is a possible approach. Unfortunately, some modern operating systems implement threads on the kernel level for execution fairness. For the kernel thread case, modification of thread control implies modifying the operating system kernel. This approach loses system portability, which is one of the important requirements of CORBA or middleware. In this work, we propose a mechanism to perform the auto-suppression of redundant nested invocations in an active replication fault-tolerant (FT) CORBA system. Besides the mechanism design, we discuss the design correctness semantics and the correctness proof of our design.",2002,0, 1948,Impact of channel estimation error on performance of adaptive MIMO systems,"Adaptive modulation schemes have been widely used in multiple-input multiple-output (MIMO) systems to enhance the spectral efficiency while maintaining the bit-error-rate (BER) under a target level. In this work, we investigate the performance of adaptive modulation in the presence of imperfect channel estimation and the impact of estimation noise on the spectral efficiency. Closed-form expressions for the average spectral efficiency are derived.
Two MIMO schemes are considered, i.e., orthogonal space-time block codes (OSTBC) and spatial multiplexing with a zero-forcing receiver (SM-ZF), and a low-complexity method to enable the transmitter to switch between OSTBC and SM-ZF is utilized to achieve higher spectral efficiency than adaptive OSTBC and adaptive SM-ZF.",2008,0, 1949,Fast mode decision for adaptive prediction error coding,"Adaptive prediction error coding (APEC) in the spatial and frequency domains is proposed and significantly improves the coding efficiency of video coders. However, this approach comes at the expense of increased encoding complexity because of the need to perform rate distortion optimization (RDO) for each block, to decide whether the block is coded in the spatial or frequency domain. In this work, we propose a novel fast mode decision algorithm operating at the macroblock and block levels to reduce the encoding complexity of APEC. The proposed algorithm estimates the edge orientation of the residual block using spatial domain filtering, and based on the estimated edge orientation, reduces the number of candidates to be checked in RDO. Furthermore, the proposed method includes an early termination step that stops searching for the best candidate based on the difference between spatial and frequency domain coding. Experimental results show that the proposed fast mode decision algorithm can achieve on average a 90.7% encoding time reduction for APEC with only about 0.06 dB loss in coding efficiency.",2008,0, 1950,Refined Spatial Error Concealment with Directional Entropy,"A refined error concealment method for intra frames in H.264 is proposed in this work. The directional entropy of neighboring edges is used to classify the content of the lost block. Some techniques aimed at shortening the computation time without degrading the quality of the reconstructed video are presented to implement multi-directional interpolation. As for blocks with high texture, which neither bilinear nor multi-directional interpolation alone recovers well, we integrate the two methods to get the final result. Experimental results demonstrate the efficiency and better performance of the proposed method as compared to other interpolation-based methods.",2009,0, 1951,Analyzing fault effects in fault insertion experiments,Addresses the problem of evaluating system sensitivity to hardware faults. For this purpose the authors use a software implemented fault injector with extended statistical capabilities (FITS). The main contribution of the paper is the formulation of basic factors influencing system dependability and checking them in experiments. These experiments have been performed on an IBM PC platform,2001,0, 1952,Soft Error Considerations for Multicore Microprocessor Design,"Advanced integrated circuits with reduced operating voltages and higher transistor densities exhibit increased sensitivity to radiation effects. This sensitivity is not confined to just memory but can affect all logic elements of the circuit. As such, radiation-induced soft errors are becoming a dominant reliability-failure mechanism in modern CMOS technologies. With the recent trend towards multicore microprocessors, designers must now consider how those multicore designs will perform taking into account the effects of soft errors. This paper discusses the relationships among process technology, architecture, communication, operating system, and applications when designing a multicore microprocessor.
Several key design considerations are presented which exemplify the linkages among those design elements in multicore microprocessors.",2007,0, 1953,The PCB defect inspection system design based on LabWindows/CVI,"A novel approach used for Printed Circuit Board (PCB) defect inspection is presented in this paper. This image processing approach, developed in LabWindows/CVI, can be used to determine whether PCB components are put in the correct location, and matches areas of inspection to a template of valid components. The experimental results show that the proposed approach can find the correct components in a PCB image with a given template; therefore it can be used in automatic optical inspection for online inspection.",2009,0, 1954,Neural network approach to diagnose faults in linear antenna array,"A novel approach using an artificial neural network (ANN) is proposed to identify the faulty elements present in a non-uniform linear array. The input to the neural network is the amplitude of the radiation pattern, and the output of the neural network is the location of the faulty elements. In this work, the ANN is implemented with two algorithms, the radial basis function (RBF) neural network and the probabilistic neural network, and their performance is compared. The network is trained with some of the possible faulty radiation patterns and tested with various measurement errors. It is proved that the method gives a high success rate.",2008,0, 1955,A Clock Fault Detection Circuit for Reliable High Speed System by Time-to-Voltage Conversion,"A novel architecture for a clock fault detection circuit for high-speed nanoelectronic systems is presented. A time-to-voltage converter is employed to transform timing errors into voltage errors, which are more convenient for error detection. A rapid discharging circuit is also used for system resetting. To illustrate the detection capability of the detection circuit, a prototype CMOS design of the proposed circuit is presented. Simulation results show that the proposed architecture is well suited for integration into nanoelectronic circuit designs to detect clock faults.",2009,0, 1956,Fault Location in Power Transmission Lines Using a Second Generation Wavelet Analysis,"A novel but effective signal processing tool, second generation wavelet (SGW) analysis, is proposed to extract the features of fault-generated transient waves: the maxima of amplitude fluctuation and their polarities. The transient current signals observed at each end of the power transmission line are first transformed into an aerial mode current signal when a fault occurs. The original transient current signal data is split into the approximation signal and the detail signal. The approximation signal is updated using the information contained in the detail signal. Meanwhile, the detail signal is predicted using information contained in the updated approximation signal. The updated approximation signal represents the steady component of the signal, and the updated detail signal contains the high-frequency component only and is exploited to extract the transient features. The maxima of the detail signal are recognized to identify the time-tags, and the polarities of these time-tags are easy to determine. Finally, with reference to the maxima and their polarities derived from the high-frequency component of the aerial modal signals, the distance to the fault can be obtained.
The simulation results have demonstrated that the proposed SGW analysis scheme is capable of extracting the transient features more effectively than the recently developed multi-resolution morphology gradient method",2005,0, 1957,A novel hybrid defected ground structure as low pass filter,"A novel DGS has been investigated to achieve better S-parameter performance than ever reported in the microwave literature. A chronological development in designs ranging from the conventional square-patterned photonic bandgap (PBG) structure and defected ground structures (DGS) to the proposed novel DGS has been investigated theoretically and experimentally. The novel design results from the application of a Chebyshev distribution to the dimensions of the conventional DGS. The proposed novel DGS provides excellent performance in terms of ripples in the passband, sharp selectivity at the cut-off frequency and a spurious-free wide stopband.",2004,0, 1958,Error-free arithmetic for discrete wavelet transforms using algebraic integers,"A novel encoding scheme is introduced with applications to error-free computation of discrete wavelet transforms (DWT) based on Daubechies wavelets. The encoding scheme is based on an algebraic integer decomposition of the wavelet coefficients. This work is a continuation of our research into error-free computation of DCTs and IDCTs, and this extension is timely since the DWT is part of the new standard for JPEG2000. This encoding technique eliminates the requirement to approximate the transformation matrix elements by obtaining their exact representations. As a result, we achieve error-free calculations up to the final reconstruction step, where we are free to choose an approximate substitution precision based on a hardware/accuracy trade-off.",2003,0, 1959,Research on architecture and design principles of COTS components based generic fault-tolerant computer,"A novel fault-tolerant architecture based on COTS components is put forward and implemented in this paper. In order to make the internal states of COTS components observable, to concurrently perform the fault-tolerance and normal functions, and to control the behavior of each COTS component, the authors have devised an intelligent hardware module dedicated to fault-tolerance processing, which can significantly offload application processors. This architecture exploits every inherent fault-detection mechanism and adopts a layered fault protection mechanism to raise fault-tolerance coverage. This architecture is efficient, flexible, scalable and transparent with respect to fault-tolerance. It is Byzantine fault safe and also supports online repair. The authors also discuss some design tradeoffs encountered when designing a COTS-components-based fault-tolerant computer.",2005,0, 1960,A fault diagnosis approach for roll bearing based on wavelet-SOFM network,"A novel method of pattern recognition and fault diagnosis in roll bearings based on the wavelet-neural network is proposed according to the frequency spectrum characteristics of the vibration signal. Based on the advantage of the multi-dimensional multi-scale decomposition of wavelet packets, the abrupt change information can be obtained, and the features related to the roll bearing fault are extracted through the decomposition and reconstruction of the vibration signal of the roll bearing. The extracted features are input into a SOFM to realize the automatic classification of the fault. The trained SOFM can be used for online state monitoring and real-time fault diagnosis of roll bearings.
The feasibility of this novel method is proved by the simulation results.",2007,0, 1961,A novel high frequency pseudo noise correlator hardware design for cable fault diagnoses,A novel modification of a cross-correlation algorithm has been designed and implemented as a digital hardware process. The algorithm was written in Verilog and tested running on FPGA hardware at 100 MHz. The algorithm presented has a modular architecture that is fully scalable and can be used to correlate a large number of different-length pseudo random binary sequences (PRBS). A real-world interface (digital-to-analog and analog-to-digital converter) is added to the FPGA implementation to validate the algorithm in the field of cable testing and fault finding. The algorithm was functionally validated through coaxial cable testing and benchmarked for accuracy against known test results.,2010,0, 1962,"Variable-Structure Multiple-Model Approach to Fault Detection, Identification, and Estimation","A scheme is proposed to detect, identify, and estimate failures, including abrupt total, partial, and multiple failures, in a dynamic system. The new approach, named IM3L, is developed based on variable-structure multiple-model estimation, which allows performance to be improved by online adaptation. It uses an interacting multiple model estimator for fault detection and identification but the maximum likelihood estimator for estimating the extent of failure. It provides an effective and integrated framework for fault detection, identification, and state estimation. For two aircraft examples, the proposed approach is evaluated and compared with hierarchical multiple-model approaches and a widely used single-model residual-based generalized likelihood ratio approach in terms of detection and estimation performance. The results show that the IM3L provides not only fast detection and proper identification, but also good estimation of the failure extent as well as robust state estimation.",2008,0, 1963,Chiller Unit Fault Detection and Diagnosis Based on Fuzzy Inference System,"A series of adverse effects will occur after chiller units develop faults. Chiller units are strongly non-linear, long-time-delay systems, so fault detection and diagnosis (FDD) is limited if regular fault judgment methods are adopted. A fuzzy inference system (FIS) has an outstanding control effect because it is close to human reasoning. Moreover, the result of an FIS is easy for operators to understand. Firstly, a fuzzy inference system of the chiller unit was modelled in the paper, and then the FIS was trained and checked by the operation data obtained from a real building chiller unit under normal and unhealthy conditions respectively. The results indicated that chiller unit FDD based on the FIS is feasible and prompt. Furthermore, this method is simple and amenable to automation.",2006,0,5520 1964,Intermittent faults and effects on reliability of integrated circuits,"A significant amount of research has been aimed at analyzing the effects of high energy particles on semiconductor devices. However, less attention has been given to intermittent faults. Field-collected data and failure analysis results presented in this paper clearly show intermittent faults are a major source of errors in modern integrated circuits. The root cause of these faults ranges from manufacturing residuals to oxide breakdown. Burstiness and high error rates are specific manifestations of the intermittent faults.
They may be activated and deactivated by voltage, frequency, and operating temperature variations. The aggressive scaling of semiconductor devices and the higher circuit complexity are expected to increase the likelihood of occurrence of intermittent faults, despite the extensive use of fault avoidance techniques. Herein we discuss the effectiveness of several fault-tolerant approaches, taking into consideration the specifics of the errors generated by intermittent faults. Several solutions, previously proposed for handling particle-induced soft errors, are exclusively based on software and too slow for handling large bursts of errors. As a result, hardware-implemented fault-tolerant techniques, such as error detecting and correcting codes, self-checking, and hardware-implemented instruction retry, are necessary for mitigating the impact of intermittent faults, both in the case of microprocessors and other complex integrated circuits.",2008,0, 1965,Fault tolerance evaluation using two software based fault injection methods,"A silicon-independent C-based model of the TTP/C protocol was implemented within the EU-funded project FIT. The C-based model is integrated in the C-Sim simulation environment. The main objective of this work is to verify whether the simulation model of the TTP/C protocol behaves in the presence of faults in the same way as the existing hardware prototype implementation. Thus, the experimental results of the software-implemented fault injection applied in the simulation model and in the hardware implementation of the TTP/C network have been compared. Fault injection experiments in both the hardware and the simulation model are performed using the same configuration setup, and the same fault injection input parameters (fault injection location, fault type and the fault injection time). The end result comparison has shown a conformance of 96.30%, while the cause of the different results was due to the hardware-specific implementation of the built-in-self-test error detection mechanisms.",2002,0, 1966,FE-based equivalent Circuits for simulating transformer internal Faults,"A simple and efficient model is proposed to detect transformer internal faults. The winding's structure is precisely simulated for FEM. It is easy to use for a transformer with internal turn-ground and turn-turn faults. The energy perturbation method is employed to calculate the equivalent circuit parameters of the transformer. With these parameters, state equations can be established to study the relationship among the terminal currents, flux and the fault locations. Tests on a transformer reveal that the method is reasonable and effective",2006,0, 1967,Design and testing of 230 V inductive type of Superconducting Fault Current Limiter with an open core,"A single-phase, 230 V Superconducting Fault Current Limiter using two Bi2223 HTS tubes with a total critical current of 2.5 kA, situated in a vacuum-insulated cryostat, is described in this paper. We designed and manufactured the inductive SFCL with an open core as a core-shielded type and acquired the optimal design parameters by using the Finite Element Method. We tested the limiter performance at the liquid nitrogen temperature of 77 K.
We proved that the performance of a properly designed limiter with an open core could be comparable to that of a limiter with a closed core.",2005,0, 1968,Biomechanical evaluation of unilateral maxillary defect restoration based on modularized finite element model of normal human skull,"A standardized and modularized finite element model of a normal human skull is established to simulate the unilateral defective pattern of the maxilla and the stress distribution of the craniofacial skeleton with repair, subject to the bite force. Based on the model, the stress evaluation of autologous bone grafts and zygomatic implants is performed to optimize the repair method and to rebuild the occlusal function",2005,0, 1969,Unknown Fault Diagnosis for Nonlinear Hybrid Systems Using Strong State Tracking Particle Filter,"A strong state tracking particle filter (SST-PF) is put forward for unknown fault diagnosis of hybrid systems. The SST-PF overcomes the problem of sample impoverishment in tracking the state of a nonlinear hybrid system by setting permanent transition probabilities from one mode to another. Meanwhile, a threshold logic of the normalization factor based on statistics is built to detect unknown faults, which is more accurate and reasonable for tiny mode differences of hybrid systems. Simulation experiments are carried out to analyze the effects of the SST-PF, and it is shown that our algorithm has strong tracking ability for states and good detection ability for both known and unknown faults.",2010,0, 1970,Evolutionary algorithm EPSO helping doubly-fed induction generators in ride-through-fault,"A tuning process of the PI (proportional-integral) controller gains of a doubly-fed induction generator's (DFIG) rotor side converter is described in this work. The purpose is to tune PI controllers to help the DFIG survive network faults, avoiding being tripped off by under-voltage relays. The ride-through-fault capability of DFIGs improves system reliability and allows them to participate in the control and stabilization of a power system following system disturbances. The robust tuning of the DFIG rotor side converter's PI controllers may avoid undesired disconnections from the grid by, for instance, preventing over-currents in its variable frequency AC/DC/AC converter. This work presents an evolutionary particle swarm optimization-based (EPSO) approach to this tuning, with the aim of helping to limit the line-to-line voltage dip at the DFIG's terminals after a short-circuit, in order to avoid its tripping off. The EPSO-based algorithm developed is validated on a typical Portuguese 15 kV distribution network with the integration of a DFIG, using the transient electromagnetic software package PSCAD/EMTDC.",2009,0, 1971,Distribution incipient faults and abnormal events: case studies from recorded field data,"A typical distribution circuit consists of thousands of individual components, from transformers to switches to insulators. Failure of a single component can cause service quality and reliability problems for the entire circuit and even adjacent circuits. Under the sponsorship of the Electric Power Research Institute (EPRI), and with the cooperation of EPRI utility members, researchers at Texas A&M University have put in place an advanced, multi-site monitoring system. This system instruments dozens of circuits at multiple utility company substations across the United States and Canada.
Extensive data from this multi-site monitoring system have documented numerous examples of incipient fault behavior preceding component failures.",2004,0, 1972,A vector set partitioning noisy channel image coder with unequal error protection,"A vector enhancement of Said and Pearlman's set partitioning in hierarchical trees (SPIHT) methodology, named VSPIHT, has been proposed for embedded wavelet image compression. A major advantage of vector-based embedded coding with fixed-length VQs over scalar embedded coding is its superior robustness to noise. We show that vector set partitioning can effectively alter the balance of bits in the bit stream so that significantly fewer critical bits carrying significant information are transmitted, thereby improving inherent noise resilience. For low-noise channels, the critical bits are protected, while the degradation in reconstruction quality caused by errors in noncritical quantization information can be reduced by appropriate VQ indexing, or by designing channel-optimized VQs for the successive refinement systems. For very noisy channels, unequal error protection of the critical and noncritical bits with either block codes or convolutional codes is used. Additionally, the error protection for the critical bits is changed from pass to pass. A buffering mechanism is used to produce an unequally protected bit stream without sacrificing the embedding property. Extensive simulation results are presented for noisy channels, including bursty channels.",2000,0, 1973,Receiver driven layered multicast with layer-aware forward error correction,A wide range of different capabilities and connection qualities typically characterizes receivers of mobile television services. Receiver driven layered multicast (RDLM) offers an efficient way of serving such different capabilities over a broadcast channel. Scalable video coding (SVC) allows for the transmission of multiple video qualities within one media stream. Using SVC generates a video bit stream with various inter-layer dependencies due to references between the layers. This work proposes a layer-aware forward error correction (L-FEC) approach in combination with SVC. The L-FEC increases robustness of the more important layers by generating protection across layers following existing dependencies of the media stream. The L-FEC is integrated as an extension of a Raptor FEC implementation in a DVB-H broadcast system. It is shown by experimental results that L-FEC outperforms traditional UEP protection schemes.,2008,0, 1974,Behavioral modular description of fault tolerant distributed systems with AADL Behavioral Annex,"AADL is an architecture description language intended for model-based engineering of high-integrity distributed systems. The AADL Behavior Annex (AADL-BA) is an extension allowing the refinement of behavioral aspects described through an AADL architectural description. When implementing Distributed Real-time Embedded (DRE) systems, fault tolerance concerns are integrated by applying replication patterns. We considered a simplified design of the primary backup replication pattern as a running example to analyze the modeling capabilities of AADL and its annex.
Our contribution lies in the identification of the drawbacks and benefits of this modeling language for accurate description of the synchronization mechanisms integrated in this example.",2010,0, 1975,Investigation of fault tolerance of direct torque control in induction motor drives,"AC drives based on direct torque control (DTC) of induction machines are known for their high dynamic performance, obtained with very simple control schemes. Many studies have been performed on ASIC or FPGA DTC implementations. In this paper, we investigate the tolerance of such drives to sensor defects when the control algorithm is to be implemented in an FPGA. The authors especially focus on the influence of the FPGA implementation design on the DTC fault tolerance. Simulations are carried out with the System Generator (SG) toolbox working in the MATLAB/SIMULINK environment. Results are presented and discussed to evaluate the DTC operating under the considered faults.",2004,0, 1976,On methods of multi-beam data error correction,"According to an analysis of the error sources of the multibeam sonar system, corresponding methods are applied in different error correction processes. The method of filtering is adopted to reduce sounding noise. For the calibration error, recalculation is adopted for correction. As to the sound speed profile error, marginal beams are deleted and the Equivalent Sound Velocity Profile method is introduced to diminish the effect of the error. In this paper the method of using the single-beam data as a reference to correct the integrated errors of the multibeam data is detailed. All the methods have been applied in the 908 projects of the State Oceanic Administration and achieved good results.",2010,0, 1977,Soft error optimization of standard cell circuits based on gate sizing and multi-objective genetic algorithm,"A radiation hardening technique based on gate sizing and a multi-objective genetic algorithm (MOGA) is developed to optimize the soft error tolerance of standard cell circuits. Soft error rate (SER), chip area and longest path delay are selected as the optimization goals, and fast fitness evaluation algorithms for the three goals are developed and embedded into the MOGA. All three goals are optimized simultaneously by optimally sizing the gates in the circuit, which is a complex NP-Complete problem resolved by the MOGA through exploring the global design space of the circuit. A syntax analysis technique is also employed so that the proposed framework can optimize not only pure combinational logic circuits but also the combinational parts of sequential logic circuits. Optimization experiments carried out on ISCAS'85 and ISCAS'89 standard benchmark circuits show that the proposed optimization algorithm can decrease the SER by 74.25% with very limited delay overhead (0.28%). Furthermore, the algorithm can also reduce the area for most of the circuits under test by an average of 5.23%. The proposed technique is proved to be better than other works in delay and area overhead and suitable for directing the design of soft-error-tolerant integrated circuits in high-reliability realms.",2009,0, 1978,A Radiation Hardened by Design Register File With Lightweight Error Detection and Correction,"A radiation hardened by design 32×36 b register file with error detection and correction (EDAC) capability is presented.
The LEDAC scheme is described and its impact on memory efficiency and speed is quantified. The register file has been tested to be functional on a foundry 0.13 μm bulk CMOS process with a measured speed over 1 GHz at VDD=1.5 V. The LEDAC scheme is implemented in an external FPGA. Accelerated heavy ion testing results are also described. The experimentally measured RHBD register file SEE behavior is examined, and the proposed LEDAC scheme is shown to alleviate all soft errors in accelerated testing.",2007,0, 1979,An efficient technique for error-free algebraic-integer encoding for high performance implementation of the DCT and IDCT,"A recently introduced algebraic integer encoding scheme allows low-complexity, virtually error-free computation of the DCT and IDCT. Efficiencies can be introduced into this method, but at the expense of some increase in error. In this paper, a modification to the encoding scheme is introduced for specific architectures which provides increased implementation efficiency, but with no sacrifice in accuracy. We provide a theoretical study of this new approach and illustrate the technique using selected DCT and IDCT algorithms",2001,0, 1980,Refined Spatial Error Concealment with Directional Entropy,"A refined error concealment method for intra frames in H.264 is proposed in this work. The directional entropy of neighboring edges is used to classify the content of the lost block. Some techniques aimed at shortening the computation time without degrading the quality of the reconstructed video are presented to implement multi-directional interpolation. As for blocks with high texture, which neither bilinear nor multi-directional interpolation alone recovers well, we integrate the two methods to get the final result. Experimental results demonstrate the efficiency and better performance of the proposed method as compared to other interpolation-based methods.",2009,0, 1981,Managing urban network fault levels - A role for a resistive superconducting fault current limiter?,"A resistive superconducting fault current limiter (SCFCL) based on MnB2 has been investigated as a means of reducing network fault level. The deployment strategy of this device has been discussed, and its performance as a fault current limiter has been analysed with the aid of network simulation results. It has been demonstrated that a resistive SCFCL can reduce all three components of fault currents: peak make, peak break and rms break. In turn, reduced peak make leads to reduced electromagnetic forces on conductors carrying fault current, reduced torques on the generator and prime mover, and less damage at the point of fault.",2009,0, 1982,Development and research on fault diagnosis system of solar power tower plants,"According to the system configuration and operating characteristics of a solar power tower (SPT) plant under construction in China, a fault diagnosis system (FDS) was researched and developed in this paper. Furthermore, an evaluation system of fault grade was established by the method of fuzzy comprehensive evaluation. In this FDS, the fault diagnosis structure was designed to adopt the expert system as the primary method and the radial basis function (RBF) neural network as the assistant. The monitoring index of the diagnosis object was built into the expert system to set the fault symptom threshold and represent the fault symptoms quantitatively, using the comprehensive methods of expert knowledge, fuzzy mathematics, low-probability identification and so on.
The neural network model is established based on the structure of RBF multi-neural subnets. According to the analysis and verification results of a fault case, the structure design is reasonable and the diagnosis methods are feasible in this FDS. Moreover, faults could be accurately diagnosed and the fault grade evaluated reliably, giving the system great practical value.",2009,0, 1983,Design of fault diagnosis system for 3D Laser Scanning Machine based on internet,"To address the weakness of the 3D Laser Scanning Machine in fault diagnosis, a new design approach for fault diagnosis has been developed. The new system consists of three layers: the local fault diagnosis, the fault service centre, and multi-user cooperative diagnosis. The local fault diagnosis system, which is embedded in the existing CNC system, offers on-line and off-line diagnosis, fault compensation, and remote communication services. With the support of the fault diagnosis database, the local fault diagnosis system chooses two agents that supply the user with different views on the problem as its intelligent analyzer to retrieve the optimal checkpoints. In addition, it builds the local fault diagnosis engine as a web service to facilitate users in sharing their diagnosis resources with others. The fault service centre, which is supported by the customer service database, is designed to offer advanced suggestions to customers when their local fault diagnosis system fails to remove the malfunctions independently. It mainly provides four functions: fault diagnosis BBS, web-based retrieval, customer registration, and remote communication. It also adopts two agents to deal with web-based retrieval. The remote communication module, supported by an Audio/Video guide, makes remote diagnosis effective. In order to acquire all of the users' diagnosis resources, the customer registration module registers all users' web services in the registry; therefore the fault service centre can bind and execute the customers' web services when they are confused by a certain difficult problem. Finally, the key techniques for building the system are introduced, and the feasibility of the system is verified.",2006,0, 1984,Fault Classification and Faulted-Phase Selection Based on the Initial Current Traveling Wave,"Accurate fault classification and swift faulted-phase selection are the bases of protection for transmission systems. This paper presents an algorithm for fault classification and faulted-phase selection based on the initial current traveling wave. The characteristics of various faults are investigated on the basis of the Karenbauer transform. The criteria of fault classification and faulted-phase selection are put forward. In order to extract the traveling wave from postfault signals and to construct the algorithm, wavelet transform technology is adopted. The simulations based on the electromagnetic transients program have been done to verify the performance of the proposed algorithm.",2009,0, 1985,Extraction of Tectonic Faults of Longmen Mountain Based on DEM,"According to the analysis of the tectonic characteristics of the thrust belt in the Longmen Mountain area, the present study aims to build a methodology to extract linear fault structures in the study area. The methodology is an approach which includes automatic extraction of major faults based on combined calculation of landform factors from the SRTM-DEM and revision of the automatic extraction result according to remote sensing images and geologic data.
These landform factors include elevation, slope, aspect and variation of aspect, slope of slope (SOS), and slope of aspect (SOA). The compound method, which combines spatial analysis techniques based on the SRTM-DEM, interpretation of remote sensing images, and geoscience research, provides strong technical support for quantitative morphotectonic research.",2009,0, 1986,Analysis on Interruption and Plane Layout of Shear Wall for Frame-Shear Wall Structure with Top Fault Shear Wall,"According to the force-deformation characteristics of the frame-shear wall structure, the calculation and analysis model of the ""Style Box"" type layout of the frame-shear wall structure is advanced, based on the basic and simple arrangement of the frame-shear wall structure. The different plane and vertical layouts with interrupted shear walls of the frame-shear wall structure with top fault shear walls are discussed, with the plane layout and the vertical interruptible positions considered at the same time. Based on the main parameters of the frame-shear wall structure with top fault shear walls in different configurations, the importance of the plane-layout location of the shear walls to the overall performance of the frame-shear wall structure with top fault shear walls is put forward, in addition to the lateral stiffness of the shear wall corresponding to the shear wall interruption ratio.",2009,0, 1987,Wavelet-based fault diagnosis scheme for power system relay protection,"According to the real-time requirements of relay protection and the wave characteristics of fault phase currents, a new fault diagnosis scheme is introduced in this paper. It is based on the wavelet transform, particularly the multiresolution analysis technique. This work focuses on how to use the wavelet transform to locate fault abrupt points in order to detect whether a fault has occurred; meanwhile, microprocessor-based relay protection is proposed. The effectiveness of the proposed scheme was verified in the experiment. The simulation results show that this method has a simple algorithm, high location accuracy, and good anti-interference capability, and can effectively improve the sensitivity and selectivity of relay protection. Furthermore, the prospects of wavelet analysis in power systems, especially in relay protection, are assessed.",2008,0, 1988,Fault Location System for Safety Monitoring Based on CAN Bus,"Aiming at existing problems of traditional long distance transmission, the fault location system for safety monitoring based on CAN bus was studied. The research indicates that adopting CAN bus as the information carrier can extend the communication distance and reduce the wiring of the fault location system. The fault location system based on CAN bus is suitable for application in long distance safety monitoring.",2009,0, 1989,The robot remote monitoring and fault diagnosis system based on wireless network,"Aiming at the characteristics of the 6-DOF serial robot, a remote monitoring and fault diagnosis system was designed and realized.
The principle of the system and its architecture are introduced first; then a server/client mode is designed to achieve remote control and intelligent fault diagnosis, and the work cycle and main functions of the server and client systems are given; finally, through client operations, remote motion control, real-time transmission, and display are accomplished.",2010,0, 1990,A Fast Fault Location Algorithm Based on Pre-computation for Optical Burst Switching Networks,"Aiming at the disadvantages of existing location algorithms, this paper proposes and evaluates an effective fault location algorithm based on pre-computation for optical burst switching networks. In order to minimize the monitoring cost, we introduce monitoring-cycles by which the network is divided into a number of monitoring domains. Each monitoring domain has a monitor; when faults occur, fault codes are generated. According to the fault codes, a binary tree search achieves pre-computed fault location in the OBS network. Examples prove that the algorithm can locate not only single faults but also multiple faults.",2009,0, 1991,Three-dimensional finite element analysis of the prosthetic rehabilitation for unilateral maxillary defect after free flap reconstruction,"Aims: The aim of this study was to evaluate the stress on the prostheses and abutment teeth in a unilateral maxillary defect with free flap reconstruction. Methods and Materials: A three-dimensional finite element model of the human unilateral maxillary defect was constructed. Prostheses after free flap reconstruction were established. The modelling and analytical processes were performed using the ANSYS technologies. Results: The stress concentration was located at the junction of the palatal and the free flap. The highest stress value was 49.656 MPa. Stress distribution was uniform in the remaining posterior teeth. As for the free flap, stress concentrations were also found at the area adjacent to the palatal and the posterior area of the free flap, with a maximum stress value of 13.03 MPa. Conclusions: Although the immediate surgical reconstruction for unilateral maxillary defect has some advantages, it might cause instability of the prostheses which may affect the overall prosthetic outcome from the biomechanical point of view. Further research should be done to perfect this kind of treatment, leading to a successful surgical and prosthodontic rehabilitation.",2010,0, 1992,An Autofocus Approach for Residual Motion Errors With Application to Airborne Repeat-Pass SAR Interferometry,"Airborne repeat-pass SAR systems are very sensitive to subwavelength deviations from the reference track. To enable repeat-pass interferometry, a high-precision navigation system is needed. Due to the limited accuracy of such systems, deviations in the order of centimeters remain between the real track and the processed one, causing mainly undesirable phase undulations and misregistration in the interferograms, referred to as residual motion errors. Up to now, only interferometric approaches, such as multisquint, have been used to compensate for such residual errors. In this paper, we present for the first time the use of the autofocus technique for residual motion errors in the repeat-pass interferometric context. A very robust autofocus technique has to be used to cope with the demands of the repeat-pass applications.
We propose a new robust autofocus algorithm based on weighted least squares phase estimation and the phase curvature autofocus (PCA) extended to the range-dependent case. We call this new algorithm weighted PCA. Different from multisquint, the autofocus approach has the advantage of being able to estimate motion deviations independently, leading to better focused data and correct impulse-response positioning. As a consequence, better coherence and interferometric-phase accuracy are achieved. Repeat-pass interferometry based only on image processing gains in robustness and reliability, since its performance does not deteriorate with time decorrelation and no assumptions need to be made on the interferometric phase. Repeat-pass data of the E-SAR system of the German Aerospace Center (DLR) are used to demonstrate the performance of the proposed approach.",2008,0, 1993,Diagnostics using airborne survey and fault location systems as the means to increase OHTL reliability,"Airborne surveys for diagnostics of overhead transmission lines (OHTL) are quite relevant for power utilities of industrialized countries, where power networks span thousands or millions of kilometers, a considerable part of which has reached a lifetime of 30-50 years or more. Airborne survey based on aerial scanning, as a method of OHTL condition monitoring, is an efficient instrument for detecting deviations of line elements from regular condition, and serves as a convenient facility for network utility inventory. The advantage of aerial scanning is a combination of high survey accuracy with high work productivity. Processing of digital survey data yields the essential data required for OHTL reliability analysis: precise span lengths, sag and tension values, conductor clearance to ground, crossed and adjacent objects, clearance to vegetation, and distance to nearby trees that may damage the OHTL if fallen. For analysis of OHTL reliability, existing software packages allow modeling the condition of separate elements and the entire line under extreme ice and wind loads, and checking the safety of conductor clearance to ground and crossed lines under significant conductor overheating caused by the need to ensure transmission under long-term or short-term (but considerable) load increases. Collection, storage, systematization, and practical use of survey data for developing and implementing management decisions and for rational usage of network resources are best accomplished with a specialized information system. The information system integrates OHTL monitoring data, modules for recording and analyzing the technical condition of separate components and the entire line, and 2D and 3D representation of objects with high georeference accuracy. One negative example of insufficient OHTL reliability is a fault caused by lightning, mechanical damage to conductors or insulators, etc. The duration of an OHTL malfunction and the timing and success of emergency repair depend greatly on the accuracy of fault location (FL) on the line. An advanced FL system allows locating a fault with an accuracy of 5 to 150 m.
Combining aerial scanning data with visualization of the line section detected by the FL system essentially improves the efficiency of service technology and of emergency recovery of the electric network by maintenance crews, and hence increases the system reliability of power facilities.",2005,0, 1994,Verification-Guided Soft Error Resilience,"Algorithmic techniques for formal verification can be used not just for bug-finding, but also to estimate vulnerability to reliability problems and to reduce overheads of circuit mechanisms for error resilience. We demonstrate this idea of verification-guided error resilience in the context of soft errors in latches. We show how model checking can be used to identify latches in a circuit that must be protected in order that the circuit satisfies a formal specification. Experimental results on a Verilog implementation of the ESA SpaceWire communication protocol indicate that the power overhead of soft error protection can be reduced by a factor of 4.35 by using our approach rather than protecting all latches",2007,0, 1995,Fault Tolerant Active Rings for Structured Peer-to-Peer Overlays,"Algorithms by which peers join and leave structured overlay networks can be classified as passive or active. Passive topology maintenance relies on periodic background repair of neighbor pointers. When a node passively leaves the overlay, subsequent lookups may fail silently. Active maintenance has been proven only for fault-free networks. We develop an active topology maintenance algorithm for practical, fault-prone networks. Unlike prior work, it a) maintains ring continuity during normal topology changes and b) guarantees consistency and progress in the presence of faults. The latter property is inherited by a novel extension of the Paxos commit algorithm. The topology maintenance algorithm is formally developed using the B method and its event-driven extensions for dynamic systems. Messaging and storage overheads are quantified",2005,0, 1996,OBDD-based evaluation of reliability and importance measures for multistate systems subject to imperfect fault coverage,"Algorithms for evaluating the reliability of a complex system such as a multistate fault-tolerant computer system have become more important. They are designed to obtain the complete results quickly and accurately even when there exist a number of dependencies such as shared loads (reconfiguration), degradation, and common-cause failures. This paper presents an efficient method based on the ordered binary decision diagram (OBDD) for evaluating the multistate system reliability and the Griffith's importance measures, which can be regarded as the importance of a system-component state of a multistate system subject to imperfect fault-coverage with various performance requirements. This method, combined with the conditional probability methods, can handle the dependencies among the combinatorial performance requirements of system modules and find solutions for the multistate imperfect coverage model. The main advantage of the method is that its time complexity is equivalent to that of the methods for the perfect coverage model and it is very helpful for the optimal design of a multistate fault-tolerant system.",2005,0, 1997,Automated PET/CT Cardiac Registration for Accurate Attenuation Correction,"Alignment of PET and CT images is essential for accurate measurements of cardiac perfusion.
Misalignment can produce an erroneous attenuation map that projects lung attenuation parameters onto the heart wall, thereby underestimating the attenuation, and creating artifactual areas of hypoperfusion which may be misinterpreted as myocardial ischemia or infarction. The main cause of misregistration between CT and PET images is the respiratory motion of the patient. In this paper, an automated cardiac software alignment method is proposed to overcome this motion artifact. In this approach, the heart is extracted from the PET data through windowing and c-means clustering, and the CT scans are segmented to obtain the corresponding heart geometry. From this processed data, the heart geometries are registered, and a motion correction vector is calculated such that the alignment error of the two modalities is minimized. Results of this optimization procedure have been evaluated on 24 patient PET/CT cardiac data sets producing accurate cardiac alignment which eliminated the PET/CT misregistration attenuation correction artifact",2006,0, 1998,Lightweight fault tolerance in CORBA,"Although fault-tolerant implementations of CORBA have been available for several years, the standard specification of fault-tolerant CORBA (FT-CORBA) has been finalized only recently. This specification defines simple, minimal mechanisms for regular clients to deal with fault-tolerant servers, as well as a wide spectrum of services and API for implementing replicated, fault-tolerant servers. While extremely powerful, these advanced server-side mechanisms come with significant complexity both for the FT-CORBA implementor and the application developer. This paper proposes an alternative, lightweight approach to fault tolerance for applications that do not have strong requirements in terms of data consistency. This approach builds on the client-side mechanisms of FT-CORBA and takes advantage of semantic knowledge of the server objects to mediate distributed interactions in an efficient and fault-tolerant manner. Although the approach proposed in this paper is not applicable to every application, it can be deployed in existing systems to transparently increase their reliability and availability without requiring any re-engineering",2001,0, 1999,Investigation of practical problems for digital fault location algorithms based on EMTP simulation,"Although most digital fault location algorithms achieve a high percentage of precision during simulation tests, all of them encounter errors during experimental testing. This is mainly due to the simplified assumptions made during the development of these algorithms, as well as the differences between the models employed in the simulations and actual circumstances in real fields. In this paper, the simulation of different factors that effectively influence the performance of fault location algorithms in real systems is examined employing ATP-EMTP simulation, including mutual coupling, parameter distribution, line configuration, parameter variations, hardware errors and fault resistance. Also, the behavior of fault location algorithms is evaluated under these factors. Moreover, the impacts of some simplified assumptions on the accuracy of their relevant algorithms are discussed. This work is accomplished by employing most of the published digital fault location algorithms, including one- and two-terminal data algorithms.
It presents a comprehensive study of the performance of these algorithms.",2002,0, 2000,Instruction-based delay fault self-testing of pipelined processor cores,"Although nearly all modern processors use a pipelined architecture, no method has yet been proposed in the literature to model these for the purpose of test generation. The paper proposes a graph theoretic model of pipelined processors and develops a systematic approach for delay fault testing of such processor cores using the processor instruction set. Our methodology consists of using a graph model of the pipelined processor, extraction of architectural constraints, classification of paths, and generation of tests using a constrained ATPG. These tests are then converted to a test program, a sequence of instructions, for testing the processor. Thus, the tests generated by our method can be applied in a functional mode of operation and can also be used for self-test. We applied our method to two example processors, namely a 16 bit five stage VPRO pipelined processor and a 32 bit pipelined DLX processor, to demonstrate the effectiveness of our methodology.",2005,0, 2001,Phoenix: Detecting and Recovering from Permanent Processor Design Bugs with Programmable Hardware,"Although processor design verification consumes ever-increasing resources, many design defects still slip into production silicon. In a few cases, such bugs have caused expensive chip recalls. To truly improve productivity, hardware bugs should be handled like system software ones, with vendors periodically releasing patches to fix hardware in the field. Based on an analysis of serious design defects in current AMD, Intel, IBM, and Motorola processors, this paper proposes and evaluates Phoenix - novel field-programmable on-chip hardware that detects and recovers from design defects. Phoenix taps key logic signals and, based on downloaded defect signatures, combines the signals into conditions that flag defects. On defect detection, Phoenix flushes the pipeline and either retries or invokes a customized recovery handler. Phoenix induces negligible slowdown, while adding only 0.05% area and 0.48% wire overheads. Phoenix detects all the serious defects that are triggered by concurrent control signals. Moreover, it recovers from most of them, and simplifies recovery for the rest. Finally, we present an algorithm to automatically size Phoenix for new processors",2006,0, 2002,A fault model for subtype inheritance and polymorphism,"Although program faults are widely studied, there are many aspects of faults that we still do not understand, particularly about OO software. In addition to the simple fact that one important goal during testing is to cause failures and thereby detect faults, a full understanding of the characteristics of faults is crucial to several research areas. The power that inheritance and polymorphism brings to the expressiveness of programming languages also brings a number of new anomalies and fault types. This paper presents a model for the appearance and realization of OO faults and defines and discusses specific categories of inheritance and polymorphic faults.
The model and categories can be used to support empirical investigations of object-oriented testing techniques, to inspire further research into object-oriented testing and analysis, and to help improve design and development of object-oriented software.",2001,0, 2003,Fault tolerance design for computers used in humanoid robots,"Although the performance of humanoid robots is rapidly improving, very few dependability schemes suitable for humanoid robots have been presented thus far. In the future, various tasks ranging from daily chores to safety-related tasks will be carried out by individual humanoid robots. If the importance of the tasks is different, the required dependability will also vary accordingly. Therefore, for mobile humanoid robots operating under power constraints, fault tolerance that dynamically changes based on the importance of the tasks is desirable because fault-tolerant designs involving hardware redundancy are power intensive. This paper proposes a task-based dynamic fault tolerance scheme and a duplex computer system that is characterized by performing safety failover when the standby computer unit is in a cold standby state. Moreover, this paper presents the hardware design of the redundancy controller in which the safety failover subsystem is implemented.",2007,0, 2004,Task-based dynamic fault tolerance and its safety considerations in humanoid robot applications,"Although the performance of humanoid robots is rapidly improving, very few dependability schemes suitable for humanoid robots have been presented thus far. In the future, various tasks ranging from daily chores to safety-related tasks will be carried out by individual humanoid robots. If the characteristics and importance of the tasks are different, the required fault-tolerant capabilities will also vary accordingly. Therefore, for mobile humanoid robots operating under power constraints, it is desirable to develop a dynamic fault tolerance capable of reducing power consumption because fault-tolerant designs involving hardware redundancy are power intensive. This paper proposes a task-based dynamic fault tolerance scheme based on the hardware redundancy of the computer unit, and describes the implementation of the proposed scheme by considering the safety in humanoid robot applications.",2007,0, 2005,Fault-tolerant techniques for Ambient Intelligent distributed systems,"Ambient Intelligent Systems provide an unexplored hardware platform for executing distributed applications under strict energy constraints. These systems must respond quickly to changes in user behavior or environmental conditions and must provide high availability and fault-tolerance under given quality constraints. These systems will necessitate fault-tolerance to be built into applications. One way to provide such fault-tolerance is to employ the use of redundancy. Hundreds of computational devices will be available in deeply networked ambient intelligent systems, providing opportunities to exploit node redundancy to increase application lifetime or improve quality of results if it drops below a threshold. Pre-copying with remote execution is proposed as a novel, alternative technique of code migration to enhance system lifetime for ambient intelligent systems. Self-management of the system is considered in two different scenarios: applications that tolerate graceful quality degradation and applications with single-point failures. 
The proposed technique can be part of a design methodology for prolonging the lifetime of a wide range of applications under various types of faults, despite scarce energy resources.",2003,0, 2006,Fault classification and fault distance location of double circuit transmission lines for phase to phase faults using only one terminal data,"An accurate fault classification algorithm for double end fed parallel transmission lines based on application of artificial neural networks is presented in this paper. The proposed method uses the voltage and current available at only the local end of the line. This method is virtually independent of the effects of remote end infeed and is insensitive to the variation of fault inception angle and fault location. The simulation results show that phase-to-phase faults can be correctly detected, classified and located within one cycle after the inception of the fault. A large number of fault simulations using MATLAB 7.01 have demonstrated the accuracy and effectiveness of the proposed algorithm. The proposed scheme allows protection engineers to increase the reach setting, i.e., a greater portion of the line length can be protected as compared to conventional techniques. The technique requires neither a communication link to retrieve the remote end data nor zero sequence current compensation for healthy phases.",2009,0, 2007,Estimation of Defects Based on Defect Decay Model: ED^{3}M,"An accurate prediction of the number of defects in a software product during system testing contributes not only to the management of the system testing process but also to the estimation of the product's required maintenance. Here, a new approach called ED3M is presented that computes an estimate of the total number of defects in an ongoing testing process. ED3M is based on estimation theory. Unlike many existing approaches the technique presented here does not depend on historical data from previous projects or any assumptions about the requirements and/or testers' productivity. It is a completely automated approach that relies only on the data collected during an ongoing testing process. This is a key advantage of the ED3M approach, as it makes it widely applicable in different testing environments. Here, the ED3M approach has been evaluated using five data sets from large industrial projects and two data sets from the literature. In addition, a performance analysis has been conducted using simulated data sets to explore its behavior using different models for the input data. The results are very promising; they indicate the ED3M approach provides accurate estimates with as fast or better convergence time in comparison to well-known alternative techniques, while only using defect data as the input.",2008,0, 2008,On-load tap changer diagnosis - an off-line method for detecting degradation and defects: Part 1,"An advanced procedure for off-line power transformer diagnosis has been presented in this paper. Several different diagnostic measurements can be made using only one device. Examples and case studies were discussed to show the variety of defects and degradation mechanisms that can be detected using the procedure. In particular, it was found that the most common OLTC defects can be detected. The measurements are very sensitive to degradation due to long-term aging and to OLTC maintenance errors. A large population of OLTCs was tested, and a substantial number showed contact degradation.
Finally, several areas that require careful attention were discussed, i.e., test current amplitude, circuit resistance, secondary short circuiting, and winding configuration.",2010,0, 2009,A Distributed Diagnosis Approach to Fault Tolerant Multi-rate Real-Time Embedded Systems,"Advanced safety-critical control applications such as fly-by-wire and steer-by-wire are being realized as distributed systems comprising many embedded processors, sensors, and actuators interconnected via a communication medium. They have severe cost constraints but demand a high level of safety and performance. Recently, the authors of (Kandasamy et al., 2005) developed a diagnosis approach for a single-rate steer-by-wire system. Motivated by the need for timely diagnosis of faulty actuators and processors in multi-rate fly-by-wire systems, we present a general method to implement failure diagnosis under deadline and resource constraints. The proposed method has been compared with the approach given in (Kandasamy et al., 2005). The diagnostic tasks are executed concurrently with control tasks so that actuators and system processors are diagnosed in a distributed fashion to reach an agreement over fault-free units, thereby isolating the faulty units. The simulation results are presented evaluating the effectiveness of the proposed method under various design constraints.",2007,0, 2010,Enhanced Approaches in Defect Detection and Prevention Strategies in Small and Medium Scale Industries,"Advancement in fundamental engineering aspects of software development enables IT enterprises to develop a more cost effective and better quality product through aptly organized defect detection and prevention strategies. Software inspection has proved to be the most thriving and proficient technique for defect detection and prevention. The work analyzes data obtained for five different projects from a progressive software company of varying software production capabilities. The defect prevention technique involves proactive, reactive and retrospective moves to uncover 70% of defects during inspections and developer unit testing. The validation testing uncovers 29% of defects. Inspection becomes imperative in creating much more ideal software in factories through enhanced methodologies of aided and unaided inspection schemes.",2008,0, 2011,Control Focused Soft Error Detection for Embedded Applications,"Advances in integrated circuits present several key challenges in system reliability as soft errors are expected to increase with successive technology generations. Computing systems must be able to continue functioning in spite of these soft errors, necessitating the development of new methods for self-healing circuits that can detect and recover from these errors. We present an area-efficient control focused soft error detector (CNFSED) capable of nonintrusively detecting soft errors within the execution of a software application without modifications to the software application or the target processor. This soft error detector achieves an error detection rate greater than 90% for control errors and 85% of unmasked errors while incurring minimal area overhead.",2010,0, 2012,Analysis of single-event effects in embedded processors for non-uniform fault tolerant design,"Advances in silicon technology and shrinking of feature sizes to the nanometer scale make the unreliability of nano devices the most important concern of fault-tolerant designs.
Design of reliable and fault-tolerant embedded processors is mostly based on developing techniques that compensate by adding hardware or software redundancy. The recently-proposed redundancy techniques are generally applied uniformly to a system and lead to inefficiencies in terms of performance, power, and area. Non-uniform redundancy requires a quantitative analysis of the system behavior under transient faults. In this paper, we introduce a custom fault injection framework that helps to locate the most vulnerable nodes and components of embedded processors. Our framework is based on exhaustive transient fault injection into candidate nodes which are selected from a user-defined list. Furthermore, the list of nodes containing the microarchitectural state is also defined by the user to validate execution of instructions. Based on the reported results, the most vulnerable nodes, components, and instructions are found and could be used for an effective non-uniform fault-tolerant redundancy technique.",2009,0, 2013,Voltage and Current Patterns for Fault Location in Transmission Lines,"After a severe disturbance due to an insulation failure in a transmission line, the precise fault location is a critical problem for the maintenance crew. In order to avoid further economical and social costs, fault diagnosis has to be performed as soon as possible. Fault diagnosis has been a major area of investigation among power system problems and intelligent system applications. Several approaches have been proposed for solving this problem. This paper advocates the application of neural networks for mapping the relationship between electrical signals and fault locations in transmission lines. The significance of voltages and currents is analysed using steady-state and electromagnetic transient information. Electromagnetic transient information has been extracted using fast Fourier and wavelet transforms. Thus, three ways of feeding the fault location models are compared, i.e., with a steady-state input space and with two transient based input spaces, which are built by the previously mentioned transforms. The tests consider different operating and fault conditions, including different types of fault impedances, fault angles, line loading, equivalent system impedances, and fault locations.",2007,0, 2014,Defect-oriented testing and defective-part-level prediction,"After an integrated circuit (IC) design is complete, but before first silicon arrives from the manufacturing facility, the design team prepares a set of test patterns to isolate defective parts. Applying this test pattern set to every manufactured part reduces the fraction of defective parts erroneously sold to customers as defect-free parts. This fraction is referred to as the defect level (DL). However, many IC manufacturers quote defective part level, which is obtained by multiplying the defect level by one million to give the number of defective parts per million. Ideally, we could accurately estimate the defective part level by analyzing the circuit structure, the applied test-pattern set, and the manufacturing yield. If the expected defective part level exceeded some specified value, then either the test pattern set or (in extreme cases) the design could be modified to achieve adequate quality. Although the IC industry widely accepts stuck-at fault detection as a key test-quality figure of merit, it is nevertheless necessary to detect other defect types seen in real manufacturing environments.
A defective-part-level model combined with a method for choosing test patterns that use site observation can predict defect levels in submicron ICs more accurately than simple stuck-at fault analysis",2001,0, 2015,Dual-Slices Algorithm for Software Fault Localization,"After a software fault is detected by a runtime monitor, fault localization is always very difficult. A new fault localization method based on a dual-slices algorithm is proposed. The algorithm reduces the software fault area by first slicing the faulty trace into segments and then slicing the trace segments based on trace slicing. It mainly includes two steps: first, the faulty run trace is divided into segments by analyzing the differences between the correct run and the faulty run, and only the segments that induce the differences between the dual traces are regarded as the suspicious fault area; second, the suspicious fault area is further sliced by trace slicing to reduce it, and a more accurate fault area is finally obtained. This method can overcome some drawbacks of manual debugging and increase the efficiency of fault localization.",2009,0, 2016,Adaptive routing strategies for fault-tolerant on-chip networks in dynamically reconfigurable systems,"An investigation into an effective and low-complexity adaptive routing strategy based on stochastic principles for an asynchronous network-on-chip platform that includes dynamically reconfigurable computing nodes is presented. The approach is compared with classic deterministic routing and it is shown to have good properties in terms of throughput and excellent fault-tolerance capabilities. The challenge of how to deliver reliability is one of the problems that multiprocessor system architects and manufacturers will face as feature sizes and voltage supplies shrink and deep-submicron effects reduce the ability to carry out deterministic computing. It is likely that a new type of deep-submicron complex multicore systems will emerge which will be required to deliver high performance within strict energy and area budgets and operate over unreliable silicon. Within this context, the paper studies an on-chip communication infrastructure suitable for these systems.",2008,0, 2017,Seeded Fault Testing and In-situ Analysis of Critical Electronic Components in EMA Power Circuitry,"An investigation into the development of feasible detection strategies capturing and trending incipient signs of failure in electronic power and control circuitry of electromechanical actuator (EMA) systems was jointly funded and conducted by Lockheed Martin Aeronautics Company, Parker Aerospace, and Impact Technologies, LLC. The objective of this study was to experimentally evaluate feature-based and efficiency-based prognostic approaches for power drive and control electronics through application of component-level Highly Accelerated Life Testing (HALT) and circuit board-level seeded fault testing. The authors of this paper discuss collaborative work identifying system-critical components through an enhanced failure mode effect and criticality assessment (FMECA++) followed by accelerated aging of these components leading to insertion into the EMA system and analysis of test results.
Component accelerated aging and EMA system testing was performed at Impact's facility with test-system-specific knowledge provided by Lockheed and Parker.",2008,0, 2018,Boron emitters: Defects at the silicon - silicon dioxide interface,"An investigation of defects caused by boron diffusion into silicon is presented, using two techniques to directly compare the defects at an undiffused and lightly boron diffused Si-SiO2 interface. The first technique uses field effect passivation induced by a MOS structure; the second uses Electron Paramagnetic Resonance measurements to determine the concentration of unpassivated Pb centers on <111> oriented surfaces. It is found that additional defects introduced by the boron diffusion account for a relatively small proportion of total recombination at a well passivated <100> interface, but for considerably more at <111> interfaces, where both the defect density and recombination increase by a factor of more than 2. The effect of the addition of LPCVD nitride on top of oxide layers is also explored. We show that exposure of samples to hot phosphoric acid (used to selectively remove silicon nitride) leads to significant changes to the Si-SiO2 interface, so that this treatment cannot be considered noninvasive.",2008,0, 2019,Research on Cylindricity Error Calculation Based on Improved GA,"An objective function is proposed in this paper to evaluate the minimum zone cylindricity error. The error model is optimized by a Genetic Algorithm (GA). The mathematical model can work out the minimum zone solution of the cylindricity error for an arbitrary position in space, and there are no special requirements in choosing measurement points. A test has been given to prove that the optimal approximation solution to the cylinder axis vectors and the minimum zone cylindricity can be worked out by the objective function. The approach can also be extended to solving other form and position errors when the cylinder axis is used as the datum.",2010,0, 2020,On-line fast motor fault diagnostics based on fuzzy neural networks,"An on-line method was developed to improve diagnostic accuracy and speed for analyzing running motors on site. On-line pre-measured data was used as the basis for constructing the membership functions used in a fuzzy neural network (FNN) as well as for network training to reduce the effects of various static factors, such as unbalanced input power and asymmetrical motor alignment, to increase accuracy. The preprocessed data and fuzzy logic were used to find the nonlinear mapping relationships between the data and the conclusions. The FNN was then constructed to carry out motor fault diagnostics, which gives fast, accurate diagnostics. The on-line fast motor fault diagnostics clearly indicate the fault type, location, and severity in running motors. This approach can also be extended to other applications.",2009,0, 2021,Pose measurement and tracking system for motion-correction of unrestrained small animal PET/SPECT imaging,"An optical landmark-based pose measurement and tracking system is under development to provide in-scan animal position data for a new SPECT imaging system for unrestrained laboratory animals. The animal position and orientation data provide motion correction during image reconstruction. This paper describes new developments and progress using landmark markers placed on the animal along with strobed infrared lighting, with improvements in accuracy for the extraction of head feature positions during motion.
A stereo infrared imaging approach acquires images of the markers through a transparent enclosure, segments the markers, corrects for distortion and rejects unwanted reflections. Software estimates intrinsic as well as extrinsic camera calibration parameters and provides a full six degree-of-freedom (DOF) camera-to-camera calibration. A robust stereo point correspondence and 3D measurement calculation based on the fundamental matrix provides the pose at camera frame rates. Experimental testing has been conducted on calibrated fixtures with six DOF measurement capabilities as well as on live laboratory mice. Results show significantly improved accuracy and repeatability of the measurements. The live mouse results have demonstrated that reliable, accurate tracking measurements can be consistently achieved for the full SPECT image acquisition.",2003,0, 2022,Combining Results of Accelerated Radiation Tests and Fault Injections to Predict the Error Rate of an Application Implemented in SRAM-Based FPGAs,An approach combining the SRAM-based field-programmable gate array static cross-section with the results of fault injection campaigns allows predicting the error rate of any implemented application. Experimental results issued from heavy ion tests are compared with predictions to validate the proposed methodology.,2010,0, 2023,A self-organized approach for unsupervised fault detection in multiple systems,"An approach is proposed for automatic fault detection in a population of mechatronic systems. The idea is to employ self-organizing algorithms that produce low-dimensional representations of sensor and actuator values on the vehicles, and compare these low-dimensional representations among the systems. If a representation in one vehicle is found to deviate from, or to be not so similar to, the representations for the majority of the vehicles, then the vehicle is labeled for diagnostics. The presented approach makes use of principal component coding and a measure of distance between linear sub-spaces. The method is successfully demonstrated using simulated data for a commercial vehicle's engine coolant system, and using real data for computer hard drives.",2008,0, 2024,The Rule Extraction of Fault Classification Based on Formal Concept Analysis,"An approach to extract fault classification rules based on formal concept analysis is presented. According to the features of the real-time database of the surveillance system, a method is designed to transfer the records of the real-time database to a formal context and to construct the concept lattice; the association rules and fault classification rules are then extracted. The association rules reflect the relationships among fault phenomena, and the fault classification rules indicate the fault type. Good results are obtained when the approach is applied to the intelligent surveillance system of broadcasting transmitters.",2009,0, 2025,An Automation System for High-Speed Detection of Printed Matter and Defect Recognition,"An automation system for high-speed detection of printed matter and defect recognition is proposed to detect defects in printed matter, such as smudges, doctor streaks, pin holes, character misprints, foreign matter, hazing, and wrinkles. Even tiny defects are revealed by using the high and low illumination angle design and anti-blooming techniques. A new image reference method based on morphological preprocessing eliminates all false defects caused by slight distortion of the printed matter and chromatography mistakes.
The fast object-searching algorithm based on run-length encoding can locate the coordinates of defects and define the shape of the defects. The C/S parallel network structure was used; image data were processed in a distributed manner and quality data are managed centrally. Experimental results verify the speed, reliability and accuracy of the proposed system.",2007,0, 2026,A novel microprocessor-based battery charger circuit with power factor correction,"An effective microcontrolled battery charger circuit that monitors the charging process, avoiding battery damage by overcharge, is described, where a PWM forward topology with power factor correction is employed in order to provide DC/DC conversion. Depending on the battery charge state, which is determined by the microcontroller PIC 16F873, the charging process is modified. In this way, fast charging does not have negative effects on the effective capacity of the battery and on battery cycle-life.",2004,0, 2027,Delaunay-Voronoi Modeling of Power-Ground Planes With Source Port Correction,"An efficient Delaunay-Voronoi modeling of the power-ground planes suitable directly for SPICE compatibility is proposed to deal with the ground bounce noise and decoupling capacitor placement problems for high-speed digital system designs. The model consists of virtual ports and triangular meshes with the lumped circuit elements, in which all the element values can be related to the mesh geometry shape by the analogy between the circuit equations and Maxwell's equations. Since the analogy fails to apply due to the singular fields near the input/output pins, the via effect of driving and sensing ports is not negligible and an analytical expression from the Hankel function is thus presented for the correction term. A simple rule has been investigated for the model with minimum lumped circuit elements to accurately represent the power-ground planes over the frequency range of interests. The full-wave simulation and measurement results verify the good correlations with the proposed models for the impedance responses of regular and defective plane shapes.",2008,0, 2028,"Eight-switch, three-phase rectifier for power factor correction","An eight-switch, three-phase neutral-point switch-clamped rectifier is presented in order to achieve a high input power factor and to draw sinusoidal currents from the AC mains. In the conventional three-phase, neutral-point switch-clamped rectifier, the circuit configuration consists of six power switches with voltage stress vdc and three AC power switches with voltage stress vdc/2. There are only four power switches with a voltage stress of the DC bus voltage and two AC power switches with a voltage stress of half the DC bus voltage in the proposed rectifier. The main functions of the proposed control scheme are to achieve unity input power factor, to draw sinusoidal line currents from the AC source, and to keep the DC link voltage constant. Analysis and a mathematical model of the proposed rectifier are derived. Some simulations and experimental results are provided in order to verify the effectiveness of the proposed control scheme.",2004,0, 2029,Error Compensation and Implementation of Embedded High-Precision Magnetometer,"An embedded tri-axial magnetometer of high precision based on an AMR (Anisotropic Magnetoresistance) sensor was designed and implemented in this paper. To avoid ferromagnetic disturbances to the magnetic sensor, the device separates the sensor from the signal processing terminal.
By theoretically analyzing the main errors of the magnetometer, a compensation model was established, and the characteristic parameters of the model were calculated using the Newton iteration method. Experimental tests were carried out in different locations. The results showed that measurement error was reduced effectively after compensation, which could significantly improve the real-time measurement accuracy of the embedded magnetometer.",2010,0, 2030,The error correction method of planar near field measurements,"An error correction method for original test data in planar near field measurement is discussed in this paper. The mathematical formula used in correcting the original test data is given. Also, results of analogue calculation for the site error are provided. The method given in this paper can be used to compensate for errors caused by the nonlinear response of the test system, the structural error of the scanner, and system drift, so that the final measurement accuracy can be improved.",2000,0, 2031,Automatic Fault Isolation by Cultural Algorithms With Differential Influence,"An evolutionary algorithm with a cultural mechanism of evolution influence, for effectiveness and efficiency higher than classical genetic algorithms, is proposed for industrial fault isolation. Moreover, the evolution influence is based on a differential concept in order to move toward better zones of the solution space by sensing the fitness gradient. The proposed cultural algorithm is designed in order to be portable and easily configurable in different diagnostic applications. On-field results of an industrial application to motor-vehicle fleet remote monitoring and automatic fault isolation of vehicle wear, operating danger, and fraud in a company that transports dangerous goods are shown.",2007,0, 2032,Exact modeling of defect modes in photonic crystals,"An exact model for defect modes in PC devices with infinite cladding is presented. Its accuracy is demonstrated, as is its role in corroborating that the fundamental mode of a conventional PCF has no cutoff.",2005,0, 2033,A system for fault detection and reconfiguration of hardware based active networks,"An experimental Active Network based on a PC running the Linux OS and operating as a router has been implemented. The PC carries a PCI-based FPGA board, which is the execution environment of the Active Applications. The users are able to send Active Packets and program dynamically the network by remote configuration of the target FPGA board. The FPGA can be reconfigured multiple times on-the-fly with several Active Applications (IP-cores). A fault detection module is permanently configured in one of the FPGAs of the PCI board. Its function is to monitor the Active Applications at run time and check the PCI bus transactions for violations of predefined rules. The fault detector module works as a ""firewall"" preventing the communication between the configured application and the host computer, if a violation is detected.",2004,0, 2034,Design and Implementation of a Fault Tolerant Single IP Address Cluster,"An F-FTCS mechanism that provides a fault tolerant single IP address cluster for TCP applications is proposed. The FTCS mechanism performs fine grain load balancing by handling all incoming TCP connection requests with a master node. Three fail-over algorithms are designed and implemented to carry out the fault tolerant FTCS mechanism. The Discarding and Gathering Algorithms respectively discard and gather TCP connections whose state is SYN-RECEIVED at failure.
A Scattering Algorithm synchronizes the information between nodes in the failure-free phase. These three algorithms are evaluated on Core 2 Duo machines. The Discarding Algorithm recovers from a failure 440 to 950 msec earlier than the Gathering Algorithm, but it requires reprocessing the discarded TCP connection requests. The Scattering Algorithm requires 120 to 160 μsec more overhead during processing of a TCP connection request than the original FTCS mechanism.",2010,0, 2035,Fault Injection Results of Linux Operating on an FPGA Embedded Platform,"An FPGA-based Linux test-bed was constructed for the purpose of measuring its sensitivity to single-event upsets. The test-bed consists of two ML410 Xilinx development boards connected using a 124-pin custom connector board. The Design Under Test (DUT) consists of the hard core PowerPC running the Linux OS, and several peripherals implemented in soft (programmable) logic. Faults were injected via the Internal Configuration Access Port (ICAP). The experiments performed here demonstrate that the Linux-based system was sensitive to 92,542 upsets, less than 0.7 percent of all tested bits. Each sensitive bit in the bit-stream is mapped to the resource and user-module which it configures. A density metric for comparing the reliability of modules within the system is presented.",2010,0, 2036,Analog circuits can be regarded as grey system to diagnose fault,"Analog circuits are regarded as a grey system in this paper. The theory of grey systems is used to diagnose faults in analog circuits. Grey relational analysis is used to quantify the relevance between circuit and fault. Analog circuits can be diagnosed by the grey relational degree. An example of fault diagnosis is provided in this paper. The result shows that fault diagnosis of analog circuits based on the theory of grey systems has the advantages of a simple algorithm, less consumption of samples, and objective quantitative data. This method improves the accuracy and efficiency of fault diagnosis of analog circuits.",2010,0, 2037,New technique for fault location in interconnected networks using phasor measurement unit,"Application of the PMU for fault location is conducted through a derived algorithm and is applied to different study systems through computer numerical simulation. The algorithm estimates the fault location based on synchronized phasors from both ends of the transmission line, whether phasor measurement units are installed at both ends or at only one end, with the other end's phasors calculated from the synchronized phasors of the other side. This algorithm allows for accurate estimation of fault location irrespective of fault resistance, load currents, and source impedance. A computer simulation using the PSCAD program of the transmission line under study with various fault types and different locations is carried out. A modal transformation is used in the algorithm. Different fault types are simulated with different fault locations on more than one line in the Egyptian network, which has phasor measurement units installed according to a selected allocation technique. The results obtained from applying the considered technique show high levels of accuracy in locating faults of different types. Hence, it is strongly recommended to use the PMU in the Egyptian Network.",2008,0, 2038,Computer aided design of fault-tolerant application specific programmable processors,"Application Specific Programmable Processors (ASPP) provide efficient implementation for any of m specified functionalities.
2037,New technique for fault location in interconnected networks using phasor measurement unit,"Application of PMUs for fault location is conducted through an algorithm derived for this purpose and applied to different study systems through computer numerical simulation. The algorithm estimates the fault location based on synchronized phasors from both ends of the transmission line, whether phasor measurement units are installed at both ends or at only one end, with the phasors of the other end computed from the synchronized measurements of the opposite side. This algorithm allows for accurate estimation of fault location irrespective of fault resistance, load currents, and source impedance. A computer simulation of the transmission line under study, with various fault types and different locations, is carried out using the PSCAD program. A modal transformation is used in the algorithm. Different fault types are simulated at different fault locations on more than one line in the Egyptian network, which has phasor measurement units installed according to a selected allocation technique. The results obtained from applying the considered technique show high levels of accuracy in locating different fault types. Hence, it is strongly recommended to use the PMU in the Egyptian Network.",2008,0,
2038,Computer aided design of fault-tolerant application specific programmable processors,"Application Specific Programmable Processors (ASPP) provide efficient implementation for any of m specified functionalities. Due to their flexibility and convenient performance-cost trade-offs, ASPPs are being developed by DSP, video, multimedia, and embedded IC manufacturers. In this paper, we present two low-cost approaches to graceful degradation-based permanent fault tolerance of ASPPs. ASPP fault tolerance constraints are incorporated during the scheduling, allocation, and assignment phases of behavioral synthesis: graceful degradation is supported by implementing multiple schedules of the ASPP applications, each with a different throughput constraint. In this paper, we do not consider concurrent error detection. The first ASPP fault tolerance technique minimizes the hardware resources while guaranteeing that the ASPP remains operational in the presence of all k-unit faults. On the other hand, the second fault tolerance technique maximizes the ASPP fault tolerance subject to constraints on the hardware resources. These ASPP fault tolerance techniques impose several unique tasks, such as fault-tolerant scheduling, hardware allocation, and application-to-faulty-unit assignment. We address each of them and demonstrate the effectiveness of the overall approach, the synthesis algorithms, and software implementations on a number of industrial-strength designs.",2000,0,
2039,Fault testing for reversible circuits,"Applications of reversible circuits can be found in the fields of low-power computation, cryptography, communications, digital signal processing, and the emerging field of quantum computation. Furthermore, prototype circuits for low-power applications are already being fabricated in CMOS. Regardless of the eventual technology adopted, testing is sure to be an important component in any robust implementation. We consider the test-set generation problem. Reversibility affects the testing problem in fundamental ways, making it significantly simpler than for the irreversible case. For example, we show that any test set that detects all single stuck-at faults in a reversible circuit also detects all multiple stuck-at faults. We present efficient test-set constructions for the standard stuck-at fault model, as well as the usually intractable cell-fault model. We also give a practical test-set generation algorithm, based on an integer linear programming formulation, that yields test sets approximately half the size of those produced by conventional automatic test pattern generation.",2004,0,
2040,Induced voltages and currents on gas pipelines with imperfect coatings due to faults in a nearby transmission line,"An improved hybrid method is discussed, employing a finite-element method along with Faraday's law and standard circuit analysis, in order to predict the induced voltages and currents on a pipeline with defects on its coating, running parallel to a faulted line and remote earth. Such defects are frequent, especially in old pipelines, and are modeled as resistances, called leakage resistances. The fault is assumed to be outside the parallel exposure so that conductive interference is negligible and therefore the problem is a two-dimensional one. Input data are the power line and pipeline configuration, the physical characteristics of the conductors and pipeline, the fault and power system terminal parameters, and the location and value of the leakage resistances. Simulation results show that for small values of leakage resistances, defects act as a mitigation method for the induced voltages on the pipeline. However, in that case large currents that flow to earth through the defects can damage the pipeline.",2001,0,
2041,An on-line fault detection technique based on embedded debug features,"An increasing number of applications require the ability to detect faults arising during the normal activity of the electronic system: for this reason, on-line fault detection is a hot topic today. This paper proposes a new technique, suitable for microprocessor-based systems (no matter whether they are implemented in a single device or with discrete COTS), that exploits hardware duplication and combines it with the On-Chip Debug features existing in many processors. The new technique increases the observability of faults (thus increasing detection probability and reducing latency) and is characterized by very low intrusiveness in terms of changes required in the application code.",2010,0,
2042,Domain fault model and coverage metric for SoC verification,"An innovative domain fault model and coverage metric for SoC verification are proposed. The domain fault model is based on a geometrical analysis of the domain boundary and takes advantage of the fact that points on or near the boundary are most sensitive to domain errors. The purpose of this paper is to present an efficient fault model and coverage metric for measuring the completeness and quality of a verification approach. The domain coverage metric has been implemented using the VPI (Verilog procedural interface) and has been applied to the verification of an SoC (system on chip) design. Our domain coverage tool works smoothly with the simulator and vector generator. The results showed that the domain fault model is accurate and efficient, and that the domain coverage metric is powerful at finding potential control-path boundary faults.",2005,0,
2043,Apply bit-level interleaver to improve the error rate performance of the double binary turbo code with less decoder complexity,"An innovative interleaver design concept is proposed to enhance the double binary turbo code adopted in IEEE 802.16e and DVB-RCS/RCT. Our concept features bit-level interleaving instead of conventional symbol-level interleaving. In order to prove this concept, we construct a bit-level interblock permutation interleaver. The new design outperforms the IEEE 802.16e CTC by 0.02-0.3 dB at a frame error rate of 10^-4, while the associated decoder saves 33% of the extrinsic-information storage. Our concept improves error rate performance and reduces double binary turbo decoder complexity at the same time.",2008,0,
2044,Adaptive correction of periodic errors improves telescope performance,"As a further step to improve the tracking performance of telescopes, the intrinsic errors in the drive systems are analyzed. These errors fall into two categories, torque disturbances and sensor errors, which have different impacts on performance. Models for the errors are developed and algorithms for online adaptive parameter identification are presented. The models can be used to significantly improve tracking precision and to monitor friction and unbalance of the elevation axis. The software is designed to allow for the adaptation of the process coefficients, or for off-line modeling based on recorded process data. Finally, real test data are presented.",2005,0,
2045,An optimised fault classification technique based on Support-Vector-Machines,"As a new general machine-learning tool based on the structural risk minimization principle, support vector machines (SVMs) have the advantageous characteristic of good generalization. For this reason, the application of SVMs in the fault classification and diagnosis field has become a growing research focus. In this paper a new approach to real-time fault detection and classification is presented for high speed protective relaying in power transmission systems using SVMs. The integration with an online wavelet-based pre-processing stage enhances the SVM learning capability and classification power. The classification criterion is based on using only the phase angles between the three line currents in the transmission line. The paper begins with the exploration of classifying different fault types (LG, LL, and LLG) using the SVMs. It proceeds with the classification concepts of the nine types of faults. Extensive theoretical studies and simulations using ATP and the MATLAB-SVM Toolbox on an EHV transmission line model have shown that the SVM classifier is highly accurate for fault classification.",2009,0,
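The classification criterion in the record above reduces to feeding inter-phase current angles to a multi-class SVM. The following minimal sketch uses scikit-learn in place of the MATLAB-SVM Toolbox named in the abstract; the angle values and labels are synthetic placeholders, not data from the study.

```python
# Hypothetical sketch of SVM-based fault-type classification from the phase
# angles between the three line currents; scikit-learn stands in for the
# MATLAB-SVM Toolbox used in the paper, and the training data are synthetic.
from sklearn.svm import SVC

# Each sample: (angle(Ia, Ib), angle(Ib, Ic), angle(Ic, Ia)) in degrees.
X_train = [[175.0, 95.0, 90.0],    # line-to-ground (LG) fault signature
           [120.0, 120.0, 120.0],  # healthy / balanced condition
           [160.0, 40.0, 160.0]]   # line-to-line (LL) fault signature
y_train = ["LG", "OK", "LL"]

clf = SVC(kernel="rbf", gamma="scale")
clf.fit(X_train, y_train)
print(clf.predict([[172.0, 98.0, 90.0]]))  # -> ['LG']
```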
2046,Error Tolerance in DNA Self-Assembly by (2k-1) x (2k-1) Snake Tile Sets,"As a possible technology for bottom-up manufacturing, DNA self-assembly requires tolerance to different errors. Among the methods for reducing the error rate, snake tile sets utilize a square block of tiles of even dimension (i.e., 2k x 2k). In this paper, an odd-sized square block (i.e., (2k-1) x (2k-1)) is proposed as the basis for snake tile sets. In comparison with all other tile sets, the proposed snake tile sets achieve a considerable reduction in error rate for growth and facet roughening errors. Simulation results are provided.",2007,0,
2047,Automated Error-Prevention and Error-Detection Tools for Assembly Language in the Educational Environment,"Automated tools for error prevention and error detection exist for many high-level languages, but have been nonexistent for assembly-language programs, embedded programs in particular. We present new tools that improve the quality and reliability of assembly-language programs by helping the educator automate the arduous tasks of exposing and correcting common errors and oversights. These tools give the educator a user-friendly but powerful means of completely testing student programs. The new tools that we have developed are the result of years of research and experience by the authors in testing and debugging students' programming assignments. During this time, we created a few preliminary versions of these automated tools, allowing us to test our students' projects in one fell swoop. These tools gave us the ability to catch stack errors and memory-access errors that we would not have been able to detect with normal testing. These tools considerably shortened the amount of testing time and allowed us to detect a larger group of errors.",2006,0,
2048,Optimal task ordering for troubleshooting systems faults,"Automated troubleshooting of system faults is an essential element of modern aerospace equipment. The increased efficiency and accuracy helps in dramatically lowering maintenance costs. The Prognostic Health Management (PHM) group at Pratt & Whitney is responsible for developing automated systems for fault detection, isolation, and accommodation for the Joint Strike Fighter (JSF) propulsion system. A fundamental question that arises in this context is the following: Given a list of suspected components that have been identified a priori as possible causes for failure symptom(s), what is the optimal troubleshooting task assignment strategy? This paper introduces an approach to optimal task ordering. We show that the correct strategy is to order the tasks based on an easily calculated metric, which we call the mean utility function, that takes into consideration the mean troubleshooting time, or cost, or a combination of the two, depending on what is considered to be most critical. A mathematical proof is given for this. The approach shown in the paper can also be applied, as a troubleshooting strategy, for any other machinery health management system.",2005,0,
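The abstract above does not spell out the mean utility function, so the sketch below assumes the classic result for sequential inspection: check suspects in decreasing order of fault probability divided by mean check time. The components, probabilities, and times are invented for illustration.

```python
# Hypothetical sketch of utility-based troubleshooting task ordering. The
# record above does not define its mean utility function, so the classic
# probability-over-cost ratio is assumed here purely for illustration.
def order_tasks(suspects):
    """suspects: list of (component, fault_probability, mean_check_time)."""
    return sorted(suspects, key=lambda s: s[1] / s[2], reverse=True)

suspects = [("fuel_pump", 0.50, 30.0),   # likely, but slow to check
            ("igniter",   0.15, 2.0),    # unlikely, but cheap to check
            ("sensor",    0.35, 10.0)]
for name, p, t in order_tasks(suspects):
    print(f"check {name}: p/t = {p / t:.3f}")
# Checks the igniter first (0.075), then the sensor (0.035), then the pump.
```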
2049,Designing Real-Time and Fault-Tolerant Middleware for Automotive Software,"Automotive software development poses many challenges to automotive manufacturers, since an automobile is inherently distributed and subject to fault-tolerance and real-time requirements. Middleware is a software layer that can handle the intrinsic complexities of distributed systems and arises as an indispensable run-time platform for automotive systems. This paper explains the concept of middleware by enumerating its functions and categorizes middleware according to adopted communication models. It also extracts five essential requirements of automotive middleware and proposes a middleware design for automotive systems based on the message-oriented middleware (MOM) structure. The proposed middleware effectively addresses the derived requirements and includes many essential features such as real-time guarantees, fault tolerance, and a global time base.",2006,0,
2050,An unifying architectural methodology for robot control design with adaptive fault tolerance,"Autonomous mobile robots typically interact with unknown environments and are subject to many kinds of faults. Adaptive fault tolerance is the dynamic adjustment of the behavior of a system as a consequence of variations in external or internal conditions or changes in immediate goals. This work presents a methodology for the design of robot control including adaptive fault tolerance. We use the hybrid architecture model, with control represented by a finite state machine and a dataflow graph. Redundancy is expressed by multiple paths to the same data element in the dataflow, resulting in different control configurations, allowing adaptation. Our approach intends to simplify the task of the robot control designer, transferring design complexity from program coding to the description of components and interconnections. A control platform is responsible for the control flow, including phase transitions, adaptations, and fault recovery. Using this platform we designed the control of a simple mission for a real robot.",2005,0,
2051,Fault Recovery Using Evolvable Fuzzy Systems,"Autonomous systems must survive for long periods without relying on humans for fault recovery. Faults may be impossible to analyze remotely, and redundant hardware is frequently not allowed. The only recourse may be to replace the existing control strategy. This paper proposes a fuzzy logic controller architecture that executes a replacement control strategy for autonomous systems, and shows that this architecture is ideally suited for fault recovery in autonomous systems with imprecisely defined faults. The controller is evolved in-situ and evaluated intrinsically.",2007,0,
2052,The research of remote fault diagnosis system in manufacturing,"Based on an analysis of the realization mode of remote fault diagnosis systems in manufacturing, this paper thoroughly introduces the general structure of such systems. The treatment is illustrated by the example of a Shanghai Volkswagen-oriented application in a Sino-German cooperative project.",2000,0,
2053,Intelligent system of cable fault location and its data fusion,"Based on an improved method for the electric bridge used for cable fault location, which was proposed earlier by two of the authors (Wang Mei and Wang Jianping, 1996), this paper is devoted to the study of multi-sensor data fusion in an intelligent system for cable fault location. The function modules of data fusion and the method of data fusion are designed in detail using Bayes estimation. In addition, the framework and layers as well as the spatio-temporal problems of data fusion are discussed. Finally, the experimental results are given, by which it is shown that both the testing principle of the system and the design of the software and hardware are correct, and that the testing precision of the intelligent system is improved.",2002,0,
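The Bayes-estimation fusion mentioned in the record above has a compact closed form when each sensor's estimate is treated as an independent Gaussian. The sketch below shows that precision-weighted fusion; the readings and variances are invented for illustration, since the paper's actual sensor models are not given in the abstract.

```python
# Hypothetical sketch of Bayesian multi-sensor fusion for a fault location
# estimate: two independent Gaussian estimates are combined by precision
# weighting. All numbers are illustrative, not from the paper.
def fuse(mu1, var1, mu2, var2):
    """Bayesian fusion of two independent Gaussian estimates."""
    w1, w2 = 1.0 / var1, 1.0 / var2
    mu = (w1 * mu1 + w2 * mu2) / (w1 + w2)   # precision-weighted mean
    return mu, 1.0 / (w1 + w2)               # fused variance shrinks

# One method says the fault is 412 m away; a second sensor says 428 m.
mu, var = fuse(412.0, 9.0, 428.0, 25.0)
print(f"fused fault location: {mu:.1f} m (variance {var:.1f})")
# -> 416.2 m with variance 6.6, tighter than either estimate alone.
```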
2054,Modeling of InGaN PIN solar cells with defect traps and polarization interface charges,"Based on Crosslight APSYS, we have studied InGaN PIN solar cells with the effects of defect traps and piezoelectric polarization interface charges. The defect traps and the piezoelectric polarization interface charges both adversely influence cell performance, and a high interface charge density deteriorates the fill factor dramatically. The results indicate that piezoelectric polarization interface charges also need to be taken into consideration when analyzing and understanding the experimental results for InGaN solar cells.",2010,0,
2055,Study on errors compensation of a vehicular MEMS accelerometer,"Based on micro electronic mechanical system technology, the micro inertial sensors are introduced, and the research and application conditions of MEMS micro accelerometers are analyzed. Taking an auto navigation and testing system as an example, simple mathematical models and compensation methods are developed to correct sensor errors, and their validity is evaluated by experimental verification.",2005,0,
2056,Compensation Algorithm of Device Error for Rate Strapdown Inertial Navigation System,"Based on the characteristics of the Strapdown Inertial Navigation System (SINS) of a certain ballistic missile, and using an established mathematical model of the inertial measuring system, a compensation method for the SINS between samplings is deduced, which provides a reference for the design of software compensation algorithms. The experimental results show that, after using software designed with this algorithm, the IMU error is reduced and the SINS accuracy is improved effectively.",2008,0,
2057,Getting errors to catch themselves - self-testing of VLSI circuits with built-in hardware,"As the electronics industry continues to grow, technology feature sizes continue to decrease, and complex systems and levels of integration continue to increase, the need for better and more effective methods of testing to ensure the reliable operation of chips, the mainstay of today's all-digital systems, is being increasingly felt. One obvious way to significantly improve the testability of digital VLSI circuits and save testing time is to use built-in self-testing (BIST), where the basic idea is to have the chip test itself. BIST is a design methodology that combines the concepts of built-in test (BIT) and self-test (ST) in one. This technique generates test patterns and evaluates test responses inside the chip, and has been widely used in many commercial VLSI products with appreciable success. The subject paper endeavors to present a comprehensive overview of the general methodology of BIST from its various perspectives, and in the sequel attempts to relate its significance in the particular context of modern embedded cores-based system-on-chip (SOC) technology.",2005,0,
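A typical BIST scheme of the kind surveyed above pairs an LFSR that generates pseudo-random test patterns with a signature register that compacts the responses. The sketch below is a minimal software model of both halves; the 4-bit width, tap positions, and toy circuit are illustrative assumptions, not the paper's design.

```python
# Hypothetical sketch of the two halves of a typical BIST scheme: an LFSR
# generating pseudo-random test patterns and a signature register compacting
# responses. The 4-bit taps and the toy circuit are illustrative.
def lfsr_patterns(seed=0b1001, taps=(3, 2), width=4, count=8):
    """Yield pseudo-random test patterns from a Fibonacci LFSR."""
    state = seed
    for _ in range(count):
        yield state
        fb = 0
        for t in taps:                     # XOR the tapped bits for feedback
            fb ^= (state >> t) & 1
        state = ((state << 1) | fb) & ((1 << width) - 1)

def signature(responses, width=4, taps=(3, 2)):
    """Compact circuit responses into one signature (serial MISR-style)."""
    sig = 0
    for r in responses:
        fb = 0
        for t in taps:
            fb ^= (sig >> t) & 1
        sig = (((sig << 1) | fb) ^ r) & ((1 << width) - 1)
    return sig

circuit_under_test = lambda x: (x ^ (x >> 1)) & 0xF     # stand-in logic
good = signature(circuit_under_test(p) for p in lfsr_patterns())
print(f"golden signature: {good:04b}")  # compared on-chip against the live run
```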
2058,An object-based approach to optical proximity correction,"As the feature sizes of integrated circuits have been continually shrinking below the exposure wavelength, correcting techniques such as OPC and PSM are indispensable to compensate for the distortions of wafer images. In this paper, we describe an object-based approach to OPC named OPCM, which is a model-based OPC tool. Also, a rule-based OPC has been adopted to enhance the practicability of the software.",2001,0,
2059,A Robot Fault-Tolerance Approach Based on Fault Type,"As the field of robot service expands, reliability has become one of the highest priorities in robot development. Fault-tolerance is an important characteristic for robots to increase their reliability levels. However, the literature on fault-tolerance in robotics has focused mainly on developing a single fault-tolerance technique aimed at improving reliability for a limited set of contexts and situations. To meet the demands for reliable robots, a set of appropriate fault-tolerance techniques should be used in the given context and situation. In this paper, we present a systematic approach to facilitate the selection of appropriate robot fault-tolerance techniques on the basis of the context in the robot domain. We have applied the approach to build a fault-tolerance architecture for a robot platform.",2009,0,
2060,AVR-INJECT: A tool for injecting faults in Wireless Sensor Nodes,"As the incidence of faults in real Wireless Sensor Networks (WSNs) increases, fault injection is starting to be adopted to verify and validate their design choices. Following this recent trend, this paper presents a tool, named AVR-INJECT, designed to automate the fault injection, and the analysis of results, on WSN nodes. The tool emulates the injection of hardware faults, such as bit flips, acting via software at the assembly level. This attains simplicity while preserving the low level of abstraction needed to inject such faults. The potential of the tool is shown by using it to perform a large number of fault injection experiments, which allow the reaction of real WSN software to faults to be studied.",2009,0,
2061,Model and Fault Inference with the Framework of Probabilistic SDG,"As the scale of systems increases, traditional models and fault diagnosis methods become inapplicable. Qualitative signed directed graphs (QSDG) are used to model the variables, and the relationships among them, in large-scale complex systems. However, they have the distinct limitation of producing spurious solutions due to the under-utilization of knowledge or information. This article proposes a probabilistic SDG (PSDG) model to describe the propagation of faults among variables. The fault diagnosis method is also investigated, with a Bayesian network employed for the inference. Finally, examples are given and future topics are listed.",2006,0,
2062,Increasing fault tolerance to multiple upsets using digital sigma-delta modulators,"As transistor gate lengths advance into the sub-micron dimension, there is an increased possibility of external interference occurring in these devices. The direct effect of such external and/or intrinsic interference is, in many cases, a total mismatch between the desired answer of the system and the obtained response. Therefore, new techniques must be studied in order to guarantee the correct operation of these systems under multiple simultaneous faults. This work presents a totally digital sigma-delta modulator that is used to perform arithmetic operations with much better results than if a common digital operator were used. Simulation results show that, even under multiple simultaneous faults, the system presents very good results, as in the addition case, where a maximum standard deviation of 0.7 is achieved for sigma-delta-modulated signals, while for the digital adder alone this value is 57.5. Such behavior is good enough for operators that tolerate small errors, as in digital filters, where these errors are embedded in the system noise.",2005,0,
2063,Stereoscopic video error concealment for missing frame recovery using disparity-based frame difference projection,"In low bit-rate video communications, packet loss may easily cause whole-frame loss, which in turn leads to an annoying frame-drop phenomenon. In this paper, a novel error concealment algorithm, called disparity-based frame difference projection (DFDP), is specifically developed for stereoscopic video to recover the lost frames at the decoder. The proposed DFDP contains three key components: 1) change detection; 2) disparity estimation; and 3) frame difference projection, which exploits both the intra-view frame difference from one view and the inter-view correlation to estimate the lost frame in the other view. The change region computed on the correctly received frame is used to predict the change region between the current missing frame and its previous frame through the estimated disparity, which is the summation of the estimated global disparity and the estimated local disparity. Experimental results have shown that the proposed stereoscopic video error concealment method can effectively restore the lost frames at the decoder and deliver attractive performance, in terms of objective measurement (peak signal-to-noise ratio) and subjective visual quality.",2009,0,
2064,Evaluating Sigma-Delta Modulated Signals to Develop Fault-Tolerant Circuits,"As microelectronics scales down to the nanometric level, external interference starts to be harmful to the expected behavior of a system. As classical systems do not adequately handle faults caused by such sources, new topologies are proposed. Our present work proposes a solution to this problem consisting of the use of sigma-delta modulation in order to obtain fault tolerance even in the presence of multiple faults. This paper provides the mathematical analysis and demonstration to support the proposed approach.",2006,0,
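The fault tolerance claimed in the two sigma-delta records above comes from spreading each value over a long +/-1 bitstream, so a few corrupted bits shift the decoded average only slightly. The following is a minimal first-order digital sigma-delta modulator sketch; the DC input and stream length are illustrative assumptions.

```python
# Hypothetical sketch of a first-order digital sigma-delta modulator: the
# input is encoded as a +/-1 bitstream whose running average tracks the
# signal, so isolated bit faults only add small, noise-like error.
def sigma_delta(samples):
    integ, feedback, out = 0.0, 0.0, []
    for x in samples:                   # x assumed in [-1, 1]
        integ += x - feedback           # accumulate the quantization error
        bit = 1.0 if integ >= 0 else -1.0
        out.append(bit)
        feedback = bit                  # one-bit DAC fed back
    return out

bits = sigma_delta([0.5] * 1000)        # DC input of 0.5
print(sum(bits) / len(bits))            # ~0.5: the bitstream mean tracks the input
# Flipping a handful of these 1000 bits moves the mean by only ~0.002 each,
# which is why multiple upsets degrade the result gracefully.
```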
2065,Research of Secure Multicast Key Management Protocol Based on Fault-Tolerant Mechanism,"As multicasting is increasingly used as an efficient communication mechanism for group-oriented applications in the Internet, research on multicast key management is becoming a hot issue. Firstly, we analyze the n-party GDH.2 multicast key management protocol and point out that it has the following flaws: lack of authentication, vulnerability to man-in-the-middle attacks, and a single point of failure. In order to settle the issues mentioned above, a fault-tolerant and secure multicast key management protocol (FTS for short), using a fault-tolerant algorithm and a password authentication mechanism, is proposed in this paper. In our protocol, legal members are able to agree on a key despite failures of other members. The protocol can also prevent man-in-the-middle attacks. Finally, we evaluate the security of FTS and compare our protocol with the FTKM through performance analysis. The analytic results show that the protocol not only avoids the single point of failure but also improves overall performance.",2009,0,
2066,A new approach based on mobile agents to network fault detection,"As networks become larger and more complex, the need for advanced fault management capabilities becomes critical. Faults are unavoidable in large and complex communication networks, but quick detection and identification can significantly improve network reliability. Packet monitoring has become a standard technique in network fault detection, but when it is applied to a large-scale network it yields a high volume of packets. To overcome this problem, some techniques have been proposed. However, these techniques are based on the popular SNMP agent and RMON technology, which are characterized by centralization and do not provide strong scalability or local processing ability. We present an intelligent mobile agent-based approach to the problem of network fault detection and implement it using Java technology and Aglet, a mobile agent system from IBM. Experimental results show the proposed approach is very effective in network fault detection.",2001,0,
2067,Bridging Security and Fault Management within Distributed Workflow Management Systems,"As opposed to centralized workflow management systems, the distributed execution of workflows cannot rely on a trusted centralized point of coordination. As a result, basic security features, including compliance of the overall sequence of workflow operations with the pre-defined workflow execution plan, or traceability, become critical issues that are yet to be addressed. Besides, the detection of security inconsistencies during the execution of a workflow usually implies the complete failure of the workflow, although it may be possible in some situations to recover from the latter. In this paper, we present security solutions supporting the secure execution of distributed workflows. These mechanisms capitalize on onion encryption techniques and security policy models to assure the integrity of the distributed execution of workflows, to prevent business partners from being involved in a workflow instance forged by a malicious peer, and to provide business partners with identity traceability for sensitive workflow instances. Moreover, we specify how these security mechanisms can be combined with a transactional coordination framework to recover from faults that may be caught during their execution. The defined solutions can easily be integrated into distributed workflow management systems, as our design is strongly coupled with the runtime specification of decentralized workflows.",2008,0,
2068,Combining dynamic fault trees and event trees for probabilistic risk assessment,"As system analysis methodologies, both event tree analysis (ETA) and fault tree analysis (FTA) are used in probabilistic risk assessment (PRA), especially in identifying system interrelationships due to shared events. Although there are differences between them, ETA and FTA are so closely linked that fault trees (FT) are often used to quantify system events that are part of event tree (ET) sequences (J.D. Andrew et al., 2000). The logical processes employed to evaluate ET sequences and quantify the consequences are the same as those used in FTA. Although much work has been done to combine FT and ET, traditional methods only concentrate on combining static fault trees (SFT) and ET. Our main concern is how to combine dynamic fault trees (DFT) and ET. We propose a reasonable approach in this paper, which is illustrated through a hypothetical example. Because of the complexity of dynamic systems, including their huge size and complicated dependencies, contradictions may exist among different dynamic subsystems. The key benefit of our approach is that we avoid the generation of such contradictions in our model. Another benefit is that efficiency may be improved through modularization.",2004,0,
2069,Error analysis of the moment method,"Because of the widespread use of the Method of Moments for simulation of radiation and scattering problems, analysis and control of solution error is a significant concern in computational electromagnetics. The physical problem to be solved, its mesh representation, and the numerical method all impact accuracy. Although empirical approaches such as benchmarking are used almost exclusively in practice for code validation and accuracy assessment, a number of significant theoretical results have been obtained in recent years, including proofs of convergence and solution-error estimates. This work reviews fundamental concepts such as types of error measures, properties of the problem and numerical method that affect error, the optimality principle, and basic approximation error estimates. Analyses are given for surface-current and scattering-amplitude errors for several scatterers, including the effects of edge and corner singularities and quadrature error. We also review results on ill-conditioning due to resonance effects and the convergence rates of iterative linear-system solutions.",2004,0,
2070,A PH complex control system built-in correction factor,"Besides the pH neutralization process being nonlinear with large hysteresis, the system requires both real-time response and accuracy, so traditional control methods cannot achieve high-quality control results. Fuzzy control does not rely on a mathematical model of the controlled object, but it is very difficult for fuzzy control alone to eliminate the steady-state deviation at its root. Because PI control is very effective at removing steady-state error, the system therefore uses a Fuzzy-PI composite control strategy with a built-in correction factor.",2010,0,
2071,Error-detection codes: algorithms and fast implementation,"Binary CRCs are very effective for error detection, but their software implementation is not very efficient. Thus, many binary nonCRC codes (which are not as strong as CRCs, but can be more efficiently implemented in software) are proposed as alternatives to CRCs. The nonCRC codes include WSC, CXOR, one's-complement checksum, Fletcher checksum, and block-parity code. We present a general algorithm for constructing a family of binary error-detection codes. This family is large because it contains all these nonCRC codes, CRCs, perfect codes, as well as other linear and nonlinear codes. In addition to unifying these apparently disparate codes, our algorithm also generates some nonCRC codes that have minimum distance 4 (like CRCs) and efficient software implementation.",2005,0,
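Of the nonCRC codes listed in the record above, the Fletcher checksum is the easiest to sketch. The minimal Python version below implements the common Fletcher-16 variant (two 8-bit running sums modulo 255); it illustrates why the code catches reorderings that a plain additive checksum misses, and it is not the paper's unified construction.

```python
# Minimal sketch of the Fletcher-16 checksum, one of the nonCRC codes listed
# above; two running 8-bit sums modulo 255 give better error detection than a
# plain additive checksum at similar software cost. Illustrative only.
def fletcher16(data: bytes) -> int:
    s1 = s2 = 0
    for byte in data:
        s1 = (s1 + byte) % 255   # simple sum of the bytes
        s2 = (s2 + s1) % 255     # sum of the running sums (position-sensitive)
    return (s2 << 8) | s1

print(hex(fletcher16(b"abcde")))   # -> 0xc8f0, appended by the sender
print(hex(fletcher16(b"abdce")))   # swapped bytes change the checksum,
                                   # which a plain byte sum would miss
```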
2072,Demodulation of bluetooth GFSK signals under carrier frequency error conditions,"Bluetooth, the short range wireless communications standard, operates in the unlicensed ISM frequency band at 2.4 GHz using Gaussian Frequency Shift Keying (GFSK) modulation. Previous work has shown that the BER performance of a Bluetooth link can degrade by more than 19 dB when the transmitted carrier frequency differs from the ideal, yet is still within the limits imposed by the Bluetooth specification. Therefore a combination of demodulator and decision algorithm with adaptive thresholding that mitigates carrier frequency errors is analysed. The efficacy of this method is demonstrated through the presentation of simulation results. A performance improvement of 13 dB for an initial carrier frequency offset of 70 kHz is reported.",2003,0,
2073,"A ""defect level versus cost"" system tradeoff for electronics manufacturing","Both cost and quality are important features when manufacturing today's high-performance electronics. Unfortunately, the two design goals (low) cost and (high) quality are somewhat mutually exclusive. High testing effort (and thus, quality) comes with a considerable cost, and lowering test activities has a significant impact on the delivered quality. In this paper, we present a new structured search method to obtain the best combination of these two goals. It features a Petri-net oriented cost/quality modeling approach and uses a Pareto chart to visualize the results. The search for the Pareto-optimal points is done by means of a genetic algorithm. With our method, we optimize a manufacturing process for a global positioning system (GPS) front end. The optimized process clearly outperformed the standard fabrication process.",2004,0,
2074,A Framework for Analyzing Correlative Software and Hardware Faults,"Both the software and hardware of computer systems are subject to faults; however, traditional approaches, which ignore the relationship between software faults and hardware faults, are unsuitable for analyzing complex software and hardware faults. This paper proposes a systematic framework to analyze correlative software and hardware faults. It includes two associated processes, module-level analysis and code-level location, and can be used to identify faulty modules and locate the causes of faults. The framework has been integrated into a maintenance system for software-intensive systems, and provides an effective and feasible method to deal with complex faults spanning software and hardware.",2008,0,
2075,Knowledge representation of software faults based on open bug repository,"The bug repository of open source software contains a mass of historical data about software failure behavior and the life-cycle of the bug process. By surveying and analyzing three kinds of open source software, Apache, Eclipse, and Firefox, a practical knowledge representation of software faults is depicted in this paper. An algorithm is further given to transform a software bug report into a software fault case, greatly reducing the cost and time of constructing a software fault case base from scratch. All of these results have significant advantages for developing an intelligent fault diagnosis system based on case-based reasoning.",2010,0,
2076,Automated bug tracking: the promise and the pitfalls,"Bug tracking systems give developers a unique and clear view into users' everyday product experiences. Add some statistical analysis, and software teams can efficiently improve product quality. It's hard to tell precisely how well the error reporting system is working, but it seems to be a weapon that has landed a permanent spot in Microsoft's arsenal. Automated bug tracking, combined with statistical reporting, plays a key role for developers at the Mozilla Foundation, best known for its open source Web browser and email software. The sparse, random sampling approach produces enough data for the team to do what it calls ""statistical debugging"": bug detection through statistical analysis.",2004,0,
2077,"Correlation between bug notifications, messages and participants in Debian's bug tracking system","Bugs are an essential part of software projects because they drive their evolution. Without bug notifications, developers cannot know if their software is accomplishing its tasks properly. However, few analytical studies have been made about this aspect of projects. We have developed a tool to extract and store information from Debian's BTS (Bug Tracking System) in a relational database. In this paper we show that there is a strong dependence between three variables which can be used to analyze the activity of a project through its bugs: bug notifications, communications between users and developers, and people involved.",2007,0,
2078,Assessing reliability risk using fault correction profiles,"Building on the concept of the fault correction profile, a set of functions that predict fault correction events as a function of failure detection events, introduced in previous research, we define and apply reliability risk metrics that are derived from the fault correction profile. These metrics assess the threat to reliability of an unstable fault correction process. The fault correction profile identifies the need for process improvements and provides information for developing fault correction strategies. Applying these metrics to the NASA Goddard Space Flight Center fault correction process and its data, we demonstrate that reliability risk can be measured and used to identify the need for process improvement.",2004,0,
2079,Image quality enhancement support system by gamma correction using interactive evolutionary computation,"By noting that gamma correction is one of the image quality adjustment parameters, the authors create an image quality enhancement support system capable of reflecting user subjectivity. In this study, the derivation of an optimum gamma value is formulated as an optimization problem. To reflect user subjectivity, the image quality enhancement support system is realized by interactive evolutionary computation. The technique is verified by comparing it with manual adjustment in terms of the derived gamma value, image quality, and derivation time.",2007,0,
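The gamma correction being tuned in the record above is the standard power-law mapping of normalized intensities. The sketch below applies it through a lookup table; gamma = 2.2 is only the common display default, not a value from the study, and the interactive evolutionary loop is summarized in the trailing comment.

```python
# Minimal sketch of gamma correction: each normalized pixel value v is mapped
# to v ** (1 / gamma). gamma = 2.2 is the common display default, not a value
# taken from the study above.
def gamma_correct(pixels, gamma=2.2):
    """pixels: iterable of 8-bit intensity values; returns corrected bytes."""
    lut = [round(255 * (v / 255) ** (1.0 / gamma)) for v in range(256)]
    return bytes(lut[p] for p in pixels)

dark_row = bytes([10, 40, 90, 160, 255])
print(list(gamma_correct(dark_row)))  # midtones are lifted; 0 and 255 are fixed
# An interactive evolutionary loop would re-render the image with candidate
# gamma values and let the user's preferences steer the search.
```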
2080,Analyzing the defects of C-130 aircraft through maintenance history,"The C-130 aircraft is considered one of the most reliable military cargo aircraft in the world. Its popularity rests mainly with military users, especially the air forces around the world. Its airframe, engines, and electrical and instrumentation systems are considered of critical importance due to its ability to operate to and from a variety of terrain and environmental conditions. During the same mission it might be exposed to takeoffs or landings at sub-zero temperatures at high-altitude airbases as well as at elevated temperatures on airfields below sea level. This flexibility places high demands on its systems to continuously give outstanding performance under all such conditions with minimal or limited maintenance requirements. The behavior of these systems has been analyzed on the basis of data collected over two years in this research, and findings have been established from a comparative analysis of each system. The results are of considerable importance to operators of such aircraft as well as to maintenance crews, and may also be used for the prediction of maintenance costs and the forecasting of system spares requirements.",2010,0,
2081,RFID and IPv6-enabled Ubiquitous Medication Error and Compliance Monitoring System,"Because of the world's rapidly aging population, costs for healthcare are getting higher and higher, due partially to people who fail to comply with their medication regimens, which costs billions per year and affects a few million patients. This paper presents a combined IPv6 network and RFID-based medication error monitoring system, integrated with Wi-Fi and GSM wireless communication techniques, that is able to collect a user's medicine-taking records anytime and anywhere as a reference for proper diagnosis, reducing healthcare costs as a result.",2007,0,
2082,Analog circuit fault diagnosis based on fuzzy support vector machine and kernel density estimation,"Because the information extracted from analog circuits contains abnormal noise, building an optimal classifier with a support vector machine is difficult. This paper therefore proposes a new method for analog circuit fault diagnosis. First, statistical parameters are extracted from the time-domain signal of the circuit as a set of fault characteristics. Then, a kernel density estimation method is used to construct a form of fuzzy membership function that eliminates the impact of noise on the characteristics. A fuzzy support vector machine built on this membership function is applied to circuit fault diagnosis. The trained support vector machine fault diagnosis model achieves diagnostic classification of both single and multiple faults. The method is applied to a CSTV filter circuit, and the simulation results show that it can highlight the distinct characteristics of faults and correctly and effectively diagnose multiple fault types, with an overall diagnostic accuracy of 95%, providing a new approach to analog circuit fault diagnosis. This technology has good prospects for engineering applications.",2010,0,
2083,A mixed expert system for fault diagnosis,"Because of their highly complex structure and varied working conditions, fault diagnosis for heavy machines is very difficult. This paper puts forward a fault diagnosis method based on a mixed expert system, which uses both rules and cases. The architecture and diagnosis flow of the system are proposed. The knowledge database for heavy machine fault diagnosis is set up, including a rule database and a case database, based respectively on the structure fault tree and breakdown maintenance records. Finally, the prototype system is designed.",2010,0,
2084,"A Framework to Evaluate the Trade-Off among AVF, Performance and Area of Soft Error Tolerant Microprocessors","Because of the increasing susceptibility of integrated circuits to soft errors, many techniques have been proposed at all design levels to reduce the AVF (architecturally vulnerable factor) of microprocessors, at the price of extra performance and area overheads. These overheads have a negative impact on reliability. Conventional reliability evaluation frameworks do not take both performance and area overheads into account. A new metric, mMWTF (modified mean work to failure), is proposed in this paper to capture the trade-off among AVF, performance, and area. A quantitative approach to evaluate mMWTF is also presented, in which fault injection is used to estimate the AVF. To improve on conventional fault injection methods, which inject only SEUs (single event upsets), a new method is proposed that injects both SEUs and MBUs (multi-bit upsets), the latter of which happen more frequently with shrinking feature sizes. Because of the new metric and the new fault injection method, the framework presented in this paper is more accurate than conventional ones. As a case study, two control flow checking techniques are proposed and evaluated in this paper. The evaluation results demonstrate that the techniques with a better balance among AVF, performance, and area can better improve the reliability of microprocessors.",2008,0,
2085,Software design for the fault diagnosis system of ATC based on neural networks,"Because of the limitations of rule-based expert systems, this paper proposes introducing neural network technology into the fault diagnosis system. We then describe the framework and principle of the expert system. Finally, we complete a simulation experiment which indicates the rationality of this design.",2009,0,
2086,Determining Implementation Expertise from Bug Reports,"As developers work on a software product they accumulate expertise, including expertise about the code base of the software product. We call this type of expertise 'implementation expertise'. Knowing the set of developers who have implementation expertise for a software product has many important uses. This paper presents an empirical evaluation of two approaches to determining implementation expertise from the data in source and bug repositories. The expertise sets created by the approaches are compared to those provided by experts and evaluated using the measures of precision and recall. We found that both approaches are good at finding all of the appropriate developers, although they vary in how many false positives are returned.",2007,0,
2087,Fault Tolerance Techniques for the Merrimac Streaming Supercomputer,"As device scales shrink, higher transistor counts are available while soft errors, even in logic, become a major concern. A new class of architectures, such as Merrimac and the IBM Cell, takes advantage of the higher transistor count by exposing control, communication, and a large number of functional units at the architectural level, thus achieving high performance and efficiency. This paper explores soft-error fault tolerance in the context of these compute-intensive architectures, which differ significantly from their control-intensive CPU counterparts. The main goal of the proposed schemes for Merrimac is to conserve the critical and costly off-chip bandwidth and on-chip storage resources, while maintaining high peak and sustained performance. We achieve this by allowing for reconfigurability and relying on programmer input. The processor is either run at full peak performance employing software fault-tolerance methods, or at reduced performance with hardware redundancy. We present several methods, their analysis, and detailed case studies.",2005,0,
2088,Master Defect Record Retrieval Using Network-Based Feature Association,"As electronic records (e.g., medical records and technical defect records) accumulate, the retrieval of a record from a past instance with the same or similar circumstances has become extremely valuable. This is because a past record may contain the correct diagnosis or correct solution to the current circumstance. We refer to two records of the same or similar circumstances as master and duplicate records. Current record retrieval techniques are lacking when applied to this special master defect record retrieval problem. In this study, we propose a new paradigm for master defect record retrieval using network-based feature association (NBFA). We train the master record retrieval process by constructing feature associations to limit the search space. The retrieval paradigm was employed and tested on a real-world large-scale defect record database from a telecommunications company. The empirical results suggest that NBFA was able to significantly improve the performance of master record retrieval, and should be implemented in practice. This paper presents an overview of the technical aspects of the master defect record retrieval problem, describes general methodologies for the retrieval of master defect records, proposes a new feature association paradigm, provides performance assessments on real data from a telecommunications company, and highlights difficulties and challenges in this line of research that should be addressed in the future.",2010,0,
2089,Case Studies on Transition Fault Test Generation for At-speed Scan Testing,"At-speed scan testing for intra-clock and inter-clock transition delay faults in an SOC design with multiple clock domains is an important and challenging issue. Current practice in industry usually applies a test scheme targeted at intra-clock transition delay fault testing (i.e., intra testing). In this paper a test scheme targeting both intra-clock and inter-clock domains for transition delay fault testing (i.e., intra-inter testing) is applied.",2010,0,
This paper presents an empirical study comparing intra testing and intra-inter testing in terms of fault classification, test detection, test coverage, test volume, and test power, using industrial circuits. The information provided by this paper is beneficial to both practitioners and researchers in their pursuit of improving the quality of transition delay testing, which is critical to the quality of deep-submicron VLSI chips.",2010,0,
2090,A survey of methods for detection of stator related faults in induction machines,"As evidenced by industrial surveys, stator related failures account for a large percentage of faults in induction machines. The objective of this paper is to provide a survey of existing techniques for detection of stator related faults, which include stator winding turn faults, stator core faults, temperature monitoring and thermal protection, and stator winding insulation testing. The root causes of fault inception, available techniques for detection, and recommendations for further research are presented. Although the primary focus is on-line, sensorless methods that use machine voltages and currents to extract fault signatures, off-line techniques such as partial discharge detection are also examined.",2003,0,
2091,A microdriver architecture for error correcting codes inside the Linux kernel,"Coding tasks, such as the encryption of data or the generation of failure-tolerant codes, are among the most computationally expensive tasks inside the Linux kernel. Their integration into the kernel enables the user to transparently access these functionalities; encrypted hard disks can be used in the same way as unencrypted ones. Nevertheless, Linux as a monolithic kernel is not prepared to support these expensive tasks by accessing modern hardware accelerators, like graphics processing units (GPUs), as the corresponding accelerator libraries, like the CUDA-API for NVIDIA GPUs, only offer user-space APIs. Linux is often used in conjunction with parallel file systems in high performance cluster environments, and the tremendous storage growth in these environments leads to the requirement of multi-error correcting codes. Parallel file systems, which often run on a storage cluster, are required to store the calculated results without huge waiting times. Whereas the frontend of such a storage cluster can be built with standard PCs, it has so far been nearly impossible to build a capable RAID backend with end-user hardware. This work investigates the potential of graphics cards for such coding applications, like RAID, in the Linux kernel. For this purpose, a special microdriver concept (Barracuda) has been designed that can be integrated into Linux without changing kernel APIs. To investigate the performance of this concept, the Linux RAID 6 system and its Reed-Solomon code have been extended and studied as an example. The resulting measurements outline the opportunities and limitations of our microdriver concept. On the one hand, the concept achieves a speed-up of 72 for complex, 8-failure correcting codes, while no additional speed-up can be generated for simpler, 2-error correcting codes. An example application for Barracuda could therefore be the replacement of expensive RAID systems in cluster storage environments.",2009,0,
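The RAID 6 Reed-Solomon coding extended in the record above boils down to two parity blocks per stripe: P is plain XOR parity, and Q is a syndrome over GF(2^8) with generator 2. The sketch below mirrors that textbook math in Python for illustration; it is not the Barracuda implementation.

```python
# Minimal sketch of the RAID 6 parity computation: P is XOR parity and Q is a
# Reed-Solomon syndrome with generator g = 2 in GF(2^8), following the
# textbook Linux RAID 6 math. Illustrative only, not the Barracuda code.
def gf_mul2(x):
    """Multiply by 2 in GF(2^8) with polynomial 0x11D (x^8+x^4+x^3+x^2+1)."""
    x <<= 1
    return (x ^ 0x11D) if x & 0x100 else x

def raid6_parity(stripes):
    """stripes: list of equal-length data blocks (bytes). Returns (P, Q)."""
    p = bytearray(len(stripes[0]))
    q = bytearray(len(stripes[0]))
    for block in reversed(stripes):       # Horner's rule: Q = (...(d_n)g + ...)g + d_0
        for i, byte in enumerate(block):
            p[i] ^= byte
            q[i] = gf_mul2(q[i]) ^ byte
    return bytes(p), bytes(q)

p, q = raid6_parity([b"\x11\x22", b"\x33\x44", b"\x55\x66"])
print(p.hex(), q.hex())  # any two lost blocks can be rebuilt from the rest
```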
2092,Error Detection Capabilities of Automotive Network Technologies and Ethernet - A Comparative Study,"Coding the payload data and sending the codeword as an overhead of the packet is a very common way to protect data in communication networks. The protection level of the coding technique is chosen depending on the importance of the transmitted data. While high priority, safety critical applications do not tolerate any single bit error in data packets, lower priority services can handle a few bit errors. Coding techniques provide error detection and, up to a certain level, also error correction. Bit errors that are not detectable by coding techniques at the receiver side occur with a residual error probability. The better the coding technique is, the lower its residual error probability for different bit error rates. Most automotive network systems use cyclic redundancy codes (CRC), mainly in order to detect transmission bit errors. Instead of correcting the identified bit errors, which is quite time consuming, a retransmission of the damaged data packet is usually triggered. Similar to automotive network systems, Ethernet, the most applied network technology in local area networks, uses the CRC error detection technique. In this work, we present a comparative study of the error detection capabilities of automotive network systems and Ethernet as a possible network system for time critical applications in the car. We evaluate the related residual error probabilities for a reasonable range of bit error rates. Furthermore, several commercial concepts from the automation field are presented that increase the error detection capability of the standard Ethernet technology significantly.",2007,0,
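The CRC error detection compared in the record above is easy to model in software. The sketch below uses the reflected CRC-32 polynomial 0xEDB88320 employed by Ethernet; a receiver recomputes the checksum over the received frame and treats any mismatch as a detected transmission error. The payloads are invented for illustration.

```python
# Minimal sketch of CRC-based error detection, using the reflected CRC-32
# polynomial 0xEDB88320 that Ethernet employs; bit errors that still yield a
# matching CRC are exactly the residual errors discussed above.
def crc32(data: bytes) -> int:
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):                       # process one bit at a time
            crc = (crc >> 1) ^ (0xEDB88320 if crc & 1 else 0)
    return crc ^ 0xFFFFFFFF

frame = b"automotive payload"
fcs = crc32(frame)                               # sender appends this FCS
corrupted = b"automotive pbyload"
print(hex(fcs), crc32(corrupted) == fcs)         # prints the FCS and False:
                                                 # the corruption is detected
```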
2093,Frame error model in rural Wi-Fi networks,"Commonly used frame loss models for simulations over Wi-Fi channels assume a simple double regression model with a threshold. This model is widely accepted, but few measurements are available in the literature that try to validate it. As far as we know, none of them is based on field trials at the frame level. We present a series of measurements relating transmission distance and packet loss on a Wi-Fi network in rural areas and propose a model that relates distance to packet loss probability. We show that a simple double regression propagation model like the one used in the ns-2 simulator can miss important transmission impairments that are apparent even at short transmitter-receiver distances. Measurements also show that packet loss at the frame level is a Bernoullian process for time spans of a few seconds. We relate the packet loss probability to the received signal level using standard models for additive white Gaussian noise channels. The resulting model is much more similar to the measured channels than the simple models where all packets are received when the distance is below a given threshold and all are lost when the threshold is exceeded.",2007,0,
2094,Evaluation of Error Resilience Mechanisms for 3G Conversational Video,"Communication in 3G networks may experience packet losses due to transmission errors on the wireless link(s), which may severely impact the quality of video services, with conversational video being the most challenging to repair due to tighter delay constraints. Many error resilience mechanisms have been developed that can be applied at the source (codec) level and the transport/application layer to address these challenges. Their respective performance varies depending on the network conditions. This paper analyzes and compares the performance of four error resilience mechanisms under different realistic wireless link conditions: selective retransmissions, slice size adaptation, reference picture selection, and unequal error protection using packet-based forward error correction. We derive suggestions for the applicability of the individual mechanisms.",2008,0,
2095,Fault Tolerance and Recovery in Grid Workflow Management Systems,"Complex scientific workflows are now commonly executed on global grids. With the increasing scale, complexity, heterogeneity, and dynamism of grid environments, the challenges of managing and scheduling these workflows are augmented by dependability issues due to the inherently unreliable nature of large-scale grid infrastructure. In addition to the traditional fault tolerance techniques, specific checkpoint-recovery schemes are needed in current grid workflow management systems to address these reliability challenges. Our research aims to design and develop mechanisms for building an autonomic workflow management system that will exhibit the ability to detect, diagnose, notify, react, and recover automatically from failures of workflow execution. In this paper we present the development of a Fault Tolerance and Recovery component that extends the ActiveBPEL workflow engine. The detection mechanism relies on inspecting the messages exchanged between the workflow and the orchestrated Web Services in search of faults. The recovery of a process from a faulted state has been achieved by modifying the default behavior of ActiveBPEL, and it basically represents a non-intrusive checkpointing mechanism. We present the results of several scenarios that demonstrate the functionality of the Fault Tolerance and Recovery component, outlining an increase in performance of about 50% in comparison to the traditional method of resubmitting the workflow.",2010,0,
2096,A design tool for fault tolerant systems,"Complex systems may have to meet severe availability objectives related to the importance of the service being provided; such systems must be fault tolerant. Designers of fault-tolerant systems try to implement diagnostics to detect as many faults as possible because, in complex systems, uncovered faults lead to latent, highly undesired situations. Unfortunately, diagnostics themselves may fail. Starting from the basics of FMECA, a design methodology and a tool have been developed; the tool is called DIANA (DIagnostic ANAlysis). The basic idea of DIANA is to perform coverage analysis during hardware and firmware design together with reliability engineering analysis. To this purpose, DIANA has been integrated into the computer aided design (CAD) tools in the same way that logic simulation, timing analysis, and analog transmission simulation are performed.",2000,0,
Two main results have been obtained by the DIANA project: the first is to give the designers a tool that helps them to think in such a way as to prevent uncovered fault situations; the second is to calculate the effects of faults on diagnostics in order to provide transition rates to system availability models when real, rather than ideal, cases are taken into account",2000,0, 2097,Error Reduction in Contingency Ranking of Power Systems Using Combined Indexes,"Complexity of calculations prevents utilities from doing online security studies, and that makes ranking and selection of events an essential part in the field of dynamic security assessment. In this paper, after a short review of security assessment, some indexes such as generator rotor angle, generator speed, rotor acceleration, the difference between kinetic and potential energy, etc. have been introduced for ranking of probable contingencies. These indexes have been calculated and compared for different events. In addition to description and investigation of the presented indexes, some probable events have been ranked in a case study network. Calculated indexes were finally compared with the transient energy margin in order to determine their accuracy. Evaluation of results showed that some errors are inevitable and an individual index is not capable of describing the severity of an event completely. In order to reduce the error of ranking, it is suggested to use a combination of indexes. Several simulations were done in a case study system to find proper combinations of individual indexes, which led to finding new combined indexes which are able to rank the contingencies with minimum errors.",2010,0, 2098,Error detection and concealment for video conferencing service,"Compressed video bitstream is intended for real-time transmission over communication networks. However, it is very sensitive to transmission errors. Even a single bit error can lead to disastrous video quality degradation in both the temporal and spatial domains. This quality deterioration is exacerbated when no error control mechanism is employed to protect coded video data against error-prone environments. In this paper, we present a new error detection and concealment algorithm to reduce the effects of transmission errors in the video decoder. We verified that the proposed algorithm achieves good performance in PSNR and objective visual quality through computer simulation with an H.324M mobile simulation set.",2009,0, 2099,Automatic detection of pronunciation errors in CAPT systems based on confidence measure,"Computer aided pronunciation training (CAPT) systems aim at listening to a learner's utterance, judging the overall pronunciation quality and pointing out any pronunciation errors in the utterance. However, the performance of the current error detection techniques cannot satisfy the users' expectations. In this paper, we introduce confidence measures and anti-models into the error detection algorithm. The basic theory and the roles of confidence measures and anti-models are discussed. We then propose a new algorithm, and evaluate it on an English learning system. Experiments are conducted based on the TIMIT database and an adaptive database, which involves 40 Chinese undergraduates.
The results show that confidence measures can be utilized to effectively improve the performance of the CAPT system.",2008,0, 2100,Single byte error control codes with double bit within a block error correcting capability for semiconductor memory systems,"Computer memory systems, when exposed to strong electromagnetic waves or radiation, are highly vulnerable to multiple random bit errors. Under this situation, we cannot apply existing SEC-DED or SbEC capable codes because they provide insufficient error control performance. This correspondence considers the situation where two random bits in a memory chip are corrupted by strong electromagnetic waves or radioactive particles and proposes two classes of codes that are capable of correcting random double bit errors occurring within a chip. The proposed codes, called Double bit within a block Error Correcting-Single byte Error Detecting ((DEC)B-SbED) code and Double bit within a block Error Correcting-Single byte Error Correcting ((DEC)B-SbEC) code, are suitable for recent computer memory systems",2000,0, 2101,Research of fault current limiter for 500kV power grid,"Configuring large power supplies in high-load-density networks and interconnections between power grids will cause enormous short-circuit currents. If the short-circuit current rating of equipment in the power system is exceeded, the equipment must be replaced, which is a very cost- and/or time-intensive procedure. A fault current limiter offers an alternative in such cases. First, this paper attempts to give an overview of fault current limiting measures for medium and high voltage applications. Then several representative fault current limiters, including developing and developed measures, are examined with an emphasis upon the thyristor protected series capacitor (TPSC) based short-circuit current limiter (SCCL). The SCCL would be the only technical proposal which can be used in high voltage systems, owing to its proven technology, low cost, high reliability, powerful performance and easy realization at high voltage. The technology is used in the fault current limiter demonstration project in the East China power grid. First, the main circuit topology of the fault current limiter demonstration project in the East China power grid was introduced, together with the parameter specification of main-circuit equipment such as the capacitor, reactor, damping circuit, thyristor, spark gap, MOV, and bypass circuit breaker. Then the measures and strategies for fast insertion of the current limiting reactor and for over-voltage control and protection of the series capacitors were introduced. After that, the equivalent system of the fault current limiter demonstration project in the East China 500kV power grid was established using PSCAD/EMTDC, with which an approach to fast fault signal identification based on the line current slope, and the coordination between the slope value and the instantaneous value of the line current, were studied in detail by the aid of PSCAD/EMTDC simulation results. The optimal coordination and corresponding time setting values were given. The FCL is series connected to the transmission line and should have some influence upon protective relays. The effect of automatic circuit re-closers on the FCL and the co-ordination between the FCL control and protection system and protective relays must be researched.
The influence of automatic reclosing on the FCL and the coordination between FCL protection and line protection were studied by PSCAD/EMTDC simulation, and a rational solution based on simulative analysis was given. To demonstrate the validity of the PSCAD/EMTDC simulation results, a real time digital simulation (RTDS) experiment of the control and protection system for the FCL demonstration project in the East China 500kV power grid was done. The paper presents the RTDS results, which consist of the experimental scheme, experimental models, simulation results, and analysis of the fault signal identification method and the influence of the fault current limiter on the protective relaying above. Results of the RTDS experiment partially validate the operation of the FCL control and protection system.",2010,0, 2102,Dynamic response of distributed synchronous generators on faults in HV and MV networks,"Connection of distributed generators (DG) to existing power distribution networks requires the solution of technical, economic and regulatory issues. A brief description of the current status of DG in Bosnia and Herzegovina, with special attention to the technical impact of integration of small hydro power plants with synchronous generators, is given in this paper. The dynamic response of DG (namely small hydro power plants with synchronous generators) to disturbances in HV and MV networks is analyzed in detail. Simulations of the DG dynamic response are performed using the free, open-source MATLAB/PSAT software package on a simple test model of a distribution network with DG. Several scenarios are studied. Finally, areas for further research are identified and presented.",2009,0, 2103,Minimizing the impact of distributed generation on distribution protection system by solid state fault current limiter,Connection of new distributed generation (DG) to the existing distribution network increases the fault current and disturbs the existing distribution protection system. In this paper a solid state fault current limiter (SSFCL) application is proposed to minimize the effect of the DG on the distribution protection system in a radial system during a fault. The protection problems in the presence of distributed generation are studied in detail to determine the effectiveness of the SSFCL for the proposed application. The effectiveness of the proposed SSFCL in mitigating protection problems is determined and examined in the test system. Simulation is accomplished in PSCAD/EMTDC. The proposed method is fully validated on the test system,2010,0, 2104,Designing model-based fault estimator for a separation column,Considers a practical approach to model-based fault diagnostics. The fault estimation is performed by a delayed FIR filter that is designed as a noncausal Wiener filter. The approach is demonstrated using a case study of a process upset in a separation column in a petrochemical plant. The designed fault estimator demonstrates early and reliable detection of the fault. It presents conclusive evidence of the fault more than 250 minutes earlier than the action the operator took during the upset based on direct sensory data,2001,0, 2105,The state of documentation practice within corrective maintenance,"Consistent, correct and complete documentation is an important vehicle for the maintainer to gain understanding of a software system, to ease the learning and/or relearning processes, and to make the system more maintainable. Former studies have shown that documentation is one of the most neglected process issues within software engineering today.
The authors check the current state of documentation practice within corrective maintenance in Sweden",2001,0, 2106,Fault Localization in Constraint Programs,"Constraint programs such as those written in high level modeling languages (e.g., OPL, ZINC, or COMET) must be thoroughly verified before being used in applications. Detecting and localizing faults is therefore of great importance to lower the cost of the development of these constraint programs. In a previous work, we introduced a testing framework called CPTEST enabling automated test case generation for detecting non-conformities. In this paper, we enhance this framework to introduce automatic fault localization in constraint programs. Our approach is based on constraint relaxation to identify the constraint that is responsible for a given fault. CPTEST is henceforth able to automatically localize faults in optimized OPL programs. We provide empirical evidence of the effectiveness of this approach on classical benchmark problems, namely Golomb rulers, n-queens, social golfer and car sequencing.",2010,0, 2107,Job Migration and Fault Tolerance in SLA-Aware Resource Management Systems,"Contractually fixed service quality levels are mandatory prerequisites for attracting the commercial user to Grid environments. Service level agreements (SLAs) are powerful instruments for describing obligations and expectations in such a business relationship. At the level of local resource management systems, checkpointing and restart is an important instrument for realizing fault tolerance and SLA-awareness. This paper highlights the concepts of migrating such checkpoint datasets to achieve the goal of SLA-compliant job execution.",2008,0, 2108,Live demonstration: Neuro-inspired system for realtime vision tilt correction,"Correcting digital image tilt needs huge quantities of memory and high computational resources, and usually takes a considerable amount of time. This demonstration shows how the tilt of a spike-based silicon retina dynamic vision sensor (DVS) can be corrected in real time using a commercial accelerometer. DVS output is a stream of spikes codified using the address-event representation (AER). Event-based processing focuses on changing the DVS output addresses in real time. Taking into account this DVS feature, we present an AER based layer able to correct the DVS tilt in real time, using a high speed algorithmic mapping layer and introducing minimal latency in the system. A co-design platform (the AER-Robot platform), based on a Xilinx Spartan 3 FPGA and an 8051 USB microcontroller, has been used to implement the system.",2010,0, 2109,An empirical validation of the relationship between the magnitude of relative error and project size,"Cost estimates are important deliverables of a software project. Consequently, a number of cost prediction models have been proposed and evaluated. The common evaluation criteria have been MMRE, MdMRE and PRED(k). MRE is the basic metric in these evaluation criteria. The implicit rationale of using a relative error measure like MRE, rather than an absolute one, is presumably to have a measure that is independent of project size. We investigate if this implicit claim holds true for several data sets: Albrecht, Kemerer, Finnish, DMR and Accenture-ERP. The results suggest that MRE is not independent of project size. Rather, MRE is larger for small projects than for large projects. A practical consequence is that a project manager predicting a small project may falsely believe in a too low MRE.
Vice versa when predicting a large project. For researchers, it is important to know that MMRE is not an appropriate measure of the expected MRE of small and large projects. We therefore recommend that the data set be partitioned into two or more subsamples and that MMRE be reported per subsample. In the long term, we should consider using other evaluation criteria.",2002,0, 2110,The Impact of Coupling on the Fault-Proneness of Aspect-Oriented Programs: An Empirical Study,"Coupling in software applications is often used as an indicator of external quality attributes such as fault-proneness. In fact, the correlation of coupling metrics and faults in object oriented programs has been widely studied. However, there is very limited knowledge about which coupling properties in aspect-oriented programming (AOP) are effective indicators of faults in modules. Existing coupling metrics do not take into account the specificities of AOP mechanisms. As a result, these metrics are unlikely to provide optimal predictions of pivotal quality attributes such as fault-proneness. This further restricts the assessment of AOP empirical studies. To address these issues, this paper presents an empirical study to evaluate the impact of coupling sourced from AOP-specific mechanisms. We utilise a novel set of coupling metrics to predict fault occurrences in aspect-oriented programs. We also compare these new metrics against previously proposed metrics for AOP. More specifically, we analyse faults from several releases of three AspectJ applications and perform statistical analyses to reveal the effectiveness of these metrics when predicting faults. Our study shows that a particular set of fine-grained directed coupling metrics have the potential to help create better fault prediction models for AO programs.",2010,0, 2111,Accurate Conjunction of Yield Models for Fault-Tolerant Memory Integrated Circuits,"Critical defects, i.e., faults, inevitably occur during semiconductor fabrication, and they significantly reduce both manufacturing yield and product reliability. To decrease the effects of the defects, several fault-tolerance methods, such as the redundancy technique and the error correcting code (ECC), have been successfully applied to memory integrated circuits. In the semiconductor business, accurate estimation of yield and reliability is very important for determining the chip architecture as well as the production plan. However, a simple conjunction of previous fault-tolerant yield models tends to underestimate the manufacturing yield if several fault-tolerance techniques are employed simultaneously. This paper concentrates on developing and verifying an accurate yield model which can be applied successfully in such situations. The proposed conjunction model has been derived from the probability of remaining redundancies and the average number of defects after repairing the defects with the remaining redundancies. The validity of the conjunction yield model is verified by a Monte Carlo simulation.",2009,0, 2112,Simple converter structure for fault tolerant motors,"Critical electrical machine and drive systems used in diverse fields such as aerospace, defense, medicine and nuclear power plants require both special motor and converter topologies.
Nowadays, due to recent technological advances and developments in the area of power electronics and motor control, the fault tolerant electrical machine and drive concept has reached a level where it has begun to be feasible for wide use in practice [1], [2]. Hence any new results obtained are of real interest to all the specialists working in these highly developed fields of electrical engineering. The paper presents an analysis of a nine phase fault tolerant permanent magnet synchronous machine fed by a simple power converter under different realistic stator winding fault conditions. By advanced Flux 2D and Simulink transient cosimulations, the behavior of the drive system under the four different winding fault conditions considered was studied. It was proved that, using the simplified converter topology, nearly the same torque development capability of the machine in faulty states can be assured.",2008,0, 2113,Architectural Design of CA-Based Double Byte Error Correcting Codec,"Cellular Automata (CA) is a novel approach for designing byte error-correcting codes. The regular, modular and cascaded structure of CA can be economically built with VLSI technology. In this correspondence, a modular architecture of a CA based (32, 28) byte error correcting encoder and decoder has been proposed. The design is capable of locating and correcting all double byte errors. CA-based implementation of the proposed decoding scheme provides a simple, cost effective solution compared to the existing decoding scheme for the Reed-Solomon (RS) decoder having double error correcting capability.",2008,0, 2114,The same is not the same - postcorrection of alphabet confusion errors in mixed-alphabet OCR recognition,"Character sets for Eastern European languages typically contain symbols that are optically almost or fully identical to Latin letters. When scanning documents with mixed Cyrillic-Latin or Greek-Latin alphabets, even high-quality OCR-software is often not able to correctly separate between Cyrillic (Greek) and Latin symbols. This effect leads to an error rate that is far beyond the usual error rates observed when recognizing single-alphabet documents. In this paper we first survey similarities between Latin and Cyrillic (Greek) letters and words for distinct languages and fonts. After briefly introducing a new and public corpus collected by our groups for evaluating OCR-technology over mixed-alphabet documents, we describe how to adapt general algorithms and tools for postcorrection of OCR results to the new context of mixed-alphabet recognition. Experimental results on Bulgarian documents from the corpus and from other sources demonstrate that a drastic reduction of error rates can be achieved.",2005,0, 2115,Off-device fault tolerance for digital imaging devices,"The charge-coupled device (CCD) is one of the widely-used optical sensing device technologies for various digital imaging systems such as digital cameras, digital camcorders, and digital X-ray imaging systems. A CCD may suffer from defective or faulty pixels due to numerous causes such as imperfect fabrication, excessive exposure to light radiation and sensing element aging, to mention a few. As the use of high-resolution CCDs increases, defect and fault tolerance of such devices demands immediate attention. In this context, this paper proposes a testing and repair technique for defects/faults on such devices with inability of on-device fault tolerance, referred to as off-device fault tolerance.
Digital image sensor devices such as CCDs, by their nature, cannot readily utilize traditional on-device fault tolerance techniques because each pixel on the device senses a unique image pixel coordinate. No faulty pixel can be replaced or repaired by a spare pixel, as any displacement of an original pixel coordinate cannot sense the original image pixel. Therefore, to effectively provide and enhance the reparability of such devices with inability of on-device fault tolerance, a novel testing and repair method for defects/faults on CCDs is proposed based on the soft testing/repair method proposed in our previous work (Jin et al, 2003) under both single and clustered distributions of CCD pixel defects. The clustered fault model, due to unwanted diffusion, should be considered as a practical model and for comparison with the single fault model. Also, a novel defect/fault propagation model is proposed to effectively capture the on-device effects and faults off the device for the effectiveness and practicality of the testing and repair process. The efficiency and effectiveness of the method is demonstrated with respect to the yield enhancement by the soft-testing/repair method under a clustered fault model as well as a single fault model, referred to as soft yield. Extensive numerical simulations are conducted.",2004,0, 2116,A Job Pause Service under LAM/MPI+BLCR for Transparent Fault Tolerance,"Checkpoint/restart (C/R) has become a requirement for long-running jobs in large-scale clusters due to a mean-time-to-failure (MTTF) on the order of hours. After a failure, C/R mechanisms generally require a complete restart of an MPI job from the last checkpoint. A complete restart, however, is unnecessary since all but one node is typically still alive. Furthermore, a restart may result in lengthy job requeuing even though the original job had not exceeded its time quantum. In this paper, we overcome these shortcomings. Instead of job restart, we have developed a transparent mechanism for job pause within LAM/MPI+BLCR. This mechanism allows live nodes to remain active and roll back to the last checkpoint while failed nodes are dynamically replaced by spares before resuming from the last checkpoint. Our methodology includes LAM/MPI enhancements in support of scalable group communication with a fluctuating number of nodes, reuse of network connections, transparent coordinated checkpoint scheduling and a BLCR enhancement for job pause. Experiments in a cluster with the NAS parallel benchmark suite show that our overhead for job pause is comparable to that of a complete job restart. A minimal overhead of 5.6% is only incurred in case migration takes place while the regular checkpoint overhead remains unchanged. Yet, our approach alleviates the need to reboot the LAM run-time environment, which accounts for considerable overhead resulting in net savings of our scheme in the experiments. Our solution further provides full transparency and automation with the additional benefit of reusing existing resources. Execution continues after failures within the scheduled job, i.e., the application staging overhead is not incurred again in contrast to a restart.
Our scheme offers additional potential for savings through incremental checkpointing and proactive diskless live migration, which we are currently working on.",2007,0, 2117,Template-Based Development of Fault-Tolerant Embedded Software,"Currently there are different approaches to developing fault-tolerant embedded software: implementing the system from scratch or using libraries or specialized hardware. By implementing from scratch, the developer has all options concerning system design, the programming language used and the hardware. But on the other hand the implementation is error-prone and time- and cost-intensive. The usage of libraries or specialized hardware reduces the design possibilities, while increasing the quality of the developed system and accelerating the development. We present a new technique for developing fault-tolerant systems that combines the advantages of these approaches. We suggest the implementation of reusable templates that solve different aspects of fault-tolerant systems, for example temporal synchronization. In addition we introduce a code generator that realizes a mapping of these templates into application-dependent source code.",2006,0, 2118,Dynamic simulation of renewable energy sources and requirements on fault ride through behavior,"Currently, power generation from renewable energy sources is of global significance and will continue to grow during the coming years. The grid integration of intermittent renewable energy sources is not only an issue for distribution networks, but affects the transmission grids as well. The largest share of new installations is connected to the transmission grid, especially at 110 kV. Further, large wind farms of some 100 MW have already been connected to the 400-kV system. The renewable energy sources are connected to the power network via power electronic converters, and this type of connection scheme can change the short circuit conditions in the network. Today, a tripping command is usually generated within the first period, but in the future contributions to the short circuit current by renewable sources are necessary to fulfil the selective clearing of faults. To investigate the new requirements, some novel simulation models are necessary, and are described in the paper, for: converter connected DC sources (fuel cells, PV plants), the doubly fed induction generator (DFIG), and the converter connected synchronous generator. Using these models, the simulation allows for investigating the behavior for various control designs. The delivery of rated current is requested for voltage dips of up to 100% for the whole duration of the fault. The results of some simulations and discussion of them have led to new rules for grid access. These rules are implemented in the grid code",2006,0, 2119,The Simplex Reference Model: Limiting Fault-Propagation Due to Unreliable Components in Cyber-Physical System Architectures,"Cyber-physical systems are networked, component-based, real-time systems that control and monitor the physical world. We need software architectures that limit fault-propagation across unreliable components. This paper introduces our simplex reference model, which is distinguished by: a plant being controlled in an external context, a machine performing the control, a domain model that estimates the plant state, and the safety requirements that must be met. The simplex reference model assists with constructing CPS architectures which limit fault-propagation.
We present a representative case study to highlight the ideas behind the model and our particular decomposition.",2007,0, 2120,"A Fast-Start, Fault-Tolerant MPI Launcher on Dawning Supercomputers","Daemon-based MPI launchers are mainstream nowadays because they can start up processes rapidly. However, effective task management and fault tolerance become more important as the scale of supercomputers grows. A new fast-start and fault tolerant launcher, called SFLauncher, has been used to start up MPICH tasks on Dawning supercomputers. This paper details its features and implementation, with emphasis on scalability, the self-organization algorithm and garbage reclamation. The results of a performance evaluation of SFLauncher are also given.",2008,0, 2121,Module size distribution and defect density,"Data from several projects show a significant relationship between the size of a module and its defect density. We address implications of this observation. Does the overall defect density of a software project vary with its module size distribution? Even more interesting is the question: can we exploit this dependence to reduce the total number of defects? We examine the available data sets and propose a model relating module size and defect density. It takes into account defects that arise due to the interconnections among the modules as well as defects that occur due to the complexity of individual modules. Model parameters are estimated using actual data. We then present a key observation that allows use of this model for not just estimation of the defect density, but also potentially optimizing a design to minimize defects. This observation, supported by several data sets examined, is that the module sizes often follow an exponential distribution. We show how the two models used together provide a way of projecting defect density variation. We also consider the possibility of minimizing the defect density by controlling the module size distribution",2000,0, 2122,Fault tolerant distributed information systems,"Critical infrastructures provide services upon which society depends heavily; these applications are themselves dependent on distributed information systems for all aspects of their operation, and so survivability of the information systems is an important issue. Fault tolerance is a mechanism by which survivability can be achieved in these information systems. We outline a specification-based approach to fault tolerance, called RAPTOR, that enables structuring of fault-tolerance specifications and an implementation partially synthesized from the formal specification. The RAPTOR approach uses three specifications describing the fault-tolerant system, the errors to be detected, and the actions to take to recover from those errors. System specification utilizes an object-oriented database to store the descriptions associated with these large, complex systems. The error detection and recovery specifications are defined using the formal specification notation Z. We also describe an implementation architecture and explore our solution with a case study.",2001,0, 2123,Application-level fault tolerance in real-time embedded systems,"Critical real-time embedded systems need to make use of fault tolerance techniques to cope with operation-time errors, either in hardware or software. Fault tolerance is usually applied by means of redundancy and diversity.
Redundant hardware implies the establishment of a distributed system executing a set of fault tolerance strategies in software, and may also employ some form of diversity, by using different variants or versions for the same processing. This work proposes and evaluates a fault tolerance framework for supporting the development of dependable applications. This framework is built upon basic operating system services and middleware communications and brings flexible and transparent support for application threads. A case study involving radar filtering is described and the framework's advantages and drawbacks are discussed.",2008,0, 2124,ECC design for fault-tolerant crossbar memories: A case study,"Crossbar memories are promising memory technologies for future data storage. Although the memories offer trillion-capacity data storage at low cost, they are expected to suffer from high defect densities and fault rates impacting their reliability. Error correction codes (ECCs), e.g., the Redundant Residue Number System (RRNS) and Reed Solomon (RS) codes, have been proposed to improve the reliability of memory systems. Yet, the implementation of the ECCs was usually done at the software level, which incurs high cost. This paper analyzes ECC design for fault-tolerant crossbar memories. Both RS and RRNS codes are implemented and experimentally compared in terms of their area overhead, speed and error correction capability. The results show that the encoder and decoder of RS require 7.5 times smaller area overhead and operate 8.4 times faster as compared to RRNS. Both ECCs have fairly similar error correction capabilities.",2010,0, 2125,Improving ATPG Gate-Level Fault Coverage by using Test Vectors generated from Behavioral HDL Descriptions,"Current hardware design flows include test pattern generation as a single step to be performed only after logical synthesis. However, early generation of a few high level test patterns can provide higher test quality and reduce ATPG effort. In this work, the authors apply a software engineering technique for control flow based path testing to extract test vectors from the behavioral HDL description of digital circuits. The authors show how one can adapt this software testing approach to test hardware devices. Experimental results show that combining high level generated test vectors with gate level ATPG can improve test quality, either increasing fault coverage and/or reducing test set size",2006,0, 2126,A Systems Approach to Fault Detection and Diagnosis for Condition-Based Maintenance,"Current methods for actuator condition monitoring on the railways have been developed as individual solutions, using simple thresholding techniques to raise alarms. These are only partly helpful to the maintainer as considerable diagnostics must still be carried out when a fault is detected. A more comprehensive approach is developed here as a general solution to diagnosis for all actuators belonging to the class known as single-throw mechanical equipment. A systems engineering approach has been used to develop the solution, by taking a basic set of requirements as a starting point and using these to form functions by collecting and evaluating methods from all parts of industry",2006,0, 2127,Evaluating the Use of Requirement Error Abstraction and Classification Method for Preventing Errors during Artifact Creation: A Feasibility Study,"Defect prevention techniques can be used during the creation of software artifacts to help developers create high-quality artifacts.
These artifacts should have fewer faults that must be removed during inspection and testing. The Requirement Error Taxonomy that we have developed helps focus developers' attention on common errors that can occur during requirements engineering. Our claim is that, by focusing on those errors, the developers will be less likely to commit them. This paper investigates the usefulness of the Requirement Error Taxonomy as a defect prevention technique. The goal was to determine if making requirements engineers familiar with the Requirement Error Taxonomy would reduce the likelihood that they commit errors while developing a requirements document. We conducted an empirical study in which the participants were given the opportunity to learn how to use the Requirement Error Taxonomy by employing it during the inspection of a requirements document. Then, in teams of four, they developed their own requirements document. This requirements document was then evaluated by other students to identify any errors made. The hypothesis was that participants who find more errors during the inspection of a requirements document would make fewer errors when creating their own requirements document. The overall result supports this hypothesis.",2010,0, 2128,Bio-Inspired Integrated Chips for Telecommunications S/W Defect-Tracking,"Defect tracking is important in evaluating the reliability of the software used in telecommunication networks. Bio-inspired integrated approaches and embedded chips have been developed and implemented to track improvements in the software reliability. In this paper, the integrated model for failure discovery during testing is combined with bio-inspired approaches using the recurrent dynamic neural network (RDNN) with parametric adjustments and wavelets as basis, and the adaptive parameters RDNN (ARDNN), where the criterion is to minimize the error in failure intensity estimation, subject to the model constraints. Simulation results favor our adaptive recurrent dynamic neural network, with error reduced from 88% to 1.25-8%, depending on the number of iterations in the training phase. The ARDNN approach provides an optimum solution to the dynamic problem at hand since it iterates on the shape of the wavelet basis and provides adequate recovery of the data in the form of piecewise linear differentials.",2007,0, 2129,User Interface Defect Detection by Hesitation Analysis,"Delays and errors are the frequent consequences of people having difficulty with a user interface. Such delays and errors can result in severe problems, particularly for mission-critical applications in which speed and accuracy are of the essence. User difficulty is often caused by interface-design defects that confuse or mislead users. Current techniques for isolating such defects are time-consuming and expensive, because they require human analysts to identify the points at which users experience difficulty; only then can diagnosis and repair of the defects take place. This paper presents an automated method for detecting instances of user difficulty based on identifying hesitations during system use. The method's accuracy was evaluated by comparing its judgments of user difficulty with ground truth generated by human analysts. The method's accuracy at a range of threshold parameter values is given; representative points include 92% of periods of user difficulty identified (with a 35% false-alarm rate); 86% (24% false-alarm rate); and 20% (3% false-alarm rate).
Applications of the method to addressing interface defects are discussed",2006,0, 2130,A Multi-Perspective Taxonomy for Systematic Classification of Grid Faults,Classification turns chaotic knowledge into regularity by systematizing a domain and providing a common vocabulary. Currently there is a lack of systematic and comprehensive studies in the organization and classification of Grid faults. We address this gap with a multi-perspective Grid fault taxonomy describing an incident using eight different characteristics. It is hard to define a taxonomy of broad validity and acceptance that satisfies the vast number of requirements of the many Grid user communities. Nevertheless we prove that our taxonomy can serve as a solid basis for defining project-specific custom classification schemes by giving a concrete example created for a state-of-the-art Grid middleware environment.,2008,0, 2131,Fault-tolerant and reliable computation in cloud computing,"Cloud computing, with its great potential for low-cost and on-demand services, is a promising computing platform for both commercial and non-commercial computation clients. In this work, we investigate the security perspective of scientific computation in cloud computing. We investigate a cloud selection strategy to decompose the matrix multiplication problem into several tasks which will be submitted to different clouds. In particular, we propose techniques to improve the fault-tolerance and reliability of a rather general scientific computation: matrix multiplication. Through our techniques, we demonstrate that fault-tolerance and reliability against faulty and even malicious clouds in cloud computing can be achieved.",2010,0, 2132,Web-Interface on Grid System for Atmospheric Corrections of Modis Data in Coastal Area,"The coastal area ecosystem, representing one of the most delicate and complex relationships between the natural environment and human activities, is one of the main subjects of government programs for environmental monitoring and eco-sustainable environmental management. This paper describes a quantitative evaluation of MODIS water leaving reflectance products on coastal areas, where the occurrence of land-contaminated pixels (mixed sea-land) and different atmospheric absorption often make the values measured from satellite unreliable or ambiguous, and the quality flag related to these areas often marks them as unreliable. The evaluation is performed by comparison to in situ measurements acquired in the framework of the IMCA project (Integrated Monitoring of Coastal Areas) from June 2006 to May 2008. The user can perform his own atmospheric corrections, setting the best models and parameters for the area of interest. In order to allow easy access and management of different data sets and of pre-processing and processing parameter selection, a prototype Web GIS for environmental monitoring has been realized.",2009,0, 2133,Dissociating Early and Late Error Signals in Perceptual Recognition,"Decisions about object identity follow a period in which evidence is gathered and analyzed. Evidence can consist of both task-relevant external stimuli and internally generated goals and expectations. How the various pieces of information are gathered and filtered into meaningful evidence by the nervous system is largely unknown. Although object recognition is often highly efficient and accurate, errors are common. Errors may be related to faulty evidence gathering arising from early misinterpretations of incoming stimulus information.
In addition, errors in task performance are known to elicit late corrective performance monitoring mechanisms that can optimize or otherwise adjust future behavior. In this study, we used functional magnetic resonance imaging (fMRI) in an extended trial paradigm of object recognition to study whether we could identify performance-based signal modulations prior to and following the moment of recognition. The rationale driving the current report is that early modulations in fMRI activity may reflect faulty evidence gathering, whereas late modulations may reflect the presence of performance monitoring mechanisms. We tested this possibility by comparing fMRI activity on correct and error trials in regions of interest (ROIs) that were selected a priori. We found pre- and postrecognition accuracy-dependent modulation in different sets of a priori ROIs, suggesting the presence of dissociable error signals.",2008,0, 2134,Decomposition of error based control system optimization,"Decomposition possibilities of the closed-loop control error are investigated to find the limits of optimality in generic two-degree-of-freedom control systems. New relationships are introduced for the different degradation components in a general and in a Youla-parametrized control system. It is investigated how the major components in the decomposed problem can be optimized for open-loop stable plants. The new decomposition approach introduced helps the construction of new algorithms for simultaneous robust identification and control.",2005,0, 2135,A practical fault location approach for double circuit transmission lines using single end data,"Double circuit transmission lines are frequently subjected to a variety of technical problems from the perspective of protection engineering. These problems are mainly due to the mutual coupling effects between adjacent circuits of the line. In this paper, a new fault location approach for double circuit transmission lines is introduced. It depends only on the data extracted from one end of the line. This practically facilitates implementing and developing this approach, as it needs no information from the other end. The approach is based on modifying the apparent impedance method using modal transformation. Through modal transformation, the coupled equations of the transmission line are converted into decoupled ones. This greatly eliminates the mutual effects, resulting in an accurate estimation of the fault distance in a straightforward manner. Also, the effects of prefault currents, charging currents, and the unknown fault resistance on the estimation accuracy are compensated. The proposed approach was tested via digital simulation using ATP-EMTP in conjunction with MATLAB. Applied test results corroborate the superior performance of the proposed approach.",2003,0, 2136,InVerS: An Incremental Verification System with Circuit Similarity Metrics and Error Visualization,"Dramatic increases in design complexity and advances in IC manufacturing technology affect all aspects of circuit performance and functional correctness. As interconnect increasingly dominates delay and power at the latest technology nodes, much effort is invested in physical synthesis optimizations, posing great challenges in validating the correctness of such optimizations. Common design methodology delays the verification of physical synthesis transformations until the completion of the design phase.
However, this approach is not sustainable because the isolation of potential errors becomes extremely challenging in current complex design efforts. In addition, the lack of interoperability between verification and debugging tools greatly limits engineers' productivity. Since the design's functional correctness should not be compromised, considerable resources are dedicated to checking and ensuring correctness at the expense of improving other aspects of design quality. To address these challenges, the paper proposes a fast incremental verification system for physical synthesis optimizations, InVerS, which includes capabilities for error detection, diagnosis, and visualization. This system helps engineers to discover errors earlier and simplifies error isolation and correction, thus reducing verification effort and enabling more aggressive optimizations to improve performance",2007,0, 2137,The application of a CBR MAS hierarchical model in fault diagnosis system,"Due to the complexity and diversity of large rotating machinery and working conditions, timely and accurate fault diagnosis cannot be guaranteed. The method and process of distributed intelligent fault diagnosis based on MAS (Multi-Agent System) are analyzed, and a hierarchical model which uses the case-based reasoning (CBR) method to carry out fault diagnosis in a fault diagnosis expert system is proposed. Using the model, the system can quickly and accurately analyze the cause of machinery faults, and provide reasonable and constructive decision-making advice.",2010,0, 2138,Fault diagnosis in optical access network using Bayesian network,"Due to the ever-rising need for higher bandwidth dictated by the increasing amounts and quality of services (Asymmetric Digital Subscriber Line (ADSL), Internet Protocol television (IPTV) and Voice over Internet Protocol (VOIP)), and the global development of technology, Fiber To The Home (FTTH) is becoming an affordable answer for the service provider. Before fully launching the service for the mass market, the most important prerequisite is being able to provide customers with a high quality experience. To be able to achieve that, one of the main factors that service providers have to ensure is accurate and precise diagnosis of customer reported faults. Low quality of diagnostics prolongs the period of service degradation and increases the likelihood of repeated faults, which results in dissatisfaction amongst customers. That is why our objective is to improve the accuracy of diagnostics performed by network technicians to at least 80%. In this paper we introduce a solution based on a Bayesian network and present the results of the applied method.",2010,0, 2139,Software-based self-test methodology for crosstalk faults in processors,"Due to signal integrity problems' inherent sensitivity to timing, power supply voltage and temperature, it is desirable to test AC failures such as crosstalk-induced errors at operational speed and in the circuit's natural operational environment. To overcome the daunting cost and increasing performance hindrance of high-speed external testers, Software-Based Self-Test (SBST) is proposed as a high-quality, low-cost at-speed testing solution for AC failures in programmable processors and System-on-Chips (SoC). SBST utilizes low-cost testers, applies tests and captures test responses in the natural operational environment. Hence SBST avoids artificial testing environments and external-tester-induced inaccuracies.
Different from testing for stuck-at faults, testing for crosstalk faults requires a sequence of test vectors delivered at the operational speed. SBST applies tests in functional mode using instructions. Different instructions impose different controllability and observability constraints on a module-under-test (MUT). The complexity of searching for an appropriate sequence of instructions and operands becomes prohibitively high. In this paper, we propose a novel methodology to conquer the complexity challenge by efficiently combining structural test generation techniques with instruction-level constraints. The MUT in several time frames is automatically flattened and augmented with Super Virtual Constraint Circuits (SuperVCCs), which guide an automatic test pattern generation (ATPG) tool to select appropriate test instructions and operands. The proposed methodology enables automatic test-program generation and high-fidelity test solutions for AC failures. Experimental results are shown on a commercial embedded processor (Xtensa™ from Tensilica Inc).",2003,0, 2140,Research on Extension Method and Detection of Fault-Line Selection in Resonant Grounded Systems,"Due to the small grounding current and the unstable electric arc at the fault position, faulty circuit selection in resonant grounded systems has always been a complex and difficult problem. This paper proposes a new diagnostic method based on the matter-element and correlation analysis of Extension Theory. The characteristic variables of the zero-sequence current and the correlation function have been used to identify the correlation. With the results, we may select the faulty lines. In the new method, the weighting coefficient of the fault characteristics is endowed with the variance of the correlation function, which not only widens the gap between correlations but also erases the influences of the small current and the unstable electric arc. When the fault is a short circuit, the method can also erase the influence of the capacitive current in a normal long line. Numerous digital simulation results show that the proposed method is effective and reliable.",2008,0, 2141,An Approach of Fault Diagnosis for System Based on Fuzzy Fault Tree,"Due to the complexity of the communication control system (CCS) structure and the variations in operating conditions, the occurrence of a fault inside a CCS is uncertain and random. Aiming at the limitations of current CCS fault diagnosis, the paper presents a fuzzy-fusion approach to fault diagnosis based on the fuzzy fault tree. The elaborated method considers the characteristics of the diagnostic object to establish the fuzzy fault tree, converts the fault rate index into a fuzzy number of the fault rate, performs the fuzzy analysis of the fault tree, determines the confidence interval of the probability of the top event, and achieves a fuzzy reasoning diagnosis result. The details of the fuzzy number design are described in the paper and an application example of the method is also provided. The results show that the proposed fuzzy fault tree analysis method is effective and available for fault diagnosis of CCSs.",2008,0, 2142,Research on adaptive Error Concealment for Video Transmission over Packet-lossy Channel,"Because the error concealment recommended by H.264 cannot reconstruct the lost image effectively, an efficient error concealment method is proposed in this paper. The proposed scheme adaptively selects a proper concealment candidate to conceal the artifact of a lost block.
To determine the best concealment candidate, a trial process is used in which the concealment candidates are examined based on analyzing the motion activity of the MBs. Simulations show that the proposed scheme clearly outperforms the existing method in both PSNR and visual quality.",2007,0, 2143,A Case Study of Software Security Test Based on Defects Threat Tree Modeling,"Due to the increasing complexity of software applications, traditional functional security testing methods, which only test and validate software security mechanisms, are becoming ineffective at detecting latent software security defects (SSD). However, most vulnerabilities result from a few typical SSDs. According to CERT/CC, ten known defects are responsible for 75% of security breaches in today's software applications. On the basis of threat tree modeling, we use it in the integrated software security test model. To demonstrate the usefulness of the method, we apply the test model to the M3TR software security test.",2010,0, 2144,Improved Configuration of the Inductive Core-Saturation Fault Current Limiter with the Magnetic Decoupling,"Due to the increasing levels of short-circuit currents, fault current limiters (FCLs) are expected to play an important role in the protection of future power grids. The inductive FCLs, which have a dc winding that drives the core into saturation, are particularly interesting due to their inherent reaction to the fault. To avoid an over-voltage on the dc winding during a fault, an FCL with decoupled ac and dc magnetic circuits has been proposed [5]. However, this FCL configuration reduces the value of the normal current, since its ac leg operates around the saturation knee, i.e. the FCL impedance increases during nominal operation. The size of the core and the peak value of the limited fault current have to be increased if the FCL impedance in normal operation is to be diminished. This paper presents an improved version of the inductive FCL with magnetic decoupling of the ac and dc circuits. The new FCL design reduces the device weight and size, while decreasing the FCL impedance to a low value during normal operation. The increase of the peak value of the limited fault current is avoided. Comparison results are obtained through simulations in the SaberDesigner software.",2008,0, 2145,Tracking Probabilistic Correlation of Monitoring Data for Fault Detection in Complex Systems,"Due to their growing complexity, it becomes extremely difficult to detect and isolate faults in complex systems. While large amounts of monitoring data can be collected from such systems for fault analysis, one challenge is how to correlate the data effectively across distributed systems and observation time. Much of the internal monitoring data reacts to the volume of user requests accordingly when user requests flow through distributed systems. In this paper, we use Gaussian mixture models to characterize the probabilistic correlation between flow-intensities measured at multiple points. A novel algorithm derived from the expectation-maximization (EM) algorithm is proposed to learn the ""likely"" boundary of the normal data relationship, which is further used as an oracle in anomaly detection. Our recursive algorithm can adaptively estimate the boundary of the dynamic data relationship and detect faults in real time.
Our approach is tested in a real system with injected faults and the results demonstrate its feasibility",2006,0, 2146,Failure and Coverage Factors Based Markoff Models: A New Approach for Improving the Dependability Estimation in Complex Fault Tolerant Systems Exposed to SEUs,"Dependability estimation of a fault tolerant computer system (FTCS) perturbed by single event upsets (SEUs) requires first obtaining the probability distribution functions for the time to recovery (TTR) and the time to failure (TTF) random variables. The application cross section (σAP) approach does not directly give all the required information. This problem can be solved by means of the construction of suitable Markoff models. In this paper, a new method for constructing such models based on the system's failure and coverage factors is presented. Analytical dependability estimation is consistent with fault injection experiments performed in a fault tolerant operating system developed for a complex, real time data processing system.",2007,0, 2147,"Towards fault-tolerant digital microfluidic lab-on-chip: Defects, fault modeling, testing, and reconfiguration","Dependability is an important attribute for microfluidic lab-on-chip devices that are being developed for safety-critical applications such as point-of-care health assessment, air-quality monitoring, and food-safety testing. Therefore, these devices must be adequately tested after manufacture and during bioassay operations. This paper presents a survey of early work on fault tolerance in digital microfluidic lab-on-chip systems. Defects are related to logical fault models that can be viewed not only in terms of traditional shorts and opens, but also in terms of biochip functionality. Based on these fault models, test techniques for lab-on-chip devices and digital microfluidic modules are presented.",2008,0, 2148,Time-Varying Network Fault Model for the Design of Dependable Networked Embedded Systems,"Dependability is becoming a key design aspect of today's networked embedded systems (NES's) due to their increasing application to safety-critical tasks. Dependability evaluation must be based on modelling and simulation of faulty application behaviors, which must be related to faulty NES behaviors under actual defects. However, NES's behave differently from traditional embedded systems when testing activities are performed on them. In particular, issues arise on the definition of correct behavior, on the best point to observe it, and on the temporal properties of the faults to be injected. The paper describes these issues, discusses some possible solutions and presents a new time-varying network-based fault model to represent failures in a more abstract and efficient way. Finally, the fault model has been used to support the design of a network-based control application where packet losses, end-to-end delay and signal distortion must be carefully controlled.",2009,0, 2149,Fault Tolerant Scheduling on Controller Area Network (CAN),"Dependable communication is becoming a critical factor due to the pervasive usage of networked embedded systems that increasingly interact with human lives in one way or the other in many real-time applications.
Though many smaller systems provide dependable services by employing uniprocessor solutions, stringent fault containment strategies, etc., these practices are fast becoming inadequate due to the prominence of COTS in hardware and component based development (CBD) in software, as well as the increased focus on building 'systems of systems'. Hence, the repertoire of design paradigms, methods and tools available to the developers of distributed real-time systems needs to be enhanced in multiple directions and dimensions. In future scenarios, a network will potentially need to cater to messages of multiple criticality levels (and hence varied redundancy requirements), and scheduling them in a fault-tolerant manner becomes an important research issue. We address this problem in the context of the Controller Area Network (CAN), which is widely used in the automotive and automation domains, and describe a methodology which enables the provision of appropriate scheduling guarantees. The proposed approach involves the definition of fault-tolerant windows of execution for critical messages and the derivation of message priorities based on earliest deadline first (EDF).",2010,0, 2150,Fault tolerance in a layered architecture: a general specification pattern in B,"Dependable control systems are usually complex and prone to errors of various natures. Such systems are often built in a modular and layered fashion. To guarantee system dependability, we need to develop software that is not only fault-free but is also able to cope with faults of other system components. In this paper we propose a general formal specification pattern that can be recursively applied to specify fault tolerance mechanisms at each architectural layer. Iterative application of this pattern via stepwise refinement in the B method results in the development of a layered fault tolerant system that is correct by construction. We demonstrate the proposed approach by an excerpt from a realistic case study - the development of the liquid handling workstation Fillwell.",2004,0, 2151,Study on Engine Fault Diagnosis and Realization of Intelligent Analysis System,"Depending on the types of engine faults, we develop means to extract the characteristic values of the signal in the frequency and time domains, and design an intelligent analysis system based on the TMS320VC5402. The particular hardware and software design based on the DSP device is presented in the paper. The analysis system is high in speed, low in power consumption and small enough in size to be portable. It is fit for run-time supervision and analysis.",2005,0, 2152,Formal guides for experimentally verifying complex software-implemented fault tolerance mechanisms,"Describes a framework allowing the experimental verification of complex software-implemented fault-tolerance algorithms and mechanisms (FTAMs). This framework takes into account two of the most important aspects which are increasingly required in newly-developed fault-tolerant systems: the consideration of COTS (commercial off-the-shelf) based architectures and compliance with severe safety certification procedures. The strategy proposed shows how a rigorous FTAM specification, based on a multiple-viewpoint architectural description, may help to mechanically monitor the verification of its implementation under real conditions. The proposed strategy has been instantiated using two mechanized techniques: model checking and fault injection.
The preliminary conclusions from the application of this automated approach to a small part of a commercial fault-tolerant system help us clarify its usage and its suitability for validating complex dependable systems.",2001,0, 2153,OFTT: a fault tolerance middleware toolkit for process monitoring and control Windows NT applications,"Describes OFTT (OLE Fault Tolerance Technology), a fault tolerance middleware toolkit running on the Microsoft Windows NT operating system that provides the required fault tolerance for networked PCs in the context of industrial process monitoring and control applications. It is based on the Microsoft Component Object Model (COM) and consists of components that perform checkpoint-saving, failure detection, recovery and other fault tolerance functions. The ease with which this technology can be incorporated into an application represents the primary innovation. It is hoped that, by making fault tolerance more compatible with standard software architectures, more reliable PC-based monitoring and control systems can be built conveniently.",2000,0, 2154,Delivering error detection capabilities into a field programmable device: the HORUS processor case study,"Designing a complete SoC or reusing SoC components to create a complete system is a common task nowadays. The flexibility offered by current design flows gives the designer an unprecedented capability to incorporate increasingly demanded features, such as error detection and correction mechanisms, to increase system dependability. This is especially true for programmable devices, where rapid design and implementation methodologies are coupled with testing environments that are easily generated and used. This paper describes the design of the HORUS processor, a RISC processor augmented with a concurrent error detection mechanism, and the architectural modifications needed on the original design to minimize the resulting performance penalty.",2002,0, 2155,Design time reliability analysis of distributed fault tolerance algorithms,"Designing a distributed fault tolerance algorithm requires careful analysis of both fault models and diagnosis strategies. A system will fail if there are too many active faults, especially active Byzantine faults. But a system will also fail if overly aggressive convictions leave inadequate redundancy. For high reliability, an algorithm's hybrid fault model and diagnosis strategy must be tuned to the types and rates of faults expected in the real world. We examine this balancing problem for two common types of distributed algorithms: clock synchronization and group membership. We show the importance of choosing a hybrid fault model appropriate for the physical faults expected by considering two clock synchronization algorithms. Three group membership service diagnosis strategies are used to demonstrate the benefit of discriminating between permanent and transient faults. In most cases, the probability of failure is dominated by one fault type. By identifying the dominant cause of failure, one can tailor an algorithm appropriately at design time, yielding significant reliability gain.",2005,0, 2156,Low voltage fault detection and localisation using the TOPAS 1000 disturbance recorder,"During the past 20 years an increasingly competitive power industry has recognised the importance of addressing the issue of power quality. Many companies now take an active role in addressing the problems associated with power quality.
They are investing in the research and development of equipment to overcome power dips, surges and interruptions, and offering the customer a wide and varied choice of solutions and services for their power quality needs. These solutions are usually based on overcoming the limitations of the individual equipment being used by the customer, rather than improving the quality of the power supplied.",2005,0, 2157,Fault-tolerant clustering of wireless sensor networks,"During the past few years distributed wireless sensor networks have been the focus of considerable research for both military and civil applications. Sensors are generally constrained in on-board energy supply; therefore, efficient management of the network is crucial to extend the life of the system. Sensors' energy cannot support long haul communication to reach a remote command site, thus they require a multi-tier architecture to forward data. An efficient way to enhance the lifetime of the system is to partition the network into distinct clusters with a high-energy node called a gateway as cluster-head. Failures are inevitable in sensor networks due to the inhospitable environment and unattended deployment. However, failures at higher levels of the hierarchy, e.g. the cluster-head, cause more damage to the system because they also limit accessibility to the nodes that are under their supervision. In this paper we propose an efficient mechanism to recover sensors from a failed cluster. Our approach avoids a full-scale re-clustering and does not require deployment of redundant gateways.",2003,0, 2158,Cloud-Rough Model Reduction with Application to Fault Diagnosis System,"During a system fault period, the explosively growing signals, which involve both fuzziness and randomness, are usually too redundant for the dispatcher to make the right decision. So intelligent methods must be developed to aid users in maintaining and using this abundance of information effectively. An important issue in a fault diagnosis system (FDS) is to allow the discovered knowledge to be as close as possible to natural languages, to satisfy user needs with tractability and to offer FDS robustness. At this juncture, cloud theory is introduced. The mathematical description of the cloud has effectively integrated the fuzziness and randomness of linguistic terms in a unified way. A cloud-rough model is put forward. Based on it, a method of knowledge representation in FDS is developed which bridges the gap between quantitative knowledge and qualitative knowledge. Compared with the classical rough set, the cloud-rough model can deal with attribute uncertainty and perform a soft discretization of continuous attributes. A novel approach, including discretization, attribute reduction, value reduction and data complement, is presented. The data redundancy is greatly reduced based on an integrated use of cloud theory and rough set theory. An illustration with a power distribution FDS shows the effectiveness and practicality of the proposed approach.",2006,0, 2159,An Efficient Re-quantization Error Compensation for MPEG2 to H.264 Transcoding,"During transcoding, the coefficients have to pass through another quantization step. This introduces re-quantization errors to the coefficients. The H.264 integer transform and quantization features are different from those of MPEG2 and other standards. Based on these features, an efficient algorithm that measures the re-quantization error in MPEG2 to H.264 transcoding is proposed.
Then this measured error is used to compensate for the quality loss in transcoding. The experimental results from four typical video test sequences show that the proposed compensation procedure improves the PSNR value by about 4.82 dB. The error calculation and compensation can be carried out in the transform domain, resulting in significant computational savings.",2006,0, 2160,"A toolkit for building secure, fault-tolerant virtual private networks","Dynamic coalition networks connect multiple administrative domains. The domains have a need to communicate, but have limited mutual trust. To establish communication services, these networks must be configured consistently with respect to global service requirements and security policies. The configuration must also be done in a way that respects the autonomy of the separate domains. Commercial network configuration tools do not provide sufficient functionality for this purpose. This document outlines a toolkit for solving these problems and reports on its deployment over a wide area network between Telcordia Technologies and BBN's TIC.",2003,0, 2161,Validation of 3D radiographical image distortion correction and calibration algorithms,Dynamic Roentgen Stereogrammetric Analysis (DRSA) provides a highly accurate research and clinical tool in human movement sciences. Essential for the respective DRSA measurements is proper alignment and calibration of the device. An improved method for determining the sufficiency of those preparations is introduced and discussed.,2009,0, 2162,"Static VAr compensators (SVC) required to solve the problem of delayed voltage recovery following faults in the power system of the Saudi Electricity Company, Western Region (SEC-WR)","Each power system is unique in its load pattern, growth trends and type, generation resources and network configuration. One of the main objectives of power system operation planners is the operation and control of the power system to provide the most secure and reliable power supply. The power system of the Saudi Electricity Company in the Western Region (SEC-WR) faced high load growth during the past few years. This load increase gave rise to very high loading of the transmission system elements, mainly power transformers and cables. The Western Region load is mainly composed of air-conditioning (AC) load during the high-load season. In case of faults, this type of load induces delayed voltage recovery following fault clearing on the transmission system. The sustained low voltage following transmission line faults could cause customer interruptions and possibly equipment damage. The integrity of the transmission system may also be affected, as may the transient stability of the system. This may also influence the stability of the generating units in the system. The existing dynamic model of the SEC-WR system has been described. The response of the model to actual faults is compared with actual records obtained from the dynamic system monitor (DSM) installed in several locations in the SEC-WR system. To avoid, as much as possible, brownouts and blackouts following system faults, SVC systems will be installed.
An automatic under-voltage load shedding scheme has been set up and optimized as an additional security and backup measure to cater for severe disturbances such as three-phase and single-phase faults.",2003,0, 2163,Fault Diagnosis for Induction Motors Using the Wavelet Ridge,"Early detection and diagnosis of incipient faults is desirable for online condition assessment, product quality assurance, and improved operational efficiency of induction motors. The characteristic frequency component (CFC) of broken rotor bars is very close to the power frequency component in the frequency domain but far smaller in amplitude, which brings about great difficulty in detecting broken bars in induction motors. A new method based on the wavelet ridge is presented in this paper. As a motor accelerates progressively and the CFC of its broken rotor bars gradually approaches the power frequency component during the motor's starting period, the wavelet ridge-based method is adopted to analyze this transient procedure and the CFC is extracted effectively. The influence of the power frequency can be eliminated, and the detection accuracy can be greatly improved. Furthermore, experimental results show this is truly a novel but excellent approach for the detection of broken rotor bars in squirrel-cage induction motors.",2007,0, 2164,Fault tolerant data flow modeling using the generic modeling environment,"Designing embedded software for safety-critical, real-time feedback control applications is a complex and error-prone task. Fault tolerance is an important aspect of safety. In general, fault tolerance is achieved by duplicating hardware components, a solution that is often more expensive than needed. In applications such as automotive electronics, a subset of the functionalities has to be guaranteed while others are not crucial to the safety of the operation of the vehicle. In this case, we must make sure that this subset is operational under the potential faults of the architecture. A model of computation called fault-tolerant data flow (FTDF) was recently introduced to describe, at the highest level of abstraction of the design, the fault tolerance requirements on the functionality of the system. Then, the problem of implementing the system efficiently on a platform consists of finding a mapping of the FTDF model on the components of the platform. A complete design flow for this kind of application requires a user-friendly graphical interface to capture the functionality of the system with the FTDF model, algorithms for choosing an architecture optimally, (possibly automatic) code generation for the parts of the system to be implemented in software, and verification tools. In this paper, we use the generic modeling environment (GME) developed at Vanderbilt University to design a graphical design capture system and to provide the infrastructure for automatic code generation. The design flow is embedded into the Metropolis environment developed at the University of California at Berkeley to provide the necessary verification and analysis framework.",2005,0, 2165,Fault-aware scheduling for Bag-of-Tasks applications on Desktop Grids,"Desktop grids have proved to be a suitable platform for the execution of bag-of-tasks applications but, being characterized by high resource volatility, require the availability of scheduling techniques able to effectively deal with resource failures and/or unplanned periods of unavailability.
In this paper we present a set of fault-aware scheduling policies that, rather than just tolerating faults as done by traditional fault-tolerant schedulers, exploit the information concerning resource availability to improve application performance. The performance of these strategies has been compared via simulation with that attained by traditional fault-tolerant schedulers. Our results, obtained by considering a set of realistic scenarios modeled after real desktop grids, show that our approach results in better application performance and resource utilization.",2006,0, 2166,An Analysis of Missed Structure Field Handling Bugs,"Despite the importance and prevalence of structures (or records) in programming, no study until now has deeply analyzed the bugs made in their usage. This paper makes a first step to fill that gap by systematically and deeply analyzing a subset of structure usage bugs. The subset, referred to as MSFH bugs, comprises errors of omission associated with structure fields when they are handled in a grouped context. We analyze the nature of these bugs by providing a taxonomy, root cause analysis, and barrier analysis. The analysis provided many new insights, which suggested new solutions for preventing and detecting MSFH bugs.",2008,0, 2167,A New Algorithm of Detecting and Correction Cycle Slips in Dual-Frequency GPS,"Detecting and reconstructing cycle slips are very important in GPS carrier phase time transfer (GPS CPTT). Many kinds of algorithms have been developed in the past several years, especially using the GIPSY OASIS software developed by JPL (Jet Propulsion Laboratory) for estimating ambiguity resolution and solving the cycle slip problem. NTSC has installed a geodetic-like dual-frequency Ashtech Z12T receiver for GPS CPTT investigation. The GPS carrier phase time transfer data have been obtained using this receiver at NTSC (National Time Service Center, Chinese Academy of Sciences). In this paper, a new two-step algorithm for detecting outliers and cycle slips in dual-frequency GPS is presented. First, a Kalman filter with a third-order polynomial model is used for larger cycle slips, such as those larger than 2 cycles. Second, small cycle slips (less than 1 cycle) are estimated and reconstructed using Daubechies wavelets. The details of the algorithm are given. The processing result using the algorithm is presented. It is applicable to carrier phase measurements with one sample per 30 seconds. The calculated results using GPS data at NTSC show that the algorithm is so efficient that it can accurately find small cycle slips of less than one cycle.",2006,0, 2168,How Good is Static Analysis at Finding Concurrency Bugs?,"Detecting bugs in concurrent software is challenging due to the many different thread interleavings. Dynamic analysis and testing solutions to bug detection are often costly as they need to provide coverage of the interleaving space in addition to traditional black box or white box coverage. An alternative to dynamic analysis detection of concurrency bugs is the use of static analysis. This paper examines the use of three static analysis tools (FindBugs, JLint and Chord) in order to assess each tool's ability to find concurrency bugs and to identify the percentage of spurious results produced. The empirical data presented is based on an experiment involving 12 concurrent Java programs.",2010,0, 2169,Diagnosing Failures in Wireless Networks Using Fault Signatures,"Detection and diagnosis of failures in wireless networks is of crucial importance.
It is also a very challenging task, given the myriad of problems that plague present-day wireless networks. A host of issues, such as software bugs, hardware failures, and environmental factors, can cause performance degradations in wireless networks. As part of this study, we propose a new approach for diagnosing performance degradations in wireless networks, based on the concept of ""fault signatures"". Our goal is to construct signatures for known faults in wireless networks and utilize these to identify particular faults. Via preliminary experiments, we show how these signatures can be generated and how they can help us in diagnosing network faults and distinguishing them from legitimate network events. Unlike most previous approaches, our scheme allows us to identify the root cause of the fault by capturing the state of the network parameters during the occurrence of the fault.",2010,0, 2170,"Techniques for unveiling faults during knitting production","Detection of faults during production of knitted fabric is crucial for improved quality and productivity. The yarn input tension is an important parameter that can be used for this purpose. This paper presents and discusses a computer-based monitoring system which was developed for the detection of faults and malfunctions during the production of weft knitted fabric, using the yarn input tension. In particular, it presents the method used to unveil the appearance of faults, based on two different approaches: comparison with a previously acquired waveform and a particular pattern matching technique, average magnitude cross-difference.",2004,0, 2171,A Developed Dynamic Environment Fault Injection Tool for Component Security Testing,"Developers using third party software components need to test them to satisfy quality requirements. In this paper, according to the characteristics of component security testing, we present a new tool called GCDEFI (generic component dynamic environment fault injection). GCDEFI adopts environment fault injection based on API interception technology. Faults can be injected by GCDEFI without the source code of the target applications under assessment, nor does the injection process involve interruption. To evaluate our tool, we conduct several environment fault injection testing experiments. The results show that our tool is stable and effective.",2009,0, 2172,Design and performance of a fault-tolerant real-time CORBA event service,"Developing distributed real-time and embedded (DRE) systems in which multiple quality-of-service (QoS) dimensions must be managed is an important and challenging problem. This paper makes three contributions to research on multi-dimensional QoS for DRE systems. First, it describes the design and implementation of a fault-tolerant real-time CORBA event service for the ACE ORB (TAO). Second, it describes our enhancements and extensions to features in TAO to integrate real-time and fault tolerance properties. Third, it presents an empirical evaluation of our approach. Our results show that, with some refinements, real-time and fault-tolerance features can be integrated effectively and efficiently in a CORBA event service.",2006,0, 2173,Topographic correction of Landsat ETM-images in Finnish Lapland,Different topographic correction methods for Landsat ETM images were compared. Corrected images were compared using land cover classification and estimation of forest inventory variables. The effect of the amount of vegetation on the correction coefficients was also studied.
Generally the best correction methods were the Ekstrand and C-correction methods when their coefficients were determined by stratifying the data according to the amount of vegetation.,2003,0, 2174,Fault Tolerant Differential Evolution Based Optimal Reactive Power Flow,"Differential evolution (DE) is a new branch of evolutionary algorithms (EAs) and has been successfully applied to solve optimal reactive power flow (ORPF) problems in power systems. Although DE can avoid premature convergence, a large population is needed and the application of DE is limited in large-scale power systems. Grid computing, as a prevalent paradigm for resource-intensive scientific applications, is expected to provide a computing platform with tremendous computational power to speed up the optimization process of DE. When implementing DE-based ORPF on a grid system, fault tolerance due to the unstable environment and variation of the grid is a significant issue to be considered. In this paper, a fault tolerant DE-based ORPF method is proposed. In this method, when the individuals are distributed to the grid for fitness evaluation, a proportion of the individuals, which return from the grid slowly or fail to return, are replaced with new individuals generated randomly according to some specific rules. This approach can deal with fault tolerance and also maintain the diversity of the DE population. The superior performance of the proposed approach is verified by numerical simulations on the ORPF problem of the IEEE 118-bus standard power system.",2006,0, 2175,Human optic nerve DTI with EPI geometric distortion correction,Diffusion tensor imaging (DTI) of the human optic nerve presents challenges from geometric distortion in echo planar imaging (EPI) caused by magnetic field inhomogeneity and partial volume artifacts caused by confounding signals from surrounding fat and cerebrospinal fluid (CSF). A protocol for human optic nerve DTI was developed with geometric distortion correction and suppression of fat and CSF signals. This protocol was modified from a conventional DTI protocol to acquire images and field maps covering the whole brain including contiguous slices of the optic nerves. The technique was applied to healthy volunteers and multiple sclerosis (MS) patients with and without history of unilateral optic neuritis (ON). DTI measures of the optic nerves before and after distortion correction were compared. Means and standard deviations of these measures from different cohorts were reported.,2009,0, 2176,Advanced PD noise suppression and its relevance for computer aided PD defect identification,"Digital partial discharge (PD) defect identification has become state of the art. But computer aided procedures based on pattern recognition principles are negatively affected by pulse-shaped disturbances. It is pointed out that sufficient noise suppression and noise resistivity are a must for automated diagnosis. A novel approach is presented, which improves on-site defect identification under noisy conditions. The benefits and limits of expert diagnosis compared to machine intelligent systems are discussed. Experimental results are presented, taking into account hardware and software noise suppression components. It is shown that PD defect identification at operating voltages, with correspondingly disturbed data, is still a challenge.
A solution method using an intelligent neural noise filter, together with noise-resistant diagnosis concepts, is discussed.",2001,0, 2177,Error Models for the Transport Stream Packet Channel in the DVB-H Link Layer,"Digital video broadcasting for handheld terminals (DVB-H) is a broadcast system designed for high-speed data transmission in highly dispersive mobile channel conditions. In this paper, methods of reproducing the statistical properties of measured DVB-H packet error traces are presented. Statistical and finite-state modeling approaches are found to be suitable for simulating the error performance of a DVB-H system operating in typical urban channel conditions. Evaluation of these models focuses on the accuracy of the models in replicating the high-order statistical properties of measured DVB-H transport stream error traces. Also, the effect of these error statistics on the DVB-H link layer frame error rate is considered.",2006,0, 2178,An asymptotically-exact expression for the variance of classification error for the discrete histogram rule,"Discrete classification is fundamental in GSP applications. In a previous publication, we provided analytical expressions for moments of the sampling distribution of the true error, as well as of resubstitution and leave-one-out error estimators, and their correlation with the true error, for the discrete histogram rule. When the number of samples or the total number of quantization levels is large, computation of these expressions becomes difficult, and approximations must be made. In this paper, we provide an approximate expression for the variance of the classification error, which is shown to be asymptotically exact as the total number of quantization levels increases to infinity, under a mild distributional assumption.",2008,0, 2179,Error detection in 2-D Discrete Wavelet lifting transforms,"The discrete wavelet transform is a powerful mathematical technique which is being adopted in different applications including physics, image processing, biomedical signal processing, and communication. Due to its pipelined structure and multirate processing requirements, a single numerical error in one stage can easily affect multiple outputs in the final result. In this paper, we propose a weighted checksum code based fault tolerance technique for the 2-D discrete wavelet transform. The technique encodes the input array at the 2-D discrete wavelet transform algorithm level, and algorithms are designed to operate on encoded data and produce encoded output data. The proposed encoding technique can fit perfectly into the lifting structure and existing general purpose 2-D discrete wavelet lifting VLSI architectures, without significant modification and overhead. We present the mathematical proof of this coding technique and show that it can detect errors in 2-D wavelet transforms. The hardware overhead using this technique is significantly lower than that of existing methods.",2009,0, 2180,Empirical studies to identify defect prevention opportunities using process simulation technologies,"Discusses results of a multi-year research project to optimize software and systems test quality throughout Motorola. Our approach is to apply process modeling and simulation technologies to evaluate process improvement, automation and defect prevention strategies. Part of our approach is to conduct experiments using process modeling and simulation technologies to evaluate and characterize technological changes meant to improve quality, reliability, efficiency or schedule.
We focus specifically on defect prevention, and on process and automation strategies that would result in preventing defects.",2001,0, 2181,Shape-space based negative selection algorithm and its application on power transformer fault diagnosis,"Dissolved gas analysis is an effective and important method for power transformer fault diagnosis. The negative selection algorithm has considerable advantages for those power transformer faults for which a great deal of training sample data is lacking. It can detect anomalies of infinitely many categories by using fewer detectors that cover a wide space. However, the existing negative selection algorithm also has some shortcomings. To address these shortcomings, this paper studies a shape-space based negative selection algorithm, which uses the mathematical shape-space model, maps the data of the detectors, the self space and the non-self space to an n-dimensional space, and uses affinity calculation to carry out matching. Experiments show that the method can make use of few fault data (i.e. antigens) to obtain a mature training set, so it is suitable for small-sample fault diagnosis, where failure data are difficult to obtain.",2007,0, 2182,An Initial Study of Customer-Reported GUI Defects,"End customers increasingly access the delivered functionality in software systems through a GUI. Unfortunately, limited data is currently available on how defects in these systems affect customers. This paper presents a study of customer-reported GUI defects from two different industrial software systems developed at ABB. This study includes data on defect impact, location, and resolution times. The results show that (1) 65% of the defects resulted in a loss of some functionality to the end customer, (2) the majority of the defects found (60%) were in the GUI, as opposed to the application itself, and (3) defects in the GUI took longer to fix, on average, than defects in the underlying application. The results are now being used to improve testing activities at ABB.",2009,0, 2183,Identification of the high SNR frequency band for bearing fault signature enhancement,"Enhancement of the vibration signals measured from faulty bearings is the major step towards successful fault detection and diagnosis. Bandpass filtering has been shown to be an effective de-noising approach. Though simple, this method requires prior knowledge of the dominant resonance frequencies excited by the fault impacts. In this paper an on-line resonance frequency estimation algorithm is presented. This approach exploits the effect of variable shaft rotational speed on the measured vibration and applies a reliable wavelet based instantaneous frequency calculation method to find the proper center frequency for the bandpass filter. The proposed algorithm is evaluated using the vibrations measured from a faulty bearing and further validated with the results obtained from the bearing impact analysis.",2007,0, 2184,Temporal Error Concealment Algorithm Using Multi-Side Boundary Matching Principle,"Error concealment is an effective approach to reduce the influence of errors that occur at the decoder. When an error occurs, it may degrade the reconstructed pictures and lead to undesirable visual distortion. In this paper, we propose a novel temporal error concealment method for H.264 video streaming. A block-matching algorithm based on the multi-side boundary matching (MSBM) principle is presented to refine the concealed video.
The proposed algorithm divides a corrupted macroblock (MB) into four 8×8 sub-blocks, and uses the information of the nearest rows and columns to reconstruct the MB. Experimental results show that the proposed algorithm yields more satisfying image quality than the related algorithms presented in previous studies.",2006,0, 2185,Interpolation with Sigmoid Functions for Spatial Error Concealment,"Error concealment methods for Intra frames in block-based video systems often reconstruct the missing macroblock by computing a weighted average of the boundary pixels of the neighboring blocks. However, the simple averaging of pixel values leads to blurring and degrades the picture quality severely. Directional interpolation, in which the interpolation is performed along the possible edge direction, has been proved to alleviate the problem in some cases. However, the directional approach still results in degraded performance when edges are not clear. First, we propose weighted interpolation with the sigmoid function, which gives more weight to closer values, so overall blurring is reduced considerably. Secondly, an adaptive approach is presented in which directional interpolation is chosen by a threshold computed from the distribution of local directions around the lost block. Experiments show an improvement in picture quality of about 0.5-2.0 dB compared to existing methods.",2007,0, 2186,PED: Proof-Guided Error Diagnosis by Triangulation of Program Error Causes,"Error diagnosis, which is the process of identifying the root causes of bugs in software, is a time-consuming process. In general, it is hard to automate error diagnosis due to the unavailability of a full ""golden"" specification of the system behavior in realistic software development. We propose a repair-based proof-guided error diagnosis (PED) framework that provides a first-line attack to find the root causes of the errors in programs by pin-pointing the possible error sites (buggy statements) and suggesting possible repair fixes. Our framework does not need a complete system specification. Instead, it automatically ""mines"" partial specifications of the intended program behavior from the proofs obtained by static program analysis for standard safety checkers. It uses these partial specifications along with the multiple error traces provided by a model checker to narrow down the possible error sites. It also exploits inherent correlations among the program statements. To capture common programming mistakes, it directs the search to those statements that could be buggy due to simple copy-paste operations or syntactic mistakes such as using ≤ instead of <. To further improve debugging, it prioritizes the repair solutions. We implemented and integrated the PED tool as a plug-in module to a software verification framework. We show the efficacy of such a framework on public benchmarks.",2008,0, 2187,Slice Your Bug: Debugging Error Detection Mechanisms Using Error Injection Slicing,"Error injection is a well-accepted method to evaluate hardware error detection mechanisms. An error detection mechanism is effective if it considerably reduces the amount of silently corrupted output of protected applications compared to unprotected applications. For good representativeness of the error injection, the error model used has to mirror real-world errors as accurately as possible. We introduce Error Injection Slicing (EIS), which emulates the symptoms of hardware errors.
Furthermore, EIS provides means to debug single injection runs using slicing. With EIS we make the following novel contributions: (1) easy usage through hardware independence, (2) a symptom-based, flexible and comprehensive error model (e.g., not only bit-flips), and (3) debugging support to improve the detection coverage of the evaluated error detection mechanism. We evaluated the usefulness of the injector by analyzing the AN-encoding compiler that applies an AN-code to applications to facilitate hardware error detection.",2010,0, 2188,Planner based error recovery testing,"Error recovery testing is an important part of software testing, especially for safety-critical systems. We show how an AI planning system and the concepts of mutation testing can be combined to generate error recovery tests for software. We identify a set of mutation operations on the representation that the planner uses when generating test cases. These mutations cause error recovery test cases to be generated. The paper applies these concepts to the testing of a large tape storage system.",2000,0, 2189,Loki: a state-driven fault injector for distributed systems,"Distributed applications can fail in subtle ways that depend on the state of multiple parts of a system. This complicates the validation of such systems via fault injection, since it suggests that faults should be injected based on the global state of the system. In Loki, fault injection is performed based on a partial view of the global state of a distributed system, i.e. faults injected in one node of the system can depend on the state of other nodes. Once faults are injected, a post-runtime analysis, using off-line clock synchronization, is used to place events and injections on a single global timeline and to determine whether the intended faults were properly injected. Finally, experiments containing successful fault injections are used to estimate the specified measures. In addition to briefly reviewing the concepts behind Loki and its organization, we detail Loki's user interface. In particular, we describe the graphical user interfaces for specifying state machines and faults, for executing a campaign and for verifying whether the faults were properly injected.",2000,0, 2190,C-ERROR simulator for development for sensor and location aware sensing applications,"Distributed wireless sensor applications are useful for visualizing spatially and geographically related data such as location, neighborhood and weather, and for measuring specific changes in the environment. The desire to augment these interfaces with additional specifications needed for distributed applications, such as power-aware, fault-tolerant and processor-agnostic deployment requirements, has led us to create a custom distributed Network Embedded Test-Bed that locally aggregates the measured signals from individual sensors and sends them to a central coordinator for combined processing. We envision publishing and querying real-time data (e.g. from sensors) over such distributed sensor farm applications, which are deployed wirelessly and form a large sensor network. Existing solutions, although useful for writing the simple applications mentioned above, have several drawbacks in achieving this vision. First, publishing even a single stream of data as a useful service is a non-trivial task. Much useful data is not yet being stored because managing a sensor farm involves many complexities that make such systems unreliable in terms of polling time and communication costs.
Second, existing applications are mutually incompatible, are processor-centric and need many ports, which may introduce unreliability. Third, communication costs do not scale to handle a sensor farm application, and there is no easy way to extend such a Network Embedded Test-Bed. The Network Embedded Test-Bed project aims to address these challenges: we model the needs of existing applications in a cross-layer sensor network simulator called C-ERROR (Cross Layer Reusable Resource Optimized Routing), which allows different clustering algorithms to be integrated and their performance to be measured at each layer of the stack. The goal is a platform-independent sensor OS and a scheduler that allows creating sensing tasks with real-time constraints.",2008,0, 2191,On Scan Chain Diagnosis for Intermittent Faults,"Diagnosis is increasingly important, not only for individual analysis of failing ICs, but also for high-volume test response analysis, which enables yield and test improvement. Scan chain defects constitute a significant fraction of the overall digital defect universe, and hence it is well justified that scan chain diagnosis has received increasing research attention in recent years. In this paper, we address the problem of scan chain diagnosis for intermittent faults. We show that the conventional scan chain test pattern is likely to miss an intermittent fault, or inaccurately diagnose it. We propose an improved scan chain test pattern which we show to be effective. Subsequently, we demonstrate that the conventional bound calculation algorithm is likely to produce wrong results in the case of an intermittent fault. We propose a new lower bound calculation method which does generate correct and tight bounds, even for an intermittence probability as low as 10%.",2009,0, 2192,Automatic generator health assessment system that embedded with advanced fault diagnosis and expert system,"Diesel engine generators that use fuel to produce electricity are common in many countries that suffer from an inconsistent electricity supply. To lower the cost, manufacturers tend to purchase second-hand generators to ensure a constant supply of electricity during production. However, manufacturers do not have the proper skill to assess the health of second-hand generators. Because of this, coupled with poor maintenance due to insufficient technical staff, the efficiency of generators is intolerably low and they face frequent breakdowns. Such deficiencies not only waste expensive fuel, but also reduce productivity due to sudden losses of electricity supply. Hence, there is an urgent need for an automatic generator health assessment system so that the efficiency of generators can be increased and the chance of fatal breakdown can be minimized. This paper presents a sophisticated generator health assessment system that can automatically diagnose the current health status of an inspected generator even without a baseline for comparison purposes. Initially, we focused on diagnosing the faults generated from the combustion process. Our system uses low-cost encoders to assess the health of engines; with conventional methods, such capability could only be achieved using expensive pressure sensors. Moreover, an expert system was built to diagnose engine faults automatically. With the help of this automatic and low-cost system, the operators will be alerted if any anomaly is occurring in the engine.
Hence, our system makes the monitored generator more reliable in operation and saves the expensive fuel wasted by an improper combustion process.",2010,0, 2193,An Adaptive-Rate Error Correction Scheme for NAND Flash Memory,"ECC has been widely used to enhance flash memory endurance and reliability. In this work, we propose an adaptive-rate ECC scheme with BCH codes that is implemented on the flash memory controller. With this scheme, flash memory can trade storage space for higher error correction capability to keep it usable even when there is a high noise level.",2009,0, 2194,Fault location using radio frequency emissions,Electromagnetic radiation in the form of atmospheric radio waves (or sferics) originates from power system apparatus when transient fault currents are present. This paper discusses research conducted into the reception of these events using purpose-built VLF and VHF monitoring equipment. Experience in the use of the monitoring equipment is described and typical results presented. Results of sferic propagation tests are given and the current understanding of the sferic generation process is presented. The paper concludes with a discussion of the potential of this technique for fault location.,2001,0, 2195,Elimination of DWDM transponders over a deployed IP over DWDM network using novel DWDM XFP transceivers with integrated G.709 and Forward Error Correction,"Elimination of DWDM transponders in an IP over DWDM network is demonstrated, for the first time, using DWDM XFP MSA transceivers with integrated G.709 framing for OAM management and Forward Error Correction.",2009,0, 2196,A new mitigation approach for soft errors in embedded processors,"Embedded processors, such as the processor macros inside modern FPGAs, are becoming widely used in many applications. As soon as these devices are deployed in radioactive environments, designers need hardening solutions to mitigate radiation-induced errors. When low-cost applications have to be developed, the traditional hardware redundancy-based approaches exploiting m-way replication and voting are no longer viable because they are too expensive, and new mitigation techniques have to be developed. In this paper we present a new approach, based on processor duplication, checkpointing and rollback, to detect and correct soft errors affecting the memory elements of embedded processors. Preliminary fault injection results obtained on a PowerPC-based system confirmed the effectiveness of the approach.",2007,0,5777 2197,A New Approach for the Construction of Fault Trees from System Simulink,"Fault tree analysis is a common method for the reliability, safety, and availability assessment of digital systems. Since the 70s, a number of construction and analysis methods have been introduced in the literature. The main difference between these methods is the starting model from which the tree is constructed. This paper presents a novel methodology for the construction of a fault tree from a system Simulink model, and introduces a fault tree analysis approach in the Simulink environment. The analysis method evaluates the static fault tree of a system. The method is introduced and explained in detail, and its correctness and completeness are validated by using a number of examples. The limitations of the proposed methodology are related to the limitations of the MATLAB-Simulink toolbox.
Important advantages of the method are also stated.",2009,0, 2198,The research of information security risk assessment method based on fault tree,"Fault tree technology has been broadly used in industrial systems but seldom used in the field of risk assessment for information systems. In this study, by consulting the BS7799 standard, fault tree technology is introduced to evaluate the risks of an information system. Based on the integrity, usability and confidentiality of the information system, a fault tree model for the information system is established. This model can quantitatively calculate the risk faced by the system; a tree framework structure, which can be easily understood and programmed, was adopted to analyze faults; and the importance of every bottom fault was carefully analyzed, which offers a new model and an effective implementation for risk analysis and the search for fault sources. In this research, a concrete example was used to demonstrate the method and to validate the algorithms.",2010,0, 2199,Repairable fault tree for the automatic evaluation of repair policies,"Fault trees are a well-known means for the evaluation of the dependability of complex systems. Many extensions have been proposed to the original formalism in order to enhance the advantages of fault tree analysis for the design and assessment of systems. In this paper we propose an extension, repairable fault trees, which allows the designer to evaluate the effects of different repair policies on a repairable system: this extended formalism has been integrated in a multi-formalism multi-solution framework, and it is supported by a solution technique which transparently exploits generalized stochastic Petri nets (GSPN) for modelling the repair process. The modelling technique and the solution process are illustrated through an example.",2004,0, 2200,Fault-based side-channel cryptanalysis tolerant Rijndael symmetric block cipher architecture,"Fault-based side channel cryptanalysis is very effective against symmetric and asymmetric encryption algorithms. Although straightforward hardware and time redundancy based Concurrent Error Detection (CED) architectures can be used to thwart such attacks, they entail significant overhead (either area or performance). In this paper we investigate systematic approaches to low-cost, low-latency CED for the Rijndael symmetric encryption algorithm. These approaches exploit the inverse relationship that exists between Rijndael encryption and decryption at various levels and develop CED architectures that explore the trade-off between area overhead, performance penalty and error detection latency. The proposed techniques have been validated on FPGA implementations.",2001,0, 2201,Faulted circuit indicators and system reliability,"Faulted circuit indicators (FCIs) can be a useful tool for improving power system reliability. While FCIs alone may not prevent outages or other problems with system reliability, FCI application can help identify problem areas of the electric distribution system, as well as reduce crew patrol time in locating faulted cables, thus reducing outage duration. The use of FCIs can be linked to outage statistics such as the system average interruption duration index (SAIDI) and customer average interruption duration index (CAIDI), to track reduction in outage duration. By spending less time identifying and locating faulted cables, crew productivity can also be improved through the use of FCIs, allowing more time to be spent on productive system operation and improvement.
In addition to real cost savings and increased productivity for utilities, reduced outage duration through FCI use can lead to increased customer satisfaction. For commercial and industrial customers, who can incur significant costs in lost production due to power interruptions, minimizing outage duration can lead to a reduction in customer costs resulting from outages. As innovations in rate design lead to more performance-based ratemaking, FCIs can help utilities avoid paying penalties by keeping certain service quality measurements within acceptable levels. Technology advancements in recent years have led to more reliable fault indicators, along with new and unique features that help ensure the most efficient application of faulted circuit indicators for improving system reliability.",2000,0, 2202,An empirical study of the effects of test-suite reduction on fault localization,"Fault-localization techniques that utilize information about all test cases in a test suite have been presented. These techniques use various approaches to identify the likely faulty part(s) of a program, based on information about the execution of the program with the test suite. Researchers have begun to investigate the impact that the composition of the test suite has on the effectiveness of these fault-localization techniques. In this paper, we present the first experiment on one aspect of test-suite composition: test-suite reduction. Our experiment studies the impact of test-suite reduction on the effectiveness of fault-localization techniques. In our experiment, we apply 10 test-suite reduction strategies to test suites for eight subject programs. We then measure the differences between the effectiveness of four existing fault-localization techniques on the unreduced and reduced test suites. We also measure the reduction in test-suite size achieved by the 10 test-suite reduction strategies. Our experiment shows that fault-localization effectiveness varies depending on the test-suite reduction strategy used, and it demonstrates the trade-offs between test-suite reduction and fault-localization effectiveness.",2008,0, 2203,On the Longest Fault-Free Paths in Hypercubes with More Faulty Nodes,"Faults in a network may take various forms such as hardware/software errors, node/link faults, etc. In this paper, node-faults are addressed. Let F be a faulty set of f ≤ 2n - 6 conditional node-faults in an injured n-cube Qn such that every node of Qn still has at least two fault-free neighbors. Then we show that Qn - F contains a path of length at least 2^n - 2f - 1 (respectively, 2^n - 2f - 2) between any two nodes of odd (respectively, even) distance. Since an n-cube is a bipartite graph, such a fault-free path turns out to be the longest one in the case when all faulty nodes belong to the same partite set.",2008,0, 2204,Concepts and methods in fault-tolerant control,"Faults in automated processes will often cause undesired reactions and shut-down of a controlled plant, and the consequences could be damage to technical parts of the plant, to personnel or the environment. Fault-tolerant control combines diagnosis with control methods to handle faults in an intelligent way. The aim is to prevent simple faults from developing into serious failures and hence to increase plant availability and reduce the risk of safety hazards. Fault-tolerant control merges several disciplines into a common framework to achieve these goals.
The desired features are obtained through online fault diagnosis, automatic condition assessment and calculation of appropriate remedial actions to avoid certain consequences of a fault. The envelope of possible remedial actions is very wide. Sometimes, simple re-tuning can suffice. In other cases, accommodation of the fault could be achieved by replacing a measurement from a faulty sensor by an estimate. In yet other situations, complex reconfiguration or online controller redesign is required. This paper gives an overview of recent tools to analyze and explore the structure and other fundamental properties of an automated system such that any inherent redundancy in the controlled process can be fully utilized to maintain availability, even though faults may occur.",2001,0, 2205,Research of Environment-Oriented Fault Representation for Software-Intensive Equipment,"Faults triggered by the variability and inconsistency of the running environment emerge with the highest frequency, and are difficult to recognize and diagnose in software-intensive equipment. By systematically analyzing and summarizing the environmental factors that can prevent the equipment from working, a fresh fault classification is put forward in this paper. Further, a frame-based formal representation is given in detail for each type of environmental fault. A diagnosis system based on this classification is successful and effective in resolving most faults in practice.",2010,0, 2206,Comparative performance evaluation of software-based fault-tolerant routing algorithms in adaptively-routed tori,"Fault-tolerance and network routing have been among the most widely studied topics in the research of parallel processing and computer networking. A fault-tolerant routing algorithm should guarantee the delivery of messages in the presence of faulty components. In this paper, we present a comparative performance study of nine prominent fault-tolerant routing algorithms in 2D wormhole-switched tori. These networks carry the software-based routing scheme, which has been suggested as an instance of a fault-tolerant method widely used in the literature to achieve high adaptivity and support inter-processor communication in parallel computer networks, due to its ability to preserve both communication performance and fault-tolerance demands in such systems. The performance measures studied are the throughput, average message latency, power, and average usage of virtual channels per node. Results obtained through simulation suggest two classes of the presented routing schemes as high-performance candidates in most faulty networks.",2008,0, 2207,Fault-tolerance and noise modelling in nanoscale circuit design,"Fault-tolerance in integrated circuit design has become an alarming issue for circuit designers and semiconductor industries wishing to downscale transistor dimensions to their utmost. The motivation to conduct research on fault-tolerant design is backed by the observation that noise which was ineffective in large-dimension circuits is expected to cause significantly degraded performance in the low-scaled transistor operation of future CMOS technology models. This paper is intended to give an overview of all the major fault-tolerance techniques and noise models proposed so far.
Summarizing and analysing all this work, we have divided the literature into three categories and discussed their applicability in terms of proposed circuit design modifications, finding the output error probability, or methods proposed to achieve highly accurate simulation results.",2010,0, 2208,Three types of fault coverage in multi-state systems,"Fault-tolerance is an essential architectural attribute for achieving high reliability in many critical applications of digital systems. Automatic fault and error handling mechanisms play a crucial role in implementing fault tolerance because an uncovered (undetected) fault may lead to a system or a subsystem failure even when adequate redundancy exists. Examples of this effect can be found in computing systems, electrical power distribution networks, pipelines carrying dangerous materials, etc. Because an uncovered fault may lead to overall system failure, an excessive level of redundancy may even reduce the system reliability. We consider three types of coverage models: 1. element level coverage, where the fault coverage probability of an element does not depend on the states of other elements; 2. multi-fault coverage, where the effectiveness of recovery mechanisms depends on the coexistence of multiple faults in a group of elements that collectively participate in detecting and recovering the faults in that group; 3. performance dependent coverage, where the effectiveness of recovery mechanisms in a group depends on the entire performance level of this group. The paper presents a modification of the generalized reliability block diagram (RBD) method for evaluating reliability and performance indices of complex multi-state series-parallel systems with all these types of fault coverage. The suggested method, based on a universal generating function technique, allows the system performance distribution to be obtained using a straightforward recursive procedure.",2009,0, 2209,An Improved Identity-based Fault-tolerant Conference Key Distribution Scheme,"Fault-tolerance is an important property of conference key distribution protocols. Recently, Yang et al. proposed an identity-based fault-tolerant conference key distribution scheme which is much different from the traditional ones. But their scheme cannot withstand passive attack and modification attack; moreover, it also cannot provide forward security. An improved identity-based fault-tolerant conference key distribution scheme is proposed in this paper. Compared with Yang et al.'s scheme, Tzeng's (2002) scheme and Xun's scheme, the new scheme exhibits the highest security, and its communication cost lies between those of Yang et al.'s scheme and Tzeng's (2002) scheme. As security is the first-line property of a conference key establishment protocol, our scheme is the most practical one overall.",2007,0, 2210,Petri-nets Based Availability Model of Fault-Tolerant Server System,"The fault-tolerant capacity of a key server affects the integrity and restorability of data in a network system, so cluster techniques have been widely adopted to increase system availability. Although the cluster technique achieves high system availability, it makes the system very complex and availability analysis very difficult. By analyzing the redundant server system's structure and work process, the stochastic Petri net method is adopted to model processing cell availability, data memory disc availability and server availability.
By integrating the above models, the cluster system availability model can be obtained.",2008,0, 2211,Soft Defects: Challenge and Chance for Failure Analysis,"Failure analysis on advanced logic and mixed-signal ICs increasingly has to deal with so-called 'soft defects'. In this paper, an analysis flow especially for parameter-dependent scan fails is presented. For the two major localization techniques, namely soft defect localization (SDL) and internal signal measurement, enhanced activation and localization procedures using test systems are proposed.",2007,0, 2212,A Unified Framework for Defect Data Analysis Using the MBR Technique,"Failures of mission-critical software systems can have catastrophic consequences and, hence, there is a strong need for scientifically rigorous methods for assuring high system reliability. To reduce the V&V cost for achieving high confidence levels, quantitatively based software defect prediction techniques can be used to effectively estimate defects from prior data. Better prediction models facilitate better project planning and risk/cost estimation. Memory based reasoning (MBR) is one such classifier that quantitatively solves new cases by reusing knowledge gained from past experiences. However, it can have different configurations by varying its input parameters, giving potentially different predictions. To overcome this problem, we develop a framework that derives the optimal configuration of an MBR classifier for software defect data, by logical variation of its configuration parameters. We observe that this adaptive MBR technique provides a flexible and effective environment for accurate prediction of mission-critical software defect data",2006,0, 2213,UPnP based Service Discovery and Service Violation Handling for Distributed Fault Management in WBEM-based Network Management,"Fast fault recovery depends on immediate detection of the fault that has occurred. In some cases, it is rather difficult to determine the cause of a fault in order to provide fast fault restoration/isolation. This paper explores in detail the aspects of service level fault detection. We present the design and implementation of a service level fault detection mechanism with SLA in WBEM-based inter-AS traffic engineering (TE). We changed the existing service location protocol (SLP) based service discovery mechanism to a universal plug and play (UPnP) based one, and extended the existing providers with a new UPnP Provider, which had not been supported by OpenPegasus WBEM. We also designed DMTF MOF (managed object format) for service discovery for SLA management in order to handle service violations. The UPnP event notification mechanism, service violation handling, and C++ based provider implementation on OpenPegasus WBEM for inter-AS TE are explained in detail.",2007,0, 2214,Protecting RSA against Fault Attacks: The Embedding Method,"Fault attacks constitute a major threat toward cryptographic products supporting RSA-based technologies. Most often, the public exponent is unknown, turning resistance to fault attacks into an intricate problem. Over the past few years, several techniques for secure implementations have been published, but none of them is fully satisfactory. We propose a completely different approach by embedding the public exponent into [the description of] the private key.
As a result, we obtain a very efficient countermeasure with 100% fault detection.",2009,0, 2215,Research of Main Circuit on the Series Resonance Fault Current Limiter,"A fault current limiter can limit the short-circuit current of a bulk load centre. The series resonant fault current limiter (SRFCL) has more advantages than other types when applied at EHV levels. It increases system stability and adapts flexibly to the system operation mode. The bypass breaker is the key technology of the SRFCL. A multi-bypass breaker design for protecting the capacitor is introduced in this paper. Based on the main circuit, the resonance parameters are analyzed for a one-machine infinite-bus system. The results of simulation experiments show that the parameters are optimized and the SRFCL responds well to the fault.",2006,0, 2216,Guidelines for 2D/3D FE transient modeling of inductive saturable-core Fault Current Limiters,"Fault Current Limiters (FCLs) are expected to play an important role in protection of future power systems. FCLs can be classified in three groups: passive, solid-state and hybrid FCLs. Passive FCLs have the merit of reacting inherently to a fault, requiring no fault detection and triggering circuit. Inductive FCLs based on core saturation belong to this group. Analytical models, used for design of inductive FCLs, are not accurate enough; the BH curve cannot be expressed as an explicit function. Numerical models offer better approximations, but they often do not include effects such as leakage and fringing fluxes, which can have considerable influence on the result. Verification of such models is of utmost importance. Finite element modeling (FEM) tools offer the possibility to model any inductive FCL topology, while all the effects, e.g. non-linear BH curve, fringing effects etc., are taken into account. However, the modeling of these devices in FEM software is difficult. This paper introduces the guidelines for development of 2D/3D transient FE models of inductive FCLs in Ansys. The guidelines are developed with respect to the single-core inductive FCL topology. The model can be applied to any inductive FCL and presents a valuable tool for design, verification and optimization of these devices. Signal waveforms, obtained through the transient analysis, provide a precise depiction of FCL operation during both normal and fault regimes. The model is validated by means of a lab experiment. Simulation and experimental results show very good matching. In addition, modeling results are used to prove that the single-core FCL topology operates properly during both nominal and fault regimes.",2009,0, 2217,Fault Data Synchronization Using Wavelet for Improving Two-terminal Fault Location Algorithm,"Fault data synchronization for improving the two-terminal fault location algorithm is proposed. Conceptually, employing the wavelet technique, the instantaneous currents at both ends, sampled by Digital Fault Recorders (DFR), are used to determine a transient reference point for synchronizing the DFR data before applying the two-terminal algorithm. In the study, both symmetrical and unsymmetrical faults have been simulated for a simplified 230 kV transmission system. Case studies applying the two-terminal fault location algorithm to the simulated unsynchronized sampled data, with an assumed phase-shift error varying from -180 to 180 degrees, and to the corresponding synchronized data using the proposed wavelet technique with a fixed threshold, are conducted for performance verification.
Additionally, the effect of fault resistance in the range of 0.01-50 ohm has been investigated. In all test cases, the proposed synchronization method yields satisfactory results such that the error of fault location is independent of the phase-shift error.",2010,0, 2218,Voltage Sensor Fault Detection and Reconfiguration for a Doubly Fed Induction Generator,"Fault detection and reconfiguration of the control loops of a doubly-fed induction generator are described in this paper. The stator voltage is measured as well as observed. During fault-free operation, the measured signal is used for the field-oriented control. In case of a voltage sensor fault, the faulty measurement is identified and the control is reconfigured using the observer output. Operation without measuring the stator voltage is possible. Laboratory measurements prove this concept.",2007,0, 2219,Motor fault detection using Elman neural network with genetic algorithm-aided training,"Fault detection methods are crucial in acquiring safe and reliable operation in motor drive systems. Remarkable maintenance costs can also be saved by applying advanced detection techniques to find potential failures. However, conventional motor fault detection approaches often have to work with explicit motor models. In addition, most of them are deterministic or non-adaptive, and therefore cannot be used in time-varying cases. We propose an Elman neural network-based motor fault detection scheme to overcome these difficulties. The Elman neural network has a unique time series prediction capability because of its memory nodes as well as local recurrent connections. Motor faults are detected from changes in the expectation of the feature signal prediction error. A genetic algorithm (GA)-aided training strategy for the Elman neural network is further introduced to improve the approximation accuracy and achieve better detection performance. Computer simulations of a practical automobile transmission gear with an artificial fault are carried out to verify the effectiveness of our method. Encouraging fault detection results have been obtained without any prior information about the gear model",2000,0, 2220,Embedded fault diagnosis for digital logic exploiting regularity,"Fault diagnosis for digital integrated circuits has become a matter of intense research in recent years. The reason is that only a fast feedback loop between IC production and testing can facilitate high production yield in nanometer IC technologies. Looking at emerging technologies of IC built-in self repair, fault diagnosis in the field is a pre-condition for self-repair. However, the availability of reference data for fault diagnosis then becomes a crucial bottleneck because of limited memory resources. The paper tries to identify possible solutions to the problem, using specific properties of digital signal processing circuits.",2007,0, 2221,Fault location for a series compensated transmission line based on wavelet transform and an adaptive neuro-fuzzy inference system,"Fault diagnosis is a major area of investigation for power system and intelligent system applications. This paper proposes an efficient and practical algorithm based on using wavelet MRA coefficients for fault detection and classification, as well as accurate fault location. A three-phase transmission line with series compensation is simulated using MATLAB software. The line currents at both ends are processed using an online wavelet transform algorithm to obtain wavelet MRA for fault recognition.
Directions and magnitudes of spikes in the wavelet coefficients are used for fault detection and classification. After identifying the faulted section, the summation of the sixth-level MRA coefficients of the currents is fed to an adaptive neuro-fuzzy inference system (ANFIS) to obtain an accurate fault location. The proposed scheme is able to detect all types of internal faults at different locations either before or after the series capacitor, at different inception angles, and at different fault resistances. It can also detect the faulty phase(s) and can differentiate between internal and external faults. The simulation results show that the proposed method has the characteristic of a simple and clear recognition process. We conclude that the algorithm is ready for series compensated transmission lines.",2010,0, 2222,An Enhanced Fault-Tolerant Routing Algorithm for Mesh Network-on-Chip,"Fault-tolerant routing is the ability to survive the failure of individual components and usually uses several virtual channels (VCs) to overcome faulty nodes or links. A well-known wormhole-switched routing algorithm for the 2-D mesh interconnection network, called f-cube3, uses three virtual channels to pass faulty regions, while only one virtual channel is used when a message does not encounter any fault. One of the integral stages of designing networks-on-chip (NoCs) is the development of an efficient communication system in order to provide low-latency networks. We propose a new fault-tolerant routing algorithm based on f-cube3 as a solution to reduce the delay of network packets, which uses fewer VCs than f-cube3. Moreover, in this method we have improved the use of VCs per physical link by reducing the required channels to two. Furthermore, simulations of both f-cube3 and our algorithm under the same conditions are presented.",2009,0, 2223,QAFT: A QoS-Aware Fault-Tolerant Scheduling Algorithm for Real-Time Tasks in Heterogeneous Systems,"Fault-tolerant scheduling, an effective means of improving system reliability, plays a significant role in some mission-critical applications. Although extensive fault-tolerant scheduling algorithms have been proposed for real-time tasks in distributed systems, the quality of service (QoS) requirements demanded by mission-critical tasks have not been taken into consideration. This paper proposes a QoS-aware fault-tolerant scheduling algorithm named QAFT that can tolerate one processor's permanent failure at a time for real-time tasks with QoS needs in heterogeneous systems. QAFT strives to advance the start time of primary copies and delay the start time of backup copies so that backup copies adopt a passive execution scheme, or to decrease the simultaneous execution time of the primary and backup copies of a task as much as possible, in order to improve resource utilization. In addition, backup-copy overlapping technology is employed. Simulation experiments show that QAFT is clearly superior to NOQAFT and DYFARS, with higher scheduling quality.",2010,0, 2224,Analysis of a multi-layer fault-tolerant COTS architecture for deep space missions,"Fault-tolerant systems are traditionally divided into fault containment regions, and custom logic is added to ensure the effects of a fault within a containment region would not propagate to the other regions. This technique may not be applicable in a commercial-off-the-shelf (COTS) based system.
While COTS technology is attractive due to its low cost, COTS components are not developed with the same level of rigorous fault tolerance in mind. Furthermore, COTS suppliers usually have no interest in adding overhead or sacrificing performance to implement fault tolerance for a narrow market of high-reliability applications. To overcome this shortcoming, the Jet Propulsion Laboratory (JPL) has developed a multi-layer fault protection methodology to achieve high reliability in COTS-based avionics systems. This methodology has been applied to the bus architecture that uses the COTS bus interface standards IEEE 1394 and I2C. The paper first gives an overview of the multi-layer fault-protection design methodology for COTS-based mission-critical systems. Then the effectiveness of the methodology is analyzed in terms of coverage and cost. The results are compared to a traditional custom-designed system",2000,0, 2225,Deterministic high-speed simulation of complex systems including fault-injection,"FAUmachine is a virtual machine for the highly detailed simulation of standard PC hardware together with an environment. FAUmachine comes with fault injection capabilities and an automatic experiment controller facility. Due to its use of just-in-time compiler techniques, it offers good performance. This tool description introduces the new feature of FAUmachine to simulate systems deterministically. This will enable developers to design and test complex systems for fault tolerance by running identically reproducible automated tests in reasonable time and thus even allow testing for real-time constraints.",2009,0, 2226,Real-time recognition of feedback error-related potentials during a time-estimation task,"Feedback error-related potentials are a promising brain process in the field of rehabilitation since they are related to human learning. Due to the fact that many therapeutic strategies rely on the presentation of feedback stimuli, potentials generated by these stimuli could be used to improve the patient's progress. In this paper we propose a method that can identify, in real time, feedback evoked potentials in a time-estimation task. We have tested our system with five participants on two different days separated by three weeks, achieving a mean single-trial detection performance of 71.62% for real-time recognition, and 78.08% in offline classification. Additionally, an analysis of the stability of the signal between the two days is performed, suggesting that the feedback responses are stable enough to be used without needing to retrain the user.",2010,0, 2227,Feedback-Based Error Tracking for AVS-M,"AVS-M is a video encoding standard for mobile video in wireless environments, developed by the Audio and Video Coding Standard Working Group of China. In order to cope with burst errors in error-prone wireless networks, we present in this paper an error tracking mechanism that utilizes a feedback channel to determine which areas are contaminated by prediction from preceding frames. Through this positioning, we can terminate the error propagation effects by INTRA refreshing the affected areas.
Simulations demonstrate that the proposed algorithm can effectively stop the quality degradation without causing much bit-rate increase.",2009,0, 2228,Study on Fault Diagnosis of Gear with Spall Using Ferrography and Vibration Analysis,"The ferrographic technique and vibration analysis are the two main identification techniques for condition monitoring and fault diagnosis in machinery. Wear mechanisms and changes are investigated for a gear with a spall fault using the ferrographic analysis technique. Furthermore, the features of the time and frequency spectra are investigated using vibration analysis technology in this study. Analysis results show that the quantity and size of wear debris and four eigenvalues of the time spectrum gradually become greater in the order of the gear with minor spall, the gear with moderate spall and the gear with severe spall. The frequency spectra of a spalling gear exhibit sidebands with the meshing frequency of the output terminal as the centre frequency and the rotating frequencies of the output terminal as the modulating frequency. The investigation of the wear and vibration characteristics of gears with simulated spall faults provides an auxiliary basis for diagnosing fatigue spall failure in gears in actual use.",2009,0, 2229,Calculation and error analysis of electromagnetic torque for a wheel permanent magnet motor,"Finite element analysis (FEA) is applied to analyze square-wave permanent-magnet motors with exterior rotor for electric vehicle application. The electromagnetic torque and the torque/speed characteristics of the machine are calculated, tested and analyzed systematically. The techniques to improve computation accuracy without increasing elements are introduced in the paper.",2003,0,2230 2230,Calculation and Error Analysis of Electromagnetic Torque for a Wheel Permanent-Magnet Motor,"Finite-element analysis (FEA) is applied to analyze square-wave permanent-magnet (PM) motors with exterior rotor for electric vehicle application. The electromagnetic torque and the torque/speed characteristics of the machine are calculated, tested, and analyzed systematically. The techniques to improve computation accuracy without increasing elements are introduced in this paper",2006,0, 2231,AC distributed power supplies used for solid state short-circuit fault current limiter,"First of all, the principle of the high-frequency AC current bus distributed power system, which has the characteristics of good load expansibility and high-voltage isolation, is introduced. Then, based on this principle, a novel topology of a DC power supply with multiple isolated outputs is presented and its working stages are analyzed in detail. A prototype has been successfully used in a 10 kV solid-state fault current limiter.",2007,0, 2232,Design of the Fault Diagnosis System of Generator Rotor Winding Inter-Turn Short Circuit Based on Virtual Instrument,"Firstly, the circulating current characteristics of the stator winding's parallel-connected branches, caused by the rotor winding inter-turn short-circuit fault, are analyzed. The results show that the second-harmonic circulating current increases when the fault occurs. Then, LabVIEW software and the PCI-6251 data acquisition card are selected, and a fault diagnosis system based on virtual instrumentation is designed; its functions comprise signal acquisition, signal analysis, database management, parameter query, and fault diagnosis.
Finally, the fault diagnosis system is successfully applied to the SDF-9 generator.",2007,0, 2233,Fault tolerant scheduling for fixed-priority tasks with preemption threshold,"Fixed-priority with preemption threshold (FPPT) is an important form of real-time scheduling algorithm, which fills the gap between fixed-priority preemptive (FPP) and fixed-priority non-preemptive (FPNP). However, this scheduling scheme together with the requirements of fault tolerance makes the prediction of a real-time system's behavior more difficult than with traditional scheduling schemes. The major contribution of this work is twofold. First, we present an appropriate schedulability analysis, based on response time analysis, for supporting fault-tolerant FPPT scheduling in hard real-time systems. Error-recovery techniques are considered as the means of carrying out fault tolerance. Second, we propose an optimal priority assignment algorithm which can be used, together with the schedulability analysis, to improve system fault resilience.",2005,0, 2234,Attention-based adaptive intra refresh for error-prone video transmission,"Error-resilient video coding plays an important role in video transmission over error-prone networks. Intra coding, which can suppress error propagation very well in a simple way, is often employed in error-resilient video coding in some special ways and named as intra refresh. However, most of the existing intra refresh methods do not make good use of the properties of subjective human vision. In this article a novel intra refresh method is presented, exploiting the fact that people usually pay much more attention to a certain particular area in a video frame (attention area) than to other areas (nonattention area) in the same frame. In this method, blocks in the attention area of each video frame have a higher priority to be refreshed as intra mode than the blocks outside the attention area. Meanwhile, an attention-based end-to-end rate distortion model with reference restriction for both inter coding and intra coding is developed to obtain a better subjective quality. Experimental results in H.264/AVC-based video coding demonstrate that, when compared at the same bit rate and the same packet loss condition, the proposed method can provide much better subjective quality than existing intra refresh methods",2007,0, 2235,System dynamics approach for error and change management in concurrent design and construction,"Errors and changes, particularly in concurrent design and construction, require a careful approach to their management, since they can generate unanticipated impacts on construction performance, which is often related to softer aspects of management (e.g., fatigue). Focusing on this issue, this paper explores the use of system dynamics in identifying multiple feedback processes and softer aspects of managing errors and changes. Applying the developed model to the design-build highway project in Massachusetts, this paper concludes that the system dynamics approach can be an effective tool for understanding complex and dynamic construction processes and for supporting the decision-making process of making appropriate policies to improve construction performance.",2005,0, 2236,Diagnosing Estimation Errors in Page Counts Using Execution Feedback,"Errors in estimating page counts can lead to poor choice of access methods and in turn to poor-quality plans.
Although there is past work on using execution feedback for accurate cardinality estimation, the problem of inaccurate estimation of page counts has not been addressed. In this paper, we present novel mechanisms for diagnosing errors in page counts by monitoring query execution at low overhead. Detection of inaccuracy in the optimizer's page count estimates can be leveraged by database administrators to improve plan quality. We have prototyped our techniques in the Microsoft SQL Server engine, and our experiments demonstrate the ability to estimate page counts accurately using execution feedback with low overhead. For queries on several real-world databases, we observe significant improvement in plan quality when page counts obtained from execution feedback are used instead of the traditional optimizer estimates.",2008,0, 2237,Defect content estimation for two reviewers,"Estimation of the defect content is important to enable quality control throughout the software development process. Capture-recapture methods and curve fitting methods have been suggested as tools to estimate the defect content after a review. The methods are highly reliant on the quality of the data. If the number of reviewers is fairly small, it becomes difficult or even impossible to get reliable estimates. This paper presents a comprehensive study of estimates based on two reviewers, using real data from reviews. Three experience-based defect content estimation methods are evaluated against methods that use data only from the current review. Some models are possible to distinguish from each other in terms of statistical significance. In order to gain an even better understanding, the best models are compared subjectively. It is concluded that the experience-based methods provide some good opportunities to estimate the defect content after a review.",2001,0, 2238,Error in estimation of power switching losses based on electrical measurements,"Estimation of the error in power switching loss measurement is presented. The analysis is based on the modeling of the current and voltage probes, including the cables. It is shown that the measured waveforms are not simply delayed by the probes, but overshoots and distortions are introduced by the probes that may not be corrected easily, thus introducing errors. Accurate comparisons between simulation and measurement results are discussed when adequate probe models are used in simulation",2000,0, 2239,Nonlinear observers with approximately linear error dynamics: the multivariable case,"Exact error linearization uses nonlinear input-output injection to design observers with linear error dynamics in certain coordinates. This approach can only be applied nongenerically. We propose an observer for a wider class of multivariable systems which uniformly minimizes the nonlinear part of the system that cannot be canceled by nonlinear input-output injection. Our approach is numerical, constructive, and provides locally exponentially stable error dynamics. An example compares our design with a high-gain method",2001,0, 2240,An Input-Aware Method of Trace-Based Fault Diagnosis,"An execution trace indicates which software statements are involved in an execution, and is an important basis for several existing diagnosis methods. However, the features of the trace input have been ignored in this research, which may lead to wrong or oversized results and directly impact the efficiency of diagnosis.
In this paper, we introduce trace input analysis into diagnosis and propose an input-aware method, which can select a small set of statements that need to be examined further. Then, the suspiciousness of each selected statement is calculated according to the number of passed traces that execute the statement. Statements with higher suspiciousness should be checked earlier. An experimental study is performed on several programs, together with another two trace-based diagnosis methods. The results show that our method can improve the accuracy and efficiency of diagnosis.",2010,0, 2241,"Advanced photonic subsystems to implement reconfigurable, fault-tolerant avionics","Existing and emerging combat, support, and commercial aircraft and rotorcraft will continue to incorporate new sensors, data processors, and related systems that will make the complex, mostly federated avionics systems of the 70's through 90's look like antiques. As the data processing, multisensor fusion, and automatic operations continue to burden the conventional avionics infrastructures, new and innovative architectures and subsystems will be needed to meet the high-bandwidth, low-latency, and other criteria of future avionics systems. Implicit in such future systems will be the requirement for fault-tolerance and reconfiguration in the presence of faults. The tremendous growth in the telecommunications infrastructure has led to a series of technologies and devices that provide the basis for unique subsystems that can implement such advanced, reconfigurable, fault-tolerant avionics systems. At Systran Federal Corp., we are developing two photonic subsystems that can realize the vision of such avionics systems. These subsystems are called the JANUS fault-tolerant communications device and the AEGLE photonic switching system. JANUS is a specialized fibre-channel interface that offers 2 Gbit/s of data movement across multiple interfaces. AEGLE is a photonic switching system that incorporates emerging switching devices at its core. This paper provides our vision for the ROSAA (Reconfigurable Open-Systems Avionics Architecture) and details of the JANUS and AEGLE prototype subsystems we are developing to achieve this vision",2001,0, 2242,On re-configurable methods and gate array based solutions of fast forward error correction systems for software radio and set-top-box applications,"Existing communication systems and the corresponding interfaces differ with respect to bandwidth, carrier frequency and modulation patterns. The investigation of digital receivers for processing of multiple standards and modulation schemes within a single system is one of the research topics today. In this paper fast forward error correction units, required for broadband receivers and software radio solutions, have been investigated. The proposed FEC systems are based on a radix-2/radix-4 decoding approach and have been verified within high-density FPGAs",2000,0, 2243,A Crosstab-based Statistical Method for Effective Fault Localization,"Fault localization is the most expensive activity in program debugging. Traditional ad-hoc methods can be time-consuming and ineffective because they rely on programmers' intuitive guesswork, which may be neither accurate nor reliable. A better solution is to utilize a systematic and statistically well-defined method to automatically identify suspicious code that should be examined for possible fault locations.
We present a crosstab-based statistical method using the coverage information of each executable statement and the execution result (success or failure) with respect to each test case. A crosstab is constructed for each executable statement and a statistic is computed to determine the suspiciousness of the corresponding statement. Statements with a higher suspiciousness are more likely to contain bugs and should be examined before those with a lower suspiciousness. Three case studies using the Siemens suite, the Space program, and the Unix suite, respectively, are conducted. Our results suggest that the crosstab-based method is effective in fault localization and performs better (in terms of a smaller percentage of executable statements that have to be examined until the first statement containing the fault is reached) than other methods such as Tarantula. The difference in efficiency (computational time) between these two methods is very small.",2008,0, 2244,Aspects on development of distribution network fault location and management,"Fault management in distribution networks is one of the most challenging, and certainly the most visible, tasks of the utilities. From the customer's point of view, a power cut means at least inconvenience and in many cases also costs, which may be tens of times greater than the cost of the delivered energy. The fault management process consists of several sub-tasks: fault location, fault isolation and service restoration, trouble call management and outage reporting. The paper presents practical aspects of the development of distribution network fault location and overall management of faults.",2004,0, 2245,A novel family of weighted average voters for fault-tolerant computer control systems,"Fault masking is a widely used strategy for increasing the safety and reliability of computer control systems. The approach uses some form of voting to arbitrate between the results of hardware or software redundant modules for masking faults. Several voting algorithms have been used in fault-tolerant control systems; each has different features, which makes it more applicable to some system types than others. This paper introduces a novel family of weighted average voters suitable for redundant sensor (and other inertial measurement unit) planes, at the interface level, of control systems. It uses two tuneable parameters, each with a ready interpretation, to provide a flexible voting performance when using the voter in different applications. The weight assignment technique is transparent to the user because the impact of the degree of agreement between any voter input and the other inputs is directly reflected in the weight value assigned to that input. The voter can be tuned to behave as the well-known inexact majority voter that is generally used in safety-critical control systems at different voting planes. We evaluated the performance of four versions of the novel voter through a series of fault injection experiments, and compared the results with those of the well-known Lorczak's weighted average voter. The experimental results showed that the novel voter gives more correct outputs (1%-12% higher reliability) than Lorczak's voter in the presence of small permanent and transient errors. With large errors, lower-order versions of the novel voter give better performance than the ones with higher orders.",2003,0, 2246,Data-mining-based system for prediction of water chemistry faults,"Fault monitoring and prediction is of prime importance in process industries.
Faults are usually rare and, therefore, predicting them is difficult. In this paper, a simple and robust alarm-system architecture for predicting incoming faults is proposed. The system is data driven, modular, and based on data mining of merged data sets. The system functions include data preprocessing, learning, prediction, alarm generation, and display. A hierarchical decision-making algorithm for fault prediction has been developed. The alarm system was applied to the prediction and avoidance of water chemistry faults (WCFs) at two commercial power plants. The prediction module predicted WCFs (inadvertently leading to boiler shutdowns) for independent test data sets. The system is applicable for real-time monitoring of facilities with sparse historical fault data.",2006,0, 2247,A benchmark for fault monitors in distributed systems,"Fault monitoring is one of the main activities of fault-tolerant distributed systems. It is required to determine the suspected/crashed component and proactively take the recovery steps to keep the system alive. The main objective of the fault monitoring activity is to quickly and correctly identify faults. There are many techniques for fault monitoring, each with general and specific parameters that influence its performance. In this paper we identify the parameters that can help us classify fault monitoring techniques. We created a benchmark, ACI (adaptation, convergence, intelligence), and applied it to current techniques.",2009,0, 2248,The accuracy of fault prediction in modified code - statistical model vs. expert estimation,"Fault prediction models still seem to be more popular in academia than in industry. In industry, expert estimations of fault proneness are the most popular methods of deciding where to focus the fault detection efforts. In this paper, we present a study in which we empirically evaluate the accuracy of fault prediction offered by statistical models as compared to expert estimations. The study is industry based. It involves a large telecommunication system and experts that were involved in the development of this system. Expert estimations are compared to simple prediction models built on another large system, also from the telecommunication domain. We show that the statistical methods clearly outperform the expert estimations. As the main reason for the superiority of the statistical models we see their ability to cope with large datasets, which results in their ability to perform reliable predictions for a larger number of components in the system, as well as the ability to perform prediction at a more fine-grained level, e.g., at the class instead of at the component level",2006,0, 2249,ORTEGA: An Efficient and Flexible Software Fault Tolerance Architecture for Real-Time Control Systems,"Fault tolerance is an important aspect of real-time computing. In real-time control systems, tasks could be faulty for various reasons. Faulty tasks may compromise the performance and safety of the whole system and even cause disastrous consequences. In this paper, we describe ORTEGA (On-demand Real-TimE GuArd), a new software fault tolerance architecture for real-time control systems. ORTEGA has high fault coverage and reliability. Compared with existing real-time fault tolerance architectures, such as Simplex, ORTEGA allows more efficient resource utilization and enhances flexibility. These advantages are achieved through the on-demand detection and recovery of faulty tasks.
ORTEGA is applicable to most industrial control applications where both efficient resource usage and high fault coverage are desired.",2008,0, 2250,Tunable fault tolerance for runtime reconfigurable architectures,"Fault tolerance is becoming an increasingly important issue, especially in mission-critical applications where data integrity is a paramount concern. Performance, however, remains a large driving force in the market place. Runtime reconfigurable hardware architectures have the power to balance fault tolerance with performance, allowing the amount of fault tolerance to be tuned at run-time. This paper describes a new built-in self-test designed to run on, and take advantage of, runtime reconfigurable architectures, using the PipeRench architecture as a model. In addition, this paper introduces a new metric by which a user can set the desired fault tolerance of a runtime reconfigurable device",2000,0, 2251,Dynamic Fault Tolerance with Misrouting in Fat Trees,"Fault tolerance is critical for efficient utilisation of large computer systems. Dynamic fault tolerance allows the network to remain available through the occurrence of faults, as opposed to static fault tolerance, which requires the network to be halted to reconfigure it. Although dynamic fault tolerance may lead to less efficient solutions than static fault tolerance, it allows for a much higher availability of the system. In this paper we devise a dynamic fault-tolerant adaptive routing algorithm for the fat tree, a much used interconnect topology, which relies on misrouting around link faults. We show that we are guaranteed to tolerate any combination of less than (num_switch_ports)/2 link faults without the need for additional network resources for deadlock freedom. There is also a high probability of tolerating an even larger number of link faults. Simulation results show that network performance degrades very little when faults are dynamically tolerated",2006,0, 2252,Region-based stage construction protocol for fault tolerant execution of mobile agent,"Fault tolerance is essential to the development of reliable mobile agent systems in order to guarantee continuous execution of mobile agents. For this purpose, previous work has proposed fault-tolerant protocols for mobile agent execution based on stage construction. However, when previous protocols are applied to a multiregion mobile agent computing environment, the overhead of work such as monitoring, election, voting and agreement is increased. We propose a region-based stage construction (RBSC) protocol for fault-tolerant execution of mobile agents in a multiregion mobile agent computing environment. The RBSC protocol uses the new concepts of quasiparticipant and substage in order to put together some places located in different regions within a stage in the same region. Therefore, the RBSC protocol decreases the overhead of stage work. Consequently, the RBSC protocol decreases the total execution time of mobile agents.",2004,0, 2253,Derivation of Fault Tolerance Measures of Self-Stabilizing Algorithms by Simulation,"Fault tolerance measures can be used to distinguish between different self-stabilizing solutions to the same problem. However, derivation of these measures via analysis suffers from limitations with respect to scalability and applicability to a wide class of self-stabilizing distributed algorithms.
We describe a simulation framework for deriving fault tolerance measures for self-stabilizing algorithms which can deal with the complete class of self-stabilizing algorithms. We show the advantages of the simulation framework over the analytical approach not only in terms of accuracy of results, range of applicable scenarios and performance, but also for investigating the influence of schedulers at a meta level and the possibility of simulating large-scale systems featuring dynamic fault probabilities.",2008,0, 2254,Secure Fault Tolerance in Wireless Sensor Networks,"Fault tolerance provides wireless sensor networks (WSNs) with reliable collection and dissemination of data while preserving the limited resources of sensor nodes, especially power energy. Although data redundancy achieves the goal of fault tolerance in the data-centered network infrastructure of WSNs, it also incurs security concerns by making data available in multiple locations. More attention should be paid when WSNs are deployed in hostile environments where sensor nodes are easy to capture for deleterious use by an adversary. In this context, cryptographic keys are of low efficiency for protecting data not involved in communication. In this paper, we propose a secure fault tolerance scheme for WSNs, which uses secret sharing to checkpoint the state of the sink over multiple nodes. Through security analysis, we show that our scheme enhances the resiliency against node capture in the presence of data redundancy.",2008,0, 2255,Basic Fault Tree Analysis for use in protection reliability,"Fault tree analysis (FTA) is a tool originally developed in 1962 by Bell Labs for use in studying failure modes in the launch control system of the Minuteman missile project. The tool now finds wide use in numerous applications from accident investigation to design prototyping and is also finding use for protection and control related applications. This paper provides an elementary background to the application of fault tree analysis for use in protection applications. The construction of the fault tree as well as the use of reliability data is considered. A simple example is presented. The intention is to provide a brief introduction to the concept to allow users to at least understand how a fault tree is constructed and what can be done with it.",2007,0, 2256,Deductive Fault Simulation for Asynchronous Sequential Circuits,"Fault simulation of asynchronous sequential circuits is more complicated than fault simulation of their synchronous counterparts. It needs to deal with hazards, oscillations and races. The complex gates in asynchronous circuits are another challenge, especially for deductive fault simulation. In this paper a deductive fault simulator for speed-independent (SI) asynchronous sequential circuits is presented. The implemented deductive fault simulator was tested using the SI benchmark circuits. The experimental results show a significant reduction in computation time and a negligible increase in memory requirements.",2009,0, 2257,Fractal study on fault system of Carboniferous in Junggar Basin based on GIS,"A fault system is significant evidence of tectonic movement during crustal tectonic evolution and may play a more important role in the oil-gas accumulation process than other tectonic types in a sedimentary basin. Carboniferous surface faults in the Junggar Basin are well developed and vary in size and distribution. There are about 200 faults in the Carboniferous, and 187 of them are thrust faults.
Chaos and fractal theories have been widely investigated, and great progress has been made in the past three decades. One important concept, the fractal dimension, has become a powerful tool for describing the characteristics of nonlinear dynamical systems. Clustered objects in nature are often fractal, and the spatial distribution of fault systems is inhomogeneous and typically occurs in groups, so the spatial distribution of faults can be described in terms of the fractal dimension. The fractal dimension of a fault system is a comprehensive factor associated with fault number, size, combination modes and dynamic mechanism, so it can quantitatively evaluate the complexity of the fault system. The relationship between fault systems and oil-gas accumulation is a key and difficult problem in petroleum geology, and the fractal dimension is a new tool for describing fault distribution and predicting potential areas of hydrocarbon resources. A Geographic Information System (GIS) is a technological system, supported by computer software and hardware, for collecting, storing, managing, computing, analyzing, displaying and describing geospatial information. In the last 15-20 years, GIS has been increasingly used to address a wide variety of geoscience problems. Weights-of-evidence models use the theory of conditional probability to quantify the spatial association between fractal dimension and oil-gas accumulation. The weights of evidence are combined with the prior probability of occurrence of oil-gas accumulation using Bayes' rule in a loglinear form, under an assumption of conditional independence of the dimension maps, to derive the posterior probability of occurrence of oil-gas accumulation. In this paper, we first vectorize the Carboniferous fault system of the Junggar Basin in GIS software and store it as a polyline layer in a GIS geodatabase for management and analysis; we then calculate three types of fractal dimension, namely the box dimension, information dimension and cumulative length dimension, using the spatial functions of GIS; finally, we use the weights-of-evidence model to calculate, in the GIS environment, the correlation coefficients between oil-gas accumulation and the three types of fractal dimension in order to quantify the importance of the fault system.",2010,0, 2258,Fault Tolerant Approaches for Distributed Real-time and Embedded Systems,"Fault tolerance (FT) is a crucial design consideration for mission-critical distributed real-time and embedded (DRE) systems, which combine the real-time characteristics of embedded platforms with the dynamic characteristics of distributed platforms. Traditional FT approaches do not address features that are common in DRE systems, such as scale, heterogeneity, real-time requirements, and other characteristics. Most previous R&D efforts in FT have focused on client-server object systems, whereas DRE systems are increasingly based on component-oriented architectures, which support more complex interaction patterns, such as peer-to-peer. This paper describes our current applied R&D efforts to develop FT technology for DRE systems. First, we describe three enhanced FT techniques that support the needs of DRE systems: a transparent approach to mixed-mode communication, auto-configuration of dynamic systems, and duplicate management for peer-to-peer interactions. Second, we describe an integrated FT capability for a real-world component-based DRE system that uses off-the-shelf FT middleware integrated with our enhanced FT techniques.
We present experimental results that show that our integrated FT capability meets the DRE system's real-time performance requirements, both in the responsiveness of failure recovery and in the minimal amount of overhead introduced into the fault-free case.",2007,0, 2259,Fault tolerance with shortest paths in regular and irregular networks,"Fault tolerance has become an important part of current supercomputers. Local dynamic fault tolerance is the most expedient way of tolerating faults, preconfiguring the network with multiple paths from every node/switch to every destination. In this paper we present a local shortest-path dynamic fault-tolerance mechanism, inspired by a solution developed for the Internet, that can be applied to any shortest-path routing algorithm such as dimension-ordered routing, fat tree routing, layered shortest path, etc., and provide a solution for achieving deadlock freedom in the presence of faults. Simulation results show that 1) for fat trees this yields the highest throughput to date and the lowest requirements on virtual layers for dynamic one-fault tolerance, 2) in general we require few layers to achieve deadlock freedom, and 3) for irregular topologies it gives up to a 10 times performance increase compared to FRoots.",2008,0, 2260,Image-dependent spatial shape-error concealment,"Existing spatial shape-error concealment techniques are broadly based upon either parametric curves that exploit geometric information concerning a shape's contour or object shape statistics using a combination of Markov random fields and maximum a posteriori estimation. Both categories are, to some extent, able to mask errors caused by information loss, provided the shape is considered independently of the image/video. However, they palpably do not afford the best solution in applications where shape is used as metadata to describe image and video content. This paper presents a novel image-dependent spatial shape-error concealment (ISEC) algorithm that uses both image and shape information by employing the established rubber-band contour detecting function, with the novel enhancement of automatically determining the optimal width of the band to achieve superior error concealment. Experimental results qualitatively and numerically corroborate the enhanced performance of the new ISEC strategy compared with established shape-based concealment techniques.",2008,0, 2261,On predicting the time taken to correct bug reports in open source projects,"Existing studies on the maintenance of open source projects focus primarily on analyses of the overall maintenance of the projects and less on specific categories like corrective maintenance. This paper presents results from an empirical study of bug reports from an open source project, identifies user participation in the corrective maintenance process through bug reports, and constructs a model to predict the corrective maintenance effort for the project in terms of the time taken to correct faults. Our study focuses on 72482 bug reports from over nine releases of Ubuntu, a popular Linux distribution.
We present three main results: (1) 95% of the bug reports are corrected by people participating in groups of size ranging from 1 to 8 people, (2) there is a strong linear relationship (about 92%) between the number of people participating in a bug report and the time taken to correct it, and (3) a linear model can be used to predict the time taken to correct bug reports.",2009,0, 2262,Fault isolation using stateless server model in L4 microkernel,"Existing system architecture models cannot guarantee isolation of applications or system components from system faults that occur in system services. If an error causes an operating system service to fail, the entire system can be corrupted, or the components that depend on that service are affected. A fault-tolerant architecture model can be achieved by decoupling state information from the server. In this paper, we compare four kinds of architecture models. To construct the stateless server model, we use simple, lightweight middleware. It can support a fault-tolerant architecture with low performance overhead.",2010,0, 2263,Modeling Alpha and Neutron Induced Soft Errors in Static Random Access Memories,"Experimental thermal neutron and alpha soft error test results of a 4 Mbit SRAM fabricated on a 0.25 μm process are evaluated using Vanderbilt University's RADSAFE toolkit. The capabilities of the radiation transport code are demonstrated by accurately reproducing experimental results and predicting operational soft error rates for the memory.",2007,0, 2264,Study of process of convergence of relative method to correction,"The process of convergence of the compensator-patch in the relative method of correction is explored.",2004,0, 2265,A New Model of Mine Hoist Fault Diagnosis Based on the Rough Set Theory,"Extraction of simple and effective rules for fault diagnosis is one of the most important issues that need to be addressed in fault diagnosis, because the available information is often inconsistent and redundant. This paper presents a fault diagnosis model based on rough set theory. Firstly, the model discretizes continuous fault attributes using a modified genetic algorithm. Then, the diagnosis rules are reduced using a heuristic algorithm of rough set theory, a set of diagnosis rules is generated, and a rule database for fault diagnosis is established. Simulation results for fault diagnosis of a mine hoist show that this method improves the accuracy rate of fault diagnosis, reduces the number of feature parameters and diagnostic rules, and reduces the cost of diagnosis, making it more applicable than the classical RS method in practical applications.",2008,0, 2266,Probabilistic Approach to Fault Detection in Discrete Event Systems,"Fault diagnosis is performed by an external observer/diagnoser that functions as a finite state machine and which has access to the input sequence applied to the system but has only limited access to the system state or output. The observer/diagnoser is only able to obtain partial information regarding the state of the given system at intermittent time intervals that are determined by certain synchronizing conditions between the system and the observer/diagnoser.
By adopting a probabilistic framework, a mathematical analysis is carried out to optimally choose the synchronizing conditions and develop adaptive strategies that achieve a low probability of aliasing, i.e., a low probability that the external observer/diagnoser incorrectly declares the system fault-free.",2007,0, 2267,China's Research Status Quo and Development Trend of Power Grid Fault Diagnosis,"Fault diagnosis is the basic condition for the smart grid to achieve its self-healing function, and it is also one of the important research topics of the intelligent dispatching decision support system. On the basis of analyzing its concept and the current status of Chinese studies, this paper reviews and summarizes several intelligent fault diagnosis methods, including expert systems, artificial neural networks, rough set theory, data mining techniques, multi-agent technology and entropy theory, points out their application characteristics and existing problems, and finally presents the prospects for further development in this field.",2010,0, 2268,Fault signal filtering for improving fault section estimation,"Fault diagnostic processes obtain data from circuit-breaker relay status and SCADA messages. To speed up fault diagnosis processes, a scheme of fault signal filtering (FSF) is proposed for the rapid estimation of fault regions, prior to further pinpointing of the fault. By considering circuit-breaker relay status, boundary regions are formed by joining up devices that have tripped. A search process is then initiated to locate generators in each region for deciding whether the region is `alive' or `dead' with faulted components. As an independent process, the proposed scheme is not meant to replace but to complement, reduce and speed up existing fault diagnosis techniques. Tests performed on a 174-bus network have established the potential of the proposed scheme for online implementation.",2000,0, 2269,Optimized generation of VHDL mutants for injection of transition errors,"Fault injection in VHDL descriptions has become an efficient solution for analyzing, at an early stage of the design, the potential faulty behaviors of a complex digital circuit. Such injections may use either saboteurs or mutants. In this paper, the focus is placed on controlled generation of mutants, which means that the generation of mutants is (1) done for precise fault/error models related to faults occurring in the field and (2) optimized for synthesis onto emulation hardware. Several approaches are proposed to inject transition errors in FSMs or RT-level control flowcharts. These approaches are compared, and the results show the impact of the mutant generation on the efficiency of the results.",2000,0, 2270,A fault hypothesis study on the TTP/C using VHDL-based and pin-level fault injection techniques,"Fault injection techniques are frequently used for validating dependable systems. VHDL-based techniques are good resources that support fault injection with many advantages such as a high level of accessibility, controllability and precision. This paper presents the results obtained with a VHDL-based tool (VFIT) injecting single and multiple faults at pin-level in a TTP/C model. The study focuses on the fault hypothesis of a modelled communications protocol based on the Time-Triggered Architecture. Results are analysed and compared with the experiments carried out on the real prototyped system with a pin-level fault injection tool (AFIT).
The conclusions strengthen the usability of VHDL-based fault injection tools and reveal the weaknesses of the technique.",2002,0, 2271,Fault Injection Technology for Software Vulnerability Testing Based on Xen,"Fault injection technology provides an efficient way to verify the fault tolerance of computers and to detect the vulnerabilities of software systems. In this paper, we present a Xen-based fault injection technology for software vulnerability testing (XFISV) in order to build an efficient and general-purpose software test model, which injects faults into the interaction layer between software applications and their environments. This technology has two main contributions: First, detecting software vulnerabilities according to this model requires fewer fault test cases. Second, this model enhances the flexibility and the robustness of the fault injection tools at an economical resource cost.",2009,0, 2272,An FMO based error resilience method in H.264/AVC and its UEP application in DVB-H link layer,"Flexible Macroblock Ordering (FMO) is one of the new error resilience tools introduced in H.264/AVC. Several slice grouping methods have been studied for improving error robustness using FMO. In this paper, a simple and fast slice grouping method for inter frames is introduced. Fast mode decision and early Skip Mode decision are applied for the first encoding pass, and only the features that are available at the stage of early Skip Mode decision are used for the classification. The computation time cost can be reduced by about 50% on average compared to traditional methods. The proposed scheme is tested under the proposed Unequal Error Protection scheme at the DVB-H link layer. The results are compared to the standard MPE-FEC EEP scheme using the traditional FMO type `interleaved' at the DVB-H link layer. It is shown that the proposed scheme can provide improved error robustness for high error rate channels in a DVB-H system.",2010,0, 2273,Development of non-energy-saving fault diagnosis software and hardware integration platform of groundwater source heat system,"For a groundwater source heat pump (GWHP) system, how to realize high-efficiency system operation is a very important issue. On the basis of the development idea of modular and open energy management, a model of non-energy-saving (NES) fault diagnosis is presented. The model involves the input of basic characteristic data and index characteristic data and the output of NES fault factors. The relationship database between input and output can be established using related specialist knowledge and expert experience, and the weight values and threshold values that connect the complex nonlinear mapping between input and output can be obtained with the artificial neural network back-propagation algorithm; NES fault diagnosis software for GWHP system operation can then be developed. According to the monitoring data requirements of the software, various sensors, a data acquisition device, etc.
are selected; finally, the hardware system combined with the NES fault diagnosis software can be generated and becomes an effective part of the total energy management information platform.",2010,0, 2274,Design of Range Correction Fuze Trajectory Calculation and Control Device,"For a range correction fuze of a rocket range-extended projectile, a DSP-based trajectory calculation and correction controller that cooperates with an MCU is designed to examine the axial acceleration during the flight of the projectile, to perform the trajectory calculation and the start-up time calculation of a correction mechanism, and to control the action of a resistance-correction mechanism. When designing the hardware and the software, the tasks for the DSP and the MCU are reasonably assigned. The DSP mainly completes the collection of the acceleration data, the trajectory calculation and the start-up time calculation of the resistance-correction mechanism, while the MCU mainly completes the time counting and controls the startup of the resistance-correction mechanism according to the time calculated by the DSP. This device can also store the flight acceleration measurement data of the projectile, the trajectory calculation results, etc., which can be adopted as the basic data for analyzing the working correctness of the range correction fuze at the research stage. It is proved in the range test that this device can resist the high-impact overloading during launch, and can normally complete tasks such as the trajectory calculation and the resistance-correction mechanism startup control during the flight of the projectile.",2009,0, 2275,Design of safety monitor schemes for a fault tolerant flight control system,"For a research aircraft, ""conventional"" control laws (CLs) are implemented on a ""baseline"" flight computer (FC) while research CLs are typically housed on a dedicated research computer. Therefore, for an experimental aircraft used to test specific fault tolerant flight control systems, a safety logic scheme is needed to ensure a safe transition from conventional to research CLs (while at nominal conditions) as well as from research CLs at nominal conditions to conditions with ""simulated"" failures on specific control surfaces. This paper describes the design of such a safety scheme for the NASA Intelligent Flight Control System (IFCS) F-15 Program. The goals of the IFCS F-15 program are to investigate the performance of a set of fault tolerant CLs based on the use of dynamic inversion with neural augmentation. The different transitions are monitored using information relative to flight conditions and controller-related performance criteria. The testing of the scheme is performed with a Simulink-based flight simulation code and interface developed at West Virginia University for the NASA IFCS F-15 aircraft.",2006,0, 2276,A Motion-Compensated Error Concealment Scheme for H.264 Video Transmission,"For an entropy-coded video sequence, a transmission error in a codeword will not only affect the underlying codeword but may also affect subsequent codewords, resulting in a great degradation of received video frames. In this study, a motion-compensated error concealment scheme is proposed for H.264 video transmission. For H.264 inter-coded P frames, the predicted motion vector (PMV) for each corrupted block is first determined by the spatially neighboring motion vectors (MVs) around the corrupted block and the temporally motion-projected overlapping MVs in the previous frame.
With the PMV being the central search point, three rood search patterns are developed for motion-compensated error concealment of small-, medium-, and large-motion corrupted blocks. Then error concealment refinement using Lagrange interpolation is performed on all the initially concealed blocks. Finally, the improved ELA (edge-based line average) algorithm [11] is employed to refine each concealed block in P frames.",2006,0, 2277,Scene-effect detection and insertion MPEG encoding scheme for video browsing and error concealment,"For an MPEG coding scheme, the encoder can do more than video compression. In this paper, a novel MPEG codec embedded with scene-effect information detection and insertion is proposed to provide more functionality at the decoder end. Based on the macroblock (MB) type information that is generated simultaneously in the encoding process, a single-pass automatic scene-effect insertion MPEG coding scheme can be achieved. Using the USER_DATA field of the picture header, the bitstreams output by our method still conform to the conventional MPEG decoding system. The proposed method provides a solution toward upgrading the existing MPEG codec with low complexity to accomplish at least two major advantages. Precise and effective video browsing resulting from the scene-effect extraction can significantly reduce the time users spend looking up what they are interested in. For video transmission, the bitstreams containing scene-effect information can obtain better error concealment performance when scene changes are involved. Compared with the gain it achieves, the cost of our algorithm is comparatively small.",2005,0, 2278,Kraft Inequality and Zero-Error Source Coding With Decoder Side Information,"For certain source coding problems, it is well known that the Kraft inequality provides a simple sufficient condition for the existence of codes with given codeword lengths. Motivated by this fact, a sufficient condition based on the Kraft inequality can also be sought for the problem of zero-error instantaneous coding with decoder side information. More specifically, it can be envisioned that a sufficient condition for the existence of such codes with codeword lengths {l_x} is that, for some 0 < α ≤ 1, Σ_{x∈ℱ} 2^{-l_x} ≤ α for each clique ℱ in the characteristic graph G of the source-side information pair. In this correspondence, it is shown that (1) if the above is indeed a sufficient condition for a class 𝒢 of graphs, then it is possible to come as close as 1 - log₂α bits to the asymptotic limits for each graph in 𝒢, (2) there exist graph classes of interest for which such an α can indeed be found, and finally (3) no such α can be found for the class of all graphs.",2007,0, 2279,Global fault diagnosis method of traction transformer based on Improved Fuzzy Cellular Neural Network,"To compensate for the deficiency of the dissolved gas analysis (DGA) method of traction transformer fault diagnosis, a global fault diagnosis method for traction transformers based on an improved fuzzy cellular neural network (IFCNN) is introduced in a model-building mode. The global fault diagnosis model is composed of an input space, fault diagnosis rules and an output space. The input space is the fault symptom set and the output space is the fault type set. As to the input space, the fault symptoms are enriched by adding water-in-oil content, key device resistance and electric current besides the DGA analysis content.
The fault diagnosis rules depend on a fuzzy integrated judging method and on the combination of the DGA and IFCNN fault diagnosis models designed in this paper. The output space consists of the diagnosed fault types obtained through defuzzification of the diagnosis results. Experiments are used to test the fault diagnosis precision. The experimental results indicate that the global fault diagnosis method has better practical performance and high precision in analyzing the causal relations of different faults, ascertains valid inputs and fault characteristic types, avoids the limitations of DGA-only traction transformer fault diagnosis, and reaches an overall precision of 90.91%.",2009,0, 2280,Attitude correction algorithm using GPS measurements for flight vehicles,For flight systems with an on-board seeker the attitude error is the major factor determining the seeker pointing error at the time of object acquisition. To achieve the desired mission it must be minimized. The proposed algorithm corrects the attitude error in the guidance computer during flight by taking its position and velocity measurements from GPS or radar. This is possible since the navigator's position and velocity states are correlated with the attitude state. Computer simulations are shown to prove the proposed algorithm.,2002,0, 2281,FPGA-Based Online Detection of Multiple-Combined Faults through Information Entropy and Neural Networks,"For industry, a faulty induction motor signifies production reduction and cost increase, and it is also a hazard for people and nearby machinery. Real-world induction motors can have one or more faults at the same time, so that one faulty condition could interfere with the detection of another one and mislead the operator to a wrong decision about the operational condition of the motor. The detection of multiple-combined faults is still a demanding task, difficult to accomplish even with computing-intensive techniques. This work introduces a low-cost, real-time FPGA-based hardware processing unit for multiple-combined fault detection utilizing information entropy and artificial neural networks as tools for analyzing the information contents of the 3-axis vibration signals from the rotary machine during the startup transient. Results show great performance of the entropy-neural system in accurately identifying, in an automatic way, a healthy condition, half-, one-, and two-broken rotor bars, an outer-race bearing defect, unbalance, and their combinations.",2010,0, 2282,Identification of the faulted line using controlled grounding,"For neutrally ungrounded systems, it is very difficult to identify the line experiencing a single-phase-to-ground fault. This is due to the fact that the non-effectively grounded system produces a very small fault current. This paper presents a novel method that can help to overcome this difficulty. The idea is to convert the ungrounded system into a grounded system temporarily through a judiciously controlled grounding of the system neutral. A controllable ground fault current that is large enough for identifying the faulted line and yet small enough not to cause system problems is extracted by a subtraction method. The criterion for distinguishing the faulted and unfaulted line short-circuit impulses is established through spectral analysis, and the sensitivity of the proposed method is analysed with respect to this criterion. The effectiveness of the proposed method is finally proved by a lab experiment.
The proposed method may solve the problem of faulted line identification completely.",2009,0, 2283,Modeling and analysis of a multibus reticulation network with multiple DG. Part II. Electrical fault analysis,"For pt.I see ibid., vol.2 p.805-10. A significant growth in the utilization of autonomous and distributed power sources deployed at subtransmission (132 - 22 kV) and reticulation levels (< 22 kV) in stand-alone or grid-connected configurations has been seen in many electricity industries across the world. With electricity industry reform, an open access regime is a standard policy governing the transmission grid, thus providing for full competition at the generation and distribution ends of the electricity delivery value chain. It has become necessary to investigate the technical and economic impact that future connections of distributed generators will have on electric power distribution networks, and to evaluate some of these effects of power sector deregulation. This work presents the modeling and analysis of a multibus reticulation network with multiple distributed generation (DG) injection. Results for the dynamic performance, steady-state and fault conditions are presented and discussed.",2004,0, 2284,A Fault Tolerant Scheduling Algorithm for Stochastic Fault Model in Real-Time Operating System,"For real-time systems, fault tolerant scheduling algorithms are an important method to guarantee the timing constraints of tasks when faults occur. Previously, researchers usually described faults by assuming a constant number of faults or a minimum fault inter-arrival time. In practice, the occurrence of faults in systems is stochastic. In this paper, we model the occurrence of faults as a stochastic process with a Poisson distribution having a mean inter-arrival rate of λ. A fault tolerant task scheduling algorithm, a greedy algorithm with low computational complexity, is given according to this fault model. Through simulation we conclude that the algorithm is effective for fault tolerance. The loss ratio of tasks under this algorithm is much lower than that under a no-fault scheduling algorithm.",2009,0, 2285,Tuning the defect density in chemically synthesized graphene,"Gram-scale quantities of graphene sheets can be synthesized in a bottom-up chemical approach, and we have sought to address the extent of the defect density using various characterization techniques which include X-ray diffraction, high resolution transmission electron microscopy, selected area electron diffraction, Raman spectroscopy, atomic force microscopy and X-ray photoelectron spectroscopy. It was found that the chemically synthesized graphene sheets have a tendency to stack without the inter-planar coherence such as that found in graphite. The driving force behind this stacking is believed to be π-π interactions between overlaid carbon sheets. The overall defect density was shown to decrease by simply varying the carbon precursor used in the chemical synthesis.",2009,0, 2286,Adaptive threshold modulation for error diffusion halftoning,"Grayscale digital image halftoning quantizes each pixel to one bit. In error diffusion halftoning, the quantization error at each pixel is filtered and fed back to the input in order to diffuse the quantization error among the neighboring grayscale pixels. Error diffusion introduces nonlinear distortion (directional artifacts), linear distortion (sharpening), and additive noise.
Threshold modulation, which alters the quantizer input, has previously been used to reduce either directional artifacts or linear distortion. This paper presents an adaptive threshold modulation framework to improve halftone quality by optimizing error diffusion parameters in the least squares sense. The framework models the quantizer implicitly, so a wide variety of quantizers may be used. Based on the framework, we derive adaptive algorithms to optimize 1) edge enhancement halftoning and 2) green noise halftoning. In edge enhancement halftoning, we minimize linear distortion by controlling the sharpening control parameter. We may also break up directional artifacts by replacing the thresholding quantizer with a deterministic bit flipping (DBF) quantizer. For green noise halftoning, we optimize the hysteresis coefficients.",2001,0, 2287,A hierarchical fault detection and recovery in a computational grid using watchdog timers,"Grid computing basically means applying the resources of individual computers in a network to focus on a single problem/task at the same time. But the disadvantage of this feature is that the computers which are actually performing the calculations might not always be trustworthy and may fail periodically. Hence, the larger the number of nodes in the grid, the greater the probability that a node fails. Fault tolerance and recovery strategies are therefore needed to execute workflows reliably. This paper proposes a method in which an instantaneous snapshot of the local state of the processes within each node is recorded. An efficient algorithm is introduced for the detection of node failures using watchdog timers. For recovery we make use of a divide-and-conquer algorithm that avoids redoing already completed jobs, enabling faster recovery.",2010,0, 2288,Optimized resource allocation in grid networks using genetic algorithm with error rate factor,"Grid computing is an emerging computing paradigm that will have significant impact on the next generation information infrastructure. Due to the size and complexity of grid systems, their quality of service, performance and reliability are difficult to model, analyze and evaluate. In real-time evaluation, various noise sources influence the model, which in turn accounts for increases in packet loss and Bit Error Rate (BER). Therefore, a novel optimization model for maximizing the expected grid service profit is mandatory. In our work, to improve the end-to-end grid network performance, an optimizer based on a Genetic Algorithm (GA), whose fitness evaluation considers BER and service execution time, is designed in the RMS. This paper presents a novel tree-structured model that is better than other existing models for grid computing performance and reliability analysis, not only considering data dependence and failure correlations but also taking link failure, packet loss and BER real-time parameters into account. The algorithm is based on graph theory and probability theory.",2009,0, 2289,Performance Optimization of Tree Structured Grid Services Considering Influence of Error Rate,"Grid computing is an emerging computing paradigm that will have significant impact on the next generation information infrastructure. Due to the size and complexity of grid systems, their quality of service, performance and reliability are difficult to model, analyze and evaluate.
In real-time evaluation, various noise sources influence the model, which in turn accounts for increases in packet loss and bit error rate (BER). Therefore, a novel optimization model for maximizing the expected grid service profit is mandatory. This paper presents a novel tree-structured model that is better than other existing models for grid computing performance and reliability analysis, not only considering data dependence and failure correlations but also taking link failure, packet loss and BER real-time parameters into account. Based on this model, an algorithm for evaluating the grid service time distribution and the service reliability indices is suggested. The algorithm is based on graph theory and probability theory.",2009,0, 2290,Towards Optimal Fault Tolerant Scheduling in Computational Grid,Grid environments present significant challenges due to the diverse failures encountered during job execution. Computational grids provide the main execution platform for long running jobs. Such jobs require a long commitment of grid resources. Therefore fault tolerance in such an environment cannot be ignored. Most grid middleware have either ignored failure issues or have developed ad hoc solutions. Most of the existing fault tolerance techniques are application-dependent and cause a cognitive problem. This paper examines existing fault detection and tolerance techniques in various middleware. We propose a fault tolerant layered grid architecture with a cross-layered design. In our approach the Hybrid Particle Swarm Optimization (HPSO) algorithm and the Anycast technique are used in conjunction with the Globus middleware. We adopt a proactive and reactive fault management strategy for centralized and distributed environments. The proposed strategy is helpful in identifying the root cause of failures and resolving the cognitive problem. Our strategy minimizes computation and communication thus achieving higher reliability. Anycast limits the effect of Denial of Service/Distributed Denial of Service (D)DoS attacks nearest to the source of the attack thus achieving better security. Significant performance improvement is achieved through using Anycast before HPSO. The selection of more reliable nodes results in less checkpointing overhead.,2007,0, 2291,Improving Grid Fault Tolerance by Means of Global Behavior Modeling,"Grid systems have proved to be one of the most important new alternatives for facing challenging problems, but to exploit their benefits, dependability and fault tolerance are key aspects. However, the vast complexity of these systems limits the efficiency of traditional fault tolerance techniques. It seems necessary to distinguish between resource-level fault tolerance (focused on every machine) and service-level fault tolerance (focused on global behavior). Techniques based on these concepts can handle system complexity and increase dependability. We present an autonomous, self-adaptive fault tolerance framework for grid systems, based on a new approach to modeling distributed environments. The grid is considered as a single entity, instead of a set of independent resources. This point of view focuses on service-level fault tolerance, allowing us to see the big picture and understand the system's global behavior. The resulting model's simplicity is the key to providing system-wide fault tolerance.",2010,0, 2292,FNDAM: A Fault Tolerance network aware double auction meta-scheduler for grid,"Grid technology can integrate geographically distributed resources for a particular application.
Since users increasingly demand grid services with high quality, resource management and scheduling methods should make good use of the limited resources. One of the suggested methods for grid resource and scheduling management is the economic method, which has attracted considerable attention. The goal of economic models is to provide services with higher quality for customers and to create incentives for resource owners to participate in the grid. Customer satisfaction is the result of a valid economic model, and it leads customers to delegate more jobs to the grid for execution. This brings more benefit to providers, so they can provide more resources for the grid. In this paper, we propose a new meta-scheduler that performs scheduling with fault tolerance in the network. The proposed meta-scheduler is then compared with one that does not consider faults.",2010,0, 2293,Collaborative fault diagnosis in grids through automated tests,"Grids have the potential to revolutionize computing by providing ubiquitous, on demand access to computational services and resources. However, grid systems are extremely large, complex and prone to failures. A survey we have conducted reveals that fault diagnosis is still a major problem for grid users. When a failure appears at the user screen, it becomes very difficult for the user to identify whether the problem is in his application, somewhere in the grid middleware, or even lower in the fabric that comprises the grid. To overcome this problem, we argue that current grid platforms must be augmented with a collaborative diagnosis mechanism. We propose that such a mechanism use automated tests to identify the root cause of a failure and propose the appropriate fix. We also present a Java-based implementation of the proposed mechanism, which provides a simple and flexible framework that eases the development and maintenance of the automated tests.",2006,0, 2294,Towards model based prediction of human error rates in interactive systems,"The growing use of computers in safety-critical systems increases the need for Human Computer Interfaces (HCIs) to be both smarter, to detect human errors, and better designed, to reduce the likelihood of errors. We are developing methods for determining the likelihood of operator errors which combine current theory on the psychological causes of human errors with formal methods for modelling human-computer interaction. We present models of the HCI and operator in an air-traffic control (ATC) system simulation, and discuss their role in the prediction of human error rates.",2001,0, 2295,Intra mode dependent quantization error estimation of each DCT coefficient in H.264/AVC,"H.264/AVC employs intra prediction to reduce spatial redundancy between neighboring blocks. Different directional prediction modes are used to cater for diversified video content. Although it achieves quite high coding efficiency, it is desirable to establish a proper theoretical quantization error model under different prediction modes, since this allows us to explain the behavior of existing codecs and to design better ones. Indeed, the residue after different intra prediction modes exhibits different characteristics in the frequency domain. In this paper, an intra mode dependent quantization error estimation is presented. For a complete analysis, we investigate not only the coding distortion in the current JM reference software, but also its effect on the intra mode dependent residue.
Based on the mode-dependent residue characteristics, a mode-dependent quantization error estimation for each frequency position is proposed. Simulation results show that the proposed model can estimate the mode-dependent quantization error with high accuracy. Furthermore, the estimation accuracy remains high for various sequences and QPs.",2010,0, 2296,Joint Power Control and FEC Unequal Error Protection for Scalable H.264 Video Transmission over Wireless Fading Channels,"H.264/AVC scalable video coding (SVC) is an up-to-date video compression standard. This paper deals with the issue of transmitting H.264 scalable video bitstreams over wireless fading channels. The contribution is twofold: Firstly, to exploit the importance of prioritized video packets in different temporal layers, quality layers and groups of pictures (GOPs), a simple and accurate performance metric, namely the layer-GOP-weighted expected zone of error propagation (LGW-EZEP) model, is proposed. Secondly, a joint power control and forward error correction (FEC) unequal error protection (UEP) scheme is proposed to transmit the video streams over orthogonal frequency division multiplexing (OFDM) systems efficiently and robustly. Meanwhile, a new iterative algorithm is given to solve the joint optimization problem. Compared to other independent power control or FEC UEP schemes, the combined protection scheme demonstrates stronger robustness and flexibility over various fading channels.",2009,0, 2297,Hybrid Fault Detection Technique: A Case Study on Virtex-II Pro's PowerPC 405,"Hardening processor-based systems against transient faults requires new techniques able to combine high fault detection capabilities with the usual design requirements, e.g., reduced design time, low area overhead, and reduced (or no) accessibility to processor internal hardware. This paper proposes the adoption of a hybrid approach, which combines ideas from previous techniques based on software transformations with the introduction of an Infrastructure IP with reduced memory and performance overheads, to harden systems based on the PowerPC 405 core available in Virtex-II Pro FPGAs. The proposed approach targets faults affecting the memory elements storing both the code and the data, independently of their location (inside or outside the processor). Extensive experimental results including comparisons with previous approaches are reported, which allow practically evaluating the characteristics of the method in terms of fault detection capabilities and area, memory and performance overheads.",2006,0, 2298,Performance improvement of event-based motion correction for PET using a PC cluster,"Head motion during PET scanning produces significant artifacts or spatial resolution loss in the reconstructed image. The event-based motion correction (EBMC) technique has been developed to correct head movement during the scan, incorporating PET list-mode acquisition and an optical tracking system. In the EBMC technique, each line-of-response (LOR) in the list-mode data is reoriented according to the motion data from the optical tracking system. Although the EBMC technique has the potential to correct head movement during PET acquisition, the large size of the list-mode data set hampers on-line processing of the correction. In order to improve the computing speed of EBMC, we implemented EBMC on a Beowulf PC cluster consisting of 7 PCs (a 2.4 GHz Xeon for the master node and 1.4 GHz Pentium III machines for the slave nodes) connected to each other through Gbit Ethernet.
The MPI (Message Passing Interface) protocol was utilized to parallelize the EBMC task. The performance of the PC cluster was evaluated using list-mode data and head motion data acquired by an ECAT EXACT HR+ (CTI/Siemens) PET scanner and a POLARIS (Northern Digital) optical tracking system. Six list-mode data sets (file sizes from 161 to 253 Mbytes) were corrected for motion by the EBMC technique on a single PC and on the PC cluster. The PC cluster was 5.2 times faster than the single PC in performing the motion correction. The PC cluster remarkably improves the performance of EBMC at low cost.",2003,0, 2299,Error robust video communications for wireless networks,"Header compression increases the transmission throughput and improves the quality of service in error prone environments. ROHC (RObust Header Compression) [1] is an IETF (Internet Engineering Task Force) standard, which is used to compress the network and transport headers to save the precious air interface of mobile communication systems. The U-mode of ROHC profile 1 is suitable for transmission of packets without a feedback channel. However, in error prone environments, ROHC U-mode may lose synchronization between the compressor and decompressor due to bursts of errors in wireless communications. This results in unnecessary packet loss, which has severe detrimental effects on video quality. In this paper, the causes of context damage are analyzed for U-mode, and its effects on H.264 video quality and packet loss are demonstrated. A scheme to minimize the occurrences of context damage is proposed. Using the proposed scheme, the packet loss rate is reduced significantly for ROHC U-mode.",2008,0, 2300,Head-in-pillow defect - role of the solder ball alloy,"The head-in-pillow (HIP) defect is a growing concern in the electronics industry. This defect is usually believed to be the result of several factors, individually or in combination. Some of the major contributing factors to the HIP defect are: surface quality of the BGA spheres, activity of the paste flux, improper placement / misalignment of the components, a non-optimal reflow profile, and warpage of the components. From the electronics components packaging industry's perspective, the contributions of the solder ball composition and its surface quality are two of the most important factors. To understand the role of each of these factors in producing the head-in-pillow defect and to find ways to mitigate it, we designed an apparatus that simulates the reflow process and provides in-situ monitoring of the solder joint formation process. This apparatus facilitates the study of the effect of the thermal history of the components, in particular the oxidation of BGA spheres. A detailed comparative study of a number of lead-free solder spheres has been undertaken. The base alloy in these spheres is a low-silver SnAgCu alloy. A number of minor alloy additions, focused on reducing the surface oxidation, have been tried. A comparative study of the fresh spheres and those oxidized at high temperature in air has been undertaken. Results show a big difference in the wetting speed of the spheres with and without oxidation. Highly oxidized spheres have a thick oxide layer at the surface which, in certain cases, is impossible to breach. It is this ""non-wetting"" surface that is in part responsible for the head-in-pillow defect. In this paper, quantitative measurements of the wetting time of spheres that come in contact with the solder paste at different times in the reflow cycle will be presented.
Videos showing the in-situ soldering process will be shown along with the real-time temperature and time measurements. Results show the effect of micro-alloy additives on the spheres' surface oxidation and their wetting behavior. Non-collapsing spheres resulting in the head-in-pillow defect will be shown. In addition, the effect of the solder paste activity will also be presented. Further, an analysis of the study conducted under different reflow conditions will be presented, showing the best and the worst reflow conditions for the formation of the head-in-pillow defect.",2010,0, 2301,Comparative Performance Analysis of Forward Error Correction Techniques Used in Wireless Communications,"The high bit error rates of the wireless communication medium require employing forward error correction methods on the transferred data, where convolutional coding techniques are usually utilized. As a result of their strong mathematical structure, these codes are superior to their counterparts, especially in real-time applications. The main decoding strategy for convolutional codes is based on the Viterbi algorithm. The common use of convolutional codes has boosted the development of different decoding schemes. These studies resulted in a new error correcting method called the Turbo code. In this research work, the Viterbi decoding algorithm, which is the basis for forward error correction (FEC) techniques, and the log-MAP and SOVA turbo decoding algorithms are studied using MATLAB software. An example image transfer application has also been realized for a comparative performance analysis of these techniques. The simulation results obtained show that the log-MAP decoding algorithm achieves up to 100 times better BER performance than the Viterbi algorithm, especially for increasing SNR values.",2007,0, 2302,Haptic system to alert users before impending human errors,High-performance cognitive environments such as surgery or driving pose extensive constraints on the efficient perception of salient information. In such environments it is beneficial to track physiological signals from the operator and predict errors and their type before they occur and alert the operator to take preventive action. The challenge lies in interpreting complex neural data obtained through sensors such as EEG signals and subsequently alerting the subject to possible errors. This paper presents an EEG-based analysis system coupled with a haptic glove and a visual-feedback-based alert system to provide such functionality. The haptic glove was made from six vibratory motors placed on the fingers and palm. The EEG system consisted of a Bluetooth EEG cap that monitored attention distraction and drowsiness. Results show that the hand-based system for delivering visio-haptic signals to alert users to impending errors can help in avoiding human errors.,2009,0, 2303,Multi Gigabit Transceiver Configuration RAM Fault Injection Response,"High performance processing and memory systems require enormous amounts of I/O bandwidth. Wide parallel bus architectures have reached their practical limits for high bandwidth transport. High speed serial interfaces that support tens of Gbps are now displacing wide shared bus architectures in many systems. Xilinx FPGA serial links support this transition by providing more than 10 Gbps in their multi gigabit transceiver (MGT) I/Os. For space applications, these links are susceptible to single event effects (SEE). Many of these effects are due to upsets in the FPGA's configuration RAM that controls the many features and functions of the I/O.
This paper details the functional effects of configuration RAM upsets in Xilinx MGTs. These effects are characterized by injecting upsets into the FPGA configuration RAM while monitoring MGT functional operation. Configuration RAM upset effects are described, and functional upset rates due to configuration RAM upsets are calculated for an example orbit. The results of this work provide insight into the on-orbit upset rate and effects of Xilinx multi gigabit transceivers.",2005,0, 2304,Analysis of long-lived isotopes in the presence of short-lived isotopes using zero dead time correction,"High Purity Germanium (HPGe) detector systems are routinely used in counting laboratories in many types of nuclear facilities such as nuclear power plants and fuel production sites. These systems generally consist of a lead-shielded HPGe detector, a Multi Channel Analyzer (MCA), and analytical software. These systems are used to analyze a wide variety of sample types for many different isotopes. Analysis of certain sample types, such as those from the reactor coolant or the off-gas extraction system, in nuclear power plant radiochemistry laboratories is complicated by the presence of short-lived isotopes. With these isotopes present, the sample count rate begins at a higher value than the ending count rate, with a rapid change in count rate often observed. This decay of the sample count rate causes the true count rate of the peaks to be unknown for those MCAs that use the traditional Live Time Clock extension methods. The current method of compensating for these short-lived isotopes is simply to delay starting the acquisition until these isotopes decay (typically 45-60 minutes). This has the effect of reducing the throughput capacity of the laboratory, meaning fewer samples can be counted in any given period. The use of ""loss free counting"" methods in radiochemistry laboratories has been unacceptable because these methods do not provide the uncertainty in the measurement, which must be reported with the activity calculation from the counting laboratory. An innovative MCA with a zero dead time (ZDT™) correction method will be presented which (1) compensates for the decaying count rate caused by the short-lived isotopes, thus eliminating the need for delaying the start time of the acquisition; and (2) calculates the uncertainty in the activity determination, thus satisfying the reporting requirements of the counting laboratory. Data from the analysis, including the uncertainty, of long-lived isotopes in reactor coolant samples both in the presence and absence of short-lived isotopes will be presented.",2001,0, 2305,Using a pulsed supply voltage for delay faults testing of digital circuits in a digital oscillation environment,"High-performance digital circuits with aggressive timing constraints are usually very susceptible to delay faults. Much research done on delay fault detection requires a rather complicated test setup together with precise test clock requirements. In this paper, we propose a test technique based on the digital oscillation test method. The technique, which was simulated in software, consists of sensitizing a critical path in the digital circuit under test and incorporating the path into an oscillation ring.
The supply voltage to the oscillation ring is then varied to detect delay and stuck-at faults in the path.",2002,0, 2306,"Noninvasive Fault Classification, Robustness and Recovery Time Measurement in Microprocessor-Type Architectures Subjected to Radiation-Induced Errors","In critical digital designs such as aerospace or safety equipment, radiation-induced upset events (single-event effects or SEEs) can produce adverse effects, and therefore, the ability to compare the sensitivity of various proposed solutions is desirable. As custom-hardened microprocessor solutions can be very costly, the reliability of various commercial off-the-shelf (COTS) processors can be evaluated to see if there is a commercially available microprocessor or microprocessor-type intellectual property (IP) with adequate robustness for the specific application. Most existing approaches for measuring the robustness of a microprocessor involve diverting the program flow and timing to introduce the bit flips via interrupts and embedded handlers added to the application program. In this paper, a tool based on an emulation platform using Xilinx field programmable gate arrays (FPGAs) is described, which provides an environment and methodology for evaluating the sensitivity of microprocessor architectures using dynamic runtime fault injection. A case study is presented, where the robustness of MicroBlaze and Leon3 microprocessors executing a simple signal processing task written in the C language is evaluated and compared. A hardened version of the program, where the key variables are protected, has also been tested, and its contributions to system robustness have also been evaluated. In addition, this paper presents a further improvement in the developed tool that allows not only the measurement of microprocessor robustness but also the study and classification of single-event upset (SEU) effects and the exact measurement of the recovery time (the time that the microprocessor takes to self-repair and recover the fault-free state). The measurement of this recovery time is important for real-time critical applications, where criticality depends on both data correctness and timing. To demonstrate the proposed improvements, a new software program that implements two different software hardening techniques (one for data and another for control flow) has been made, and a study of the recovery times in some significant fault-injection cases has been performed on the Leon3 processor.",2009,0, 2307,Simulation on Compound Action Between Deep Fault and Karst Collapse Column: Take GuQiao Coal Mine as an Example,"In deep mining areas, karst and faults are well developed, and the water inrush characteristics are the result of the compound action between faults and karst collapse columns. Discontinuous Deformation Analysis (DDA) software was used to simulate the compound action between the fault and the karst collapse column. The results showed that, on the one hand, the fault slip direction and scope are controlled by the geometry of the column; on the other hand, the growth of the fault speeds up the formation of the collapse column.",2010,0, 2308,Investigation on compatible remote fault diagnosis system for power electronic equipments,"In different power electronic equipment, the circuit topologies and the voltage/current signal levels are usually different. So the fault diagnosis models for these devices are also different.
Based on power electronics, fault diagnosis and network technologies, a distributed system with a remote fault diagnosis function and high compatibility is presented in this paper. It puts emphasis on introducing the system structure, the real-time and synchronous data acquisition method, on-line and off-line fault diagnosis, as well as the compatibility design for power electronic equipment.",2003,0, 2309,MMAR: A deadlock recovery-based fault tolerant routing algorithm for mesh/torus networks,"In direct networks, such as mesh and torus, the switching capacity increases as the number of components increases. But the fault probability of the network also increases with the number of components. This paper proposes a novel fault-tolerant algorithm, named minimal misrouted adaptive routing (MMAR), which is based on a true fully adaptive routing algorithm and a deadlock recovery mechanism. Due to its high adaptability, MMAR can accommodate arbitrarily shaped fault models using a minimal number of virtual channels in each physical link. When encountering concave fault models, MMAR minimizes the length of the misrouted path by avoiding routing messages into the respective holes. Simulation results show that MMAR can work efficiently and achieves favorable performance.",2007,0, 2310,Autonomous Decentralized VoD System Architecture and Fault-Tolerant Technology to Assure Continuous Service,"In distributed and ubiquitous computing systems, not only the composing subsystems and their functions but also the system structure change constantly under evolving situations. With the advances of compression, storage and network technologies, Video on Demand (VoD) service is becoming more and more popular. However, it is difficult for conventional systems to meet the continuous and heterogeneous requirements from service providers and users simultaneously. This paper introduces an autonomous decentralized VoD system sustained by mobile agents for information service provision and utilization. Under the proposed architecture, autonomous fault detection and recovery technologies are proposed to assure continuous service. The effectiveness of the proposed technology is proved by simulation. The results show an average 30% improvement in recovery time, and users' video service can be recovered without stopping, compared with the conventional system.",2009,0, 2311,Theoretical consideration of adaptive fault tolerance architecture for open distributed object-oriented computing,"In distributed object-oriented computing, a complex set of requirements has to be considered for maintaining the required level of reliability of objects to give a higher level of performance. The authors propose a mechanism to analyze the complexity of these underlying environments and to design a dynamically reconfigurable architecture according to the changes of the underlying environment. The replication should have a suitable architecture for adaptable fault tolerance so that it can handle the different situations of the underlying system before a fault occurs. The system can provide the required level of reliability by measuring the reliability of the underlying environment and then either adjusting the replication degree or migrating the object appropriately. It is also possible to improve the reliability by shifting to a suitable replication protocol adaptively. However, there should be a mechanism to overcome the problems when a ""replication protocol transition period"" exists.
A client-server group communication protocol that communicates with objects under both different environmental conditions and different replication protocols is also required.",2005,0, 2312,A Fault Tolerance Scheme for Hierarchical Dynamic Schedulers in Grids,"In dynamic grid environments, failures (e.g., link failures, resource failures) are frequent. We present a fault tolerance scheme for a hierarchical dynamic scheduler (HDS) for grid workflow applications. In HDS, all resources are arranged in a hierarchy tree and each resource acts as a scheduler. The fault tolerance scheme is fully distributed and is responsible for maintaining the hierarchy tree in the presence of failures. Our fault tolerance scheme handles root failures specially, which prevents the root from becoming a single point of failure. The resources detecting failures are responsible for taking appropriate actions. Our fault tolerance scheme uses randomization to cope with multiple simultaneous failures. Our simulation results show that the recovery process is fast and that failures affect the scheduling process minimally.",2008,0, 2313,The Level of Decomposition Impact on Component Fault Tolerance,"In fault tolerant software systems, the Level of Decomposition (LoD) at which design diversity is applied has a major impact on software system reliability. By disregarding this impact, current fault tolerance techniques are prone to reliability decreases due to an inappropriate application level of design diversity. In this paper, we quantify the effect of the LoD on system reliability during software recomposition, when the functionalities of the system are redistributed across its components. We discuss the LoD in fault tolerant software architectures according to three component failure transitions: component failure occurrence, component failure propagation, and component failure impact. We illustrate the component aspects that relate the LoD to each of these failure transitions. Finally, we quantify the effect of the LoD on system reliability according to a series of decomposition and/or merge operations that may occur during software recomposition.",2010,0, 2314,Modelling the fault correction process,"In general, software reliability models have focused on modeling and predicting failure occurrence and have not given equal priority to modeling the fault correction process. However, there is a need for fault correction prediction, because there are important applications that fault correction modeling and prediction support. These are the following: predicting whether reliability goals have been achieved, developing stopping rules for testing, formulating test strategies, and rationally allocating test resources. Because these factors are related, we integrate them in our model. Our modeling approach involves relating fault correction to failure prediction, with a time delay between failure detection and fault correction, represented by a random variable whose distribution parameters are estimated from observed data.",2001,0, 2315,DC reactor effect on bridge type superconducting fault current limiter during load increasing,"In high power applications, the fault current limiter has been discussed for many years because of some limitations of conventional circuit breakers. Many types of fault current limiters have already been introduced in the literature. In this work, a simple bridge-type fault current limiter has been designed and constructed. The performance of the limiter has been tested successfully.
In the bridge-type current limiter, a DC reactor appears in the line when the connected load is increasing. This causes a voltage drop across the load terminal during load changing. The DC reactor effect of the current limiter has been studied. Some experimental results regarding the reactor effect of the limiter have been considered and compared with the results obtained from computer simulation.",2001,0, 2316,HLBP: A Hybrid Leader Based Protocol for MAC Layer Multicast Error Control in Wireless LANs,"In IEEE 802.11 Wireless LANs, current standard MAC layer protocols do not provide any error correction scheme for broadcast/multicast. In our previous work, we enhanced a Leader Based Protocol (LBP) and proposed a Beacon-driven Leader Based Protocol (BLBP) for MAC layer multicast error control. In this paper, we combine BLBP and packet-level Forward Error Correction (FEC) and propose a Hybrid Leader Based Protocol (HLBP). HLBP transmits the original data packets using raw broadcast and retransmits parity packets using an improved BLBP which is based on block feedback. To guarantee the required Packet Loss Ratio (PLR) under strict delay constraints, we analyze HLBP, develop formulas to optimize its performance and evaluate its performance via simulation experiments. Both theoretical analysis and simulation results show that HLBP is much more efficient than LBP and BLBP, especially for large multicast groups, and is even more efficient than the best application layer multicast error correction scheme.",2008,0, 2317,High-speed forward error correction IP blocks for system-on-chip design,"High-speed forward error correction IP (intellectual property) blocks suitable for system-on-chip were designed for a wireless communications transceiver. Synthesizable HDL (hardware description language) code was written for the blocks, including a differential encoder/decoder, a Reed-Solomon encoder/decoder and a convolutional interleaver/deinterleaver. The code was compiled, tested and verified for timing and function in both FPGA and 0.18 μm CMOS.",2002,0, 2318,Development of a simulation model of a HTS-fault current limiter for the network computation software DIgSILENT PowerFactory,"High-temperature superconducting fault current limiters (HTS-FCL) are excellent equipment for fault current limitation. In the scope of an extensive research project regarding current limitation in power station auxiliary systems, an HTS-FCL simulation model is currently being developed. With the help of this model, the widespread implications of using an HTS-FCL at various locations and voltage levels in the auxiliary systems will be examined.",2010,0, 2319,Detection and correction of deformed historical arabic manuscripts,"Historical manuscripts are considered one of the most important human treasures and a source of intellectual production. Unfortunately, due to aging effects, multiple noise sources and deviations are found in the document images. Moreover, several images of ancient documents show defects such as inclination and curvature of the text lines. These defects arise due to bad storage conditions or during the digitization process. In order to improve the readability and the automatic recognition of historical Arabic manuscripts, preprocessing steps are imperative. This paper presents a novel method that consists of two major phases. The first refers to the binarization and enhancement of the scanned document image.
In the second phase, the skew angle of the text line is corrected by detecting the curvature/inclination of the baseline, calculating the skew angle of this line, and finally correcting the line with a rotation relative to its centre. The proposed method was implemented on different scanned Arabic documents. The proposed methodology overcomes the defects of the global binarization method and saves the high computation effort of adaptive binarization techniques. Moreover, it works well with both Arabic handwritten words and printed text.",2010,0, 2320,On Topology Reconfiguration for Defect-Tolerant NoC-Based Homogeneous Manycore Systems,"Homogeneous manycore systems are emerging for tera-scale computation and typically utilize Network-on-Chip (NoC) as the communication scheme between embedded cores. Effective defect tolerance techniques are essential to improve the yield of such complex integrated circuits. We propose to achieve fault tolerance by employing redundancy at the core-level instead of at the microarchitecture level. When faulty cores exist on-chip in this architecture, however, the physical topologies of various manufactured chips can be significantly different. How to reconfigure the system with the most effective NoC topology is a relevant research problem. In this paper, we first show that this problem is an instance of a well known NP-complete problem. We then present novel solutions for the above problem, which not only maximize the performance of the on-chip communication scheme, but also provide a unified topology to the Operating System and application software running on the processor. Experimental results show the effectiveness of the proposed techniques.",2009,0, 2321,A Human Factors fault tree analysis method for software engineering,"Human Factors Analysis has realistic and profound significance for improving the quality and reliability of software. However, there is little research on methods applied in software engineering to analyze human error. This paper proposes a human factors analysis method, which applies the fault tree analysis method to seek the human factors causing software accidents. The fault tree analysis method brings great flexibility, and as a graphical deduction it makes it easier to find the critical links of human errors.",2008,0, 2322,User-Centered Interface Reconfiguration for Error Reduction in Human-Computer Interaction,"Human-computer interaction (HCI) is greatly influenced by findings in psychological research concerning interaction with complex technical processes. Psychological concepts like perception and comprehension of information, as well as projection of future system states, can be summarized as situation awareness. Poor situation awareness can increase the probability of human error during interaction, caused by erroneous recall and interpretation of percept information through an interface that does not match the user's understanding and mental capabilities. Therefore, a new approach to human-centric reconfiguration of user interfaces will be presented to support situation awareness and, in this way, reduce human errors that occur during interaction.",2010,0, 2323,Reliability-based structural integrity assessment of Liquefied Natural Gas tank with hydrogen blistering defects by MCS method,"Hydrogen blistering is one of the serious threats to the safe operation of a Liquefied Natural Gas (LNG) tank; therefore, safety analysis of hydrogen blistering defects is very important.
In order to assess the reliability-based structural integrity of the LNG tank with defects of hydrogen blistering, the following steps were carried out. Firstly, the Abaqus code, a Finite Element Method (FEM) software package, was utilized to calculate 100 crack-tip J-integral values by direct definition. Secondly, the 100 crack-tip J-integral values were used as training data and testing data by an Optimized Least Squares Support Vector Machine (OLS-SVM), a Least Squares Support Vector Machine (LS-SVM) and Artificial Neural Networks (ANN) to obtain another 20000 crack-tip J-integral values. Finally, Monte-Carlo Simulation (MCS) was used for the reliability-based structural integrity analysis. The results showed that the hydrogen blistering defect with a crack will propagate with a probability of about 14 percent in such a case. It also proved that MCS combined with FEM and SVM is an effective and promising method for research and application of integrity assessment, which can overcome the data source problem.",2010,0, 2324,Compact ASIC implementation of the ICEBERG block cipher with concurrent error detection,"ICEBERG is a block cipher that has been recently proposed for security applications requiring efficient FPGA implementations. In this paper, we investigate a compact ASIC implementation of ICEBERG and consider the novel application of concurrent error detection to protect the implementation from fault-based attacks. The compact architecture of ICEBERG requires about 5800 gates with a throughput of 552 Mbps in an ASIC implementation based on 0.18 μm CMOS technology. The addition of an effective multiple parity concurrent error detection scheme to protect the hardware from fault attacks results in a 62% area overhead.",2008,0, 2325,Novel method for fault section identification,"From the stability point of view, fault section identification is an important task for transmission systems. In this paper, a novel method for section identification using an extended Kalman filter (EKF) has been proposed. Subsynchronous frequency, as an indicator for fault section identification, is estimated by the EKF algorithm. Several tests with different fault locations have been performed to show the performance of the method. Simulation results reveal the high performance of the method. In all tests, the proposed algorithm accurately detects the presence of subsynchronous frequency.",2009,0, 2326,Determination of GPS positioning errors due to multi-path in civil aviation,"Fully automated landing systems based on GPS still suffer from errors, mainly due to multipath propagation. This paper presents a simulation tool to quantify positioning errors by combining a ray-tracing based three-dimensional multipath propagation model with a standard GPS receiver model. The propagation model accounts for the reflecting environment geometry, transmitting and receiving antenna patterns, reflections and depolarization. For each ray the time delay, carrier phase delay and multipath-to-direct amplitude ratio is computed and fed into the receiver model to determine the pseudorange errors. The performance of the method is illustrated with numerical examples.",2005,0, 2327,Using Field-Repairable Control Logic to Correct Design Errors in Microprocessors,"Functional correctness is a vital attribute of any hardware design. Unfortunately, due to extremely complex architectures, widespread components, such as microprocessors, are often released with latent bugs.
The inability of modern verification tools to handle the fast growth of design complexity exacerbates the problem even further. In this paper, we propose a novel hardware-patching mechanism, called the field-repairable control logic (FRCL), that is designed for in-the-field correction of errors in the design's control logic - the most common type of defect, as our analysis demonstrates. Our solution introduces an additional component in the processor's hardware, a state matcher, that can be programmed to identify erroneous configurations using signals in the critical control state of the processor. Once a flawed configuration is ""matched,"" the processor switches into a degraded mode, a mode of operation which excludes most features of the system and is simple enough to be formally verified, yet still capable of executing the full instruction-set architecture one instruction at a time. Once the program segment exposing the design flaw has been executed in degraded mode, we can switch the processor back to its full-performance mode. In this paper, we analyze a range of approaches to selecting signals comprising the processor's critical control state and evaluate their effectiveness in representing a variety of design errors. We also introduce a new metric (average specificity per signal) that encodes the bug-detection capability and amount of control state of a particular critical signal set. We demonstrate that the FRCL can support the detection and correction of multiple design errors with a performance impact of less than 5% as long as the incidence of the flawed configurations is below 1% of dynamic instructions. In addition, the area impact of our solution is less than 2% for the two microprocessor designs that we investigated in our experiments.",2008,0, 2328,Testing content-addressable memories using functional fault models and march-like algorithms,"Functional tests for content-addressable memories (CAM's) are presented in this paper. In addition to several traditional functional fault models for RAM's, we also consider fault models based on physical defects, such as shorts between two circuit nodes and transistor stuck-on and stuck-open faults. Accordingly, several functional fault models are proposed. In order to make our approach suited to various application-specific CAM's, we propose tests which require only three fundamental types of operation (i.e., write, erase, and compare), and the test results can be observed entirely from the single-bit Hit output. A complete, compact test is also proposed, which has low complexity and is suitable for modern high-density and large-capacity CAMs - it requires only 2N+3w+2 compare operations and 8N write operations to cover the functional fault models discussed, where N is the number of words and w is the word length.",2000,0, 2329,FSimGP^2: An Efficient Fault Simulator with GPGPU,"General Purpose computing on Graphical Processing Units (GPGPU) is a paradigm shift in computing that promises a dramatic increase in performance. But GPGPU also brings an unprecedented level of complexity in algorithmic design and software development. In this paper, we present an efficient parallel fault simulator, FSimGP^2, that exploits the high degree of parallelism supported by a state-of-the-art graphic processing unit (GPU) with the NVIDIA Compute Unified Device Architecture (CUDA). A novel three-dimensional parallel fault simulation technique is proposed to achieve extremely high computation efficiency on the GPU.
The experimental results demonstrate a speedup of up to 42× compared to another GPU-based fault simulator and up to 53× over a state-of-the-art algorithm on conventional processor architectures.",2010,0, 2330,Fault-Tolerance for Component-Based Systems - An Automated Middleware Specialization Approach,"General-purpose middleware, by definition, cannot readily support domain-specific semantics without significant manual efforts in specializing the middleware. This paper presents GRAFT (GeneRative Aspects for Fault Tolerance), which is a model-driven, automated, and aspects-based approach for specializing general-purpose middleware with failure handling and recovery semantics imposed by a domain. Model-driven techniques are used to specify the special fault tolerance requirements, which are then transformed into middleware-level code artifacts using generative programming. Since the resulting fault tolerance semantics often crosscut the middleware architecture, GRAFT uses aspect-oriented programming to weave them into the original fabric of the general-purpose middleware. We evaluate the capabilities of GRAFT using a representative case study.",2009,0, 2331,An adaptive inverse time-delay characteristic of the zero-sequence overvoltage protection for identification of the single-phase earth fault in the neutral non-effectively grounded power systems,"In a neutral non-effectively grounded power system, the correct identification and isolation of the faulty feeder on the occurrence of a single-phase earth fault is always a difficult task. To improve the level of distribution automation, a novel method of negative-sequence current compensation based adaptive zero-sequence over-voltage protection is put forward in this paper. On the basis of the analysis of the transient negative-sequence current, it can be proved that the negative sequence current through the faulty feeder is far higher than that through the sound feeders in any configuration of distribution network and neutral compensation modes. Therefore, an adaptive inverse time delay characteristic is adopted by the basic zero-sequence over-voltage protection. A compensated voltage is generated by means of multiplying the modal maximum of the negative-sequence current of the feeder by a settable compensated reactance, and it is then combined with the magnitude of the zero-sequence voltage to form the compound compensated voltage. This compensated voltage is utilized to revise the characteristic of the inverse time-delay. By this means, the zero-sequence over-voltage protection will adaptively possess selectivity. The effectiveness of the proposed method has been verified with the results of theoretical analysis and simulations.",2009,0, 2332,The construction of a novel agent fault-tolerant migration model,"In the agent migration process, malicious hosts can compromise the agent. To solve the problem, the paper introduces the measure of agent integrity verification and constructs a novel agent fault-tolerant migration model. The model avoids numerous agent replicas in the migration process. By simulation experiment, the results prove that the model provided by the paper is feasible and efficient, and can save considerably more network resources than other related works.",2004,0, 2333,Performance evaluation of a fault-tolerant irregular network,"In an attempt to improve the fault-tolerance of the Omega network, this paper examines the performance of the proposed Theta network (THN), and compares it with other networks having similar characteristics.
The irregular nature of the network has the inherent advantage of improving the latency of the network. Analytical results exhibit the favorable performance of THN at low cost, making the reliability degrade gracefully with time, while maintaining full-access capability over a reasonably long time. We study methods for routing requests in the presence and absence of faulty components in THN, where 50% of the requests pass at the minimum path length of 2.",2002,0, 2334,Error Estimation Models Integrating Previous Models and Using Artificial Neural Networks for Embedded Software Development Projects,"In an earlier paper, we established 9 models for estimating errors in a new project. In this paper, we integrate the 9 models into 5 by investigating similarities among the models. In addition, we establish a new model using an artificial neural network (ANN). It is becoming increasingly important for software-development corporations to ascertain how to develop software efficiently, whilst guaranteeing delivery time and quality, and keeping development costs low. Estimating the manpower required by new projects and guaranteeing the quality of software are particularly important, because the estimation relates directly to costs while the quality reflects on the reliability of the corporations. In the field of embedded software, development techniques, management techniques, tools, testing techniques, reuse techniques, real-time operating systems and so on, have already been studied. However, there is little research on the relationship between the scale of the development and the number of errors using data accumulated from past projects. Hence, we integrate the previous models and establish a new model using an artificial neural network (ANN). We also compare the accuracy of the ANN model and the regression analysis models. The results of these comparisons indicate that the ANN model is more accurate than any of the 5 integrated models.",2008,0, 2335,Goal-based fault tolerance for space systems using the mission data system,"In anticipating in situ exploration and other circumstances with environmental uncertainty, the present model for space system fault tolerance breaks down. The perplexities of fault-tolerant behavior, once confined to infrequent episodes, must now extend to the entire operational model. To address this dilemma we need a unified approach to robust behavior that includes fault tolerance as an intrinsic feature. This requires an approach capable of measuring operators' intent in the light of present circumstances, so that actions are derived by reasoning, not by edict. The Mission Data System (MDS), presently under development by NASA, is one realization of this paradigm - part of a larger effort to provide multi-mission flight and ground software for the next generation of deep space systems. This paper describes the MDS approach to fault tolerance, contrasting it with past efforts, and offering motivation for the approach as a general recipe for similar efforts.",2001,0, 2336,An Extension of Differential Fault Analysis on AES,"At CHES 2006, A. Moradi et al. introduced a generalized method of differential fault attack (DFA) against AES-128. Their fault models cover all locations before the 9th round in AES-128. However, their method cannot be applied to AES with other key sizes, such as AES-192 and AES-256. Based on differential analysis, we propose a new method to extend DFA on AES with all key sizes.
Our results in this study will also be beneficial to the analysis of other iterated block ciphers of the same type.",2009,0, 2337,Chip Error Pattern Analysis in IEEE 802.15.4,"The IEEE 802.15.4 standard specifies physical layer (PHY) and medium access control (MAC) sublayer protocols for low-rate and low-power communication applications. In this protocol, every 4-bit symbol is encoded into a sequence of 32 chips that are actually transmitted over the air. The 32 chips as a whole are also called a pseudo-noise code (PN-Code). Due to complex channel conditions such as attenuation and interference, the transmitted PN-Code will often be received with some PN-Code chips corrupted. In this paper, we conduct a systematic analysis of these errors occurring at chip level. We find that there are notable error patterns corresponding to different cases. Recognizing these patterns will enable us to identify the channel condition in great detail. We believe that understanding what happened to the transmission in our setup can potentially bring benefits to channel coding, routing and error correction protocol design.",2010,0, 2338,IFRA: Post-silicon bug localization in processors,"IFRA overcomes challenges associated with an expensive step in post-silicon validation of processors - pinpointing the bug location and the instruction sequence that exposes the bug from a system failure. On-chip recorders collect instruction footprints (information about flows of instructions, and what the instructions did as they passed through various design blocks) during the normal operation of the processor in a post-silicon system validation setup. Upon system failure, the recorded information is scanned out and analyzed off-line for bug localization. Special self-consistency-based program analysis techniques, together with the test program binary of the application executed during post-silicon validation, are used. Major benefits of using IFRA over traditional techniques for post-silicon bug localization are: (1) it does not require full system-level reproduction of bugs, and (2) it does not require full system-level simulation. Simulation results on a complex super-scalar processor demonstrate that IFRA is effective in accurately localizing electrical bugs with very little impact on overall chip area.",2009,0, 2339,Low-Cost Hardening of Image Processing Applications Against Soft Errors,"Image processing systems are increasingly used in safety-critical applications, and their hardening against soft errors becomes an issue. The authors propose a methodology to identify soft errors as uncritical based on their impact on the system's functionality. The authors call a soft error uncritical if its impact is provably limited to image perturbations during a very short period of time (number of cycles) and the system is guaranteed to recover thereafter. Uncritical errors do not require hardening, as their effects are imperceptible to the human user of the system. The authors focus on soft errors in the motion estimation subsystem of MPEG-2 and introduce different definitions of uncritical soft errors in that subsystem. A method is proposed to automatically determine uncritical errors and provide experimental results for various parameters. The concept can be adapted to further systems and enhance existing methods.",2006,0, 2340,Investigating Test Teams' Defect Detection in Function test,"In a case study, the defect detection of functional test teams is investigated.
In the study it is shown that the test teams not only discover defects in the features under test that they are responsible for, but also defects in interacting components belonging to other test teams' features. The paper presents the metrics collected and the results as such from the study, which gives insights into a complex development environment and highlights the need for coordination between test teams in function test.",2007,0, 2341,Quantization errors in committee machine for gas sensor applications,"In a digital implementation of a gas identification system, the mapping of continuous real parameter values into a finite set of discrete values introduces an error into the system. This paper presents the results of an investigation into the effects of parameter quantization on different classifiers (KNN, MLP and GMM). We propose a committee machine to decrease the classification performance degradation due to the quantization errors. The simulation results show that the committee machine always outperforms a single classifier and that the gain in classification performance is greater for a reduced number of bits.",2005,0, 2342,Random defect limited yield using a deterministic model,"For success in the competitive semiconductor industry, reducing cost per die is necessary. One way to accomplish this is to move to the next generation, or shrink, technology to produce more die per wafer. Similarly, it is just as important to produce better die per wafer by minimizing the cycle time to detect and fix yield problems associated with the current technology. Wafer sort yield (good chips/wafer) can be broken into three components: random defect limited yield, systematic yield, and repeating yield loss. Random defect limited yield is due to defects, primarily caused by process equipment and their byproducts. Defects, usually randomly distributed, can also be localized to one or multiple die on a wafer. In-line QC inspection tools can detect most defects. Systematic yield losses are process-related and can affect all or some die on a wafer, or die by wafer region. Systematic yield losses are not detectable by in-line QC defect inspection tools. Repeating yield loss is due to reticle defects occurring on the same die within a reticle field. Repeating defects are caused by contamination on the stepper lens, by contamination of the pellicle that protects the reticle, or by reticle contamination. Reticle defects are sometimes detectable by in-line QC inspection tools. The paper focuses on calculations of random defect limited yield using a deterministic yield model. The model is used to prioritize defect problems and drive yield improvements. Actual examples are used to demonstrate the benefits and strengths of the deterministic model. We also discuss the methodology, assumptions, and limitations of this model.",2001,0,1589 2343,Development of a Petri net-based fault diagnostic system for industrial processes,"For the improvement of the reliability and safety of industrial processes, a fault detection and tracing approach has been proposed. In this paper, the P-invariant of Petri nets (PN) is applied to discover sequence faults, while both sensor faults and actuator faults are detected using exclusive logic functions. For industrial applications, the proposed fault detector has been implemented within a programmable logic controller (PLC) by converting the fault detection logic functions into ladder logic diagrams (LLD).
Moreover, a fault tracer has been modeled by an AND/OR tree and a tracing procedure is provided to locate the faults. A mark stamping process is demonstrated as an example to illustrate the proposed diagnostic approach.",2009,0, 2344,Comparative study of single-phase earth fault responses of FCC-HVDC and LCC-HVDC,"In order to study the transient response characteristics of the Filter Commutated Converter (FCC) under the condition of a single-phase earth fault, this paper, referring to the CIGRE HVDC benchmark testing model (LCC-HVDC) and keeping the short circuit ratio (SCR) and some parameters unchanged, combines the connection features of the FCC to establish a new FCC-based testing model (FCC-HVDC) in PSCAD/EMTDC. Based on the two models, LCC-HVDC and FCC-HVDC, the magnetizing and harmonic characteristics under the single-phase earth fault are analyzed comparatively. The results indicate: (1) the magnetizing current of the converter transformer shows a sudden increase and unidirectional magnetization in both models, but the magnetizing characteristics of FCC-HVDC are better than those of LCC-HVDC; (2) FCC-HVDC still maintains its good harmonic suppression characteristics.",2010,0, 2345,New EMTP-RV Equivalent Circuit Model of Core-Shielding Superconducting Fault Current Limiter Taking Into Account the Flux Diffusion Phenomenon,"In order to successfully integrate superconducting fault current limiters (SFCL) into electric power system networks, accurate and fast simulation models are needed. This led us to develop a generic electric circuit model of an inductive SFCL, which we implemented in the EMTP-RV software. The selected SFCL is of shielded-core type, i.e. a HTS hollow cylinder surrounds the central leg of a magnetic core, and is located inside a primary copper winding, generating an AC magnetic field proportional to the line current. The model accounts for the highly nonlinear flux diffusion phenomenon across the superconducting cylinder, governed by the Maxwell equations and the non-linear E-J relationship of HTS materials. The computational efficiency and simplicity of this model resides in a judicious 1-D approximation of the geometry, together with the use of an equivalent electric circuit that reproduces accurately the actual magnetic behavior for the flux density (B) inside the walls of the HTS cylinder. The HTS properties are not restricted to the simple power law model; instead, any resistivity function depending on J, B and T can be used and inserted directly in the model through a non-linear resistance appearing in the equivalent circuit.",2009,0, 2346,Achieving fault tolerance in FTT-CAN,"In order to use the FTT-CAN protocol (flexible time-triggered communication over controller area network) in safety-critical applications, the impact of network errors and node failures must be thoroughly determined and minimized. This paper presents and discusses fault-tolerance techniques to limit that impact. The particular configuration of the communication system can be more or less complex and fault-tolerant as desired by the system designer. The paper includes the fault hypothesis and presents a replicated network architecture using bus guardians. An important aspect is the replication of the master node that schedules the time-triggered traffic. In this case, it is particularly important to assure correct synchronization of the master replicas. The mechanisms that support masters' replication and synchronization are described and their performance is evaluated.
The resulting architecture allows a reduction of the conflicts between safety and flexibility, supporting the use of FTT-CAN in safety-critical applications.",2002,0, 2347,Evaluation of distribution fault diagnosis algorithms using ROC curves,"In power distribution fault data, the percentage of faults with different causes could be very different and varies from region to region. This data imbalance issue seriously affects the performance evaluation of fault diagnosis algorithms. Due to the limitations of conventional accuracy (ACC) and geometric mean (G-mean) measures, this paper discusses the application of Receiver Operating Characteristic (ROC) curves in evaluating distribution fault diagnosis performance. After introducing how to obtain ROC curves, Artificial Neural Networks (ANN), Logistic Regression (LR), Support Vector Machines (SVM), Artificial Immune Recognition Systems (AIRS), and the K-Nearest Neighbor (KNN) algorithm are compared using ROC curves and Area Under the Curve (AUC) on real-world fault datasets from Progress Energy Carolinas. Experimental results show that AIRS performs best most of the time and that ANN is potentially a good algorithm with a proper decision threshold.",2010,0, 2348,An approach of automatically performing Fault Tree Analysis and failure mode and effect techniques to software processes,"In practice, engineers find that software quality depends heavily on the maturity and reliability of the software process. Therefore, organizations are placing increased attention on finding an efficient way of integrating elements of the software process in order to produce high quality software with lower cost and risk. Various kinds of techniques are used to manage, monitor and analyze the software process. We propose modeling the software process in the Little-JIL language, and then introduce automatic Fault Tree Analysis (FTA) and Failure Mode and Effect Analysis (FMEA), two fully-fledged and widely used safety analysis techniques, to analyze the software process. The results of these two techniques can be combined to improve the software process, which may lead to software products with higher quality and reliability, and also lower cost and risk.",2010,0, 2349,Study of the relationship of bug consistency with respect to performance of spectra metrics,"In practice, manual debugging to locate bugs is a daunting and time-consuming task. By using software fault localization, we can reduce this time substantially. The technique of software fault localization can be performed using execution profiles of the software under several test inputs. Such profiles, known as program spectra, consist of the statement coverage of correct and incorrect executions from a given test suite. We have performed a systematic evaluation of several metrics that make use of measurements obtained from program spectra on the Siemens test suite. In this paper, we discuss how the effectiveness of various metrics degrades in determining buggy statements as the bug consistency (error detection accuracy, qe) of a statement approaches zero. Bug consistency of a statement refers to the ratio of the number of failed tests executing the statement over the total number of tests executing the statement. We propose effect(M) to measure the effectiveness of these metrics as the qe value varies. We also demonstrate that qe (previously not considered as a metric) is just as effective as some of the metrics proposed.
We also formally prove that qe is identical to the metric that the Tarantula system uses for bug localization.",2009,0, 2350,Influences of different excitation parameters upon PEC testing for deep-layered defect detection with rectangular sensor,"In pulsed eddy current testing, repetitive excitation signals with different parameters (duty-cycle, frequency and amplitude) have different response representations. This work studies the influences of different excitation parameters on pulsed eddy current testing for deep-layered defect detection in stratified samples with a rectangular sensor. The sensor had been proved to be superior in quantification and classification of defects in multi-layered structures compared with traditional circular ones. Experimental results show the necessity of optimizing the parameters of the pulsed excitation signal, and the advantage of obtaining better performance to enhance the POD of PEC testing.",2010,0, 2351,Performance Modeling of Fault Tolerant Fully Adaptive Wormhole Switching 2-D Meshes in Presence of Virtual Channels,"In recent decades, researchers have tried to construct high performance interconnection networks with emphasis on fault tolerance. Their studies are based on the fact that a network can be a major performance bottleneck in parallel processors. This paper proposes an analytical model to predict message latency in wormhole-switched meshes as an instance of fault tolerant routing. The mesh topology has desirable properties such as modularity, regularity of structure, easy partitioning into smaller meshes, and simplicity of implementation. To achieve our purpose we modeled an algorithm suggested by Linder and Harden; the performance of this model was then evaluated using XMulator.",2008,0, 2352,Efficient Simulation of Structural Faults for the Reliability Evaluation at System-Level,"In recent technology nodes, reliability is considered a part of the standard design flow at all levels of embedded system design. While techniques that use only low-level models at gate- and register transfer-level offer high accuracy, they are too inefficient to consider the overall application of the embedded system. Multi-level models with high abstraction are essential to efficiently evaluate the impact of physical defects on the system. This paper provides a methodology that leverages state-of-the-art techniques for efficient fault simulation of structural faults together with transaction-level modeling. This way it is possible to accurately evaluate the impact of the faults on the entire hardware/software system. A case study of a system consisting of hardware and software for image compression and data encryption is presented and the method is compared to a standard gate/RT mixed-level approach.",2010,0, 2353,Behavioral analysis of a fault-tolerant software system with rejuvenation,"In recent years, considerable attention has been devoted to continuously running software systems whose performance characteristics are smoothly degrading in time. Software aging often affects the performance of a software system and eventually causes it to fail. A novel approach to handle transient software failures due to software aging is called software rejuvenation, which can be regarded as a preventive and proactive solution that is particularly useful for counteracting the aging phenomenon. In this paper, we focus on a high assurance software system with fault-tolerance and preventive rejuvenation, and analyze the stochastic behavior of such a highly critical software system.
More precisely, we consider a fault-tolerant software system with a two-version redundant structure and random rejuvenation schedule, and quantitatively evaluate a dependability measure like the steady-state system availability based on the familiar Markovian analysis. In numerical examples, we examine the dependence of the system dependability measure on two diversity techniques: design diversity and environment diversity.",2005,0, 2354,Modeling with extended fault trees,"In the areas of both safety and reliability analysis, the precise modeling of complex technical systems during their development and for evaluation purposes is of great importance. Traditionally, fault tree models have been used to accomplish this, and, more recently, stochastic Petri-net models have begun to be employed. To provide engineers with an intuitive high-level modeling interface to Petri nets, this paper introduces an approach combining extended fault trees for the description of the system and stochastic Petri nets for the evaluation and analysis of the model.",2000,0, 2355,An Algorithm for Vehicle License Plate Tilt Correction Based on Line Fitting Method,"In the automatic recognition process of vehicle license plates (VLP), tilt correction is a very crucial link. According to the feature of K-Means Clustering (KMC), the image character coordinates are divided into two clusters (classes) to fit a straight line, and then the line slope k is obtained. The rotation angle a is calculated using k and the whole image is rotated by a. In the vertical tilt correction process, two correction methods, the line fitting method based on KMC (LFMKMC) and the line fitting method based on least squares (LFMLS), are proposed to compute the vertical tilt angle. After it is obtained, a Shear Transform (ST) is applied to the rotated image and the final corrected image is created. The experimental results show that this paper's algorithm (TPA), compared with the Hough transform, shortens the processing time and is more effective. Under the same conditions, the processing time of TPA is 36 times faster than that of the Hough transform. It also provides a new effective way for VLP image tilt correction.",2010,0, 2356,"Symbol Error Probability of Rectangular QAM in MRC Systems With Correlated Fading Channels","In the context of generalized fading scenarios, this paper provides a general closed-form expression for the average symbol error probability (SEP) of arbitrary M-ary quadrature amplitude modulation (QAM) constellations in maximal-ratio combining (MRC) schemes over non-identical correlated channels. For such a scenario, the moment-generating function (MGF) of the signal-to-noise ratio (SNR) at the combiner output is obtained by rearranging the Gaussian components used to model the correlation between the diversity branches. The average SEP is then derived in terms of Lauricella multivariate hypergeometric functions, which can be implemented in most popular numerical software packages, such as Mathematica. The proposed analytical expressions are validated through Monte Carlo simulations, and insightful discussions are provided from the numerical results.",2010,0, 2357,QoS-constrained Fault-tolerant Routing in MANETs based on Segment-Backup Paths,"In the context of mobile ad hoc networks (MANETs), we consider the problem of identifying (a) an optimal primary path which satisfies the required QoS constraints, and (b) a set of alternate paths that may be used in case a link or a node on the primary path fails.
The alternate paths are also required to satisfy the same set of QoS constraints as the primary path. This methodology ensures two things: (a) when there is no link or node failure, the traffic moves along the preferred optimal route, but (b) if there is a link or node failure, the traffic will be instantly re-routed along a route that continues to satisfy the same QoS constraints, although it may experience some performance degradation. In the paper, we have proposed that the traffic be re-routed along a sub-path that by-passes a segment of the primary path that contains the failed link or node. The identification of the segments is not fixed a priori but is determined based on (a) the availability of alternate paths, and (b) the requirement that the QoS constraints be met. This approach ensures that if the connectivity between a given pair of nodes is rich enough, then for any primary path one can always find alternate paths so as to address the problem of link or node failure. This flexibility in identifying the segments can also be used to ensure that the delay in switching traffic over to an alternate path, and the resulting packet loss, are bounded. We have described a protocol to identify (a) a primary path, (b) the collections of segments, and (c) the corresponding set of alternate paths, one for each segment, each of which satisfies specified QoS constraints, and so that the delay in switching traffic over to an alternate path is bounded.",2006,0, 2358,Identification Research on Coordinate Control Influencing Factors among Enterprises of B2B EC Based on Fault Diagnosis,"In the network era, B2B electronic commerce (EC) has gradually become a new bright spot and a trend of future business development. B2B EC has huge development potential in China, but there is a lack of research on coordinate control among enterprises of B2B EC, which is a disadvantage to the smooth running of B2B EC in China. Therefore, the paper analyses B2B EC types and the coordinate control influencing factors among enterprises of B2B EC. The paper then constructs a research model for identification of the coordinate control influencing factors among enterprises of B2B EC, and shows how to use this research model with a calculated example. This provides a basis for coordinate control among enterprises of B2B EC.",2008,0, 2359,Fault-tolerant scheduling with dynamic number of replicas in heterogeneous systems,"In the existing studies on fault-tolerant scheduling, the active replication schema makes use of E + 1 replicas for each task to tolerate E failures. However, in this paper, we show that more replicas do not always lead to higher reliability. Besides, more replicas imply more resource consumption and higher economic cost. To address this problem, with the target of satisfying the user's reliability requirement with minimum resources, this paper proposes a new fault tolerant scheduling algorithm: MaxRe. In the algorithm, we incorporate the reliability analysis into the active replication schema, and exploit a dynamic number of replicas for different tasks. Both the theoretical analysis and experiments prove that the MaxRe algorithm's schedule can certainly satisfy user's reliability requirements.
Moreover, the MaxRe scheduling algorithm can achieve the corresponding reliability with at most 70% fewer resources than the FTSA algorithm.",2010,0, 2360,Research on Fault Diagnosis Based on SVM and Monkey-King Genetic Algorithm,"In fault diagnosis based on support vector machines (SVM), the SVM parameters are mostly selected artificially or obtained through repeated experiments; a definite and effective method has not been found. Aiming at this problem, a method of optimizing the SVM parameters with the Monkey-King genetic algorithm (MKSVM) is presented. The optimized parameters are used in the built model, and the superiority of SVM in processing finite samples is fully brought into play. The experimental result shows that the method can obtain higher diagnosis accuracy with fewer features, that the proposed method can accurately find the optimum in a wide range, and that the value can be used to diagnose the fault effectively.",2009,0, 2361,Cooperating search agents explore more than defecting search agents in the Internet information access,"In the Internet Information Access Problem, information-seeking agents (software or humans) are selfishly rational in obtaining the information sought. From a single agent's perspective, sending out as many queries as possible maximizes the chance of achieving the information sought. However, if every agent does the same, the information servers will be overloaded and most of the search agents won't be able to retrieve the information. Our previous results suggest that when behaviorally similar information-seeking agents cluster together, cooperation is promoted. In these experiments, the range of query (i.e., the maximum logical distance from the information-seeking agents to potential information servers) is fixed for each search agent; agents only inquire of the servers within the distance. This article evolves the range of the access distance. When similar agents cluster together (cooperators with cooperators and defectors with defectors), cooperators tend to access diversified information sites while defectors tend to access only common information sites, resulting in high congestion. This phenomenon can be seen in human agents as well. When an agent sees too much competition or overuse of a resource, it considers alternative choices. For example, when people see a congested highway, they tend to take other routes even if the routes may be longer. A similar phenomenon is observed in our experiments. The results of the research can be used to help design Internet search agents that are efficient and less burdensome to information servers.",2001,0, 2362,An Adaptive and Flexible Fault Tolerance Mechanism Designed on Multi-behavior Agents for Wireless Sensor/Actuator Network,"In the last few years, WSNs have been the object of intense research activity that has brought important improvements from both the technological and computational points of view. The notable level reached and the increasing demand for applications designed over sensor networks mean that the commercial diffusion of WSNs is close to becoming a fact. One of the key issues for the commercial diffusion of WSNs is the robustness of architectures. An adaptive and flexible fault tolerance mechanism for WSN is proposed in the paper.
Considering the tradeoffs between robustness and energy efficiency as the central issue, a programming model based on multi-behavior agents that can guarantee an efficient, dynamic and extendible implementation is also proposed.",2007,0, 2363,Kalman filter estimation of the contention dynamics in error-prone IEEE 802.11 networks,"In recent years, several strategies for maximizing the throughput performance of IEEE 802.11 networks have been proposed in the literature. Specifically, it has been shown that optimizations are possible both at the medium access control (MAC) layer and at the physical (PHY) layer. In fact, at the MAC layer, it is possible to minimize the channel waste due to collisions and backoff expiration times, by tuning the minimum contention window as a function of the number n of competing stations. At the PHY layer, it is possible to improve the transmission robustness, by selecting a suitable modulation/coding scheme as a function of the channel quality perceived by the stations. However, the feasibility of these optimizations relies on the availability of MAC/PHY measurements, which are often impracticable or very rough. In this paper, we propose a joint MAC/PHY estimator based on a bi-dimensional extended Kalman filter, devised to track the number of competing stations and the frame error probability suffered in the network. To this purpose, we derive a relationship between the unobservable system state and measurements which are performed in a distributed manner by all the competing stations.",2008,0, 2364,Phenomenon rotor fault - Multiple electrical rotor asymmetries in induction machines,"In the literature, the effects caused by a single or several adjacently broken rotor bars, or a broken end ring, are thoroughly investigated. The phenomenon of various non-adjacently broken rotor bars has not been studied so far. Since non-adjacently broken rotor bars may give rise to fault signatures which are not directly related to the fault extent, it is important to understand the nature of multiple electrical rotor asymmetries in induction machines. The purpose of this paper is thus to investigate several combinations of electrical rotor asymmetries to systematically elaborate the phenomena related to broken bars and end rings. In this paper, a sophisticated simulation model and measurement results are used to analyze the phenomenon rotor fault.",2009,0, 2365,Phenomenon Rotor Fault-Multiple Electrical Rotor Asymmetries in Induction Machines,"In the literature, the effects caused by a single or several adjacently broken rotor bars, or a broken end ring, are thoroughly investigated. The phenomenon of various nonadjacently broken rotor bars has not been studied so far in detail. Since nonadjacently broken rotor bars may give rise to fault signatures, which are not directly related with the fault extent, it is important to understand the nature of multiple electrical rotor asymmetries in induction machines. The purpose of this paper is thus to investigate several combinations of electrical rotor asymmetries, to systematically elaborate the phenomena related to the broken bars and end rings. In this paper, a sophisticated simulation model and measurement results are used to analyze the phenomenon rotor fault.",2010,0,2364 2366,ANN based detection of electrical faults in generator-transformer units,"In the paper, a model of a decision system based on an ANN is shown. The generator-transformer unit has been taken into consideration as the protected object.
The range of detected faults is initially narrowed to faults of electromagnetic character (three-phase, two-phase, two-phase-to-earth and one-phase faults) within the generator - unit transformer - high voltage transmission line configuration.",2004,0, 2367,"A new [56,49,5] high rate code for erasure correction constructed over GF(8)","In the paper by McAuley (1994), a new family of error detection codes called Weighted Sum Codes was proposed. In another paper by Farkas (1995), it was shown that it is possible to use these codes for correction of one error in the codeword over GF(2^(h/2)). In the paper by Farkas and Rakus (1999), the weight spectra of Generalized Weighted Sum Codes (GWSC-II) for erasure correction constructed over GF(8) were presented. An attempt to improve the erasure correction capabilities of these codes was investigated by Rakus (2002). In the presented paper we introduce a new high-rate [56,49,5] code constructed over GF(8) for erasure correction, found using computer search.",2003,0, 2368,Improving the performance of classifiers in high-dimensional remote sensing applications: an adaptive resampling strategy for error-prone exemplars (ARESEPE),"In the past, ""active learning"" strategies have been proposed for improving the convergence and accuracy of statistical classifiers. However, many of these approaches have large storage requirements or unnecessarily large computational burdens and, therefore, have been impractical for the large-scale databases typically found in remote sensing, especially hyperspectral applications. In this paper, we develop a practical on-line approach with only modest storage requirements. The new approach improves the convergence rate associated with the optimization of adaptive classifiers, especially in high-dimensional remote sensing data. We demonstrate the new approach using PROBE2 hyperspectral imagery and find convergence time improvements of two orders of magnitude in the optimization of land-cover classifiers.",2003,0, 2369,"The development of GM(1,1) error toolbox","In prediction research, the main purpose is to minimize the prediction error; however, this goal cannot be fulfilled completely. Even if we choose the GM(1,1) model, we still need to minimize the prediction error. Hence, in this paper, we first focus on the influence parameter alpha in the GM(1,1) model and then analyze the characteristics of alpha step by step. Second, we give up the alpha = 0.5 method, and use a numerical method to find the prediction error corresponding to the alpha value and plot the error as a function of alpha. Finally, after the mathematical model has been presented, we also develop a toolbox, based on the C language, to assist us in implementing our approach. Consequently, we conclude that the value of alpha is adaptive in the interval [0,1] in the GM(1,1) model.",2007,0, 2370,A Method for Artifact Correction Due to Demodulation Phase Errors in Magnetic Resonance Imaging,"In the quadrature demodulation of the MR signal, errors exist in the initial phases of the sinusoid and cosinusoid used as the demodulation reference signals, which results in artifacts in the image. This is typically overcome through precise control of the initial phases with the help of special hardware, but the errors are not reduced completely. In this work, a method based on a reference scan is proposed. A reference scan without frequency encoding and phase encoding gradients is executed before the FSE sequence, and an echo train is acquired.
The phase errors of the sinusoid and cosinusoid that are used to demodulate each echo in the reference echo train are calculated. Then the k-space data of the image are corrected by these errors and artifacts are removed. Experiments using a 0.35 T MRI system demonstrate the effectiveness of this method.",2005,0, 2371,Matching of multi-scale digital raster map using precise geometric correction,"In the processing and application of digital maps, aligning maps from different projective coordinate systems to a standard is one of the key technologies. Using the principles of geometric correction, the paper compares and analyzes three correction models for raster maps, presents the idea of correcting maps of different scales with corresponding correction models, and, citing a map of a certain scale, compares the correction effects and errors of the three models by programming, proving the accuracy and feasibility of this idea.",2006,0, 2372,"PSoC design in GM(1,1) error analysis and its application in temperature prediction","In the field of prediction, no matter what methods we use, the main purpose is to minimize the prediction error; however, this goal cannot be fulfilled completely. Even if we choose the GM(1,1) model, one of the newest soft computing methods, we still need to minimize the prediction error. Hence, in this paper, we first focus on the influence parameter alpha in the GM(1,1) model, then analyze the characteristics of alpha step by step, and use a numerical method to find the prediction error corresponding to the alpha value. Second, after the mathematical model is presented, we use PSoC to design a GM(1,1) error analysis model, which is based on the characteristics of the GM(1,1) model. Also, an example of a temperature prediction case is given to assist us in implementing our approach in the final section.",2008,0, 2373,Design of intelligent fault diagnosis system for energy remote terminal unit based on agent technology,"In recent years, Guangdong Power Grid Corporation has built the largest scale electric energy telemetering system in China. More than 400,000 field units have been put into use in Guangdong province. With the scale of the system becoming larger, the municipal power supply bureaus face a growing maintenance workload. In order to decrease the maintenance workload, we have designed and implemented an intelligent fault diagnosis system for field terminal units based on multi-agent technology. By using multi-agent technology, a flexible intelligent communication protocol interface and an optimal load balancing policy are designed to tackle the problems of various communication protocols and heavy network loads. Production rule knowledge representation is used to build the diagnosis knowledge database. The system realizes communication monitoring and on-line fault diagnosis, which greatly decreases the maintenance workload.",2010,0, 2374,Generic partial dynamic reconfiguration controller for fault tolerant designs based on FPGA,"In recent years, many techniques for self-repair of systems implemented in FPGAs have been developed and presented. The basic problem of these approaches is the large overhead of the unit controlling the partial reconfiguration process. Moreover, these solutions are generally not implemented as fault tolerant systems. In this paper, a small and flexible generic partial dynamic reconfiguration controller implemented inside the FPGA is presented.
The basic architecture and usage of the controller in the FPGA-based fault tolerant structure are described. The implementation of the controller as a fault tolerant component is described as well. The basic features and synthesis results of the controller for Xilinx FPGAs, together with a comparison with a MicroBlaze-based solution, are presented.",2010,0, 2375,Availability and Cost Analysis of a Fault-Tolerant Software System with Rejuvenation,"In recent years, remarkable attention has been paid to software aging phenomena, in which the performance of software systems degrades with time. Software aging may eventually lead to transient crash/hang failures. The well-known technique of software rejuvenation can be regarded as the most effective procedure to counteract the aging phenomena. In this paper, the concept of common software-aging-related faults in fault-tolerant systems is proposed. Then the common faults defined are integrated into a behavior model of a double-version fault-tolerant software system. The dependability measures, such as availability, cost, and the availability to cost ratio, are evaluated as bivariate functions using a continuous time Markov chain (CTMC). Finally, the effects of common software-aging-related faults are investigated based on several numerical examples.",2008,0, 2376,Fault tolerant error coding and detection using reversible gates,"In recent years, reversible logic has emerged as one of the most important approaches for power optimization, with applications in low power CMOS, quantum computing and nanotechnology. Low power circuits implemented using reversible logic that provide single error correction - double error detection (SEC-DED) are proposed in this paper. The design is done using a new 4×4 reversible gate called 'HCG' for implementing Hamming error coding and detection circuits. A parity preserving HCG (PPHCG) that preserves the input parity at the output bits is used for achieving fault tolerance for the Hamming error coding and detection circuits.",2007,0, 2377,Evaluation of a SPECT attenuation correction method using CT data registered with automatic registration software,"In recent years, various SPECT attenuation correction systems using CT data have been developed. For attenuation correction of cerebral SPECT data in routine studies, the software method using CT and SPECT data registered with automatic registration software has been used much more than the hardware method using CT data acquired with combined SPECT/CT systems. In this work, the software-based method was compared with a method using TCT data acquired with a sequential SPECT/TCT scan with no subject motion as the gold standard. Attenuation corrected SPECT values using the registered CT data were compared to those using TCT data. Ten sets of normal volunteer data were acquired. The differences in attenuation corrected SPECT values between the SPECT-CT and SPECT-TCT methods were 1.4 ± 1.9% for the entire brain, and the maximum regional difference was 7.8% for both white and gray matter regions. Other regions within the brain where SPECT values were low (e.g., skull, ventricles) were excluded from evaluation. The results indicate that automatic registration software can register CT to SPECT data quite accurately and that a software-based attenuation correction method using CT data can correct attenuation accurately for cerebral data.
Consequently, such a software-based attenuation correction method using CT data, which requires no specialized hardware, seems feasible for use in routine studies.",2003,0, 2378,GNSS pseudorange error density tracking using Dirichlet Process Mixture,"In satellite navigation systems, classical localization algorithms assume that the observation noise is white Gaussian. This assumption is not correct when the signal is reflected by the surrounding obstacles, which leads to decreased accuracy and continuity of service. To enhance localization performance, a better observation noise density can be used in an adapted filtering process. This article aims to show how the Dirichlet Process Mixture can be employed to track the observation density on-line. This sequential estimation solution is well adapted when the noise is non-stationary. The approach is tested in a simulation scenario with multiple propagation conditions. This density modeling is then used in a Rao-Blackwellised particle filter.",2010,0, 2379,Estimate of wavefront error introduced by encoding of computer generated holograms,"In testing aspheric optics with CGHs, errors introduced by the CGH must be budgeted. This paper presents a method for estimating the wavefront error introduced in the encoding process. Some results are shown.",2009,0, 2380,Auto defect repair algorithm for LCD panel review & repair machine,"In the TFT-LCD manufacturing process, various defects are generated by manufacturing machine trouble or particles. These defects can be repaired through the TFT laser repair process, the only step in TFT-LCD manufacturing that cannot be fully automated. In this paper, we propose an automatic defect repair algorithm for the TFT-LCD laser repair machine based on image processing in order to automate this process. The proposed algorithm can detect very small defects (under 2 um) with a 98% success ratio, and the generated laser repair path guarantees highly precise positioning accuracy. Through the proposed system, much of the work still done the old-fashioned way, by hand, can be automated, and manufacturing companies can strengthen their cost competitiveness.",2008,0, 2381,"Corrections to Design and Performance Analysis of a Unified, Reconfigurable HMAC-Hash Unit [Dec 07 2683-2695]","In the above titled paper (ibid., vol 54, no. 12, pp. 2683-2695), the first paragraph of Section I (p. 2683) was incomplete. The correct paragraph is presented here.",2008,0, 2382,Delay-fault tolerance to power supply Voltage disturbances analysis in nanometer technologies,"In nanometer technologies, as variability is becoming one of the leading causes of chip failures, signal integrity is a key issue for high-performance digital System-on-Chip (SoC) products. In this paper, the analysis is focused on the occurrence of delay faults due to power-supply disturbances in nanometer technologies. Using a previously proposed VT (power supply Voltage and Temperature)-aware time management methodology, it is shown that nanometer technologies impose the need for fault-tolerance methodologies, although the margins of tolerance or fault-free operation are being reduced as technology scales down. SPICE simulation results with 350 nm, 130 nm, 90 nm, 65 nm, 45 nm and 32 nm CMOS technologies show an increasing dependence of propagation delays on power supply variations as technology is scaled down.
Monte Carlo simulations show that, even in the presence of process variations, a dynamic delay-fault tolerance methodology can be rewarding even at the nanometer scale, although the margins for power-supply variations are becoming smaller.",2009,0, 2383,Converters fault-diagnosis in PMSG drives for wind turbine applications,"In order to compensate for the still-rising worldwide energy consumption, wind energy is becoming more and more important. For this reason, the analysis of wind energy conversion systems, in which the occurrence of faults has a high negative impact, becomes a very important issue. Considering this, the aim of this paper is to present some diagnostic methods for open-circuit faults in the two power converters of a permanent magnet synchronous generator (PMSG) drive for wind turbine applications. To achieve this goal, fault analysis is the first step, in order to determine which parameters can be used for the diagnosis. The following step is the development of reliable diagnostic methods. Various simulation results, considering multiple faults in the grid-side converter and a single fault in the PMSG-side converter, are presented.",2010,0, 2384,Risk analysis of the city gas pipeline network based on the fault tree,"In order to control and reduce city gas accidents, it is very important to comprehensively identify and analyze the risk factors existing in the city gas pipeline network. Through fault tree analysis, ""gas leakage"" was taken as the top event and four main causes leading to gas leakage were found. According to these four causes, comprehensive fault trees were drawn, reflecting all the causes of the top event ""gas leakage"". Finally, the third-party interference fault tree was analyzed further, and a risk checklist for third-party interference with the city gas pipeline network was obtained, according to which the protection of the gas pipeline network can be made more effective.",2009,0, 2385,Study on Intelligent Fault Diagnosis Expert System for Radio Compass,"In order to diagnose and eliminate faults of the radio compass, an intelligent fault diagnosis expert system based on a PC/104 computer is presented in this paper. By using fuzzy neural network technology, fault reverse reasoning, a knowledge database and so on, the system's reliability and flexibility are greatly enhanced. The testing results illustrate that the relative errors of the readings of the system's data acquisition and parameter simulation are lower than 0.5%, the accuracy of fault isolation is 96%, the direct current power is < 300 W, the alternating current power is < 250 W, and the cost of this system is 80% lower than before.",2010,0, 2386,Human errors in the cockpit and accidents prevention strategies from cockpit resources management perspective,"In order to improve flight safety, protect the lives of pilots and preserve costly fighter aircraft, the investigation of aircraft accidents in recent years has been stressed and systematized in the R.O.C. Air Force, with the goal of submitting detailed proposals and ensuring a correct strategy for preventing accidents. The study is established on the basis of the development of cockpit resource management. It investigates the major causes and categories of accidents in the R.O.C. Air Force in the past twenty years. By content analysis, the survey collects data, analyzes causes and segments different cases from the accident reports.
As a result, a series of accident prevention strategies is submitted to the aviation authority based on the quantitative data. Accidents caused by human error are the major topic of accident prevention in aviation operations. The aim of cockpit resource management is to ensure that hardware resources, software resources and human resources are each used as effectively as possible in the cockpit. The purposes of this study are: (1) to investigate the relationship between human errors and accidents; (2) to study the relationship between human errors and operational stages; (3) to develop accident prevention strategies",2000,0, 2387,Software Defect Prediction Using Dissimilarity Measures,"In order to improve the accuracy of software defect prediction, a novel method based on dissimilarity measures is proposed. Different from traditional prediction methods based on feature space, we solve the problem in dissimilarity space. First, the new unit features in dissimilarity space are obtained by measuring the dissimilarity between the initial units and prototypes. Then a proper classifier is chosen to complete the prediction. By prototype selection, we can reduce the dimensionality of the unit features and the computational complexity of prediction. The empirical results on the NASA datasets KC2 and CM1 show that the prediction accuracies of the KNN, Bayes and SVM classifiers in dissimilarity space are higher than those in feature space by 1.86% to 9.39%. The computational complexity is also reduced by between 18% and 67%.",2010,0, 2388,What Determines Appropriate Trust of and Reliance on an Automated Collaborative System? Effects of Error Type and Domain Knowledge,"In this investigation we evaluated the effect of two types of factors that affect human-automation interaction: those specific to the automation (Error Type: miss versus false alarm) and those specific to the human (domain experience, in this study automated farm equipment experience versus no experience). Participants performed a simulated harvesting task and used an obstacle avoidance automated decision aid. The type of unreliability of the automation had a major impact on behavioral reliance as a function of components of the avoidance decision task. The analysis of the effects of domain experience on automation use indicated that those with experience operating agricultural vehicles had different tendencies of reliance. Specifically, participants with experience operating agricultural vehicles were less likely to rely on automated alarms than those without experience. The results of this investigation have important implications for understanding how humans adjust their behavior according to the characteristics of an automated system",2006,0, 2389,Microstrip Monopole Antenna With Enhanced Bandwidth Using Defected Ground Structure,"In this letter, a double U-shaped defected ground structure (DGS) is proposed to broaden the impedance bandwidth of a microstrip-fed monopole antenna. The antenna structure consists of a simple trapezoid monopole with a DGS microstrip feedline for excitation and impedance bandwidth broadening. Measurement shows that the antenna has 10-dB return loss from 790 to 2060 MHz, yielding a 112.4% impedance bandwidth improvement over that of the traditional design.",2008,0, 2390,Decorrelating compensation scheme for coefficient errors of a filter bank parallel A/D converter,"In this letter, a parallel analog-digital (A/D) conversion scheme with a filter bank for low intermediate-frequency receivers is presented.
The analysis filters of the filter bank divide the frequency components of the received signal and achieve parallel A/D conversion. Therefore, the required conversion rates and the resolution of the A/D converters can be reduced, and the receiver can demodulate wideband signals. As the analysis filters consist of analog components, their coefficients include errors. These errors cause mutual interference between signals at orthogonal frequencies. In order to remove this interference, a decorrelating compensation scheme is proposed.",2004,0,96 2391,BER Performance of FSO Links over Strong Atmospheric Turbulence Channels with Pointing Errors,"In this letter, we investigate the error rate performance of free-space optical (FSO) links over strong turbulence fading channels together with misalignment (pointing error) effects. First, we present a novel closed-form expression for the distribution of a stochastic FSO channel model which takes into account both atmospheric turbulence-induced fading and misalignment-induced fading. Then, we evaluate in closed form the average bit-error rate of an FSO system operating in this channel environment, assuming intensity modulation/direct detection with on-off keying. Numerical examples are further provided to corroborate the derived analytical expressions.",2008,0, 2392,A DSP based controller for power factor correction (PFC) in a rectifier circuit,"In this paper a digital signal processor (DSP) based power factor correction (PFC) scheme is presented. A dual-loop controller is designed to control the average input AC current as well as the DC bus voltage. The DSP controller is implemented and tested. Design methodologies and trade-offs, such as discrete-time implementation methods, are also presented",2001,0, 2393,Sensor fault diagnosis for manipulators performing interaction tasks,"In this paper a fault diagnosis approach for sensor faults in cooperative robotic manipulators involved in interaction tasks is presented. The approach is developed for a two-arm cooperative workcell, although it can easily be applied to single-manipulator systems. A bank of discrete-time model-based diagnostic observers is adopted to detect, isolate and identify failures of both the sensors at the joints of the manipulators and the force/torque sensors mounted at the wrists. The effectiveness of the proposed scheme has been experimentally tested on a cooperative industrial setup composed of two six-dof COMAU Smart-3 S robots.",2010,0, 2394,Verifying Autonomic Fault Mitigation Strategies in Large Scale Real-Time Systems,"In large scale real-time systems many problems associated with self-management are exacerbated by the addition of time deadlines. In these systems any autonomic behavior must not only be functionally correct but must also not violate properties of liveness, safety and bounded-time responsiveness. In this paper we present and analyze a real-time reflex engine for providing fault mitigation capability to large scale real-time systems. We also present a semantic domain for analyzing and verifying the properties of such systems along with the framework of real-time reflex engines",2006,0, 2395,Stator winding fault detection for an induction motor drive using actuator as sensor principle,"In many industrial processes reliability, safety and economy are gaining particular importance.
Critical drive applications involving the inverter and motor call for fault diagnosis techniques in order to ensure stable operation for a period of time before the system can be repaired; consequently, this kind of system, covering both normal operation and fault detection analysis, becomes more and more complicated. Recently, investigation has focused on electric motor faults such as short- and open-circuit stator windings. This work proposes fault detection in a single-phase induction motor through the analysis of experimental and simulation results obtained with a test platform. The study focuses on simplifying the complex system by using the power converter switch waveforms in order to detect motor faults.",2003,0, 2396,Certified and Fast Computation of Supremum Norms of Approximation Errors,"In many numerical programs there is a need for a high-quality floating-point approximation of useful functions f, such as exp, sin, erf. In the actual implementation, the function is replaced by a polynomial p, which leads to an approximation error (absolute or relative) epsilon = p - f or epsilon = p/f - 1. The tight yet certain bounding of this error is an important step towards safe implementations. The problem is difficult mainly because the approximation error is very small and the difference p - f is subject to high cancellation. Previous approaches for computing the supremum norm in this degenerate case have proven to be unsafe, not sufficiently tight or too tedious in manual work. We present a safe and fast algorithm that computes tight lower and upper bounds for the supremum norms of approximation errors. The algorithm is based on a combination of several techniques, including enhanced interval arithmetic, automatic differentiation and isolation of the roots of a polynomial. We have implemented our algorithm and give timings on several examples.",2009,0, 2397,Histogram-offset-based color correction for multi-view video coding,"In multi-view video systems, variations in camera setups (for example, camera positions, lighting conditions and camera characteristics) may cause discrepancies in the luminance and chrominance components of different views. From the viewpoint of source compression, this will lead to inaccurate inter-view prediction and lower coding efficiency. In this paper, a histogram-offset-based color correction method is developed to benefit multi-view video coding. First, disparity estimation is conducted on the rank-transformed domain to identify the maximum matching regions between the reference view and the target view. Within the identified matching regions, the histograms of the reference view and the target view are then calculated, respectively. By using an iterative thresholding approach, a histogram offset is generated and exploited to correct the target view. Experimental results have shown that the proposed color correction method outperforms the histogram matching method in improving coding efficiency.",2010,0, 2398,The Fault-Tolerant Design in Space Information Processing System Based on COTS,"In order to give a Space Information Processing System (SIPS) based on Commercial-Off-The-Shelf (COTS) components a much stronger ability to resist radiation in space, this paper presents a multilevel fault-tolerant technique based on FPGAs and a Single Event Latch-up (SEL) protection circuit.
The multilevel fault-tolerant technique includes a dual fault-tolerant design at the system level, a memory redundancy design at the module level and a fault-tolerant FPGA design at the chip level. Through reliability analysis and experimentation, it can be concluded that the reliability of the SIPS is greatly increased by making use of the fault-tolerant design. Moreover, this fault-tolerant design has been implemented successfully and runs well.",2009,0, 2399,A Novel Sequential Pattern Mining Algorithm for the Feature Discovery of Software Fault,"In order to obtain useful sequential pattern knowledge from the historical sequence database, which reflects the characteristic behavior of software faults, a novel sequential pattern mining algorithm for software fault feature discovery based on a location matrix, named SPM-LM, is proposed. Pattern growth theory and the concept of the location matrix are introduced into the proposed algorithm. Firstly, the fault feature database is scanned and a location matrix for each event is constructed to record the frequent sequence information, which produces the frequent 1-sequences. Secondly, each sequence is extended through a dual-pointer operation on the location matrix, and the frequent k-sequences with a frequent 1-sequence as prefix are generated. Finally, all of the generated frequent sequential patterns are saved into the corresponding layer of a tree structure. The software fault sequences are then matched against the tree structure to find the software failures and improve the software performance. The experimental results indicate that the algorithm improves the efficiency of pattern discovery significantly.",2009,0, 2400,The dual-core fault-tolerant control for Electronic Control Unit of Steer-By-Wire system,"In order to solve the reliability and security problems caused by the structural alteration to the traditional steering system, a fault diagnosis and fault-tolerant control method for the Electronic Control Unit (ECU) of the Steer-By-Wire (SBW) system is studied. The architecture of a dual-core fault-tolerant control system based on the mechanisms of distributed processing and exception decision is proposed, and hardware-in-the-loop simulation is performed using the existing SBW system test rig. The simulation results show that the dual-core fault-tolerant control structure and coordination mechanisms are feasible and can be applied to the ECU fault-tolerant control of the SBW system, effectively improving the reliability and security of the ECU.",2010,0, 2401,Error propagation in decision feedback equalization for a terrestrial digital television receiver,"In the U.S. terrestrial digital television receiver, when the channel presents additive white Gaussian noise and multipath, particularly strong multipath, error propagation in the feedback filter of a decision feedback equalizer severely affects the performance at the output of the trellis decoder. In these cases, the output of the trellis decoder may present worse performance when the equalizer is in automatic switching mode rather than in blind mode, even though the opposite is true at the output of the equalizer. In the automatic switching mode, the equalizer is in blind mode before convergence is detected and switches to decision directed mode after convergence.
This work studies the use of a soft automatic switching mode, in which the equalizer is in blind mode before convergence is detected and switches to a soft decision directed mode after convergence. In the soft decision directed mode, the input to the equalizer feedback filter is the equalizer output instead of the slicer output, yielding a linear equalizer implementation. By utilizing the soft automatic switching mode in such channel conditions, the performance at the output of the trellis decoder is improved not only with respect to the automatic switching mode, but also with respect to blind mode.",2003,0, 2402,Bad data analysis using the composed measurements errors for power system state estimation,"In this paper, a topological/geometrical approach is used to define the undetectability index (UI), which provides the position of a measurement relative to the range space of the Jacobian matrix of the power system. The higher the value of this index for a measurement, the closer it will be to the range space of that matrix; that is, the error in measurements with high UI is not reflected in their residuals, thus masking a possible gross error those measurements might have. Using the UI of a measurement, the possible gross error the state estimation process might mask is recovered; the total gross error of that measurement is then composed and used in the gross error detection and identification test. The gross error processing turns out to be very simple to implement, requiring only a few adaptations to existing state estimation software. The IEEE 14-bus system is used to validate the proposed gross error detection and identification test.",2010,0, 2403,Optical Fiber Connector Surface Defect Detection Using Wavelets,"In this paper, a wavelet-based surface defect detection method for optical fiber ferrules is proposed. Surface defects on optical fiber connectors can be damaging to passing signals when coupled with other connectors. Our quality control enhancement work is a visual control stage using magnified images, whereby morphological operations segment the image and wavelet transforms then detect defects on optical fiber connector surfaces to improve the overall acceptability of the manufactured components.",2007,0, 2404,Integrated fault-tolerant scheme for a DC speed drive,"In this paper, an active fault-tolerant control (FTC) scheme with disturbance compensation is presented. Fault detection and compensation are merged to propose an algorithm robust against model uncertainties. The GIMC control architecture is used as a feedback configuration for the active fault-tolerant scheme. The synthesis procedure for the parameters of the fault-tolerant scheme is carried out using tools from robust control theory. A detection filter is designed for fault isolation, taking into account uncertainties and disturbances in the mathematical model. Finally, the fault compensation strategy incorporates an estimate of the disturbances into the system to improve the performance of the closed-loop system after the fault is detected. In order to illustrate these ideas, the speed regulation of a DC motor is selected as a case study, and experimental results are reported.",2005,0, 2405,Correction of article errors in machine translation using Web-based model,"In this paper, an approach is proposed for correcting article errors in English translation results in order to improve the performance of an MT system.
We check the article and the singular/plural form of the headword in an NP at the same time. This differs from most earlier research, in which only articles are considered. Our correction algorithm is based on a simple, viable n-gram model whose parameters can be obtained using the WWW search engine Google. Using far fewer features than those used in earlier research, we experimentally show that our approach achieves promising results, with a precision of 86.2% on all classes of article errors.",2005,0, 2406,"An automated industrial fish cutting machine: Control, fault diagnosis and remote monitoring","In this paper, an automated industrial fish cutting machine, which was developed and tested in the Industrial Automation Laboratory (IAL) of the University of British Columbia, is presented, including its hardware structure, control sub-system, fault diagnosis sub-system and remote monitoring sub-system. First, the hardware of the machine, including the mechanical conveyor system, pneumatic system and hydraulic system, and the associated sensors are introduced. Next, a fuzzy position control system is designed for the control of the cutting table moving along the horizontal (x) direction, and its performance is compared with that of traditional proportional-integral-derivative (PID) control. A multi-sensor neuro-fuzzy fault diagnosis system is developed as well, for the purpose of providing accurate and reliable diagnosis of the machine states in an automated factory environment. Finally, a web-based remote monitoring system is discussed, which allows engineers and researchers to remotely monitor the health of the machine from any geographic location through the Internet.",2008,0, 2407,Automatic Positioning of Spinning Projectile in Trajectory Correction Fuze via GPS/IMU Integration,"In this paper, an automatic positioning system for a spinning projectile in a trajectory correction fuze is described in detail. Aimed at the characteristics of spinning, this system is designed through the integration of GPS (Global Positioning System) and IMU (Inertial Measurement Unit). Compared with general projectiles, the signals from a spinning projectile are modulated by rotation. Therefore, how to obtain valid signals from the rotation-modulated ones is a key problem. With a special rotation tracking loop, the GPS module in this system can demodulate the rotation-modulated signals and thus provide correct position information. This part of the design is based on SDR (Software Defined Radio) and FPGA (Field Programmable Gate Array) techniques in order to achieve high speed and small size. Meanwhile, the IMU module, including several accelerometers, provides attitude information that gives aiding signals to the GPS section for rotation demodulation. Via the integration of the IMU and GPS in the fuze, the system can realize automatic positioning and trajectory correction. Accordingly, the device is suitable for the dynamic conditions of high G-forces, vibration and spin rates frequently experienced in these applications.
Besides, due to its advantages of low cost and flexibility, it can easily adapt to different satellite navigation systems.",2007,0, 2408,A Decentralized Fault-Tolerant Control System for Accommodation of Failures in Higher-Order Flight Control Actuators,"In this paper, an effective integrated failure detection and identification (FDI) and fault-tolerant control (FTC) technique is developed for a class of nonlinear systems actuated by actuators that may undergo several different types of failures. Assuming that the actuator dynamics are fast, a baseline controller is designed and, using singular perturbation arguments, shown to achieve the control objective. Typical failures in flight control actuators are considered next, and online FDI algorithms are derived for second-order actuator dynamics with non-measurable actuator rates, and for third-order actuator dynamics when only the output of the actuator is measurable. The FDI subsystem is decentralized in that an observer is run at each of the actuators, and the parameter estimates are adjusted using only local information. The major issue of how to use this information to reconfigure the control law and assure the stability of the resulting closed-loop control system is addressed. An adaptive fault-tolerant controller that uses the parameter estimates from the FDI subsystem at every instant is designed next. It is demonstrated that all the signals in the system are bounded and that the tracking error converges to zero asymptotically despite multiple simultaneous actuator failures, even in the case of second- or third-order actuator dynamics. The properties of the proposed FDI-FTC algorithms are evaluated through piloted simulations of the F/A-18 aircraft.",2010,0, 2409,Adaptive Error Resilient Video Coding Based on Redundant Slices of H.264/AVC,"In this paper, an error resilience video coding scheme is proposed based on the redundant slice feature of H.264/AVC. With the proposed scheme, the source redundancy in a coded bitstream can be adjusted flexibly according to changing channel conditions such as available bandwidth and data loss rate, without the need for complicated re-encoding or transcoding. To achieve good resilience performance, an algorithm is further developed to select a proper set of redundant slices and provide unequal error protection gain. Experimental results demonstrate its improved adaptive capability and similar performance compared to other error resilience coding schemes.",2007,0, 2410,Detection of defects in wood slabs by using a microwave imaging technique,"In this paper, an experimental setup based on interrogating microwaves is used to obtain images of the cross section of dielectric cylinders. In particular, a microwave tomographic configuration is used to inspect wood slabs in order to search for defects and voids. The measured data (samples of the scattered electric field) are inverted by using an efficient reconstruction technique, which is able to handle the ill-posedness of the inverse scattering problem. The developed experimental apparatus is validated in this paper by means of several numerical simulations. Preliminary experimental results are also reported.",2007,0, 2411,Application of immune-based optimization method for fault-section estimation in a distribution system,"In this paper, an immune algorithm-based (IA-based) optimization approach for the fault-section estimation of a distribution system is proposed.
To apply the method to this estimation problem, each section of a power system model can be considered as an antibody. Through the immune evolution process, the antibody that best fits the antigen of concern becomes the solution. An affinity calculation is employed in this computation process to measure the combination intensity. As this method can operate on the population of antibodies simultaneously, process stagnation can be better prevented. The proposed approach has been tested on the Taiwan Power (Taipower) system using utility data. Test results demonstrated the feasibility and effectiveness of the method for these applications.",2002,0, 2412,Gradient Based Error Concealment for H.264 Inter Frames,"In this paper, an improved BMA (boundary matching algorithm) using gradient vectors is presented to conceal channel errors in inter-frames of H.264 video images. BMA computes the sum of pixel differences around the perimeter of the lost block between the candidate block and its neighboring blocks to estimate the validity of the candidate block, assuming that adjacent pixels in an image have almost the same value. In real images, however, there exist gradients in local areas of the image, meaning that the pixel values increase or decrease with a specific slope. A simple and efficient method for estimating candidate blocks using gradient information has been developed, and the modified BMA has thereby been applied to recover lost blocks. Experiments show the proposed method improves picture quality by about 1~3 dB compared to existing methods.",2007,0, 2413,2D-6 Automatic Detection of High Temperature Hydrogen Attack Defects from Ultrasonic A-Scan Signals,"In this paper, an out-of-service pressure vessel known to have many high temperature hydrogen attack (HTHA) defects is used to develop, in a cost-effective manner, a database of ultrasonic A-scan signals of this defect. A basic feature extraction method, coupled with traditional classifiers, is shown to accurately distinguish the hydrogen attack defects from geometrically similar defects.",2007,0, 2414,Research on Fault Diagnosis of Gearbox Based on Particle Swarm Optimization Algorithm,"In this paper, based on a study of the learning rate of PSO, the learning rate is made to change linearly as the velocity formula evolves, in order to adjust the proportions of the social part and the cognition part; training the BP neural network with PSO greatly increases the convergence speed of the network and avoids local extrema. Using actual data from a two-stage gearbox in a vibration lab, signals are analyzed and their feature values are extracted. Applying the trained BP neural network to gearbox fault diagnosis yields sound results",2006,0, 2415,Active fault detection and isolation strategy for an unmanned aerial vehicle with redundant flight control surfaces,"In this paper, a diagnosis system using unknown input observers to detect, isolate and estimate faulty control surface positions for a small UAV is presented. As this aircraft is equipped with redundant actuators, flap and aileron positions are not input observable, and an active diagnosis process has to be implemented.",2008,0, 2416,Rotor position sensor fault detection Isolation and Reconfiguration of a Doubly Fed Induction Machine control,"In this paper, a Doubly Fed Induction Machine (DFIM) operating in motor mode and supplied by two Voltage Source Inverters (VSI), on the stator and rotor sides, is presented.
The aim is to analyze the effects of a position sensor fault on Direct Torque Control (DTC) of the DFIM. This justifies the necessity of control reconfiguration when a position sensor fault appears, in order to avoid an interruption in system operation. On the other hand, this study emphasizes the close dependency between system performance and the output accuracy of the rotor position sensor. Moreover, simulation results point out the deterioration of system operation in case of a position sensor fault, which in most cases leads to a shutdown, in contrast to industrial expectations. This work presents a control reconfiguration for a DFIM speed drive when a position sensor fault occurs, in order to ensure continuity of service. For this purpose, SABER simulation results illustrate the system behavior before and after a position sensor fault. System performance is preserved after control reconfiguration. The proposed solution is especially relevant due to its simplicity.",2009,0, 2417,Minimizing Euclidian state estimation error for uncertain dynamic systems based on multisensor and multi-algorithm Fusion,"In this paper, a dynamic system with model uncertainty and bounded noises is considered. We propose several efficient methods of centralized fusion, distributed fusion and fusion of multiple parallel algorithms for minimizing the Euclidian estimation error of the state vector. To make the Euclidian estimation error as small as possible, the classic measure of the size of an ellipsoid, the trace of its shape matrix, is extended to a class of weighted measures which can emphasize the importance of the entries of interest in the state vector and make their confidence intervals smaller. Moreover, it can be proved that both the centralized fusion and the distributed fusion are better than single-sensor estimation under this class of weighted measures. These results are illustrated by a numerical example. Most importantly, taking full advantage of the two facts that minimizing a scalar objective cannot guarantee an optimal multi-dimensional confidence ellipsoid solution, and that multiple sensors and multiple algorithms have complementary advantages, we construct various estimation fusion methods at both the fusion center and the local sensors to yield estimate intervals of every entry of the state vector that are interlaced as significantly as possible, for minimizing the Euclidian estimation error.",2010,0, 2418,A General Approach for the Transient Detection of Slip-Dependent Fault Components Based on the Discrete Wavelet Transform,"In this paper, a general methodology based on the application of the discrete wavelet transform (DWT) to the diagnosis of the cage motor condition using transient stator currents is presented. The approach is based on the identification of characteristic patterns introduced by fault components in the wavelet signals obtained from the DWT of transient stator currents. These patterns enable a reliable detection of the corresponding fault as well as a clear interpretation of the physical phenomenon taking place in the machine. The proposed approach is applied to the detection of rotor asymmetries in two alternative ways, i.e., by using the startup current and by using the current during plugging stopping. Mixed eccentricities are also detected by means of the transient-based methodology.
This paper shows how the evolution of other non-fault-related components, such as the principal slot harmonic (PSH), can be extracted with the proposed technique. A compilation of experimental cases regarding the application of the methodology to the previous cases is presented. Guidelines for the easy application of the methodology by any user are also provided from a didactic perspective.",2008,0, 2419,Geometrical approach on masked gross errors for power systems state estimation,"In this paper, a geometry-based index, called the undetectability index (UI), that quantifies the inability of the traditional normalized residue test to detect single gross errors is proposed. It is shown that the error in measurements with high UI is not reflected in their residues. This masking effect is due to the ""proximity"" of a measurement to the range of the Jacobian matrix associated with the power system measurement set. A critical measurement is the limit case of a measurement with high UI; that is, it belongs to the range of the Jacobian matrix, has an infinite UI, and its error is totally masked and cannot be detected by the normalized residue test at all. The set of measurements with high UI contains the critical measurements and, in general, the leverage points; however, there exist measurements with high UI that are neither critical nor leverage points and whose errors are masked by the normalized residue test. In other words, the proposed index presents a more comprehensive picture of the problem of single gross error detection in power system state estimation than critical measurements and leverage points. The index calculation is very simple and is performed using routines already available in existing state estimation software. Two small examples are presented to show how the index works to assess the quality of measurement sets in terms of single gross error detection. The IEEE 14-bus system is used to show the efficiency of the proposed index in identifying measurements whose errors are masked by the estimation processing.",2009,0, 2420,Transient Error Detection in Embedded Systems Using Reconfigurable Components,"In this paper, a hardware control flow checking technique is presented and evaluated. This technique uses a reconfigurable off-the-shelf FPGA to concurrently check the execution flow of the target microprocessor. The technique assigns signatures to the main program at compile time and verifies the signatures using an FPGA as a watchdog processor to detect possible violations caused by transient faults. The main characteristic of this technique is its ability to be applied to any kind of processor architecture and platform. The low hardware and performance overhead imposed by this technique makes it suitable for applications in which cost is a major concern, such as industrial applications. The proposed technique is experimentally evaluated on an 8051 microcontroller using software-implemented fault injection (SWIFI). The results show that this technique detects about 90% of the injected control flow errors. The watchdog processor occupied 26% of the logic cells of an Altera Max-7000 FPGA chip. The performance overhead varies between 42% and 82% depending on the workload used.",2006,0, 2421,A simulation method for bit-error-rate-performance estimation for arbitrary angle of arrival channel models,"In this paper, a model for performing bit-error-rate (BER) analysis of various channel models is presented.
Traditional simulation methods model the mobile radio channel as having Rayleigh fading and are focused on the fluctuation of the amplitude of the received signal. Modern spatial models include information such as the angle of arrival of the incoming signals, the time-delay spread, and the number of multipath components. A simulation tool is developed that exploits the spatial statistical characteristics of the channel in order to derive estimates of the expected BER performance. The specific case of the geometrically based single-bounce elliptical model (GBSBEM) is presented and compared to the Rayleigh model. The impact of employing antenna arrays at the receiver is also examined. The possibility of determining the BER performance of communication systems, assuming arbitrary channel models, is justified.",2004,0, 2422,A New Method for Discriminating Between Internal Faults and Inrush Current Conditions in Power Transformers Based on Neuro Fuzzy,"In this paper, a new algorithm based on Neuro Fuzzy techniques for differential protection of the power transformer is proposed. This algorithm considers the ratio and the phase angle difference of the second harmonic with respect to the fundamental component of the differential currents under various conditions. These two protection functions are computed, and the protective system operates in less than one cycle after the occurrence of a disturbance. Another advantage of this algorithm is that fault detection does not depend on the selection of thresholds; moreover, all kinds of internal faults can be distinguished from magnetizing inrush currents, even those mixed with inrush currents. The proposed Neuro Fuzzy system is trained with data obtained from simulations of the power system under different fault and switching conditions. The correct operation of this algorithm is demonstrated by simulating faults and various switching conditions on a power transformer.",2007,0, 2423,Hierarchical Calculation of Malicious Faults for Evaluating the Fault-Tolerance,"In this paper, a new hierarchical multi-level technique for malicious fault list generation for evaluating fault tolerance is presented. For the description of the system, three levels are exploited: the behavioral, functional signal path and structural gate-network levels, whereas at each level the model of decision diagrams and uniform fault analysis procedures are used. Malicious faults are found by a top-down technique, keeping the complexity of the candidate fault sets at each level as low as possible.",2008,0, 2424,Induction motor fault diagnosis using voltage Park components of an auxiliary winding - voltage unbalance,"In this paper, a new method for induction motor fault diagnosis is presented. It is based on the so-called voltage Park components of an auxiliary winding, which is a small coil inserted between two of the stator phases. Expressions for the inserted winding voltage and its Park components are presented. After that, a discrete Fourier transform analyzer is used to convert the signals from the time domain to the frequency domain. A Lissajous curve formed from the two components is associated with the spectrum. Simulation results obtained for a healthy motor and for a motor under voltage unbalance show the effectiveness of the proposed method.",2009,0, 2425,A new method of blocking fault diagnosis on stator winding based on ANN,"In this paper, a new method of stator winding blocking fault diagnosis based on artificial neural networks (ANN) is proposed.
First, the characteristics of the blocking fault are analyzed and the temperature distribution under all kinds of faults is calculated, which establishes a one-to-one mathematical model relating the temperature rise at each stator winding measuring point to the different faults. Then the neural network is trained so that it can identify the blocking fault position and severity level, and a real-time example is used to verify the method.",2002,0, 2426,Enhancing the fault-tolerance of nonmasking programs,"In this paper we focus on automated techniques to enhance the fault-tolerance of a nonmasking fault-tolerant program to masking. A masking program continually satisfies its specification even if faults occur. By contrast, a nonmasking program merely guarantees that after faults stop occurring, the program recovers to states from where it continually satisfies its specification. Until the recovery is complete, however, a nonmasking program can violate its (safety) specification. Thus, the problem of enhancing fault-tolerance from nonmasking to masking requires that safety be added and recovery be preserved. We focus on this enhancement problem for high atomicity programs, where each process can read all variables, and for distributed programs, where restrictions are imposed on what processes can read and write. We present a sound and complete algorithm for high atomicity programs and a sound algorithm for distributed programs. We also argue that our algorithms are simpler than previous algorithms, where masking fault-tolerance is added to a fault-intolerant program. Hence, these algorithms can partially reap the benefits of automation when the cost of adding masking fault-tolerance to a fault-intolerant program is high. To illustrate these algorithms, we show how the masking fault-tolerant programs for triple modular redundancy and Byzantine agreement can be obtained by enhancing the fault-tolerance of the corresponding nonmasking versions. We also discuss how the derivation of these programs is simplified when we begin with a nonmasking fault-tolerant program.",2003,0, 2427,Design of fault-tolerant logical topologies in wavelength-routed optical IP networks,"In this paper we illustrate a new methodology for the design of fault-tolerant logical topologies in wavelength-routed optical networks exploiting wavelength division multiplexing, and supporting both unicast and multicast IP datagram flows. Our approach to protection and restoration generalizes the ""design protection"" concepts, and relies on the dynamic capabilities of IP routing to re-route IP datagrams when faults occur, thus leading to high-performance, cost-effective fault-tolerant logical topologies. Our design methodology for the first time considers the resilience properties of the topology during the logical topology optimization process, thus extending the optimization of the network resilience performance over the space of logical topologies. Numerical results clearly show that our approach is able to obtain very good logical topologies with limited complexity",2001,0, 2428,Test metric assessment of microfluidic systems through heterogeneous fault simulation,"In this paper we introduce a microfluidic system workflow, incorporating system design, simulation and test. We introduce our microfluidic fault library and a novel method of fault injection, the Fault Block. We cross-validate our simulation against experimental work with reference to two test methods: impedance spectroscopy and Levich sensors.
The workflow is described with reference to a Y-channel hydrodynamic system, the results from which show greater test capability using our approach than with conventional sensors.",2010,0, 2429,Extended Hypercube with Cross-Connection - A New Interconnection Fault Tolerant Network for Parallel Computers,"In this paper we introduce a new interconnection network, the extended hypercube with cross connection, denoted by EHC(n,k). This network has a hierarchical structure and overcomes the poor fault-tolerance properties of the extended hypercube (EH). The network has low diameter, constant degree connectivity and low message traffic density.",2009,0, 2430,Maintenance-oriented fault tree analysis of component importance,"In this paper we investigate and compare a set of existing component importance measures and select the most informative and appropriate one for guiding the maintenance of the system. Efficient methods to compute the selected measure are presented. An important concern in traditional fault tree reliability analysis, common-cause failure, is also addressed in the component importance analysis using the selected measure. A simple example is designed and analyzed to show the selection process.",2004,0, 2431,Model-based respiratory motion correction using 3-D echocardiography,"In this paper we investigate the use of 3-D echocardiography (echo) data for respiratory motion correction of MRI-derived roadmaps in image-guided interventions. By a combination of system calibration and tracking, the MRI and echo coordinate systems are aligned. 3-D echo images at different respiratory positions are registered to an end-exhale 3-D echo image using a registration algorithm with a similarity measure based on local orientation and phase differences. We first assess the use of echo-echo registration alone to perform motion correction in the MRI coordinate system. Next, we investigate combining the echo-echo similarity measure with an MRI-derived motion model. Using experiments with cardiac MRI and 3-D echo data acquired from 2 volunteers, we demonstrate that significantly faster and more robust performance can be obtained using the motion model.",2008,0, 2432,A comparative analysis of the efficiency of change metrics and static code attributes for defect prediction,"In this paper we present a comparative analysis of the predictive power of two different sets of metrics for defect prediction. We choose one set of product-related and one set of process-related software metrics and use them to classify Java files of the Eclipse project as defective or defect-free. Classification models are built using three common machine learners: logistic regression, naive Bayes, and decision trees. To allow different costs for prediction errors, we perform cost-sensitive classification, which proves to be very successful: more than 75% of files are correctly classified, with a recall of over 80% and a false positive rate below 30%. Results indicate that for the Eclipse data, process metrics are more efficient defect predictors than code metrics.",2008,0, 2433,A Hybrid Fault-Tolerant Algorithm for MPLS Networks,"In this paper we present a novel fault-tolerant algorithm for use in MPLS-based networks.
The algorithm employs both protection switching and path rerouting techniques and satisfies four selected performance criteria",2006,0, 2434,Fault Tolerance Using a Front-End Service for Large Scale Distributed Systems,"In this paper we present a solution for ensuring a high degree of availability and reliability in service-based large scale distributed systems. The proposed architecture is based on a set of replicated services running in a fault-tolerant container and a proxy service able to mask possible faults completely transparently to the client. The solution not only masks possible faults but also optimizes access to the distributed services and their replicas using a load-balancing strategy, whilst ensuring a high degree of scalability. The advantages of the proposed architecture were evaluated using a pilot implementation. The obtained results prove that the solution ensures a high degree of availability and reliability for a wide range of service-based distributed systems.",2009,0, 2435,Scheduling and voltage scaling for energy/reliability trade-offs in fault-tolerant time-triggered embedded systems,"In this paper we present an approach to the scheduling and voltage scaling of low-power fault-tolerant hard real-time applications mapped on distributed heterogeneous embedded systems. Processes and messages are statically scheduled, and we use process re-execution for recovering from multiple transient faults. Simultaneously addressing energy and reliability is especially challenging because lowering the voltage to reduce the energy consumption has been shown to increase the transient fault rate. In addition, time-redundancy-based fault-tolerance techniques such as re-execution and dynamic voltage scaling-based low-power techniques compete for the slack in the schedules. Our approach decides the voltage levels and start times of processes and the transmission times of messages such that the transient faults are tolerated, the timing constraints of the application are satisfied and the energy is minimized. We present a constraint logic programming-based approach which is able to find reliable and schedulable implementations within limited energy and hardware resources.",2007,0, 2436,Synthesis of Fault-Tolerant Schedules with Transparency/Performance Trade-offs for Distributed Embedded Systems,"In this paper we present an approach to the scheduling of fault-tolerant embedded systems for safety-critical applications. Processes and messages are statically scheduled, and we use process re-execution for recovering from multiple transient faults. If process recovery is performed such that the operation of other processes is not affected, we call it transparent recovery. Although transparent recovery has the advantages of fault containment, improved debuggability and less memory needed to store the fault-tolerant schedules, it introduces delays that can violate the timing constraints of the application. We propose a novel algorithm for the synthesis of fault-tolerant schedules that can handle the transparency/performance trade-offs imposed by the designer, and that makes use of the fault-occurrence information to reduce the overhead due to fault tolerance.
We model the application as a conditional process graph, where the fault occurrence information is represented as conditional edges and the transparent recovery is captured using synchronization nodes",2006,0, 2437,Computational system to detect defects in mounted and bare PCB Based on connectivity and image correlation,"In this paper we present an image analysis system for automated inspection of printed circuit boards (PCB). In recent years the PCB manufacturing industry has advanced in inspection automation, especially to meet tighter tolerance requirements. A PCB consists of a circuit and electronic components assembled on a surface. There are three main processes involved in its manufacture where inspection is necessary. The main process is the printing itself. Another important procedure is the placement of components on the PCB surface, and the third is the soldering of the components. In the proposed inspection system we consider board printing and component placement defects. We first compare a standard PCB image with a test PCB image, using a simple subtraction algorithm that can highlight the main problem regions. Then we use connectivity analysis of the printed circuit to find fatal and potential errors, such as breaks, short circuits and missing components. Besides, using digital image correlation techniques, the system detects component errors such as absence, substitution and wrong position. In order to apply this methodology to real PCBs, we propose to magnify the problem regions and search for the errors in a set of PCB sections, which are smaller than the main PCB image.",2008,0, 2438,The Sliced Gaussian Mixture Filter with adaptive state decomposition depending on linearization error,"In this paper, a novel nonlinear/nonlinear model decomposition for the Sliced Gaussian Mixture Filter is presented. Based on the level of nonlinearity of the model, the overall estimation problem is decomposed into a ""severely"" nonlinear and a ""slightly"" nonlinear part, which are processed by different estimation techniques. To further improve the efficiency of the estimator, an adaptive state decomposition algorithm is introduced that allows decomposition according to the linearization error for nonlinear system and measurement models. Simulations show that this approach has orders of magnitude less complexity compared to other state-of-the-art estimators, while maintaining comparable estimation errors.",2010,0, 2439,A novel pulse echo correlation tool for transmission path testing and fault finding using pseudorandom binary sequences,"In this paper, a novel pulse sequence testing methodology is presented as an alternative to time domain reflectometry (TDR) for transmission line 'health' condition monitoring, fault finding and location. This scheme uses pseudorandom binary sequence (PRBS) injection with cross correlation (CCR) techniques to build a unique response profile, as a characteristic signature, to identify the type of fault, if any, or load termination present, as well as its distance from the point of stimulus insertion. This fault characterization strategy can be applied to a number of industrial application scenarios embracing high frequency (HF) printed circuit board (PCB) and integrated circuit (IC) device operation, overhead lines and underground cables in inaccessible locations, which rely on a transmission line pathway or 'via' common to all cases either for signal propagation or power conveyance.
In this paper a lumped parameter circuit model is presented to emulate generalized transmission line behaviour, using the well-known PSpice simulation package, for a range of known load-terminations mimicking fault conditions in a range of application scenarios encountered in practice. Numerous line behavioural simulations for various fault conditions, known a priori, together with measured CCR responses demonstrate the capability of, and establish confidence in, the effectiveness of the PRBS test method for fault type identification and location. The accuracy of the method is further validated through theoretical calculation using known lumped parameters, fault termination conditions and link distance in transmission line simulation.",2005,0, 2440,Fault tolerant two-level pyramid networked control systems,"In this paper, a pyramid control hierarchy is proposed. It is based on the presence of a supervisor controller on top of separate controller nodes. A simulation study is conducted to test the functionality of the system. The proposed model is an enhancement of a machine modeled in the form of a networked control system (NCS). Two models are tested: one supervisor/two sub-controllers and one supervisor/three sub-controllers. All possible combinations of supervisor-controller intercommunication are tested. Also, all supervisor/controller interchangeability possibilities are taken into consideration. Results are illustrated and discussed. Recommendations are drawn. All machine models of this study are built using switched gigabit Ethernet in star topology",2005,0, 2441,Research of air-drive AMT fault diagnostic system based on models,"In this paper, research on an air-drive AMT (automatic mechanical transmission) fault diagnostic system is presented. The redundancy analysis method and action analysis method are used to detect the failures in the AMT system, and a fault diagnostic system of air-drive AMT in heavy commercial vehicles is developed by model-based design, which is well suited to the development of automotive control systems. The fault diagnostic models of sensors and actuating components in the AMT system are successfully built in Matlab/Simulink. The fault diagnostic models are implemented through an RCP (rapid control prototyping) experiment based on dSPACE.",2008,0, 2442,A robust error protection technique for JPEG2000 codestream and its evaluation in CDMA environment,"In this paper, a robust error protection scheme for JPEG2000 codestream is proposed. The error protection is achieved by combining the advantage of the codestream layer structure, a data hiding technique and an FEC code. At encoding stage, multiple quality layers of the codestream are protected by an FEC code with various strengths. The parity data is then hidden in the least significant layer. Prior to image decoding, error recovery is done by means of extracting the hidden data and performing the FEC decoding to the corresponding layers. The proposed method offers several benefits: it preserves the same codestream structure as the one in the JPEG2000 standard part 1, does not require additional bandwidth and can be integrated with the existing JPEG2000 error resilience tools. Hence, it accommodates one of the requirements for the upcoming wireless JPEG2000 (JPWL or JPEG2000 part 11).
Simulations in a CDMA environment confirmed the proposal's effectiveness.",2003,0, 2443,A selection sort method of test cases based on severity of defects,"In this paper, a selection and ordering method for test cases is presented that uses historical data from the test case pools and statistical analysis of test cases based on defect severities to choose the order of test cases, and then measures the efficiency by a formula to guide the choice of test cases in the testing process. Finally, an overall selection method for test cases in regression testing is given. Analysis of examples of this method and experimental results show that the proposed method effectively reduces the number of test cases and improves the efficiency of discovering defects in regression testing.",2010,0, 2444,Reducing of soft error effects on a MIPS-based dual-core processor,"In this paper, a simulation-based fault injection analysis of a MIPS-based dual-core processor is presented; an approach is proposed to improve the reliability of the most vulnerable parts of the processor components, and the improvement is then evaluated. In the first series of experiments, a total of 9100 transient faults were injected in 114 different fault sites of the processor. These experiments demonstrate that the Message Passing Interface, the Arbiter and the Program Counters are the most vulnerable parts of the processor. Thus, these parts were selected as targets for the improvement. The fault tolerance method used for improving the Arbiter is based on Triple Modular Redundancy. As for the Message Passing Interface and the Program Counters, the single-bit error correction Hamming code is used. The experimental results show 11.8% improvement in error recovery and 15.1% reduction of failure rate at the cost of 1.01% area overhead.",2010,0, 2445,"An Effective Approach for the Diagnosis of Transition-Delay Faults in SoCs, based on SBST and Scan Chains","In this paper, a Software-Based Diagnosis (SBD) procedure suitable for SoCs is proposed to tackle the diagnosis of transition-delay faults. The illustrated methodology takes advantage of an initial Software-Based Self-Test (SBST) test set and of the scan-chains included in the final SoC design release. In principle, the proposed methodology consists in partitioning the considered SBST test set into several slices, and then proceeding to the evaluation of the diagnostic ability owned by each slice with the aim of discarding diagnosis-ineffective test program portions. The proposed methodology is aimed at providing precise feedback to the failure analysis process, focusing on the systematic timing failures characteristic of new technologies. Experimental results show the effectiveness and feasibility of the proposed approach on a suitable SoC test vehicle including an 8-bit microcontroller, 4 SRAM memories and an arithmetic core, manufactured by STMicroelectronics, whose purpose is to provide precise information to the failure analysis process. The reached diagnostic resolution is up to 99.75%, compared to the 93.14% guaranteed by the original SBST procedure.",2007,0, 2446,Fault-Tolerant Cognitive Diversity Scheme for Topology Information-Based Hybrid Ubiquitous Sensor Networks,"In this paper, a special scenario of the topology in hybrid ubiquitous sensor networks is studied and a new cross layer fault-tolerant cognitive scheme based on topology information is proposed to exploit both the channel diversity and the spatial reusability.
The proposed Cross Layer Fault-tolerant Topology Information-based Cognitive Diversity Scheme (CL-FTICDS) integrates the attributes of both the new performance evaluation of cooperative diversity and the topology space. It resides between the MAC and network layers and aims to improve the network throughput by coordinating the transmission power, channel assignment and route selection among multiple nodes in a distributed way. The CL-FTICDS jointly coordinates the transmission power at each node, the channel selection on each wireless interface, and the route selection among interfaces based on the traffic information measured and exchanged among the multi-hop neighbor nodes. The Cognitive Diversity Time Topology Metric (CDTTM) is presented to quantify the difference between various adjustment candidates. It achieves efficient utilization of available channels by selecting the feasible adjustment candidate with the smallest CDTTM value and coordinating affected nodes to realize the adjustment. The average achievable sum rates and the outage probability of the networks are all examined under the cross layer constraints.",2008,0, 2447,Recovering of masked errors in power systems state estimation and measurement gross error detection and identification proposition,"In this paper, a topological/geometrical based approach is used to define an index, the undetectability index (UI), which provides the distance of a measurement from the range space of the Jacobian matrix of the power system. The higher the value of this index for a measurement, the closer it will be to the range space of that matrix; that is, the error in measurements with high UI is not reflected in their residuals, thus masking a possible gross error those measurements might have. Using the UI of a measurement, the possible gross error the state estimation process might mask is recovered; then the total gross error of that measurement is composed and used in the gross error detection and identification test. The gross error processing turns out to be very simple to implement, requiring only a few adaptations to the existing state estimation software. The IEEE-14 bus system is used to validate the proposed gross error detection and identification test.",2010,0, 2448,Detection of steel defect using the image processing algorithms,"In this paper, detection and classification of steel surface defects is investigated. Image processing algorithms are applied for detecting four popular kinds of steel defects, i.e., hole, scratch, coil break and rust. The results show that the applied algorithms have a good performance on steel defect detection. Numerical results indicate that the implemented image processing algorithms have 88.4%, 78%, 90.4% and 90.3% accuracy respectively on the hole, scratch, coil break and rust defects.",2008,0, 2449,Application of Non-superconducting Fault Current Limiter to improve transient stability,"In this paper, enhancement of the transient stability of a Single Machine Infinite Bus (SMIB) system with a double circuit transmission line using a Non-superconducting Fault Current Limiter (NSFCL) is proposed. Stability analysis for such a system is discussed in detail. It is shown that the stability depends on the resistance of the NSFCL under fault conditions. For effective improvement of stability, the optimum value of the NSFCL resistance is calculated.
Simulation results obtained with the PSCAD/EMTDC software are presented to confirm the accuracy of the analytical analysis.",2010,0, 2450,Exact analytical bit error rates for multiple access chaos-based communication systems,"In this paper, exact analytical expressions for the bit error rates (BERs) in a multiple-access chaos-based digital communication system are derived. Comparisons are made with those obtained using traditional approximation methods which assume a Gaussian distribution for the conditional decision parameter. The obtained results are compared to the results of brute-force (BF) numerical simulations. It is found that the exact analytical BERs are in perfect agreement with BF simulations and hence provide better prediction of the BER performance than those given by traditional Gaussian-approximation-based methods.",2004,0, 2451,Fault tolerant gaits of legged robots for locked joint failures,"In this paper, fault detection and tolerance in static walking of legged robots are addressed. A kind of fault event, the locked joint failure, is defined and its properties are investigated in the framework of gait study and robot kinematics. For the purpose of tolerating a locked joint failure, an algorithm of fault tolerant gaits for a quadruped robot is proposed in which the robot can continue walking after a locked joint failure occurs in one of its legs. A case study on applying the proposed scheme to wave gaits verifies its applicability and capability",2000,0, 2452,A fault detection scheme using condition systems,"In this paper, fault detection is examined using condition systems. The method presented relies on existing results for controller synthesis using these models. First we present a generalized discussion of the controller module called a taskblock. Then a method to incorporate on-line fault detection into the taskblock framework is presented. We show that the resulting modified controller module is effective for control and detection.",2005,0, 2453,Research on fault mechanism of icing of wind turbine blades,"In this paper, the fault mechanism of icing of wind turbine blades is studied theoretically and experimentally. First, the aerodynamic performance of a wind turbine's primary airfoil under normal and icing conditions is simulated separately using Fluent software to analyze the geometric effect of icing on the wind turbine. Then the formation and characteristics of the vibration signal on the main shaft caused by mass unbalance of the wind turbine's rotor are analyzed using rotor dynamics theory to evaluate the mass effect of icing on the wind turbine. Finally, corresponding experiments are carried out on a self-made small wind turbine under laboratory conditions to verify the foregoing theoretical analysis.",2009,0, 2454,"A new life system approach to the Prognostic and Health Management (PHM) with survival analysis, dynamic hybrid fault models, evolutionary game theory, and three-layer survivability analysis","In this paper, I propose a new architecture for PHM, which is characterized by a life-system approach: treating PHM as a hierarchical system with fundamental properties similar to those of life systems.
Conceptually, besides drawing on the important concepts from existing PHM theory and practice such as life cycle, condition-based maintenance (CBM), and remaining useful lifetime (RUL), I draw on the dynamic hybrid fault models (DHF) from fault tolerance theory and agreement algorithms, three-layer survivability analysis from survivable network systems (SNS), population dynamics from population ecology, and survival analysis from biostatistics and biomedicine. Methodologically, three main mathematical tools: survival analysis (including competing risks analysis and multivariate survival analysis), dynamic hybrid fault models and evolutionary game theory (EGT) are applied for PHM modeling and analysis. Operationally, the three-layer survivability analysis is applied to deal with the so-called UUUR (unpredictable, latent, unobserved or unobservable risks) events and to achieve sound decision-making. Overall, the advantages of the new architecture include: (1) Offer a flexible architecture that is not only compatible with existing components/approaches of PHM, such as lifetime, reliability, maintainability, safety, data-driven prognostics, and model-based prognostics, but also readily extendable to incorporate fault tolerance, survivability, and security. (2) Utilize survival analysis, competing risks analysis, and multivariate survival analysis for better modeling of lifetime and reliability at both individual and population (group of components) levels. (3) Approach the fault tolerance and reliability of the monitoring sensor network in PHM and those of the underlying physical system with the DHF models. (4) Analyze the system survivability (sustainability) with the three-layer survivability analysis approaches. (5) Capture security events with UUUR events and incorporate security policies into PHM.",2009,0, 2455,Efficient residual prediction with error concealment in extended spatial scalability,"In this paper, the trade-off of inter-layer residual prediction between concealing lost macroblocks (MB) and removing visual artifacts in extended spatial scalability (ESS) is investigated. To reduce visual artifacts, various schemes have been proposed in the literature, including assigning higher distortion to the residual prediction used in the ESS enhancement layer for non-matching residuals. Whereas, residual prediction is also used to conceal lost MBs in inter-layer error concealment methods. We propose an efficient use of residual prediction to prevent those artifacts as well as to conceal lost MBs, exploiting the homogeneous characteristics of video objects for error resilient coding. Simulation results show that the proposed scheme achieves up to 0.29 dB PSNR gain, with an overall average of 0.1 dB PSNR gain at the decoder for various tested sequences compared to JVT-W123 under the testing conditions specified in JVT-V302.",2010,0,6052 2456,A Miniaturized Dual-mode Bandpass Filter using Triangular Loop Resonator with Wide Stopband by U-Shaped Defected Ground Structure (DGS),"In this paper, a miniaturized dual-mode microstrip bandpass filter using the triangular loop structure, with harmonic suppression by a double U-shaped defected ground structure, is proposed, which exhibits excellent performance. The proposed filter shows a 3.6% fractional bandwidth at the center frequency of 1.4 GHz with a return loss of better than 10 dB and an insertion loss of less than 2 dB. The size of the proposed structure is 50% less than that of a conventional dual-mode microstrip triangular loop resonator filter at the same center frequency.
With proper design, the stopband performance of the proposed filter is drastically improved, with first and second spurious suppression of better than -15 dB.",2007,0, 2457,Robust fault detection using interval LPV models,"In this paper, robust fault detection using a passive robust interval approach based on generating an adaptive threshold for interval linear parameter varying (LPV) models is proposed. This approach allows the parameters of such a model and their associated uncertainty intervals to depend on the operating point. Additionally, a parameter estimation algorithm for interval LPV models is presented. Finally, a piece of a sewer system has been used to test the validity of the proposed approach.",2007,0, 2458,Five-phase induction motor behaviour under faulted conditions,"In this paper, a simple phase variable model of a five-phase induction motor drive is presented. The model is implemented using MATLAB/SIMULINK software. A number of electrical faults which are common in inverter-fed drives are created. The faults considered are open circuit faults in one phase and in two phases. As these motor drives are intended for disturbance-free operation, their behaviour under different faults is an important aspect to investigate. Simulation results for each fault have been provided and their detailed analysis has been carried out. Experimental proof is also provided to support the findings.",2008,0, 2459,High level fault simulation: experiments and results on ITC'99 benchmarks,"In this paper we present our approach for performing Behavioral Fault Simulation (BFS). This approach involves three main steps: (i) the definition of an internal modeling of behavioral descriptions, and the determination of a fault model; (ii) the definition of a fault simulation technique; (iii) the implementation of this technique. Finally, this paper deals with experiments conducted on ITC'99 benchmarks in order to validate a VHDL behavioral fault simulator (BFS). The effectiveness of the BFS software is clearly demonstrated through the obtained results",2000,0, 2460,A Millimeter Wave Direct QPSK modulator MMIC Using PIN Technology And A Novel Approach to Self Error-Correction,"In this paper we present two topologies of QPSK (Quadrature Phase Shift Keying) Modulator for direct carrier modulation at 29.5 GHz for satellite communications using PIN technology. Previously published designs use PHEMT technology. PIN diodes allow operation at high power levels (several watts at Ka band) and yield better switching performance compared to PHEMT switches. Our designs exhibit wide bandwidth (3 GHz) and reasonable loss (5.5 dB). In addition, we present a novel method for a self error-correcting QPSK modulator, which can yield a high quality modulator at millimeter waves.",2002,0, 2461,Study of the Dispersion Characteristics of One Dimensional EBG with Defects,"In this paper we propose a simplified model for studying the Brillouin diagrams based on dielectric multilayers inside a parallel plate waveguide. The main objective of this work is the use of a simplified approach in modelling EBG structures with periodical defects. The effects of layer permittivity, defect length and periodicity were studied using simulation software with appropriate periodic boundary conditions. Physical insights and intuitive justifications for the simulation findings and concepts are also presented.
It is shown that the two forbidden band-gaps can be controlled independently by varying either the permittivity or the size of the defects",2005,0, 2462,Using clone detection to identify bugs in concurrent software,"In this paper we propose an active testing approach that uses clone detection and rule evaluation as the foundation for detecting bug patterns in concurrent software. If we can identify a bug pattern as being present, then we can localize our testing effort to the exploration of interleavings relevant to the potential bug. Furthermore, if the potential bug is indeed a real bug, then targeting specific thread interleavings instead of examining all possible executions can increase the probability of the bug being detected sooner.",2010,0, 2463,Using FPGA technology towards the design of an adaptive fault tolerant framework,"In this paper we propose an architecture for a reconfigurable, adaptive, fault-tolerant (RAFT) framework for application in real time systems which require multiple levels of redundancy and protection. Typical application environments include distributed processing, fault-tolerant computation, and mission and safety-critical systems. The framework uses field programmable gate array (FPGA) technologies with on-the-fly partial programmability, achieving reconfiguration of a system component when the existing components fail or to provide extra reliability as required in the specification. The framework proposes the use of an array of FPGA devices to implement a system that, after detecting an error caused by a fault, can adaptively reconfigure itself to achieve fault tolerance. The FPGAs, which are becoming widely available at a low cost, are exploited by defining a system model that allows the system user to define various levels of reliability choices, providing a monitoring layer for the system engineer.",2005,0, 2464,On the Intrinsic Fault-Tolerance Nature of Parallel Genetic Programming,"In this paper we show how parallel genetic programming can run on a distributed system with volatile resources without any lack of efficiency. By means of a series of experiments, we test whether parallel GP - and, consequently, evolutionary algorithms - are intrinsically fault-tolerant. The interest of this result is crucial for researchers dealing with real-life problems in which parallel and distributed systems are required for obtaining results in a reasonable time. In that case, parallel GP tools will not require the inclusion of fault-tolerant computing techniques or libraries when running on meta-systems undergoing volatility, such as desktop grids offering public resource computing. We test the performance of the algorithm by studying the quality of solutions when running over distributed resources undergoing processor failures, compared with a fault-free environment. This new feature, whose advantages are shown, improves the dependability of the parallel genetic programming algorithm",2007,0, 2465,Effect on soft error rate of kinks and open loops in the read-head transfer curve,"In this paper we study the effects of nonlinearities in the GMR sensor on soft error rate. Previous studies have focused on the effects of nonhysteretic, low order nonlinearities.
Here we focus mainly on transfer curves with kinks and open loops",2001,0, 2466,Decentralized fault diagnosis for sensor networks,"In this paper we study the problem of fault diagnosis for sensor networks. We examine faults that involve an anomalous behavior of the sensor and investigate their diagnosis only through the local interaction between faulty nodes and healthy ones. We provide heuristics to actively diagnose faults and recover the nominal behavior.",2009,0, 2467,A probabilistic approach for fault tolerant multiprocessor real-time scheduling,"In this paper we tackle the problem of scheduling a periodic real time system on identical multiprocessor platforms; moreover, the tasks considered may fail with a given probability. For each task we compute its duplication rate in order to (1) given a maximum tolerated probability of failure, minimize the size of the platform such that at least one replica of each job meets its deadline (and does not fail) using a variant of EDF, namely EDF(k), or (2) given the size of the platform, achieve the best possible reliability with the same constraints. Thanks to our probabilistic approach, no assumption is made on the number of failures which can occur. We propose several approaches to duplicate tasks and we show that we are able to find solutions always very close to the optimal one",2006,0, 2468,Diagnosis of scan-chains by use of a configurable signature register and error-correcting codes,"In this paper a new diagnosis method for scan designs with many scan-paths based on error correcting linear block codes with N information bits and K control bits is proposed, where N is the number of scan-paths. The new approach can be implemented on a modified STUMPS-architecture. In diagnosis mode the test has to be repeated K times. In the K repetitions of the test the outputs of the scan-paths are connected to a configurable signature register (with disconnected feedback logic) according to the coefficients of the K syndrome equations of the code. By monitoring the one-dimensional output sequence of the configurable signature register the failing scan-cells in the different scan-paths can be identified with the resolution of the selected error correcting code. Since for the relevant codes, e.g. (shortened) Hamming codes and T-error-correcting BCH codes, the ratio K/N decreases very fast with an increasing number N, the method is useful for a large number of scan-paths.",2004,0, 2469,Scalable multiple description video coding for error-resilient transmission over hybrid networks,"In this paper a scalable multiple description video coding approach based on embedded multiple description scalar quantization (EMDSQ) is presented. The proposed approach enables the progressive transmission of video over unreliable channels with variable bandwidth. Experimental results show that in lossy transmission conditions the proposed embedded multiple description coding system yields better rate-distortion performance compared to single description video coding and can efficiently sustain 20% losses.",2008,0, 2470,Analytical circuit-based model of PMSM under stator inter-turn short-circuit fault validated by time-stepping finite element analysis,"In this paper a simple dynamic circuit-based model for a rotor surface mounted PMSM with an inter-turn winding short-circuit fault is presented. Finite element analysis is used for parameter determination and to study the PMSM under fault conditions. Analytical expressions are proposed for determining the fault model parameters.
The proposed circuit model and parameter expressions are validated by time-stepping FEA. The circuit-based model results exhibit the same trend as predicted by FEM analysis for different fault insulation resistances.",2010,0, 2471,Broadband Planar Filters with Enhanced Couplings Using Defected Ground Structures,"In this paper a study of some microwave microstrip bandpass filters (BPF) using defected ground structures (DGS) is presented. This technique allows designs of tight couplings without the necessity of using very narrow coupling gaps. Based on the results of the study, four-pole cross-coupled planar microwave bandpass filters were designed, with a single ground slot or with two ground slots. Compared to similar microstrip filters without defected ground, the simulated performances of these novel structures indicate some technological advantages",2006,0, 2472,Analysis of Transient Stability Enhancement of LV-Connected Induction Microgenerators by Using Resistive-Type Fault Current Limiters,"In this paper, an analytical method by which the transient stability of an induction machine is maintained regardless of the fault clearance times is introduced. The method can be applied in order to improve the transient stability of a large penetration of low-voltage (LV) connected microgeneration that can be directly interfaced by single-phase induction generators within domestic premises. The analysis investigates the effectiveness of using resistive-type superconducting fault current limiters (RSFCLs) as remedial measures to prevent the microgenerators from reaching their speed limits during remote faults, and hence improving their transient stability. This will prevent unnecessary disconnection of a large penetration of LV-connected microgeneration, thus avoiding the sudden appearance of hidden loads and unbalanced voltage conditions. The minimum required value of the resistive element of an RSFCL for mitigating the transient instability phenomena of LV-connected microgeneration, based on the system and connected machine parameters, is determined. The analytical method has been validated by conducting informative transient studies using detailed models of a small microwind turbine with constant mechanical output interfaced directly within residential dwellings by a single-phase induction generator, a transient model of a resistive superconducting fault current limiter (RSFCL), and a typical suburban distribution network with residential loads. All the models are developed in the time-domain PSCAD/EMTDC dynamic simulation environment.",2010,0, 2473,A robust model-based information system for monitoring and fault detection of large scale belt conveyor systems,"In this paper an information system is presented, which is developed to meet the requirements on fault detection and online monitoring of large scale belt conveyor systems. The core of this information system consists of a mathematical model, an observer and a fault detection system.",2002,0, 2474,Fault diagnosis in a plant using Fisher discriminant analysis,"In this paper Fisher's discriminant analysis (FDA) is used for detecting and diagnosing faults in a real plant. FDA provides an optimal lower dimensional representation in terms of discriminating between classes of data, where, in this context of fault diagnosis, each class corresponds to data collected during a specific, known fault.
A discriminant function is applied to detect and diagnose faults using both simulated and real data collected from a plant (a two-tank system), showing good results.",2008,0, 2475,Packet error and frame rate controls for real time video stream over wireless LANs,"In this paper, QoS control of a real-time multimedia communication system in a heterogeneous environment composed of wired and wireless networks is proposed. In our suggested system, as channel coding, an FEC (Forward Error Correction) method with Reed-Solomon coding is introduced to reduce the packet error rate on the wireless network. On the other hand, as source coding, transcoding methods including transformation between various video codings such as M-JPEG, MPEG and Quicktime, and controls of the Q-factor within a frame, the frame rate and the color depth are introduced to maintain the required QoS, particularly the end-to-end throughput. The increase of the required bandwidth by redundant packet addition with FEC can be suppressed by the transcoding functions while the packet error rate is reduced to the accepted value. In order to verify the functionality and the efficiency of our suggested system, numerical simulation was carried out. As the result, our suggested system, by combination of transcoding and FEC, could correct the packet error rate to the accepted order while maintaining the frame rate and keeping the amount of transferred data constant.",2003,0, 2476,Algorithm of MTIE point estimate computing for non-uniform sampling of time error,"In this paper an algorithm enabling assessment of maximum time interval error (MTIE) for non-uniform sampling of time error is proposed. The reasons for non-uniform sampling are presented. Then the idea of MTIE computing for non-uniformly sampled data is described. Next the details of the algorithm are presented and described.",2008,0, 2477,Application of fuzzy neuro for generator stator earth fault detection,"In this paper the use of a fuzzy neural net for stator earth fault detection is presented. A generator model is simulated using EMTDC software. Earth faults are simulated at distance points between 0.1% and 100% from the generator neutral. The combination of both EMTDC simulation and neural network presented in this paper introduces a new, complementary method that performs better in instances where the interpretation of traditional methods is somewhat dubious.",2004,0, 2478,Non-Uniform Slant Correction for Handwritten Text Line Recognition,"In this paper we apply a novel non-uniform slant correction preprocessing technique to improve the recognition of offline handwritten text lines. The local slant correction is expressed as a global optimisation problem over the sequence of local slant angles. This is different from conventional slant removal techniques that rely on the average slant angle. Experiments based on a state-of-the-art handwritten text line recogniser show a significant gain in word level accuracy for the investigated preprocessing methods.",2007,0, 2479,Task Mapping and Bandwidth Reservation for Mixed Hard/Soft Fault-Tolerant Embedded Systems,"In this paper we are interested in mixed hard/soft real-time fault-tolerant applications mapped on distributed heterogeneous architectures. We use Earliest Deadline First (EDF) scheduling for the hard real-time tasks and the Constant Bandwidth Server (CBS) for the soft tasks. The bandwidth reserved for the servers determines the quality of service (QoS) for soft tasks.
CBS enforces temporal isolation, such that soft task overruns do not affect the timing guarantees of hard tasks. Transient faults in hard tasks are tolerated using checkpointing with rollback recovery. We have proposed a Tabu Search-based approach for task mapping and CBS bandwidth reservation, such that the deadlines for the hard tasks are satisfied, even in the case of transient faults, and the QoS for the soft tasks is maximized. Researchers have used fixed execution time models, such as the worst-case execution times for hard tasks and average execution times for soft tasks. However, we show that by using stochastic execution times for soft tasks, significant improvements can be obtained. The proposed strategy has been evaluated using an extensive set of benchmarks.",2010,0, 2480,Probability of error metrics for best-basis selection,"In this paper we derive a metric for selecting the best-basis of a wavelet packet used to compress a classifier database. The metric will choose a basis that minimizes the probability of error given that the database must be represented with a finite number of bits. We also solve the corresponding bit allocation problem.",2000,0, 2481,Engineering knowledge-based condition analyzers for on-board intelligent fault classification: A case study,"In this paper we describe the design of a knowledge-based condition analyzer that performs on-board intelligent fault classification. The system is designed to be deployed as a prototype on E414 locomotives, a series of downgraded high-speed vehicles that are currently employed in standard passenger service. Our goal is to satisfy the requirements of a development scenario in the Integrail project for a condition analyzer that leverages an ontology-based description of some critical E414 subsystems in order to classify faults considering mission and safety related aspects.",2008,0, 2482,Defect recognition algorithm based on curvelet moment and support vector machine,"In this paper, a new recognition algorithm based on the curvelet moment and a support vector machine (SVM) is proposed for chip defect recognition. The proposed recognition method is implemented through a reference comparison method. First the defect regions of chips are extracted through preprocessing, and then the curvelet moment feature of the defect region is computed as the input of the SVM classifier; the output of the trained SVM classifier is the result of defect recognition. The algorithm combines the good properties of the curvelet moment and the SVM classifier: the former can provide multi-scale, local detail and orientation information of the defect region, and the latter is suitable for solving small-sample, nonlinear and high-dimensional pattern recognition problems. Experimental results show that the algorithm has a higher recognition rate compared with the PCA-based method and can solve the complex defect recognition problem effectively.",2010,0, 2483,Implementation of a three-level rectifier for power factor correction,"In this paper, a new single-phase switching mode rectifier (SMR) for three-level pulse width modulation (PWM) is proposed to achieve high input power factor, low current harmonics, low total harmonic distortion (THD) and a simple control scheme. The main circuit of the proposed SMR consists of six power switches, one boost inductor, and two DC capacitors. The control algorithm is based on a look-up table. There are five control signals in the input of the look-up table.
These control signals are used to control the power flow of the adopted rectifier, compensate the capacitor voltages for the balance problem, draw a sinusoidal line current with nearly unity power factor, and generate a three-level PWM pattern on the AC side of the adopted rectifier. The advantages of using the three-level PWM scheme compared with the two-level PWM scheme are the lower voltage stress of the power switches, decreased input current harmonics, and reduced conduction losses. The performance of the proposed multilevel SMR is measured and shown in this paper. The high power factor and low harmonic currents at the input of the rectifier are verified by software simulations and experimental results from a laboratory prototype",2000,0, 2484,Exact pairwise error probability of distributed space-time coding in wireless relays networks,"In this paper, we analyze the pairwise error probability (PEP) of distributed space-time codes employing the Alamouti scheme. We restrict our attention to the space-time code construction for Protocol III in the work of Nabar et al. (2004). In particular, we derive two exact closed-form expressions for the PEP when the relay is either close to the source or to the destination. Using the alternative definition of the Q-function, we can express these PEPs in terms of a finite integral whose integrand is composed of trigonometric functions. We further show that with only one relay assisting the source-destination link, the system still achieves a diversity order of two, assuming single-antenna terminals. We also perform Monte-Carlo simulations to verify the analysis.",2007,0, 2485,Probabilistic approaches to fault detection in networked discrete event systems,"In this paper, we consider distributed systems that can be modeled as finite state machines with known behavior under fault-free conditions, and we study the detection of a general class of faults that manifest themselves as permanent changes in the next-state transition functionality of the system. This scenario could arise in a variety of situations encountered in communication networks, including faults that occur due to design or implementation errors during the execution of communication protocols. In our approach, fault diagnosis is performed by an external observer/diagnoser that functions as a finite state machine and which has access to the input sequence applied to the system but has only limited access to the system state or output. In particular, we assume that the observer/diagnoser is only able to obtain partial information regarding the state of the given system at intermittent time intervals that are determined by certain synchronizing conditions between the system and the observer/diagnoser. By adopting a probabilistic framework, we analyze ways to optimally choose these synchronizing conditions and develop adaptive strategies that achieve a low probability of aliasing, i.e., a low probability that the external observer/diagnoser incorrectly declares the system as fault-free. An application of these ideas in the context of protocol testing/classification is provided as an example.",2005,0, 2486,A combined decision fusion and channel coding scheme for distributed fault-tolerant classification in wireless sensor networks,"In this paper, we consider the distributed classification problem in wireless sensor networks. Local decisions made by local sensors, possibly in the presence of faults, are transmitted to a fusion center through fading channels.
Classification performance could be degraded due to the errors caused by both sensor faults and fading channels. Integrating channel decoding into the distributed fault-tolerant classification fusion algorithm, we obtain a new fusion rule that combines both soft-decision decoding and local decision rules without introducing any redundancy. The soft decoding scheme is utilized to combat channel fading, while the distributed classification fusion structure using error correcting codes provides good sensor fault-tolerance capability. Asymptotic performance of the proposed approach is also investigated. Performance evaluation of the proposed approach with both sensor faults and fading channel impairments is carried out. These results show that the proposed approach outperforms the system employing the MAP fusion rule designed without regard to sensor faults and the multiclass equal gain combining fusion rule",2006,0, 2487,On fault-sensitive feasibility analysis of real-time task sets,"In this paper, we consider the problem of checking the feasibility of a set of n aperiodic real-time tasks while provisioning for timely recovery from (at most) k transient faults. We extend the well-known processor demand approach to take into account the extra overhead that may be induced by potential recovery operations under earliest deadline first scheduling. We develop a necessary and sufficient test using a dynamic programming technique. An improvement upon the previous solutions is to address and efficiently solve the case where the recovery blocks associated with faults of a given task do not necessarily have the same execution time. Further, we provide an on-line version of our algorithm that does not require a priori knowledge of release times. The on-line algorithm runs in O(mk^2) time where m is the number of ready tasks. We also show how to quickly adjust the recovery-related parameters of the algorithm for the remaining part of the execution when a fault is detected.",2004,0, 2488,Improving the design of parallel-pipeline cyclic decoders towards fault-secure versions,"In this paper, we consider the problem of designing fault-secure decoders for various cyclic linear codes. The principle relies on a slight modification of the high speed parallel-pipeline decoder architecture in [6], to control the correct operation of the cyclic decoder as well. The complexity evaluation has been obtained by synthesizing parallel-pipeline decoders for various codes on a Stratix II FPGA using Altera's Quartus II software. It shows that their FS versions compare favorably against the unprotected ones, with respect to the area and the maximal operation frequency.",2008,0, 2489,Fault detection and isolation in the NT-OMS/RCS,"In this paper, we consider the problem of test design for real-time fault detection and diagnosis in the Space Shuttle's non-toxic orbital maneuvering system and reaction control system (NT-OMS/RCS). For demonstration purposes, we restrict our attention to the shaft section of the NT-OMS/RCS, which consists of 160 faults (each fault being either a leakage, blockage, igniter fault, or regulator fault) and 128 sensors. Using the proposed tests, we are able to uniquely isolate a large number of the faults of interest in the NT-OMS/RCS. Those that cannot be uniquely isolated can generally be resolved into small ambiguity groups and then uniquely isolated via manual/automated commands.
Simulation of the NT-OMS/RCS under various fault conditions was conducted using the TRICK modeling software.",2004,0, 2490,Symbol Error Probability Analysis for Multihop Relaying over Nakagami Fading Channels,"In this paper, we derive closed-form expressions for the average symbol error probability (SEP) of arbitrary rectangular I × J-ary quadrature amplitude modulation (QAM) in cooperative amplify-and-forward (A&F) relaying systems, when no direct line-of-sight exists between the source and the destination nodes and when the links between the K successive nodes forming the multihop cooperation chain (including the source and the destination nodes) follow independent but not necessarily identical Nakagami-m fading distributions with arbitrary real indexes {m_k}, k = 1, ..., K, not less than 1/2 and arbitrary average power levels {Ω_k}, k = 1, ..., K. The average SEP of rectangular QAM for this set-up is provided in closed form as a linear combination of the first Lauricella multivariate hypergeometric function F_A^(K+1), K being the number of multihop links, which can be efficiently evaluated using standard numerical software. Simulation results sustaining our analysis are provided, and the impacts of various parameters on the overall multihop system performance are investigated.",2010,0, 2491,Correcting Sampling Oscilloscope Timebase Errors With a Passively Mode-Locked Laser Phase Locked to a Microwave Oscillator,"In this paper, we describe an apparatus for correcting the timebase errors when calibrating the response of an equivalent-time sampling oscilloscope using a passively mode-locked erbium-doped fiber laser that is phase locked to a microwave signal generator. This enables us to simultaneously correct both the random jitter and the systematic timebase distortion in the oscilloscope. As a demonstration of the technique, we measure the electrical pulse generated by a fast photodiode that is excited by our laser. We show that the pulse that is reconstructed using our technique has significantly lower uncertainty than the pulse that is reconstructed using a separate correction for timebase distortion followed by jitter deconvolution.",2010,0, 2492,An efficient error resilient technique for applications of one-way video using transcoding and analysis by synthesis,"In this paper, we describe an efficient error resilient technique for one-way video transmitted over lossy packet networks. In one-way video applications, video sequences are pre-encoded off-line to be stored in a server without any information on transmission errors. Therefore, the video encoding algorithm should be designed either without any error resilience or with a hypothetical PLR (packet loss rate) providing a certain degree of robustness for transmission over error-prone environments. However, PLRs differ according to the various kinds of networks, and even vary over time within a network. A dynamic update of the video stream for the adaptive enhancement of error resilience was introduced with a transcoding technique to deal with variable PLRs. An error-sensitivity of each pixel is defined and computed in the transcoder, according to the PLR from network feedback and the effects of error propagation. The error-sensitivity was monitored by the transcoder to modify the video stream for the enhancement of error resilience. The basic error resilience scheme is blocking spatial and temporal error propagation.
An analysis-by-synthesis scheme is introduced for 1) accurate estimation of the distortion at the decoder by exact simulation of error concealment, and 2) motion vector re-estimation for blocking temporal error propagation. The optimal mode of each macroblock was decided in the rate-distortion (R-D) framework of the receiver at the given PLR and macroblock location. The performance of the proposed algorithm was evaluated with MPEG-2 video streams in a lossy packet network scenario. Experimental results show that the proposed error resilience algorithm outperformed the MPEG-2 TM5 and the boundary motion compensated error concealment scheme. It was also shown that the proposed algorithm makes stored one-way video streams applicable to all kinds of error-prone networks with variable error rates.",2004,0, 2493,Fault Tolerance and Recovery of Scientific Workflows on Computational Grids,"In this paper, we describe the design and implementation of two mechanisms for fault-tolerance and recovery for complex scientific workflows on computational grids. We present our algorithms for over-provisioning and migration, which are our primary strategies for fault-tolerance. We consider application performance models, resource reliability models, network latency and bandwidth and queue wait times for batch-queues on compute resources for determining the correct fault-tolerance strategy. Our goal is to balance reliability and performance in the presence of soft real-time constraints like deadlines and expected success probabilities, and to do it in a way that is transparent to scientists. We have evaluated our strategies by developing a Fault-Tolerance and Recovery (FTR) service and deploying it as a part of the Linked Environments for Atmospheric Discovery (LEAD) production infrastructure. Results from real usage scenarios in LEAD show that the failure rate of individual steps in workflows decreases from about 30% to 5% by using our fault-tolerance strategies.",2008,0, 2494,A Passive Fault Tolerant Control Strategy for the uncertain MIMO Aircraft Model F-18,"In this paper, we design a passive fault tolerant control strategy for an uncertain MIMO aircraft model F-18. A novel variable structure controller with a sliding surface and a Lyapunov function is proposed in order to eliminate the effect of certain types of pre-specified faults. The main features of the proposed control strategy are its simplicity and robustness against uncertainties, parameter variations and some pre-specified faults. Computer experiments illustrating the application of the proposed approach to the longitudinal flight control of an F-18 aircraft model are presented to show the effectiveness of the design method.",2007,0, 2495,Robust Monitoring of Link Delays and Faults in IP Networks,"In this paper, we develop failure-resilient techniques for monitoring link delays and faults in a Service Provider or Enterprise IP network. Our two-phased approach attempts to minimize both the monitoring infrastructure costs as well as the additional traffic due to probe messages. In the first phase, we compute the locations of a minimal set of monitoring stations such that all network links are covered, even in the presence of several link failures. Subsequently, in the second phase, we compute a minimal set of probe messages that are transmitted by the stations to measure link delays and isolate network faults. We show that both the station selection problem as well as the probe assignment problem are NP-hard.
We then propose greedy approximation algorithms that achieve a logarithmic approximation factor for the station selection problem and a constant factor for the probe assignment problem. These approximation ratios are provably very close to the best possible bounds for any algorithm",2006,0,4289 2496,The overview of fiber fault localization technology in TDM-PON network,"In this paper, we discuss the mechanism of optical fiber breaks in a time division multiplexing passive optical network (TDM-PON) and the upward and downward monitoring issues of the conventional fiber fault localization technique using an optical time domain reflectometer (OTDR). We also studied the fault localization technologies that had previously been recommended. Finally, we propose a centralized inline monitoring and network testing system named the centralized failure detection system (CFDS). CFDS will be installed with the optical line terminal (OLT) at the central office (CO) or network operation center to centrally monitor each optical fiber line's status and detect failure locations that occur in the multi-line drop region of the fiber-to-the-home (FTTH) access network, downward from the CO towards the customer premises, to improve the service reliability and reduce the restoration time and maintenance cost.",2008,0, 2497,Development and implementation of an ANN-based fault diagnosis scheme for generator winding protection,"In this paper, the development and implementation of a new fault diagnosis scheme for generator winding protection using artificial neural networks (ANN) is introduced. The proposed scheme performs internal fault detection, fault type classification and faulted phase identification. This scheme is characterized by higher sensitivity and stability boundaries as compared with the differential relay. The effect of the presence of nonsynchronous frequencies on the scheme performance is examined. The effect of different values of ground resistance on ground fault detection sensitivity is outlined. The scheme hardware is implemented based on a digital signal processing (DSP) board interfaced with a multi input/output (MIO) board. Test results of the proposed scheme corroborate the scheme's stability and sensitivity",2001,0, 2498,Fault Tolerant System under Open Phase Fault for BLDC Motor Drives,"In this paper, a fault tolerant system for BLDC motors is proposed to maintain control performance under an open fault of the inverter. Two different types of fault identification methods are proposed: one based on the difference between the reference and actual currents, and the other using additional voltage sensors across the lower legs of the inverter. The reconfiguration scheme is achieved by a four-switch topology connecting the faulty leg to the middle point of the DC-link using bidirectional switches. The proposed fault tolerant system quickly recovers the control performance through a short detection time and reconfiguration of the system topology. Therefore, continuous operation of the BLDC motor drive system after faults is possible. The feasibility of the proposed fault tolerant system is proved by simulation",2006,0, 2499,Application of an Improved Particle Swarm Optimization for Fault Diagnosis,"In this paper, the feasibility of using a probabilistic causal-effect model is studied and we apply it in a particle swarm optimization (PSO) algorithm to classify the faults of a mine hoist.
In order to enhance the PSO performance, we propose a probability function to nonlinearly map the data into a feature space in the probabilistic causal-effect model; with it, fault diagnosis is simplified from the original complex feature set into an optimization problem. The proposed approach is applied to fault diagnosis, and our implementation has the advantages of being general, robust, and scalable. The raw datasets obtained from the mine hoist system are preprocessed and used to generate network fault diagnoses for the system. We studied the performance of the improved PSO algorithm and generated a probabilistic causal-effect network that can detect faults in the test data successfully. It can achieve >90% minimal diagnosis when the cardinality of the fault symptom sets is greater than 25.",2009,0, 2500,Digital correction techniques for accuracy improvement in measurements of SnO2 sensor impedance,"In this paper, the performance improvement of a gas-sensing system by digital correction techniques is discussed. The considered system operates as a vectorial impedance meter and performs impedance measurements of eight sensors arranged in an array in the frequency range 10 Hz-15 MHz. The measurement of the chemical sensors' impedance is an innovative technique that allows highlighting different adsorption mechanisms taking place when the sensors are exposed to gases. Of course, impedance analyzers are commercially available, but they usually make measurements on only one device at a time and they are very expensive. The proposed PC-based impedance analyzer is a versatile one and shows good performance for gas-sensing applications. A digital correction technique is used in this work to improve the impedance measurement accuracy of each channel of the gas-sensing system (eight sensors, eight channels), in order to compensate for the conditioning electronics response. The latter is evaluated in a characterization procedure. A linear black box two-port model is used to take into account crosstalk, amplitude, and phase distortions. Two different techniques to evaluate the response of the measurement system are discussed in this paper, and experimental results are presented on both the measurement of reference impedances and the measurement of chemical sensors.",2004,0, 2501,A Fault-Tolerant Channel-Allocation Algorithm for Cellular Networks With Mobile Base Stations,"In this paper, the performance of scheduling algorithms exploiting the multiuser selection diversity is studied. The authors consider schedulers with affordable-rate transmission and adaptive transmission based on the absolute signal-to-noise ratio (SNR) and the normalized SNR. In contrast to previous studies on multiuser diversity systems, channel dynamics is taken into consideration in this paper by a novel formulation based on the level-crossing analysis of stochastic processes. Then, a connection is made between the Doppler frequency shift, which indicates the channel temporal correlation, and the average (channel) access time, the average waiting time between accesses, and the average access rate of active users. These properties are important for the scheduler design, especially for applications where delay is a concern. In addition, analytical expressions for the system throughput and the degree of fairness when users have nonidentical average channel conditions are presented.
These expressions quantify the effect of disparity in users' average channel conditions on the system performance",2007,0, 2502,Real-time low-complexity adaptive approach for enhanced QoS and error resilience in MPEG-2 video transport over RTP networks,"In this paper, the problems of redundancy allocation for providing effective error-resilience and service class distribution for enhanced quality of service (QoS) in real-time MPEG-2 video transport are addressed. A real-time low-complexity content-based adaptive error-resilient approach is proposed for the transport of MPEG-2 video streams, encapsulated using real-time transport protocol (RTP) and delivered over heterogeneous networks. An algorithm is derived using spatial and temporal properties of MPEG-2 video for assigning weights to each packet based on the estimated perceptual error. These weights, which indicate the relative importance of RTP packets, together with the communication channel characteristics are used to determine the allocation of resources for providing improved error-resilience and for assigning data packets to various classes of service in order to enhance the quality of transmission. Parameters extracted from the RTP header are used to determine the weights, so that the proposed algorithm can be implemented in real-time. This algorithm is used for adaptively allocating redundant forward error correction packets as well as for marking and forwarding of RTP packets in differentiated services (DiffServ). Simulation results are presented to show the significant improvement in performance based on our proposed approach to video transport.",2005,0, 2503,Traveling Wave Fault Location for Power Cables Based on Wavelet Transform,"In this paper, traveling wave fault location equipment for power cables is designed, and the characteristic waveforms of the cable fault point, with and without breakdown, are simulated respectively. Then a new traveling wave fault location method based on the wavelet transform is presented. The wavelet transform has good performance in denoising and singularity detection, which solves the difficulty in identifying the initial point of the reflected traveling wave thanks to its local time-frequency characteristic. The fault distance can be calculated from the round-trip time which the traveling wave spends in the cable. The only required parameter is the length of the cable. With this method, the fault location result is not influenced by changes in the propagation velocity of the traveling wave. The correctness and effectiveness of this method are analyzed by computer simulation. The obtained results show an acceptable degree of accuracy for fault location.",2007,0, 2504,Improving the performance of speech recognition systems using fault-tolerant techniques,"In this paper, the use of fault tolerant techniques is studied and tested in speech recognition systems to make these systems robust to noise. Recognizer redundancy is implemented to utilize the strengths of several recognition methods, each of which has acceptable performance in a specific condition. Duplication-with-comparison and NMR methods are tested with majority and plurality voting on a telephony Persian speech-enabled IVR system.
Results of evaluations present two promising outcomes: first, the approach improves performance considerably; second, it enables us to detect outputs with low confidence.",2008,0, 2505,Temporal error concealment for video transmission,"In this paper, we propose a temporal error concealment algorithm for video transmission in an error-prone environment. The error concealment algorithm employs an edge detection algorithm and progressive median motion vector concealment. First, the edges are detected and concealed portion by portion. Then, the corrupted MB is partitioned by the reconstructed edges and each partition is concealed individually by the progressive median motion vector. The proposed algorithm shows better performance in both objective and subjective quality than the existing temporal error concealment algorithm",2004,0, 2506,A Fast Analytical Approach to Multi-cycle Soft Error Rate Estimation of Sequential Circuits,"In this paper, we propose a very fast analytical approach to measure the overall circuit Soft Error Rate (SER) and to identify the most vulnerable gates and flip-flops. In the proposed approach, we first compute the error propagation probability from an error site to primary outputs as well as system bistables. Then, we perform a multi-cycle error propagation analysis in the sequential circuit. The results show that the proposed approach is four to five orders of magnitude faster than the Monte Carlo (MC) simulation-based fault injection approach with 92% accuracy. This makes the proposed approach applicable to industrial-scale circuits.",2010,0, 2507,Joint Adaptive Intra Refreshment and Unequally Error Protection Algorithms for Robust Transmission of H.264/AVC Video,"In this paper, we propose an efficient intra refreshment algorithm, which can achieve both the global optimization in determining the intra refreshing ratio and the local accuracy in selecting the proper macroblocks to be intra coded before the current frame is actually encoded. Furthermore, incorporated with our proposed intra refreshment algorithm, an effective UEP scheme based on a dynamic FMO mapping mode is also proposed in this paper to better protect the intra macroblocks in a frame. The experimental results show that our joint intra refreshment and UEP algorithm can remarkably improve the reconstructed video quality in the packet lossy network",2006,0, 2508,A novel fault diagnosis algorithm for K-connected distributed clusters,"In this paper, we propose an on-line two-phase (TPD) fault diagnosis algorithm for distributed clusters that follows an arbitrary network topology with connectivity k. Intermediate nodes communicate heartbeat messages between different source-destination pairs. The algorithm addresses a realistic fault model considering crash and value faults in the cluster nodes. The algorithm is shown to produce a time complexity of O(l) and a message complexity of O(n·e), respectively. The algorithm has been simulated using discrete event simulation techniques and the results show that the algorithm is feasible for large distributed clusters.",2010,0, 2509,On realization of fault-tolerant fuzzy controllers,"In this paper, we propose concurrent compensation for fuzzy controllers. The concurrent fault location is executed by observing each sum of two degrees of adjacent membership functions. Instead of the faulty degree in the antecedent part, we employ either 0 or the degree of the membership function adjacent to the faulty part, at an easily calculated abscissa.
To compensate for the faults in the consequent part, we shift several fuzzy variables, and infer with the membership functions representing the variables after the shifts. The amount by which the variables are shifted is determined systematically. Experimental results show that our method is valid for any non-redundant single stuck-at fault both in each of the antecedent parts and in the consequent part",2000,0, 2510,On the Effect of Fault Removal in Software Testing - Bayesian Reliability Estimation Approach,"In this paper, we propose some reliability estimation methods in software testing. The proposed methods are based on the familiar Bayesian statistics, and can be characterized by using test outcomes in input domain models. It is shown that the resulting approaches are capable of estimating software reliability in the case where the detected software faults are removed. In numerical examples, we compare the proposed methods with the existing method, and investigate the effect of fault removal on the reliability estimation in software testing. We show that the proposed methods can give more accurate estimates of software reliability",2006,0, 2511,Real-time model based sensor fault tolerant control system on a chip,"In this paper, we propose a model based sensor fault tolerant control system embedded in a generic PIC microcontroller for use in a temperature control system. The model based fault tolerant control algorithm is embedded in the microcontroller for stand-alone real-time implementation. The algorithm consists of a PID controller element (for nominal control) and a fault compensating element. Results from simulations and real-time experiments are shown to demonstrate the ease of real-time implementation.",2009,0, 2512,A partitioned linear minimum mean square estimator for error concealment [video decoder error concealment],"In this paper, we propose a partitioned linear minimum mean square error estimator (P-LMMSE) for error concealment. The proposed P-LMMSE estimator adopts the multi-hypothesis motion compensation (MHMC) technique to reconstruct the corrupted block, in which the lost blocks are predicted by a linear combination of motion compensated blocks (hypotheses). In our proposed P-LMMSE estimator, the weighting coefficients are optimal in the sense that they minimize the mean square error. In addition, our proposed estimator exploits the properties of the hypotheses to improve prediction accuracy. Each hypothesis has its own assumption and hence works well only in a particular situation, or equivalently, the statistics in different situations are not the same. Therefore, the dataset is divided into finer partitions and the weighting coefficient set in the most appropriate partition is selected to reconstruct the corrupted block.",2005,0, 2513,Error-robust Scalable Extension of H.264/AVC Ubiquitous Streaming Using the Adaptive Packet Interleaving Mechanism,"In this paper, we realize an error-robust scalable extension of H.264/AVC ubiquitous video streaming using an adaptive packet interleaving mechanism among heterogeneous networks and devices. The state-of-the-art SVC is used to provide combined scalability to adjust spatiotemporal resolutions and SNR quality. In an error-prone channel, a method of bandwidth estimation and a three-state network transition chain are proposed to adjust the transmission bit-rate and the interleaving window size, respectively.
Unlike past packet interleaving methods, the interleaving window size can be adjusted dynamically to trade off pre-processing delay (e.g., packet re-arrangement time for interleaving) against quality improvement. Additionally, with an understanding of the global SVC bitstream structure, an SVC coder is designed to increase the decoding efficiency and extract the proper SVC-quality bitstream in real time based on the network condition. In our experiments, the performance of the adaptive packet interleaving mechanism w.r.t. distinct interleaving window sizes and network conditions is demonstrated for distinct kinds of videos.",2009,0, 2514,Using Extended Letter-to-Sound Rules to Detect Pronunciation Errors Made by Chinese Learners of English,"In this paper, we use extended letter-to-sound rules for automatic mispronunciation detection, aiming at checking pronunciation errors made by Chinese learners of English. A knowledge-based approach is used to generate an extended pronunciation lexicon, which is incorporated into the HMM-based mispronunciation detection system. Pronunciation errors that lead to misunderstanding of a word are expected to be identified. The TIMIT text prompts are used to collect data from Chinese university students, and the test set includes a total of 1900 sentences. Experiments show that the F-measure is about 0.86 at the word level and about 0.91 at the phone level. The system shows a high degree of accuracy in classifying correct and erroneous pronunciation.",2010,0, 2515,Towards acceleration of fault simulation using Graphics Processing Units,"In this paper, we explore the implementation of fault simulation on a graphics processing unit (GPU). In particular, we implement a fault simulator that exploits thread level parallelism. Fault simulation is inherently parallelizable, and the large number of threads that can be computed in parallel on a GPU results in a natural fit for the problem of fault simulation. Our implementation fault-simulates all the gates in a particular level of a circuit, including good and faulty circuit simulations, for all patterns, in parallel. Since GPUs have an extremely large memory bandwidth, we implement each of our fault simulation threads (which execute in parallel with no data dependencies) using memory lookup. Fault injection is also done along with gate evaluation, with each thread using a different fault injection mask. All threads compute identical instructions, but on different data, as required by the Single Instruction Multiple Data (SIMD) programming semantics of the GPU. Our results, implemented on an NVIDIA GeForce GTX 8800 GPU card, indicate that our approach is on average 35 times faster when compared to a commercial fault simulation engine. With the recently announced Tesla GPU servers housing up to eight GPUs, our approach would be potentially 238 times faster. The correctness of the GPU based fault simulator has been verified by comparing its result with a CPU based fault simulator.",2008,0, 2516,Finite Automata Applied to a Classification of Fault in an Electric Power System,"In this paper, our objective is to apply the theory of finite automata to an electric power system (EPS), specifically when the EPS is in a fault condition (the eleven types). The importance of this study lies in examining the states of a fault from the computational angle of automata.
It is applied to obtain better protection algorithms for digital devices, in particular one adapted to the IEC 61850 standard",2006,0, 2517,The global fault-tolerance of interconnection networks,"In this paper, we introduce a new concept in fault-tolerance, namely the global fault-tolerance of interconnection networks. We pose the problem of characterizing the fault-tolerance of an interconnection network, modelled as an undirected unweighted graph, by a scalar, in a global manner. This can be achieved by defining an adequate metric. In this paper, we propose such a metric and we apply it in two comparative analyses: for three infinite families of minimum broadcast graphs (hypercubes, recursive circulants, and Knödel graphs), and for five families of hypercubic graphs (butterfly, wrapped butterfly, shuffle exchange, de Bruijn, and cube connected cycles)",2006,0, 2518,Managing Post-Development Fault Removal,"In this paper, we manage fault removal by classifying and prioritizing fault warnings reported by a static analysis tool. We present our findings from analyzing three cross-platform industrial code bases at Yahoo! totaling approximately 3.6+ MLOC. The tool found 1.2K potential fault warnings as follows: 52.29% true faults and 47.71% false/noise. The 52.29% correctly reported faults were prioritized based on severity. Additionally, we connected the tool classification to a standard software weakness schema, the Common Weakness Enumeration (CWE), to standardize discourse. The results from creating a management system for post-development fault removal are intended to be shifted back into earlier stages of software development.",2009,0, 2519,Globally optimal uneven error-protected packetization of scalable code streams,"In this paper, we present a family of new algorithms for rate-fidelity optimal packetization of scalable source bit streams with uneven error protection. In the most general setting where no assumption is made on the probability function of packet loss or on the rate-fidelity function of the scalable code stream, one of our algorithms can find the globally optimal solution to the problem in O(N²L²) time, compared to a previously obtained O(N³L²) complexity, where N is the number of packets and L is the packet payload size. If the rate-fidelity function of the input is convex, the time complexity can be reduced to O(NL²) for a class of erasure channels, including channels for which the probability function of losing n packets is monotonically decreasing in n and independent erasure channels with packet erasure rate no larger than N/(2(N + 1)). Furthermore, our O(NL²) algorithm for the convex case can be modified to find an approximate solution for the general case. All of our algorithms do away with the expediency of fractional bit allocation, a limitation of some existing algorithms.",2004,0, 2520,Sample-separation-margin based minimum classification error training of pattern classifiers with quadratic discriminant functions,"In this paper, we present a new approach to minimum classification error (MCE) training of pattern classifiers with quadratic discriminant functions. First, a so-called sample separation margin (SSM) is defined for each training sample and then used to define the misclassification measure in the MCE formulation. The computation of the SSM can be cast as a nonlinear constrained optimization problem and solved efficiently.
Experimental results on a large-scale isolated online handwritten Chinese character recognition task demonstrate that SSM-based MCE training not only decreases the empirical classification error, but also pushes the training samples away from the decision boundaries; therefore, good generalization is achieved. Compared with conventional MCE training, an additional 7% to 18% relative error rate reduction is observed in our experiments.",2010,0, 2521,On the design of fault-tolerant logical topologies in wavelength-routed packet networks,"In this paper, we present a new methodology for the design of fault-tolerant logical topologies in wavelength-routed optical networks supporting Internet protocol (IP) datagram flows. Our design approach generalizes the ""design protection"" concepts, and relies on the dynamic capabilities of IP to reroute datagrams when faults occur, thus achieving protection and restoration, and leading to high-performance cost-effective fault-tolerant logical topologies. In this paper, for the first time we consider resilience properties during the logical topology optimization process, thus extending the optimization of the network resilience also to the space of logical topologies. Numerical results clearly show that our approach outperforms previous ones, being able to obtain very effective survivable logical topologies with limited computational complexity.",2004,0, 2522,On Line Testing of Single Feedback Bridging Fault in Cluster Based FPGA by Using Asynchronous Element,"In this paper, we present a novel technique for online testing of feedback bridging faults in the interconnects of cluster-based FPGAs. The detection circuit is implemented using a BISTER configuration. We have configured the Block Under Test (BUT) with a pseudo-delay independent asynchronous element. Since we exploit the concept of an asynchronous element known as the Muller-C element in order to detect the fault, the fault has strongly delay-dependent properties due to variation of the feedback path delay. The Xilinx Jbits 3.0 API (Application Program Interface) is used to implement the BISTER structure in the FPGA. By using Jbits, we can dynamically reconfigure the device, in which the partial bit stream only affects part of the device. Compared to the traditional FPGA development tool (ISE), Jbits is faster at mapping a specific portion of the circuit to a specific tile. We also have more controllability over the utilization of internal resources of the FPGA, so that we can perform this partial reconfiguration.",2008,0, 2523,A genetically trained neural network application for fault finding in antenna arrays,"In this work, an extension of the previous Genetic Algorithm based Backpropagation Network (GA/BPN) approach is presented for finding the positions of defective elements in antenna arrays. The backpropagation network (BPN) takes samples of the radiation pattern of the array with faulty elements and maps them to the location of the faulty element in that array. The weights were extracted and optimized using a GA. The result of the conventional ANN procedure is compared with the genetically trained neural network approach. The developed methodology is tested for a linear array.
The developed network can be used at the base stations to find out the number and location of the faulty elements in the array on space platforms.",2009,0, 2524,A characterization of instruction-level error derating and its implications for error detection,"In this work, we characterize a significant source of software derating that we call instruction-level derating. Instruction-level derating encompasses the mechanisms by which computation on incorrect values can result in correct computation. We characterize the instruction-level derating that occurs in the SPEC CPU2000 INT benchmarks, classifying it (by source) into six categories: value comparison, sub-word operations, logical operations, overflow/precision, lucky loads, and dynamically-dead values. We also characterize the temporal nature of this derating, demonstrating that the effects of a fault persist in architectural state long after the last time they are referenced. Finally, we demonstrate how this characterization can be used to avoid unnecessary error recoveries (when a fault will be masked by software anyway) in the context of a dual modular redundant (DMR) architecture.",2008,0, 2525,On the efficiency of error concealment techniques in H.264/AVC coders,"In videoconferencing and video telephony applications operating in real time, a fluent transmission, even presenting visible errors, is often preferred over a correct but jerky transmission. This is the reason why error concealment techniques are adopted, within video codecs, to recover the transmission quality without affecting its fluency. In the framework of motion estimation based video codecs, like H.263 and H.264, error resilience facilities are made available in order to mitigate the effects of information loss during transmissions on packet networks. In this paper we focus on the adoption of error concealment techniques in H.264/AVC video coding, providing examples of both objective and subjective performance evaluation, when different algorithms are implemented at the decoder. Besides evaluating the hybrid concealment scheme already foreseen by the standard implementation, we also present a simple ""pure temporal"" replacement technique, which could be interesting for its good performance combined with a very low impact on the overall processing time",2005,0, 2526,"Impact of code orthogonality, power control error and source activity on the capacity of multicast transmissions in WCDMA","In WCDMA, data transfer to a group may either take place on multiple point-to-point (PTP) channels transmitted to individual group members separately or on a single point-to-multipoint (PTM) channel that is broadcast over the entire cell. We evaluate the capacity of both channel selection strategies for one-to-many multicast over the WCDMA air interface in terms of outage probability. We study the capacity of both schemes by first deriving suitable outage probability expressions and then calculating the overall capacity, i.e., the number of multicast users that can be supported by both schemes. Particular attention is paid to the impact of code orthogonality, power control error and the activity factor of the multicast source on the capacity of both schemes for multicast transmissions.",2004,0, 2527,Bit Error Correction without Redundant Data: a MAC Layer Technique for 802.11 Networks,"In wireless network monitoring, received packets such as MAC frames may contain bit errors.
Such errors either make the monitoring data less accurate, or force the monitoring software to drop errored frames to avoid such inaccuracy. We present a MAC layer based measurement technique for monitoring 802.11 networks. It can correct bit errors in certain MAC headers without the use of FEC or redundant data. This allows greater accuracy and/or extended range in network monitoring even when signal reception is less than ideal. This technique has several other applications. First, it provides better support for cross-layer protocols such as UDP-Lite, because a MAC frame with a corrupted destination MAC address would have been discarded in the OS stack even with UDP-Lite enabled. Second, the technique allows blind estimation of the bit error rate in the received MAC frame, so that a multimedia decoder can decide whether to perform soft-decoding on the corrupted media payload, or to rely on the previous media packet for error concealment. Finally, it facilitates the use of NAK frames, which could improve 802.11 MAC layer performance if such modifications are adopted. We evaluate our technique in an 802.11 testbed, quantifying its performance gain over a normal 802.11 monitoring application.",2006,0, 2528,Distributed Fault-Tolerant Topology Control in Static and Mobile Wireless Sensor Networks,"In wireless sensor networks, minimizing power consumption and at the same time maintaining desired properties in the network topology is of prime importance. In this work, we present a distributed algorithm for assigning the minimum possible power to all the nodes in the wireless sensor network, such that the network is K-connected. In this algorithm, a node collects the location and maximum power information from all the nodes in its vicinity, and then it adjusts the powers of the nodes in its vicinity in such a way that it can reach all the nodes in the vicinity through K optimal vertex-disjoint paths. We prove that, if each node maintains K optimal vertex-disjoint paths to all the nodes in its vicinity, then the resulting topology is globally K-connected, provided the topology obtained when all nodes transmit with their maximum power Gmax is K-connected. This topology control algorithm has been extended to the mobile scenario and the proof of connectivity in the mobile scenario has been presented. Simulation results show that significant power saving can be achieved by using this algorithm.",2007,0, 2529,Induction machines fault simulation based on FEM modelling,Inverter-operated induction machines usually present an efficiency decrease and sometimes additional rotor and stator faults. FEM has been used for faulty motor simulation and shows the effects of motor faults. Experimental results corroborate the simulated and theoretical effects.,2007,0, 2530,Synchronization and fault detection in autonomous robots,"In this study, we show how a group of robots can synchronize based on firefly-inspired flashing behavior and how dead robots can be detected by other robots. The algorithm is completely distributed. Each robot flashes by lighting up its on-board LEDs and neighboring robots are driven to flash in synchrony. Since robots that are suffering catastrophic failures do not flash periodically, they can be detected by operational robots.
On a real multi-robot system of 10 autonomous robots, we show how the group can correctly detect multiple faults, and that when given (simulated) repair capabilities, the group can survive a relatively high rate of failure.",2008,0, 2531,FPGA design on saturation correction in radar digital IF receiver,"In this work a customized digital hardware function unit is developed for saturation correction in weather radars. This unit acts to assure a wide dynamic range of the radar receiver, as an AGC unit does in analog circuitry. To meet the requirement for real time processing, an FPGA device is employed with a VHDL design. Several experiments were conducted and the results show that the receiver's dynamic range increases by more than 6 dB",2005,0, 2532,Prediction correction tractography through statistical tracking,"In this work we describe a novel approach to diffusion tractography, a notion common to a class of techniques based on diffusion MRI data aiming at tracking axonal pathways in the brain. Our approach, named Prediction-correction Diffusion-based Tractography (PDT), is based on Extended Kalman Filtering: at each step the local fiber orientation is estimated from its orientation in the previous step. This estimate is then corrected using an estimate of the local diffusivity through a principled model of fiber orientation. PDT has been implemented using a diffusion tensor (DTI) as the local diffusion model, but higher order models can be used as well. Results on both synthetic and in-vivo data are reported and discussed. PDT produces tractograms comparable to those obtained with the widely distributed tractography method provided in the FSL package [18], also in the case where crossing fibers are of relevance. From preliminary data, PDT proved superior when one fiber of low fractional anisotropy crosses a fiber with a higher fractional anisotropy, which is a critical condition for other tractography methods.",2008,0, 2533,Mesh Denoising Using Quadric Error Metric,"In this work we present a new method for mesh denoising that uses an operator based on the Quadric Error Metric. This operator is able to estimate the local shape of the surface for each vertex, despite severe noise conditions, distinguishing corners, edges and smooth regions in order to best adjust the vertex geometry to recover piecewise smoothness while preserving sharp features. Our method results in a simple algorithm for mesh denoising that can also be used to enhance sharp features present in a surface corrupted by noise. A frequency response analysis is also presented in order to evaluate the characteristics of this operator in the frequency spectrum of the mesh.",2010,0, 2534,Fixing Design Errors with Counterexamples and Resynthesis,"In this work we propose a new error-correction framework, called CoRe, which uses counterexamples, or bug traces, generated in verification to automatically correct errors in digital designs. CoRe is powered by two innovative resynthesis techniques, goal-directed search (GDS) and entropy-guided search (EGS), which modify the functionality of internal circuit nodes to match the desired specification. We evaluate our solution on designs and errors arising during combinational equivalence-checking, as well as simulation-based verification of digital systems.
Compared with previously proposed techniques, CoRe is more powerful in that: (1) it can fix a broader range of error types because it does not rely on specific error models; (2) it derives the correct functionality from simulation vectors, hence not requiring golden netlists; and (3) it can be applied to a range of verification flows, including formal and simulation-based.",2007,0,4304 2535,An adaptive filter design based on error estimation,"In this paper, we present a novel weight adjustment method for adaptive filters that is based on error estimation. The new adaptive filter's weight coefficient adjustment is not computed directly from the latest error value, but is estimated based on historical data. The proposed adaptive filter's structure is simple and can be widely applied. In the latter part of this paper, the stability of the new adaptive filter is proved. Simulation results show it has a fast convergence rate and better precision.",2004,0, 2536,Fault-Tolerant Bit-Parallel Multiplier for Polynomial Basis of GF(2^m),"In this paper, we present a novel fault-tolerant architecture for a bit-parallel polynomial basis multiplier over GF(2^m) which can correct erroneous outputs using a linear code. We have designed a parity prediction circuit based on the code generator polynomial that leads to lower space overhead. For bit-parallel architectures, the space overhead is about 11%. Moreover, there is only a marginal time overhead due to the incorporation of error-correction capability, which amounts to 3.5% in the case of the bit-parallel multiplier. Unlike the existing concurrent error correction (CEC) multipliers or triple modular redundancy (TMR) techniques for single error correction, the proposed architectures have multiple error-correcting capabilities.",2009,0, 2537,Low complexity error concealment scheme for intra-frames in H.264/AVC,"In this paper, we propose a low complexity spatial error concealment method for H.264/AVC coded video sequences. The proposed prediction modes error concealment (PMEC) scheme exploits the Intra prediction mode information from the coded bit-stream to provide an improved concealment performance by taking into account the edge considerations. The proposed scheme gives an improved performance in terms of both the PSNR and structural similarity quality index while having a reduced computational complexity relative to the existing schemes.",2009,0, 2538,A matlab/simulink tool for power converters teaching - a power factor correction approach,"In this paper, computer-aided teaching of power converters via Matlab/Simulink is presented, mainly concentrated on the power factor correction (PFC) topic to satisfy IEC 1000-3-2. Quality improvement of the input current to reduce current harmonics is demonstrated. Matlab/Simulink, suited to nonlinear state space models, is used to increase system performance by implementing and developing the controllers included in the toolbox, i.e. sliding mode control, fuzzy logic control, etc. The power converter teaching modules are classified into 3 topics: 1) switching mode rectifier using a single switch, 2) single phase AC-DC converter and 3) three phase AC-DC converter.
However, with the Matlab/Simulink application, not only can simulation time be reduced but convergence problems can also be solved.",2003,0, 2539,Effect of channel estimation error onto the BER performance of PSAM-OFDM in Rayleigh fading,"In this paper, the analysis focuses on the influence on BER performance in Rayleigh fading propagation environments which results from the channel estimation error of pilot symbol assisted modulation (PSAM) in orthogonal frequency division multiplexing (OFDM) systems. This paper first characterizes the distribution of the amplitude and phase estimates using PSAM, and the formula for BER as a function of channel correlation and the interpolation filter in time and frequency is given. Interchannel interference due to Doppler effects is also taken into account. Theoretical and simulation results show that channel estimation error leads to a 1-dB degradation in average signal-to-noise ratio for the parameters considered.",2003,0, 2540,Analysis and methodology for multiple-fault diagnosis,"In this paper, we propose a multiple-fault-diagnosis methodology based on the analysis of failing patterns and the structure of diagnosed circuits. We do not consider the multiple-fault behavior explicitly, but rather partition the failing outputs and use an incremental simulation-based technique to diagnose failures one at a time. Our methodology can be further improved by selecting appropriate diagnostic test patterns. The n-detection tests allow us to apply a simple single-fault-based diagnostic algorithm, and yet achieve good diagnosability for multiple faults. Experimental results demonstrate that our technique is highly efficient and effective. It has an approximately linear time complexity with respect to the fault multiplicity and achieves a high diagnostic resolution for multiple faults. Real manufactured industrial chips affected by multiple faults can be diagnosed in minutes of central processing unit (CPU) time.",2006,0, 2541,A study of flight-critical computer system recovery from space radiation-induced error,"It is well known that space radiation, containing energetic particles such as protons and ions, can cause anomalies in digital avionics onboard satellites, spacecraft and aerial vehicles flying at high altitude. Semiconductor devices embedded in these applications become more sensitive to space radiation as the features are shrunk in size. One of the adverse effects of space radiation on avionics is a transient error known as single event upset (SEU). Given that it is caused by bit-flips in computer memory, SEU does not result in a damaged device. However, the SEU induced data error propagates through the run-time operational flight program, causing erroneous outputs from a flight-critical computer system. This study was motivated by a need for finding a cost-effective solution to keep flight-critical computers functioning after SEU occurs. The result of the study presents an approach to recover flight-critical computer systems from SEU induced error by using an identity observer array. The identity observers replicate the state data of the controller in distinct data partitions. The faulty controller can be recovered by replacing the data image of the faulty data partition with that of the healthy data partition. The methodology of applying such an approach from the fault tolerant control perspective is presented.
The approach is currently being tested via computer simulation",2001,0,7898 2542,Using Duplication with Compare for On-line Error Detection in FPGA-based Designs,"It is well known that SRAM-based FPGAs are susceptible to single-event upsets (SEUs) in radiation environments. A variety of mitigation strategies have been demonstrated to provide appropriate mitigation and correction of SEUs in these environments. While full mitigation of SEUs is appropriate for some situations, some systems may tolerate SEUs as long as these upsets are detected quickly and correctly. These systems require effective error detection techniques rather than costly error correction methods. This work leverages a well-known error detection technique for FPGAs called duplication with compare (DWC). This technique has been shown to be very effective at quickly and accurately detecting SEUs using fault injection and radiation testing.",2008,0, 2543,Susceptibility of commodity systems and software to memory soft errors,"It is widely understood that most system downtime is accounted for by programming errors and administration time. However, a growing body of work has indicated that an increasing cause of downtime may stem from transient errors in computer system hardware due to external factors, such as cosmic rays. This work indicates that moving to denser semiconductor technologies at lower voltages has the potential to increase these transient errors. In this paper, we investigate the susceptibility of commodity operating systems and applications on commodity PC processors to these soft errors and we introduce ideas regarding improved recovery from these transient errors in software. Our results indicate that, for the Linux kernel and a Java virtual machine running sample workloads, many errors are not activated, mostly due to overwriting. In addition, given current and upcoming microprocessor support, our results indicate that those errors activated, which would normally lead to system reboot, need not be fatal to the system if software knowledge is used for simple software recovery. Together, they indicate the benefits of simple memory soft error recovery handling in commodity processors and software.",2004,0, 2544,Fault-Tolerant Discrete Dynamical Systems Over Finite Ring,"A general method of fault-tolerant synthesis for implementations of information-lossless dynamical systems over a finite ring is worked out, based on the application of error control codes. Corresponding self-checking systems are designed, and the complexity and some basic characteristics of the designed implementations are characterized.",2007,0, 2545,Self-organizing and fault-tolerant behaviors approach in bio-inspired hardware redundant network structures,"It is well known that biological organisms offer the ability to grow with fault-tolerance and self-organization behaviors. By adapting basic properties and capabilities from nature, scientific approaches have helped researchers understand related phenomena and the associated principles to engineer complex novel digital systems and improve their capability. Founded on these observations, the paper focuses on modeling and simulating artificial embryonic structures, with the purpose of developing VLSI hardware architectures able to imitate the operation mode of cells or organisms, with robustness similar to that of their biological equivalents in nature. The implementation of self-healing algorithms and artificial immune properties is investigated and tested on the developed models.
The presented theoretical and simulation approaches were tested on an FPGA-based embryonic network architecture (embryonic machine), built with the purpose of implementing on silicon the fault-tolerant and survival properties of living organisms.",2010,0, 2546,Statistical modeling of the geometric error in cardiac electrical imaging,"The Kalman filter approach provides a natural way to include the spatio-temporal prior information in cardiac electrical imaging. This study focuses on the performance of the Kalman filter approach with geometric errors present in the inverse electrocardiography (ECG) problem. The geometric errors considered here are the wrong determination of the heart's size and location. In addition to Kalman filtering, we also compare the performances of Tikhonov regularization and Bayesian MAP estimation when geometric errors are present. After presenting the effects of geometric errors on the solutions, a possible model to reduce the effects of the geometric errors in the inverse ECG problem for the Bayes-MAP and Kalman solutions is studied. For this purpose, a method suggested by Heino et al. to overcome modeling errors in inverse problem solutions is modified and its effectiveness for the inverse ECG problem is shown. Here the main idea is to treat geometric errors as additive noise and add them to the covariance matrices used in the algorithms. To the best of our knowledge, this is the first study in which it has been applied to the inverse problem of ECG.",2009,0, 2547,Exploring the Relationship of a File's History and Its Fault-Proneness: An Empirical Study,"Knowing which particular characteristics of software are indicators for defects is very valuable for testers in order to allocate testing resources appropriately. In this paper, we present the results of an empirical study exploring the relationship between history characteristics of files and their defect count. We analyzed nine open source Java projects across different versions in order to answer the following questions: 1) Do past defects correlate with a file's current defect count? 2) Do late changes correlate with a file's defect count? 3) Is the file's age a good indicator for its defect count? The results are partly surprising. Only 4 of 9 programs show moderate correlation between a file's defects in previous and in current releases in more than half of the analysed releases. In contrast to our expectations, the oldest files represent the most fault-prone files. Additionally, late changes influence a file's defect count only partly.",2008,0, 2548,A fast and fault-tolerant convex combination fusion algorithm under unknown cross-correlation,"Knowledge of the cross-correlation of errors of local estimates is needed for many distributed fusion algorithms. However, in a fully distributed system or decentralized network, the calculation of cross-correlation between local estimates is quite involved and may be impractical. The covariance intersection (CI) algorithm has been proposed under unknown correlation. But the CI algorithm has high computational complexity because it requires optimization of a nonlinear cost function. This paper presents a fast CI algorithm, and an alternative optimization criterion with a closed form solution.
Based on this criterion, a fast and fault-tolerant convex combination fusion algorithm is presented by introducing an adaptive parameter, which can obtain a robust estimate when the estimates to be fused are inconsistent with each other; the degree of robustness of the fusion result varies with the degree of inconsistency between the estimates to be fused.",2009,0, 2549,On-Line Reconfigurable XGFT Network-on-Chip Designed for Improving the Fault-Tolerance and Manufacturability of the MPSoC Chips,"Large System-on-Chip (SoC) circuits will contain an increasing number of processors which will communicate with each other across Networks-on-Chip (NOC). Faulty processors could be replaced with faultless ones, whereas only a single defect in the NOC can make the whole chip unusable. Therefore, the fault-tolerance of the NOC is a crucial component of the fault-tolerance and manufacturability of the SoCs. This paper presents a fault-tolerant extended generalized fat tree (XGFT) NOC developed for future multi-processor SoCs (MPSoC). Its fault-tolerance is improved with a new version of the fault-diagnosis-and-repair (FDAR) system, which makes it possible to diagnose and repair the NOC on-line. It detects static, dynamic and transient faults which block packets or produce bit errors, and reconfigures the faulty switches to operate correctly. Processors can also use it for reconfiguring the faulty switch nodes after the faults are located with other test methods. Simulation and synthesis results show that slightly defective XGFTs are able to achieve good performance after they are repaired with the FDAR, while the costs of the FDAR remain tolerable",2006,0, 2550,Improving fault-tolerance of distributed multi-agent systems with mobile network-management agents,"Large-scale agent-based software solutions need to be able to assure constant delivery of services to end-users, regardless of underlying software or hardware failures. Fault-tolerance of multi-agent systems is, therefore, an important issue. We present an easy and flexible way of introducing fault-tolerance to existing agent frameworks. The approach is based on two new types of mobile agents that manage efficient construction and maintenance of fault-tolerant multi-agent system networks, and implement a robust agent tracking technique.",2010,0, 2551,Secure Byzantine Fault Tolerant LDAP System,"LDAP is a set of protocols for accessing information directories which provides data integrity and authentication. It takes attacks on clients, the Internet and benign attacks on servers into account, but malicious attacks on servers and software errors are rarely involved. In this paper, a security-aware Byzantine fault tolerant LDAP system is proposed, which can tolerate malicious faults occurring in the servers. By using a new Byzantine-fault-tolerant algorithm, the proposed LDAP system guarantees safety and liveness properties assuming no more than f replicas are faulty while it consists of 3f+1 tightly coupled servers. Owing to a series of optimizations, the system not only provides a much higher degree of security and reliability but is also practical",2006,0, 2552,Enhanced detection of electrode placement/connection errors,Lead connection and electrode positional errors are a common problem in ECG recording.
This study set out to review the sensitivity and specificity of existing criteria in the Glasgow program using an older (1997) version of the software and to produce enhancements where required for incorporation into the current version of the program still in development. 50 volunteers were recruited to the study. Arm and leg lead connection errors were introduced as were V1/V2 and V2/V3 connection reversals. It was shown that detection of arm lead connection errors could be enhanced from 64% to 88% at 100% specificity. Chest lead misconnections were detected with improved sensitivity. V1 and V2 reversal was much more easily detected than V2 and V3 reversal while maintaining high specificity.,2008,0, 2553,Detection of errors in case hardening processes brought on by cooling lubricant residues,"The life cycle of case hardened steel work pieces depends on the quality of hardening. A large influencing factor on the quality of hardening is the cleanliness of the work pieces. In manufacturing, a large amount of auxiliary materials such as cooling lubricants and drawing compounds is used to ensure correct execution of cutting and forming processes. In particular, the residues of cooling lubricants are carried into subsequent processes on the surfaces of the machined parts. Stable and controlled conditions cannot be guaranteed for these subsequent processes as the residues' influence on the process performance is insufficiently known, leading to high uncertainty and consequently a high expense factor. Therefore, information is needed about the type and amount of contamination. In practice the influence of these cooling lubricants on case hardening steels is a well-known phenomenon, but the correlation between residue volume and resulting hardness is not known. A short overview of the techniques to detect cooling lubricant residues will be given in this paper and a method to detect the influence of the residues on the hardening process of case hardening steels will be shown. An example will be given for case hardening steel 16MnCr5 (1.7131). The medium of contamination is ARAL SAROL 470 EP.",2003,0, 2554,Distinguish between lightning strikes and faults using wavelet-multi resolution signal decomposition,"Lightning strokes will cause high frequency transients, which may result in an incorrect response by protective relays operating on the measurement of fault-generated high frequency transients. The paper presents a new approach to distinguish lightning strokes from faults using a wavelet-multiresolution signal decomposition technique. Based on the energy distribution features, a criterion is given to discriminate between the two kinds of disturbances. Extensive simulation studies prove the presented approach feasible.",2004,0, 2555,Study of a novel fault current limiter on the basis of high speed switch and triggered vacuum switch,Limiting the short-circuit current is of great significance for the safe and steady running of the power system. A novel fault current limiter (FCL) is presented. A high speed switch is used to rapidly break the short-circuit current. A triggered vacuum switch (TVS) is used to discharge a precharged pulse capacitor so as to quench the arc in the high speed switch quickly and to realize the active transfer of the short-circuit current. A resistor is used as the current limiting component. The parameters of an FCL at the 10 kV voltage level are designed. Simulations of the FCL in an oscillating circuit are performed by means of the ATP software.
Experiments on an FCL sample device are also carried out in the oscillating circuit. The experimental results are in agreement with the simulation results. The current transfer characteristics and the current limiting effect are verified.,2010,0, 2556,Load cell response correction using analog adaptive techniques,"Load cell response correction can be used to speed up the process of measurement. This paper investigates the application of analog adaptive techniques in load cell response correction. The load cell is a sensor with an oscillatory output in which the measurand contributes to the response parameters. Thus, a compensation filter needs to track variation in the measurand, whereas a simple, fixed filter is only valid at one load value. To facilitate this investigation, computer models for the load cell and the adaptive compensation filter have been developed and implemented in PSpice. Simulation results are presented demonstrating the effectiveness of the proposed compensation technique.",2003,0, 2557,Current leakage fault localization using backside OBIRCH,"Localization of current leakage faults in modern ICs is a major challenge in failure analysis. To deal with this issue, several techniques such as liquid crystal thermography and emission microscopy can be used. However, traditional front-side failure analysis techniques are unable to localize faults obscured by several metal layers. This trend, as well as the appearance of new packaging technologies, has driven alternative approaches from the backside of the die. Of the infrared light optical techniques, the optical beam induced resistance change (OBIRCH) technique has been shown to be very promising for locating current leakage type faults (Barton et al, 1999; Nikawa et al, 1999). In this paper, a backside failure analysis case study on four-level interconnection BICMOS ICs is presented. Different front side defect localization approaches such as liquid crystal were tried, but none worked since interconnection layers obscured the fault. Backside emission microscopy also failed due to the resistive nature of the defect. Only the OBIRCH technique could quickly and precisely localize the defect causing current leakage from the backside of the die",2001,0, 2558,Geometrical model to drive vision systems with error propagation,"Localization with respect to a reference model is a key feature for mobile robots. Urban environments offer numerous landmarks that can be used for the localization process. This paper deals with the use of an environment model stored in a geographic information system to drive a vision system, i.e., to highlight what to look for and where to look. This task is achieved by propagating uncertainties along the image acquisition system to highlight regions of interest in the image.",2004,0, 2559,Effective Fault Localization using Code Coverage,"Localizing a bug in a program can be a complex and time-consuming process. In this paper we propose a code coverage-based fault localization method to prioritize suspicious code in terms of its likelihood of containing program bugs. Code with a higher risk should be examined before that with a lower risk, as the former is more suspicious (i.e., more likely to contain program bugs) than the latter. We also answer a very important question: how can each additional test case that executes the program successfully help locate program bugs?
We propose that with respect to a piece of code, the aid introduced by the first successful test that executes it in computing its likelihood of containing a bug is larger than or equal to that of the second successful test that executes it, which is larger than or equal to that of the third successful test that executes it, etc. A tool, chiDebug, was implemented to automate the computation of the risk of the code and the subsequent prioritization of suspicious code for locating program bugs. A case study using the Siemens suite was also conducted. Data collected from our study support the proposal described above. They also indicate that our method (in particular Heuristics III (c), (d), and (e)) can effectively reduce the search domain for locating program bugs.",2007,0, 2560,Logic soft errors in sub-65nm technologies design and CAD challenges,"Logic soft errors are radiation induced transient errors in sequential elements (flip-flops and latches) and combinational logic. Robust enterprise platforms in sub-65nm technologies require designs with built-in logic soft error protection. Effective logic soft error protection requires solutions to the following three problems: (1) accurate soft error rate estimation for combinational logic networks; (2) automated estimation of system effects of logic soft errors, and identification of regions in a design that must be protected; and (3) new cost-effective techniques for logic soft error protection, because classical fault-tolerance techniques are very expensive.",2005,0, 2561,Maintaining Consistency between Loosely Coupled Services in the Presence of Timing Constraints and Validation Errors,"Loose coupling is often cited as a defining characteristic of service-oriented architectures. Interactions between services take place via messages in an asynchronous environment where communication and processing delays can be unpredictable; further, interacting parties are not required to be on-line at the same time. Despite loose coupling, many service interactions have timing and validation constraints. For example, business interactions that take place using RosettaNet partner interface processes (PIPs) such as request price and availability, request purchase order, notify of invoice, etc. have to meet several timing and message validation constraints. A failure to deliver a valid message within its time constraint could cause mutually conflicting views of an interaction. For example, one party can regard it as timely whilst the other party regards it as untimely, leading to application level inconsistencies. The paper describes how business interactions, such as PIPs, can be wrapped by simple handshake synchronisation protocols to provide bilateral consistency, thereby simplifying the task of coordinating peer-to-peer business processes",2006,0, 2562,"The $100,000 Keying Error","Losing $100K hurts, but other input mistakes can cost much more.",2008,0, 2563,LA-RDO based error resilient coding using multi-layer Lagrange multiplier selection in SVC,"Loss-aware rate distortion optimized (LA-RDO) MB mode decision has been introduced in scalable video coding (SVC) to minimize the quality degradation impact of transmission errors. It aims to optimize rate-distortion performance considering transmission errors and the resulting error propagation. However, the Lagrange multiplier (λ) used in LA-RDO based mode decision does not consider the correlations between layers. Consequently, λ is not optimal for the multi-layer scenario in error resilient coding.
In this paper, a modified λ selection method in the multi-layer scenario, for mode decision of LA-RDO based error resilient coding, is proposed. Simulation results show that a significant PSNR gain is achieved for various tested sequences compared to JSVM 9.8 with quality and spatial scalabilities for IPPP coding, as well as for hierarchical B picture coding.",2010,0, 2564,Lowering Error Floor of LDPC Codes Using a Joint Row-Column Decoding Algorithm,"Low-density parity-check codes using the belief-propagation decoding algorithm tend to exhibit a high error floor in the bit error rate curves when some problematic graphical structures, such as the so-called trapping sets, exist in the corresponding Tanner graph. This paper presents a joint row-column decoding algorithm to lower the error floor, in which the column processing is combined with the processing of each row. By gradually updating the pseudo-posterior probabilities of all bit nodes, the proposed algorithm minimizes the propagation of erroneous information from trapping sets into the whole graph. The simulation indicates that the proposed joint decoding algorithm improves the performance in the waterfall region and lowers the error floor. Implementation results on field programmable gate array (FPGA) devices indicate that the proposed joint decoder increases the decoding speed by a factor of eight, compared to the traditional decoder.",2007,0, 2565,Ground Fault Location in Low-Voltage High-Resistance Grounded Systems via the Single-Processor Concept for Circuit Protection,"Low-voltage resistance-grounded systems may provide significant advantages to the facility in terms of system reliability and safety. However, maintenance of these systems is more complex than for solidly grounded wye systems, and improper first-fault isolation and lack of timely repair may present a risk to the facility. Identifying the location of ground faults within the distribution system is the main problem. We describe a new way to detect which feeder has been faulted in a lineup of low-voltage switchgear protected via the ""single-processor concept for protection and control of circuit breakers in low-voltage switchgear"", M.E. Valdes (2004). We describe the detection methodology and how the potential sources of error are addressed",2006,0, 2566,Vibration faults simulation system (VFSS): a system for teaching and training on fault detection and diagnosis,"Machine condition monitoring is vital in many plants as any shutdown can lead to both material and financial losses. Since vibration fault signals and their causes are important for fault detection and diagnosis, a vibration faults simulation system (VFSS) is developed to gain a good understanding of vibration fault signals. This paper is aimed at simulating and analyzing vibration fault signals, which can be useful for teaching and training purposes, especially for companies dealing with predictive maintenance. To achieve this, a vibration faults simulation rig (VFSR) is designed and developed to simulate and study the most common vibration fault signals encountered in rotating machines. A LabVIEW-based data acquisition system (DAS) is used to process the fault signals. 
The complete system was developed and tested, and the fault signals were compared with normal signals so as to ascertain the condition of the machine under investigation.",2002,0, 2567,Multi-valued logic and its application in the fault diagnosis of the sensors of magnetic bearings,"Magnetic bearings are accurate and complex electromechanical devices. The sensors of a magnetic bearing are important to the bearing. Only when the sensors work normally can the rotor of the magnetic bearing rotate stably. Since the sensors of a magnetic bearing are invisible in the device with very high positioning precision, the traditional ways of sensor fault diagnosis are difficult to apply in magnetic bearings. Considering the characteristics of the sensors of magnetic bearings, a multi-valued logic algebra method based on sequential variables is presented for the fault diagnosis of the sensors of magnetic bearings. The definition of the logic algebra is given, and some theorems are proved, which are usually used in deduction. According to the presented method, the corresponding experiments should be done, in which only the states of the magnetic bearings are measured by processing the signals from the sensors. The states of the sensors can be deduced with the multi-valued logic algebra based on sequential variables. It is significant to know the states of the sensors before fixing or compensating the sensors.",2004,0, 2568,Efficient method for correction and interpolation signal of magnetic encoders,"Magnetic encoders are widely used for speed or position measurement. This research presents a suitable method to correct the quadrature signal from a magnetic sensor. A new quadrature all-digital phase-locked loop (QADPLL) method is presented. The method minimizes the effect of amplitude imbalance, noise, phase shift, and signal offsets. It can also solve waveform distortion and time-lag problems. Moreover, this paper proposes an interpolation technique to improve the accuracy of position information. By deriving the high-order signal from a sinusoidal signal, a high-resolution position can be obtained from a low-resolution encoder. Simulation and experiment on a linear motor were conducted. The results verify the performance of the proposed methods.",2008,0, 2569,Real-time fault tolerant control of a Reverse Osmosis desalination plant based on a hybrid system approach,"Many applications of reverse osmosis desalination plants (RO plants) require a fault tolerant system, in particular when human life depends on the availability of the plant for producing fresh water. However, RO plants are little studied from the control engineering point of view: modeling, design of control algorithms and real-time experiments are scarcely reported in the literature. The present work is a study on a real RO plant in order to discover possible faults, to analyze potential methods for fault-tolerant control (FTC) and the real-time experimentation. In order to implement model based control, the plant is identified at several operating points. Model predictive control (MPC) is used as the control law and a hybrid supervisor is proposed to combine different methods, which perform better for different kinds of faults. Satisfactory results are obtained for the real-time operation.",2009,0, 2570,Prior Training of Data Mining System for Fault Detection,"Many approaches have been applied to fault discovery in complex systems. Model-based reasoning, data mining analysis, and rule-based methods are a few among those approaches. 
To be successfully applied, these approaches all have to have some knowledge about the system prior to fault detection during the system run. Fault Tree Analysis shows the possible causes of a system malfunction by enumerating the suspect components and their respective failure modes that may have induced the problem. Rule-based inference builds the model based on expert knowledge. Those models and methods have one thing in common: they presume some preconditions. Complex systems often use fault trees to analyze the faults. Fault diagnosis, when an error occurs, is performed by engineers and analysts performing extensive examination of all data gathered during the mission. The International Space Station (ISS) control center operates on the data feedback from the system, and decisions are made based on threshold values by using fault trees. Since those decision-making tasks are safety critical and must be done promptly, the engineers who manually analyze the data face the challenge of time limits. To automate this process, this paper presents an approach that uses decision trees to discover faults from data in real-time, capture the contents of fault trees as prior knowledge, and use them to set the initial state of the decision trees.",2007,0, 2571,High Performance Computational Grids Fault Tolerance at System Level,"Many complex scientific and mathematical applications require a long time to complete. To deal with this issue, parallelization is commonly used. Distributing an application onto several machines is one of the key aspects of grid-computing. This paper focuses on a checkpoint/restart mechanism used to overcome the problem of job suspension at a failed node in a computational Grid. The ability to checkpoint a running application and restart it later can provide many useful benefits including fault recovery by rolling back an application to a previous checkpoint, advanced resource sharing, better application response time by restarting applications from checkpoints instead of from scratch, improved system utilization, efficient high performance computing and improved service availability.",2008,0, 2572,R-BATCH: Task Partitioning for Fault-tolerant Multiprocessor Real-Time Systems,"Many emerging embedded real-time applications such as SCADA (Supervisory Control and Data Acquisition), autonomous vehicles and advanced avionics require a high degree of dependability. Dealing with tasks having both hard real-time requirements and high reliability constraints is a key challenge faced in such systems. This paper addresses the problem of guaranteeing reliability requirements with bounded recovery times on fail-stop processors in fault-tolerant multiprocessor real-time systems. We classify tasks based on their recovery-time requirements into (i) Hard Recovery, (ii) Soft Recovery, and (iii) Best-Effort Recovery tasks. Then, the notion of a Hot Standby for Hard Recovery tasks along with a Cold Standby for Soft Recovery and Best-Effort Recovery tasks is introduced. In order to maximize the benefits of using a Hot Standby, replicas should not be co-located on the same processor. For this purpose, we propose a task allocation algorithm for Hot Standby replicas called R-BFD (Reliable Best-Fit Decreasing) that uses 37% fewer processors than BFD-P (Best-Fit Decreasing augmented with placement constraints). 
For tasks with more relaxed recovery-time constraints, however, additional optimization can be applied by using a Cold Standby that gets activated only when failures occur. Given a system reliability requirement and hence a maximum number of processor failures to tolerate, the required resource overprovisioning for Cold Standby replicas from multiple processors can be consolidated. An algorithm called R-BATCH (Reliable Bin-packing Algorithm for Tasks with Cold standby and Hot standby) reduces the required number of processors by up to 45% compared to the R-BFD-based pure Hot Standby replication technique.",2010,0, 2573,"Self-routing, reconfigurable and fault-tolerant cell array","Many examples of reconfigurable fault-tolerant hardware consist of cells that have the same hardware structure. The arithmetic or logical functions of the cells can be configured. The interconnection of the configured cells can perform a complex task. The interconnection of these cells can be configured too. When some cells are faulty, the function and routing of some cells have to be reconfigured to restore the system's normal function. Most published fault-tolerant schemes cannot automatically achieve fault tolerance without the aid of external software and hardware, or they will require a long routing time to restore normal system function. Some published fault-tolerant schemes require a massive number of spare cells. A self-routing, reconfigurable and fault-tolerant cell array is presented. When some cells are faulty, spare cells can automatically replace the faulty cells and rerouting can be automatically achieved. The cell array automatically achieves fault tolerance without the aid of external software or hardware, using a small number of spare cells. In addition, the proposed cell array achieves a short routing time to quickly restore normal system function.",2008,0, 2574,A method for the evaluation of behavioral fault models,"Many fault models have been proposed which attempt to capture design errors in behavioral descriptions, but these fault models have never been quantitatively evaluated. The essential question which must be answered about any fault model is, ""If all faults in this model are detected, is the design guaranteed to be correct?"" In this paper we present a method to examine the degree to which an arbitrary fault model can ensure the detection of all design errors. The method involves comparing fault coverage to error coverage as defined by a practical design error model which we describe. We have employed our method to perform a limited analysis of the statement and branch coverage fault models.",2003,0, 2575,Fault Tolerance and Scaling in e-Science Cloud Applications: Observations from the Continuing Development of MODISAzure,"It can be natural to believe that many of the traditional issues of scale have been eliminated or at least greatly reduced via cloud computing. That is, if one can create a seemingly well-functioning cloud application that operates correctly on small or moderate-sized problems, then the very nature of cloud programming abstractions means that the same application will run as well on potentially significantly larger problems. In this paper, we present our experiences taking MODISAzure, our satellite data processing system built on the Windows Azure cloud computing platform, from the proof-of-concept stage to a point of being able to run on significantly larger problem sizes (e.g., from national-scale data sizes to global-scale data sizes). 
To our knowledge, this is the longest-running eScience application on the nascent Windows Azure platform. We found that while many infrastructure-level issues were thankfully masked from us by the cloud infrastructure, it was valuable to design additional redundancy and fault-tolerance capabilities such as transparent idempotent task retry and logging to support debugging of user code encountering unanticipated data issues. Further, we found that using a commercial cloud means anticipating inconsistent performance and black-box behavior of virtualized compute instances, as well as leveraging changing platform capabilities over time. We believe that the experiences presented in this paper can help future eScience cloud application developers on Windows Azure and other commercial cloud providers.",2010,0, 2576,"Correction Technique for Cascade Gammas in I-124 Imaging on a Fully-3D, Time-of-Flight PET Scanner","It has been shown that I-124 PET imaging can be used for accurate dose estimation in radio-immunotherapy techniques. However, I-124 is not a pure positron emitter, leading to two types of coincidence events not typically encountered: increased random coincidences due to non-annihilation cascade photons, and true coincidences between an annihilation photon and primarily a coincident 602 keV cascade gamma (true coincidence gamma-ray background). The increased random coincidences are accurately estimated by the delayed window technique. Here we evaluate the radial and time distributions of the true coincidence gamma-ray background in order to correct and accurately estimate lesion uptake for I-124 imaging in a time-of-flight (TOF) PET scanner. We performed measurements using a line source of activity placed in air and a water-filled cylinder, using F-18 and I-124 radio-isotopes. Our results show that the true coincidence gamma-ray backgrounds in I-124 have a uniform radial distribution, while the time distribution is similar to the scattered annihilation coincidences. As a result, we implemented a TOF-extended single scatter simulation algorithm with a uniform radial offset in the tail-fitting procedure for accurate correction of TOF data in I-124 imaging. Imaging results show that the contrast recovery for large spheres in a uniform activity background is similar in F-18 and I-124 imaging. There is some degradation in contrast recovery for small spheres in I-124, which is explained by the increased positron range and reduced spatial resolution of I-124 compared to F-18. Our results show that it is possible to perform accurate TOF based corrections for I-124 imaging.",2009,0, 2577,Test generation and fault localization for quantum circuits,"It is believed that quantum computing will begin to have a practical impact in industry around the year 2010. We propose an approach to test generation and fault localization for a wide category of fault models. While in general we follow the methods used in testing standard circuits, there are two significant differences: (1) we use both deterministic and probabilistic tests to detect faults; (2) we use special measurement gates to determine the internal states. A fault table is created that includes probabilistic information. 
""Probabilistic set covering"" and ""probabilistic adaptive trees"" that generalize those known in standard circuits, are next used.",2005,0, 2578,Analysis on fault voltage and secondary arc current of single phase refusing-shut of the 500kV extra high voltage transmission line,"It is common knowledge that 500 kV extra high voltage and long distant transmission line join a shunt reactor and a neutral grounding via small reactor; This paper analysis systematically an possible condition of the frequency-regulating resonance over-voltage on single phase cut fault to refusing-shut of the 500 kV extra high voltage transmission line which join a shunt reactor, the system compose an complex series resonance circuits, and present a rational mode of reactive compensation. This paper also build rational mathematic mode on systemic parameter of 500 kV ci-yong transmission line, and resolute detailedly its power frequency component, low frequency component and its DC component of single phase cut fault voltage and secondary arc current by the mean of Laplacian transformation ruling formula. All the this is to offer an farther analysis on switching over-voltage and secondary arc current interrupter of long distant transmission line. In the end, this system also implemented using MATLAB software, compute the transient process on single phase cut fault voltage and secondary arc current.",2009,0, 2579,Dark Frame Correction Via Bayesian Estimator in the Wavelet Domain,"It is generally known that every astronomical image which was acquired by a CCD sensor, have to be corrected by the dark frame. The dark frame maps the dark current of the CCD. If we don't have the dark frame, we cannot directly correct the astronomical images. This work deals with dark frame correction based on Bayesian estimator in the wavelet domain. The models of the marginal probability density function (PDF) of the wavelet coefficients of astronomical images and dark frame images based on generalized Laplacian is used by this estimator. The parameters of the models, which were mentioned above, were estimated by the least square error method on the set of images from our image database. The correction of the astronomical images by dark frame is better than the Bayesian estimator, but further work deals with more sophisticated Bayesian estimators with more robust statistical description of the images",2006,0, 2580,A novel intensity based optical proximity correction algorithm with speedup in lithography simulation,"It is important to reduce the optical proximity correction (OPC) runtime while maintaining a good result quality. In this paper, we obtain a better formula, which theoretically speeds up the widely used method, optimal coherent approximations (OCA's), by a factor of 2times. We speed up the OPC algorithm further by making it intensity based (IB-OPC), because it requires much less intensity simulations than the conventional edge placement error (EPE) based OPC algorithms. In addition, the IB-OPC algorithm, which uses the efficiently computed sensitivity information, converges faster than the EPE based OPC. Our IB-OPC experimental results show a runtime speedup of up to 15times with a comparable result quality as of the EPE based OPC.",2007,0, 2581,A microcontroller cellular based communication network for a GPS error correction system,"It is intended to provide low cost tracking and navigation services in Jamaica. 
These services will be possible using the University of the West Indies Cellular Based Error Correction System (UWI-CBECS) (Scarlett et al., 2003). The UWI-CBECS system incorporates the Global Positioning System (GPS) (El-Rabbany, 2002 and Kaplan, 1995), a Global Navigation Satellite System (GNSS), which is able to provide position information anywhere on earth. The GPS system by itself cannot provide the accuracy needed, due to uncontrollable factors. The UWI-CBECS system will have means to significantly reduce or eliminate the errors introduced by these factors, thus significantly increasing the accuracy of the system. The UWI-CBECS system will require a reliable low cost bidirectional communication network in order to provide the services at low cost. Bidirectional capability will help in significantly reducing the cost to users as it will allow error correction to be done at a centralized location and thus eliminate the need for expensive receivers. This work will introduce a multi-node communication network that is built incorporating an existing cellular phone network. A network like this would eliminate the cost of setting up expensive broadcast sites and reduce maintenance and operational costs significantly. In addition, the network will also allow bidirectional communication, which is required by the UWI-CBECS system. Bidirectional communication is very valuable; it makes additional features, such as remote control, possible. Although a cellular network offers such great advantages, there may be a possible disadvantage when it comes to reliability. The cellular system is used by many cellular subscribers, and at times when traffic is high on the network the system is unable to handle a large number of subscribers; thus a number of subscribers will be denied access to the system.",2004,0, 2582,Design of fault simulation training system for a certain tank,"It is necessary to carry out simulated driving training before force trainees conduct real-vehicle driving training for tanks, which can save training funds and raise the level of military modernization. A fault simulation training system for a certain tank is designed and its hardware and software platform is introduced. The dynamics model of the tank in linear motion is derived. The simulation shows that the system has good interaction between the trainees on the tank driving platform and the instructors on the ground master platform. The functions of driving simulation and fault exclusion have been realized initially.",2010,0, 2583,A routing methodology for achieving fault tolerance in direct networks,"Massively parallel computing systems are being built with thousands of nodes. The interconnection network plays a key role in the performance of such systems. However, the high number of components significantly increases the probability of failure. Additionally, failures in the interconnection network may isolate a large fraction of the machine. It is therefore critical to provide an efficient fault-tolerant mechanism to keep the system running, even in the presence of faults. This paper presents a new fault-tolerant routing methodology that does not degrade performance in the absence of faults and tolerates a reasonably large number of faults without disabling any healthy node. In order to avoid faults, for some source-destination pairs, packets are first sent to an intermediate node and then from this node to the destination node. Fully adaptive routing is used along both subpaths. 
The methodology assumes a static fault model and the use of a checkpoint/restart mechanism. However, there are scenarios where the faults cannot be avoided solely by using an intermediate node. Thus, we also provide some extensions to the methodology. Specifically, we propose disabling adaptive routing and/or using misrouting on a per-packet basis. We also propose the use of more than one intermediate node for some paths. The proposed fault-tolerant routing methodology is extensively evaluated in terms of fault tolerance, complexity, and performance.",2006,0, 2584,Aliasing Error Reduction Based Fast VBSME Algorithm,"Mathematical analysis reveals that high frequency signal components are the main issues that make the MRF algorithm essential. Moreover, the aliasing problem of the subsampling algorithm also comes from high frequency signal components. So, based on these mathematical investigations, two fast VBSME algorithms are proposed in this paper, namely a Roberts cross edge detector based subsampling method and a motion vector based MRF early termination algorithm. Experiments show that strong correlation exists among the motion vectors of those blocks belonging to the same macroblock. By exploiting this feature, a dynamic adjustment of the search ranges of integer motion estimation is proposed in this paper. Combining our proposed algorithms with UMHS saves almost 96%-98% of the integer motion estimation (IME) time compared to the exhaustive search algorithm; the induced coding quality loss is less than a 0.8% bitrate increase or 0.04 dB PSNR decline on average.",2008,0, 2585,Utilizing MATLAB on Multicore Processors for WiMAX Error Rate Simulation,"MATLAB is a very useful tool when it comes to simulating the bit error rate (BER) of a WiMAX 802.16e system. However, despite its flexibility, the time taken for a simulation to be completed is rather significant. This is due to the time accumulated to simulate a sufficient number of frames to get an accurate result. Multiple simulations with varying parameters such as signal-to-noise ratio (SNR) are also required to reflect the dynamic real world channel conditions, which consequently contributes to a lengthy computation time. To overcome this, we present a solution utilizing the parallel computing capability of MATLAB in which the simulation is divided into multiple parallel instances running on multiple processing cores. This results in a noticeable decrease in simulation time proportionate to the number of processing cores used while maintaining a decent accuracy.",2010,0, 2586,Accurate self-synchronizing technique for measuring transmitter phase and frequency errors in TDMA digitally encoded cellular systems,"Measurement of phase and frequency errors affecting the transmitted signal in digitally encoded cellular systems is discussed here. To this aim, a new, cost-effective technique, particularly suitable in time division multiple access (TDMA) systems, is proposed. In contrast to other measurement methods or instruments, the proposed technique gains burst synchronization on the analyzed signal through digital signal-processing, thus avoiding the need for sophisticated and expensive analog triggering solutions. Only a general-purpose data acquisition system is, in fact, required to capture a proper time window of the transmitted signal, preliminarily down-converted to intermediate frequency. 
Furthermore, in the same way as other methods or instruments, phase and frequency errors are evaluated according to a standard digital signal-processing procedure, which applies to both the actual and reference phase trajectories of the transmitted signal. The technique makes both trajectories available just at the end of the aforementioned burst synchronization stage, thus also optimizing the computational burden",2002,0, 2587,A new approach to detecting memory access errors in C programs,"Memory access errors such as out-of-bounds accesses, pointer abuses, and illegal freeing are one of the principal reasons for failures of C programs. As a result, a number of research works have focused on the problem of dynamic detection of memory access errors in C programs. However, existing approaches have one or more of the following problems: inability to detect all memory errors, changing the memory allocation mechanism, incompatibility with libraries, and excessive performance overheads. In this paper, we suggest a new approach to cope with these problems. The primary goal of our approach is to present an effective technique with high precision, better performance, and relatively low space overheads. Our approach combines source code transformation to improve accuracy with efficient data structures (i.e., a bitmap scheme) to obtain better performance.",2007,0, 2588,Evaluation of memory built-in self repair techniques for high defect density technologies,"Memory built-in self repair (BISR) has been gaining importance for several years. New fault tolerance approaches are mandatory to cope with increasing defect levels affecting memories produced with current and upcoming nanometric CMOS processes. This problem will be exacerbated with nanotechnologies, where defect densities are predicted to reach levels that are several orders of magnitude higher than in current CMOS technologies. This work presents an evaluation of the area cost and yield of BISR architectures addressing memories affected by high defect densities. Statistical fault injection simulations were conducted on several memories. The obtained results show that BISR architectures can be used for future high defect technologies, providing close to 100% memory yield at reasonable hardware cost.",2004,0, 2589,Low-cost error containment and recovery for onboard guarded software upgrading and beyond,"Message-driven confidence-driven (MDCD) error containment and recovery, a low-cost approach to mitigating the effect of software design faults in distributed embedded systems, is developed for onboard guarded software upgrading for deep-space missions. In this paper, we first describe and verify the MDCD algorithms in which we introduce the notion of ""confidence-driven"" to complement the ""communication-induced"" approach employed by a number of existing checkpointing protocols to achieve error containment and recovery efficiency. We then conduct a model-based analysis to show that the algorithms ensure low performance overhead. Finally, we discuss the advantages of the MDCD approach and its potential utility as a general-purpose, low-cost software fault tolerance technique for distributed embedded computing",2002,0, 2590,Error compensation in an intelligent sensing instrumentation system,"Methods of improving the measurement accuracy by estimation and correction of the maximum error components are analyzed. 
The functional structure of the measurement channel in an intelligent sensing instrumentation system is described along with the procedures of component error correction. An experimental setup, implementing such methods in a multi-processing neural network configuration, is presented",2001,0, 2591,"Megaphone: Fault Tolerant, Scalable, and Trustworthy P2P Microblogging","Micro-blogging, or the posting of weblog entries that have a small number of characters (160 characters or less), has recently become more mainstream. Services that implement micro-blogging such as Twitter are usually based on the client-server model. This limits their scalability and fault tolerance. In this paper, we present a new secure microblogging system that is based on a peer-to-peer network. The network is arranged based on user certificates and is scalable, does not have a single point of failure, and does not depend on a single vendor's proprietary service. The paper outlines the protocol specifics and provides implementation details for a secure, scalable microblogging system.",2010,0, 2592,Analysis of multi-wavelength photonic crystal single-defect laser arrays,"Microcavities based upon 2-D photonic crystals (PhCs) can provide very high quality factors (Q) and strong confinement of light in small regions. This brings a significant enhancement in the spontaneous emission rate of the cavity mode, which then increases the spontaneous coupling factor and consequently allows the fabrication of low threshold lasers. However, one of the main limitations is that single-defect cavities only produce very low output power, generally in the range of a few nW, and generally emit light at a single wavelength. One way to overcome this problem is to employ photonic crystal nanocavity laser arrays. A uniform array of single-defect cavities can produce multiple resonant peaks but with non-uniform spacing in the wavelength spectrum, and different peaks have significantly different Q factors. This paper demonstrates that square lattice single-defect cavity arrays based on pseudonoise (PN) sequences can provide a multiwavelength spectrum with nearly uniform channel spacing and similar Q factors for different modes.",2010,0, 2593,A portable gait analysis and correction system using a simple event detection method,"Microcontrollers are widely used in the area of portable control systems, though they are only beginning to be used for portable, unobtrusive Functional Electrical Stimulation (FES) systems. This paper describes the initial prototyping of such a portable system. This has the intended use of detecting time variant gait anomalies in patients with hemiplegia, and correcting for them. The system is described in two parts: firstly, the portable hardware implementing two independent communicating microcontrollers for low powered parallel processing, and secondly the simplified low power software. Both are designed specifically for long term, stable use and also to communicate with PC based visual software for testing and evaluation. The system operates by using bend sensors to detect the angles of the hip, knee and ankle of both legs. It computes an error signal with which to produce a stimulation wave cycle that is synchronised and timed for the new gait cycle from that in which the error was observed. 
This system uses a PID controller to correct for the instability inherent in such a large time delay between observation and correction.",2002,0, 2594,Patching Processor Design Errors,"Microprocessors can have design errors that escape the test and validation process. The cost to rectify these errors after shipping the processors can be very expensive as it may require replacing the processors and stalling the shipment. In this paper, we discuss architecture support to allow patching the design errors in the processors that have already been shipped out. A contribution of this paper is our analysis showing that a majority of errors can be detected by monitoring a subset of signals in the processors. We propose to incorporate a programmable error detector in the processor that monitors these signals to detect and initiate recovery using one of the mechanisms that we discuss. The proposed hardware units can be programmed using patches consisting of the errata signatures which the manufacturer develops and distributes when errors are discovered in the post-design phase.",2006,0, 2595,Towards noise and error reduction on foundry data gathering processes,"Microshrinkages are known as probably the most difficult defects to avoid in high-precision foundry. The presence of this failure renders the casting invalid, with the subsequent cost increase. Modelling the foundry process as an expert knowledge cloud allows properly-trained machine learning algorithms to foresee the value of a certain variable, in this case, the probability that a microshrinkage appears within a casting. Our previous research presented outstanding results with a machine-learning-based approach. Still, the data gathering phase for the training of these algorithms is performed in a manual way. Thereby, this learning process is subject to an accuracy reduction due to the noise introduced in such an archaic data collection method. In this paper, we address the use of Singular Value Decomposition (SVD) and Latent Semantic Analysis (LSA) in order to reduce the number of ambiguities and noise in the dataset. Further, we have tested this approach by comparing against the results without this preprocessing step in order to show the effectiveness of the proposed method.",2010,0, 2596,Does Hardware Configuration and Processor Load Impact Software Fault Observability?,"Intermittent failures and nondeterministic behavior complicate and compromise the effectiveness of software testing and debugging. To increase the observability of software faults, we explore the effect hardware configurations and processor load have on intermittent failures and the nondeterministic behavior of software systems. We conducted a case study on Mozilla Firefox with a selected set of reported field failures. We replicated the conditions that caused the reported failures ten times on each of nine hardware configurations by varying processor speed, memory, hard drive capacity, and processor load. Using several observability tools, we found that hardware configurations that had less processor speed and memory observed more failures than others. Our results also show that by manipulating processor load, we can influence the observability of some faults.",2010,0, 2597,Interval Arithmetic and Computational Science: Rounding and Truncation Errors in N-Body Methods,"Interval arithmetic is an alternative computational paradigm that enables arithmetic operations to be performed with guaranteed error bounds. 
In this paper, interval arithmetic is used to compare the accuracy of various methods for computing the electrostatic energy for a system of point charges. A number of summation approaches that scale as O(N^2) are considered, as is an O(N) scaling Fast Multipole Method (FMM). Results are presented for various sizes of water cluster in which each water molecule is described using the popular TIP3P water model. For FMM, a subtle balance between the dominance of either rounding or truncation errors is demonstrated.",2007,0, 2598,P3A-5 Two Methods for Catheter Motion Correction in IVUS Palpography,"Intravascular ultrasound (IVUS) strain imaging of the luminal layer in coronary arteries, coined as IVUS palpography, utilizes conventional radiofrequency (RF) signals. The signals, acquired at two different levels of a compressional load, are cross-correlated to obtain the microscopic tissue displacements. The latter can be directly translated into local strain of the vessel wall. However, (apparent) tissue motion due to catheter wiggling reduces signal correlation and results in void strain estimates. To compensate for the motion artifacts in IVUS palpography, a novel method based on feature-based scale-space optical flow (OF), and a classical block matching (BM) algorithm, were employed. The computed OF vector and BM displacement fields quantify the amount of local tissue misalignment in consecutive frames. Subsequently, the extracted motion pattern is used to realign the signals prior to the cross-correlation analysis, reducing the RF signal decorrelation and increasing the number of valid strain estimates. The advantage of applying the motion compensation algorithms in IVUS palpography was demonstrated in a mid-scale validation study on 14 in-vivo pullbacks. Both methods substantially increase the number of valid strain estimates in the partial and compounded palpograms. The best method, OF, attained mean relative improvements of 28% and 14%, respectively. Implementation of motion compensation methods boosts the diagnostic value of IVUS palpography.",2007,0, 2599,Service Architecture of Grid Faults Diagnosis Expert System Based on Web Service,"This paper introduces a method that uses a multi-layer open grid services architecture (OGSA) to construct a grid-based fault diagnosis expert system (GFDES) based on multiple models. The grid service layer was designed based on OGSA, and the programs of public service and parallel service were developed on the server. The formulated data grid market-based architecture (DGMA) was made up of independent distributed grid nodes, each corresponding to a fault diagnosis expert system. The data transfer adopted GridFTP and was implemented through the API functions provided by the Java CoG Kit.",2007,0, 2600,Attenuation correction of small animal SPECT images acquired with 125I-iodorotenone,"Iodine-125 is an inexpensive and widely available radioisotope that is used frequently in biological experiments. It is also possible to perform small animal imaging experiments with this isotope, although its low photon energy (27.5 keV) may lead to significant photon attenuation. We have developed a method to calibrate x-ray computed tomography (CT) image data in order to use microCT images to provide object-specific attenuation maps that are included in an iterative reconstruction algorithm to correct for photon attenuation. Phantom experiments with iodine-125 show that this method can compensate for the effects of photon attenuation. 
A uniform phantom (3.8 cm diameter) imaged without attenuation correction has a decrease in image intensity at its center of approximately 25%, but reconstruction with attenuation correction virtually eliminates the decreased image intensity in the center of the phantom. Using 125I-iodorotenone, an experimental myocardial flow tracer, we demonstrate photon attenuation correction for iodine-125 imaging in a rat. The addition of attenuation correction improves the uniformity of the resulting perfusion images, better matching the results obtained with autoradiography.",2006,0, 2601,Two-point Multi-Section Nonuniformity Correction Algorithm of Infrared Image And Implementation of Its Simulation Platform,"Infrared focal plane arrays often suffer from non-uniformity problems that limit their overall performance. This paper describes the traditional two-point correction algorithm and a new two-point multi-section algorithm and its simulation platform. The simulation results indicate that this algorithm offers low computational cost, high precision and strong practicability.",2006,0, 2602,Fault Modeling and Detection Capabilities for EFSM Models,"Inherent timing variables and constraints in communication protocols require new extended finite-state machine (EFSM) models to formally represent their behavior, particularly for test generation purposes. However, infeasible paths due to the conflicts among the timing condition and action variables in the timed EFSM models with the activation and expiration of concurrent timers complicate the test generation process. In a test measurement laboratory, such timers, if not properly taken into account by formal methods at the test generation step, can generate false results by failing correct implementations or, worse, passing faulty implementations. This paper analyzes the fault detection capability of the timed EFSM models introduced in our earlier work in the presence of multiple timing faults. It is proven that, for a class of timing faults, test sequences generated from our models can detect multiple occurrences of pairwise combinations of such faults. A simplified version of the session initiation protocol (SIP) registration process, which is widely used by voice over IP (VoIP) telephones, has been used as a working example throughout this paper.",2008,0, 2603,Fault Diagnosis of Power Electronics in Renewable Energy equipment Based on Fuzzy Expert System,"Intelligent control (IC) is a novel stage of automatic control development. Its development brings a new method for power electronic systems. This paper applies a fuzzy expert system (FES) to the power electronics in renewable energy equipment for state monitoring and fault diagnosis. By the analysis of typical power circuits, a kind of universal configuration of fault diagnostic system, which can be used to solve the existing maintenance problems of the power electronics in renewable energy equipment, has been introduced. The paper establishes a foundation for further applications of FES in maintenance of the power electronics in renewable energy equipment. The fault diagnosis system is verified to be feasible by MATLAB simulation and on-line experiment.",2007,0, 2604,Use of data standardization to improve inverter - induction machine fault detection,"Intensive research efforts have been focused on signature analysis (SA) to detect electrical and mechanical fault conditions of induction machines. Different signals can be used: voltage, current and flux. 
Searching for characteristic frequencies via current spectral analysis is a well-known and widely used method. This method is valid when the motor is supplied by the three-phase main network. However, nowadays, in industrial applications, asynchronous motors are more and more often supplied by converters, in particular for variable speed. Current spectral analysis then becomes almost unexploitable because of the appearance of multiple harmonics of the commutation frequency. This paper presents a diagnosis method applied to a ""converter-machine-load"" set. This method is based on a pattern recognition approach. The use of data standardization makes it possible to become independent of the load level and thus to represent an operating mode by only one class. This allows decreasing the amount of initial data necessary for the training phase and improving the final diagnosis",2006,0, 2605,Flux signature analysis: An alternative method for the fault diagnosis of induction machines,"Intensive research efforts have been focused on signature analysis (SA) to detect electrical and mechanical faults of induction machines. Different signals can be used: voltage, current, flux and power. In the case of current signals (CSA), the interpretation of the one-phase current spectrum or the three-phase current space vector spectrum provides direct information on the presence of abnormal conditions. Under proper operating conditions, a similar interpretation can be obtained by using a flux sensor (FSA). In this paper, it is proved that a simple external leakage flux sensor is more efficient than the classical motor current signature analysis (MCSA) to detect both stator and rotor faults in induction machines.",2005,0, 2606,Broken rotor bars fault detection in squirrel cage induction machines,"Intensive research efforts have been focused on signature analysis (SA) to detect electrical and mechanical faults of three-phase squirrel-cage induction machines. For this purpose, different signals can be used such as stator voltages, stator currents, stray flux and input power. The diagnosis methods based on current signal analysis (CSA) are the most popular. The interpretation of the one-phase current spectrum or three-phase current space vector spectrum provides direct information on the presence of abnormal conditions. In this paper, attention has been paid to the stray flux for rotor fault diagnosis of three-phase squirrel-cage induction machines",2005,0, 2607,Covering arrays for efficient fault characterization in complex configuration spaces,"Many modern software systems are designed to be highly configurable so they can run on and be optimized for a wide variety of platforms and usage scenarios. Testing such systems is difficult because, in effect, you are testing a multitude of systems, not just one. Moreover, bugs can and do appear in some configurations, but not in others. Our research focuses on a subset of these bugs that are ""option-related"": those that manifest with high probability only when specific configuration options take on specific settings. Our goal is not only to detect these bugs, but also to automatically characterize the configuration subspaces (i.e., the options and their settings) in which they manifest. To improve efficiency, our process tests only a sample of the configuration space, which we obtain from mathematical objects called covering arrays. This paper compares two different kinds of covering arrays for this purpose and assesses the effect of sampling strategy on fault characterization accuracy. 
Our results strongly suggest that sampling via covering arrays allows us to characterize option-related failures nearly as well as if we had tested exhaustively, but at a much lower cost. We also provide guidelines for using our approach in practice.",2006,0, 2608,Increasing the efficiency of fault detection in modified code,"Many software systems are developed in a number of consecutive releases. Each new release not only adds new code but also modifies already existing code. In this study we have shown that the modified code can be an important source of faults. Faults are widely recognized as one of the major cost drivers in software projects. Therefore, we look for methods of improving fault detection in the modified code. We suggest and evaluate a number of prediction models for increasing the efficiency of fault detection. We evaluate them against the theoretical best model, a simple model based on size, as well as against analyzing the code in a random order (not using any model). We find that using our models provides a significant improvement both over not using any model at all and over using the simple model based on class size. The gain offered by the models corresponds to 30% to 60% of the theoretical maximum.",2005,0, 2609,Fault-Tolerant Behavior-Based Motion Control for Offroad Navigation,"Many tasks examined for robotic applications, like rescue missions or humanitarian demining, require a robotic vehicle to navigate in unstructured natural terrain. This paper introduces a motion control for a four-wheeled offroad vehicle trying to tackle the problems arising. These include rough ground, steep slopes, wheel slippage, skidding and others that are difficult to grasp with a physical model and often impossible to acquire with sensory equipment. Therefore, a more reactive approach is chosen using a behavior-based architecture. This way a certain generalization in unknown environment is expected. The resulting behavior network is described and experiments performed in a simulation environment as well as in the real world are presented. Additionally, the performance of the utilized vehicle in case of mechanical or electronic defects is examined in simulation.",2005,0, 2610,Segment based X-Filling for low power and high defect coverage,"Many X-filling strategies are proposed to reduce test power during scan based testing. Because their main motivation is to reduce the switching activities of test patterns in the test process, some of them are prone to reduce the test ability of test patterns, which may lead to low defect coverage. In this paper, we propose a segment based X-filling (SBF) technique to reduce test power using multiple scan chains, with minimal impact on defect coverage. Different from the previous filling methods, our X-filling technique is segment based and defect coverage aware. The method can be easily incorporated into the traditional ATPG flow to keep capture power below a certain limit and keep the defect coverage at a high level.",2009,0, 2611,The neural network algorithms for correction of signals distorted in photodetectors,"Neural network algorithms for correction of distortions in photodetectors are considered. The first one allows one to eliminate the errors due to overlapping single-photon pulses. 
The second algorithm is used for correction of the spatial dependence of sensitivity and cross-talk effects in a line image detector",2001,0, 2612,Improving tolerance of neural networks against multi-node open fault,"Neural networks are not intrinsically fault tolerant and their fault tolerance has to be improved by employing extra mechanisms. During the last decades, some simple fault types of feedforward neural networks have been widely investigated. In this paper, a rather complicated fault type, i.e. a multi-node open fault where several hidden nodes are out of work at the same time, is formally analyzed, and an approach named T3 is proposed. The ground of T3 is the recognition that the performance of trained neural networks does not decrease linearly with increasing fault severity. T3 utilizes a validation set to build the fault curve of a trained network. It then locates the inflection point of the fault curve and repeatedly trains the network according to the corresponding fault rate so that redundancy is appended to the network appropriately. Experiments show that T3 can improve the tolerance against multi-node open faults of some feedforward neural networks at the expense of relatively small redundancy",2001,0, 2613,Neuro-inspired system for real-time vision sensor tilt correction,"Neuromorphic engineering tries to mimic biological information processing. Address-Event-Representation (AER) is an asynchronous protocol for transferring the information of spiking neuro-inspired systems. Currently, AER systems are able to sense visual and auditory stimuli, to process information, to learn, to control robots, etc. In this paper we present an AER based layer able to correct in real time the tilt of an AER vision sensor, using a high speed algorithmic mapping layer. A co-design platform (the AER-Robot platform), with a Xilinx Spartan 3 FPGA and an 8051 USB microcontroller, has been used to implement the system. The system was tested with the help of the USBAERmini2 board and the jAER software.",2010,0, 2614,A General Framework for Symbol Error Probability Analysis of Wireless Systems and Its Application in Amplify-and-Forward Multihop Relaying,"New exact single-integral expressions for the evaluation of the average error probability of a wireless communication system are derived for a variety of modulation schemes in terms of the moment-generating function (MGF) of the reciprocal of the instantaneous received signal-to-noise ratio (SNR). The expressions obtained form a framework for performance evaluation of wireless communication systems for which the well-known MGF-based performance analysis method cannot be used, that is, systems for which the MGF of the instantaneous received SNR is not known or cannot be derived in closed form. Using the framework obtained, the error probability performance in general fading of an amplify-and-forward (AF) multihop relaying system with both variable-gain and fixed-gain relays is then evaluated. In particular, a new expression for the MGF of the reciprocal of the instantaneous received SNR of an AF multihop system with fixed-gain relays is derived. Numerical examples show precise agreement between simulation results and theoretical results.",2010,0, 2615,"New methodologies for eliminating No Trouble Found, No Fault Found and other non repeatable failures in depot settings","New methodologies for eliminating No Trouble Found (NTF), No Fault Found (NFF) and other non repeatable failures in depot (or other) repair settings. 
Trying to find NTFs or NFFs has been as elusive as catching a leprechaun (and with the price of gold these days, who wouldn't want to catch a leprechaun and capture his pot of gold!). In fact, in some instances getting to the root cause has become the largest area of investment for a test strategy. In this paper we explore the fundamentals of NTFs and NFFs and show developments in several areas that will allow depots to dramatically reduce these types of errors (results) with innovative solutions.",2008,0, 2616,Minimal March tests for unlinked static faults in random access memories,"New minimal March test algorithms are proposed for detection of (all) unlinked static faults in random access memories. In particular, a new minimal March MSS test of complexity 18N is introduced detecting all realistic simple static faults, as does March SS (22N) (S. Hamdioui, van de Goor, Rodgers, MTDT 2002).",2005,0, 2617,Efficient Error Correcting Codes for On-Chip DRAM Applications for Space Missions,"New systematic single error correcting codes-based circuits are introduced for random access memories, with ultimate minimal encoding/decoding complexity, low power and high performance. These new code-based circuits can be used in combinational circuits and in on-chip random access memories of reconfigurable architectures with high performance and ultimate minimum decoding/encoding complexity. Due to the overhead of parity check bits associated with error-correcting codes, there has always been a demand for an efficient and compact code for small memories in terms of data width. The proposed codes give improved performance even for small memories over the other codes. Area and power comparisons have been performed to benchmark the performance index of our codes. The code-centric circuits offer significant advantages over existing error correcting codes-based circuits in the literature in terms of lower size, power and cost, which make them suitable for a wider range of applications such as those targeting space. The paper describes the new efficient code and associated circuits for its implementation",2005,0, 2618,No fault found events during the operational life of military aircraft items,"No fault found (NFF) events are critical and well-known problems for certain aircraft items. This paper presents a study of these events for repairable items with on-condition maintenance, based on operational data from a military aircraft. Some findings are that: the number of NFF events is influenced by item type and number of repairs; most NFF events are initiated by faults recognized during operation; and different inspections contribute to NFF events. Hence, item design and tests at different operational modes and maintenance echelons should be better aligned to reduce the number of NFF events.",2009,0, 2619,Electrical model for program disturb faults in non-volatile memories,"Non-volatile memories (NVMs) are susceptible to a special type of faults known as disturb faults. A class of these disturb faults comprises faults induced by high electric field stress, known as program disturbs. In this paper we discuss the physical nature of the defects that are responsible for these faults in flash memories. We develop an electrical fault model for defects and simulate faulty cell behavior based on physical defect location (in gate oxide). We also evaluate the impact of these defects on cell performance. 
The modeling technique is flexible and applicable under different disturb conditions and defect characteristics.",2003,0, 2620,An Evaluation of the Use of Atmospheric and BRDF Correction to Standardize Landsat Data,"Normalizing for atmospheric and land surface bidirectional reflectance distribution function (BRDF) effects is essential in satellite data processing. It is important both for a single scene, when the combination of land covers, sun, and view angles creates anisotropy, and for multiple scenes in which the sun angle changes. As a consequence, it is important for inter-sensor calibration and comparison. Procedures based on physics-based models have been applied successfully with Moderate Resolution Imaging Spectroradiometer (MODIS) data. For Landsat and other higher resolution data, similar options exist. However, the estimation of BRDF models using internal fitting is not available due to the smaller variation of view and solar angles and infrequent revisits. In this paper, we explore the potential for developing operational procedures to correct Landsat data using coupled physics-based atmospheric and BRDF models. The process was realized using BRDF shape functions derived from MODIS with the MODTRAN 4 radiative transfer model. The atmospheric and BRDF correction algorithm was tested for reflectance factor estimation using Landsat data for two sites with different land covers in Australia. The Landsat reflectance values showed good agreement with ground based spectroradiometer measurements. In addition, overlapping images from adjacent paths in Queensland, Australia, were also used to validate the BRDF correction. The results clearly show that the algorithm can remove most of the BRDF effect without empirical adjustment. The comparison between normalized Landsat and MODIS reflectance factors also shows a good relationship, indicating that cross calibration between the two sensors is achievable.",2010,0, 2621,Notice of Retraction
Comprehensive Evaluation of Certain Power Vehicle Fault Based on Rough Sets,"Notice of Retraction

After careful and considered review of the content of this paper by a duly constituted expert committee, this paper has been found to be in violation of IEEE's Publication Principles.

We hereby retract the content of this paper. Reasonable effort should be made to remove all past references to this paper.

The presenting author of this paper has the option to appeal this decision by contacting TPII@ieee.org.

With the advent of intelligence, higher requirements have been put forward for the fault diagnosis of weapons. As the base of a certain weapon system, whether the power vehicle works well directly relates to whether battle and training tasks can be completed smoothly. Firstly, the paper introduces the fundamental concepts of Rough Sets and gives the method of ascertaining the evaluation values and the weights. Secondly, based on the operating characteristics of a certain power vehicle, the paper analyses and creates an index system for the power vehicle; RS is then used to objectively distribute weights. Finally, the RS-based comprehensive evaluation model is given, and the fault sequence and evaluation grade are derived. The computing results show this practical model is very significant for weapon maintenance and offers a reference for the evaluation of civil equipment faults.",2009,0, 2622,A novel diagnostic method for single power switch open-circuit faults in voltage-fed PWM motor drives,"Nowadays, variable speed AC drives have become a standard in many industrial applications. Hence, due to the widespread adoption of advanced power control devices, fault diagnosis of power converters, in particular the voltage source inverter in variable speed AC drives, is becoming more and more important. Although there are different fault types that may occur in these converters, in this paper, only single power switch open-circuit failures are considered in a vector controlled permanent magnet synchronous motor drive. A new real-time algorithm that allows the detection and localization of the faulty device using just the motor phase currents is proposed. Several results under different operating conditions are presented, proving the method's effectiveness, low detection time and robustness against false alarms.",2010,0, 2623,Scan-based ATPG diagnostic and optical techniques combination: A new approach to improve accuracy of defect isolation in functional logic failure,"Nowadays, with the increasing complexity of new VLSI circuits, laser stimulation or emission techniques and scan-based ATPG diagnostics reach their limits in functional logic failure. To overcome these limitations, a new methodology has been established. This methodology, presented in this paper, combines the advantages of both approaches in order to improve the accuracy of fault isolation and defect localization.",2008,0, 2624,Defect association and complexity prediction by mining association and clustering rules,"The number of defects remaining in a system provides insight into the quality of the system. Software defect prediction focuses on classifying the modules of a system into fault prone and non-fault prone modules. This paper focuses on predicting the fault prone modules as well as identifying the types of defects that occur in the fault prone modules. Software defect prediction is combined with association rule mining to determine the associations that occur among the detected defects and the effort required for isolating and correcting these defects. Clustering rules are used to classify the defects into groups indicating their complexity: SIMPLE, MODERATE and COMPLEX.
Moreover, the defects are used to predict the effect on the project schedules and the nature of risk concerning the completion of such projects.",2010,0, 2625,NURBS interpolator with confined chord error and tangential and centripetal acceleration control,"NURBS interpolation is in high demand in CNC systems because it allows high speed and high accuracy machining. In this work, an algorithm for NURBS interpolation capable of limiting chord error, centripetal acceleration and tangential acceleration is proposed. The algorithm is composed of two stages that may be executed simultaneously. In the first stage, the algorithm breaks down the curve into segments and, for each segment, calculates the feedrate limit that allows both the chord error tolerance and the maximum centripetal acceleration limit to be respected. The second stage is a speed-controlled interpolator with a tangential acceleration limited feedrate profile generated using information provided by the first stage. Software simulations are performed to verify the fulfillment of constraints, and performance is compared to that of a speed-controlled interpolator and a variable feedrate interpolator.",2010,0, 2626,Fault detection capabilities of coupling-based OO testing,"Object-oriented programs cause a shift in focus from software units to the way software classes and components are connected. Thus, we are finding that we need less emphasis on unit testing and more on integration testing. The compositional relationships of inheritance and aggregation, especially when combined with polymorphism, introduce new kinds of integration faults, which can be covered using testing criteria that take the effects of inheritance and polymorphism into account. This paper demonstrates, via a set of experiments, the relative effectiveness of several coupling-based OO testing criteria and branch coverage. OO criteria are all more effective at detecting faults due to the use of inheritance and polymorphism than branch coverage.",2002,0, 2627,Robust fault detection using consistency techniques for uncertainty handling,"Often the practical performance of analytical redundancy for fault detection and diagnosis is decreased by uncertainties prevailing not only in the system model, but also in the measurements. In this paper, the problem of fault detection is stated as a constraint satisfaction problem over continuous domains with a large number of variables and constraints. This problem can be solved using modal interval analysis and consistency techniques. Consistency techniques are then shown to be particularly efficient for checking the consistency of the analytical redundancy relations (ARRs), dealing with uncertain measurements and parameters. Through the work presented in this paper, it can be observed that consistency techniques can be used to increase the performance of a robust fault detection tool, which is based on interval arithmetic. The proposed method is illustrated using a nonlinear dynamic model of a hydraulic system.",2007,0, 2628,Research on Multi-agent System Model of Diesel Engine Fault Diagnosis by Case-Based Reasoning,"Oil monitoring technology is a useful method in condition monitoring and fault diagnosis for machinery, especially for low-speed, heavy-load, reciprocating, lubricated diesel engine equipment. But it is difficult to implement intelligent diagnosis because the monitored information in oil monitoring lacks logical relationships.
To solve this problem, the theory and method of case-based reasoning is adopted for the data processing and fault analysis in oil monitoring with a multi-agent system structure. Detailed definitions of the agents in the system are proposed, and the multi-agent system framework is finally established. The multi-agent mechanism brings flexibility to case-based reasoning. It enhances the capability of solving complicated questions in the new system, and overcomes the shortcoming that fault knowledge is difficult to update in traditional systems",2006,0, 2629,Study of Unequal Error Protection Method Based on JPEG2000 for Surveying&Mapping Image,"In aviation mapping, compressed images may fail to decode because of wireless transmission interference. A UEP (unequal error protection) scheme is put forward based on the layer and block structure of the JPEG2000 image compression method, realizing three levels of UEP for data blocks, data headers and image data via correction coding of varying strength under obstructed, open and shaded conditions. BCH, RS and Hamming coding are simulated, and the image decoding success rate and image quality are analyzed according to different coding and ROI (region of interest) settings under the Gaussian white noise model and the Rayleigh fading model. Results show that the proposed method improves the decoding success rate while adding almost no bandwidth, and flexibly enhances the local reconstructed image quality, which favors identification in the region of interest.",2008,0, 2630,Research for digital circuit fault testing and diagnosis techniques,"On the basis of scientific research projects recently completed by the author, and through detailed analysis of more than 30 LASAR circuit simulation result documents, this paper gives a detailed description of the fault diagnosis software development process, which provides a design basis for the development of related circuit fault simulation software. The paper describes data domain testing technology and its approaches; based on a comprehensive description of the testing system, the realization process of the fault diagnosis software is stated accordingly. In the research, a fault dictionary built from LASAR (logic automatic stimulate and response) circuit simulation results and VXI bus technology are adopted, and LabWindows/CVI, a virtual instrument software development environment, is used as the platform for fault diagnosis software development.",2009,0, 2631,Fault evaluation of certain power vehicle based on fuzzy comprehensive assessment,"On the basis of analyzing and creating the index system for the fault evaluation of certain new weapon system power vehicles, Delphi was used to distribute weight vectors, and a fuzzy comprehensive assessment model was given. A sample shows that the model is practical and can be used in various weapon systems, and the relevant software can be developed by professionals. It offers a reference for the evaluation of other weapon faults and is strategically significant for the army in gaining the advantage of opportunity.",2008,0, 2632,Simulation techniques for fault diagnosis of digital circuits based on LASAR,"On the basis of scientific research projects recently completed by the author, and through analysis of the contents of fault testing, this paper gives a detailed description of fault testing methods, covering the fault dictionary method, coding method and comparison method.
The paper mainly describes fault simulation analysis based on LASAR (Logic Automatic Stimulate and Response), gives four major steps in the development of fault simulation software, including modeling of the unit under test, simulation of a fault-free board, fault simulation and post-processing, and achieves a fault coverage rate of 96.59% for the excitations.",2010,0, 2633,Dynamic on-body channel and its ARQ error control,"On-body communication channels are of increasing interest as more and more wireless devices are wearable in medical, military and personal communication applications. Based on our previous study, this paper investigates the reasons for time-varying fading of the on-body channel and its ARQ error control. We used the commercial software POSER7 to generate distance and direction mismatch between on-body antennas during actions and analyzed dynamic path loss based on a measured on-body antenna radiation pattern. The middle and long term fading are attributed to intended movements which can be controlled, and the dominant short term fading is attributed to the mismatch of antenna direction resulting from unintentional movements which cannot be controlled. A backoff strategy for ARQ retransmission to overcome the dynamic on-body fading is then analyzed considering QoS criteria and power consumption criteria.",2009,0, 2634,Development and evaluation of a model of programming errors,"Models of programming and debugging suggest many causes of errors, and many classifications of error types exist. Yet, there has been no attempt to link causes of errors to these classifications, nor is there a common vocabulary for reasoning about such causal links. This makes it difficult to compare the abilities of programming styles, languages, and environments to prevent errors. To address this issue, this paper presents a model of programming errors based on past studies of errors. The model was evaluated with two observational studies of Alice, an event-based programming system, revealing that most errors were due to attentional and strategic problems in implementing algorithms, language constructs, and uses of libraries. In general, the model can support theoretical, design, and educational programming research.",2003,0, 2635,Prototyping a fault-tolerant multiprocessor SoC with run-time fault recovery,"Modern integrated circuits (ICs) are becoming increasingly complex. The complexity makes it difficult to design, manufacture and integrate these high-performance ICs. The advent of multiprocessor systems-on-chips (SoCs) makes it even more challenging for programmers to utilize the full potential of the computation resources on the chips. In the meantime, the complexity of the chip design creates new reliability challenges. As a result, chip designers and users cannot fully exploit the tremendous silicon resources on the chip. This research proposes a prototype which is composed of a fault-tolerant multiprocessor SoC and a coupled single program, multiple data (SPMD) programming framework. We use a SystemC based modeling and simulation environment to design and analyze this prototype. Our analysis shows that this prototype serves as a reliable computing platform constructed from potentially unreliable chip resources, thus protecting the previous investment in hardware and software designs.
Moreover, the promising application-driven simulation results shed light on the potential of a scalable and reliable multiprocessing computing platform for a wide range of mission-critical applications",2006,0, 2636,A Fault-Tolerant Dynamic Fetch Policy for SMT Processors in Multi-Bus Environments,"Modern microprocessors are becoming increasingly susceptible to transient faults, e.g. those caused by high-energy particles, due to high integration density, clock frequencies and temperatures and decreasing supply voltages. A newer method to speed up contemporary processors with a small area increase is simultaneous multithreading (SMT). With the introduction of SMT, instruction fetch and issue policies gained importance. SMT processors are able to simultaneously fetch and issue instructions from multiple instruction streams. In this work, we focus on how dynamic bus arbitration and scheduling of hardware threads within the processor's front-end can help to dynamically adjust fault coverage and performance. The novelties which help to reach this goal are: a multi-bus-scheduling scheme which can be used to tolerate permanent bus faults and single event disturbances (SEDs); and, used in conjunction with the first, a dynamic fetch scheduling algorithm for a simultaneous multithreaded processor, leading to the introduction of dynamic multithreading. Dynamically multithreaded processors are able to switch between different SMT fetch policies, thus enabling a graceful degradation of the processor's front-end",2006,0, 2637,McC++/Java: Enabling Multi-core Based Monitoring and Fault Tolerance in C++/Java,"Monitoring and fault tolerance are important approaches to give high confidence that long-running online software systems run correctly. But these approaches will certainly cause high overhead cost, i.e. a loss of efficiency. Multi-core platforms can make such cost acceptable because of their parallel performance advantage. To allow ordinary software developers without any knowledge of multi-core platforms to handle such programming tasks more efficiently, we propose an approach to enable multi-core based monitoring and fault tolerance in C++/Java.",2010,0, 2638,An approach for controller fault detection,"Monitoring and maintenance of control software become more and more important and difficult as control software grows in size and complexity. In this paper, an approach for control software fault detection is proposed, which is based on monitoring the discrepancies between the control outputs of the actual controller and a benchmark controller, a Linear Quadratic Gaussian (LQG) controller. The discrepancies are assumed to follow a Gaussian distribution with a stable mean under the normal situation. Faults in the actual controller are characterized by sudden jumps in the mean of the discrepancies. The fault detection is thus transformed into a jump point identification problem. A detector based on the Generalized Likelihood Ratio (GLR) test is employed for the jump point identification. The proposed approach is applicable to general control software even though it is only illustrated through a water heater case study with a simple PID controller.",2004,0, 2639,Derivation of simulative fault data from normal operating data for on-line monitoring and diagnostic system,"More and more on-line monitoring and diagnostic systems have been applied in power systems to ensure the reliability of HV power equipment.
The diagnostic system has to be built up according to the actual fault patterns of the equipment. However, because of the low on-site failure rate, real fault data are usually scarce, which restricts validation of the diagnostic method and the corresponding algorithm. Hence, a method of deriving simulative fault data from real normal operating data is presented as a solution. Based on measuring data from a 110 kV HV bushing on-line monitoring system, some fault data were simulated. In the system, an artificial neural network (ANN) employing adaptive resonance theory (ART) is constructed as the diagnostic algorithm. It is concluded that applying the simulative fault data method is convenient for constructing the diagnostic system.",2004,0, 2640,Addressing the Corrections Crisis with Software Technology,"More than 2.3 million people currently live in US prisons or jails, 25 percent of the world's total inmate population, a comparatively much higher rate than in other Western countries. Denmark only incarcerates 66 of every 100,000 citizens, compared to 760 in the US (www.kcl.ac.uk/depsta/rel/icps/worldbrief/world_brief.html). This situation results from tough sentencing policies that focus on drug use and habitual offenders. Over three decades, these policies contributed to high incarceration rates. While most states have stopped enforcing these policies, the legacy remains, with high recidivism rates perpetuating the cycle. This situation has resulted in rampant overcrowding, with facilities operating at levels above design capacity and inmates frequently housed on bunks in recreational areas. Faced with rising costs and rampant overcrowding, correctional facilities are turning to software technologies for help.",2010,0, 2641,Rapid: Identifying Bug Signatures to Support Debugging Activities,"Most existing fault-localization techniques focus on identifying and reporting single statements that may contain a fault. Even in cases where a fault involves a single statement, it is generally hard to understand the fault by looking at that statement in isolation. Faults typically manifest themselves in a specific context, and knowing that context is necessary to diagnose and correct the fault. In this paper, we present a novel fault-localization technique that identifies sequences of statements that lead to a failure. The technique works by analyzing partial execution traces corresponding to failing executions and identifying common segments in these traces, incrementally. Our approach provides developers a context that is likely to result in a more directed approach to fault understanding and a lower overall cost for debugging.",2008,0, 2642,Implementation and evaluation of transparent fault-tolerant Web service with kernel-level support,"Most of the techniques used for increasing the availability of Web services do not provide fault tolerance for requests being processed at the time of server failure. Other schemes require deterministic servers or changes to the Web client. These limitations are unacceptable for many current and future applications of the Web. We have developed an efficient implementation of a client-transparent mechanism for providing fault-tolerant Web service that does not have the limitations mentioned above. The scheme is based on a hot standby backup server that maintains logs of requests and replies.
The implementation includes modifications to the Linux kernel and to the Apache Web server, using their respective module mechanisms. We describe the implementation and present an evaluation of the impact of the backup scheme in terms of throughput, latency, and CPU processing-cycle overhead.",2002,0, 2643,Deriving Symbol Dependent Edit Weights for Text Correction: The Use of Error Dictionaries,"Most systems for correcting errors in texts make use of specific word distance measures such as the Levenshtein distance. In many experiments it has been shown that correction accuracy is improved when using edit weights that depend on the particular symbols of the edit operation. However, most approaches proposed so far rely on high amounts of training data where errors and their corrections are collected. In practice, the preparation of suitable ground truth data is often too costly, which means that uniform edit costs are used. In this paper we evaluate approaches for deriving symbol dependent edit weights that do not need any ground truth training data, comparing them with methods based on ground truth training. We suggest a new approach where special error dictionaries are used to estimate weights. The method is simple and very efficient, needing one pass of the document to be corrected. Our experiments with different OCR systems and textual data show that the method consistently improves correction accuracy in a significant way, often leading to results comparable to those achieved with ground truth training.",2007,0, 2644,Low cost error tolerant motion estimation for H.264/AVC standard,Motion estimation is the most important module in any video encoder from a quality point of view. In the presence of faults the final generated bit-stream remains compatible with the standard; however the quality degradation and the decrease in compression ratio are considerable. In this paper a low cost error tolerant structure for motion estimation is proposed. Simulation results show that our proposed method imposes smaller area and power overhead with small effect on quality.,2010,0, 2645,Increasing power efficiency in transmitter diversity systems under error performance constraints,"Motivated by combinatorial optimization theory, we propose an algorithmic power allocation method that minimizes the total transmitting power in transmitter diversity systems, provided that the instantaneous Bit-Error-Rate (BER) is not greater than a predetermined value. This method applies to many practical applications where the power transmitted by each antenna is constrained. We also provide closed-form expressions for the average total transmitted power for the case of two transmitting antennas operating in Rayleigh fading, and the average number of active antennas at the transmitter assuming Nakagami-m fading channels. Simulations and numerical results show that, compared to the conventional equi-power scheme, the proposed model offers a considerable reduction in the total transmitting power and the average number of active antennas, without loss in error performance.",2008,0, 2646,Increasing Power Efficiency in Transmitter Diversity Systems under Error Performance Constraints,"Motivated by the well-known knapsack problem, we propose an efficient algorithmic method that minimizes and optimally allocates the total transmitting power in transmitter diversity systems, provided that the instantaneous bit-error-rate (BER) is not greater than a predetermined value.
We also provide closed-form expressions for the average total transmitted power for the case of two transmitting antennas operating in Rayleigh fading, and the average number of active antennas at the transmitter assuming Nakagami-m fading channels. Simulations and numerical results show that, compared to the conventional equi-power scheme, the proposed model offers a considerable reduction in the total transmitting power and the average number of active antennas, without loss in error performance.",2007,0,2645 2647,MRI inter-packet movement correction for images acquired with non-complementary data,"Movement during the acquisition of magnetic resonance images can cause artifacts that interfere with subsequent image analysis. In this paper we address the problem of inter-packet motion and provide a method to minimize errors associated with this artifact. The procedure is based on an iterative packet-to-volume registration process and does not require complementary information such as multimodal acquisitions or protocols that provide redundant volume data. A Kaiser-Bessel function is used to interpolate missing data. Experiments with simulated data demonstrate that the packet-to-volume registration improves greatly after a single iteration and maintains improvement for the following iterations, while experiments with real data demonstrate a substantial reduction in associated artifacts and improvement in quality. In both cases anatomical integrity is preserved after reconstruction.",2008,0, 2648,Performance evaluation of MPEG-4 visual error resilient tools over a mobile channel,"MPEG-4 is the most recent standard for audiovisual representation published by the International Organization for Standardization, targeted at future interactive multimedia video communications calling for content-based functionalities, universal access in error prone environments and high coding efficiency. Besides the provisions for content-based functionalities, the MPEG-4 video coding standard will assist the efficient storage and transmission of very low bit rate video in error prone environments. This makes MPEG-4 a good candidate video coding standard for Terrestrial Trunked Radio Standard (TETRA) video transmission. This paper presents the overall performance of a communication system used to transmit MPEG-4 encoded video over TETRA channels. The video quality that can be achieved under different channel conditions employing different combinations of MPEG-4 built-in error resilient tools is presented in this paper.",2003,0, 2649,Distributed Fault Detection and Diagnosis of Chemical Process Based on MAS,"Multi-agent technology offers a number of characteristics that make it well suited for distributed process monitoring and fault diagnosis tasks. In this paper we introduce a multi-agent architecture to implement distributed applications for chemical process monitoring and diagnosis as a set of cooperating intelligent agents. The agents appearing in this application are defined in ADL (Agent Description Language), a high-level specification language, and interchange data/knowledge through service requests using a common knowledge representation language.
Elements of a multi-agent framework that is being used to support this complex distributed application, and the mechanisms used for agent usability within the chemical process environment, are also described.",2003,0, 2650,pCube: Update-efficient online aggregation with progressive feedback and error bounds,"Multidimensional data cubes are used in large data warehouses as a tool for online aggregation of information. As the number of dimensions increases, supporting efficient queries as well as updates to the data cube becomes difficult. Another problem that arises with increased dimensionality is the sparseness of the data space. In this paper we develop a new data structure referred to as the pCube (data cube for progressive querying), to support efficient querying and updating of multidimensional data cubes in large data warehouses. While the pCube concept is very general and can be applied to any type of query, we mainly focus on range queries that summarize the contents of regions of the data cube. pCube provides intermediate results with absolute error bounds (to allow trading accuracy for fast response time), efficient updates, scalability with increasing dimensionality, and pre-aggregation to support summarization of large ranges. We present both a general solution and an implementation of pCube and report the results of experimental evaluations",2000,0, 2651,A new motion compensation approach for error resilient video coding,"Multihypothesis motion-compensated prediction (MHMCP) can be used as an error resilience technique for video coding. Motivated by MHMCP, we propose a new error resilience approach named alternative motion-compensated prediction (AMCP), where two-hypothesis and one-hypothesis predictions are alternatively used with some mechanism. Both theory and simulation results show that in the case of one frame loss, the expected converged error using AMCP is smaller than that using two-hypothesis MCP.",2005,0, 2652,A Fault Tolerant Infrastructure for Mobile Agents,"Mobile agent technology is a promising paradigm for a myriad of real world applications. Owing to their tremendous capabilities, multiagent systems have been scoped in a large number of applications. However, issues related to fault tolerance can hamper the suitability of mobile agents in these real world systems. In this paper we propose an infrastructure which provides agent fault tolerance. An algorithm similar to the sliding window model ensures fault tolerant behavior. Different types of agents work in collaboration to provide the desired system behavior by tolerating faults. The proposed infrastructure will be applicable in a variety of systems including e-commerce, online banking, etc. With the growing electronic commerce market, it becomes an interesting prospect to use autonomous mobile agents for electronic business transactions.",2006,0, 2653,Enhancing Mobile Agent Applications with Security and Fault Tolerant Capabilities,"Mobile agent technology promises to be a powerful mechanism to improve the flexibility and customizability of applications with its ability to dynamically deploy application components across the network. But none of the present mobile agent prototype systems satisfy all the requirements to provide a secure and reliable architecture, suitable for any mobile agent based distributed application.
This paper presents an architecture for a mobile agent system that provides a reliable agent tracking mechanism from the perspective of the agent's owner, protection for the host from a malicious agent, and protection for the agent from a malicious host when the agent migrates to different nodes with an assigned task. This framework uses various encryption mechanisms to provide the required host security and agent security.",2009,0, 2654,Language and Tool Support for Model Checking of Fault-Tolerant Distributed Algorithms,"Model checking is a successful formal verification technique; however, its application to fault-tolerant distributed algorithms is still not common practice. One major reason for this is that model checking requires non-negligible users' efforts in representing the algorithm to be verified in the input language of a model checker. To alleviate this problem we propose an approach which encompasses (i) a language for concisely describing fault-tolerant distributed algorithms and (ii) a translator from the proposed language to PROMELA, the input language of the SPIN model checker. To demonstrate the feasibility of our approach, we show the results of an experiment where we described and verified several algorithms for consensus, a well-known distributed agreement problem.",2008,0, 2655,Automatic verification of fault tolerance using model checking,"Model checking is a technique that can make verification of finite state systems fully automatic. We propose a method for automatic verification of fault-tolerant systems using this technique. Unlike other related work, which is tailored to specific systems, we aim at providing a general approach to verification of fault tolerance. The main obstacle in model checking is state explosion. To avoid the problem, we design this method so that it can use SMV, a symbolic model checking tool. Symbolic model checking can overcome the problem by expressing the state space and the transition relation as Boolean functions. Assuming that a system to be verified is specified by guarded commands, we define a modeling language suited for describing guarded command programs and propose a translation method from the modeling language to the input language of SMV. We show the results of applying the proposed method to various examples to demonstrate its usefulness",2001,0, 2656,Finding liveness errors with ACO,"Model checking is a well-known and fully automatic technique for checking software properties, usually given as temporal logic formulae on the program variables. Most model checkers found in the literature use exact deterministic algorithms to check the properties. These algorithms usually require huge amounts of memory if the checked model is large. We propose here the use of an algorithm based on ACOhg, a new kind of ant colony optimization model, to search for liveness property violations in concurrent systems. This algorithm has been previously applied to the search for safety errors with very good results and we apply it here for the first time to liveness errors. The results state that our algorithmic proposal, called ACOhg-live, is able to obtain very short error trails in faulty concurrent systems using a low amount of resources, outperforming by far the results of nested-DFS, the traditional algorithm used for this task in the model checking community and implemented in most of the explicit state model checkers.
This fact makes ACOhg-live a very suitable algorithm for finding liveness errors in large faulty concurrent systems, in which traditional techniques fail because of the model size.",2008,0, 2657,Neural fault detection of an adaptive controlled beam,"Model-based observers have been among the fastest-evolving techniques in the past years, but depending on the complexity of the monitored system, they may become impractical. Non-model-based methods for fault detection are suitable for these complex cases, and artificial neural networks are likely to provide the necessary features. A comparison between these two methods is conducted in this paper, focusing on structural fault detection in a cantilevered beam. This system, despite being a simple structure, permits good insight into the characteristics of the two methods. Two structural faults are presented: a simulated crack on a finite element model of the beam, and a mass variation on an experimental test-bed. Both the simulation and the experimental results indicate that neural networks may be a good option for fault detection in complex systems",2000,0, 2658,Test generation and fault simulation methods on the basis of cubic algebra for digital devices,"Models and methods of digital circuit analysis for test generation and fault simulation are offered. The two-frame cubic algebra for compact description of sequential primitive elements (hereafter, primitives) in the form of cubic coverings is used. It is used for digital circuit design, fault simulation and fault-free simulation as well. Problems of digital circuit testing are formulated as linear equations. The described cubic fault simulation method allows propagating primitive fault lists from inputs to outputs, generating analytical equations for deductive fault simulation of digital circuits at gate, functional and algorithmic description levels, and building comparative and interpretative fault simulators for digital circuits. Fault list cubic coverings (FLCC), which allow creating single sensitization paths, are proposed. A test generation method for single stuck-at fault (SSF) detection using FLCC is developed",2001,0, 2659,Adaptive transmission control for error-resilient multimedia synchronization,"Multimedia streams impose tight temporal constraints since different kinds of continuous multimedia streams have to be played synchronously. We devise in this paper an adaptive transmission scheme to ensure the error-resilient and synchronous playback of audio and video streams based on the real-time transport protocol. Realization of our adaptive transmission control is composed of a series of operations in three stages, namely, (1) a dynamic reordering mechanism, (2) an error-resilient mechanism, and (3) an adaptive synchronization mechanism. In this paper, an empirical study is conducted to provide insight into our adaptive transmission scheme. As validated by our performance study, the adaptive transmission mechanism is able to strike a good balance between stable playback and end-to-end delay reduction. Furthermore, we analyze the jitter resistance, the end-to-end delay, and video quality in order to enhance the applicability of this scheme to more applications that require the transmission of multimedia data.",2004,0, 2660,Insights on Fault Interference for Programs with Multiple Bugs,"Multiple faults in a program may interact with each other in a variety of ways.
A test case that fails due to a fault may not fail when another fault is added, because the second fault may mask the failure-causing effect of the first fault. Multiple faults may also collectively cause failure on a test case that does not fail due to any single fault alone. Many studies try to perform fault localization on multi-fault programs and several of them seek to match a failed test to its causative fault. It is, therefore, important to better understand the interference between faults in a multi-fault program, as an improper assumption about test case failure may lead to an incorrect matching of a failed test to a fault, which may in turn result in poor fault localization. This paper investigates such interference and examines whether one form of interference holds more often than another, and uniformly across all conditions. Empirical studies on the Siemens suite suggest that no one form of interference holds unconditionally and that observation of failure masking is a more frequent event than observation of a new test case failure.",2009,0, 2661,A low-complexity multiscale error diffusion algorithm for digital halftoning,"The multiscale error diffusion (MED) digital halftoning technique outperforms classical error diffusion techniques as it can produce a directional-hysteresis-free bi-level image. However, an extremely large computational effort is required for its implementation. In this paper, a fast MED-based digital halftoning technique is proposed to produce a halftone image without directional hysteresis at a significantly reduced computational cost. The amount of reduction is monotonically increasing with the image size. For an image of size 512×512, the proposed algorithm can save 40% of arithmetic operations as compared with MED. Moreover, since it supports parallel processing, processing time can further be squeezed.",2005,0, 2662,Systematic approach to error budget analysis for integrated sensor systems,"Multi-sensor integration (fusion) has the potential to provide increased detection volume, complementary coverage, improved track accuracy and track continuity, and is a key feature being investigated for TIS-B and proposed for the overall Safe Flight 21 architecture. Tracking accuracy is a critical characteristic of any proposed system concept and is the focus of the analysis discussed in this paper. Methods for determining whether or not a particular system concept can produce tracking accuracies sufficient to meet the requirements for the anticipated applications must be developed. This paper describes a high level, end-to-end error budget analysis approach for statistically quantifying tracking errors at the end user. The approach uses Monte Carlo techniques to provide quantifiable statistical results that can be used for performance prediction, requirement refinement, and error budget allocation. The paper also discusses the need to carefully develop and articulate the track accuracy requirements for applications so that a system concept evaluation can be completed. An example analysis for the SF-21 Enhanced Visual Acquisition application using TIS-B illustrates this approach and how it can be extended to the development of other SF-21 applications.",2002,0, 2663,Assessing multi-version systems through fault injection,"Multi-version design (MVD) has been proposed as a method for increasing the dependability of critical systems beyond current levels.
However, a major obstacle to large-scale commercial usage of this approach is the lack of quantitative characterizations available. We seek to help answer this problem using fault injection. This approach has the potential for yielding highly useful metrics with regard to MVD systems, as well as giving developers a greater insight into the behaviour of each channel within the system. In this research, we develop an automatic fault injection system for multi-version systems called FITMVS. We use this system to test a multi-version system, and then analyze the results produced. We conclude that this approach can yield useful metrics, including metrics related to channel sensitivity, code scope sensitivity, and the likelihood of common-mode failure occurring within a system",2002,0, 2664,Mutation analysis for Lustre programs: Fault model description and validation,"Mutation analysis is usually used to provide an indication of the fault detection ability of a test set. It is mainly used for unit testing evaluation, but has also been extended for integration testing evaluation. This paper describes the adaptation of mutation analysis to the Lustre programming language, including both unit and integration testing. This paper focuses on the fault model, which has been extended since our previous works. Validation of the fault model is presented.",2007,0, 2665,"Mutual coupling in microstrip antenna array: evaluation, reduction, correction or compensation","Mutual coupling between the antenna elements in a microstrip antenna array is a potential source of performance degradation, particularly in a highly congested environment. The degradation includes impedance mismatching, increased side-lobe level, deviation of the radiation pattern from the desired one, and decrease of gain due to the excitation of a surface wave. To deal with these problems, the first thing is to evaluate the mutual coupling and to select the element with low mutual coupling. Then, it is still desired to reduce the mutual coupling further by taking some measures. Finally, in certain critical cases, such as ultra low side lobe arrays and adaptive arrays, it is necessary to involve the mutual coupling effects accurately through numerical analysis. All these issues are discussed and some numerical examples are given. Due to limited space, the paper focuses mainly on the work done in our laboratory.",2005,0, 2666,Co-design for NCS robust fault-tolerant control,"NCS (networked control system) is a kind of feedback control system where the control loops are closed through a real-time control network. The existence of a real-time network in the feedback control loop makes it more complex to analyse and design fault-tolerant control (FTC) of NCS. In this paper, scheduling and control co-design for the robust fault-tolerant control of NCS is studied based on the robust H∞ fault-tolerant control idea. A parametric expression of the controller is given based on a feasible solution of a linear matrix inequality (LMI). The plant IAE index is adopted to assign priority, so that more control performance is considered in scheduling. After detailed theoretical analysis, the paper also provides the simulation results, which further demonstrate the proposed scheme",2005,0, 2667,Exact computation of maximally dominating faults and its application to n-detection tests,"n-detection test sets for stuck-at faults have been shown to be useful in detecting unmodeled defects.
It was also shown that a set of faults, called maximally dominating faults, can play an important role in controlling the increase in the size of an n-detection test set as n is increased. In an earlier work, a superset of the maximally dominating fault set was used. In this work, we propose a method to determine exact sets of maximally dominating faults. We also define a new type of n-detection test sets based on the exact set of maximally dominating faults. We present experimental results to demonstrate the usefulness of this exact set in producing high-quality n-detection test sets.",2002,0, 2668,Raptor versus Reed Solomon forward error correction codes,"Network conditions generally cause errors on network packets. Correction of these errors is the subject of ""forward error correction."" Forward error correction is divided into two categories: bit-level forward error correction and packet-level forward error correction. These two categories are unfamiliar. The aim of this study is to make a literature comparison of two alternative packet-level forward error correction codes: Raptor and Reed Solomon. Nowadays when packet-level error correction codes are mentioned, these two techniques are remembered. Reed Solomon FEC codes are found on the Internet and are tested under different network conditions. Raptor codes are commercial and not broadly used yet. But several new technologies (MBMS, DVB, etc.) use Raptor. This study shows the cases where Raptor and Reed Solomon are appropriate to use",2006,0, 2669,A study on network fault knowledge acquisition based on support vector machine,"Network fault knowledge acquisition is a necessary part of intelligent network management. In this paper, knowledge acquisition at two hierarchies is designed for large-scale modern networks, and some performance parameters, instead of the management information base, are used to model the network faults so that the evaluations of network fault knowledge acquisition can easily be unified. Our knowledge acquisition methods are based on support vector machines. Basic support vector machine learning is applied to local network fault knowledge acquisition, and an incremental PSVM is improved and adapted to global dynamic network fault knowledge acquisition. Simulations indicate the correctness and efficiency of our method, and the global network fault knowledge acquisition based on the proximal support vector machine is still to be improved further.",2005,0, 2670,Notice of Retraction
Mining Top-k Fault Tolerant Association Rules by Redundant Pattern Disambiguation in Data Streams,"Notice of Retraction

After careful and considered review of the content of this paper by a duly constituted expert committee, this paper has been found to be in violation of IEEE's Publication Principles.

We hereby retract the content of this paper. Reasonable effort should be made to remove all past references to this paper.

The presenting author of this paper has the option to appeal this decision by contacting TPII@ieee.org.

Real-world data may be polluted by uncontrolled factors or contaminated with noise. Fault-tolerant frequent patterns can overcome this problem, as they may express more generalized information than exactly matched frequent patterns. The present research is integrated with previous research into a new method, called Top-NFTDS, to discover fault-tolerant association rules over streams. It can discover the top-k true fault-tolerant rules without a user-specified minimum support threshold or minimum confidence threshold. We extend the negative itemsets to the fault-tolerant space and disambiguate redundant patterns with this algorithm. Experimental results show that the developed algorithm is an efficient method for mining top-k fault-tolerant association rules in data streams.",2010,0, 2671,Notice of Retraction
The numerical experiment study of relationships between mining subsidence and fall of fault,"Notice of Retraction

After careful and considered review of the content of this paper by a duly constituted expert committee, this paper has been found to be in violation of IEEE's Publication Principles.

We hereby retract the content of this paper. Reasonable effort should be made to remove all past references to this paper.

The presenting author of this paper has the option to appeal this decision by contacting TPII@ieee.org.

Tectonic stress and faults are the main geological factors that influence coal mining subsidence characteristics. In order to understand the controlling effect of faults of different scales on mining subsidence characteristics, and to improve prediction accuracy, a coupling model of compressive tectonic stress and a small-scale concealed adverse fault is established in FLAC3D, and the relation between fault fall and surface subsidence is studied by simulated mining. The experimental results show that, after the coal seam is mined, the scope of compressive stress increases with the development of the fault fall in the coal wall of the open-off cut, while the area of tensile stress increases near the strata at the fault pinch-out location. The subsidence gap between the hanging wall and footwall increases with the development of the fault fall, while in the strata near the fault pinch-out location the gap decreases. There is a positive correlation between the maximum subsidence and the fault fall occurring in the coal and overlying strata.",2010,0, 2672,Comparison of MV-grid structures on fault ride through behavior of MV-connected DG-units,Nowadays the amount of distributed generation units is increasing rapidly. At this moment small combined heat and power (CHP) plants are most dominant in the Netherlands. All CHP-plants are equipped with under-voltage protections which switch off the CHP-plants at a dip of 0.8 p.u. with a duration of 100-200 ms. In this paper the voltage dip propagation of two existing distribution grid structures is compared. With the aid of simulations per grid structure it is determined how many CHP-plants disconnect during a voltage dip because of these settings.,2009,0, 2673,Effects of STATCOM on wind turbines equipped with DFIGs during grid faults,"Nowadays, the power system is facing new challenges from the increasing penetration of renewable energy sources. One of these problems is voltage stability in wind farms equipped with doubly fed induction generators (DFIGs) during grid faults. Flexible AC Transmission System (FACTS) devices can be used for this problem. This paper investigates the application of a Static Synchronous Compensator (STATCOM) to achieve uninterrupted operation of a wind turbine equipped with a DFIG during grid faults. The simulation has been done in the PSCAD/EMTDC framework.",2010,0, 2674,Pilot signal-based real-time measurement and correction of phase errors caused by microwave cable flexing in planar near-field tests,Millimeter and submillimeter wave receivers in scanning planar near-field test systems are commonly based on harmonic mixing and thus require at least one flexible microwave cable to be connected to them. The phase errors originating in these cables get multiplied and added to the phase of the final detected signal. A complete submillimeter setup with on-the-fly measurement of phase errors is presented. The novel phase error correction system is based on the use of a pilot signal to measure the phase errors caused by cable flexing. The measured phase error surface in the quiet-zone region of a 310 GHz compact antenna test range (CATR) based on a hologram is shown as an application example. The maximum measured phase error due to the cable within an 80×90 cm2 scan area was 38°.,2003,0, 2675,The effect of faults on plasma particle detector data reduction,"Missions in NASA's Solar-Terrestrial Probe Line feature challenges such as multiple spacecraft and high data production rates.
An important class of scientific instruments that have for years strained against limits on communications are the particle detectors used to measure space plasma density, temperature, and flow. The Plasma Moment Application (PMA) software is being developed for the NASA Remote Exploration and Experimentation (REE) Program's series of Flight Processor Testbeds. REE seeks to enable instrument science teams to move data analyses such as PMA on board the spacecraft, thereby reducing communication downlink requirements. Here we describe the PMA for the first time and examine its behavior under single bit faults in its static state. We find that ~90% of the faults lead to tolerable behavior, while the remainder cause either program failure or nonsensical results. These results help guide the development of fault tolerant, non-hardened flight/science processors",2001,0, 2676,An error sensitivity-based redundant macroblock strategy for robust wireless video transmission,"Packet video transmission over wireless networks suffers packet loss due to either temporary link outages or fading-induced bit errors. Such packet losses often occur in bursts, which may cause substantial degradation to the transmitted video quality. In this study, a redundant macroblock strategy is proposed. MB (macroblock) PSNR is employed to evaluate the error sensitivity of MBs. The most sensitive MBs are transferred in separate additional slices while coarsely quantized copies of the MBs are placed in the original slice. When working with chessboard style FMO (flexible macroblock ordering) and fixed length slice mode (FMO-slicing), the scheme performs well against packet loss errors with acceptable overhead, and it is highly compatible with the original H.264 bitstream. The results indicate that the proposed scheme can improve the decoded video quality by 0.8-2.5 dB compared to the H.264 chessboard FMO-slicing scheme over the simulated wireless channel.",2005,0, 2677,Region-of-Interest intra prediction for H.264/AVC error resilience,"Packets in a video bitstream contain data with different levels of importance that yield unequal amounts of quality distortion when lost. In order to avoid sharp quality degradation due to packet loss, we propose in this paper an error resilience method that is applied to the region of interest (RoI) of the picture. This method protects the RoI while not yielding significant overhead. We perform an eye tracking test to determine the RoIs of a video sequence and we assess the performance of the proposed model in error-prone environments by means of a subjective quality test. Loss simulation results show that stopping the temporal error propagation in the RoIs of the pictures helps preserve acceptable visual quality in the presence of packet loss.",2009,0, 2678,PageChaser: A Tool for the Automatic Correction of Broken Web Links,"PageChaser is a system that monitors links between Web pages and searches for the new locations of moved Web pages when it finds broken links. The problem of searching for moved pages is different from typical information retrieval problems. First, it is impossible to identify the final destination until the page is actually moved, so the index-server approach is not necessarily effective. Secondly, there is a large bias about where the new address is likely to be, and crawler-based solutions can be effectively implemented, avoiding the need to search the entire Web.
PageChaser incorporates a comprehensive set of heuristics, some of which are novel, in a single unified framework. This paper explains the underlying ideas behind the design and development of PageChaser.",2008,0, 2679,Parameterization of a model-based 3-D PET scatter correction,"Parameterization of a fast implementation of the Ollinger (1996) model-based 3-D scatter correction method for positron emission tomography (PET) has been evaluated using measured phantom data acquired on a GE Advance PET imaging system. The Ollinger method explicitly estimates the 3-D single-scatter distribution using measured emission and transmission data and then estimates the multiple-scatter as a convolution of the single scatter. The main algorithmic difference from that implemented by Ollinger is that the scatter correction does not explicitly compute scatter for azimuthal angles; rather, it determines 2-D scatter estimates for data within 2-D ""super-slices"" using as input data from the 3-D direct-plane (nonoblique) slices. These axial super-slice data are composed of data within a parameterized distance from the center of the super-slice. A model-based scatter correction method can be parameterized, and the choice of parameters may significantly change the behavior of the algorithm. Parameters studied in this work included transaxial image downsampling, the number of detectors to calculate scatter to, multiple-scatter kernel width and magnitude, the number and thickness of super-slices, and the number of scatter estimation iterations. Measured phantom data included imaging of the NEMA NU 2-2001 image quality (IQ) phantom, the IQ phantom with 2 cm extra water-equivalent tissue strapped around its circumference, and an attenuation phantom (20 cm uniform cylinder with Teflon, water and air inserts) with two 8 cm diameter water-filled nonradioactive arms placed by its side. For the IQ phantom data, a subset of NEMA NU 2-2001 measures were used to determine the contrast-to-noise ratio (CNR), lung residual bias, and background variability. For the attenuation phantom, regions of interest (ROIs) were drawn on the nonradioactive compartments and on the background. These ROIs were analyzed for inter- and intra-slice variation, background bias, and compartment-to-background ratio. In most cases, the algorithm was most sensitive to multiple-scatter parameterization and least sensitive to transaxial downsampling. The algorithm showed convergence by the second iteration for the metrics used in this study. Also, the range of the magnitude of change in the metrics analyzed was small over all changes in parameterization. Further work to extend these results to more realistic phantom and clinical datasets is warranted.",2002,0, 2680,Parfait - A Scalable Bug Checker for C Code,"Parfait is a bug checker for C code that has been designed to address developers' requirements of scalability (support millions of lines of code in a reasonable amount of time), precision (report few false positives) and reporting of bugs that may be exploitable from a security vulnerability point of view. For large code bases, performance is at stake if the bug checking tool is to be integrated into the software development process, and so is precision, as each false alarm (i.e., false positive) costs developer time to track down. Further, false negatives give a false sense of security to developers and testers, as it is not obvious or clear what other bugs were not reported by the tool.
A common criticism of existing bug checking tools is the lack of reported metrics on the use of the tool. To a developer it is unclear how accurate the tool is, how many bugs it does not find, how many bugs get reported that are not actual bugs, whether the tool understands when a bug has been fixed, and what the performance is for the reported bugs. In this tool demonstration we show how Parfait fares in the area of buffer overflow checking against the various requirements of scalability and precision.",2008,0, 2681,Fault Discovery Protocol (FDP) for Passive Optical Networks (PONs),"Passive Optical Networks (PONs) are an attractive alternative to legacy copper based access lines, which still provide a number of services to customers despite their limited bandwidth and lack of future proof design. The recently growing volume of Ethernet PON deployment [1], combined with the low cost electronic and optical components used in the ONU modules, results in a situation where remote detection of faulty/active subscriber modules becomes indispensable for proper operation of an EPON system. This paper therefore addresses the problem of the remote detection of faulty ONUs in the system, where the upstream channel is flooded with the Continuous Wave (CW) transmission from one or more damaged ONUs and standard communication is severed, providing a solution which is applicable to any type of PON network, regardless of its operating protocol, physical structure, and data rate.",2007,0, 2682,Correction of motion-induced artifacts in clinical cardiac SPECT studies using a stereo-motion-tracking system,"Patient body motion is inevitable in cardiac SPECT due to the lengthy interval of time patients are imaged (23 min with transmission scan), and even respiratory patterns can change over that period. The use of external tracking devices provides additional information which is expected to result in a more robust correction than using just the emission data. We have been investigating the use of stereo-motion-tracking of retro-reflective markers on stretchy bands wrapped about the chest and abdomen of patients to provide 6 degree-of-freedom (DOF) tracking of the patient during cardiac SPECT. Recently we have introduced a visual-tracking-system (VTS) which utilizes 5 near-infrared (NIR) cameras to track markers from both ends of the clinic. This system is robust enough to compensate for multiple cameras being blocked from viewing all the markers due to the patient anatomy or viewing geometry restrictions imposed by the SPECT system. Initial studies have shown that the temporal and spatial accuracy of this motion-tracking system are well within the precision needed to correct for motion artifacts that can occur during cardiac SPECT. Clinical trials have been initiated for list-mode cardiac SPECT studies using bands of markers on the chest and abdomen, and so far we have been able to consistently track the markers. We have begun volunteer studies with IRB approval and informed consent where patients were asked to stay for a second cardiac-rest study with no additional radioactivity. Patients were asked to perform specific movements and we were able to correct for the rigid-body motion during SPECT reconstruction with our software as well as with the standard clinical tool for comparison.",2008,0, 2683,Motion Estimation of Vortical Blood Flow Within the Right Atrium in a Patient with Atrial Septal Defect,"Patients with an atrial septal defect (ASD) have a left to right shunt with associated complications.
Currently, various imaging modalities, including echocardiography and invasive cardiac catheterization, are utilized in the management of these patients. Cardiac magnetic resonance (CMR) imaging provides a novel and non-invasive approach for imaging patients with ASDs. A study of vortices generated within the right atrium (RA) during the diastolic phase of the cardiac cycle can provide useful information on the change in the magnitude of vorticity pre- and post-ASD closure. Motion estimation of blood flow applied to CMR is performed. In this study we present a two-dimensional (2D) visualization of in-vivo right atrial flow. This is constructed using flow velocities measured from the intensity shifts of turbulent blood flow regions in MRI. In particular, the flow vortices can be quantified and measured, against controls and patients with ASD, to extend medical knowledge of septal defects and their haemodynamic effects.",2007,0, 2684,RI2N/UDP: High bandwidth and fault-tolerant network for a PC-cluster based on multi-link Ethernet,"PC-clusters with a high performance/cost ratio have been one of the typical platforms for high performance computing. To lower costs, Gigabit Ethernet is often used for interconnection networks. However, the reliability of Ethernet is limited due to hardware failures and transient errors in the network switches. To solve this problem, we propose an interconnection network system based on multi-link Ethernet named RI2N. In this paper, we developed a user level implementation of RI2N using UDP/IP that is called RI2N/UDP. When this new system was evaluated for performance and fault tolerance, the bandwidth on a 2-link Gigabit Ethernet was 246 MB/s, and the system could remain active during network link failure to provide high system reliability.",2007,0, 2685,FEA analysis of classical defects in impulse storage capacitor,"PD measurement is a good way to estimate the status of an impulse storage capacitor. For deeper research into PD, a simulation of the electrical distribution in the capacitor during charge and discharge was carried out. This paper puts forward four classical defect models for FEA (finite element analysis). For different defects, the electrical distribution is distinctly different. The electric potential curve of an inner defect during the discharge time is also presented. Through analysis and calculation, the results proved that defects greatly degrade the capability of the capacitor insulation; especially under discharge impulse, the defects did much more damage to the capacitor insulation. Combined with analysis of the microcosmic phenomena of the materials, PD measurement should play an important role in the future testing of storage capacitors.",2005,0, 2686,Improvement of Network Load and Fault-Tolerant of P2P DHT Systems,"Peer-to-Peer (P2P) filesharing systems are now one of the most popular Internet applications. Unstructured P2P networks have significant scaling problems and limited efficiency. The structured P2P network based on the Distributed Hash Table (DHT) has proved to be a useful substrate for large distributed systems. However, most commercial P2P systems do not adopt DHT algorithms and still use central facilities or broadcasting based routing mechanisms. One reason impeding the popularity of DHT algorithms is the routing information maintenance overhead; it generates considerable network traffic and increases P2P system complexity, especially in a highly dynamic environment.
We therefore propose a self-stabilizing P2P network construction and maintenance protocol, called the multi-layer ring network protocol, which adopts a small-world network to construct the topology, and we present the corresponding routing algorithm for the system. In this paper, we pay particular attention to the maintenance overhead and the resilience to failures of P2P network routing algorithms.",2006,0, 2687,Increasing user's privacy control through flexible Web bug detection,"People usually provide personal information when visiting Web sites, even though they are not aware of this fact. In some cases, the collected data is misused, resulting in user privacy violations. The existing tools which aim at guaranteeing user privacy usually restrict access to personalized services. In this work, we propose the Web bug detector. By detecting and informing users about browsing tracking mechanisms which invisibly collect their personal information when visiting sites, it represents an alternative that provides better control over privacy while allowing personalization. Through experimental results, we demonstrate the applicability of our strategy by applying the detector to a real workload. We found that about 5.37% of users' requests were being tracked by third-party sites.",2005,0, 2688,"Corrections for the effects of accidental coincidences, Compton scatter, and object size in positron emission mammography (PEM) imaging","Positron emission mammography (PEM) has begun to show promise as an effective method for the detection of breast lesions. Due to its utilization of tumor-avid radiopharmaceuticals labeled with positron-emitting radionuclides, this technique may be especially useful in imaging of women with radiodense or fibrocystic breasts. While the use of these radiotracers affords PEM unique capabilities, it also introduces some limitations. Specifically, acceptance of accidental and Compton-scattered coincidence events can decrease lesion detectability. The authors studied the effect of accidental coincidence events on PEM images produced by the presence of 18F-Fluorodeoxyglucose in the organs of a subject using an anthropomorphic phantom. A delayed-coincidence technique was tested as a method for correcting PEM images for the occurrence of accidental events. Also, a Compton scatter correction algorithm designed specifically for PEM was developed and tested using a compressed breast phantom. Finally, the effect of object size on image counts and a correction for this effect were explored. The imager used in this study consisted of two PEM detector heads mounted 20 cm apart on a Lorad biopsy apparatus. The results demonstrated that a majority of the accidental coincidence events (~80%) detected by this system were produced by radiotracer uptake in the adipose and muscle tissue of the torso. The presence of accidental coincidence events was shown to reduce lesion detectability. Much of this effect was eliminated by correction of the images utilizing estimates of accidental-coincidence contamination acquired with delayed coincidence circuitry built into the PEM system. The Compton scatter fraction for this system was ~14%. Utilization of a new scatter correction algorithm reduced the scatter fraction to ~1.5%. Finally, reduction of count recovery due to object size was measured and a correction to the data applied.
Application of correction techniques for accidental coincidences, Compton scatter, and count loss due to object size increased target-to-background contrast ratios to approximately the maximum level theoretically achievable with this PEM system.",2001,0, 2689,Evaluating the Post-Delivery Fault Reporting and Correction Process in Closed-Source and Open-Source Software,"Post-delivery fault reporting and correction are important activities in the software maintenance process. It is worthwhile to study these activities in order to understand the difference between open-source and closed-source software products from the maintenance perspective. This paper proposes three metrics to evaluate the post-delivery fault reporting and correction process: the average fault hidden time, the average fault pending time, and the average fault correction time. An empirical study is further performed to compare the fault correction processes of NASA Ames (closed-source) projects and three open-source projects: Apache Tomcat, Apache Ant, and Gnome Panel.",2007,0, 2690,On power and fault-tolerance optimization in FPGA physical synthesis,"Power and fault tolerance are deemed to be two orthogonal optimization objectives in FPGA synthesis, with independent attempts to develop algorithms and CAD tools to optimize each objective. In this paper, we study the relationship between these two optimizations and show empirically that there are strong ties between them. Specifically, we analyze the power and reliability optimization problems in FPGA physical synthesis (i.e., packing, placement, and routing), and show that the intrinsic structures of these two problems are very similar. Supported by post-routing results with detailed power and reliability analysis for a wide selection of benchmark circuits, we show that with minimal changes - fewer than one hundred lines of C code - an existing power-aware physical synthesis tool can be used to minimize the fault rate of a circuit under SEU faults. As a by-product of this study, we also show that one can improve the mean-time-to-failure by 100% with negligible area and delay overhead by performing fault-tolerant physical synthesis for FPGAs. The results from this study show a great potential to develop CAD systems co-optimized for power and fault-tolerance.",2010,0, 2691,Faults and defects in power transformers - a case study,"Power transformers play a fundamental role in electrical power systems, in addition to representing significant investments involved in the implementation of these systems. To reduce the costs associated with a transformer's life cycle and to guarantee its reliability and durability, it is essential to monitor its operating conditions, its insulation system, and the working conditions of its accessories and other components. Therefore, the aim of this work is to study the faults and defects that occurred in 34.5 kV, 69 kV, 138 kV, and 230 kV oil-immersed power transformers of the electrical system and the insulation system of CELG, a major electric energy concessionaire in the state of Goias, Brazil.
The results of this study, i.e., the efficacy of the predictive technique for maintenance over the last 28 years (from 1979 to 2007), the characterization of faults and defects during this period, and the presentation of proposals for improvements in the predictive technique, aimed at reducing the number of stoppages in the electric power supply system, are expected to contribute to the body of knowledge in this field.",2009,0, 2692,Practical Criteria for the Separability of Eddy-Current Testing Signals on Multiple Defects,"Practical quantities have been introduced by the authors to characterize the interaction among multiple defects located in a specimen under eddy-current testing (ECT). If these quantities indicate that the interaction between the pairs of defects is negligible, the signal of an ECT probe for multiple defects can be calculated as the superposition of the signals obtained for each defect as if it were a single one. Conversely, if the criteria hold, the measured ECT signal can be decomposed into signals associated with the individual defects, which essentially simplifies the solution of the inverse problem of defect reconstruction. As an application of the criteria, the design of a novel barcoding system is presented, which is developed for marking metallic parts by laser treatment, and where the applicability of linear signal processing methods for reading out the barcode is a requirement.",2008,0, 2693,Toward the detection of interpretation effects and playing defects,"Precise automatic music transcription requires accurate modeling and identification of the spectral content of the audio signal. But music cannot be reduced to a succession of notes, and an accurate transcriber should be able to detect other performance characteristics, such as slow tempo variations or, depending on the instrument, some interpretation effects. From a pedagogical standpoint, a student may want to improve his or her level, and a good challenge is to estimate the quality of a musician's playing. In this paper we present some of the most common playing defects and interpretation effects and we propose a way of detecting them.",2009,0, 2694,Towards Identifying the Best Variables for Failure Prediction Using Injection of Realistic Software Faults,"Predicting failures at runtime is one of the most promising techniques to increase the availability of computer systems. However, failure prediction algorithms are still far from providing satisfactory results. In particular, the identification of the variables that show symptoms of incoming failures is a difficult problem. In this paper we propose an approach for identifying the most adequate variables for failure prediction. Realistic software faults are injected to accelerate the occurrence of system failures and thus generate a large amount of failure related data that is used to select, among hundreds of system variables, a small set that exhibits a clear correlation with failures. The proposed approach was experimentally evaluated using two configurations based on Windows XP.
Results show that the proposed approach is quite effective and easy to use and that the injection of software faults is a powerful tool for improving the state of the art on failure prediction.",2010,0, 2695,The Back Propagation Neural Network Model of Non-Periodic Defected Ground Structure,"Presently, electromagnetic numerical analysis methods such as the finite difference time-domain (FDTD) method are generally used to calculate the DGS; although these methods are accurate, they are also computationally expensive. In this paper, a neural network model of a novel defected ground structure is established. Since the neural network model has the advantages of great precision and effectiveness, the developed design model can be used in place of the FDTD method for the DGS, serving as an aid to circuit design. Neural network models of two different non-periodic DGSs have been developed, and at the same time the corresponding DGS circuits were designed and manufactured. Computer simulation results and measurements of the manufactured circuits are presented to demonstrate the effectiveness of the method.",2008,0, 2696,The detection of interturn stator faults in doubly-fed induction generators,"Presently, many condition monitoring techniques that are based on steady-state analysis are being applied to wind generators. However, the operation of wind generators is predominantly transient, therefore prompting the development of non-stationary techniques for fault detection. In this paper we apply steady-state techniques, e.g. motor current signature analysis (MCSA) and the extended Park's vector approach (EPVA), as well as a new transient technique that is a combination of the EPVA, the discrete wavelet transform and statistics, to the detection of turn faults in a doubly-fed induction generator (DFIG). It is shown that steady-state techniques are not effective when applied to DFIGs operating under transient conditions. The new technique shows that stator turn faults can be unambiguously detected under transient conditions.",2005,0, 2697,Analysis of the practical capacity of multi-valued hetero-associator considering fault tolerance,"Presents a method of pattern recognition using the multi-valued polynomial bidirectional hetero-associator (PBHA). This network can be used for the industrial application of optical character recognition. According to detailed simulations, the PBHA has a higher capacity for pattern pair storage than that of conventional bidirectional associative memories and fuzzy memories. Meanwhile, the practical capacity of a PBHA considering fault tolerance is discussed. The fault tolerance requirement leads to the discovery of the attraction radius of the basin for each stored pattern pair. The PBHA takes advantage of multi-valued characteristics in evolution equations such that the signal-to-noise ratio is significantly increased. We apply the result of this research to pattern recognition problems. The practical capacity of multi-valued data recognition using the PBHA considering fault tolerance in the worst case is also estimated. Simulation results are presented to verify the derived theory.",2001,0, 2698,Comparative study of shallow water multibeam imagery for cleaning bathymetry sounding errors,"Presents the results of a six-month study for the French Hydrographic Service (SHOM) to investigate the use of multibeam seafloor imagery for aiding existing bathymetry data cleaning techniques.
These data cleaning algorithms efficiently eliminate erroneous soundings from deep water (depth >80 m) survey datasets but generate dubious soundings in shallow water. Such soundings are time consuming for an operator to validate or invalidate. In order to improve performance, the authors tested whether additional information could be derived from the correlation between multibeam imagery and bathymetry. The discussed methodology attempts to associate imaged objects (echo/shadow sets) with a list of suspicious soundings output by SHOM algorithms. Two approaches are considered: a ping-to-ping approach and a geographic approach. Object detection algorithms are run on the two different methods. Two datasets are examined: one from a SIMRAD EM1002S and another from an ATLAS FS20. The segmentation tools developed are helpful for analyzing suspicious beams where the imagery presents an anomaly. The four methods implemented may be adapted to the type of data used and to the desired subtlety of the segmentation.",2001,0, 2699,Validation of a software dependability tool via fault injection experiments,"Presents the validation of the strategies employed in the RECCO tool to analyze C/C++ software; the RECCO compiler scans C/C++ source code to extract information about the significance of the variables that populate the program and the code structure itself. Experimental results gathered on an Open Source Router are used to compare and correlate two sets of critical variables, one obtained by fault injection experiments, and the other by applying the RECCO tool, respectively. Then the two sets are analyzed, compared, and correlated to prove the effectiveness of RECCO's methodology.",2001,0, 2700,HVAC Fault Diagnosis System Using Rough Set Theory and Support Vector Machine,"Preventive maintenance plays a very important role in modern Heating, Ventilation and Air Conditioning (HVAC) systems for guaranteeing thermal comfort, energy saving and reliability. Fault diagnosis of HVAC systems is a difficult problem due to their complex structure and the presence of multiple excitation sources. As HVAC fault information is inaccurate and uncertain, a new kind of fault diagnosis system based on Rough Set Theory (RST) and Support Vector Machine (SVM) is presented in this paper. The hybrid model integrates the advantages of RST in effectively dealing with uncertain information and SVM's greater generalization performance. The HVAC diagnosis experiment demonstrated that the solution can reduce the cost and raise the efficiency of diagnosis, and verified its feasibility for engineering application. As a result, the presented hybrid fault diagnosis method can help to maintain the health of HVAC systems and reduce energy consumption and maintenance cost.",2009,0, 2701,Fault tolerant control using sliding mode observers,"Previous work has considered the use of sliding mode observers for fault detection and isolation (FDI) in uncertain linear systems whereby the unknown faults are reconstructed by appropriate processing of the so-called equivalent output error injection. The paper builds on this work and considers such a scheme within the broader context of fault tolerant control. Specifically, by correcting the faulty measurement with an estimate of the fault obtained from the sliding mode FDI scheme, good closed-loop performance is still maintained.
An example of such a scheme, which has been implemented on a laboratory dc motor rig, is described.",2004,0, 2702,Improving the Precision of Dependence-Based Defect Mining by Supervised Learning of Rule and Violation Graphs,"Previous work has shown that application of graph mining techniques to system dependence graphs improves the precision of automatic defect discovery by revealing subgraphs corresponding to implicit programming rules and to rule violations. However, developers must still confirm, edit, or discard reported rules and violations, which is both costly and error-prone. In order to reduce developer effort and further improve precision, we investigate the use of supervised learning models for classifying and ranking rule and violation subgraphs. In particular, we present and evaluate logistic regression models for rules and violations, respectively, which are based on general dependence-graph features. Our empirical results indicate that (i) use of these models can significantly improve the precision and recall of defect discovery, (ii) our approach is superior to existing heuristic approaches to rule and violation ranking and to an existing static-warning classifier, and (iii) accurate models can be learned using only a few labeled examples.",2010,0, 2703,Proactive Fault Tolerance Using Preemptive Migration,"Proactive fault tolerance (FT) in high-performance computing is a concept that prevents compute node failures from impacting running parallel applications by preemptively migrating application parts away from nodes that are about to fail. This paper provides a foundation for proactive FT by defining its architecture and classifying implementation options. This paper further relates prior work to the presented architecture and classification, and discusses the challenges ahead for needed supporting technologies.",2009,0, 2704,Application of Combinatorial Probabilistic Neural Network in Fault Diagnosis of Power Transformer,"The Probabilistic Neural Network (PNN) overcomes the shortcomings of the BP algorithm, namely entrapment in local optima and a slow convergence rate. With enough training samples, PNN approaches the optimal result of the Bayesian decision rule. Because of the fast training rate, training samples can be added to the PNN at any time, so PNN is well suited to power transformer fault diagnosis and is self-adaptive. In order to improve the classification accuracy, the concept of combination is introduced into PNN. In this paper, the fault diagnosis of power transformers consists of four probabilistic neural networks. PNN1 is used to distinguish normal from faulty conditions. PNN2 is used to distinguish thermal faults from partial discharge (PD) faults. PNN3 is used to distinguish general overheating faults from severe overheating faults. PNN4 is used to distinguish partial discharge faults from energy sparking or arcing faults. The example shows that the combinatorial PNN is a good classifier for the fault diagnosis of power transformers. The combinatorial PNN has better diagnosis accuracy than the BPNN and fuzzy algorithms.",2007,0, 2705,Sensor placement and fault detection using an efficient fuzzy feature selection approach,"Process monitoring and fault diagnosis are of great importance for the operation safety and efficiency of complex industrial plants. The present article proposes a novel methodology to address the sensor location problem for fault detection.
Firstly, all the process situations are identified based on a fuzzy learning algorithm using measurements generated from the whole available set of sensors. Then, a fuzzy feature selection approach is used to select the optimal number of sensors that accurately characterize the set of process situations (abnormal and normal). This method optimizes the performance of the learning algorithm within a membership margin framework, and is thereby capable of addressing correlation and redundancy issues. A behavioral pattern of the process is constructed with the selected sensors and is used to associate new online observations with previously characterized process situations. The proposed strategy has been applied to the fault diagnosis of a pharmaceutical synthesis carried out in a new intensified heat-exchanger reactor.",2010,0, 2706,Process system fault source tracing based on Bayesian networks,"Process systems are different from discrete manufacturing systems in that they are composed of many interlocking subsystems that consist of various tightly coupled units. Hence, whenever a small unit of a subsystem functions badly, it could influence or cause the whole system to function abnormally. Under these circumstances, locating abnormal or failed units is a formidable task. By introducing Bayesian networks into the tracking of fault sources in process systems, a new method of fault source tracing is proposed, and a model is set up accordingly. To guarantee the accuracy of the modeling process, a series of rules which must be abided by is defined, and a mapping between the Bayesian network and the units of a process system is developed. Furthermore, to make full use of this new Bayesian network model, its probability characteristics and the problem-solving method exploiting it are expanded and investigated in depth. Additionally, a complete reasoning process for fault source tracing based on the Bayesian network is described. Finally, an example is provided to demonstrate the modeling and reasoning processes, thereby verifying the practicality and validity of this model in tracing abnormalities in process systems.",2009,0, 2707,A compact error model for reliable system design,Permanent and transient errors are inherently different in property and effect. This paper shows how to utilize this fact to develop a System Error Decision Diagram for reliable embedded systems. Based on this model an efficient approach for reliability evaluation is developed. The model and the reliability evaluation approach are intended to be employed in a system-level design process to accelerate design space exploration. The proposed approach is demonstrated for a control system taken from the automotive domain.,2009,0, 2708,Exploring the Role of Software Architecture in Dynamic and Fault Tolerant Pervasive Systems,"Pervasive systems are rapidly growing in size, complexity, distribution, and heterogeneity. As a result, the traditional practice of developing one-off embedded applications that are often rigid and unmanageable is no longer acceptable. This is particularly evident in a growing class of mobile and dynamic pervasive systems that are highly unpredictable, and thus require flexible and adaptable software support. At the same time, many of these applications are mission critical and have stringent fault tolerance requirements. In this paper, we argue that an effective approach to developing software systems in this domain is to employ the principles of software architecture.
We discuss the design and implementation of facilities we have provided in a tool-suite targeted for architecture-based development of fault tolerant pervasive systems.",2007,0, 2709,Quantification of PET and CT Data Misalignment Errors in Cardiac PET/CT: Clinical and Phantom Studies,"PET/CT units with high temporal resolution (particularly with 64-slice CT capability) are increasingly used in clinical diagnosis and prognosis of cardiovascular disease. Since the CT sub-system in the combined PET/CT unit is used to perform attenuation correction of acquired PET data, misalignments between patient positioning for the two scans can cause artifacts in the myocardial PET images, potentially resulting in false positive findings. The aim of this study is to evaluate the misalignment effect (induced by spurious or physiological patient motion in-between the two modalities) on regional and global uptake values in the myocardial region. In this study, we used both phantom (RSD thorax phantom) and clinical studies (two FDG and one NH3 rest/stress). Manual shifts between the CT and PET images ranging from 0 to 20 mm in six different directions were applied. Thereafter, attenuation correction was applied to the emission data using the manually shifted CT images in order to model patient motion between PET and CT. The reconstructed PET images using shifted CT images for attenuation correction were compared with the PET images corrected with the hypothetically misalignment-free original CT image. The criteria and figures of merit used included VOI and linear regression analysis. The analysis was performed using 500 VOIs located within the myocardial wall in each PET dataset. The VOIs were uniformly distributed across all myocardial wall regions to assess the overall influence of PET and CT misalignment. The absolute percentage relative difference increased in all simulated movements with increasing misalignments for both phantom and clinical studies (up to 30% in some regions for the 20 mm shift). In conclusion, increasing the misalignment between PET and CT studies resulted in increased changes in the tracer uptake value within the myocardium on both a regional and global basis with respect to the reference, as revealed by the various figures of merit used. The variation was more significant for right and down movements versus left and up directions.",2009,0, 2710,DECIMAL and PLFaultCAT: From Product-Line Requirements to Product-Line Member Software Fault Trees,"PLFaultCAT is a tool for software fault tree analysis (SFTA) during product-line engineering. When linked with DECIMAL, a product-line requirements verification tool, the enhanced version of PLFaultCAT provides traceability between product-line requirements and SFTA hazards as well as semi-automated derivation of the SFTA for each new product-line system previously verified by DECIMAL. The combined tool reduces the effort needed to safely reuse requirements and customize the product-line SFTA as each new system is constructed.",2007,0, 2711,Ultra-shallow junction formation by Point Defect Engineering,"Point Defect Engineering (PDE) using high-energy ion bombardment can be used as a method to inject vacancies near the surface region with excessive interstitials created near the end of the projected range deep inside the substrate.
We demonstrate that PDE not only suppresses transient enhanced diffusion of B in Si caused by implantation-induced defects, but also suppresses boride-enhanced diffusion normally associated with a high B concentration layer. With PDE, we can retard B diffusion, sharpen the boron profile and increase B activation. An enhancement of the substitutional ratio of B was observed by aligned nuclear reaction analysis. By drive-in diffusion of B from a surface-deposited layer, the concept of boron diffusion control was used as an approach to form sub-10 nm ultrashallow junctions in Si.",2002,0, 2712,Pesticide: Using SMT to Improve Performance of Pointer-Bug Detection,"Pointer bugs associated with dynamically-allocated objects resulting in out-of-bounds memory access are an important class of software bugs. Because such bugs cannot be detected easily via static-checking techniques, dynamic monitoring schemes have been proposed. However, the key challenge with dynamic monitoring schemes is the runtime overhead (slowdowns of the order of 10x are common). Previous approaches have used thread-level speculation (TLS) to reduce the overhead. However, the approaches still incur substantial slowdowns while requiring complex TLS hardware. We make the key observation that because the monitor code and user code are largely and unambiguously independent, TLS hardware with all its complexity to handle speculative parallelism is unnecessary. We explicitly multithread the monitor code in which a thread checks one access, and use SMT to exploit the parallelism in the monitor code. Despite multithreading the monitor code on SMT, dynamic monitoring slows down the user thread due to two problems: instruction overhead and insufficient overlap among the monitor threads. To address instruction overhead, we exploit the natural locality in the user thread addresses and memoize recent checks in a small table called the allocation-record-cache (ARC). However, programs making and accessing many small memory allocations cause many ARC misses and incur significant runtime overhead. To address this issue, we make a second key observation that because adjacent memory objects result in ARC entries with contiguous address ranges, the entries can be merged into one by simply merging the ranges into one. This merging increases the effective size of the ARC. Finally, insufficient overlap among monitor threads occurs because of inefficient synchronization to protect the allocation data structure updated by the user thread and read by the monitor threads. We make the third key observation that because monitor-thread reads occur for every check but user-thread writes occur only in allocations and deallocations, monitor reads are much more frequent than user writes. We propose a locking strategy, called biased lock, which shifts the locking overhead onto the writer and away from the readers. We show that starting from a runtime overhead of 414%, our scheme reduces this overhead to a respectable 24% running three monitor threads on an SMT using a 256-entry ARC with merging and biased lock.",2006,0, 2713,A software radio architecture for multi-channel digital upconversion and downconversion using generalized polyphase filterbanks with frequency offset correction,"Polyphase filterbanks (PFBs) provide a computationally efficient approach to extracting channels of arbitrary bandwidth from a wideband signal. This technique, known as digital downconversion, would find use in software radio applications.
Conversely, PFBs can also be used to perform digital upconversion, a process in which a wideband signal is constructed from several narrowband channels. In this paper, we apply the PFB technique to the IS-136 standard (North American Digital TDMA). Since the IS-136 standard requires non-overlapping adjacent channel filter masks, we show in this paper that the Generalized PFB is the optimal architecture given the non-integer sampling rate conversion that is needed to go from IF to baseband. Since frequency offset correction is an important consideration for radio receivers, we also present an augmented GPFB architecture that intrinsically performs frequency offset correction.",2002,0, 2714,Analysis of Inter-Module Error Propagation Paths in Monolithic Operating System Kernels,"Operating Systems interact directly with the hardware, so they are prone to hardware errors. This is particularly true with monolithic kernels, where all subsystems and device drivers share the same execution domain. Even in systems that support kernel modules, components still run under the same privilege level as the main kernel. Module partitioning techniques to address error propagation do exist, but they impose a performance overhead from frequent domain switches when control flows from one module to another. This paper presents a technique to extract the relationship between kernel modules and to identify potential error propagation paths between them. We also suggest a technique to group modules with respect to the function they provide, so that the number of execution domains can be minimized while still maintaining error isolation between subsystems. Additionally, we provide an evaluation of the module grouping technique with respect to performance overhead and dependability for a simple isolation environment.",2010,0, 2715,A study on the surface defects of a compact disk,"Optical disk drives are widely used today. However, even though the technology has been available to the consumer for 20 years, there are still performance issues to be improved. This study reveals a way of dealing with the photo diode signals in the optical pick-up to improve the performance of optical disk drives.",2001,0, 2716,Nonlinear weighted multiple centrality corrections interior point method for optimal power flow,"Optimal power flow (OPF) is a large scale nonlinear non-convex optimization problem. In recent decades many algorithms have been developed to solve this problem, and the interior point method (IPM) is a popular one. The primal-dual IPMs and their later developments have attracted much research interest. Motivated by the improvements in convergence performance of the further developed multiple centrality corrections IPM in linear programming (LP), this paper extends it to nonlinear programming and makes certain modifications to adapt it to the nonlinear OPF. Test results on several cases ranging from 9-bus to 2746-bus systems indicate the efficiency and good convergence of the proposed algorithm, which outperforms the original multiple centrality corrections IPM. Comparisons between the proposed method and other forms of interior point methods are also given to show its advantage in convergence performance.",2009,0, 2717,Performance of TC-QPSK-OFDM System with Phase Error and Perfect Compensation in Rayleigh Fading Channel,Orthogonal frequency division multiplexing (OFDM) combines the advantages of high achievable rates and relatively easy implementation.
In this paper we present an evaluation of the combined use of turbo codes (TC) and orthogonal frequency division multiplexing with pilot-aided data in a Rayleigh fading channel with phase error compensation and with perfect compensation. The system is called TC-QPSK-OFDM. The simulation results show that three iterations of TC-QPSK-OFDM are sufficient to provide good BER performance with both phase error compensation and perfect compensation. The gain factor is 7 dB compared with un-coded QPSK-OFDM with perfect compensation at a BER of 2*10^-4 and 8 dB at 3*10^-4 with phase error compensation. The proposed system can be used in several wireless standards such as 802.11a.,2008,0, 2718,Design and implementation of a simulator for the analysis of bit error rates by using orthogonal Frequency Division Multiplexing,"Orthogonal Frequency Division Multiplexing (OFDM) has become very popular because of its advantages, and research on the development of OFDM is still ongoing. In this paper we describe a new simulator that can perform BER analysis using OFDM technology and generate plots of bit errors vs. signal energy (Eb/No) for several modulation schemes and different noise effects in three types of channels (namely AWGN, Rayleigh and Rician).",2009,0, 2719,Fault-tolerance in a distributed management system: a case study,"Our case study provides the most important conceptual lessons learned from the implementation of a Distributed Telecommunication Management System (DTMS), which controls a networked voice communication system. Major requirements for the DTMS are fault-tolerance against site or network failures, transactional safety, and reliable persistence. In order to provide distribution and persistence both transparently and fault-tolerantly, we introduce a two-layer architecture facilitating an asynchronous replication algorithm. Among the lessons learned are: component based software engineering poses a significant initial overhead but is worth it in the long term; a fault-tolerant naming service is a key requirement for fail-safe distribution; the reasonable granularity for persistence and concurrency control is one whole object; asynchronous replication on the database layer is superior to synchronous replication on the instance level in terms of robustness and consistency; semi-structured persistence with XML has drawbacks regarding consistency, performance and convenience; in contrast to an arbitrarily meshed object model, an accentuated hierarchical structure is more robust and feasible; a query engine has to provide a means for navigation through the object model; finally, the propagation of deletion operations becomes more complex in an object-oriented model. By incorporating these lessons learned we are well underway to providing a highly available, distributed platform for persistent object systems.",2003,0, 2720,Small animal imaging with attenuation correction using clinical SPECT/CT scanners,"Our group has previously reported on performing single photon emission tomography (SPECT) studies on rats, using a GE Hawkeye millennium SPECT-CT scanner with pinhole collimators. The main challenge to obtaining quantitative physiological information from such data is the lack of X-ray attenuation data for the small animals. In this paper we present an experimental design in which a nonspecific clinical SPECT-CT scanner is used to collect both transmission and emission scan data for small animals.
Necessary hardware enhancements include construction of a new animal bed and construction of a modified emission-transmission calibration object. An emission scan of this calibration object is used to determine the study-specific geometrical parameters of the gantry. In the past, we have shown that 25 adjustable parameters are needed to describe the angle-dependent positions of the detectors and pinhole collimators in order to obtain sufficient spatial precision. A CT scan of the calibration object is used to achieve three-dimensional image registration by aligning the transmission and emission fields of view. The emission system matrix is calculated by ray tracing. The algorithm corrects for X-ray attenuation and for the system geometric response that results from the finite sizes of the pinhole aperture and the detector pixels. The system matrix was applied to the reconstruction of static and dynamic SPECT images using the ML-EM algorithm with a total-variation regularization term. In future studies, time-varying data from the first minute of acquisition will be used to extract kinetic information about how radiopharmaceuticals interact with different tissue types.",2007,0, 2721,Experimental Risk Assessment and Comparison Using Software Fault Injection,"One important question in component-based software development is how to estimate the risk of using COTS components, as the components may have hidden faults and no source code available. This question is particularly relevant in scenarios where it is necessary to choose the most reliable COTS when several alternative components of equivalent functionality are available. This paper proposes a practical approach to assess the risk of using a given software component (COTS or non-COTS). Although we focus on comparing components, the methodology can be useful to assess the risk in individual modules. The proposed approach uses the injection of realistic software faults to assess the impact of possible component failures and uses software complexity metrics to estimate the probability of residual defects in software components. The proposed approach is demonstrated and evaluated in a comparison scenario using two real off-the-shelf components (the RTEMS and the RTLinux real-time operating systems) in a realistic satellite data handling application used by the European Space Agency.",2007,0, 2722,From Fireflies to Fault-Tolerant Swarms of Robots,"One of the essential benefits of swarm robotic systems is redundancy. In case one robot breaks down, another robot can take steps to repair the failed robot or take over the failed robot's task. Although fault tolerance and robustness to individual failures have often been central arguments in favor of swarm robotic systems, few studies have been dedicated to the subject. In this paper, we take inspiration from the synchronized flashing behavior observed in some species of fireflies. We derive a completely decentralized algorithm to detect non-operational robots in a swarm robotic system. Each robot flashes by lighting up its on-board light-emitting diodes (LEDs), and neighboring robots are driven to flash in synchrony. Since robots that are suffering catastrophic failures do not flash periodically, they can be detected by operational robots. We explore the performance of the proposed algorithm both on a real-world swarm robotic system and in simulation.
We show that failed robots are detected correctly and in a timely manner, and we show that a system composed of robots with simulated self-repair capabilities can survive relatively high failure rates.",2009,0, 2723,Speeding-up fault injection experiments with dynamic code injection,"One of the main advantages of the software implemented fault injection methodology is a very high level of controllability of an injected fault and high observability of its propagation. However, in practice, the time cost of controlling the application under test and the injected faults might be very high. This relates mainly to the problem of identifying the proper fault injection time instant. The paper presents a new technique based on dynamic code injection at test runtime, called Code Cave. The injected code, executed in the context of the application under test, traps the application at the precisely defined time instant. Comparative experiments with the old-fashioned implementation show a speed-up that reduces the experiment duration by up to 50%.",2010,0, 2724,On node state reconstruction for fault tolerant distributed algorithms,"One of the main methods for achieving fault tolerance in distributed systems is recovery of the state of failed components. Though generic recovery methods like checkpointing and message logging exist, in many cases the recovery has to be application specific. In this paper we propose a general model for node state reconstruction after crash failures. In our model the reconstruction operation is defined only by the requirements it fulfills, without referring to the specific application dependent way it is performed. The model provides a framework for formal treatment of algorithm-specific and system-specific recovery procedures. It is used to specify node state reconstruction procedures for several widely used distributed algorithms and systems, as well as to prove their correctness.",2002,0, 2725,Detection and real-time correction of faulty visual feedback in atomic force microscopy based nanorobotic manipulation,"One of the main roadblocks to Atomic Force Microscope (AFM) based nanomanipulation is the lack of real-time visual feedback. Although model-based visual feedback can partly solve this problem, its unguaranteed reliability due to inaccurate models in the nano-environment still limits the efficiency of AFM based nanomanipulation. This paper introduces a Real-time Fault Detection and Correction (RFDC) method to improve the reliability of the visual feedback. By utilizing Kalman filtering and local scan technologies, the RFDC method can not only detect faulty display caused by modeling error in real time, but can also correct it on-line without interrupting manipulation. In this way, the visual feedback stays consistent with the true environment changes during manipulation, which allows several operations to be finished without an image scan in between. The theoretical study and the implementation of the RFDC method are elaborated in this paper. Experiments in manipulating nano-particles have been carried out to demonstrate the effectiveness and efficiency of the proposed method.",2008,0, 2726,Fault-tolerant wearable computing system architecture for self-health management,"One of the most important applications of wearable computing or smart textiles is biomedical monitoring for self-health management. In this paper a Constraint Satisfaction Problem (CSP) for smart textiles is defined. The goal is an optimized fault-tolerant on-body sensor network.
A novel event-driven architecture for smart textiles is proposed. A correlation between the architecture and the design features is established for optimized signal collection, fault tolerance and simplified interconnects. As part of the reliability analysis, this work proceeds to present a description of faults. Then a suitable fault model supporting the defect distribution is defined. Finally, a simulator based on Mealy machines was designed to correlate all the features defined during the course of the work. In the end, the proposed architecture has shown significant improvement in terms of highest path reliability over a simplistic system with a single data collecting point.",2008,0, 2727,Online Defects Inspection Method for Velcro Based on Image Processing,"Quality control is a crucial issue in producing velcro, and defects in velcro can dramatically downgrade its quality level. Manual inspection cannot meet the requirements of production efficiency, so a feasible online inspection method is proposed to control the surface quality of the velcro. The original algorithm for edge detection has been improved, and flaws are extracted according to the first-order characteristic value. These defects are then classified according to the spectrum. Finally, the experiments have indicated that the various defects can be detected accurately by the proposed algorithm, and that the defect inspection method is efficient and of great practical significance.",2010,0, 2728,Distributed Fault Detection with Correlated Decision Fusion,"Quick and accurate fault detection is critical to the operation of modern dynamic systems. In this paper, the fault detection problem when using multiple sensors is investigated. At each time step, local sensors transmit binary data to the fusion center, where decision fusion is performed to detect the potential occurrence of a fault. Since the sensors observe a common dynamic process, their measurements, and thus the local decisions, are correlated. Under a likelihood-ratio-based local decision rule constraint, we propose efficient suboptimal system designs involving local sensor rules and a fusion rule that include the correlation consideration. Two correlation models are proposed to approximate the complicated correlation between sensor measurements for general systems. Experimental results show that the designs with correlation consideration significantly outperform the design under the independence assumption when the correlation between sensor measurements is strong.",2009,0, 2729,Agile Store: experience with quorum-based data replication techniques for adaptive Byzantine fault tolerance,"Quorum protocols offer several benefits when used to maintain replicated data, but techniques for reducing the overheads associated with them have not been explored in detail. It is desirable that a system be able to adapt its operation so that fault tolerance related overheads are only incurred when the protocol execution actually encounters faults. There are a number of issues that need to be carefully examined to achieve such agility of quorum based systems. We make use of a file system prototype, developed in our Agile Store project, to experimentally evaluate several techniques that are important for efficient implementation of Byzantine fault-tolerant quorum protocols. We present an optimistic quorum collection scheme and a probabilistic hashing scheme for determining the response to a quorum request, and show that they lead to significant performance improvements.
The Agile Store also makes use of reconfigurable quorum techniques to allow system size and fault threshold to be dynamically varied when, for example, faulty servers are removed, new servers are added, or the threat level is changed. We quantify the performance gains made possible by such reconfiguration of quorum parameters. We also show how performance scales with different system parameters and how it is affected by design choices such as whether to use proxies. We believe that the results in the paper provide important insights into how to implement quorum protocols to provide good performance while achieving Byzantine fault tolerance.",2005,0, 2730,Adaptive OSEK Network Management for in-vehicle network fault detection,"Rapid growth in the deployment of networked electronic control units (ECUs) and enhanced software features within automotive vehicles has occurred over the past two decades. This inevitably results in difficulties and complexity in in-vehicle network fault diagnostics. To overcome these problems, a framework for on-board in-vehicle network diagnostics has been proposed and its concept has previously been demonstrated through experiments. This paper presents a further implementation of network fault detection within the framework. Adaptive OSEK Network Management, a new technique for detecting network level faults, is presented. It is demonstrated in this paper that this technique provides more accurate fault detection and the capability to cover more fault scenarios.",2007,0, 2731,Soft-error detection using control flow assertions,"Over the last few years, an increasing number of safety-critical tasks have been demanded of computer systems. In this paper, a software-based approach for developing safety-critical applications is analyzed. The technique is based on the introduction of additional executable assertions to check the correct execution of the program control flow. By applying the proposed technique, several benchmark applications have been hardened against transient errors. Fault injection campaigns have been performed to evaluate the fault detection capability of the proposed technique in comparison with state-of-the-art alternative assertion-based methods. Experimental results show that the proposed approach is far more effective than the other considered techniques in terms of fault detection capability, at the cost of a limited increase in memory requirements and in performance overhead.",2003,0, 2732,Error recovery for multicast conversational video over error-prone networks (with application to ad hoc WLANs),"Packet error rates in multicast conversational video transmissions over error-prone networks severely affect the performance of motion-compensated hybrid decoders. The loss of information in one frame has a considerable impact on the quality of the following frames, and errors propagate in the sequence. Conversational video requires fast error recovery, but multicast transmission with real-time interactive constraints introduces serious difficulties. We introduce a feedback-based protocol for resilient multicast videoconferencing in error-prone LANs. In particular, we focus on video transmission over ad hoc wireless LANs (WLANs). Unlike previous approaches, our scheme not only improves video quality at error-prone receivers, but also maximizes the overall video quality of the session and achieves efficient use of the available bandwidth. 
This protocol uses a decentralized repair request algorithm with (i) a new multicast suppression technique that protects critical correction requests and eliminates duplicated or overlapped control packets, and (ii) a new error resilience scheme based on multicast repair packets that convey predictive correction information to the video session. We present experimental videoconferencing results that demonstrate the effectiveness of the proposed protocol in ad hoc WLANs.",2005,0, 2733,Evaluating the Effect of the Number of Naturally Occurring Faults on the Estimates Produced by Capture-Recapture Models,"Project managers can use capture-recapture models to estimate the number of faults in a software artifact. The capture-recapture estimates are calculated using the number of unique faults and the number of times each fault is found. The accuracy of the estimates is affected by the number of inspectors and the number of faults. Our earlier research investigated the effect that the number of inspectors had on the accuracy of the estimates. In this paper, we investigate the effect of the number of faults on the performance of the estimates using real requirement artifacts. These artifacts have an unknown number of naturally occurring faults. The results show that while the estimators generally underestimate, they improve as the number of faults increases. The results also show that the capture-recapture estimators can be used to make correct re-inspection decisions.",2009,0, 2734,Fault location estimation based on matching the simulated and recorded waveforms using genetic algorithms,"Prompt and accurate location of faults in a large-scale transmission system is critical when system reliability is considered and is usually the first step in system restoration. The accuracy of fault location estimation essentially depends on the information available. In this paper, the fault location estimation is mathematically formulated as an optimization problem in which the fault location and fault resistances are unknown variables. An efficient genetic algorithm-based searching scheme is developed for obtaining a solution that is globally optimal",2001,0, 2735,Reconstruction of sensor faults using a secondary sliding mode observer,"Proposes two methods for reconstructing sensor faults using sliding mode observers. In both methods, fictitious systems are introduced in which the original sensor fault appears as an actuator fault. The original sensor faults are then reconstructed using a 'secondary' sliding mode observer. For both methods, there are conditions which must be satisfied for successful fault reconstruction. The methods are demonstrated with a chemical process example",2001,0, 2736,Protection system faults 1999 - 2003 and the influence on the reliability of supply,"Protection and control systems play important roles in power system operation and for the reliability and security of supply. This paper presents main results from a study of incorrect operations of protection and control systems on the voltage levels 1 - 420 kV in Norway, comprising mainly false and missing operations. The statistics for the period 1999 - 2003 show that false or unwanted operation is the major fault type and that the relative number of faults and the contribution to energy not supplied (ENS) increase with increasing voltage level.
Incorrect operations of protection and control systems at the transmission levels 33 - 420 kV contribute to 34 % of the total number of power system faults and 17 % of ENS at these levels. Human-related causes dominate at roughly 40-50 %, technical causes account for 20-30 %, while there are large portions of faults where the cause is not identified.",2005,0, 2737,Empirical scatter correction (ESC): A new CT scatter correction method and its application to metal artifact reduction,"Scatter artifacts impair the CT image quality and the accuracy of CT values. Especially in cases with metal implants and in wide cone-angle flat detector CT scans, scatter artifact removal can be of great value. Typical scatter correction methods try to estimate scattered radiation and subtract the estimated scatter from the uncorrected data. Scatter is found either by time-consuming Monte Carlo-based simulations of the photon trajectories, or by raw-data-based modelling of the scatter content using scatter kernels, whose open parameters have to be determined very accurately and for each scanner and type of object individually, and that sometimes even require a database of typical objects. The procedures are time-consuming and require intimate knowledge about the scanner, in particular about the spectral properties, for which a correction is designed. We propose an empirical scatter correction (ESC) algorithm which does not need much prior knowledge for calibration. ESC assumes that a linear combination of the uncorrected image with various ESC basis images is scatter-free. The coefficients for the linear combination are determined in the image domain by maximizing a flatness criterion of the combined volume. Here, we minimized the total variation in soft tissue regions using the gradient descent method with a line search. Simulated data and several patient data sets acquired with a clinical cone-beam spiral CT scanner, where scatter was added using a Monte Carlo scatter calculation algorithm, were used to evaluate ESC. Metal implants were simulated into those data sets, too. Our preliminary results indicate that ESC has the potential to efficiently reduce scatter artifacts in general, and metal artifacts in particular. ESC is computationally inexpensive, highly flexible, and does not require know-how of the scanner properties.",2010,0, 2738,A fast algorithm for scheduling imprecise computations with timing constraints to minimize weighted error,Scheduling tasks with different weights in the imprecise computation model is rather difficult. Each task in the imprecise computation model is logically decomposed into a mandatory subtask and an optional subtask. The mandatory subtask must be completely executed before a deadline to produce an acceptable result; the optional subtask begins after the mandatory subtask to refine the result. The error in the results of a task is measured by the processing time of the unexecuted portion of the optional subtask. This paper proposes a fast algorithm for scheduling imprecise computations with timing constraints on uniprocessor systems. The proposed algorithm can obtain the optimal schedule for different weighted tasks with time complexity O(n log2 n),2000,0, 2739,A Taxonomy for the Analysis of Scientific Workflow Faults,"Scientific workflows generally involve the distribution of tasks to distributed resources, which may exist in different administrative domains.
The use of distributed resources in this way may lead to faults, and detecting them, identifying them and subsequently correcting them remains an important research challenge. We introduce a fault taxonomy for scientific workflows that may help in conducting a systematic analysis of faults, so that the potential faults that may arise at execution time can be corrected (recovered from). The presented taxonomy is motivated by previous work [4], but has a particular focus on workflow environments (compared to previous work which focused on Grid-based resource management) and is demonstrated through its use in Weka4WS.",2010,0, 2740,Statistical Feature Selection From Massive Data in Distribution Fault Diagnosis,"Selecting proper features to identify the root cause is a critical step in distribution fault diagnosis. Power engineers usually select features based on experience. However, engineers cannot be familiar with every local system, especially in fast growing regions. With the advancing information technologies and more powerful sensors, utilities can collect much more data on their systems than before. The phenomenon will be even more substantial for the anticipated Smart Grid environments. To help power engineers select features based on the massive data collected, this paper reviews two popular feature selection methods: 1) hypothesis test, 2) stepwise regression, and introduces another two: 3) stepwise selection by Akaike's Information Criterion, and 4) LASSO/ALASSO. These four methods are compared in terms of their model requirements, data assumptions, and computational cost. With real-world datasets from Progress Energy Carolinas, this paper also evaluates these methods and compares fault diagnosis performance by accuracy, probability of detection and false alarm ratio. This paper discusses the advantages and limitations of each method for distribution fault diagnosis as well.",2010,0, 2741,Invariant checkers: An efficient low cost technique for run-time transient errors detection,"Semiconductor technology evolution brings along higher soft error rates and long-duration transients, which require new low cost system level approaches for error detection and mitigation. Known software based error detection techniques imply a high overhead in terms of memory usage and execution times. In this work, the use of software invariants as a means to detect transient errors affecting a system at run-time is proposed. The technique is based on the use of a publicly available tool to automate the invariant detection process, and the decomposition of complex algorithms into simpler ones, which are checked through the verification of their invariants during the execution of the program. A sample program is used as a case study, and fault injection campaigns are performed to verify the error detection capability of the proposed technique. The experimental results show that the proposed technique provides high error detection capability, with low execution time overhead.",2009,0, 2742,Linear randomized voting algorithm for fault tolerant sensor fusion and the corresponding reliability model,"Sensor failures in process control programs can be tolerated through application of well known modular redundancy schemes. The reliability of a specific modular redundancy scheme depends on the predefined number of sensors that may fail, f, out of the total number of sensors available, n.
Some recent sensor fusion algorithms offer the benefit of tolerating a more significant number of sensor failures than modular redundancy techniques at the expense of degrading the precision of sensor readings. In this paper, we present a novel sensor fusion algorithm based on randomized voting, having linear - O(n) - expected execution time. The precision (the length) of the resulting interval is dependent on the number of faulty sensors - parameter f. A novel reliability model applicable to general sensor fusion schemes is proposed. Our modeling technique assumes the coexistence of two major types of sensor failures, permanent and transient. The model offers system designers the ability to analyze and define application specific balances between the expected system reliability and the desired interval estimate precision. Under the assumptions of failure independence and exponentially distributed failure occurrences, we use Markov models to compute system reliability. The model is then validated empirically and examples of reliability prediction are provided for networks with a fairly large number of sensors (n>100).",2005,0, 2743,Adaptive Sensor Fault Detection and Identification Using Particle Filter Algorithms,"Sensor fault detection and identification (FDI) is a process of detecting and validating a sensor's fault status. Because FDI guarantees reliable system performance, it has received much attention recently. In this paper, we address the problem of online sensor fault identification and validation. A physical sensor validation system contains transitions between sensor normal and faulty states, changes of system parameters, and a fusion of noisy readings. A common dynamic state-space model with continuous state variables and observations cannot handle this problem. To circumvent this limitation, we adopt a Markov switch dynamic state-space model to simulate the system: we use discrete-state variables to model sensor states and continuous variables to track the change of the system parameters. Problems in the Markov switch dynamic state-space model can be well solved by particle filters, which are popularly used in solving problems in digital communications. Among them, the mixture Kalman filter (MKF) and the stochastic M-algorithm (SMA) have very good performance, both in accuracy and efficiency. In this paper, we plan to incorporate these two algorithms into the sensor validation problem, and compare the effectiveness and complexity of the MKF and SMA methods under different situations in the simulation with an existing algorithm - interactive multiple models.",2009,0, 2744,Creation and analysis of a scenario based universal sensory driver layer with real-time fault tolerant properties,"Sensor fusion and sensor integration are becoming an increasingly popular approach in dealing with complex sensor systems in autonomous mobile robots (AMR). However, the procedure for sensor integration and sensor fusion is a non-trivial process. This paper presents a scenario based approach to sensor fusion based on the autonomous evolution of sensory and actuator driver layers through environmental constraints (AEDEC) [T.A Choi, 2002]. Using the scenario based approach, the programmer's work of creating a sensory driver will be eliminated by having the AMR learn the driver on its own. In the process of creating each scenario, sensor fusion is automatically implemented. If sensors change or even if the sensor configuration changes, the driver can be updated by having the AMR relearn the driver over again.
Due to the tabular structure of the scenario based sensory drivers, malfunctioning sensors can not only be detected, but the driver can automatically adapt to the malfunctioning sensor in real-time. Furthermore, different AMRs trained using the AEDEC architecture will have similar interpretations of their environment. This is guaranteed by having the AMR learn the driver in the same highly structured training environment. The behavioral coding is simplified by eliminating any reference to hardware dependent parameters. Finally, the level of abstraction and the consistency of the highly structured environment allow for coding portability.",2003,0, 2745,Electrical Defect Density Test Structures for DFM in the Sub-wavelength Lithography Regime with Copper Metallization,"Serpent/comb test structures are standard electrical defect monitors for semiconductor interconnect processes. Traditionally, these test structures are drawn as straight lines, based on the assumption that defects are random and independent of geometry. However, with lithography now performed in the sub-wavelength regime, critical dimensions are not always printed as drawn, depending on the surrounding geometries. Therefore, defect density is not random but has a systematic component. Effective design for yield depends on understanding systematic process/layout interactions. This paper describes test structures to characterize defect density sensitivity to layout, followed by experimental results on copper metallization.",2006,0, 2746,An Aspect Oriented Approach to Analyzing Fault of Service Composition,"Service composition is an effective way to achieve value-added services, which has found wide application in software systems. Fault handling is critical to achieve high reliability for these applications. However, the existing service composition methods seldom consider services' fault handling, which results in a high risk of runtime failure. This paper proposes a formal aspect-oriented approach to designing and analyzing faults of service composition. The underlying formalism is the Petri net and its corresponding modeling method. The fault handling process is encapsulated into an aspect net and a base net, and Petri nets are used to model the core concerns and crosscutting concerns; the weaving mechanism systematically integrates these schemas into a complete service composition model. Based on the model, the related theories of Petri nets help prove the correctness of fault handling. Finally, an Export Service and simulation results show that our method can ensure the high reliability and design quality of service composition.",2010,0, 2747,SOAR: An Extended Model-Based Reasoning for Diagnosing Faults in Service-Oriented Architecture,"Service-oriented architecture (SOA) is a cost effective approach to building enterprise applications. SOA reveals non-conventional characteristics of heterogeneity, grid-like distribution, evolvability, and limited visibility. Hence, services management presents non-conventional challenges. Especially, fault diagnosis at runtime is challenging due to the SOA features. Model-based reasoning (MBR) is a formal approach to diagnosing faults, which is based on predicate calculus and term resolution. In this paper, we present SOAR (service-oriented abductive reasoning) which extends the basic MBR to diagnose faults in various SOA components. SOAR provides an enhanced inference capability with state-based and QoS-based reasoning in addition to the basic setting/observation-based reasoning.
We propose concrete schemes to formally represent system description, normal behavior, fault model and observations, and reasoning methods to diagnose faults and to determine their causes. In addition, we present a case study of applying SOAR to show how it is applied in practice and how the diagnosis can be conducted in an autonomous way.",2009,0, 2748,Fault-resilient ubiquitous service composition,"Service-oriented architecture (SOA) promises an elegant model that can easily handle dynamic and heterogeneous environments such as pervasive computing systems. However, in reality, frequent failures of resource-poor and low-cost sensors greatly diminish the guarantees on reliability and availability offered and expected by SOA. To provide a framework for building fault-resilient service-oriented pervasive computing systems, we present a solution that includes a virtual sensor framework and a high performance service composition solution based on: (1) WS-Pro, a probe-based performance-driven service composition architecture, and (2) an abstract service composition template (ASCT) approach. The concept of virtual sensor enhances the availability of services, while the service composition solution ensures the system can efficiently adapt to changes and failures in the environment. This approach also includes a novel probe-based monitoring technique to actively collect performance data, and the use of an extended Petri net model, FPQSPN, to model performance of service composition.",2007,0, 2749,Improving RFID Read Rate Reliability by a Systematic Error Detection Approach,"Reliability, security and privacy are the key concerns with RFID (radio frequency identification) adoption. While the mainstream RFID research is focused on solving the security and privacy issues, this paper focuses on addressing the reliability issues in general and detecting read rate failures in particular. We specifically consider the issue of detecting if some RFID tags are not read at all; if the tags are not read, an alarm should be activated. This is quite different from the mainstream RFID reliability research which attempts to increase the read rate by developing new and powerful antennas or improving the surrounding environment. To address this issue, we present a novel solution which can detect missed readings and notify an appropriate entity to take suitable action against it. The novelty of the proposed solution lies in the combined use of an RFID reader along with a normal weighing machine. The concept is to compare the gross weight of the tagged items against the gross weight (of the same items) stored in a backend database. The backend database can only be accessed for those RFID tags which are properly read. If some tags are not read at all these weights would vary and hence incorrect readings could be identified. This paper provides the detailed theoretical foundation for the proposed solution. In addition we compare the proposed solution against existing solutions to demonstrate the success and potential of our solution for practical deployment of RFID in a library or supermarket scenario.",2007,0, 2750,Architecture and algorithm for high performance fault-tolerant replication of sensitive military and homeland security C3I database messages,"Replicated databases are used by military and homeland security command, control, communications and intelligence (C3I) systems for fault-tolerance and fast retrieval of crucial decision-aiding information irrespective of the position of the military units.
In this paper, we propose a replicated database nodal architecture in which nodes are organized into multiple clusters connected by long-haul links. This kind of multi-site architecture permits access to crucial information even when a site is completely destroyed. In this architecture, nodes within a cluster communicate through either LANs or multi-hop wireless routing. Database updates can originate at any node in any cluster, but must be replicated to all the nodes in the system. In the proposed replication algorithm, optimal use of the bandwidth of long-haul links is achieved by an arrangement in which each node in a cluster replicates its updates to a single designated node in each one of the other clusters and those designated nodes take responsibility to replicate messages to the other nodes in their respective clusters. Fault-tolerance is addressed by assigning surrogates for each node in a cluster so that the surrogates take over replication of their primary nodes as soon as they sense inactivity in them. For high performance, this algorithm avoids usage of a reliable transport mechanism like TCP and the synchronous replication messaging required by the 2-phase or 3-phase commit type of algorithms, but still achieves sequence-preserving lossless message communication by asynchronous message flows and application level control of the messages received from an unreliable UDP channel. Composite queues with in-memory and persistent segments are used for storage of the replication messages so that they can be supplied to the receiving nodes, upon their recovery, with low latency after small downtimes and without any message loss after reasonably long downtimes. The algorithm also has features such as throttling for avoidance of message loss in low capacity long-haul links and batch acknowledgements to reduce control traffic",2005,0, 2751,An empirical study of reported bugs in server software with implications for automated bug diagnosis,"Reproducing bug symptoms is a prerequisite for performing automatic bug diagnosis. Do bugs have characteristics that ease or hinder automatic bug diagnosis? In this paper, we conduct a thorough empirical study of several key characteristics of bugs that affect reproducibility at the production site. We examine randomly selected bug reports of six server applications and consider their implications on automatic bug diagnosis tools. Our results are promising. From the study, we find that nearly 82% of bug symptoms can be reproduced deterministically by re-running with the same set of inputs at the production site. We further find that very few input requests are needed to reproduce most failures; in fact, just one input request after session establishment suffices to reproduce the failure in nearly 77% of the cases. We describe the implications of the results on reproducing software failures and designing automated diagnosis tools for production runs.",2010,0, 2752,Improving Fault Injection of Soft Errors Using Program Dependencies,"Research has shown that modern micro-architectures are vulnerable to soft errors, i.e., temporary errors caused by voltage spikes produced by cosmic radiation. Soft-error impact is usually evaluated using fault injection, a black-box testing approach similar to mutation testing. In this paper, we complement an existing evaluation of a prototype brake-by-wire controller, developed by Volvo Technology, with static-analysis techniques to improve test effectiveness.
The fault-injection tests are both time- and data-intensive, which renders their qualitative and quantitative assessment difficult. We devise a prototype visualization tool, which groups experiments by injection point and provides an overview of both instruction and fault coverage, and the ability to detect patterns and anomalies. We use the program-dependence graph to identify experiments with a priori known outcome, and implement a static analysis to reduce the test volume. The existing pre-injection heuristic is extended with liveness analysis to enable an unbiased fault-to-failure probability.",2008,0, 2753,What Types of Defects Are Really Discovered in Code Reviews?,"Research on code reviews has often focused on defect counts instead of defect types, which offers an imperfect view of code review benefits. In this paper, we classified the defects of nine industrial (C/C++) and 23 student (Java) code reviews, detecting 388 and 371 defects, respectively. First, we discovered that 75 percent of defects found during the review do not affect the visible functionality of the software. Instead, these defects improved software evolvability by making it easier to understand and modify. Second, we created a defect classification consisting of functional and evolvability defects. The evolvability defect classification is based on the defect types found in this study, but, for the functional defects, we studied and compared existing functional defect classifications. The classification can be useful for assigning code review roles, creating checklists, assessing software evolvability, and building software engineering tools. We conclude that, in addition to functional defects, code reviews find many evolvability defects and, thus, offer additional benefits over execution-based quality assurance methods that cannot detect evolvability defects. We suggest that code reviews may be most valuable for software products with long life cycles as the value of discovering evolvability defects in them is greater than for short life cycle systems.",2009,0, 2754,Algebraic expression for extra degree added to hypercube interconnection network in case of multiple edge-faults,"Researchers have used more than one method for implementing algorithms on faulty hypercube interconnection networks. Some modified the structures of the hypercube whilst others didn't, implementing their operations on a faulty hypercube. Modifying the structure of a hypercube entails adding extra degrees to each node, called reconfiguration of hypercubes. I improved an algebraic expression for the extra degrees added to a hypercube to make it non-faulty",2001,0, 2755,Extension of Power Line Fault Location Techniques to Pressurized Line Diagnostics,"Researchers in power systems have investigated fault location methods for many years and have developed mature techniques. Analogies between hydraulic systems and electrical circuits are long established, and are useful for pipeline system analysis. Based on an electric model of the pressurized lines, this paper provides a feasibility study of extending power line fault location techniques to pressurized line leakage diagnostics. Theoretical derivations are presented in the paper.",2006,0, 2756,Effect of Spacer's Defects and Conducting Particles on the Electric Field Distribution along Their Surfaces in GIS,"Research in the area of gas insulated systems (GIS) reliability is still attracting considerable attention from the electric utilities and the scientific community in many countries.
Solid insulating spacers in GIS represent the weakest points in these systems, and several troubles and system outages have been reported all over the world due to their failure. So it is essential to determine the electric field distribution along their surfaces and hence evaluate the degree of their reliability. This paper discusses the electric stress distribution at the solid-gas interface with spacer's defects and contaminating spherical conducting particles on the surface. The effects of the defect's size, type and location, and of the particle's size and location, on the electric stress distribution at the solid-gas interface are presented and discussed.",2007,0, 2757,Adaptive temporal error concealment scheme for H.264/AVC video decoder,"Resilience to channel noise is indispensable for video communication based consumer applications. Temporal error concealment is a kind of decoder-based technique to compensate for transmission errors and is popular in most consumer electronics. In this paper, an adaptive temporal error concealment scheme is proposed based on the H.264 video standard to improve error resilience ability for video consumer applications. A mode switching mechanism is devised to flexibly choose the appropriate temporal concealment mode. The search range for candidate motion vectors can be adaptively determined by the range of spatially and temporally neighboring motion vectors. A weighted outer boundary distortion function dynamically selects the optimal replacing motion vector for the local area. Experiments and comparison with recent error concealment schemes show that our proposed algorithm can effectively improve received video quality with a reduced requirement for decoding time.",2008,0, 2758,A Fault-Tolerant Mutual Exclusion Resource Reservation Protocol for Clustered Mobile Ad hoc Networks,"Resource reservation and mutual exclusion are challenging problems in mobile ad-hoc networks (MANET). Due to the dynamic characteristics of nodes in these networks, few algorithms have yet been proposed. The other problem in these networks is link or node failure due to many reasons (e.g. running out of battery, hardware/software crash, getting out of transmission range due to high mobility). Thus fault tolerance for these algorithms is another necessity which hasn't been completely accomplished. In this paper we propose an algorithm which is completely fault tolerant (covers temporary and permanent faults). It also has the mutual exclusion property for critical resource reservations. The proposed algorithm uses three recovery processes to maintain a stable state for the whole system. At the end we prove the proposed algorithm's safety and liveness properties to show its integrity.",2007,0, 2759,EVALUATION AND OPTIMISATION OF ERRORS IN THE REVERSE ENGINEERING PROCESS: A CASE STUDY,"Reverse engineering is used to reproduce a virtual model of any existing complex 3D shape. It is a fast evolving area which has a multitude of applications. It has also become an increasingly vital tool to reduce product development cycles. This paper describes and analyses the successive errors embedded in the Reverse engineering process. Several simple components with specific geometric shapes are reverse engineered and remanufactured.
Results show that the successive errors involved in each stage of the Reverse engineering process remain minimal (of the order of 0.5% or less), resulting in an overall maximum uncertainty of less than 1% between the original components and their remanufactured parts. Finally, two surface reconstruction procedures are described and compared with a suggested alternative method. This method enables the construction of a CAD model with smooth surfaces without any oscillations and a closer fit to the scans. The final CAD model obtained can then be redesigned for improved performance prior to remanufacturing.",2007,0, 2760,Read-Error-Rate evaluation for RFID system on-line testing,"RFID systems are complex hybrid systems, consisting of analog and digital hardware and software components. RFID technologies are often used in critical domains or in harsh environments. But RFID systems are based only on low-cost equipment, which does not allow robust communications to be achieved. All these points make the on-line testing of RFID systems a very complex task. Thus, this article proposes the on-line characterization of a statistical system parameter, the Read-Error-Rate, to perform the on-line detection of faulty RFID components. As an introduction to the on-line testing of RFID systems, an FMEA first describes the effects on these systems of potential defects impacting the communication part. Second, a SystemC model of the RFID system is discussed as a way to evaluate the proposed test solutions. Finally, the way the maximal Read-Error-Rate can be determined using system-level simulation is explained.",2010,0, 2761,Fault-tolerant design in a networked vibration control system,"A rocket fairing should be able to sustain a certain level of vibration during the launch process and its time in orbit. In active vibration control, actuator failures may lead to control performance deterioration or even catastrophic accidents. For this purpose, a networked vibration control system is used to ensure the reliability of the rocket fairing system in the presence of actuator failures. However, in distributed control systems, the sensors, controllers, and actuators are normally dislocated, and the control signal exchanges among them are realized via network communications. Inevitably, network-induced delay often degrades control system performance. Therefore, it is highly necessary to minimize its detrimental effects on control system performance so as to achieve more robust control authorities. This paper deals with the fault-tolerant design issues for a rocket fairing vibration control system including both actuator failure compensation and network-induced delay compensation. A Luenberger canonical form based actuator failure compensation scheme is proposed to accommodate some typical actuator failures, whose values, pattern and time instants are uncertain. A time-delay compensation scheme is then implemented to reduce damaging effects caused by the sensor-to-controller delay.",2003,0, 2762,RPC-V: Toward Fault-Tolerant RPC for Internet Connected Desktop Grids with Volatile Nodes,"RPC is one of the programming models envisioned for the Grid. In Internet connected Large Scale Grids such as Desktop Grids, node and network failures are not rare events. This paper provides several contributions, examining the feasibility and limits of fault-tolerant RPC on these platforms.
First, we characterize these Grids from their fundamental features and demonstrate that their application scope should be safely restricted to stateless services. Second, we present a new fault-tolerant RPC protocol associating an original combination of three-tier architecture, passive replication and message logging. We describe RPC-V, an implementation of the proposed protocol within the XtremWeb Desktop Grid middleware. Third, we evaluate the performance of RPC-V and the impact of faults on the execution time, using a real-life application on a Desktop Grid testbed assembling nodes in France and the USA. We demonstrate that RPC-V allows the applications to continue their execution while key system components fail.",2004,0, 2763,Fast run-time fault location in dependable FPGA-based applications,"Run-time fault location in field-programmable gate arrays (FPGAs) is important because the resulting diagnostic information can be used to reconfigure the FPGA to tolerate permanent faults. In order to minimize system downtime and increase availability, a fault location technique with very short diagnostic latency is desired. We present a fast technique for run-time FPGA fault location that can be used for high-availability reconfigurable systems. By integrating FPGA fault tolerance and concurrent error detection (CED) techniques, our approach can achieve significant availability improvement by minimizing the number of reconfigurations required for FPGA fault location and recovery. The area overhead of our approach is studied and illustrated using applications implemented in FPGAs",2001,0, 2764,Research on RTOS-Integrated TMR for Fault Tolerant Systems,"Safety and availability are issues of major importance in many critical systems. An RTOS (real-time operating system)-integrated fault-tolerant system using TMR technology is presented in this paper. The system incorporates three homogeneous microcomputers and provides the fault-tolerant function through system APIs to applications. As it is integrated with the RTOS, the system is more general-purpose, and programmers need not pay too much attention to the fault tolerance technology. This system works in normal and degraded (duplex or even single modular) modes, and can tolerate transient or permanent faults. The system also provides a MultiTask-support fault-tolerant function, and reconfiguration after a fault occurs is transparent to applications. Meanwhile, a novel seamless software upgrade method through intelligent state-transition control is brought forward.",2007,0, 2765,"Dependability analysis of systems with on-demand and active failure modes, using dynamic fault trees","Safety systems and protection systems can experience two phases of operation (standby and active); an accurate dependability analysis must combine an analysis of both phases. The standby mode can last for a long time, during which the safety system is periodically tested and maintained. Once a demand occurs, the safety system must operate successfully for the length of the demand. The failure characteristics of the system are different in the two phases, and the system can fail in two ways: (1) it can fail to start (fail on-demand), or (2) it can fail while in active mode. Failure on demand requires an availability analysis of components (typically electromechanical components) which are required to start or support the safety system. These support components are usually maintained periodically while not in active use.
Active failure refers to the failure while running (once started) of the active components of the safety system. These active components can be fault tolerant and use spares or other forms of redundancy, but are not maintainable while in use. The approach, in this paper, automatically combines the ""availability analysis of the system in standby mode"" with the ""reliability analysis of the system in its active mode."" The general approach uses an availability analysis of the standby phase to determine the initial state probabilities for a Markov model of the demand phase. A detailed method is presented in terms of a dynamic fault-tree model. A new ""dynamic fault-tree construct"" captures the dependency of the demand-components on the support systems, which are required to detect the demand or to start the demand system. The method is discussed using a single example sprinkler system and then applied to a more complete system taken from the off-shore industry",2002,0, 2766,High-Intensity Radiated Field fault-injection experiment for a fault-tolerant distributed communication system,"Safety-critical distributed flight control systems require robustness in the presence of faults. In general, these systems consist of a number of input/output (I/O) and computation nodes interacting through a fault-tolerant data communication system. The communication system transfers sensor data and control commands and can handle most faults under typical operating conditions. However, the performance of the closed-loop system can be adversely affected as a result of operating in harsh environments. In particular, High-Intensity Radiated Field (HIRF) environments have the potential to cause random fault manifestations in individual avionic components and to generate simultaneous system-wide communication faults that overwhelm existing fault management mechanisms. This paper presents the design of an experiment conducted at the NASA Langley Research Center's HIRF Laboratory to statistically characterize the faults that a HIRF environment can trigger on a single node of a distributed flight control system.",2010,0, 2767,Fault recovery based on checkpointing for hard real-time embedded systems,"Safety-critical embedded systems often operate in harsh environmental conditions that necessitate fault-tolerant computing techniques. Many safety-critical systems also execute real-time applications. The correctness of these systems depends not only on the logical result of computation, but also on the time at which the results are produced. Missing task deadlines can therefore be viewed as a temporal fault. In this paper, we examine fault recovery based on checkpointing for real-time systems. We present schedulability tests for checkpointing in real-time systems. These feasibility-of-scheduling tests provide the criteria under which checkpointing can provide fault tolerance and real-time guarantees for hard real-time embedded systems under two different fault arrival models.",2003,0, 2768,Fault Handling in FPGAs and Microcontrollers in Safety-Critical Embedded Applications: A Comparative Survey,"Safety-critical applications nowadays include more and more embedded computer systems, based on different hardware platforms. These hardware platforms, ranging from microcontrollers to programmable logic devices, lead to fundamental differences in design. Major differences result from different hardware architectures and their robustness and reliability as well as from differences in the corresponding software design.
This paper gives an overview of how these hardware platforms differ with respect to fault handling possibilities, such as fault avoidance and fault tolerance, and the resulting influence on the safety of the overall system.",2007,0, 2769,An Improved Watchdog Timer to Enhance Imaging System Reliability In The Presence Of Soft Errors,"Satellite and aerial imaging systems are located at high altitudes. Thus, they are more vulnerable to Soft Errors than similar systems operating at sea level. This paper studies the effect of transient faults on microprocessor based imaging systems. The paper studies the ability of different watchdog timer systems to recover the system from failure. A new improved watchdog timer system design is introduced. This new design solves the problems of both the standard and windowed watchdog timers. The watchdog timers are tested by injecting a fault while a processor is reading an image from RAM and sending it to the VGA RAM for display. This method is implemented on FPGA, and visually demonstrates the existence of fast watchdog resets, which cannot be detected by standard watchdog timers, and faulty resets which occur undetected within the safe window of the windowed watchdog timers.",2007,0, 2770,Fault tolerance through redundant COTS components for satellite processing applications,"Satellites and other space systems typically employ rare and expensive radiation tolerant, radiation hardened or at least military qualified parts for computational and other subsystems to ensure reliability in the harsh environment of space. In this paper, a parallel architecture is proposed that allows commercial devices to be incorporated into a computational unit with aggregate reliability figures approaching that of space-qualified alternatives. Apart from the common argument of cost, commercial-off-the-shelf (COTS) devices are attractive due to their relatively low power consumption and high processing performance. This paper describes a COTS-based processing architecture for space-borne computers and compares its reliability to a common alternative radiation hardened configuration.",2003,0, 2771,Enhancing pipelined processor architectures with fast autonomous recovery of transient faults,"Recent technology trends have made radiation-induced soft errors a growing threat to the reliability of microprocessors, a problem previously only known to the aerospace industry. Therefore, the ability to handle higher soft error rates in modern processor architectures is essential in order to allow further technology scaling. This paper presents an efficient fault-tolerance method for pipeline-based processors using temporal redundancy. Instructions are executed twice at each pipeline stage, which allows the detection of transient faults. Once a fault is detected the execution is stopped immediately and recovery is implicitly performed within the pipeline stages. Due to this fast reaction the fault is contained at its origin and no expensive rollback operation is required later on.",2010,0, 2772,Low Latency Recovery from Transient Faults for Pipelined Processor Architectures,"Recent technology trends have made radiation-induced soft errors a growing threat to the reliability of microprocessors, a problem previously only known to the aerospace industry. Therefore, the ability to handle higher soft error rates in modern processor architectures is essential in order to allow further technology scaling.
This paper presents an efficient fault-tolerance method for pipeline-based processors using temporal redundancy. Instructions are executed twice at each pipeline stage, which allows the detection of transient faults. Once a fault is detected the execution is stopped immediately and recovery is implicitly performed within the pipeline stages. Due to this fast reaction the fault is contained at its origin and no expensive rollback operation is required later on.",2010,0, 2773,A temporal error concealment method for H.264/AVC using motion vector recovery,"Recently, H.264/AVC has been rapidly deployed in the mobile multimedia market, such as terrestrial and satellite DMB (digital multimedia broadcasting), and Internet multimedia streaming systems. To provide better quality in unreliable communication environments, we propose a motion vector (MV) recovery method for temporal error concealment. H.264/AVC adopts various block sizes for motion estimation and compensation, ranging from 16×16 to 4×4. To increase the accuracy of the temporal error concealment, the 4×4 block size is used as the MV recovery unit. The flexible MB ordering (FMO) option, by which neighboring MBs can be transmitted in different packets, is used. The MVs of the lost MBs are recovered based on the MV tendency which is derived from the neighboring MVs. The simulation results show that the proposed method improves video quality by up to 2.95 dB and 2.45 dB, compared with the H.264/AVC test model and the Lagrange interpolation method, respectively.",2008,0, 2774,Byzantine Anomaly Testing for Charm++: Providing Fault Tolerance and Survivability for Charm++ Empowered Clusters,"Recent shifts in high-performance computing have increased the use of clusters built around cheap commodity processors. A typical cluster consists of individual nodes, containing one or several processors, connected together with a high-bandwidth, low-latency interconnect. There are many benefits to using clusters for computation, but also some drawbacks, including a tendency to exhibit low Mean Time To Failure (MTTF) due to the sheer number of components involved. Recently, a number of fault-tolerance techniques have been proposed and developed to mitigate the inherent unreliability of clusters. These techniques, however, fail to address the issue of detecting non-obvious faults, particularly Byzantine faults. At present, effectively detecting Byzantine faults is an open problem. We describe the operation of ByzwATCh, a module for run-time detection of Byzantine hardware errors as part of the Charm++ parallel programming framework",2006,0, 2775,An improved error localization on DV-Hop scheme for wireless sensors networks,"In recent years, research on wireless sensor networks (WSN) has paid more attention to solving the localization of wireless sensors; different algorithms have been applied, one of them being DV-Hop, which still needs to be improved to obtain better position accuracy. In this paper, we propose a strategy to get an accurate location for the unknown sensor using the average hop size and average error, reducing the final localization error. No extra hardware device is required. Simulation shows better accuracy than the classical DV-Hop.",2010,0, 2776,MR fluid motion tracking of blood flow in right atrium of patient with atrial septal defect,"Recently, a newly developed technique for motion estimation of magnetic resonance (MR) signals that are represented on images has been applied.
This enabled MR fluid motion tracking of blood in cardiac chambers. The motion field derived from this tracking operation can be used as the blood flow information to be presented using a two-dimensional velocity vector field plot. In addition to velocity reconstruction maps, we also devised vorticity flow maps onto which the velocity field is superimposed. Based on such a framework, the degree of vortical flow is assessed for the right atrium of a patient with atrial septal defect (ASD). We are able to examine a difference in the magnitude of vortex strength pre- and post-atrial septal occlusion.",2008,0, 2777,A Study on Fault Diagnosis and Performance Evaluation of Propulsion System,"Recently, as feasibility studies show that trans-Korea and trans-continental railways are advantageous, interest in high-speed railway systems is increasing. Because railway vehicles are more environment-friendly and safer than airplanes and ships, their market share increases gradually. The KHST (Korean high speed train) has been developed by KRRI (Korea railroad research institute). An electric railway system is composed of high-tech subsystems, among which main electric equipment such as transformers and converters are critical components determining the performance of the rolling stock. We developed a measurement system for online testing and evaluation of the performance of the KHST. The measurement system is composed of a software part and a hardware part, and interfacing among multiple users is possible. A new method to measure temperature was applied to the measurement system. By using the system, fault diagnosis and performance evaluation of the electric equipment in the Korean high speed train were conducted during test running",2006,0, 2778,Scan chain fault identification using weight-based codes for SoC circuits,"Recently, it has been observed that embedded cores in a high-speed SoC circuit have the problem of broken scan chains that cannot shift properly. Also, scan chain intermittent faults caused by hold-time violations and crosstalk noise are pervasive. In this research, an efficient method is proposed to identify the faulty scan chain(s) at the core level. That is, the core where the scan chain is defective can be identified, even if the scan chain is broken. The result can be used to tune up the fabrication process or to guide the fine-grained scan cell identification process. Here, weight-based m-out-of-n codes, which can generate a large number of codewords with small hardware overhead and high fault detection capability, are used to generate the scan chain diagnostic patterns for permanent (and possibly intermittent) faults. An efficient codeword generation method is proposed to maximize the number of codewords and minimize the aliasing probabilities and test application cost. The idea of multiple m-out-of-n codes is also proposed to guarantee that a sufficient number of codewords is generated to perturb the scan chains and the associated combinational circuits. Simulation results demonstrate the feasibility of the proposed method.",2004,0, 2779,Zero phase error tracking control system with multi harmonics disturbance suppression loop,"Recently, the memory capacity of optical disk recording media has been increasing. Moreover, the disk rotation speed will increase along with the larger memory capacity, to more than 10000 [rpm]. The optical disk system has a radial run-out originating from the disk eccentricity.
For this reason, the optical disk system has a periodic disturbance which is synchronized to the disk rotation speed. In order to suppress this periodic disturbance, we have proposed a robust feedforward tracking controller based on Zero Phase Error Tracking (ZPET) control. The optical disk system also has periodic harmonic disturbances. However, it is difficult for the conventional robust feedforward tracking controller to suppress the periodic high-order harmonic disturbance. In order to overcome this problem, this paper proposes a new ZPET control system with a multi-harmonics disturbance suppression loop. The experimental results show that the proposed tracking control system has a precise tracking response against both the periodic primary harmonic disturbance and the periodic high-order harmonic disturbance.",2010,0, 2780,3 Faults Tolerant Orthogonal RAID for Large Storage,"Recently, the demand for low-cost large-scale storage has been increasing. We developed the VLSD (Virtual Large Scale Disks) toolkit for constructing virtual-disk-based distributed storage, which aggregates the free space of individual disks. However, in order to construct large-scale storage, RAID that tolerates 3 or more faults is important. In this paper, we propose MeshRAID, a 3-fault-tolerant orthogonal RAID, and implement it using VLSD; it is easy to implement MeshRAID using the various classes in VLSD. From the viewpoint of its features, MeshRAID is positioned between RAID55 and NaryRAID.",2010,0, 2781,A note on error detection in an RSA architecture by means of residue codes,"Recently, various attacks have been proposed against many crypto systems, exploiting deliberate error injection during the computation process. In this paper, we add a residue-based error detection scheme to an RSA architecture to protect against such attacks. We then evaluate the error coverage and the expected area and latency overheads",2006,0, 2782,Techniques to enable FPGA based reconfigurable fault tolerant space computing,Reconfigurable computing using field programmable gate arrays (FPGAs) offers significant performance improvements over traditional space based processing solutions. The application of commercial-off-the-shelf (COTS) FPGA processing components requires a radiation-effect detection and mitigation strategy to compensate for the FPGAs' susceptibility to single event upsets (SEUs) and single event functional interrupts (SEFIs). A reconfigurable computing architecture that uses external triple modular redundancy (TMR) via a radiation-hardened ASIC provides the most robust approach to SEU and SEFI detection and mitigation. Honeywell has designed a TMR Voter ASIC with an integrated FPGA configuration manager that can automatically reconfigure an upset FPGA upon TMR error detection. The automatic configuration manager also has features to support resynchronizing the upset FPGA with the remaining two FPGAs operating in a self checking pair (SCP) mode. Automating and minimizing reconfiguration times and resynchronization times enables high performance FPGA-based processors to provide high system availability with minimal software/system controller intervention,2006,0, 2783,A Distributed Replication Strategy Evaluation and Selection Framework for Fault Tolerant Web Services,"Redundancy-based fault tolerance strategies are proposed for building reliable Service-Oriented Architectures/Applications (SOA), which are usually developed on top of unpredictable remote Web services.
This paper proposes and implements a distributed replication strategy evaluation and selection framework for fault tolerant Web services. Based on this framework, we provide a systematic comparison of various replication strategies by theoretical formulas and real-world experiments. Moreover, a user-participated strategy selection algorithm is designed and verified. Experiments are conducted to illustrate the advantage of this framework. In these experiments, users from six different locations all over the world perform evaluation of Web services distributed in six countries. Over 1,000,000 test cases are executed in a collaborative manner and detailed results are also provided.",2008,0, 2784,A benchmark for quantitative fault tree reliability analysis,"Reliability analysis of critical computer-based systems is often performed using software tools, typically using fault tree analysis. Many software tools are available, either commercially or from university research groups, and each tool uses different solution techniques, ranging from fast approximations to more complex analysis such as Markov analysis or binary decision diagrams. Analysts are thus faced with the difficult task of validating a tool which is being considered for use in a real application. One approach to verification might be to select a set of representative examples, and compare the tool results with some known set of correct solutions. However, the development of a reliable benchmark can be time consuming, and the determination of the correct result may be very difficult. It would therefore be very useful if there was a published benchmark, with the correct solutions, from which the analyst could select test cases. Towards this goal the authors present a set of example systems and their analysis. They use these examples to help validate their dynamic fault tree analysis tool, Galileo. Their goal is to gather a set of representative examples which are reasonably challenging and which are derived from real systems.",2001,0, 2785,Fault-tolerant PM brushless DC drive for aerospace application,"Reliability is a fundamental requirement in airborne equipment; it involves a particular approach to the study of mission failure probabilities. Architectures robust to the failure of single components, like multi-phase fault-tolerant motor drives, introduce in these systems an intrinsic improvement. By providing compensation for potential hardware failures, a fault-tolerant design approach may achieve reliability objectives without recourse to non-optimized redundancy or over-sizing. A fault-tolerant design approach differs from a pure design redundancy approach in that provisions are made for planned degraded modes of operation where acceptable. This study shows how a 5-phase motor is able to run at rated torque even with one or two phases open, by using suitable current commands defined as degraded modes. An experimental prototype has been arranged and tested.",2010,0, 2786,Transient fault sensitivity analysis of analog-to-digital converters (ADCs),"Reliability of systems used in space, avionic and biomedical applications is highly critical. Such systems consist of an analog front-end to collect data, an ADC to convert the collected data to digital form and a digital unit to process it. It is important to analyze the fault sensitivities of each of these to effectively gauge and improve the reliability of the system. This paper addresses the issue of fault sensitivity of ADCs.
A generic methodology for analyzing the fault sensitivity of ADCs is presented. A novel concept of node weights specific to α-particle induced transient faults is introduced to increase the accuracy of such an analysis.",2001,0, 2787,Fault-sensitivity analysis and reliability enhancement of analog-to-digital converters,"Reliability of systems used in space, avionic, and biomedical applications is highly critical. Such systems consist of an analog front-end to collect data, an analog-to-digital converter (ADC) to convert the collected data to digital form, and a digital unit to process it. Though a considerable amount of research has been performed to increase the reliability of digital blocks, the same cannot be claimed for mixed-signal blocks. The reliability enhancement that we employ begins with fault-sensitivity analysis followed by redesign. The data obtained from the sensitivity analysis is used to grade blocks based on their sensitivity to faults. The highly sensitive blocks can then be replaced by more reliable alternatives. The improvement gained by opting for more robust implementations might be limited due to the number of possible implementations. In these cases, alternative reliability enhancement techniques such as adding redundancy may provide further improvements. The steps involved in the reliability enhancement of ADCs are illustrated in this paper by first proposing a sensitivity analysis methodology for α-particle induced transients and then suggesting redesign techniques to improve the reliability of the ADC. A novel concept of node weights specific to α-particle transients is introduced, which improves the accuracy of the sensitivity analysis. The fault simulations show that, using techniques such as alternative robust implementations, adding redundancy, pattern detection, and transistor sizing, considerable improvements in reliability can be attained.",2003,0, 2788,On-chip Fault-tolerance Utilizing BIST Resources,"Recent and projected advances in VLSI fabrication technology will allow for integration of billions of transistors and advanced architectures on a single chip. According to the International Technology Roadmap for Semiconductors (ITRS), widespread reliability challenges are expected for these VLSI fabrication technologies (65 nm and below). Effective and efficient on-chip fault-tolerance solutions are needed. A new approach to achieving on-chip fault-tolerance using built-in-self-test (BIST) is proposed in this paper. The proposed approach reduces production cost, implementation overhead and time-to-market; increases reusability, post-fabrication reconfigurability and productivity; and is scalable across multiple VLSI processes and feature sizes. This will result in obvious advantages of yield enhancement and prolonged lifetime of VLSI chips as well.",2006,0, 2789,Surge voltage performance of power transformer winding sections provided with metal oxide surge absorber blocks with faults in portion of sections,"Recent investigations by a few researchers have shown that the transient voltage distribution across winding sections of power transformers can be controlled by providing suitable metal oxide surge absorber (MOA) blocks across these sections. However, it remains to be ascertained whether, in the event of faults occurring in such transformer winding sections, it is possible to identify the fault.
The present investigations have been aimed at analyzing and identifying the parameters that may change as a result of the occurrence of a fault in sections of the transformer winding even though these sections are provided with MOA protection blocks. High voltage windings with 'alpha' values in the range of 6 to 18 have been analyzed by carrying out simulations using PSPICE software. These investigations have shown that there is no appreciable change in voltage waveforms even when faults to the extent of 5% of the winding length occur in any of the winding sections. Further investigations have indicated that there are changes in the waveform of the current flowing through the transformer winding neutral line when faults covering 5% of the total winding length are present in any one section. These changes are not large in magnitude and are observed to occur at the 50 microsecond or 100 microsecond instants on the neutral current waveform. Depending on the value of alpha, the changes in magnitude of the neutral currents are found to be in the range of 3% to 9% at either the 50 microsecond or the 100 microsecond time instant.",2007,0, 2790,Reducing risks of widespread faults and attacks for commercial software applications: towards diversity of software components,"Recent IT attacks demonstrated how vulnerable consumers and enterprises are when adopting commercial and widely deployed operating systems, software applications and solutions. Diversity in software applications is fundamental to increasing the chances of survivability to faults and attacks. Current approaches to diversity are mainly based on the development of multiple versions of the same software, their parallel execution and the usage of voting mechanisms. Because of the high cost, they are used mainly for very critical and special cases. We introduce and discuss an alternative method to ensure diversity for common widespread software applications without requiring additional resources. We describe a few encouraging results obtained from simulations.",2002,0, 2791,Concurrent fault detection in a hardware implementation of the RC5 encryption algorithm,"Recent research has shown that fault diagnosis and possibly fault tolerance are important features when implementing cryptographic algorithms by means of hardware devices. In fact, some security attack procedures are based on the injection of faults. At the same time, hardware implementations of cryptographic algorithms, i.e. crypto-processors, are becoming widespread. There is, however, only very limited research on implementing fault diagnosis and tolerance in crypto-algorithms. Fault diagnosis is studied for the RC5 crypto-algorithm, a recently proposed block-cipher algorithm that is suited for both software and hardware implementations. RC5 is based on a mix of arithmetic and logic operations, and is therefore a challenge for fault diagnosis. We study fault propagation in RC5, and propose and evaluate the cost/performance tradeoffs of several error detecting codes for RC5. Costs are estimated in terms of hardware overhead, and performances in terms of fault coverage.
Our most important conclusion is that, despite its nonuniform nature, RC5 can be efficiently protected by using low-cost error detecting codes.",2003,0, 2792,Therapist-mediated post-stroke rehabilitation using haptic/graphic error augmentation,"Recent research has suggested that enhanced retraining for stroke patients using haptics (robotic forces) and graphics (visual display) to generate a practice environment that can artificially enhance error rather than reduce it, can stimulate new learning and foster accelerated recovery. We present an evaluation of early results of this novel post-stroke robotic-aided therapy trial that incorporates these ideas in a large VR system and simultaneously employs the patient, the therapist, and the technology to accomplish effective therapy.",2009,0, 2793,Learning from defect removals,"Recent research has tried to identify changes in source code repositories that fix bugs by linking these changes to reports in issue tracking systems. These changes have been traced back to the point in time when they were previously modified as a way of identifying bug introducing changes. But we observe that not all changes linked to bug tracking systems are fixing bugs; some are enhancing the code. Furthermore, not all fixes are applied at the point in the code where the bug was originally introduced. We flesh out these observations with a manual review of several software projects, and use this opportunity to see how many defects are in the scope of static analysis tools.",2009,0, 2794,ReCoNet: modeling and implementation of fault tolerant distributed reconfigurable hardware,"Recent research was mainly focused on the OS support for a single reconfigurable chip. This paper presents a general approach to manage fault tolerant distributed reconfigurable hardware. In order to run such a system, three basic tasks must be implemented: (i) rerouting to compensate for line errors, (ii) rebinding to compensate for node failures, and (iii) hardware reconfiguration to allow the optimization of these systems during runtime. This paper proposes first ideas and solutions for these management functions. Furthermore, a prototype implementation consisting of four fully connected FPGAs is presented.",2003,0, 2795,Short-term load forecasting: Multi-level wavelet neural networks with holiday corrections,"Short-term load forecasting plays a central role in reliable system operation by Independent System Operators and in making prudent bid decisions by market participants. Accurate forecasting is difficult in view of the complicated effects on load by various factors. In addition, it is difficult to forecast holidays as well as the days before and the days after in view of their particular load patterns and very limited data. In this paper, a multi-level wavelet neural network method is developed to forecast tomorrow's load. To effectively forecast the load for holidays as well as the days before and the days after, a correction coefficient scheme with holiday grouping is developed. Numerical results for a simple example and for Midwest-ISO's load demonstrate the effectiveness of multi-level wavelet neural networks, correction coefficients, and holiday grouping.",2009,0, 2796,Optimal index-bit allocation for dynamic post-correction of analog-to-digital converters,"Signal processing methods for digital post-correction of analog-to-digital converters (ADCs) are considered. ADC errors are in general signal dependent, and in addition, they often exhibit dynamic dependence.
A novel dynamic post-correction scheme based on look-up tables is proposed. In order to reduce the table size and, thus, the hardware requirements, bit-masking is introduced. This is to limit the length of the table index by deselecting index bits. At this point, the problem of which bits to use arises. A mathematical analysis tool is derived, enabling the allocation of index bits to be analyzed. This analysis tool is applied in two optimization problems, optimizing the total harmonic distortion and the signal-to-noise and distortion ratio, respectively, of a corrected ADC. The correction scheme and the optimization problems are illustrated and exemplified using experimental ADC data. The results show that the proposed correction scheme improves the performance of the ADC. They also indicate that the allocation of index bits has a significant impact on the ADC performance, motivating the analysis tool. Finally, the optimization results show that performance improvements compared with static look-up table correction can be achieved, even at a comparable table size.",2005,0, 2797,Application of Rotating Machinery Fault Diagnosis System Based on Improved WNN,"The significance of equipment fault diagnosis is mainly reflected in a lower failure rate, lower maintenance costs, reduced maintenance time and increased operating time. A wavelet network is a combination of wavelet analysis theory and artificial neural network theory, retaining the advantages of both wavelets and neural networks. In this paper, the wavelet neural network based on the BP algorithm is studied, and initial parameter settings of the wavelet neural network are provided for combinations of wavelet types and training samples. The improved wavelet neural network based on the BP algorithm is introduced and applied to examples of rotating machinery fault diagnosis in order to avoid the low efficiency of the traditional network-structure algorithm and to improve the performance of network learning.",2010,0, 2798,Implementation of an Overblowing Correction Controller and the proposal of a quantitative assessment of the sound's pitch for the anthropomorphic saxophonist robot WAS-2,"Since 2007, our research has been related to the development of an anthropomorphic saxophonist robot, which has been designed to imitate saxophonist playing by mechanically reproducing the organs involved in playing a saxophone. Our research aims at understanding motor control from an engineering point of view and enabling communication. In a previous paper, the Waseda Saxophone Robot No. 2 (WAS-2), which is composed of 22 DOFs, was presented. Moreover, feedback error learning with dead time compensation has been implemented to control the air pressure of the robot. However, such a controller couldn't deal with the overblowing effects (unsteady tones) that are found during a musical performance. Therefore, in this paper, the implementation of an Overblowing Correction Controller (OCC) has been proposed and implemented in order to assure a steady tone during the performance by using the pitch feedback signal to detect the overblowing condition and by defining a recovery position (off-line) to correct it. Moreover, a saxophone sound evaluation function (sustain phase) has been proposed to compare the sound produced by human players and the robot.
A set of experiments was carried out to verify the improvements in the musical performance of the robot, and its sound was quantitatively compared with that of human saxophonists. From the experimental results, we could observe improvements in pitch (correctness) and tone stability.",2010,0, 2799,Fault Localization Based on Multi-level Similarity of Execution Traces,"Since automated fault localization can improve the efficiency of both the testing and debugging process, it is an important technique for the development of reliable software. This paper proposes a novel fault localization approach based on multi-level similarity of execution traces, which is suitable for object-oriented software. It selects useful test cases at class level and computes code suspiciousness at block level. We develop a tool that implements the approach, and conduct empirical studies to evaluate its effectiveness. The experimental results show that our approach has the potential to be effective in localizing faults for object-oriented software.",2009,0, 2800,Design of monitoring and Fault Diagnosis System on-line for certain gas turbo-generator set,"Since the parameters of a certain gas turbo-generator set cannot be tested locally and rapidly after repair, a monitoring and fault diagnosis system based on IPC is designed in this paper. The system adopts advanced measuring technology based on the PCI bus, intelligent instruments and VI technology, and can check the voltage, electric current, power and temperature of key parts of the set on-line. Simultaneously, it can give an alarm when a parameter is abnormal and diagnose basic faults by an FDES (Fault Diagnosis Expert System). Its results are characterized by high reliability, high speed and intuitiveness, so it can greatly improve the efficiency of the test system and save repair expenditure. The system helps to improve the reliability of the gas turbo-generator set and power quality.",2008,0, 2801,An efficient hardware-based fault diagnosis scheme for AES: performances and cost,"Since standardization in 2001, the Advanced Encryption Standard has been the subject of many research efforts, aimed at developing efficient hardware implementations with reduced area and latency. So far, reliability has not been considered a primary objective. Recently, several error detecting schemes have been proposed in order to provide some defense against hardware faults in AES. The benefits of such schemes are twofold: avoiding wrong outputs when benign hardware faults occur, and preventing the collection of information about the secret key through malicious injection of faults. In this paper, we present a complete scheme for parity-based fault detection in a hardware implementation of the Advanced Encryption Standard which includes a key schedule unit. We also provide a preliminary evaluation of the hardware and latency overhead of the proposed scheme.",2004,0, 2802,Hardware fault tolerance: an immunological solution,"Since the advent of computers, numerous approaches have been taken to create hardware systems that provide a high degree of reliability even in the presence of errors. This paper addresses the problem from a biological perspective using the human immune system as a source of inspiration. The immune system uses many ingenious methods to provide reliable operation in the body and so may suggest how similar methods can be used in the future design of reliable computer systems.
The paper addresses this challenge through the implementation of an immunised finite state machine-based counter. The proposed methods demonstrate how, through a process of self/non-self differentiation, the hardware immune system creates a set of tolerance conditions to monitor the change in states of the hardware. Potential faults may then be flagged, assessed and the appropriate recovery action taken.",2000,0, 2803,More about arc-fault circuit interrupters,"Since the arc-fault circuit interrupter (AFCI) was commercially introduced in 1998, questions have arisen about how it detects arcs, whether it detects series and parallel arcs, and what types of AFCIs are available. Types other than the original branch/feeder AFCI are emerging. This paper is intended to provide an update regarding answers to those questions, following an earlier paper that introduced the basic functioning of the AFCI.",2004,0, 2804,A SAT-Based Arithmetic Circuit Bug-Hunting Method,"Several differences lie between arithmetic circuit formal verification and conventional hardware formal verification. In this paper a SAT-based word-level model checking method aimed at arithmetic circuit bug-hunting is introduced. E-CNF is a hybrid of Boolean formulas and arithmetic formulas. The original problem of whether the specification holds in the arithmetic circuit is translated into the satisfiability of an E-CNF problem. E-SAT, the E-CNF solver, is an extension of a complete SAT solver, with optimization techniques including tag clauses. Experiments show that the SAT-based word-level model checking method is highly automatic and powerful in bug-hunting for arithmetic circuits.",2006,0, 2805,Fault detection and identification for robot manipulators,"Several factors must be considered for robotic task execution in the presence of a fault, including: detection, identification, and accommodation for the fault. In this paper, a prediction error based dead-zone residual function and nonlinear observers are used to detect and identify a class of actuator faults. Advantages of the proposed fault detection and identification methods are that they are based on the nonlinear dynamic model of a robot manipulator (and hence, can be extended to a number of general Euler Lagrange systems), they do not require acceleration measurements, and they are independent from the controller. A Lyapunov-based analysis is provided to prove that the developed fault observer converges to the actual fault.",2004,0, 2806,Studies of the computational error in the probabilistic eigenvalue analysis,"Several models for probabilistic eigenvalue analysis are comparatively studied in terms of computational precision in this paper. The focus is on the computational errors of the eigenvalue expectation and variance, as well as the high cumulants. Based on particular testing systems, the study provides a useful error comparison for further application.",2004,0, 2807,Condition monitoring and fault detection of a compressor using signal processing techniques,"Several techniques developed for performance monitoring and fault detection of machinery in the petrochemical industry are presented. The main objective is to perform condition monitoring and fault detection of a compressor based on the collected vibration and other on-site data from a Bently Nevada data acquisition system. The analysis techniques developed, and the configuration and functions of the computer program, are also briefly introduced.
The techniques can be divided into four parts: (1) time-domain analysis, (2) frequency-domain analysis, (3) orbit analysis, and (4) trend analysis. The program developed has been tested using on-site data from a compressor at Suncor Inc.",2001,0, 2808,Fine-grained incremental learning and multi-feature tossing graphs to improve bug triaging,"Software bugs are inevitable and bug fixing is a difficult, expensive, and lengthy process. One of the primary reasons why bug fixing takes so long is the difficulty of accurately assigning a bug to the most competent developer for that bug kind or bug class. Assigning a bug to a potential developer, also known as bug triaging, is a labor-intensive, time-consuming and fault-prone process if done manually. Moreover, bugs frequently get reassigned to multiple developers before they are resolved, a process known as bug tossing. Researchers have proposed automated techniques to facilitate bug triaging and reduce bug tossing using machine learning-based prediction and tossing graphs. While these techniques achieve good prediction accuracy for triaging and reduce tossing paths, they are vulnerable to several issues: outdated training sets, inactive developers, and imprecise, single-attribute tossing graphs. In this paper we improve triaging accuracy and reduce tossing path lengths by employing several techniques such as refined classification using additional attributes and intra-fold updates during training, a precise ranking function for recommending potential tossees in tossing graphs, and multi-feature tossing graphs. We validate our approach on two large software projects, Mozilla and Eclipse, covering 856,259 bug reports and 21 cumulative years of development. We demonstrate that our techniques can achieve up to 83.62% prediction accuracy in bug triaging. Moreover, we reduce tossing path lengths to 1.5-2 tosses for most bugs, which represents a reduction of up to 86.31% compared to original tossing paths. Our improvements have the potential to significantly reduce the bug fixing effort, especially in the context of sizable projects with large numbers of testers and developers.",2010,0, 2809,Preliminary Models of the Cost of Fault Tolerance,"Software cost estimation and overruns continue to plague the software engineering community, especially in the area of safety-critical systems. We provide some preliminary models to predict the cost of adding fault detection, fault-tolerance, or fault isolation techniques to a software system or subsystem if the cost of originally developing the system or subsystem is known. Since cost is a major driver in the decision to develop new safety-critical systems, such models will be useful to requirements engineers, systems engineers, decision makers, and those intending to reuse systems and components in safety-critical environments where fault tolerance is critical.",2007,0, 2810,Application Research on Positive and Negative Association Rules Oriented Software Defects,"Software defects are key factors in evaluating dependable software. This paper analyzes the attributes of software defects, and applies the positive and negative association rules method to the research of software defects. This method can not only overcome the weak point of the traditional association rules method, which can only mine explicit rules, but also output more meaningful rules about the relationships among attributes. We extract the external relationships among the attributes of software defects, and fully mine the inter-attribute rules.
Through the application of the ""Design and Implementation of Mining Linkage Management System of Coal Mine"", the experimental results demonstrate that our mined rules have the advantages of smaller quantity, higher quality, and fewer errors and conflicts.",2009,0, 2811,The Application of the Most Similar Path Set Based on the Specified Invalid Path in Software Fault Location,"Software fault location is a very complex problem. Many researchers at home and abroad have already started studying how to use candidate paths which are similar to the invalid path to find the fault location, and they have achieved some results. In this paper, based on existing research, a method of using the path set most similar to the specified invalid path to locate faults is proposed, and it has been proved through a concrete example that this approach can help to find the location where the error occurred.",2010,0, 2812,Fault contribution trees for product families,"Software fault tree analysis (SFTA) provides a structured way to reason about the safety or reliability of a software system. As such, SFTA is widely used in mission-critical applications to investigate contributing causes to possible hazards or failures. In this paper we propose an approach similar to SFTA for product families. The contribution of the paper is to define a top-down, tree-based analysis technique, the fault contribution tree analysis (FCTA), that operates on the results of a product-family domain analysis and to describe a method by which the FCTA of a product family can serve as a reusable asset in the building of new members of the family. Specifically, we describe both the construction of the fault contribution tree for a product family (domain engineering) and the reuse of the appropriately pruned fault contribution tree for the analysis of a new member of the product family (application engineering). The paper describes several challenges to this approach, including evolution of the product family, handling of subfamilies, and distinguishing the limits of safe reuse of the FCTA, and suggests partial solutions to these issues as well as directions for future work. The paper illustrates the techniques with examples from applications to two product families.",2002,0, 2813,Multi Fault Laser Attacks on Protected CRT-RSA,"Since the first publication of a successful practical two-fault attack on protected CRT-RSA, surprisingly little attention has been given by the research community to an ensuing new challenge. The reason for it seems to be two-fold. One is that generic higher order fault attacks are very difficult to model and thus finding robust countermeasures is also difficult. Another reason may be that the published experiment was carried out on an outdated 8 bit microcontroller and thus was not perceived as a threat serious enough to create a sense of urgency in addressing this new menace. In this paper we describe two-fault attacks on protected CRT-RSA implementations running on an advanced 32 bit ARM Cortex M3 core. To our knowledge, this is the first practical result of two-fault laser attacks on a protected cryptographic application. Considering that laser attacks are much more accurate in targeting a particular variable, the significance of our result cannot be overlooked.",2010,0, 2814,Fault-tolerant control of PMSM drive unit,"Since the Fuel-Cell Vehicle's demonstration in public transport, its fault diagnosis and fault-tolerant control strategy have become more and more important.
This paper studies the PMSM drive of the FCV and presents a sensorless control algorithm for the fault mode based on analytical redundancy. Simulation analysis and experimental verification are presented to compare the control algorithm using the Extended Kalman Filter (EKF) and a Phase-Locked Loop.",2009,0, 2815,An Improved Fault-Tolerant Model for Channel Assignment in Cellular Networks,"Since the natural resources of the electromagnetic spectrum are strictly administered, the channel assignment problem (CAP) has been an important issue for mobile computing. Another important concept in mobile computing is handoff, which is also called handover. It occurs when a mobile host moves from the coverage area of one base station to the adjacent one while still involved in communication. A new channel must be assigned immediately to continue the call. Sometimes there are no more channels in this cell, and it is irritating for a mobile user to have the connection broken. Therefore, it is desirable that the channel assignment algorithm be fault tolerant. Thus, even if there are insufficient channels available in the cell, the system can still continue communicating with its mobile hosts. In this paper, we design an improved fault-tolerant model for channel assignment in mobile computing. Besides considering co-channel interference, our model considers handoff by using the reserved channel technique and the borrowing/lending and locking techniques. We also provide results of the performance evaluation of our algorithm.",2010,0, 2816,"SpeedHap: An Accurate Heuristic for the Single Individual SNP Haplotyping Problem with Many Gaps, High Reading Error Rate and Low Coverage","Single nucleotide polymorphism (SNP) is the most frequent form of DNA variation. The set of SNPs present in a chromosome (called the haplotype) is of interest in a wide area of applications in molecular biology and biomedicine, including diagnostics and medical therapy. In this paper we propose a new heuristic method for the problem of haplotype reconstruction for (portions of) a pair of homologous human chromosomes from a single individual (SIH). The problem is well known in the literature and exact algorithms have been proposed for the case when no (or few) gaps are allowed in the input fragments. These algorithms, though exact and of polynomial complexity, are slow in practice. When gaps are considered no exact method of polynomial complexity is known. The problem is also hard to approximate with guarantees. Therefore fast heuristics have been proposed. In this paper we describe SpeedHap, a new heuristic method that is able to tackle the case of many gapped fragments and retains its effectiveness even when the input fragments have a high rate of reading errors (up to 20%) and low coverage (as low as 3). We test SpeedHap on real data from the HapMap Project.",2008,0, 2817,Optimal Frequency Value to Detect Low Current Faults Superposing Voltage Tones,"Single phase to ground low current faults (LCF) are difficult to detect with common fault detection techniques. Therefore, several specific methodologies have been developed to identify LCF in an easier way. Among these methodologies, a method based on superposing voltage tones has been proposed.
This method requires defining the frequency of the tones that have to be superimposed.",2008,0, 2818,New power factor correction AC/DC converter with reduced storage capacitor voltage,"Single-stage power factor correction (PFC) AC/DC converters usually present a high storage capacitor voltage stress and voltage variation. Series inductance interval (SII) PFC converters allow obtaining a bulk capacitor voltage lower than the peak value of the line voltage and even lower than the output voltage. In this paper the novel single-stage SII-B-2D PFC converter is presented. This topology combines three main advantages: a low and relatively constant storage capacitor voltage, input current harmonics under the EN61000-3-2 Class D limits, and an advantageous component count.",2002,0, 2819,Implementation of LCD driver using asymmetric truncation error compensation for mobile DMB phone display,"Small color LCDs have been widely used in the mobile display market for the past couple of decades due to their cost effectiveness. In particular, 64K color LCDs have gained in popularity over true color LCDs due to their low power consumption and effective transmission. However, the 24-bit to 16-bit image data conversion (referred to as asymmetric data truncation) results in a colorization of the gray scale and a lack of smoothness, such as pseudo edge artifacts. Thus, to solve these problems, this paper proposes and implements a simple truncation error compensation algorithm using a 1-bit lower data expansion with neighbor pixel information. In experiments, the implemented LCD driver IC is shown to correct these artifacts.",2008,0,7212 2820,Improving SNR for DSM Linear Systems Using Probabilistic Error Correction and State Restoration: A Comparative Study,"Smaller feature sizes and lower supply voltages make DSM devices more susceptible to soft errors generated by alpha particles and neutrons as well as other sources of environmental noise. In this scenario, soft-error/noise tolerant techniques are necessary for maintaining the SNR of critical DSP applications. This paper studies linear DSP circuits and discusses two low cost techniques for improving the SNR of DSP filters. Both techniques use a single checksum variable for error detection. This gives a distance-two code that is traditionally good for error detection but not correction. In this paper, such a code is used to improve SNR rather than perfectly remove the error. The first technique, 'checksum-based probabilistic error correction', uses the value indicated by the checksum variable to probabilistically correct the error and achieves up to 5 dB improvement in the SNR value. The second technique, 'state restoration', works well when the length of burst errors is small and the error magnitude is large. A general error statistic has been defined as a random process and the distribution of SNR is compared for the two proposed techniques.",2006,0, 2821,Comparing the fault detection effectiveness of n-way and random test suites,"Software testing plays a critical role in the timely delivery of high-quality software systems. Despite the important role that testing plays, little is known about the fault detection effectiveness of many testing techniques. We investigate ""n-way"" test suites created using a common greedy algorithm for use in combinatorial testing. A controlled study is designed and executed to compare the fault detection effectiveness of n-way and random test suites.
Combinatorial testing is conducted on target systems that have been injected with software faults. The results are that there is no significant difference in the fault detection effectiveness of n-way and random test suites for the applications studied. Analysis of the random test suites finds that they are very similar to n-way test suites from the perspective of the number of test data combinations covered. This result concurs with other hypothetical results that indicate little difference between n-way and random test suites. While we do not expect this result to apply in all combinatorial testing situations, we believe the result will lead to the design of better combinatorial test suites.",2004,0, 2822,Bit-Error-Rate (BER) for modulation technique using Software defined Radio,"Software-defined radio technologies are attractive for future mobile communication systems because of their reconfigurable and multimode operation capabilities. The reconfigurable feature is useful for enhancing the functions of equipment without replacing hardware. Multimode operation is essential for future wireless terminals because a number of wireless communication standards will still coexist. The transceiver is modeled in Matlab and consists of a BPSK transmitter, an additive white Gaussian noise (AWGN) channel, and a BPSK receiver. In this paper, we have considered basic modulation techniques used in mobile and wireless systems. Based on this analysis, a PSK modulation scheme for SDR is proposed to pick the constellation size that offers the best reconstructed signal quality for each average SNR. Simulation results of signal transmissions confirm the effectiveness of our proposed PSK modulation scheme. The performance of the modulation technique is evaluated when the system is subjected to noise and interference in the channel. The computer simulation tool MATLAB is used to evaluate the Bit-Error-Rate (BER) for Software defined Radio.",2009,0, 2823,Monitoring and fault diagnosis of photovoltaic panels,"Solar irradiance and temperature affect the performance of systems using photovoltaic generators. In the same way, it is essential to ensure good performance of the installation so that its profitability won't be reduced. The objective of this work consists in diagnosing the panels' faults and, in certain cases, in locating the faults using a model, the temperatures, the luminous flux, the wind speed, as well as the currents and voltages. The development of software fault detection on a real installation is performed under the Matlab/Simulink environment.",2010,0, 2824,Development of knowledge base of fault diagnosis system in solar power tower plants,"A Solar Power Tower (SPT) plant is a huge and complicated system, and thus there has been no published research anywhere in the world yet on developing the knowledge base of its Fault Diagnosis System (FDS). In this paper, a modular and hierarchical knowledge base of an FDS is designed and developed for use in SPT plants according to the characteristics of the structure and operation of SPT plants. This knowledge base consists of a main control module, a concentrator subsystem, a receiver subsystem, a heat storage subsystem, a generating subsystem and an assistant subsystem. Each subsystem module contains a sub-control module and some secondary subsystem modules. In the knowledge base, knowledge is divided into metaknowledge, facts and rules. Moreover, rules are separated into meta rules, goal rules and diagnosis rules.
Production rule representation is adopted to express the knowledge. Additionally, the uncertainty of knowledge is described in this paper. Through the application of the knowledge base in an SPT plant, it is validated that the knowledge base developed in this paper has a simple structure and high inference efficiency, which are favorable for simplifying the design and development of the inference engine.",2009,0, 2825,Study of solid state fault interruption device for medium voltage distribution systems with distributed generators,"Solid state fault interruption devices (FID) can interrupt fault currents much faster than the presently available circuit breakers. Due to their current inability to block system level voltages, presently available semiconductor devices are connected in series to increase the blocking capability of the fault interrupting device. To verify the ability of the interruption device for use in a medium voltage distribution system, an FID model is subjected to simulated tests for continuous current carrying capability, rated fault current interruption, and lightning impulse withstand. A simulation to demonstrate the ability of the FID to interrupt fault currents in a 7.2kV distribution system and to study the effect on system voltages is shown.",2010,0, 2826,Error significance map for bit-plane FIR filtering array,"Some applications require correct computation, while many others do not. A large domain where perfect functional performance is not always required is multimedia and DSP systems. Relaxing the requirement of 100% correctness for devices and interconnections may dramatically reduce the costs of manufacturing, verification, and testing. The goal of this paper is the development of an error significance map for a bit-plane FIR filtering array. The map marks the part of the array that must be error-free in order to enable computing on the bit-plane array with acceptable results. In other words, the array cells outside the marked area could produce errors, but without significant influence on the marked high-order bits of the resulting word. The bit-plane array operates on a bit level and assumes accumulation throughout the array with sum and carry propagation. This means that derivation of the error significance map is not trivial for design automation. In this paper we propose a rigorous mathematical path based on transitive closure that generates the error significance map for the bit-plane array.",2008,0, 2827,Fault tolerant motor drive system with redundancy for critical applications,"Some of the recent research activities in the area of electric motor drives for critical applications (such as aerospace and nuclear power plants) are focused on looking at various motor and drive topologies. This paper presents a motor drive system, which provides an inverter topology for three-phase motors, and also proposes increased redundancy. The paper develops a simulation model for the complete drive system including synthetic faults. In addition, the hardware details including the implementation of a DSP based motor controller, inverter module, and brushless PM motor system are provided and some experimental results are presented.",2002,0, 2828,An improved method of differential fault analysis on SMS4 key schedule,"SMS4 is a 128-bit block cipher published as the symmetric-key encryption standard of Wireless Local Area Network (WLAN) by China in 2006.
By inducing faults into the key schedule, we propose an improved method of differential fault attack on the key schedule of the SMS4 cipher. The result shows that our attack can recover its secret key by introducing 4 faulty ciphertexts.",2010,0, 2829,On implementing a soft error hardening technique by using an automatic layout generator: case study,"Soft error rates induced by cosmic radiation will become unacceptable in future very deep sub-micron technologies. Many hardening techniques at different abstraction levels have been proposed to cope with increased soft error rates. Depending on the abstraction level, some techniques need to modify the design at the architecture, circuit and transistor levels, while others require the modification of the circuit layout or the use of newly defined cells within the circuit. In this paper an automatic layout generator is presented to complete the system design process, being able to easily generate the hardened design layout, thus reducing the system design time. This work aims at presenting a case study of a complete soft error tolerant integrated circuit by using an automatic layout generator called Parrot Punch.",2005,0, 2830,A new automated instrumentation for emulation-based fault injection,"Soft errors are an increasing threat in up-to-date technologies, so robustness evaluation has become an important part of digital circuit design. Emulation-based fault injection techniques have proved to be an efficient approach to perform such evaluations. In this paper, we propose new optimizations further improving the experimental duration and the instrumentation cost while maintaining the maximum flexibility for the dependability evaluation process.",2010,0, 2831,PRASE: An Approach for Program Reliability Analysis with Soft Errors,"Soft errors are emerging as a new challenge in computer applications. Current studies about soft errors mainly focus on the circuit and architecture level. Few works discuss the impact of soft errors on programs. This paper presents a novel approach named PRASE, which can analyze the reliability of a program under the effect of soft errors. Based on simple probability theory and the corresponding assembly code of a program, we propose two models for analyzing the probabilities of error generation and error propagation. The analytical performance is increased significantly with the help of basic block analysis. The program's reliability is determined according to its actual execution paths. We propose a factor named PVF (program vulnerability factor), which represents the characteristic of a program's vulnerability in the presence of soft errors. The experimental results show that the reliability of a program has a connection with its structure. Compared with traditional fault injection techniques, PRASE has the advantage of faster speed and lower price with more general results.",2008,0, 2832,An Improved Soft-Error Rate Measurement Technique,"Soft errors caused by ionizing radiation have emerged as a major concern for the current generation of CMOS technologies, and the trend is expected to get worse. The measurement unit for failures due to soft errors is failures in time (FIT), which represents the number of failures encountered per billion hours of device operation. FIT rate measurement is time consuming and calls for accelerated testing. To improve the effectiveness of soft-error rate (SER) testing, the patterns must be targeted toward detecting node failures that are most likely.
In this paper, we present a technique for identifying soft-error-susceptible sites based on efficient electrical analysis that treats soft errors as Boolean errors but uses analog strengths to decide whether such errors can propagate to the next stage. Next, we present pattern generation techniques for manifestable soft errors such that each pattern targets a maximal set of soft errors. These patterns maximize the likelihood of detecting a soft error when it occurs. The pattern generators target scan architecture. It is well known that scan test time is dominated by scan shifts, when no useful testing is being done. To improve the efficiency of scan-based testing, we extend the functionality of the existing built-in logic block observation (BILBO) architecture to support test-per-clock operation. Such targeted pattern generation and test application improve SER characterization time by an order of magnitude.",2009,0, 2833,Characterization of Error-Tolerant Applications when Protecting Control Data,"Soft errors have become a significant concern and recent studies have measured the ""architectural vulnerability factor"" of systems to such errors, or conversely, the potential that a soft error is masked by latches or other system behavior. We take soft-error tolerance one step further and examine when an application can tolerate errors that are not masked. For example, a video decoder or approximation algorithm can tolerate errors if the user is willing to accept degraded output. The key observation is that while the decoder can tolerate error in its data, it cannot tolerate error in its control. We first present static analysis that protects most control operations. We examine several SPEC CPU2000 and MiBench benchmarks for error tolerance, develop fidelity measures for each, and quantify the effect of errors on fidelity. We show that protecting control is crucial to producing error tolerance, for without this protection, many applications experience catastrophic errors (infinite execution time or crashing). Overall, our results indicate that with simple control protection, the error tolerance of many applications can provide designers with considerable added flexibility when considering future challenges posed by soft errors.",2006,0, 2834,A case study of evaluation technique for soft error tolerance on SRAM-based FPGAs,"SRAM-based field programmable gate arrays (FPGAs) are vulnerable to single event upsets (SEUs), which are induced by radiation effects. Therefore, dependable design techniques become important, and an accurate dependability analysis method is required to demonstrate their robustness. Most present analysis techniques are performed by using full reconfiguration to emulate the soft error. However, it takes a long time to analyze the dependability because it requires many reconfigurations to complete the soft error injection. In the present paper, we construct a soft error estimation system to analyze the reliability and to reduce the estimation time. Moreover, we apply Monte Carlo simulation to our approach, and identify the trade-off between the accuracy of the error rate and the estimation time. As a result of our experimentation on an 8-bit full-adder and multiplier, we can show the dependability of the implemented system.
According to the result, when performing Monte Carlo simulation over about 50% of the circuit, the error rate is within 20%.",2010,0, 2835,Fault injection into SRAM-based FPGAs for the analysis of SEU effects,"SRAM-based FPGAs are currently utilized in industrial and space applications where high availability, reliability and low cost are important constraints. The technology of such devices is sensitive to Single Event Upsets (SEUs), which originate mainly from heavy ion radiation. This paper presents a fault injection method that is based on emulated SEUs in the configuration bitstream file of commercial SRAM-based FPGA devices to study the error propagation in these devices. To demonstrate the method, an Altera FPGA, i.e. the Flex10K200, and the ITC'99 benchmark circuits are used. A fault injection tool is developed to inject emulated SEU faults into the circuits. The results show that between 33 and 45 percent of the SEUs injected into the FPGA device propagated to the output terminals of the device.",2003,0, 2836,Transient Fault Detection in State-Automata,"State automata are implemented in numerous ways and technologies - from simple traffic light controls to high-performance microprocessors comprising thousands of different states. Highly integrated microprocessors are becoming more and more susceptible to transient faults induced by radiation, extreme clocking, temperature and decreasing supply voltages. A transient fault in the form of a single event upset (SEU) can change the current state of an automaton to another valid state, thus causing a control-flow error. From control-flow based simulations of a microprogrammable automaton we determine the number of effective, overwritten and latent faults. Faults can be detected by counting the number of transitions to the ending state and comparing it, together with the number of counted cycles, with a precomputed value that is part of the microcode. Faults cannot be detected if the original state is transferred to another valid state that reaches the ending state in the same number of transitions. We further determine the number of faults which can be detected by using this simple scheme and propose to encode these states in such a way that a bit-flip will result in a state with a different distance from the ending state, without any additional space consumption for the code.",2007,0, 2837,Impedance-based fault location formulation for unbalanced primary distribution systems with distributed generation,"State-of-the-art impedance-based fault location formulations for power distribution systems assume that the system is radial. However, the introduction of new generation technologies in distribution systems, such as distributed generation, changes the direction of the system load flow from unidirectional to multi-directional. Therefore, it is necessary to extend current impedance-based fault location formulations to take into account the presence of generation units in the distribution system. Moreover, the distribution feeders are inherently unbalanced. This characteristic decreases the accuracy of current fault location estimates that are based on sequence or modal phasor quantities. In this paper, an extended impedance-based fault location formulation using phase coordinates is presented. Computational simulation tests on a typical unbalanced deregulated distribution system are presented and compared with state-of-the-art techniques.
The extended formulation is implemented numerically and a case study is presented to demonstrate the method's accuracy.",2010,0, 2838,Using regression trees to classify fault-prone software modules,"Software faults are defects in software modules that might cause failures. Software developers tend to focus on faults, because they are closely related to the amount of rework necessary to prevent future operational software failures. The goal of this paper is to predict which modules are fault-prone and to do it early enough in the life cycle to be useful to developers. A regression tree is an algorithm represented by an abstract tree, where the response variable is a real quantity. Software modules are classified as fault-prone or not, by comparing the predicted value to a threshold. A classification rule is proposed that allows one to choose a preferred balance between the two types of misclassification rates. A case study of a very large telecommunications system considered software modules to be fault-prone, if any faults were discovered by customers. Our research shows that classifying fault-prone modules with regression trees and using the classification rule in this paper resulted in predictions with satisfactory accuracy and robustness.",2002,0, 2839,An integrated approach for increasing the soft-error detection capabilities in SoCs processors,"Software implemented hardware fault tolerance (SIHFT) techniques are able to detect most of the transient and permanent faults during the usual system operations. However, these techniques are not capable of detecting some transient faults affecting processor memory elements such as state registers inside the processor control unit, or temporary registers inside the arithmetic and logic unit. In this paper, we propose an integrated (hardware and software) approach to increase the fault detection capabilities of software techniques by introducing a limited hardware redundancy. Experimental results are reported showing the effectiveness of the proposed approach in covering soft errors affecting the processor memory elements and escaping purely software approaches.",2005,0, 2840,Studying the fault-detection effectiveness of GUI test cases for rapidly evolving software,"Software is increasingly being developed/maintained by multiple, often geographically distributed developers working concurrently. Consequently, rapid-feedback-based quality assurance mechanisms such as daily builds and smoke regression tests, which help to detect and eliminate defects early during software development and maintenance, have become important. This paper addresses a major weakness of current smoke regression testing techniques, i.e., their inability to automatically (re)test graphical user interfaces (GUIs). Several contributions are made to the area of GUI smoke testing. First, the requirements for GUI smoke testing are identified and a GUI smoke test is formally defined as a specialized sequence of events. Second, a GUI smoke regression testing process called daily automated regression tester (DART) that automates GUI smoke testing is presented. Third, the interplay between several characteristics of GUI smoke test suites including their size, fault detection ability, and test oracles is empirically studied.
The results show that: 1) the entire smoke testing process is feasible in terms of execution time, storage space, and manual effort, 2) smoke tests cannot cover certain parts of the application code, 3) having comprehensive test oracles may make up for not having long smoke test cases, and 4) using certain oracles can make up for not having large smoke test suites.",2005,0, 2841,A clustering algorithm for software fault prediction,"Software metrics are used for predicting whether modules of a software project are faulty or fault-free. Timely prediction of faults, especially accuracy or computation faults, improves software quality and hence its reliability. Various distance measures can be applied to the traditional K-means clustering algorithm to predict faulty or fault-free modules. In this paper we propose K-Sorensen-means clustering, which uses Sorensen distance for calculating cluster distance to predict faults in software projects. The proposed algorithm is then trained and tested using three datasets, namely JM1, PCI and CM1, collected from the NASA MDP. From these three datasets requirement metrics, static code metrics and alliance metrics (combining both requirement metrics and static code metrics) have been built and then K-Sorensen-means is applied to all datasets to predict results. The alliance metric model is found to be the best prediction model among the three models. Results of K-Sorensen-means clustering are shown and the corresponding ROC curve has been drawn. Results of K-Sorensen-means are then compared with K-Canberra-means clustering, which uses a different distance measure for evaluating cluster distance.",2010,0, 2842,Research on formal description of data flow software faults,"Software plays an important part in our society. The occurrence of software faults may lead to serious disasters. Data-flow software faults are an important kind of software fault. In this paper, the properties of the data dependency relationship are studied, the formal definitions of some data flow software faults, such as using undefined variable, nonused variable since definition, and redefining nonused variable since definition, are given, the corresponding detecting methods are proposed, and some sample data flow software faults are given to demonstrate the effectiveness of the proposed methods.",2010,0, 2843,Software reliability growth models incorporating fault dependency with various debugging time lags,"Software reliability is defined as the probability of failure-free software operation for a specified period of time in a specified environment. Over the past 30 years, many software reliability growth models (SRGMs) have been proposed and most SRGMs assume that detected faults are immediately corrected. Actually, this assumption may not be realistic in practice. In this paper we first give a review of fault detection and correction processes in software reliability modeling. Furthermore, we show how several existing SRGMs based on NHPP models can be derived by applying the time-dependent delay function. On the other hand, it is generally observed that mutually independent software faults are on different program paths. Sometimes mutually dependent faults can be removed if and only if the leading faults have been removed. Therefore, here we incorporate the ideas of fault dependency and time-dependent delay function into software reliability growth modeling. Some new SRGMs are proposed and several numerical examples are included to illustrate the results.
Experimental results show that the proposed framework incorporating both fault dependency and the time-dependent delay function for SRGMs has a fairly accurate prediction capability.",2004,0, 2844,Shannon Capacity and Symbol Error Rate of Space-Time Block Codes in MIMO Rayleigh Channels With Channel estimation Error,"Space-time block coding (STBC) is an attractive solution for improving quality in wireless links. In this paper, we analyze the impact of channel estimation error on the ergodic capacity and symbol error rate (SER) for space-time block coded multiple-input multiple-output (MIMO) systems. We derive a closed-form capacity expression over MIMO Rayleigh channels with channel estimation error. Moreover, we derive an exact closed-form SER for general PAM/PSK/QAM with channel estimation error. We show that, as expected, channel estimation error introduces capacity loss and diversity gain loss. Furthermore, as the number of transmit and receive antennas increases, the sensitivity of the STBC system to channel estimation error increases. Simulation results demonstrate the accuracy of our analysis.",2008,0, 2845,Magnetic Field Sensor Using a Physical Model to Pre-Calculate the Magnetic Field and to Remove Systematic Error due to Physical Parameters,"Speed and angle measurements of rotating shafts are very important in automotive applications. Typical sensing arrangements for angular measurements using the magnetic principle are analyzed in this paper. It is shown that such sensor arrangements are prone to phase errors. The phase error mainly depends on the distance between sensor element and rotating shaft. By employing finite element simulations, a variety of frequently used magnetic field sensor configurations are investigated. Measurements complement the simulations and confirm correct simulation results. Due to mounting tolerances and mechanical vibrations in automotive applications the sensor distance varies and dynamic phase errors appear. The accuracy of measurement can only be improved if these errors are compensated. This can be done with the help of digital signal processing.",2007,0, 2846,Using Bayesian statistical methods to determine the level of error in large spreadsheets.,"Spreadsheets are ubiquitous, with evidence that Microsoft Excel, the leading application in the area, has an install base of 90% on end-user desktops. Nowhere is the usage of spreadsheets more extensive or more critical than in the financial sector, where regulations such as the Sarbanes-Oxley Act of 2002 have placed added pressure on organisations to ensure spreadsheets are error-free. This paper outlines the research that has been carried out into the use of Bayesian statistical methods to estimate the level of error in large spreadsheets based on expert knowledge and spreadsheet test data. The estimate can aid in the decision to accept or re-examine a large time-consuming spreadsheet.",2009,0, 2847,Software defect detection and process improvement using personal software process data,"Statistical evidence has shown that programmers perform better when following a defined, repeatable process such as the Personal Software Process (PSP). Anecdotal and qualitative evidence from industry indicates that two programmers working side-by-side at one computer, collaborating on the same design, algorithm, code, or test, perform substantially better than the two working alone, i.e., pair programming. Bringing these two ideas together, a new software process has been formulated.
The Hybrid Personal Software Process (HPSP) is a defined, repeatable process for two programmers working collaboratively. The development time and cost of the product are reduced in HPSP when compared with PSP programming.",2010,0, 2848,A New Algorithm for the Detection of Inter-Turn Stator Faults in Doubly-Fed Wind Generators,Steady state analysis techniques (i.e. Motor Current Signature Analysis) cannot be applied to variable speed wind generators since their operation is predominantly transient. A new non-stationary fault detection technique is proposed to detect inter-turn stator faults in doubly-fed wind generators. The technique is a combination of the extended Park's vector approach and a new adaptive algorithm. It will be shown that the new technique can unambiguously detect inter-turn stator faults under transient conditions while providing insight into the degree of severity of the fault,2006,0, 2849,Magnetic Sensor for the Defect Detection of Steam Generator Tube With Outside Ferrite Sludge,"The steam generator tube in a nuclear power plant is a boundary between the primary side and the secondary side. For nondestructive testing of the steam generator tube, the eddy current testing (ECT) method has been carried out. In the case of a bobbin-type ECT probe, transverse defects, defects with ferro-phase, and defects on the outside of tubes having ferrite sludge are very difficult to detect. To overcome these problems, a motorized rotating probe coil (MRPC) probe was developed, but its scan speed is slow and therefore not time-effective. In this work we have developed a new sensor with a U-shaped yoke to measure the permeability variation of the test specimen, which can detect generated ferro-phase as well as normal defects in the Inconel 600 tube material. Electronics for signal processing, a 2-phase lock-in amplifier per sensor, an ADC, and an embedded controller were employed in one probe, and the measured digital data were transmitted to the PC data acquisition software using an RS232C interface. Using the developed sensors, we detected defects on the tube outside with ferrite sludge, which are very difficult to detect with a conventional bobbin-type ECT probe. The developed sensor could measure a defect size of 0.2 mm × 10 mm × 0.44 mm (width × length × depth) for the normal defect and the defect outside of the tube having ferrite sludge. We expect the developed sensor to detect defects as well as an MRPC probe while scanning at a speed like that of a bobbin-type ECT probe.",2009,0, 2850,"On the Integration of Test Adequacy, Test Case Prioritization, and Statistical Fault Localization","Testing and debugging account for at least 30% of the project effort. Scientific advancements in individual activities or their integration may bring significant impacts to the practice of software development. Fault localization is the foremost debugging sub-activity. Any effective integration between testing and debugging should address how well testing and fault localization can work together productively. How likely is a testing technique to provide test suites for effective fault localization? To what extent may such a test suite be prioritized so that the test cases having higher priority can be effectively used in a standalone manner to support fault localization? In this paper, we empirically study these two research questions in the context of test data adequacy, test case prioritization and statistical fault localization.
Our preliminary postmortem analysis results on 16 test case prioritization techniques and four statistical fault localization techniques show that branch-adequate test suites on the Siemens suite are unlikely to support effective fault localization. On the other hand, if such a test suite is effective, around 60% of the test cases can be further prioritized to support effective fault localization, which indicates that the potential savings in terms of effort can be significant.",2010,0, 2851,An Efficient Fault Simulator for QDI Asynchronous Circuits,"Testing asynchronous circuits is a difficult task for two main reasons: first, the absence of a global clock does not allow the use of traditional test generation techniques used for synchronous circuits. Second, correct (i.e., hazard-free) operations of asynchronous circuits are usually obtained by introducing redundancies, that is, by sacrificing testability. Thus, test frameworks such as fault simulators for synchronous circuits are not applicable to asynchronous circuits. In this paper we present an efficient fault simulator for template-based asynchronous circuits which is based on checking the sequence of signals in templates. Our proposed fault simulator provides higher fault coverage by taking into consideration the detection of a special class of faults called premature firing faults without introducing any hardware redundancy in the designed circuit. Experimental results on a set of circuits have shown the effectiveness of the fault simulator.",2008,0, 2852,Fault-Based Interface Testing Between Real-Time Operating System and Application,"Testing interfaces of an embedded system is important since the heterogeneous layers such as hardware, OS and application are tightly coupled. We propose mutation operators in three respects, 'When?', 'Where?' and 'How?', in order to inject a fault into the RTOS program when testing the interface between the RTOS and the application. Injecting a fault without affecting the RTOS in the run-time environment is the core of the proposed mutation operators. We apply the mutation operators to interface testing during the integration of the RTOS and application in an industrial programmable logic controller.",2006,0, 2853,"On campus beta site: architecture designs, operational experience, and top product defects","Testing network products in a beta site to reduce the possibility of customer-found defects is a critical phase before marketing. We design and deploy an innovative beta site on the campus of National Chiao Tung University, Hsinchu, Taiwan. It can be used for developers to test and debug products, while maintaining network quality for network users. To satisfy the needs of developers, we set up environments and mechanisms, such as a variety of test zones for multiple types of products or systems under test (SUTs), remote control, degrees of traffic volume, and traffic profiling. For network users, we set up mechanisms of failure detection, notification, and recovery. The beta site network users are all volunteers. Test results show that beta site testing is good for finding stability and compatibility defects. The period starting from the beginning of a test until the next defect is found is called the time to fail (TTF). We call it converged if the TTF exceeds four weeks, and the convergence ratio is the percentage of SUTs that reach convergence. We find that the TTF increases with longer test duration, meaning that product quality improves through beta site testing.
However, the convergence ratios are only 7 and 20 percent for test durations of one month and one year, respectively, meaning that few products operate faultlessly for a long duration. The convergence ratios also indicate that it takes much more time to enhance product quality to the point of convergence. Therefore, when considering both marketing timing and product quality, one month is our suggested minimum TD for low-end and short-life-cycle products. However, we recommend one year as the minimum TD for high-end and long-life-cycle products.",2010,0, 2854,Testing Flash Memories for Tunnel Oxide Defects,"Testing non-volatile memories for tunnel oxide defects is one of the most important aspects of guaranteeing cell reliability. A defective tunnel oxide layer in core memory cells can result in various disturb faults. In this paper, we study various defects in the insulating layers of a 1T flash cell and analyze their impact on cell performance. Further, we present a test methodology and test algorithms that enable the detection of tunnel oxide defects in an efficient manner.",2008,0, 2855,"Optimized reasoning-based diagnosis for non-random, board-level, production defects","The ""back-end"" costs associated with debug of functional test failures can be one of the highest cost adders in the manufacturing process. As boards become more dense and more complex, debug of functional failures will become more and more difficult. Test strategies try to detect and diagnose failures early on in the test process (component and structural tests), but inevitably some defects are not detected until functional testing is done on the board. Finding these defects usually requires an ""expert"", with engineering level skills in both hardware and software. Depending on the complexity of the product, it could take several months (even years) to develop this level of expertise. During the initial product ramp, this expertise is usually most needed and often unavailable. Debug time is usually very long and scrap rates are generally high. This paper will provide an overview of reasoning-based diagnosis techniques and how they can significantly decrease debug time, especially during new product introduction. Because these engines are ""model-based"", there is no guarantee how they will perform in real life. In almost all cases, the reasoning engine will have to be modified based on instances where the reasoning engine could not correctly identify the failing component. Making these adjustments to the reasoning is a very complex and sometimes risky endeavor. While the new model may correctly identify the previously missed failure, the reasoning may have been altered to a point where several other diagnoses have now been unknowingly compromised. This paper will propose enhancements to the reasoning engine that will allow a simpler approach to adapting to diagnostic escapes without risking compromises to the original diagnostic engine.",2005,0, 2856,"HeapMon: A helper-thread approach to programmable, automatic, and low-overhead memory bug detection","The ability to detect and pinpoint memory-related bugs in production runs is important because in-house testing may miss bugs. This paper presents HeapMon, a heap memory bug-detection scheme that has a very low performance overhead, is automatic, and is easy to deploy. HeapMon relies on two new techniques. First, it decouples application execution from bug monitoring, which executes as a helper thread on a separate core in a chip multiprocessor system.
Second, it associates a filter bit with each cached word to safely and significantly reduce bug checking frequency, by 95% on average. We test the effectiveness of these techniques using existing and injected memory bugs in SPEC2000 applications and show that HeapMon effectively detects and identifies most forms of heap memory bugs. Our results also indicate that the HeapMon performance overhead is only 5% on average, orders of magnitude less than existing tools. Its overhead is also modest: 3.1% of the cache size and a 32-KB victim cache for on-chip filter bits and 6.2% of the allocated heap memory size for state bits, which are maintained by the helper thread as a software data structure.",2006,0, 2857,"Strategies for fault-tolerant, space-based computing: Lessons learned from the ARGOS testbed","The Advanced Space Computing and Autonomy Testbed on the ARGOS satellite provides the first direct, on-orbit comparison of a modern radiation-hardened 32-bit processor with a similar COTS processor. This investigation was motivated by the need for higher capability computers for space flight use than could be met with available radiation hardened components. The use of COTS devices for space applications has been suggested to accelerate the development cycle and produce cost-effective systems. Software-implemented corrections of radiation-induced SEUs (SIHFT) can provide low-cost solutions for enhancing the reliability of these systems. We have flown two 32-bit single board computers (SBCs) onboard the ARGOS spacecraft. One is full COTS, while the other is RAD-hard. The COTS board has an order of magnitude higher computational throughput than the RAD-hard board, offsetting the performance overhead of the SIHFT techniques used on the COTS board while consuming less power.",2002,0, 2858,"Fault tolerant, variable frequency, unity power factor converters for safety critical PM drives","The aerospace industry is currently considering a constant voltage, variable frequency supply for aircraft power systems. This variable frequency system is based on a generator directly driven from the aero-engine, in which the frequency is dependent on engine speed. A number of safety critical loads are placed on this system, and it is essential that failure of any one load does not affect the operation of another. The paper develops the concept of fault tolerant input converters for an electric fuel pump. The converters always act as a unity power-factor load, and can tolerate a range of failures, whilst maintaining both the operation of the pump and minimal impact on the electrical supply. Two converter types are proposed and compared for this application. The operation of these two unity power factor converters from a variable frequency supply is demonstrated. The effect of faults in the selected converters on converter operation and the supply itself is then discussed.",2003,0, 2859,Cognitive aspects of error finding on a simulation conceptual modeling notation,"The aim of the study is to investigate and compare experimentally the error finding strategies of a notation-familiar group with degrees in computer science related fields and a domain-familiar group on a simulation conceptual modeling representation based on UML. The use of eye movement and verbal protocols together with performance data underlines the differences, such as in error finding and reasoning, between the two groups.
The experiment with 20 participants also reveals that the diagrammatic complexity and the degree of causal chaining are the properties of diagrams that affect understanding, reasoning and problem solving with conceptual modeling representations. In a follow-up study with 24 university students, it is seen that these properties are independent of gender. The study also emphasizes the combination of different data collection modalities, namely eye movements, verbal protocols and performance data, to be effective in uncovering individual differences in human-computer interaction studies in the domain of software engineering.",2008,0, 2860,Operating system function reuse to achieve low-cost fault tolerance,The aim of this article is to propose a new approach to fault tolerance in single processor embedded systems which is centred on the operating system. Particular attention is paid to low-cost techniques that exploit functions already present in the system in a different-than-usual way to achieve protection with little or no intervention at the application level. An example is given: the realization of a checkpoint and rollback scheme through the context switch function.,2004,0, 2861,Rotor inter-turn short circuit fault detection in wound rotor induction machines,"The aim of this paper is to develop a technique for detecting a real rotor inter-turn short-circuit fault in a wound rotor induction machine working as a generator by using spectral and bispectral analysis. The stator currents provide interesting signatures since the rotor inter-turn short-circuit fault introduces new harmonics in the stator windings. Hence, in this paper, it is proved that sensing one stator current by using a current sensor is a possible technique for detecting this kind of electrical fault. This technique is based on mmf, flux and emf computation for both healthy and faulty machines during any operating condition. Then, by using spectral and bispectral analysis, the computed harmonics of stator currents are experimentally verified on a 5.5kW-220V/380V-50Hz-8-pole wound-rotor induction machine working in both healthy and faulty modes at different load conditions.",2010,0, 2862,Vision correction with excimer lasers,"Summary form only given. Excimer lasers were invented in 1975, and the first human eye treatment with VISX laser technology was in 1987. Since then many advances in LASIK surgery and related diagnostic and control techniques have led to improved vision correction with over 95% of patients corrected to 20/20 vision or better. This paper describes the development of excimer laser surgery and recent results, wavefront-guided vision correction methods, and the equipment required to achieve these results. Specific requirements imposed on the 193 nm excimer laser, on the laser beam control, the electronic controls, and the controlling software have led to the development of laser systems and diagnostic systems unique to LASIK surgery. These include development of a highly reliable laser system with tightly controlled energy output and precision beam shaping and positioning. This development and widely successful use in refractive surgery are reviewed in this paper.",2004,0, 2863,"In Search of Real Data on Faults, Errors and Failures","Summary form only given.
In order to make a relevant contribution to industrial practice and have an impact on future systems and services, it is essential that the research community has access to real data to be able to test effectiveness and verify correctness of proposed techniques for enhanced availability (the term ""real data"" refers to the field data collected at the customers' sites, not just at the lab where usually experts assume various scenarios without proper attention to operator mistakes, environment and customer's maintenance procedures). To date the community has had rather sporadic opportunities to access field data and has developed a body of knowledge frequently based on wrong assumptions, hypothetical failure models and simplistic distributions. At the core of the problem is that the failure data are classified due to competition and the fact that they are almost always attached to specific customers, and their bulk may be enormous. With thousands of measurement points and up to about 1200 parameters that can be measured on computer and communication systems, the amount of data may reach from several Gbytes to over a hundred Gbytes per day. The key challenge is how to filter out real data and code it such that it can be accessed by the research community while at the same time the bulk of data is significantly reduced by focusing strictly on faults, errors and failures and their root causes. To change this state of affairs, the panel attempts to give pointers to the sources of real data, investigate the ways of collecting the data and making it accessible to the research community. The panel includes academic and industrial experts.",2006,0, 2864,Adaptive sensor fault detection and identification and life extending in health monitoring systems,"Summary form only given. Usually, solutions to sensor validation fall into two major categories: the data-based approaches and the model-based approaches. Model-based methods include nonparametric and parametric approaches. Belonging to the first category are neural-network-bank based approaches. The non-parametric methods are more robust, but a large number of training data are needed nevertheless. On the other hand, parametric approaches, including dynamic state space models (DSSM), provide better accuracy and tracking performance without the need for training. The price paid here is the need for high-fidelity real-time system models. Particle filter (PF) is an alternative name for sequential importance sampling for DSSM. PF has been commonly employed for online processing of dynamic systems described by DSSM. We will also discuss a Markov jump DSSM (MJDSSM) for system modeling and a mixture Kalman filter (MKF) solution, a unique and efficient particle filtering detector being developed. We have modeled and calculated the probability of failure due to component damage. Using this model, a Monte Carlo simulation is also performed to evaluate the likelihood of damage accumulation under various operating conditions. Using thermal mechanical fatigue (TMF) of a critical component as an example, it has been shown that an intelligent acceleration algorithm can drastically reduce life usage with minimum sacrifice in performance. By means of genetic search algorithms, optimal acceleration schedules can be obtained with multiple constraints. The simulation results show that an optimized acceleration schedule can provide a significant life saving in selected engine components.
The ultimate goal of engine health monitoring is to maximize the amount of meaningful information to perform diagnostics and prognostics on engine health. To achieve the highest level of intelligence at different levels and aspects, in future work we propose to implement the concept of data fusion that integrates data from multiple sources to obtain improved accuracy and more specific results.",2010,0, 2865,Fault-tolerant DSM on the SOME-Bus multiprocessor architecture with message combining,"Summary form only given. We present a broadcast-based architecture called the SOME-Bus interconnection network, which directly links processor nodes without contention, and can efficiently interconnect several hundred nodes. Each node has a dedicated output channel and an array of receivers, with one receiver dedicated to every other node's output channel. The SOME-Bus eliminates the need for global arbitration and provides bandwidth that scales directly with the number of nodes in the system. Under the distributed shared memory (DSM) paradigm, the SOME-Bus allows strong integration of the transmitter, receiver and cache controller hardware to produce a highly integrated system-wide cache coherence mechanism. Backward error recovery fault-tolerance techniques can exploit DSM data replication and SOME-Bus broadcasts with little additional network traffic and corresponding performance degradation. Simulation results show that in the SOME-Bus architecture under the DSM paradigm, messages tend to wait at the node output network interface. Consequently, to minimize the effect of increased network traffic, messages can be combined at the node output queue to form a new message containing the payloads of all original messages. We use simulation to examine the effect of such message combining on the performance of the SOME-Bus, in the presence of additional traffic due to fault tolerance, and we compare it to similar performance measures of a reduced SOME-Bus network where two nodes share one channel.",2004,0, 2866,Thermal and magnetic characteristics of bulk superconductor and performance analysis of magnetic shielding type of superconducting fault current limiter,"Superconducting fault current limiter (SCFCL) is expected to be the first application of a high-temperature superconductor (HTS) to electric power systems. The authors have been developing a magnetic shielding type of SCFCL that uses a cylindrical Bi-2223 HTS bulk. Short-circuit fault tests in a small SCFCL model were performed experimentally. A computer program based on the finite element method (FEM) taking the voltage-current (E-J) characteristics of the bulk material into account was developed to analyze the performance in the short-circuit fault tests and to investigate the dynamic electromagnetic behavior within a bulk superconductor. Because the E-J characteristic of HTS bulk depends on temperature and magnetic field, they investigated experimentally the E-J characteristics of a bulk superconductor at various operating temperatures and magnetic fields. The computer program considering the measured E-J characteristics simulated the electromagnetic behaviors in an SCFCL ((BiPb)2Sr2Ca2Cu3O10) test model successfully.",2001,0, 2867,Systematic defect identification through layout snippet clustering,"Systematic defects due to design-process interactions are a dominant component of integrated circuit (IC) yield loss in nano-scaled technologies.
Test structures do not adequately represent the product in terms of feature diversity and feature volume, and therefore are unable to identify all the systematic defects that affect the product. This paper describes a method that uses diagnosis to identify layout features that do not yield as expected. Specifically, clustering techniques are applied to layout snippets of diagnosis-implicated regions from (ideally) a statistically-significant number of IC failures for identifying feature commonalities. Experiments involving an industrial chip demonstrate the identification of possible systematic yield loss due to lithographic hotspots.",2010,0, 2868,Micronmesh for fault-tolerant GALS Multiprocessors on FPGA,"System-on-Chip (SoC) circuits have evolved into single-chip multiprocessor systems. Due to increasing variance of process parameters, which produces synchronization problems on large SoCs, a globally-asynchronous locally-synchronous (GALS) design style has had to be adopted. In addition, the large VLSI circuits are also becoming more susceptible to transient and intermittent faults which can corrupt their operation. This paper presents a new Micronmesh network-on-chip (NoC) which is targeted at fault-tolerant communication in GALS multiprocessor SoCs (MPSoCs). It is fully synthesizable with current design tools and it can be used for prototyping MPSoCs on FPGA circuits. The Micronmesh incorporates a new improved fault-diagnosis-and-repair (FDAR) system which is able to diagnose and repair buffer memories as well as wire connections, while fault-tolerant DOR (FTDOR) routing is used for routing packets to their destinations around defective parts. Owing to the FDAR system and FTDOR, the Micronmesh degrades gracefully as permanent faults appear, and it is able to recover from transient and intermittent faults. The fault-tolerance of the Micronmesh is also improved by switch-to-switch (S2S) level retransmissions which reduce the number of end-to-end (E2E) level retransmissions that produce considerably higher latencies. These methods targeted at improving the fault-tolerance are also becoming necessary for improving the manufacturability of the circuits in the future.",2008,0, 2869,Optimal design of k-out-of-n:G subsystems subjected to imperfect fault-coverage,"Systems subjected to imperfect fault-coverage may fail even prior to the exhaustion of spares due to uncovered component failures. This paper presents optimal cost-effective design policies for k-out-of-n:G subsystems subjected to imperfect fault-coverage. It is assumed that there exists a k-out-of-n:G subsystem in a nonseries-parallel system and, except for this subsystem, the redundancy configurations of all other subsystems are fixed. This paper also presents optimal design policies which maximize overall system reliability. As a special case, results are presented for k-out-of-n:G systems subjected to imperfect fault-coverage. Examples then demonstrate how to apply the main results of this paper to find the optimal configurations of all subsystems simultaneously. In this paper, we show that the optimal n which maximizes system reliability is always less than or equal to the n which maximizes the reliability of the subsystem itself. Similarly, if the failure cost is the same, then the optimal n which minimizes the average system cost is always less than or equal to the n which minimizes the average cost of the subsystem.
It is also shown that if the subsystem being analyzed is in series with the rest of the system, then the optimal n which maximizes subsystem reliability can also maximize the system reliability. The computational procedure of the proposed algorithms is illustrated through examples.",2004,0, 2870,Automated synthesis of EDACs for FLASH memories with user-selectable correction capability,"Tackling the design of a mission-critical system is a rather complex task: different and quite often contrasting dimensions need to be explored and the related trade-offs need to be evaluated. Designing a mass-memory device is one of the typical issues of mission-critical applications: the whole system is expected to accomplish a high level of dependability which highly relies on the dependability provided by the mass-memory device itself. NAND flash-memories could be used for this goal: in fact on the one hand they are nonvolatile, shock-resistant and power-economic, but on the other hand they have several drawbacks (e.g., higher cost and a bounded number of erasure cycles). Error Detection And Correction (EDAC) techniques could be exploited to improve dependability of flash-memory devices: in particular binary Bose and Ray-Chaudhuri (BCH) codes are a well-known error-correcting code technique for NAND flash-memories. In spite of the importance of error correction capability, several other equally critical dimensions need to be explored during the design of binary BCH codes for a flash-memory based mass-memory device. No systematic approach has so far been proposed to consider them all as a whole: as a consequence a novel design environment with a user-selectable error correction capability is aimed at supporting the design of binary BCH codes for a flash-memory based mass-memory device.",2010,0, 2871,Research on Optimal Placement of Travelling Wave Fault Locators in Power Grid,"Taking the full network observability of the power system operation state, maximum state measurement redundancy and a minimum number of travelling wave fault location devices (TFDs) as objectives, a TFD optimal placement scheme for power grid fault location with travelling waves is presented in the paper. The scheme contains two steps: static processing and dynamic configuration. Terminal substations should install TFDs. Then, taking the terminal substations as starting points, the whole network is separated into several unattached branches. The branch which includes the largest number of substations via the longest line and has no loop can have TFDs installed at both of its terminals. Then, combining the practical length of each line with the coverage range of the substations, the optimal disposition of TFDs can be successfully accomplished. A novel network-based fault location algorithm is also designed with on-line measurement of travelling wave velocity and fusion of the travelling wave arrival times recorded by every TFD. EMTP simulation results show that the TFD optimal placement scheme can use fewer TFDs to locate all faults in the whole power grid with economy and high reliability. The location error is no more than 100 m.",2008,0, 2872,Target Registration Correction Using the Neural Extended Kalman Filter,"Target registration can be considered a problem in aligning the reports of two sensor platforms. It is often a result of sensor misalignment and navigation errors. One technique to alleviate these errors is to continually recompute a correction with each report.
In this paper, a different approach using a modification of an adaptive neural network technique is proposed and developed. The technique, which is referred to as a neural extended Kalman filter, learns the differences between the a priori model of the off-board reports and the actual model. This correction can then be added to the model to provide an improved estimate of the sensor report. The approach is applied to the problem of static registration applied to track-level position reports.",2010,0, 2873,An Efficient Approach for Target Tracking and Error Minimization by Kalman Filtering Technique,"Target tracking and size estimation have always been an active field of research because of their widespread applications in navigational, air defense and missile interception systems, etc. This paper focuses on error minimization and on increasing hit probability, keeping in view the limitations imposed by practical constraints. An efficient practical approach has been modeled and tested in this research work, which tackles a maneuvering target and its dynamics in real time. The proposed approach develops a dynamic control for target tracking and its operation is based on dynamic switching of dynamic models.",2007,0, 2874,Simulative study of target tracking accuracy based on time synchronization error in wireless sensor networks,"Target tracking is one of the popular applications of wireless sensor networks (WSN). Randomly distributed sensor nodes (motes) in the environment gather spatio-temporal information about target(s) and send them to a sink node for further processing. Motes, like personal computers, use low-cost crystal-based clocks. In order to maintain synchronization accuracy to acceptable levels, we shall need to synchronize them quite often. Time synchronization tries to control the time accuracy in WSN but it still has a specific limited accuracy. Motes partly process their sensed data using local fusion before sending them to the sink. Target tracking has two major error sources: first, the equation system often gives two different solutions, only one of which corresponds to the target position; second, low time accuracy causes large errors in the results. Choosing proper fusion methods and sensing coverage degree helps us to increase the accuracy of target tracking results.",2008,0, 2875,Automatic error recovery in targetless logic emulation,"Targetless logic emulation refers to a verification system in which there are no external hardware targets interfacing with the emulator. In such systems input stimuli to the DUT come either from a user-provided vector file or an HDL testbench running on a software simulator, and the DUT runs on a hardware-based logic emulator. Many users use such a targetless environment for automated long-running verification tests consisting of huge sets of input stimuli; consequently an automatic recovery method is of significant interest in such systems. The automatic error recovery method shall be able to complete the emulation session gracefully, skipping error points, and subsequently report various errors and mismatch conditions for user debug. The paper presents a novel methodology and verification infrastructure based on periodic checkpointing, which provides a robust way of detecting error conditions, restoring the last saved system state and resuming the emulation run by skipping offending operations.
It does not require any special hardware extension and provides a fully customizable checkpoint frequency selection scheme. It is seen to add only a minimal overhead to overall hardware emulation speed.",2009,0, 2876,Fast and effective fault simulation for path delay faults based on selected testable paths,"Test generation and fault simulation of path delay faults are very time-consuming. A new fault simulation method for fully enhanced-scan designed circuits is proposed for path delay faults based on single stuck-at tests without circuit transformation. The proposed method identifies robustly and non-robustly testable paths first, for which a selected path circuit (SPC) is constructed. The SPC circuit contains no internal fanouts. Fault simulation of non-robustly testable paths is reduced to 3-valued logic simulation of the SPC circuit. Fault simulation is completed on the SPC circuit by tracing only the active part of the SPC circuit. An effective fault dropping technique is also adopted based on the selective tracing scheme. The proposed fault simulation scheme is extended to that of robustly testable path delay faults. Experimental results confirm that the proposed fault simulator is exact and obtains results in a very short time. Sufficient experimental results are presented to compare with previous methods on CPU time and accuracy.",2007,0, 2877,Are Fault Failure Rates Good Estimators of Adequate Test Set Size?,"Test set size in terms of the number of test cases is an important consideration when testing software systems. Using too few test cases might result in poor fault detection and using too many might be very expensive and suffer from redundancy. For a given fault, the ratio of the number of failure-causing inputs to the number of possible inputs is referred to as the failure rate. Assuming a test set represents the input domain uniformly, the failure rate can be re-defined as the fraction of failed test cases in the test set. This paper investigates the relationship between fault failure rates and the number of test cases required to detect the faults. Our experiments suggest that an accurate estimation of failure rates of potential fault(s) in a program can provide a reliable estimate of an adequate test set size with respect to fault detection (a test set of size sufficient to detect all of the faults) and therefore should be one of the factors kept in mind during test set generation.",2009,0, 2878,Test-driven development as a defect-reduction practice,"Test-driven development is a software development practice that has been used sporadically for decades. With this practice, test cases (preferably automated) are incrementally written before production code is implemented. Test-driven development has recently re-emerged as a critical enabling practice of the extreme programming software development methodology. We ran a case study of this practice at IBM. In the process, a thorough suite of automated test cases was produced after UML design. In this case study, we found that the code developed using a test-driven development practice showed, during functional verification and regression tests, approximately 40% fewer defects than a baseline prior product developed in a more traditional fashion. The productivity of the team was not impacted by the additional focus on producing automated test cases. This test suite aids in future enhancements and maintenance of this code.
The case study and the results are discussed in detail.",2003,0, 2879,Rotor fault detection using the instantaneous power signature,"The aim of this paper is to present a method to detect broken rotor bar faults by estimating a global modulation index which corresponds to the contribution of all detected modulating frequencies in the stator current. We show that additional information carried by instantaneous power improves the detection of the sidebands and consequently the monitoring too. In fact, the instantaneous power method can be interpreted as a modulation operation in the time domain that translates the spectral components specific to the broken rotor bars to the band [0-50] Hz.",2004,0, 2880,Impact of inverter faults in the overall performance of permanent magnet synchronous motor drives,"The aim of this paper is to present an overall analysis of a typical permanent magnet synchronous motor drive under different operating conditions. Single power switch open-circuit faults as well as single phase open-circuit faults are introduced in the inverter and their effects are investigated through a drive global performance evaluation. This includes the study of phase current and voltage harmonic distortion and power factor on the mains supply side, the DC bus current, power losses and efficiency of the rectifier as well as of the system comprising the inverter and the motor. In addition, total drive efficiency values are also reported. Experimental and simulation results under these faulty operating conditions are presented, as well as results under healthy operating conditions, with the objective of establishing a reference for comparison.",2009,0, 2881,Motion Correction for Respiratory Gated PET Images,"The aim of this study was to investigate the feasibility of a motion correction method for respiratory gated PET and to evaluate its effect on image quantification. The NCAT phantom was used and simulated to have three lung lesions at different locations in the right lung. All lesions had a source-to-background ratio of either 5:1 or 10:1. Data were binned into 16, 8, 4, and 2 gates in addition to a non-gated data set. Poisson noise was added to the data before being reconstructed with OSEM, 4 iterations and 8 subsets. Using a non-rigid registration algorithm, the gated images were deformed into the peak inhale gate. This resulted in the motion corrected images, which were then summed for analysis. Regions of interest were placed in the lung background, soft tissue, and the center of each lesion. The mean signal, contrast-to-noise ratio (CNR), and spatial resolution were evaluated as a function of the number of gates, with and without motion correction, lesion size, lesion placement, lesion contrast, and count level. Compared to the non-gated data, mean signal recovery in the gated and motion corrected images increased as the number of gates increased, as lesion placement was lower in the lung, as lesion size increased, and as lesion contrast decreased. Although the mean value did not change significantly as the number of gates increased from 4 to 16, the standard deviation decreased significantly. This resulted in an increase in CNR recovery particularly with decreased lesion size, lower lesion placement, and decreased intrinsic contrast. The spatial resolution also improved 10-30% in the motion corrected images for the lesions located in the lower and middle parts of the lung.
These results indicated that the motion corrected images have greatly reduced image noise, and at the same time improved signal recovery due to reduced resolution losses. This showed the efficacy of this motion correction algorithm, and resulted in improved image contrast and detectability.",2006,0, 2882,Analysis of the effects of real and injected software faults: Linux as a case study,The application of fault injection in the context of dependability benchmarking is far from being straightforward. One decisive issue to be addressed is to what extent injected faults are representative of actual faults. This paper proposes an approach to analyze the effects of real and injected faults.,2002,0, 2883,Comparative Analysis between Models of Neural Networks for the Classification of Faults in Electrical Systems,"The application of neural networks to electrical power systems has been widely studied by several researchers [1-7]. Nevertheless, almost all the studies made so far have used the back-propagation neural network structure with supervised learning. In the present paper, some of the more recent models, particularly those that use combined non-supervised/supervised learning, are analyzed as applied to the classification of faults in transmission lines. In this work the following models are considered: (i) back propagation network (BP); (ii) feature mapping network (FM); (iii) radial basis function network and (iv) learning vector quantization network (LVQ). Special emphasis is placed on the performance comparison in terms of the size of the neural network, the learning process, the classification precision and the robustness for generalization. The result of this work provides guidance on how to select a neural network from a diversity of possible neural network architectures for a specific application [7].",2007,0, 2884,Fault Tolerant Hardware for High Performance Signal Processing,"The approach described in this paper uses an array of Field Programmable Gate Array (FPGA) devices to implement a fault tolerant hardware system that can be compared to the running of fault tolerant software on a traditional processor. Fault tolerance is achieved by using FPGAs with an on-the-fly partial programmability feature. Major considerations while mapping to the FPGA include the size of the area to be mapped and communication issues between the mapped modules. Area size selection is compared to page size selection in operating system design. Communication issues between modules are compared to the software engineering paradigms dealing with module coupling, fan-in, fan-out and cohesiveness. Finally, the overhead associated with the downloading of the reconfiguration files is discussed.",2008,0, 2885,A portable and fault-tolerant microprocessor based on the SPARC v8 architecture,"The architecture and implementation of the LEON-FT processor is presented. LEON-FT is a fault-tolerant 32 bit processor based on the SPARC V8 instruction set. The processor tolerates transient SEU errors by using techniques such as TMR registers, on-chip EDAC, parity, pipeline restart, and forced cache miss. The first prototypes were manufactured on the Atmel ATC35 0.35 μm CMOS process, and subjected to heavy-ion fault-injection at the Louvain Cyclotron. The heavy-ion tests showed that all of the injected errors (>100,000) were successfully corrected without timing or software impact.
The device SEU threshold was measured to be below 6 MeV while ion energy-levels of up to 110 MeV were used for error injection.",2002,0, 2886,Preventing human errors in power grid management systems through user-interface redesign,"Supervising an energy network is a critical and complex endeavor. Decisions are made under severe time constraints and errors can be costly, often resulting in a crippling impact on the network. Furthermore, the dispatchers who oversee these networks must navigate through multiple systems and tools to access the information they need to make good decisions quickly. Thousands of dynamic variables and hundreds of network configurations need to be considered. Energy network supervision is prone to human error because it requires high dispatcher attention and memory load. This paper shows that it is possible, without massive investments in dollars or in technology, to prevent human errors and to significantly reduce decision times by redesigning the user interfaces on the computer software used by the energy system's supervisors. It illustrates how the redesign of an energy transmission protection system user interface through a cognitive ergonomics approach eliminates the cause of human errors induced by the existing user interface and reduces time to access information by 90 percent.",2007,0, 2887,Fault tolerant algorithms for orderings and colorings,"Summary form only given. A k-forward numbering of a graph is a labeling of the nodes with integers such that each node has less than k neighbors whose labels are equal or larger. We obtain three self-stabilizing (s-s) algorithms for finding a k-forward numbering, provided one exists. One such algorithm also finds the k-height numbering of a graph, generalizing s-s algorithms by Bruell et al. and Antonoiu et al. for finding the center of a tree. Another k-forward numbering algorithm runs in polynomial time. There is a strong connection between k-forward numberings and colorings of graphs. We use a k-forward numbering algorithm to obtain an s-s algorithm that is more general than previous coloring algorithms in the literature, and which k-colors any graph having a k-forward numbering. Special cases of the algorithm 6-color planar graphs, thus generalizing an s-s algorithm by Ghosh and Karaata, as well as 2-color trees and 3-color series-parallel graphs. We discuss how our s-s algorithms can be extended to the synchronous model.",2004,0, 2888,Error analysis of Non-contact Weighted-Stretched-Wire System for measuring big structure deflections,"Structure deflections are key parameters reflecting a structure's state. The Non-contact Weighted-Stretched-Wire System (NCWSWS) is a novel system for measuring on-line deflections of big structures. This paper presents the configuration of the system and its measurement principle, investigates errors due to the stretched-wire component and the photography system, and gives corresponding strategies to reduce these errors. The practicality of the system is improved by studying these errors, and its performance satisfies the requirements of on-line measurement of big structure deflections.",2008,0, 2889,A business-oriented hierarchical configuration management model and its applications for fault localization and impact analysis,"The ever-increasing complexity of IT systems is bringing great challenges to IT management. Meanwhile, IT management is expanding beyond the traditional IT network management and IT service management, and spreading to business service management.
Existing configuration management models for IT management solutions lack a way of holistically presenting, managing, and leveraging the IT logical entities related to business services (such as computing services, application systems, and business systems), the relationships among these entities, and traditional IT physical entities (such as networks, computers, and processes). This research studies how these entities interrelate to support business operations. We propose a business-oriented hierarchical configuration management model for describing their relationships and this model's application for fault localization and impact analysis. The model and algorithms form the basis of an advanced IT management platform being built within China Mobile Communication Corporation (CMCC).",2010,0, 2890,Changing test and data modeling requirements for screening latent defects as statistical outliers,"The expanded role of test demands a significant change in the mind-set of nearly every engineer involved in the screening of semiconductor products. The issues to consider range from DFT and ATE requirements, to the design and optimization of test patterns, to the physical and statistical relationships of different tests, and finally, to the economics of reducing test time and cost. The identification of outliers to isolate latent defects will likely increase the role of statistical testing in present and future technologies. An emerging opportunity is to use statistical analysis of parametric measurements at multiple test corners to improve the effectiveness and efficiency of testing and reliability defect stressing. In this article, we propose a ""statistical testing"" framework that combines testing, analysis, and optimization to identify latent-defect signatures. We discuss the required characteristics of statistical testing to isolate the embedded-outlier population; test conditions and test application support for the statistical-testing framework; and the data modeling for identifying the outliers.",2006,0, 2891,Enhanced FPGA reliability through efficient run-time fault reconfiguration,"The expanded use of field programmable gate arrays (FPGAs) in remote, long-life, and system-critical applications requires the development and implementation of effective, efficient FPGA fault-tolerance techniques. FPGAs have inherent redundancy and in-the-field reconfiguration capabilities, thus providing alternatives to standard integrated circuit redundancy-based fault-recovery techniques. Runtime reliability can be enhanced by using such unique features. Recovery from permanent logic and interconnect faults without runtime computer-aided design (CAD) support can be efficiently performed with the use of fine-grained and physical design partitioning. Faults are localized to small partitioned blocks that have fixed interfaces to the surrounding portions of the design, and the affected blocks are reconfigured with previously generated, functionally equivalent block instances that do not use the faulty resources. This technique minimizes the post-fault-detection system downtime, while requiring little area overhead. Only the finely located faulty portions of the FPGA are removed from use. In addition, the end user need not have access to CAD tools, making the algorithm completely transparent to system users. This approach has been efficiently implemented on a diverse set of FPGA architectures.
The algorithm's flexibility is also apparent from the variable emphases that can be placed on system reliability, area overhead, timing overhead, design effort, and system memory. Given user-defined emphases, the algorithm can be modified to specific application requirements. Experiments using random s-independent and s-correlated fault models reveal that the approach enhances system reliability, while minimizing area and timing overhead",2000,0,
2892,The digital and high-precision error detection of complex freeform surface,"The development and modification of automobile dies are of great importance to automobile development. Die modification is realized through the process of obtaining a 3D model by reverse engineering, testing errors, and confirming the model. This thesis applies the method of an ATOS Optical Scanner and a CMM (Three-Coordinate Measuring Machine) to obtain complete high-precision point cloud data, because of the complicated freeform surface of the automobile model. Error evaluation is difficult because the CAD model and the point cloud data cannot be aligned automatically. Therefore, on the basis of the surface reconstruction of the reverse engineering software Imageware and its comparative functions between surface and point cloud data, we propose a new perspective for detecting digital error, i.e. adjusting the contour of the surface automatically by the golden section method. Under the condition of satisfying the least square method, this method, which can fit the actual surface to the theoretical surface, reduces and even eliminates the position error caused by the inconsistency between the actual reference and the CAD model reference, laying a foundation for measuring shape error by the direct alignment method, reducing time, and improving the error detection precision. Being simple, effective and practical, this method has been successfully utilized in the female die design of interior plastic trimming for a certain type of Shanghai Volkswagen car, and has great reference value in terms of the automation, flexibility and digitalization of on-line product detection.",2008,0,
2893,An integrated framework of the modeling of failure-detection and fault-correction processes in software reliability analysis,"Failure detection and fault correction are critical processes in attaining good software quality. In this paper, we propose several improvements on the conventional software reliability growth models (SRGMs) to describe the actual software development process by eliminating some unrealistic assumptions. Most of these models have focused on the failure detection process and not given equal priority to modeling the fault correction process. But most latent software errors may remain uncorrected for a long time even after they are detected, which increases their impact. The remaining software faults are often one of the most unreliable aspects of software quality. Therefore, we develop a general framework of the modeling of the failure detection and fault correction processes. Furthermore, we also analyze the effect of applying the delay-time non-homogeneous Poisson process (NHPP) models.
Finally, numerical examples are shown to illustrate the results of the integration of the detection and correction processes.",2008,0,
2894,Voltage recovery of grid-connected wind turbines with DFIG after a short-circuit fault,"The fast development of wind power generation brings new requirements for wind turbine integration into the network. After the clearance of an external short-circuit fault, the voltage at the wind turbine terminal should be reestablished with minimized power losses. This paper concentrates on the voltage recovery of variable speed wind turbines with doubly fed induction generators (DFIG). A simulation model of a MW-level variable speed wind turbine with a DFIG developed in PSCAD/EMTDC is presented, and the control and protection schemes are described. A new control strategy is proposed to reestablish the wind turbine terminal voltage after the clearance of an external short-circuit fault, and then restore the normal operation of the variable speed wind turbine with DFIG, which has been demonstrated by simulation results.",2004,0,
2895,An Approach for Analyzing Infrequent Software Faults Based on Outlier Detection,"Fault analysis is a critical process in software security systems. However, identifying outliers in software faults has not been well addressed. In this paper, we define WCFPOF (weighted closed frequent pattern outlier factor) to measure complete transactions, and propose a novel approach for detecting closed frequent pattern based outliers. Through discovering and maintaining closed frequent patterns, the outlier measure of each transaction is computed to generate outliers. The outliers are the data that contain relatively few closed frequent itemsets. To describe the reasons why detected outlier transactions are infrequent, the contradictive closed frequent patterns for each outlier are figured out. Experimental results show that our algorithm has shorter time consumption and better scalability.",2009,0,
2896,Study of adaptive fault current algorithm for microgrid dominated by inverter based distributed generators,"The fault characteristic of inverter interfaced DG is very different from that of synchronous rotating generators. Traditional protection methods based on large fault currents are no longer applicable to an inverter-only supplied microgrid. This paper analyzes the fault behaviour of an inverter based microgrid, and proposes an adaptive fault current protection algorithm for this type of microgrid. According to the fault characteristic of inverter interfaced DG, it uses the fault component current, which is only present under fault conditions. The result is simulated and verified with DigSilent.",2010,0,
2897,A Novel Adaptive Protective Scheme For the Single-Phase Earth Fault of the Non-Effectively Grounded Power Systems,"The fault current of the faulty feeder is quite low when a single-phase earth fault occurs on a neutral un-effectively grounded grid. The accurate identification of the faulty feeder becomes a difficult task due to many disadvantageous factors. The conventional scheme is implemented with the comparison of the magnitude, polarity or the phase angle among the zero-sequence currents. Therefore, it is hard to implement in the feeder protection equipped on the FTU. A novel zero-sequence over-voltage protection scheme based on zero-sequence current compensation is therefore proposed. The fundamental method is based on the measurement of the zero-sequence over-voltage, and the tripping time characteristic adopts the inverse time delay.
By virtue of the analysis of the distribution of the zero-sequence transient current, a transient RMS value of the zero-sequence current is utilized to form the composite compensation voltage together with the zero-sequence voltage. This voltage is utilized to revise the inverse time-delay curve to achieve the selectivity of the protection. The principle is verified with the EMTP simulator.",2006,0,
2898,Fault detection method for the subcircuits of a cascade linear circuit,"The fault detection method for the subcircuits of a cascade linear circuit is discussed. When there is any fault (either hard or soft, and either single or multiple) at one subcircuit of a cascade linear circuit, it can be quickly detected by using the method proposed. When faults exist simultaneously at multiple subcircuits, they can generally be detected by the searching approach proposed here. The aforementioned method is the continuation and development of the unified fault detection dictionary method for linear circuits proposed previously by the authors (see ibid., vol. 46, Oct. 1999)",2000,0,
2899,Fault Detection of Satellite Network Based on MNMP,"The fault detection of a satellite network requires collecting data on managed objects via a network management protocol. Therefore, it is a premise of fault management of a satellite network to choose a network management protocol designed according to the features of the satellite network. Due to the uniqueness of satellite networks, the current network management protocols and standards, such as the simple network management protocol (SNMP) and CMIP, are both directly unsuitable for application to fault detection of satellite networks. By analysis of the multiplex network management protocol (MNMP), this paper focuses on the discussion of the MNMP application in the fault detection of satellite networks. Besides, in order to investigate the performance of fault detection based on MNMP, this paper also gives a performance test on several means of data acquisition of MNMP, and compares and analyzes it with SNMP.",2009,0,
2900,The Design of Fault Diagnosis Expert System about Temperature Adjustment System Based on CLIPS,"The fault diagnosis of a certain launching unit's temperature controller is researched with the object-oriented programming method based on expert system theory, and the fault-diagnosis expert-system software is designed and developed with VC++ 2008 and CLIPS. The structure of the system is firstly analyzed; the representation of knowledge, the design of the database and the production rules are then discussed; lastly, the diagnosis flow is studied and a demonstration of the fault diagnosis is given.",2009,0,
2901,Fault Diagnosis of Vehicle Continuously Variable Transmission Based on Mutual Information Entropy and Support Vector Machine,"The fault inspection and diagnosis of vehicle transmissions plays a special role in safe driving and in reducing traffic accidents. For the fault character of the vehicle continuously variable transmission (CVT), the multi-resolution singular-spectrum entropy method, based on mutual information entropy, is proposed to effectively extract the signal character of faults, and the multiclass classification support vector machine (SVM) algorithm, which is easy to implement, is proposed to deal with the classification of fault states.
Experimental results have shown the feasibility and efficiency of combining mutual information entropy with the multiclass classification SVM in the fault diagnosis of vehicle CVT; additionally, the algorithms are very reliable for fault inspection.",2010,0,
2902,Fault diagnosis expert system of artillery radar based on neural network,"The faults of the new type of artillery radar are highly complex and correlative. Neural network technology was incorporated into the radar fault diagnosis after the fault features of the new type of artillery radar and the shortcomings of the expert diagnosis system were analyzed. There are many difficulties in the process of servicing the artillery radar, such as a low technology level and difficult fault diagnosis. To resolve the problem, a fault diagnosis expert system was realized based on an RBF (Radial Basis Function) neural network. The overall structure of the expert system and the structure and function of the software were discussed. Accordingly, several key techniques such as the fault diagnosis principle of the RBF neural network, the knowledge database, and the reasoning engine were also given in detail. The application results showed that the expert system is feasible and practical, and that the servicing efficiency and fault diagnosis ability are improved.",2010,0,
2903,Efficient Analysis Algorithms for Parametric Fault Trees,"The Fault Tree (FT) is a widespread model for dependability (reliability) analysis. One of its several extensions is called the Parametric Fault Tree (PFT) and is oriented to redundant systems, providing a compact (parametric) way to model replicated components or subsystems. This paper presents a PFT solution method based on a new form of Binary Decision Diagrams (BDD), called Parametric BDD (pBDD). This method exploits the parametric form combined with the advantages of the use of BDDs. pBDDs are used for both the qualitative and the quantitative analysis of PFTs. A case study of a redundant system supports the introduction and the application of the new method.",2005,0,
2904,CMOS implementation of precise sample-and-hold circuit with self-correction of the offset voltage,"The authors describe the silicon implementation of a new sample-and-hold circuit topology. Its main feature is the self-correction of the offset voltage that is generated mainly by the mismatch on the differential pair at the input and the charge injected by the NMOS switches in the sampling capacitor. The circuit was implemented in a CMOS CYE 0.8 μm n-well process from AMS. The results, initially obtained from simulations, were compared to real laboratory measurements. The comparison indicates that the measurements and the simulated results have a very strong correspondence. The real circuit is capable of reducing the total sample-and-hold output error to just 0.14% at a sampling rate of 250 kHz, so that a system which operates at 250 Ksamples/s can be implemented.
Another is a gate interrupter, which does the same thing by locking the thyristor's gate pulses. The authors examine the results of various simulations on a new device that both limits and interrupts fault current in a three-phase power system",2001,0,
2906,Diagnosis potential of different partial discharge features of diverse PD defects in N2/SF6 mixtures,"The automated partial discharge diagnosis is a universal method to identify critical insulation conditions of electrical equipment. The diagnosis process needs significant information about the PD source, obtained by measuring the key values of the continuous PD activity, the so-called Phase Resolved PD Pulse Sequence. From the information of such a measurement plenty of features can be calculated, which should distinguish the different defect types. This contribution discusses the main question: which type of PD features can distinguish the different PD defect types better than the other PD features? Therefore three different PD feature extraction methods: the phase position analysis, the voltage gradient and the voltage difference of consecutive pulses were tested with different PD defect types in GIS and GIL. In an automated diagnosis process all three feature extraction methods are combined with an L2 classification algorithm to calculate the diagnosis result. The question of whether a reference data base has to be adapted is also discussed. Preliminary results have shown that the performance of the feature extraction methods strongly depends on the type of PD defect but is also influenced by the gas mixture percentage.",2003,0,
2907,Electronic barometric altimeter in real time correction,"The barometric altitude measurement using modern electronic sensors has significant error due to atmospheric pressure variations with respect to weather and time. Based on the altimeter setting concept, the barometric altimeter should be corrected according to hourly QNH data to maintain accuracy. This paper presents an airborne altitude measurement technology using barometric measurement and real time calibration through a GPRS uplink. The proposed method is practical in correcting the time variant error of the electronic barometric altimeter automatically. After these corrections, the static and dynamic accuracy of the electronic barometric altimeter can be much improved, to less than 2% error in real time. The proposed system is suitable for ultralight aircraft (ULA) or unmanned aerial vehicles (UAV).",2008,0,
2908,A Reconfigurable Motor for Experimental Emulation of Stator Winding Interturn and Broken Bar Faults in Polyphase Induction Machines,"The benefits and drawbacks of a 5-hp reconfigurable induction motor, which was designed for experimental emulation of stator winding interturn and broken rotor bar faults, are presented in this paper. It was perceived that this motor had the potential of quick and easy reconfiguration to produce the desired stator and rotor faults in a variety of different fault combinations. Hence, this motor was anticipated to make a useful test bed for evaluation of the efficacy of existing and new motor fault diagnostics techniques, and not for the study of insulation failure mechanisms. Accordingly, it was anticipated that this reconfigurable motor would eliminate the need to permanently destroy machine components such as stator windings or rotor bars when acquiring data from a faulty machine for fault diagnostic purposes.
Experimental results under healthy and various faulty conditions are presented in this paper, including issues associated with rotor bar-end ring contact resistances that showed the drawbacks of this motor insofar as emulation of rotor bar breakages is concerned. However, emulation of stator-turn fault scenarios was successfully accomplished.",2008,0,
2909,Common Trends in Software Fault and Failure Data,"The benefits of the analysis of software faults and failures have been widely recognized. However, detailed studies based on empirical data are rare. In this paper, we analyze the fault and failure data from two large, real-world case studies. Specifically, we explore: 1) the localization of faults that lead to individual software failures and 2) the distribution of different types of software faults. Our results show that individual failures are often caused by multiple faults spread throughout the system. This observation is important since it does not support several heuristics and assumptions used in the past. In addition, it clearly indicates that finding and fixing faults that lead to such software failures in large, complex systems are often difficult and challenging tasks despite the advances in software development. Our results also show that requirement faults, coding faults, and data problems are the three most common types of software faults. Furthermore, these results show that contrary to the popular belief, a significant percentage of failures are linked to late life cycle activities. Another important aspect of our work is that we conduct intra- and interproject comparisons, as well as comparisons with the findings from related studies. The consistency of several main trends across software systems in this paper and several related research efforts suggests that these trends are likely to be intrinsic characteristics of software faults and failures rather than project specific.",2009,0,
2910,Bit error rate performance of iterative decoding in a perpendicular magnetic recording channel,"The BER performance of iterative decoding systems and PRML systems in a perpendicular magnetic recording channel is experimentally investigated. A performance evaluation system consisting of a spinstand, a recording waveform generator, a digital storage oscilloscope and a personal computer was used. PR1ML, PR2ML, EPR3ML, E2PR3ML and ME2PR4ML systems are discussed as PRML channels. The 8/9 and 16/17 iterative decoding systems consist of an encoder, with an RSC encoder serially concatenated with precoded PR channels, and an iterative decoder with APP modules using the Max-log-MAP algorithm. The results show that EPR3ML and E2PR3ML provide good BER performance in a perpendicular magnetic recording channel and that the iterative decoding system is also an efficient way to improve the BER performance in the channel",2001,0,
2911,Bit Error Rate Analysis of jamming for OFDM systems,"The bit error rate (BER) analysis of various jamming techniques for orthogonal frequency-division multiplexing (OFDM) systems is given in both analytical form and software simulation results. Specifically, the BER performance of barrage noise jamming (BNJ), partial band jamming (PBJ) and multitone jamming (MTJ) in a time-correlated Rayleigh fading channel with additive white Gaussian noise (AWGN) has been investigated. In addition, two novel jamming methods - optimal-fraction PBJ and optimal-fraction MTJ for OFDM systems - are proposed with detailed theoretical analysis. Simulation results validate the analytical results.
It is shown that under the AWGN channel without fading, the optimal-fraction MTJ always gives the best jamming effect among all the jamming techniques given in this paper, while in the Rayleigh fading channel the optimal-fraction MTJ can achieve acceptable performance. Both analysis and simulation indicate that the proposed optimal-fraction MTJ can be used to obtain an improved jamming effect under various channel conditions with low complexity for OFDM systems.",2007,0,
2912,Predictive distribution reliability analysis considering post fault restoration and coordination failure,"The calculation of predicted distribution reliability indexes can be implemented using a distribution analysis model and the algorithms defined by the ""Distribution System Reliability Handbook"", EPRI Project 1356-1 Final Report. The calculation of predicted reliability indexes is fairly straightforward until post fault restoration and coordination failure are included. This paper presents the methods used to implement predictive reliability with consideration for post fault restoration and coordination failure into a distribution analysis software model",2002,0,3860
2913,Timing for the Loran-C signal by Beidou satellite and error correction for the transmission time,"The positioning capability of the Beidou/Loran-C integrated navigation system is restricted by the transmitting precision of the Loran-C signal. In this paper, the wavelet shrinkage de-noising method of correction for the transmission time error of the Loran-C signal, which is timed by the Beidou satellite, is discussed. The random offset is obtained from the time difference, and then the soft-threshold function and the SUREShrink estimation are adopted in the de-noising. The results demonstrated that the transmission time error was decreased by about 30 ns, and the performance of three-dimensional positioning was improved evidently through this method in the experiment.",2008,0,
2914,A real time vision system for defect inspection in a cast extrusion manufacturing process,"The cast extrusion manufacturing process is the initial step which enables the creation of the raw materials, such as clear polypropylene film, needed for the flexible packaging manufacturing process. The current methodology of controlling extrusion related defect occurrences is attempted by a combination of statistical sampling and human inspection. However, the defects are small in size and hard to visualise in a clear thin film 3 m in width moving at a speed of 50 m/min. This resulted in poor product quality and a high return ratio from customers. To the best of our knowledge, there is no system available that can automatically and accurately detect such defects. This research investigates possible defect detection methodologies and has subsequently proposed a system that is capable of real time monitoring of defects on the cast extrusion manufacturing process.",2007,0,
2915,What we have learned about fighting defects,"The Center for Empirically Based Software Engineering helps improve software development by providing guidelines for selecting development techniques, recommending areas for further research, and supporting software engineering education. A central activity toward achieving this goal has been the running of ""e-Workshops"" that capture expert knowledge with a minimum of overhead effort to formulate heuristics on a particular topic. The resulting heuristics are a useful summary of the current state of knowledge in an area based on expert opinion.
This paper discusses the results to date of a series of e-Workshops on software defect reduction. The original discussion items are presented along with an encapsulated summary of the expert discussion. The reformulated heuristics can be useful both to researchers (for pointing out gaps in the current state of the knowledge requiring further investigation) and to practitioners (for benchmarking or setting expectations about development practices).",2002,0,
2916,Three-phase synchronous PWM for flyback converter with power-factor correction using FPGA ASIC design,"The design and development of a synchronous pulsewidth modulation (PWM) generator suitable for the three-phase flyback converter with transformer isolation and power-factor correction using a field-programmable gate array is proposed. The proposed three-phase synchronous PWM makes it possible for the converter to obtain sinusoidal supply currents with a near-unity power factor. A high-frequency transformer is considered in the design to provide galvanic isolation and serves the dual role of inductor and transformer. Results are provided to demonstrate the effectiveness of the design.",2004,0,
2917,A Design and Implementation of a Fault-Tolerant Rod Control System for Nuclear Power Plants,"The design and implementation of a fault-tolerant control rod control system for nuclear power plants is described. High reliability and safety are necessary for the instrumentation and control systems of safety-critical plants such as nuclear power plants. For a control rod control system that controls the nuclear reactivity inside the reactor by inserting or withdrawing the rods into or from the reactor, reliability is more critical than safety, since its malfunction directly results in an unexpected shutdown of the power plant. This paper deals with a design and implementation practice to enhance the reliability of the control system. The reliability enhancement is basically achieved by adopting hardware redundancy in its structure. The reliability is evaluated quantitatively to check if the designed and implemented system is reliable enough to be applied to commercial plants. The ability of fault detection realized in the system is expected to give a further reliability enhancement by means of software. The bumpless control problem that can arise from adopting hardware redundancy is discussed here. Also, a new algorithm for rod movement detection is briefly introduced and demonstrated with a test result",2006,0,
2918,Transition Fault Test Reuse,"The design complexity of systems on a chip drives the need to reuse legacy or intellectual property cores, whose gate-level implementation details are unavailable. The core test depends on manufacturing technologies and changes permanently during a design lifecycle. The purpose of this paper is to assist the designer in deciding how to test transition faults of re-synthesized intellectual property cores. We have performed various comprehensive experiments with combinational benchmark circuits. The comparison of the detection of the transition faults for different implementations of the circuit was carried out. Our experiments show that the test sets generated for a particular circuit realization fail to detect, on average, less than one and a half percent of the transition faults of the re-synthesized circuit.
The possibilities of the reuse of functional delay tests were studied as well",2006,0,
2919,"Design, Simulation, and Fault Analysis of a 6.5-MV LTD for Flash X-Ray Radiography","The design of a 6.5-MV linear transformer driver (LTD) for flash-radiography experiments is presented. The design is based on a previously tested 1-MV LTD and is predicted to be capable of producing diode voltages of 6.5 MV for a 50-Ω radiographic-diode load. Several fault modes are identified, and circuit simulations are used to determine their effect on the output pulse and other components. For all the identified fault modes, the peak load voltage is reduced by less than 5%",2006,0,
2920,An estimation of nonlinear time series with ARCH errors using neural networks,"The design of a neural network-based estimation system for nonlinear economic time series, based on the Volterra model, is presented. We use ECLMS (extended correlation least mean squares) algorithms as the noise cancelling method for nonlinear time series. The validity of the proposed method is demonstrated by estimating a high-order Volterra model with ARCH (autoregressive conditional heteroscedasticity) errors. This algorithm has a good performance in solving nonlinear economic time-series estimation problems",2000,0,
2921,Adaptive minimum symbol-error-rate decision feedback equalization for multilevel pulse-amplitude modulation,"The design of decision feedback equalizers (DFEs) is typically based on the minimum mean square error (MMSE) principle as this leads to effective adaptive implementation in the form of the least mean square algorithm. It is well-known, however, that in certain situations, the MMSE solution can be distinctly inferior to the optimal minimum symbol error rate (MSER) solution. We consider the MSER design for multilevel pulse-amplitude modulation. Block-data adaptive implementation of the theoretical MSER DFE solution is developed based on the Parzen window estimate of a probability density function. Furthermore, a sample-by-sample adaptive MSER algorithm, called the least symbol error rate (LSER), is derived for adaptive equalization applications. The proposed LSER algorithm has a complexity that increases linearly with the equalizer length. Computer simulation is employed to evaluate the proposed alternative MSER design for equalization application with multilevel signaling schemes.",2004,0,
2922,Integrated algorithm for solving H2-optimal fault detection and isolation problems,"The design problem of fault detection and isolation filters can be formulated as a model matching problem and solved using an H2-norm minimization approach. A systematic procedure is proposed to choose appropriate filter specifications which guarantee the existence of proper and stable solutions of the model matching problem. This selection is an integral part of a numerically reliable computational method for the design of H2-optimal fault detection filters. The proposed design approach is completely general, being applicable to both continuous- and discrete-time systems, and can easily handle even unstable and/or improper systems.",2010,0,
2923,Ultrasonic nondestructive detection for defects in epoxy/mica insulation,"The detection and recognition of defects in insulation are significant for insulation diagnosis. In our laboratory, an ultrasonic testing system has been developed and used for insulation diagnosis of large generators. Using this system with lower frequency transducers, experiments were performed on actual stator bars.
The results showed that the 0.5~1.2 MHz frequency transducers could be used for the manufacturing quality test of stator bars. The 0.5~0.8 MHz transducers are more effective for the aging assessment of aged stator bars. They are sensitive to the degradation and the internal mechanical damage of aged stator insulation",2001,0,
2924,Distributed intelligent control of car lighting system with fault detection,"The developed system for distributed intelligent control of an automotive lighting system is presented in this paper. The control system is based on a distributed microcontroller system and its control algorithms. A technique for intelligent light intensity control based on pulse width modulation is presented and a comparison with traffic safety law regulations is given. A subsystem for failure detection and adaptation in case of failures caused by hardware or communication malfunctions of the vehicle lighting system is built into the control system. For analog signal measurement purposes, techniques for signal filtering using fast convolution based methods are presented and applied.",2007,0,
2925,An effective eye gaze correction operation for video conference using antirotation formulas,"The deviated eye gaze problem in video conferencing has been known and studied for many years. This paper suggests a simple and novel approach for face reorientation in a monocular setting, which is done by performing a combined rotation and antirotation operation on the image. Our approach to face reorientation does not require 3D modeling, registration or texture mapping. It is simple, efficient and robust. To make the eye gaze correction complete, an image warping technique is used for modifying eyelids and a simple transformation is used for correcting eye glares.",2003,0,
2926,Generalizing fault contents from a few classes,"The challenges in fault prediction today are to get a prediction as early as possible, at as low a cost as possible, needing as little data as possible, and preferably in such a language that your average developer can understand where it came from. This paper presents a fault sampling method where a summary of a few, easily available metrics is used together with the results of a few sampled classes to generalize the fault content to an entire system. The method is tested on a large software system written in Java that currently consists of around 2000 classes and 300,000 lines of code. The evaluation shows that the fault generalization method is good at predicting fault-prone clusters and that it is possible to generalize the values of a few representative classes.",2007,0,
2927,Earth faults and related disturbances in distribution networks,"The characteristics of the earth faults and related disturbances, recorded in medium voltage overhead distribution networks during the years 1998-1999, are described. Altogether 316 real cases were analysed. The use of subharmonic oscillation and harmonic distortion was investigated as a means of anticipating faults. Arcing faults made up at least half of all the disturbances, and were especially predominant in the unearthed network. Fault resistances reached their minimum values near the beginning of the disturbances.
The maximum currents that allowed for autoextinction in the unearthed network were comparatively small",2002,0,4963
2928,Error analysis of Chinese text segmentation using statistical approach,"Chinese text segmentation is important for the indexing of Chinese documents, and has a significant impact on the performance of Chinese information retrieval. The statistical approach overcomes the limitations of the dictionary based approach. The statistical approach is developed by utilizing statistical information about the association of adjacent characters in Chinese text collected from the Chinese corpus. Both known words and unknown words can be segmented by the statistical approach. However, errors may occur due to the limitations of the corpus. In this work, we have conducted an error analysis of two Chinese text segmentation techniques using the statistical approach, namely, boundary detection and the heuristic method. Such error analysis is useful for the future development of the automatic text segmentation of Chinese text or other text in oriental languages. It is also helpful to understand the impact of these errors on the information retrieval system in digital libraries.",2004,0,
2929,Petri sub-nets for minpath-based fault trees,"The choice of proper forms of fault tree (FT) and success tree (ST) representations, respectively, inside of Petri nets (PNs) in the field of R&M modeling is not a trivial problem, because there is the danger of stray tokens inside of the sub-PNs of those trees which would disturb the system model's proper operation in the long run. The author has been advocating the use of disjunctive normal forms (DNFs=sum-of-products forms). However, typically in the field of graph connectivity problems the initially found FTs usually result from minpaths rather than from mincuts. The Boolean FT functions are therefore initially conjunctive normal forms (CNFs=product-of-sums forms). As the main result of this paper it is shown that for such FTs, sub-PNs can be designed systematically, even though they are not quite as simple as sub-PNs for FTs of DNFs. The main point is to allow for extra FT input places, and to gather all the tokens corresponding to the single variables of the diverse sums once the repairs of the corresponding components are finished. This way no stray tokens remain inside of the FT's sub-PN. As a consequence of the duality between FTs and STs, and since both trees are usually inserted in the overall system PN model, it suffices to find a DNF or a CNF of either tree's Boolean function. A CNF or a DNF of the other tree is then readily found via Shannon's inversion theorem, i.e., it needs no complex Boolean algebra manipulations. The general results are formulated as PN design rules",2001,0,
2930,On fault diagnosis tree and its control flow,"The coarse-grained organization of the existing fault diagnosis scheme cannot realize automatic and intelligent diagnosis. This paper provides a tree structure of fault diagnosis scheme named T04FDS, which integrates the fault classification relations and the nesting relation of part and whole. The diagnosis process is represented by building a state transform system, and the control flow is described with operations of a stack.
Furthermore, the thesis presents the physical realization of T04FDS fault diagnosis, and finally proves the feasibility of this method by giving an example of a fault diagnosis scheme for a computer wireless network card.",2009,0,
2931,Fault-tolerant grid services using primary-backup: feasibility and performance,"The combination of grid technology and Web services has produced an attractive platform for deploying distributed applications: grid services, as represented by the Open Grid Services Infrastructure (OGSI) and its Globus toolkit implementation. As the use of grid services grows in popularity, tolerating failures becomes increasingly important. This work addresses the problem of building a reliable and highly-available grid service by replicating the service on two or more hosts using the primary-backup approach. The primary goal is to evaluate the ease and efficiency with which this can be done, by first designing a primary-backup protocol using OGSI, and then implementing it using Globus to evaluate performance implications and tradeoffs. We compared three implementations: one that makes heavy use of the notification interface defined in OGSI, one that uses standard grid service requests instead of notification, and one that uses low-level socket primitives. The overall conclusion is that, while the performance penalty of using Globus primitives - especially notification - for replica coordination can be significant, the OGSI model is suitable for building highly-available services and it makes the task of engineering such services easier.",2004,0,
2932,Fault-Based Test Case Generation for Component Connectors,"The complex interactions appearing in service-oriented computing make coordination a key concern in service-oriented systems. In this paper, we present a fault-based method to generate test cases for component connectors from specifications. For connectors, faults are caused by possible errors during the development process, such as wrongly used channels, missing or redundant subcircuits, or circuits with wrongly constructed topology. We give test cases and connectors a unifying formal semantics by using the notion of design, and generate test cases by solving constraints obtained from the specification and faulty connectors. A prototype symbolic test case generator serves to demonstrate the automatizing of the approach.",2009,0,
2933,The feasibility study on the combined equipment between micro-SMES and inductive/electronic type fault current limiter,"The concept of combined equipment between a micro-SMES and an inductive/electronic type FCL is proposed in this paper. Having multiple functions for a superconducting device, the new equipment can serve as the protective component for a dual power system. The specification of a testing model was determined and the transient performance was analyzed with Matlab software. The results show that the combined equipment is realizable for a dual power system application, where it has the major function of limiting fault current (FCL function) and the minor function of maintaining power fluctuation (SMES function).",2003,0,
2934,Challenges in scalable fault tolerance,"The continued scaling of device dimensions is leading toward devices where only a handful of dopant atoms or charges can make the difference between a one and a zero in the state of a represented bit, by enhancing or depleting channel conduction.
Thus very minor static imperfections in dopant distribution, dielectric properties, or device geometry, and dynamic conditions associated with heat, radiation, or aging can perturb a device out of specification and cause electrical errors. Thus we might expect to soon see devices with millions of static defects and thousands of soft errors in very short time periods. Both static defects and dynamic faults at these scales present huge challenges. Classical methods for fault or defect tolerance at a higher level of architecture (e.g., N-modular redundancy) can be impractically expensive, and some approaches to diagnosis and reconfiguration require immense reliable memories or impractical test and reconfiguration times. Efficient and effective means are needed that exploit structure inherent in one layer of architecture to provide key properties to enable reliable execution at other levels.",2009,0,
2935,The DVB television signal transmission simulation using the forward error correction codes,"The contribution deals with the simulation of digital video signal transmission through a baseband transmission channel model. The simulation model that covers selected phenomena of DVB (digital video broadcasting) system signal processing is presented. The digital video signal is represented by the digital data of one noncompressed video frame that is channel encoded and protected against errors with forward error correction (FEC) codes. The transmission channel model influences the transmitted digital data and its distortion, and perturbative signals affect the data decoding. The features of the developed interactive simulation software (a Matlab application) are outlined too, and the conclusion presents the efficiency of the used FEC codes.",2003,0,
2936,Improved current-regulated delta Modulator for reducing switching frequency and low-frequency current error in permanent magnet brushless AC drives,"The conventional current-regulated delta modulator (CRDM) results in a high current ripple and a high switching frequency at low rotational speeds, and in low-frequency current harmonics, including a fundamental current error, at high rotational speeds. An improved current controller based on CRDM is proposed which introduces a zero-vector zone and a current error correction technique. It reduces the current ripple and switching frequency at low speeds, without the need to detect the back-emf, as well as the low-frequency error at high speeds. The performance of the modulator is verified by both simulation and measurements on a permanent magnet brushless ac drive.",2005,0,
2937,Crosstalk Fault Detection for Interconnection Lines Based on Path Delay Inertia Principle,"The crosstalk fault becomes more and more important in deep submicron SoCs, and its detection involves sophisticated timing measurement. In this paper, a new test scheme to detect crosstalk faults, based on the path delay inertia, for interconnection lines in SoC is proposed. The scheme, without using timing measurement, applies a transition on the aggressor line and a critical width pulse, CWP, to the victim line and detects the propagation of the CWP at the output of the victim line.
The scheme is simple, and simulation analysis and experiments show that it is effective in detecting crosstalk faults",2005,0,
2938,Near-lossless/lossless compression of error-diffused images using a two-pass approach,"The current international standard, the Joint Bilevel Image Experts Group (JBIG), is representative of bilevel image compression algorithms. It compresses bilevel images with high performance, but it shows relatively low performance in compressing error-diffused halftone images. This paper proposes a new bilevel image compression for error-diffused images, which is based on Bayes' theorem. The proposed coding procedure consists of two passes. It groups 2 × 2 dots into a cell, where each cell is represented by the number of black dots and the locations of the black dots in the cell. The number of black dots in the cell is encoded in the first pass, and their locations are encoded in the second pass. The first pass performs a near-lossless compression, which can be refined to be lossless by the second pass. Experimental results show a high compression performance for the proposed method when it is applied to error-diffused images.",2003,0,
2939,Design of fault tolerant system based on runtime behavior tracing,"Current research on improving the reliability of operating systems has been focusing on the evolution of kernel architecture or protecting against device driver errors. In particular, device driver errors are critical to most operating systems that have kernel-level device drivers. Especially on special-purpose embedded systems, because of their limited resources and variety of devices, more serious problems are induced. Preventing data corruption or blocking the arrogation of operational level is not enough to cover all the problems. For example, when using device drivers, a violation of a function's call sequence can cause a malfunction. Also, a violation of behavior rules at the system level involves the same problem. This type of error is difficult to detect with the previous methods. Accordingly, we designed a system that traces system behavior at runtime and recovers optimally when errors are detected. We experimented with the Linux 2.6.24 kernel operating on a GP2X-WIZ mobile game player.",2010,0,
2940,Support vector machine used to diagnose the fault of rotor broken bars of induction motors,"Data-based machine learning is an important aspect of modern intelligent technology, while statistical learning theory (SLT) is a new tool that studies machine learning methods in the case of a small number of samples. As a common learning method, the support vector machine (SVM) is derived from SLT. Here we performed some analogical experiments on the rotor broken bar faults of induction motors, analyzed the signals of the sample currents with the Fourier transform, and constructed the spectrum characteristics from low frequency to high frequency, used as learning sample vectors for the SVM. After an SVM is trained with the learning sample vectors, each kind of rotor broken bar fault of induction motors can be classified. Finally the retest is demonstrated, which proves that the SVM really has a preferable ability of classification.
In this paper we tried applying the SVM to diagnose the faults of induction motors, and the results suggested that the SVM can be regarded as a new method for fault diagnosis.",2003,0,
2941,Accurate Bit-Error-Rate Analysis of Bandlimited Cooperative OSTBC Networks Under Timing Synchronization Errors,"The distributed multiple-input-multiple-output (MIMO) system (e.g., intercluster communication via cooperating nodes in a wireless sensor network) is a topic of emerging interest. Many previous studies have assumed perfect synchronization among cooperating nodes and identically distributed communication links. Such assumptions are rarely valid in practical operating scenarios. This paper develops an analytical framework for computing the average bit error rate (ABER) of a distributed multiple-input-single-output (MISO) space-time-coded system with binary phase shift keying (BPSK) modulation affected by timing synchronization errors. The cooperating nodes use data pulse-shaping filters for transmission over generalized frequency-nonselective fading channels. As an illustrative example, the performance evaluation of a 2 × 1 MISO system that uses distributed orthogonal space-time block coding (OSTBC) is presented, although this approach can be readily extended to analyze distributed transmit diversity with a larger number of cooperating nodes. We show that under certain conditions, a distributed MISO system with time synchronization errors can still outperform a perfectly synchronized single-input-single-output (SISO) system.",2009,0,
2942,Fault-tolerant mesh of trust applied to DNS security,"The Domain Name System is critical for the proper operation of applications on the Internet. Unfortunately, the DNS has a number of significant security weaknesses that can result in the compromise of Web sites, e-mail messages, and log-in sessions. Additionally, these weaknesses have been used as the basis for man-in-the-middle attacks on what are considered secure network protocols. This paper provides a short description of the weaknesses of the current DNS and a description of DNS security extensions that will solve the existing insecurities.",2003,0,
2943,A Programmable and Portable NMES Device for Drop Foot Correction and Blood Flow Assist Applications,"The Duo-STIM, a new, programmable and portable neuromuscular stimulation system for drop foot correction and blood flow assist applications, is presented. The system consists of a programmer unit and a portable, programmable stimulator unit. The portable stimulator features fully programmable, sensor-controlled, constant-voltage, dual-channel stimulation and accommodates a range of customized stimulation profiles. Trapezoidal and free-form adaptive stimulation intensity envelope algorithms are provided for drop foot correction applications, while time dependent and activity dependent algorithms are provided for blood flow assist applications. A variety of sensor types can be used with the portable unit, including force sensitive resistor based foot switches and NMES based accelerometer and gyroscope devices. The paper provides a detailed description of the hardware and block-level system design for both units. The programming and operating procedures for the system are also presented.
Finally, functional bench test results for the system are presented.",2007,0,
2944,Effects of rotor bar and end-ring faults over the signals of a position estimation strategy for induction motors,"The effect of rotor faults, such as broken bars and end rings, over the signals of a position estimation strategy for induction motor drives is analyzed using a multiple coupled circuit model. The objective is to establish the possibility of using the estimation strategy signals for fault diagnosis in variable speed electric drives. This strategy is based on the effect produced by inductance variation on the zero sequence voltage, exciting the motor with a predefined inverter-switching pattern. Experimental results illustrate the feasibility of the proposal.",2003,0,
2945,MPEG-4 video error resilient tools performance evaluation and assessment,"The effectiveness of MPEG-4 video error resilience tools is presented in this paper. The MPEG-4 video quality, measured in average peak signal to noise ratio (APSNR), that can be achieved in different channel conditions with the use of different combinations of MPEG-4 video error resilience tools will be presented in this paper.",2003,0,
2946,Fixed point error analysis of CORDIC processor based on the variance propagation,"The effects of angle approximation and rounding in the CORDIC processor have been intensively studied for the determination of design parameters. However, the conventional analyses provide only the error bound, which results in a large discrepancy between the analysis and the actual implementation. Moreover, some of the signal processing architectures require the specification in terms of the mean squared error (MSE), as in the design specification of an FFT processor for OFDM. This paper proposes a fixed point MSE analysis based on the variance propagation for a more accurate error expression of the CORDIC processor. It is shown that the proposed analysis can also be applied to the modified CORDIC algorithms. As an example of application, an FFT processor for OFDM using the CORDIC processor is presented. The results show a close match between the analysis and simulation.",2003,0,
2947,The Application of Fault Tree Analysis in Software Project Risk Management,"The fault tree model has great significance for software project risk management. According to the standard fault-tree model, this paper establishes the corresponding mathematical model, sets up the software fault tree model of a software project, and analyzes project risk probability and influence coefficients combined with actual software project risk management, thereby laying a theoretical foundation for better controlling software project risks.",2009,0,
2948,A fault-tolerant TCP scheme based on multi-images,"The fault-tolerance of TCP is a key technology to guarantee the availability of services for servers. A fault-tolerant TCP scheme based on multi-images is discussed in this paper. In this scheme, each TCP connection has two synchronous connection images, and there is no need to back up the status of every TCP connection. It uses a module mechanism in the kernel of Linux and does not affect the software running on either the client or the server. It also guarantees that each TCP connection is taken over seamlessly when it fails, transparently to the user.
At the same time, it also tries to reduce the side effects on system performance on the premise of ensuring the fault tolerance of each connection.",2003,0,
2949,Holistic schedulability analysis of a fault-tolerant real-time distributed run-time support,"The feasibility test of a hard real time system must not only take into account the temporal behavior of the application tasks but also the behavior of the run-time support in charge of executing applications. The paper is devoted to the schedulability analysis of a run-time support for distributed dependable hard real time applications. In contrast to previous works that consider rather simple run-time supports (e.g. a real time kernel made of a simple tick scheduler and an unreliable communication protocol), our work deals with a complex run-time support with fault tolerance capabilities and made of multiple tasks that invoke each other",2000,0,
2950,Stochastic change detection based on an active fault diagnosis approach,"The focus in this paper is on stochastic change detection applied in connection with active fault diagnosis (AFD). An auxiliary input signal is applied in AFD. This signal injection in the system will in general allow a fast change detection/isolation to be obtained by considering the output or an error output from the system. The classical CUSUM (cumulative sum) method will be modified such that it will be able to detect a change in the signature from the auxiliary input signal in the (error) output signal. It will be shown how it is possible to apply both the gain and the phase change of the output vector in the CUSUM test.",2007,0,
2951,Discrete game abstraction for fault tolerant control synthesis,"The focus of this paper is on the generation of controllers for fault tolerant control. Active off-line fault tolerant control design relies on a pool of controllers designed to handle different failure modes. Designing this pool of controllers remains a challenge, and this paper hence investigates automatic control synthesis for a specific class of systems by abstracting the control problem to a set of combinatorial objects or a state machine. Next, a discrete game is formulated, in which the objective is to reach a goal set representing a desired (reference) state, and faults are treated as moves by a malignant opponent. Assuming that a winning strategy exists for this game, it forms a rule base that determines which control law to use under a given fault condition.",2008,0,
2952,Proceedings. 18th IEEE International Symposium on Defect and Fault Tolerance in VLSI Systems,"The following topics are dealt with: yield and defects; optoelectronics; fault analysis, injection and simulation; test and diagnosis; current test and diagnosis; test generation and application; scan design and test; BIST; error correcting codes; analogue and mixed signal test; defect tolerance and testing; FPGA and memory test; design verification and synthesis; SoC and core test; system reliability; fault tolerance; soft errors.",2003,0,
2953,Dynamic node management and measure estimation in a state-driven fault injector,"The following topics were dealt with: visual querying and data exploration; graphs and hierarchies; taxonomies, frameworks and methodology; document visualization and collaborative visualization; algorithm visualization; and 3D navigation",2000,0,
2954,Application of Prony's method for induction machine stator fault diagnosis,"The fundamental positive and negative sequence components of the voltage and current are key characteristics in induction machine stator fault diagnosis. This paper presents the application of Prony's method for deriving these components from the measured data. It is known that Prony's method is sensitive to noise and takes a long computation time as the length of the data increases. In order to enhance its performance, pretreatments, including frame transformation, frequency shifting and decimation, are employed before using Prony's method. Finally, the fundamental sequence components, figured out by using Prony's method, are put into a neural network to calculate the indicator for diagnosis. The proposed technique is verified by applying it to diagnose the stator fault in a three-phase induction motor.",2008,0,
2955,Research on the Fault Diagnosis for Boiler System Based on Fuzzy Neural Network,"Fuzzy logic was applied to the neural network, and the application of fault diagnosis for a boiler system with the integrated fuzzy neural network is investigated on the basis of introductions to the basic principle of the artificial neural network (ANN) and the principle of fault diagnosis for boiler systems based on neural networks. An example of the training process and testing results for the boiler sample is given. Finally, it is proved through error analysis that this integrated method can acquire a better result in fault diagnosis for boilers compared with the traditional standard BP network.",2009,0,
2956,Fault-Tolerant Rate-Monotonic Scheduling Algorithm in Uniprocessor Embedded Systems,"The general approach to fault tolerance in uniprocessor systems is to use time redundancy in the schedule so that any task instance can be re-executed in the presence of faults during the execution. In this paper a scheme is presented to add enough and efficient time redundancy to the rate-monotonic (RM) scheduling policy for periodic real-time tasks. This scheme can be used to tolerate transient faults during the execution of tasks. For performance evaluation of this idea a tool is developed",2006,0,
2957,Effects of time synchronization errors in GNSS-aided INS,"The effects of time synchronization errors in a GNSS-aided inertial navigation system (INS) are studied in terms of the increased error covariance of the state vector. Expressions for evaluating the error covariance of the navigation state vector - given the vehicle trajectory and the model of the INS error dynamics - are derived. Two different cases are studied in some detail.
The first case considers a navigation system in which the timing error is not included in the integration filter. This leads to a system with an increased error covariance and a bias in the estimated forward acceleration. In the second case, a parameterization of the timing error is included as a part of the estimation problem in the data integration. Simulation results show that by including the timing error in the estimation problem, almost perfect time synchronization is obtained and the bias in the forward acceleration is removed.",2008,0, 2958,Fault diagnosis of rotor winding inter-turn short circuit in turbine-generator based on BP neural network,"The electromagnetic characteristic and rotor vibration characteristic of a turbine-generator are analyzed when a rotor winding inter-turn short circuit fault has occurred. This paper reveals that the exciting magnetic force Ff is constant in a fixed condition whereas the exciting current If increases in the case of a rotor inter-turn fault. This paper also finds relevant characteristic parameters. Based on the theory, we can get training patterns without doing destructive tests. Then a BP (back propagation) neural network can be adequately trained to diagnose rotor winding inter-turn short circuits. The BP neural network is independent of mathematical models and parameters of the turbine-generator. Finally, with practically acquired dynamic experiment data of the MJF-30-6 generator, the results of verification show that the theoretical analysis is right and the method is efficient and accurate.",2008,0, 2959,Improving Accuracy in the Montgomery County Corrections Program Using Case-Based Reasoning,"The Montgomery County corrections program is designed to address the problem of overcrowded jails by providing an out-of-jail rehabilitative program as an alternative. The candidate offenders chosen for this program are offenders convicted on nonviolent charges and are currently chosen subjectively with little statistical basis. In addition, historical data has been recorded on offenders who have passed through the program, making the program a good candidate for case-based reasoning. Using such reasoning, county officials would like an objective measurement which will predict the success or failure of a candidate offender based on past offender history. The four case-based reasoning algorithms chosen for this prediction are discrete, continuous and distance weighted k-nearest neighbors and a general regression neural network (GRNN). Although all four algorithms prove to be an improvement on the current system, the GRNN performs the best, with an average accuracy rate of 68%.",2008,0, 2960,Accelerated functional modeling of aircraft electrical power systems including fault scenarios,"The more-electric aircraft concept is a fast-developing trend in modern aircraft power systems and will result in an increase in electrical loads fed by power electronic converters. Finalizing the architectural bus paradigm for the next generation of more-electric aircraft involves extensive simulations ensuring power system integrity. Since the possible number of loads in an on-board power system can be very large, the development of accurate, effective and computational time-saving models is of great importance. This paper focuses on the development of a modeling approach based on the functional representation of individual power system units. This provides the possibility of fast simulation of a full generator-load power system under both normal and fault conditions.
The paper describes the modeling principle, illustrates the acceleration attainable and shows how the functional representation can handle fault scenarios.",2009,0, 2961,Assessing Asymmetric Fault-Tolerant Software,"The most popular forms of fault tolerance against design faults use ""asymmetric"" architectures in which a ""primary"" part performs the computation and a ""secondary"" part is in charge of detecting errors and performing some kind of error processing and recovery. In contrast, the most studied forms of software fault tolerance are ""symmetric"" ones, e.g. N-version programming. The latter are often controversial, the former are not. We discuss how to assess the dependability gains achieved by these methods. Substantial difficulties have been shown to exist for symmetric schemes, but we show that the same difficulties affect asymmetric schemes. Indeed, the latter present somewhat subtler problems. In both cases, to predict the dependability of the fault-tolerant system it is not enough to know the dependability of the individual components. We extend to asymmetric architectures the style of probabilistic modeling that has been useful for describing the dependability of ""symmetric"" architectures, to highlight factors that complicate the assessment. In the light of these models, we finally discuss fault injection approaches to estimating coverage factors. We highlight the limits of what can be predicted and some useful research directions towards clarifying and extending the range of situations in which estimates of coverage of fault tolerance mechanisms can be trusted.",2010,0, 2962,A Case Study for Fault Tolerance Oriented Programming in Multi-core Architecture,"The multi-core architecture brings more and more challenges and means to common software developers. Reliable software system design approaches can give high confidence that long-running online software systems run correctly, but these approaches inevitably cause some loss of efficiency. We found that the multi-core architecture is a quite suitable platform to support reliable software system design and can make the cost acceptable because of its advantages in parallel performance and prevalence. In this paper we make use of the multi-core architecture to support software fault tolerance. This approach will make the integration of software fault tolerance and the multi-core architecture a common design choice. According to the idea of software fault tolerance, for some key software units in a system we can develop N separate versions of them with equivalent functionalities. Each version is developed independently by an isolated group to prevent identical faults among versions. All implemented versions run separately from the same initial conditions and inputs. Outputs of all redundant versions are submitted to a decision module that determines a single result from multiple results as the correct output. In this paper, we give a case study to show that with the multi-core architecture, the redundant versions of a key software unit can run in parallel on different cores to improve the efficiency.",2009,0, 2963,A formal specification of fault-tolerance in prospecting asteroid mission with Reactive Autonomie Systems Framework,"NASA's Autonomous Nano Technology Swarm (ANTS) is a generic mission architecture consisting of miniaturized, autonomous, self-similar, reconfigurable, and addressable components forming structures.
The Prospecting Asteroid Mission (PAM) is one of the ANTS applications for the survey of large dynamic populations. In this paper, we propose a formal approach based on Category Theory to specify the fault-tolerance property in PAM with the Reactive Autonomie Systems Framework.",2010,0, 2964,Detection of Rotor Faults in Squirrel-Cage Induction Motors using Adjustable Speed Drives,"The need for detection of rotor faults at an earlier stage, so that maintenance can be planned ahead, has pushed the development of monitoring methods with increasing sensitivity and noise immunity. Addressing diagnostic techniques based on motor current signature analysis (MCSA), the characteristic components introduced by specific faults in the current spectrum are investigated and a diagnosis procedure correlates the amplitudes of such components to the fault extent. In this paper, the impact of feedback control on the behavior of an induction machine with an asymmetric rotor cage is analyzed. It is shown that the variables usually employed in diagnosis procedures assuming open-loop operation are no longer effective under closed-loop operation. Simulation results show that signals already present at the drive are suitable for an effective diagnostic procedure. The utilization of the current regulator error signals and the influence of the regulators' gains on their utilization in rotor failure detection are the aim of the present work. The use of a band-pass filter bank to detect the presence of sidebands is also proposed in the paper",2006,0, 2965,Modeling and simulation of turn-fault and unbalance magnetic pull in induction motor based on magnetic equivalent circuit method,"The need for detection of turn faults at an early stage, such that maintenance can be planned ahead, has pushed the development of monitoring methods with increasing sensitivity and noise immunity. An important issue in such an effort is the modeling of the induction motor under turn fault conditions with minimum computation and complexity and maximum accuracy. A magnetic equivalent circuit is employed in the simulation and analysis of a squirrel cage induction motor with a turn-to-turn fault and the unbalanced magnetic pull that arises from this fault. Simulation and experimental results are presented to support the proposed approach.",2005,0, 2966,Techniques for BRDF Correction of Hyperspectral Mosaics,"The need to correct view- and sun-angle-dependent intensity gradients in aerial and satellite images is well established, with bidirectional reflectance distribution function (BRDF) kernel-based techniques receiving much of the recent attention in the literature. In these methods, a plausible (physical or empirical) model of the BRDF is fitted to the data and then used to normalize the image to standard solar and view angles. As yet, very little attention has been paid to the case where the images are hyperspectral (i.e., there are many contiguous bands captured simultaneously), beyond using known techniques on each band. This can lead to loss of spectral integrity and poor agreement on overlapping regions unless careful attention is paid to these factors. In this paper, a range of techniques that can be employed for the purpose of hyperspectral mosaicking is presented.",2010,0, 2967,Design of a power factor correction ac-dc converter,The need to keep EMI emissions by electronic power supplies below the limit specified by international standards has dictated that any new power supply design must include active power factor correction at the front end.
The modern trend in power supply designs is towards digital control. This paper reports an effort to design a power factor correction AC-DC converter that is geared towards digital control. The power factor correction circuit employs a zero voltage transition arrangement to minimize switching losses. Calculations that led to the selection of the circuit components are presented. Interface requirements between the power converter stage and the digital control processor are tackled. The average current mode control method is employed in the controller. The complete design has been tested by means of the Powersim power electronics simulation software. The resulting input voltage and current waveforms show that the design is successful.,2007,0, 2968,Fault analysis in four-wire distribution networks,"The neutral wire in most existing power flow and fault analysis software is usually merged into phase wires using Kron's reduction method. In some applications, such as fault analysis, fault location, power quality studies, safety analysis, loss analysis etc., knowledge of the neutral wire and ground currents and voltages could be of particular interest. A general short-circuit analysis algorithm for three-phase four-wire distribution networks, based on the hybrid compensation method, is presented. In this novel use of the technique, the neutral wire and assumed ground conductor are explicitly represented. A generalised fault analysis method is applied to the distribution network for conditions with and without embedded generation. Results obtained from several case studies on medium- and low-voltage test networks with unbalanced loads, for isolated and multi-grounded neutral scenarios, are presented and discussed. Simulation results show the effects of neutrals and system grounding on the operation of the distribution feeders.",2005,0, 2969,The research and implementation of a CORBA-based architecture for adaptive fault tolerance in distributed systems,"The new generation of complex mission-critical systems (such as air traffic control systems, security monitoring systems and real time systems) is inherently distributed and operates in highly dynamic environments. Fault tolerance is a main means of assuring system reliability. A single fault tolerance policy cannot satisfy the dynamic changes of these systems, so their fault tolerance mechanisms should provide more intelligence to adapt in response to changes in system resources, application demands and user requirements, and to improve resource utilization. This paper presents a CORBA-based architecture called AFTLSDS for adaptive fault tolerance in distributed systems whose design can satisfy the requirements of new-generation complex mission-critical systems. We put emphasis on its components and design policy. Finally, we give a prototype implementation of this architecture and our conclusions.",2002,0, 2970,Study of fault tolerance control for power management system,"The new generation of vessels, especially those with electrical power propulsion installations, has a complex power system configuration. On the basis of the electric propulsion power management system, the fault-tolerance control technology of the power management system is explored and the functional structure of this fault-tolerance control system is constructed.
In addition, some main functions of this system, such as information checkout, the parallel operation method, and control instruction validation and decision, are described.",2009,0, 2971,Backward-compatible robust error protection of JPEG XR compressed video,"The new JPEG XR image encoding standard offers a great compression rate while maintaining good visual quality. Nonetheless, it has low error robustness, making it unusable in the case of unreliable transmission over error prone channels, e.g., wireless channels. An improvement to the standard was developed, which can correct transmission errors, both bit and packet losses, and which is fully compatible with legacy decoders. Data interleaving and channel coding can offer good protection against transmission errors; different levels of protection can be adopted, in order to trade off between error protection capabilities and decompressed image quality.",2010,0, 2972,Fault Location for the NEPTUNE Power System,"The objective of the North Eastern Pacific Time-Series Undersea Networked Experiment (NEPTUNE) program is to construct an underwater cabled observatory on the floor of the Pacific Ocean, encompassing the Juan de Fuca Tectonic Plate. The power system associated with the proposed observatory is unlike conventional terrestrial power systems in many ways due to the unique operating conditions of underwater cabled observatories. In the event of a backbone cable fault, the location of the fault must be identified accurately so that a repair ship can be sent to repair the cable. Due to the proposed networked, mesh structure, traditional techniques for cable fault identification cannot achieve the desired level of accuracy. In this paper, a system-theoretic method is proposed for identification of the fault location based on the limited data available. The method has been tested with extensive simulations and is being implemented for the field test in Monterey, California. In this study, a lab test is performed for the fault location function",2007,0, 2973,Mixed test structure for soft and hard defect detection,"The objective of this paper is to present a mixed test structure designed to characterize yield losses due to hard defects and back-end process variation (PV) at the die and wafer level. A brief overview of the structure, designed in an ST-Microelectronics 130 nm technology, is given. This structure is based on an SRAM memory array for detecting hard defects. Moreover, each memory cell can be configured in the ring oscillator (RO) mode for back-end PV characterization. The structure is tested in both modes (SRAM, RO) using a single test flow. Experimental results are given and confirm the ability of the structure to monitor PV and defect density.",2008,0, 2974,Analysis of arcing fault models,"The objective of this paper is to present, discuss and compare, in some detail, the arc models used to represent an arcing fault, in order to determine which of them is the most precise for this purpose. At the beginning of the paper a brief explanation of arcing faults is given, bringing out the importance of a realistic simulation of them. A theoretical description of the black box equations used to model the arc is given. The paper concludes with a comparison of the most representative arc models. Results will be useful not only in arcing fault detection but also in the design of autoreclosure schemes on transmission lines.
The implementation and simulation method are based on ATP/EMTP.",2008,0, 2975,A new functional fault model for FPGA application-oriented testing,"The objective of this paper is to propose a new fault model suitable for test pattern generation for an FPGA configured to implement a given application. The paper demonstrates that the faults affecting the bit cells of the look-up tables (LUTs) are not redundant, although they store constant values. We demonstrate that these faults cannot be neglected and that the fault model corresponding to modifying the content of each LUT memory cell must be considered in order to cover the full range of possible faults. In order to evaluate the fault coverage of the proposed fault model, a set of circuits mapped on a Xilinx Virtex 300 FPGA has been considered. Test sequences generated by a gate-level commercial ATPG and an academic RT-level one have been fault simulated on these benchmark circuits. The obtained figures show that a high percentage of faults affecting the LUT bit cells are undetected, thus suggesting that suitable ATPG algorithms adopting the new fault model are required.",2002,0, 2976,Fault tolerant generator systems for wind turbines,"The objective of this paper is to review the possibilities of applying fault tolerance in generator systems for wind turbines based on what has been presented in the literature. In order to make generator systems fault tolerant in a suitable way, it is necessary to gain insight into the probability of different failures, so that suitable measures can be taken. Therefore, a literature survey of the reliability of wind turbines, electrical machines and power electronic converters is given. Five different ways of achieving fault tolerance identified in the literature are discussed together with their applicability for wind turbines: (1) converters with redundant semiconductors, (2) fault tolerant converter topologies, (3) fault tolerance by increasing the number of phases, (4) fault tolerance of switched reluctance machines, and (5) design for fault tolerance of PM machines and converters. Because converters fail more often than machines, it makes sense to use fault tolerant converter topologies. Increasing the number of phases is a useful form of fault tolerance because it can be achieved without increasing the cost significantly.",2009,0, 2977,Evaluation of Respiratory Motion Effect on Defect Detection in Myocardial Perfusion SPECT: A Simulation Study,"The objective of this study is to investigate the effects of respiratory motion (RM) on defect detection in Tc-99m sestamibi myocardial perfusion SPECT (MPS) using a phantom population that includes patient variability. Three RM patterns are included, namely breath-hold, slightly enhanced normal breathing, and deep breathing. For each RM pattern, six 4-D NCAT phantoms were generated, each with anatomical variations. Anterior, lateral and inferior myocardial defects with different sizes and contrasts were inserted. Noise-free SPECT projections were simulated using an analytical projector. Poisson noise was then added to generate noisy realizations. The projection data were reconstructed using the OS-EM algorithm with 1 and 4 subsets/iteration and at 1, 2, 3, 5, 7, and 10 iterations. Short-axis images centered at the centroid of the myocardial defect were extracted, and the channelized Hotelling observer (CHO) was applied for the detection of the defect.
The CHO results show that the value of the area under the receiver operating characteristics (ROC) curve (AUC) is affected by the RM amplitude. For all the defect sizes and contrasts studied, the highest or optimal AUC values, which indicate maximum detectability, decrease with the increase of the RM amplitude. With no respiration, the ranking of the optimal AUC value in decreasing order is anterior, then lateral, and finally inferior defects. The AUC value of the lateral defect drops more severely as the RM amplitude increases compared to other defect locations. Furthermore, as the RM amplitude increases, the AUC values of the smaller defects drop more quickly than the larger ones. We demonstrated that RM affects defect detectability of MPS imaging. The results indicate that developments of optimal data acquisition methods and RM correction methods are needed to improve the defect detectability in MPS.",2009,0, 2978,Comparison of detection accuracy of perfusion defects in SPECT for different reconstruction strategies using polar-map quantitation,"The objective of this study was to determine the impact on detection accuracy of coronary artery disease (CAD) from different combinations of compensation for attenuation, scatter, and resolution using unified normal databases with polar-map quantitation for detection. We studied 102 individuals who either underwent X-ray angiography (57) or were determined to be less than 5% likely to have CAD (45). The low-likelihood group was identified using standard criteria. Both groups underwent stress testing (physical or pharmacological) before imaging commenced. A Philips Medical Systems Prism 3000 (Philips Medical Systems, Cleveland, OH) SPECT camera was used for all acquisitions. The standard filtered backprojection (FBP) reconstruction with no attenuation or scatter compensation was performed on the emission data using the 180° data from RAO to LPO. A Butterworth pre-filter of order 5 with a cutoff of 0.25 cycles per pixel was used. Emission data were also reconstructed through 360° using an ordered-subset expectation-maximization (OSEM) algorithm. We used 15 subsets of four angles each with one iteration employed for attenuation compensation (AC) alone, and for AC in combination with scatter compensation (SC). Resolution compensation (RC) was included with the previous compensation methods using 5 iterations of OSEM. Three-dimensional post-reconstruction Gaussian filtering was performed using a standard deviation (sigma) of 0.75 pixels. For the detection of CAD, a progressive improvement in the detection accuracy was observed from FBP without added compensation to OSEM with AC, SC, and RC included. This trend was not consistently observed when comparing detection accuracy for the individual territories, but OSEM with all three compensations had generally the best detection accuracy. Our patient population presented a significant problem with subdiaphragmatic activity due to the large percentage of pharmaceutical stress patients (65%). Therefore, the inferior wall (right coronary artery) territory showed evidence of deterioration in detection accuracy for solely AC as opposed to FBP due to the presence of significant contamination from subdiaphragmatic activity in this patient population.",2003,0, 2979,An Adaptable Fault-Tolerance for SOA using a Peer-to-Peer Framework,"The onset of Service Oriented Architectures (SOA) in domains like trading and banking has considerably heightened the need for dependable system operation.
Late binding to services in business-to-business operations poses a serious problem for dependable system operation as it delegates the decision to trust a service to an external agent. However, it is impossible to guarantee that any service is 100% fault-free due to failings in hardware, software and human error. This means that fault tolerance remains the most practical way to address the problem. Unfortunately, there is currently no standard way to achieve this in SOA. This paper describes a novel adaptable fault tolerance framework that overlays a peer network formed by JXTA technology protocols to address this problem. We have adopted a layered approach by incrementally adding protocols and supporting the code infrastructure. The framework is implemented exclusively in Java and XML to ensure cross platform compatibility.",2007,0, 2980,The bridge-type fault current controller - a new FACTS controller,"The operation of a novel current controller, which can also function as a fault current limiter and as a solid-state AC circuit breaker, is presented. The controller, which consists of a thyristor bridge, an inductor and an optional bias power supply, is installed in series with the voltage source and the load. For load current values smaller than a preset value, the inductor of the current controller presents no impedance to the AC current flow. For values higher than the preset current value, the inductor is switched automatically into the AC circuit and limits the amount of current flow. Theoretical results in the form of circuit simulations and experimental results with a single-phase unit, operating on a 13.7 kV three-phase system with peak short-circuit currents of 3140 A (rms), are presented.",2002,0, 2981,On-line detection of stator winding faults in controlled induction machine drives,"The operation of induction machines with fast switching power electric devices puts additional stress on the stator windings, which leads to an increased probability of machine faults. These faults can cause considerable damage and repair costs and - if not detected in an early stage - may end up in a total destruction of the machine. To reduce maintenance and repair costs many methods have been developed and presented in literature for an early detection of machine faults. This paper gives an overview of today's detection techniques and divides them into three major groups according to their underlying methodology. The focus will be on methods which are applicable to today's inverter-fed machines. In that case, and especially if operated under controlled mode, the behavior of the machine with respect to the fault is different than for grid supply. This behavior is discussed and suitable approaches for fault detection are presented. Which method to eventually choose will depend on the application and the available sensors as well as hard- and software resources, always considering that the additional effort for the fault detection algorithm has to be kept as low as possible. The applicability of the presented fault detection techniques is also confirmed with practical measurements.",2005,0, 2982,The neural network technology applied on fault detection of locomotive converter,"The paper analyzes the fault principle of the SS7E locomotive converter, and classifies its fault types.
The paper describes a method of adopting wavelet analysis to extract energy eigenvectors from the output voltage wave of the converter, and proposes a fault diagnosis method for the converter using energy eigenvectors and neural-network technology. The hardware frame and software design of the fault diagnosis system are discussed and an application scheme to diagnose the specific fault position of the converter online by adding a few monitoring points is proposed. The validity of the method is proved by computer simulation.",2010,0, 2983,Temperature correction to chemoresistive sensors in an e-NOSE-ANN system,"The influence of the temperature coefficient of resistance in the chemoresistive response of inherently conductive polymer (ICP) sensors on the performance of an artificial neural network (ANN) e-natural olfactory sensor emulator (e-NOSE) system is evaluated. Temperature was found to strongly influence the response of the chemoresistors, even over modest ranges (ca. 2°C). An e-NOSE array of eight ICP sensor elements, a relative humidity (RH ±0.1%) sensor, and a resistance temperature device (RTD ±0.1°C) was tested at five different RH levels while the temperature was allowed to vary with the ambient. A temperature correction algorithm based on the temperature coefficient of resistance β for each material was independently and empirically determined and then applied to the raw sensor data prior to input to the ANN. Conversely, uncorrected data was also passed to the ANN. The performance of the ANN was evaluated by determining the error between the actual humidity and the calculated humidity. The error obtained using raw input sensor data was found to be 10.5% and using temperature corrected data, 9.3%. This negligible difference demonstrates that the ANN was capable of adequately addressing the temperature dependence of the chemoresistive sensors once temperature was inclusively passed to the ANN.",2003,0, 2984,Injection of faults at component interfaces and inside the component code: are they equivalent?,"The injection of interface faults through API parameter corruption is a technique commonly used in experimental dependability evaluation. Although the interface faults injected by this approach can be considered as a possible consequence of actual software faults in real applications, the question of whether the typical exceptional inputs and invalid parameters used in these techniques do represent the consequences of software bugs is largely an open issue. This question may not be an issue in the context of robustness testing aimed at the identification of weaknesses in software components. However, the use of interface faults by API parameter corruption as a general approach for dependability evaluation in component-based systems requires an in depth study of interface faults and a close observation of the way internal component faults propagate to the component interfaces. In this paper we present the results of experimental evaluation of realistic component-based applications developed in Java and C using the injection of interface faults by API parameter corruption and the injection of software faults inside the components by modification of the target code. The faults injected inside software components emulate typical programming errors and are based on an extensive field data study previously published.
The results show the consequences of internal component faults in several operational scenarios and provide empirical evidence that interface faults and software component faults cause different impacts on the system",2006,0, 2985,Fault diagnosis on analog circuits based on Integrated Learning Method,"The Integrated Learning Method (ILM) uses multiple learners to solve the same problem, which can greatly improve the generalization ability of learning systems. To address fault diagnosis on analog circuits, and aiming at the shortcomings in diagnosis and model stability of a single RBF neural network for diagnosing faults of an analog circuit system, the paper discusses a method to improve model diagnosis accuracy with the Bagging algorithm of ILM to integrate multiple neural networks. The experiment results show the adoption of this scheme can significantly improve the performance of the neural network diagnostic model.",2010,0, 2986,DSP-Based Sensorless Electric Motor Fault-Diagnosis Tools for Electric and Hybrid Electric Vehicle Powertrain Applications,"The integrity of electric motors in work and passenger vehicles can best be maintained by frequently monitoring their condition. In this paper, a signal processing-based motor fault-diagnosis scheme is presented in detail. The practicability and reliability of the proposed algorithm are tested on rotor asymmetry detection at zero speed, i.e., at startup and idle modes in the case of a vehicle. Regular rotor asymmetry tests are done when the motor is running at a certain speed under load with stationary current signal assumption. It is quite challenging to obtain these regular test conditions for long-enough periods of time during daily vehicle operations. In addition, automobile vibrations cause nonuniform air-gap motor operation that directly affects the inductances of electric motors and results in a noisy current spectrum. Therefore, it is challenging to apply conventional rotor fault-detection methods while examining the condition of electric motors as part of the hybrid electric vehicle (HEV) powertrain. The proposed method overcomes the aforementioned problems by simply testing the rotor asymmetry at zero speed. This test can be achieved at startup or repeated during idle modes, where the speed of the vehicle is zero. The proposed method can be implemented at no cost using the readily available electric motor inverter sensor and microprocessing unit. Induction motor fault signatures are experimentally tested online by employing the drive-embedded master processor [TMS320F2812 digital signal processor (DSP)] to prove the effectiveness of the proposed method.",2009,0,1184 2987,An Ontology Modeling Method of Mechanical Fault Diagnosis System Based on RSM,"The intelligent level and diagnostic accuracy of a mechanical fault diagnosis system depend on the knowledge quantity and quality in its library, while fusing existing knowledge is an important method to increase the knowledge quantity and quality in the library. Accordingly, this paper uses the resource space model (RSM) of the knowledge grid (KG) to classify and manage fault diagnosis knowledge, then proposes an ontology modeling method for a mechanical fault diagnosis system.
Based on this method, we use Protégé 4 to construct an ontology for AC motor fault diagnosis.",2009,0, 2988,Codesign and Simulated Fault Injection of Safety-Critical Embedded Systems Using SystemC,"The international safety standard IEC-61508 highly recommends fault injection techniques in all steps of the development process of safety-critical embedded systems, in order to analyze the reaction of the system in a faulty environment and to validate the correct implementation of fault tolerance mechanisms. Simulated fault injection enables an early dependability assessment that reduces the risk of late discovery of safety related design pitfalls and enables the analysis of fault tolerance mechanisms at each design refinement step using techniques such as failure mode and effect analysis. This paper presents a SystemC based executable modeling approach for the codesign and early dependability assessment by means of simulated fault injection of safety-critical embedded systems, which reduces the gap between the abstractions at which the system is designed and assessed. The effectiveness of this approach is examined in a train on-board safety-critical odometry example, which combines fault tolerance and sensor-fusion.",2010,0, 2989,Evaluation of the H.264 Scalable Video Coding in Error Prone IP Networks,"The Joint Video Team, composed of the ISO/IEC Moving Picture Experts Group (MPEG) and the ITU-T Video Coding Experts Group (VCEG), has standardized a scalable extension of the H.264/AVC video coding standard called scalable video coding (SVC). H.264/SVC provides scalable video streams which are composed of a base layer and one or more enhancement layers. Enhancement layers may improve the temporal, the spatial or the signal-to-noise ratio resolutions of the content represented by the lower layers. One of the applications of this standard is related to video transmission in both wired and wireless communication systems, and it is therefore important to analyze in which way packet losses contribute to the degradation of quality, and which mechanisms could be used to improve that quality. This paper provides an analysis and evaluation of H.264/SVC in error prone environments, quantifying the degradation caused by packet losses in the decoded video. It also proposes and analyzes the consequences of QoS-based discarding of packets through different marking solutions.",2008,0, 2990,Journey error analysis and compensation for automatic overhead travelling crane,"The journey accuracy of an overhead travelling crane was improved based on error compensation. Beginning with an analysis of error sources, a holographic error model was established. For each error source, the effect was analyzed to identify the main error sources which needed to be considered. The error compensation methods corresponding to the relevant main error sources were discussed by different means. The error compensation algorithm designed according to the research results was applied to compensating the journey error of the overhead travelling crane. The simulation experiment validated that the algorithm was effective.
Through a large number of on-site tests, the maximal error between the actual position and the ideal position was +2.5 cm, which well satisfies the needs of production.",2009,0, 2991,Focus Error Analysis and Simulation in Multi-Layer Data Storage System,"The key problem in a multi-layer data storage readout system is how to separate the signals of the different layers and avoid the crosstalk between them. The adjusted Fuke prism method and dip incidence are used in the focusing control of the readout system. Since the practical requirements for fixing the prism are high, the variation of the defocus signal with the prism fixing bias is analyzed and simulated in this paper. It is proved by theory and simulation that the focus error signal can be separated from each layer and be received by the detector when the fixing error and bias are taken into consideration.",2009,0, 2992,Influence of spatial wavefront characteristics of laser radiation on quality intracavity modal correction,"Laser systems supplied with an adaptive optical system allow obtaining an output beam with high-quality spatial-power characteristics. The efficiency of functioning of an adaptive optical system with phase conjugation and modal correction by the bimorph deformable mirror of the resonator was investigated. The form of the mirror surface was modeled by Zernike polynomials, and the wavefront was modeled by spline interpolation of a limited set of random numbers. Various optimization methods were used in solving the task of forming the control action on the deformed mirror. The influence of the standard deviation and relative entropy of both the phases and the angles of the slopes of the wavefront on the quality of correction was examined. The quality of correction was estimated from the static and dynamic errors of wavefront reconstruction. Moreover, the connection among the spatial characteristics of the radiation beam was estimated",2005,0, 2993,From fault trees to diagnostics knowledge base: Case study and quantitative improvement assessment,"The level of readiness and availability of equipment and systems is strongly affected by troubleshooting and problem resolution quality. Until recently, the most pervasive method of supporting technicians in achieving high problem resolution performance has been the creation of fault trees. It is now accepted that knowledge-based problem resolution has many advantages over the fault tree method, including a vast acceleration in creating the troubleshooting support tool, easier modification, higher accuracy, and learning capability. However, there still exists a large body of problem resolution knowledge ""frozen"" into fault charts. This paper describes a systematic and automatic method of converting this knowledge into a model-based diagnostics knowledge base and describes one case in which the ClickFix intelligent troubleshooting support software was used to facilitate this conversion. The results prove the validity of this method and show the advantages in terms of automatic minimization of time required to complete fault isolation as well as increased diagnostics accuracy and power to handle unforeseen cases and multiple faults, and in terms of ease of modification.",2002,0, 2994,Toward Reducing Fault Fix Time: Understanding Developer Behavior for the Design of Automated Fault Detection Tools,"The longer a fault remains in the code from the time it was injected, the more time it will take to fix the fault.
Increasingly, automated fault detection (AFD) tools are providing developers with prompt feedback on recently-introduced faults to reduce fault fix time. If, however, the frequency and content of this feedback do not match the developer's goals and/or workflow, the developer may ignore the information. We conducted a controlled study with 18 developers to explore what factors are used by developers to decide whether or not to address a fault when notified of the error. The findings of our study lead to several conjectures about the design of AFD tools to effectively notify developers of faults in the coding phase. The AFD tools should present fault information that is relevant to the primary programming task with accurate and precise descriptions. The fault severity and the specific timing of fault notification should be customizable. Finally, the AFD tool must be accurate and reliable to build trust with the developer.",2007,0, 2995,Flexible Error Concealment for H.264 Based on Directional Interpolation,"The loss of packets cannot be avoided if real-time video is transported over error prone environments. To conceal missing parts of video pictures, the spatial and temporal correlation of natural video sequences is used. However, in some cases - for instance in the case of a scene change - there is no temporal correlation available and thus spatial error concealment has to be used. This article proposes a flexible spatial error concealment method based on directional interpolation that performs well even if only two neighboring boundaries are available, as is common for H.264 spatially predicted frames. The proposed method was implemented and tested in an H.264 codec together with other error concealment methods to evaluate their performance",2005,0, 2996,Adaptive compensation for position error signal nonlinearity,"The magnetoresistive (MR) magnetic head is a poor positioning transducer for a disk file's servo control system because its positioning response is nonlinear with radial displacement. This paper shows how the MR head's poor positioning properties are alleviated by a self-adjusting adaptive algorithm that allows a disk file to linearize its own servo position error signal (PES). The adaptive linearizer uses a nonlinear state estimator whose nonlinearity adjusts to match the nonlinearity of the PES. As the match between the two nonlinearities adaptively improves, the state estimator gives increasingly accurate estimates of the true actuator position",2002,0, 2997,Spike-Timing Error Backpropagation in Theta Neuron Networks,"The main contribution of this letter is the derivation of a steepest gradient descent learning rule for a multilayer network of theta neurons, a one-dimensional nonlinear neuron model. Central to our model is the assumption that the intrinsic neuron dynamics are sufficient to achieve consistent time coding, with no need to involve the precise shape of postsynaptic currents; this assumption departs from other related models such as SpikeProp and Tempotron learning. Our results clearly show that it is possible to perform complex computations by applying supervised learning techniques to the spike times and time response properties of nonlinear integrate and fire neurons. Networks trained with our multilayer training rule are shown to have similar generalization abilities for spike latency pattern classification as Tempotron learning.
The rule is also able to train networks to perform complex regression tasks that neither SpikeProp nor Tempotron learning appears to be capable of.",2009,0, 2998,Applying evolving fuzzy models with adaptive local error bars to on-line fault detection,"The main contribution of this paper is a novel fault detection strategy, which is able to cope with changing system states in on-line measurement systems fully automatically. To do so, an improved fault detection logic is introduced which is based on data-driven evolving fuzzy models. These are trained sample-wise from online measurement data, i.e. the structure and rules of the models evolve over time in order to cope with 1) high-frequency measurement recordings and 2) online changing operating conditions. The evolving fuzzy models represent (changing) non-linear dependencies between certain system variables and are used for calculating the deviation between expected model outputs and real measured values on new incoming data samples (→ residuals). The residuals are compared with local confidence regions surrounding the evolving fuzzy models, so-called local error bars, incrementally calculated synchronously to the models. The behavior of the residuals is analyzed over time by an adaptive univariate statistical approach. Evaluation results based on high-dimensional measurement data from engine test benches are demonstrated at the end of the paper, where the novel fault detection approach is compared against static analytical (fault) models.",2008,0, 2999,Experimental analysis of the errors induced into Linux by three fault injection techniques,"The main goal of the experimental study reported in this paper is to investigate to what extent distinct fault injection techniques lead to similar consequences (errors and failures). The target system we are using to carry out our investigation is the Linux kernel, as it provides a representative operating system featuring full controllability and observability thanks to its open source status. Three types of software-implemented fault injection techniques are considered, namely: i) provision of invalid values to the parameters of the kernel calls, ii) corruption of the parameters of the kernel calls, and iii) corruption of the input parameters of the internal functions of the kernel. The workload being used for the experiments is tailored to activate selectively each functional component. The observations encompass typical kernel failure modes (e.g., exceptions and kernel hangs) as well as a detailed analysis of the reported error codes.",2002,0, 3000,Spectral analysis for the rotor defects diagnosis of an induction machine,"The main goal of this paper is to develop a method that permits the nature of rotor defects to be determined through spectral analysis. The growing development of measurement equipment (spectral analyzers) and digital signal processing software has made the diagnosis of electric machine defects possible. Motor current signature analysis (MCSA) is used for the detection of the electrical and the mechanical faults of an induction machine to identify rotor bar faults. Also, the calculation of machine inductances (with and without rotor defects) is carried out with MATLAB tools before the simulation is run under SIMULINK.
Simulation and experimental results are presented to confirm the validity of the proposed approach.",2005,0, 3001,Self-commissioning training algorithms for neural networks with applications to electric machine fault diagnostics,"The main limitations of neural network (NN) methods for fault diagnostics applications are training data and data memory requirements, and computational complexity. Generally, an NN is trained offline with all the data obtained prior to commissioning, which is not possible in a practical situation. In this paper, three novel and self-commissioning training algorithms are proposed for online training of a feedforward NN to effectively address the aforesaid shortcomings. Experimental results are provided for an induction machine stator winding turn-fault detection scheme, to illustrate the feasibility of the proposed online training algorithms for implementation in a commercial product.",2002,0, 3002,Hybrid intelligent architecture for fault identification in power distribution systems,"The main objective of this paper is to present the results obtained from the application of artificial neural networks and statistical tools in the automatic identification and classification of faults in electric power distribution systems. The techniques developed to treat the proposed problem use, in an integrated way, several approaches that can contribute to a successful fault detection process, aiming for it to be carried out in a reliable and safe way. The compilation of the results obtained from practical experiments carried out on a pilot radial distribution feeder has demonstrated that the developed techniques provide accurate results, identifying and classifying efficiently the several occurrences of faults observed in the feeder.",2009,0, 3003,Team-based fault content estimation in the software inspection process,"The main objective of software inspection is to detect faults within a software artifact. This helps to reduce the number of faults and to increase the quality of a software product. However, although inspections have been performed with great success, and although the quality of the product is increased, it is difficult to estimate the quality. During the inspection process, attempts with objective estimations as well as with subjective estimations have been made. These methods estimate the fault content after an inspection and give a hint of the quality of the product. This paper describes an experiment conducted throughout the inspection process, where the purpose is to compare the estimation methods at different points. The experiment evaluates team estimates from subjective and objective fault content estimation methods integrated with the software inspection process. The experiment was conducted at two different universities with 82 reviewers. The result shows that objective estimates outperform subjective when point and confidence intervals are used. This contradicts the previous studies in the area.",2004,0, 3004,Application of Wavelet Packet in Defect Recognition of Optical Fiber Fusion Based on ISO14000,"The importance of optical fiber fusion defect recognition was introduced based on ISO14000. Detecting the optical fiber fusion point by using the UltraPAC system, and aiming at the defect features, the method of analyzing and extracting the defect eigenvalue by using wavelet packet analysis and pattern recognition by making use of the wavelet neural network is discussed.
This method can extract the interrelated information reflecting the defect features from the detected ultrasonic information and analyze it. A network model is constructed to realize the qualitative recognition of defects. The experimental results show that the wavelet packet analysis makes adequate use of the time-domain and frequency-domain information of the defect echo signal, partitions the frequency bands at multiple levels, further analyzes the high-frequency parts that have not been subdivided by multi-resolution analysis, and chooses the interrelated frequency bands to suit the signal spectrum. Thus, the time-frequency resolution is raised, and the good local amplification property of the wavelet neural network and the learning characteristic of multi-resolution analysis can achieve a higher accuracy rate for the qualitative classification of fusion defects.",2009,0, 3005,Calculation of correction factors to compensate for the reference electric field nonuniformity,"The inaccuracy of the reference electric field nonuniformity assessment is identified, which unnecessarily increases the measurement uncertainty of standardized systems for electric field-meter calibration, making them inadequate for the calibration of modern, precision field-meters. By means of numerical field calculation, the correction factors are computed which allow compensation for the reference field nonuniformity. The uncertainty of that calculation is also estimated",2001,0, 3006,Correction of the Method of Images for Partial Inductance Calculations of QFP,"The inapplicability of the method of images to partial inductance calculation within the magneto-quasi-static approximation has been shown by the authors in previous works. This concept is restated in this paper, and some correction terms for the application of the method of images are proposed. A partial inductance calculation technique based on potential theory is also proposed, which does not require the calculation of the current distribution, and is limited at present to infinite perfectly conducting planes. The proposed correction terms are verified with simple structures at first, and later with the calculation of the partial inductance matrix of a quad flat package.",2010,0, 3007,Assessment of fault location algorithms in transmission grids,"The increased accuracy of fault detection and location makes maintenance an easier task, this being the reason to develop new possibilities for a precise estimation of the fault location. The paper presents the results of the implementation of two fault location algorithms in the ATP-EMTP program. Some ATP-MODELS modules were associated with the ATP model of different transmission grids, these modules being developed on the basis of the Takagi algorithm applied in two-machine systems and on the basis of an algorithm processing synchronized positive-sequence phasor quantities at both transmission line terminals. DFT and A3 type filters were used to calculate the fundamental frequency phasors of the transient voltages and currents. Some simulations are presented, the considered parameters of the analysis being: the line's load, the fault's type and resistance, and the fault position along the overseen line.",2009,0, 3008,Microarchitecture-based introspection: a technique for transient-fault tolerance in microprocessors,"The increasing transient fault rate necessitates on-chip fault tolerance techniques in future processors.
The speed gap between the processor and the memory is also increasing, causing the processor to stay idle for hundreds of cycles while waiting for a long-latency cache miss to be serviced. Even in the presence of aggressive prefetching techniques, future processors are expected to waste significant processing bandwidth waiting for main memory. This paper proposes microarchitecture-based introspection (MBI), a transient-fault detection technique, which utilizes the wasted processing bandwidth during long-latency cache misses for redundant execution of the instruction stream. MBI has modest hardware cost, requires minimal modifications to the existing microarchitecture, and is particularly well suited for memory-intensive applications. Our evaluation reveals that the time redundancy of MBI results in an average IPC reduction of only 7.1% for memory-intensive benchmarks in the SPEC CPU2000 suite. The average IPC reduction for the entire suite is 14.5%.",2005,0, 3009,Influence of power line on ground line and communication line during short circuit fault,"The induced voltage on the communication line near power lines is calculated by means of the CDEGS software. When a power line faults, the fault current flows through the power lines and the ground lines. The influence of the ground lines is analyzed through a test on practical power lines. In addition, a transient analysis is carried out using the Fourier transformation with the modeled fault current.",2004,0, 3010,Three-Dimensional Pareto-Optimal Design of Inductive Superconducting Fault Current Limiters,"The inductive-type superconducting fault current limiters (LSFCLs) mainly consist of a primary copper coil, a secondary complete or partial superconductor cylinder, and a closed or open magnetic iron core. Satisfactory performance of such a device significantly depends on the optimal selection of its employed materials and construction dimensions, as well as its electrical, thermal, and magnetic parameters. Therefore, it is very important to identify a comprehensive model describing the LSFCL behavior in a power system prior to its fabrication. When a fault occurs, the dynamic model should essentially characterize the overall phenomena to compare the simulation results by varying LSFCL parameters to maximize the merits of a fault current limiter while minimizing its drawbacks during the normal state. The principal objective of this paper is to achieve a feasible and fully penetrative approach in 3-D alignments, i.e., a Pareto-optimal design of LSFCLs by means of multicriteria decision-making techniques after defining the LSFCL model in a power system CAD/electromagnetic transients including dc environment.",2010,0, 3011,Topology Error Identification for the NEPTUNE Power System,"The goal of the North Eastern Pacific Time-Series Undersea Networked Experiment (NEPTUNE) is to provide the infrastructure necessary for scientific exploration and investigation on the floor of the Pacific Ocean in an area encompassing the Juan de Fuca Tectonic Plate. In order to achieve this goal, the power delivery capabilities of the terrestrial distribution system will be extended into the Pacific Ocean. The power system associated with the proposed observatory is unlike conventional terrestrial power systems due to the unique under-ocean operating conditions. The operating requirements of the system dictate hardware and software applications that are not found in terrestrial power systems.
This paper describes a method for topology error identification in the presence of various forms of measurement errors for NEPTUNE, a highly interconnected DC power system. Hardware and reliability requirements have led to a power system configuration that includes a large number of unmeasured sections. Using previously developed state estimation algorithms as a starting point, a methodology for system topology identification is proposed in this paper.",2005,0, 3012,Fault tolerance in sensor networks: Performance comparison of some gossip algorithms,"The goal of this paper is to evaluate the efficiency of three versions of the well-known gossip algorithm, namely: basic gossip, push-sum and broadcast, for the distributed solution of averaging problems. The main focus is on the impact of link failures that, by reducing the network connectivity, decrease the convergence speed. As a similar effect occurs in non-fully-meshed networks, because of a limited coverage radius of the nodes, a comparison is made between these two scenarios. The considered algorithms can require optimization of some share factors; for this purpose, we resort to simulations, but the conclusions achieved are confirmed through analytical arguments, exploiting the concept of potential function.",2009,0, 3013,Synthesizing Byzantine Fault-Tolerant Grid Application Wrapper Services,"The grid is inherently unreliable due to its geographical dispersion, heterogeneity and the involvement of multiple administrative domains. The most general case of failures are so-called Byzantine failures, where no assumptions about the behavior of faulty components can be made. In this paper a novel system is described that allows Byzantine faults to be diagnosed and tolerated based on service replication. We suggest, briefly describe and compare two fail-stop and two Byzantine fault tolerance algorithms. Given that many larger-scale scientific grid applications have complex outputs, the comparison of replica results needed to implement Byzantine fault tolerance becomes a non-trivial task. Therefore we include an automation mechanism based on a generic description language and code generation for this particular problem. Our approach has been implemented as an extension to the Otho Toolkit, a system that synthesizes tailor-made wrapper services for a given application, grid environment and resource. An analysis of performance and overheads for three real-world applications completes our work.",2008,0, 3014,Predictive fault management in the dynamic environment of IP networks,"The growing complexity of IP networks in terms of hardware components, operating system, communication and application software, and the huge amount of dependencies among them, has caused an increase in demand for network management systems, particularly in fault management. An efficient fault detection system needs to work effectively even in the face of incomplete management information, uncertain situations and dynamic changes. In this paper, dynamic Bayesian networks are proposed to model static and dynamic dependencies between managed objects in IP networks. Prediction strategies and a backward inference approach are provided for proactive management in fault detection based on the dynamic changes of IP networks.",2004,0, 3015,Application of BCC Algorithm and RBFNN in Identification of Defect Parameters,"The identification of defect parameters in thermal non-destructive test and evaluation (NDT/E) was considered as a kind of inverse heat transfer problem (IHTP).
However, it can further be considered as a shape optimization problem and then as a structure design optimization problem, and the design results should match the surface temperature profile of the apparatus with defects. A bacterial colony chemotaxis (BCC) optimization algorithm and a radial basis function neural network (RBFNN) are applied to the thermal NDT/E for the identification of defect parameters. The RBFNN is a precise and convenient surrogate model for the time-costly finite element computation, which obtains the surface temperature for different defect parameters. The BCC optimization algorithm is derivative-free, and its convergence speed is fast. This method is applied to a simple verification case and the result is acceptable. The algorithm is also compared with the particle swarm optimization (PSO) algorithm, and the BCC algorithm can reach the optimum faster.",2010,0, 3016,Experimental study on the impact of endoscope distortion correction on computer-assisted celiac disease diagnosis,"The impact of applying barrel distortion correction to endoscopic imagery in the context of automated celiac disease diagnosis is experimentally investigated. For a large set of feature extraction techniques, it is found that, contrary to intuition, no improvement but rather significant degradation of classification accuracy is observed. For techniques relying on geometrical properties of the image material (shape), moderate improvements of classification accuracy can be achieved. Reasons for these somewhat unexpected results are discussed and ways to exploit potential distortion correction benefits are sketched.",2010,0, 3017,"Bit error rate measurements of diffraction by non-true-time-delay, holographic optical element","The impact on digital communications performance from non-true-time-delay beam steering by a holographic optical element (HOE) is investigated. Free-space data transmission experiments were performed using an HOE with a 5 mm diameter and a 33° diffraction angle at a data rate of 10 Gbit/s with return-to-zero and non-return-to-zero formats. For this diffraction case, a small penalty of 0.7 dB at a 10^-9 bit error rate is observed for both formats. This penalty is smaller than the optical loss but would be expected to increase for larger diameters and data rates.",2009,0, 3018,Exploiting Parametric Power Supply and/or Temperature Variations to Improve Fault Tolerance in Digital Circuits,"The implementation of complex functionality in low-power nano-CMOS technologies leads to enhanced susceptibility to parametric disturbances (environmental and operation-dependent). The purpose of this paper is to present recent improvements on a methodology to exploit power-supply voltage and temperature variations in order to produce fault-tolerant structural solutions. First, the proposed methodology is reviewed, highlighting its characteristics and limitations. The underlying principle is to introduce additional on-line tolerance by dynamically controlling the time of the clock edge trigger driving specific memory cells. Second, it is shown that the proposed methodology is still useful in the presence of process variations. Third, discussion and preliminary results on the automatic selection (at gate level) of critical FFs for which DDB insertion should take place are presented.
Finally, it is shown that parametric delay tolerance insertion does not necessarily reduce delay fault detection, as multi-Vdd or multi-frequency self-test can be used to recover detection capability.",2008,0, 3019,A method to compensate periodic errors by gain tuning in instrumentation,"The method described uses multiple probes without individual probe accuracy calibration. By simple gain (or sensitivity) tuning, perfect compensation of any arbitrary frequency component of an error is achieved. Online calculations or estimations of the periodic error are not required, and significant improvements are achieved just by initial tuning of the gains of the separate probes. In Sections II and III, we first derive a mathematical model of the periodic error and then propose a new method to compensate it. In Section IV, we derive the minimum number of probes needed to compensate a given number of frequency components of the periodic error, and present two algorithms to calculate the gains for the separate probes. In Section V, we show two simulated and one experimental example of periodic error compensation. One of the simulated examples is applied to a rotary type of sensor and the other to a linear type of sensor. The minimum number of probes needed to compensate multiple frequency components is also derived. The method is successfully demonstrated on two simulated and one experimental example, where one and two frequency components of a periodic error are perfectly compensated. The method is applicable both to rotary and to linear types of sensors",2002,0, 3020,A fault line selection algorithm in non-solidly earthed network based on holospectrum,"The methods of line selection today focus on a single attribute of the signal, such as amplitude, frequency or phase. A novel method based on the holospectrum algorithm to detect single-phase faults in distribution systems is proposed in this paper. After constructing analytic signals of zero-sequence current and voltage, the holospectrum algorithm is applied. Thus the analysis of a combined signal of amplitude, frequency and phase is realized. Compared with the use of amplitude, frequency or phase alone, the combined signal carries more details and information about the transient signal. Theoretical analysis and simulation based on MATLAB Simulink show that the presented method can exactly and effectively choose the faulty line in a single-phase-to-ground fault.",2010,0, 3021,Fault Tolerance Mechanism in Secure Mobile Agent Platform System,"The mobile agent paradigm has attracted much attention recently, but it is still not widely used. One of the barriers is the difficulty in protecting an agent from failure, because an agent is able to migrate over the network autonomously. Design and implementation of mechanisms to relocate computations require a careful consideration of fault tolerance, especially on open networks like the Internet. In the context of mobile agents, fault tolerance prevents a partial or complete loss of the agent, i.e., ensures that the agent arrives at its destination. In this paper, we propose a fault-tolerant mechanism based on replication and voting for a mobile agent platform system. The proposed mechanism has been implemented, and the effects of varying the degree of replication, different replication methods and voting frequencies on the Secure Mobile Agent Platform System (SMAPS) have been evaluated.
We also report on results concerning the reliability and performance issues involved in mobile agents for Internet applications.",2009,0, 3022,Actuator faults estimation using sliding mode state estimator,"A modified method of actuator fault estimator design for continuous-time linear MIMO systems is treated in this paper. Based on the sliding mode state estimator, the problem addressed is formulated as an approach giving necessary and sufficient conditions for the design. A Lyapunov inequality, implying two linear matrix inequalities, is outlined to possess a stable solution for the optimal parameters in the prescribed estimator structure. A system-model-based example is presented to illustrate the properties of the proposed design method.",2010,0, 3023,Identifying symptoms of recurrent faults in log files of distributed information systems,"The manual process of identifying causes of failure in distributed information systems is difficult and time-consuming. The underlying reason is the large size and complexity of these systems, and the vast amount of monitoring data they generate. Despite its high cost, this manual process is necessary in order to avoid the detrimental consequences of system downtime. Several studies and operator practice suggest that a large fraction of the failures in these systems are caused by recurrent faults. Therefore, significant efficiency gains can be achieved by automating the identification of these faults. In this work we present methods, which draw from the areas of information retrieval as well as machine learning, to automate the task of inferring symptoms pertinent to failures caused by specific faults. In particular, we present a method to infer message types from plain-text log messages, and we leverage these types to train classifiers and extract rules to identify symptoms of recurrent faults automatically.",2010,0, 3024,Mars exploration rovers surface fault protection,"The Mars exploration rovers surface fault protection design was influenced by the need for the solar-powered rovers to recharge their batteries during the day to survive the night. The rovers were required to autonomously maintain thermal stability, and initiate reliable communication with orbiting assets or directly to Earth, while maintaining their energy balance. This paper describes the system fault protection design for the surface phase of the mission, including hardware descriptions and software algorithms. Additionally, a few in-flight experiences are described, including the Spirit flash memory anomaly and the Opportunity ""stuck-on"" heater failure.",2005,0, 3025,A Frequency Shift Technique for Pattern Correction in Hologram-Based CATRs,"The measurement accuracy of a compact antenna test range (CATR) is limited by the level of spurious signals. At millimeter and submillimeter wavelengths the spurious signals are caused by range reflections and deformations of the range collimating element. Several antenna pattern and range evaluation techniques that can be used to mitigate the effects of these nonidealities have been published. At submillimeter wavelengths these techniques are typically time-consuming or they are not able to correct the antenna pattern close to the main beam. The frequency shift technique does not have these limitations in a hologram-based CATR. In this paper, the applicability of the frequency shift technique is analytically studied for a hologram-based CATR. The technique is verified by measurements.
The accuracy provided by the frequency shift technique is compared to that of the antenna pattern comparison (APC) technique",2006,0, 3026,Formal Analysis of Fault Recovery in Self-Organizing Systems,"The members of a self-organizing distributed system have the ability to automatically organize themselves into a specific structure. The functionality of the system is achieved by collaboration of the members in this structure. Through automatic (re)organization, such a system is able to recover from various temporary faults which may disturb the established structure. In this paper, we propose a technique to identify all recoverable faults as well as to analyze fault tolerance and recovery from temporary faults by reorganization in self-organizing systems.",2009,0, 3027,Error sensitivity testing for the MC-EZBC scalable wavelet video coder,"The paper looks at the error resiliency of the motion-compensated embedded zero block coder (MC-EZBC). The MC-EZBC is gaining momentum as a motion-compensated temporal filtering solution. We propose to demultiplex the MC-EZBC bitstream into seven bit categories and examine the effect of bit errors on each bit category. It is shown that different bit categories have different immunity to bit errors. The significance map at quadtree level zero, the motion vectors and the coefficient sign bits are shown to be the most sensitive. These categories exceed 60% of the total bitrate. It is concluded that employing unequal error protection for the motion fields is sufficient. The coding algorithm need not be modified, hence preserving its bitstream embedding and universal scalability feature.",2005,0, 3028,SEU effect analysis in an open-source router via a distributed fault injection environment,"The paper presents a detailed error analysis and classification of the behavior of an open-source router when affected by Single Event Upsets (SEUs). The experimental results have been gathered on a real communication network, resorting to an ad-hoc Fault Injection system. The injector has been designed to corrupt the router during its normal service and to analyze the SEU injection effects on the overall distributed system. The performed experiments allowed the authors to identify the most critical memory regions and to cluster the router variables according to their impact on system dependability",2001,0, 3029,Fault Analysis of Multiphase Distribution Systems Using Symmetrical Components,"The paper presents a new approach for performing the fault analysis of multiphase distribution networks based on symmetrical components. The multiphase distribution system is represented by an equivalent three-phase system; hence, the single-phase and two-phase line segments are represented in terms of their sequence values. The proposed technique allows state-of-the-art short-circuit analysis solvers to analyze unbalanced distribution networks. The fault currents calculated using the proposed technique are compared with those of a phase-components short-circuit analysis solver. The obtained results for the IEEE radial test feeders show that the proposed technique is accurate. Based on the proposed method, the existing commercial-grade short-circuit analysis solvers based on sequence networks can be utilized for analyzing unbalanced distribution systems.",2010,0, 3030,A smart relaying scheme for high impedance faults in distribution and utilization networks,"The paper presents a novel transient time-domain relaying scheme for distribution/utilization networks.
The scheme is based on graphical image feature extraction of a diagnostic feature vector xf that carries information about the fault types. The relaying algorithm is capable of detecting low-current, high-impedance faults (HIF)",2000,0, 3031,Towards latent faults prevision in an automatic controlled plant,"The paper presents an approach to develop a program for latent fault prevision dedicated to an automatically controlled plant. The new approach is based on soft-computing processing of a set of variables to work out a quality index for evaluating the efficiency of the plant operation. The trend of this index is the basis for the prevision of latent faults. The aim of the approach is to realise an intelligent supervision program to be inserted in the existing automation system. A robotic soldering cell is considered.",2000,0, 3032,A novel method for transmission network fault location using genetic algorithms and sparse field recordings,"The paper presents an approach to locate a fault in a transmission network based on waveform matching. Matching the during-fault recorded phasor with the during-fault simulated phasor is used to determine the fault location. The search process to find the best waveform match is actually an optimization problem. The genetic algorithm (GA) is introduced to find the optimal solution. The proposed approach is suitable for situations where only sparsely recorded data are available. Under such circumstances, it can offer more accurate results than other known techniques.",2002,0, 3033,Automatic test vector generation for bridging faults detection in combinational circuits using false Boolean functions,"The paper presents an automatic test vector generation program for detecting bridging faults in combinational circuits using false Boolean functions. These functions are used for solving the system of equations of the circuit concerning controllability, observability and interconnectivity concepts. The presented model applies to bridging faults that do not change the circuit nature, i.e., the combinational fault-free circuit with these interconnectivity faults remains a combinational one.",2002,0, 3034,XSIM: An Efficient Crosstalk Simulator for Analysis and Modeling of Signal Integrity Faults in both Defective and Defect-free Interconnects,"The paper presents an efficient crosstalk simulator tool ""XSIM"" and its methodology for analysis and modeling of signal integrity faults in deep sub-micron chips. The tool is used for analyzing the crosstalk coupling behavior between two parallel interconnects, both defective and defect-free. Using the XSIM tool one can also determine the critical values of the RLGC parasitics of interconnects and the critical values of crosstalk coupling capacitance and bridging resistance (i.e., mutual conductance, if any), beyond which the device will most likely suffer from signal integrity losses, whereas for lower coupling values the device will continue to behave as a crosstalk-fault-tolerant one. The special feature of the XSIM tool is that it is based on an indigenous methodology implemented in Visual C++.
The latter implementation provides a user-friendly GUI, which makes the tool not only very easy to use but also very accurate, flexible, and at least 11 times faster than the PSPICE circuit simulator.",2007,0, 3035,ESFFI-a novel technique for the emulation of software faults in COTS components,"The paper presents and evaluates a methodology for the emulation of software faults in COTS components using software implemented fault injection (SWIFI) technology. ESFFI (Emulation of Software Faults by Fault Injection) leverages mature fault injection techniques, which have been used so far for the emulation of hardware faults, and adds new features that make possible the insertion of errors mimicking those caused by real software faults. The major advantage of ESFFI over other techniques that also emulate software faults (mutations, for instance) is making fault locations ubiquitous; every software module can be targeted, no matter if it is a device driver running in operating-system kernel mode or a third-party component whose source code is not available. Experimental results have shown that for specific fault classes, e.g., assignment and checking, the accuracy obtained by this technique is quite good.",2001,0, 3036,Identification techniques for chemical process fault diagnosis,"The paper presents the application results concerning the fault diagnosis of a dynamic process using dynamic system identification and model-based residual generation techniques. The first step of the considered approach consists of identifying different families of models for the monitored system. In particular, the most accurate identified model, able to best describe the dynamic behaviour of the considered process, is selected. The next step of the fault diagnosis scheme requires the design of output estimators, e.g., dynamic observers or Kalman filters, which are used as residual generators. The proposed fault diagnosis and identification scheme has been tested on a real chemical process in the presence of sensor, actuator and component faults as well as disturbances. Results and concluding remarks are finally reported.",2004,0, 3037,Soft errors in Flash-based FPGAs: Analysis methodologies and first results,"The paper presents the development of three different analysis methodologies for evaluating soft error effects in flash-based FPGAs. They are complementary and can be used in different design stages, from device characterization up to design sensitivity estimation. First results are very promising, proving that such methodologies are valid and open new ways of investigation. In particular, we are going to upgrade the experimental setup in order to support higher frequencies (up to 250 MHz) for further characterizing SEE effects. Moreover, a benchmark circuit should be defined in order to correctly predict the expected number of SETs for real circuits, taking into account other side effects, like broadening and logical masking. We expect that from the analysis results we will be able to delineate suitable hardening techniques that will undergo both radiation testing and prediction analysis.",2009,0, 3038,Bifurcation of frequency perturbations analysis due to faults and power swings,"The paper presents the results of fault and power-swing transient measurements recorded at the 400 kV Switching Station Girawali, Maharashtra State, India.
It has lines fed from the Parali Thermal Power Station and the Super Thermal Power Station at Chandrapur, Solapur and Lonikand, and is indirectly connected to the Koyna hydro power station. The oscillographs of phase currents and voltages with neutral parameters are measured online during the transient processes of line faults and power swings to compare them with respect to the frequency perturbations. Computer simulation is performed to analyze the frequency from point to point. The fluctuations and perturbations in frequency during the fault stage and the rotor dynamics stage are measured. An algorithm is developed to activate and block the power swing detection by measuring frequency changes. The occurrences of frequency fluctuations and perturbations are higher in the case of power swings as compared to faults. Logic is developed to utilize this feature for bifurcation",2005,0, 3039,Analyses and correction of the dynamic properties of the ValidyneTM differential pressure sensor,"The paper presents the results of the modeling and measurement study of the MP45 series differential pressure transducer made by the ValidyneTM Corp. This research is necessary during the construction of a device for identification of the human air-duct model using the time method. The transducer parameters in the manufacturer's model are significantly different from the parameters calculated after the measurement study. This paper presents verification of the models and the frequency range of the transducer. A new, more accurate model is proposed. The results obtained show that it is necessary to make a correction of the dynamic properties of this pressure sensor. The effects of this correction using first- and second-order correctors are presented and analyzed",2001,0, 3040,Coping with multiple Q-V solutions of the WLS state estimator induced by shunt-parameter errors,"The paper proposes a new iterative algorithm able to cope with multiple Q-V solutions of the WLS state estimator due to shunt-reactance errors. For such errors, it is shown in a 735/230-kV Hydro-Quebec subsystem that the conventional Gauss-Newton iterative algorithm converges to a strongly biased Q-V solution that is not detected as such by the residual statistical tests. By contrast, under no bad measurements, the new iterative algorithm converges to a solution foreseen by the dispatcher via the inclusion of additive state voltage weights in the gain matrix",2004,0, 3041,SSD: an affordable fault tolerant architecture for superscalar processors,"The paper proposes an integrity checking architecture for superscalar processors that can achieve the fault tolerance capability of a duplex system at much less cost than the traditional duplication approach. The pipeline of the CPU core (P-pipeline) is combined in series with another pipeline (V-pipeline), which re-executes instructions processed in the P-pipeline. Operations in the two pipelines are compared and any mismatch triggers the recovery process. The V-pipeline design is based on replication of the P-pipeline, and minimized in size and functionality by taking advantage of control flow and data dependency resolved in the P-pipeline. Idle cycles propagated from the P-pipeline become extra time for the V-pipeline to keep up with program re-execution.
For a large-scale superscalar processor, the proposed architecture can bring up to a 61.4% reduction in die area, while the average execution-time increase is 0.3%",2001,0, 3042,"Software-based, low-cost fault detection for microprocessors","The PSW-NOP is a low-cost solution to the error detection problem because multiple versions of code or hardware redundancy are not needed. The approach is useful in creating dependable software, especially in deep-submicron ICs. It also supplements existing approaches to control-flow error detection.",2008,0, 3043,"Pulsed Methodology and Its Application to Electron-Trapping Characterization and Defect Density Profiling","The pulsed current-voltage (I-V) measurement technique with pulse times ranging from ~17 ns to ~6 ms was employed to study the effect of fast transient charging on the threshold voltage shift ΔVt of MOSFETs. The extracted ΔVt values are found to be strongly dependent on the band bending of the dielectric stack defined by the high-κ and interfacial layer dielectric constants and thicknesses, as well as applied voltages. Various hafnium-based gate stacks were found to exhibit a similar trap density profile.",2009,0, 3044,Forward error protection of H.263-DG bit streams,"The purpose of our work is to report a solution for the problem of the joint source-channel coding (JSCC) of the H.263-DG bit stream to be transmitted through error-prone channels. Our investigations show that the H.263 syntactical elements have different error sensitivities. This result led us to classify them into several classes of different significance. A more robust scheme, named H.263-DG, was obtained by optimally grouping the H.263 classes. This resulted in a maximum PSNR gain of 5.5 dB at a 10^-3 bit error rate (BER). The solution presented in this paper further increases the error robustness by unequal protection of the classes of the H.263-DG coding scheme. A good reconstructed video quality is achieved at a 64 kbps transmission rate using rate-compatible punctured convolutional (RCPC) codes. By exploiting the different class sensitivities, we present an unequal error protection (UEP) scheme that outperforms the optimal equal error protection (EEP). The forward error correction adopted has resulted in PSNR improvements of over 20 dB for bit error rates higher than 4×10^-3.",2002,0, 3045,Personalized cranium defects restoration technique based on reverse engineering,"The purpose of this paper is to overcome the limitations of the traditional cranial defect restoration technique and better satisfy the aesthetic and comfort demands of different patients. An arithmetic profile curve blending technique was used, based on well-proportioned point cloud data obtained by analyzing computed tomography (CT) images of the patients. This technique uses reverse engineering to reconstruct a model of the defective cranium, taking all the characteristics of the protruding cranium into consideration to check the form and appropriateness of the restoration and to adjust the surface in real time to obtain the ideal shape. Then, the model is transferred to a multiple-point forming (MPF) pressure machine to produce a titanium alloy restoration model. The system has greater flexibility, shorter production cycles, and lower cost through the use of digital production technology; it guarantees the quality of the cranial defect restoration model, reduces the surgical risks, and alleviates the patients' pain.
In addition, an improved contour curve bridging algorithm is used to repair any cranium defects on the contour curve to make the contour more complete and closed.",2009,0, 3046,Application of Neuro-fuzzy Network for Fault Diagnosis in an Industrial Process,"The purpose of this paper is to present results that were obtained in fault diagnosis of an industrial process. The diagnosis algorithm combines an artificial neural network (ANN)-based supplement with a fuzzy system in a block-oriented configuration. A methodology for designing the system is described. As a motivating example, a chemical plant with a recycle stream is considered. Faults in the supply of raw materials and in controllers are simulated. The performance of the system in handling simultaneous faults is also analysed. The running test results show that the strategy appears to be well suited to diagnosing faults of such an industrial process.",2007,0, 3047,CT-PET image fusion using the 4D NCAT phantom with the purpose of attenuation correction,"The purpose of this study is to improve CT-based attenuation correction of 3D PET data using an attenuation map derived from non-rigidly transformed 3D CT data. Utilizing the 4D NURBS-based cardiac-torso (NCAT) phantom with a realistic respiratory model based on high-resolution respiratory-gated CT data, we develop a method to non-rigidly transform 3D CT data obtained during a single breath hold to match that of 3D PET emission data of the same patient obtained over a longer acquisition time and many respiratory cycles. For patients who underwent 3D CT and PET (transmission and emission) studies, the 3D anatomy of the NCAT phantom was first fit to that revealed through automatic segmentation of the 3D CT data. From the 3D PET emission data, a second body outline was segmented using an automatic algorithm. Using the 4D NCAT respiratory model, the morphed 3D NCAT phantom was transformed such that its body outline provided the best match with that obtained from the 3D PET emission data. The other organs of the NCAT followed the corresponding transformations provided by the 4D respiratory model. The transformations were then applied to the 3D CT image data to form the attenuation map to be used for attenuation correction. For eight preliminary sets of patient data, the NCAT respiratory model allowed excellent registration of the 3D CT and PET transmission data as visually assessed by 3 independent observers. Minor registration errors occurred near the diaphragm and lung walls. The 4D NCAT phantom with a realistic model of the respiratory motion was found to be a valuable tool in a non-rigid warping method to improve CT-PET image fusion. The improved fusion provides for a more accurate CT-based attenuation correction of 3D PET image data.",2002,0, 3048,"Position mapping, energy calibration, and flood correction improve the performances of small gamma camera using PSPMT","The purpose of this study is to improve the performance of a small gamma camera using a position-sensitive PMT (PSPMT) by applying position mapping, energy calibration, and flood correction. The small gamma camera consists of a 5"" PSPMT coupled with either CsI(Tl) array or NaI(Tl) plate crystals. Flood images were obtained intrinsically in the CsI(Tl) array system and with a lead hole mask in the NaI(Tl) plate system. The position mapping was performed by locating crystal arrays and hole positions in the two systems.
The energy calibration was performed using an energy discrimination table for each pixel array or for each hole position. The flood correction was performed using a uniformity correction table containing the relative efficiency of each image element. The resolution of the CsI(Tl) array system remained similar before and after corrections. On the other hand, the resolution of the NaI(Tl) plate system was improved by about 16% after correction. The uniformity and linearity improved from 33.9% to 11.6% and from 0.5 mm to 0 mm, respectively, in the CsI(Tl) array system. The corrections more effectively improve the uniformity and linearity in the NaI(Tl) plate system, from 21.3% to 9.2% and from 0.5 mm to 0 mm, respectively. Furthermore, the resolution deterioration observed in the NaI(Tl) plate system at the outer part of the FOV was considerably diminished after the corrections. The results of this study indicate that the correction algorithms considerably improve the performance of a small gamma camera and that the performance gain is more prominent in the system employing a plate-type crystal.",2003,0, 3049,Quantitative analysis of first-pass contrast-enhanced myocardial perfusion multidetector CT using a Patlak plot method and extraction fraction correction during adenosine stress,"The purpose of this study was to develop a quantitative method for myocardial blood flow (MBF) measurement that can be used to derive accurate myocardial perfusion measurements from dynamic multidetector computed tomography (MDCT) images by using a compartment model for calculating the first-order transfer constant (K1) with correction for the capillary transit extraction fraction (E). Six canine models of left anterior descending (LAD) artery stenosis were prepared and underwent first-pass contrast-enhanced MDCT perfusion imaging during adenosine infusion (0.14-0.21 mg/kg/min). K1, which is the first-order transfer constant from left ventricular (LV) blood to myocardium, was measured using the Patlak plot method applied to time-density curve data of the LV blood pool and myocardium. The results were compared against microsphere MBF measurements, and the extraction fraction of the contrast agent was calculated. K1 is related to the regional MBF as K1 = E·F, with E = 1 - exp(-PS/F), where PS is the permeability-surface area product and F is the myocardial flow. Based on the above relationship, a look-up table from K1 to MBF can be generated, and Patlak plot-derived K1 values can be converted to the calculated MBF. The calculated MBF and microsphere MBF showed a strong linear association. The extraction fraction in dogs as a function of flow (F) was E = 1 - exp(-(0.253F + 0.7871)/F). Regional MBF can be measured accurately using the Patlak plot method based on a compartment model and a look-up table with extraction fraction correction from K1 to MBF.",2009,0, 3050,MV generator low-resistance grounding and stator ground fault damage,"The purposes of this work are: 1) to examine the generator transient over-voltages and currents under low-resistance ground fault conditions, and also to evaluate the corresponding stator ground fault damage, and 2) to establish an acceptable maximum system ground fault level.
For comparison purposes, three versions of low-resistance grounding systems have been considered: a low-resistance grounding system with a neutral breaker at the generator (hereinafter called a ""neutral breaker system""); a low-resistance grounding system with the generator neutral low resistor being switched to a high resistor after a stator ground fault (hereinafter called a ""hybrid system""); and a low-resistance grounding system similar to current practice (hereinafter called a ""traditional system""). The simulation study is conducted with the aid of the electromagnetic transients program (EMTP). An experimental analog generator model is also used to verify the simulation results.",2004,0,827 3051,Ultrasonic fault machinery monitoring by using the Wigner-Ville and Choi-Williams distributions,"The present work shows that the Wigner-Ville and Choi-Williams distributions are useful for determining the proper operation of rotating equipment. As an example, the bearing diagnosis obtained by analyzing real sound samples using ultrasonic technology will be presented to highlight the flexibility of the proposed approaches for industrial machinery monitoring.",2008,0, 3052,Hierarchical Aggregation and Intelligent Monitoring and Control in Fault-Tolerant Wireless Sensor Networks,"The primary idea behind deploying sensor networks is to utilize the distributed sensing capability provided by tiny, low-powered, and low-cost devices. Multiple sensing devices can be used cooperatively and collaboratively to capture events or monitor space more effectively than a single sensing device. The realm of applications envisioned for sensor networks is diverse, including military, aerospace, industrial, commercial, environmental, and health monitoring. Typical examples include: traffic monitoring of vehicles, cross-border infiltration detection and assessment, military reconnaissance and surveillance, target tracking, habitat monitoring, and structure monitoring, to name a few. Most of the applications envisioned with sensor networks demand a highly reliable, accurate, and fault-tolerant data acquisition process. In this paper, we focus on innovative approaches to deal with multivariable, multispace problem domains (data integrity, energy efficiency, and a fault-tolerant framework) in wireless sensor networks and present novel ideas that have practical implementation in developing power-aware software components for designing robust networks of sensing devices.",2007,0, 3053,Layered model for supporting fault isolation and recovery,"The primary objective of our research is to efficiently support the manual and automated steps needed to perform network fault management (detection, isolation, and recovery). This paper introduces a layered model and an implementation scheme for enhancing the level of automation in fault isolation and recovery. This complements earlier fault management work that has mostly focused on automating the fault detection and isolation aspects. Our model allows the use of network information, which is typically available to the network designers, in a form more suitable to the network operators. The model supports the use of fault analysis rules that are defined for the network functions, protocols and services at the various network layers.
These rules can capture dependency, availability, redundancy, switchover and hierarchical status information with respect to the network services provided by a network",2000,0, 3054,Using run-time reconfiguration for fault injection applications,"The probability of faults occurring in the field increases with the evolution of CMOS technologies. It becomes, therefore, increasingly important to analyze the potential consequences of such faults on the applications. Fault injection techniques have been used for years to validate the dependability level of circuits and systems, and approaches have been proposed to analyze very early in the design process the functional consequences of faults. These approaches are based on the high-level description of the circuit or system and classically use simulation. Recently, hardware emulation on FPGA-based systems has been proposed to accelerate the experiments; in that case, an important characteristic is the time to reconfigure the hardware, including re-synthesis, place and route, and bitstream downloading. In this paper, an alternative approach is proposed, based on hardware emulation and run-time reconfiguration. Fault injection is carried out by direct modifications in the bitstream, so that re-synthesizing the description can be avoided. Moreover, with some FPGA families (e.g., Virtex or AT6000), it is possible to reconfigure the hardware partially at run-time. Important time savings can be achieved when taking advantage of these features, since the injection of a fault necessitates the reconfiguration of only a few resources of the device. The injection process is detailed for several types of faults and experimental results are discussed.",2003,0,4201 3055,Fault-Secure Interface Between Fault-Tolerant RAM and Transmission Channel Using Systematic Cyclic Codes,"The problem of designing a fault-secure interface between a fault-tolerant RAM memory system and a transmission channel, both protected against errors using cyclic linear error detecting and/or correcting codes, is considered. The main idea relies on using the RAM check bits to control the correct operation of the parallel cyclic code encoder, so that the whole interface has no single point of failure.",2007,0, 3056,Active incipient fault detection with more than two simultaneous faults,"The problem of detecting small parameter variations in linear uncertain systems due to incipient faults, with the possibility of injecting an input signal to enhance detection, is considered. Most studies assume that there is only one fault developing. Recently, an active approach for two simultaneous faults has been introduced. In this paper we extend this approach to allow for more than two simultaneous faults. Having more than two simultaneous incipient faults is sometimes a natural assumption. A computational method for the construction of an input signal for achieving guaranteed detection with specified precision is presented for discrete-time systems. The method is an extension of a multi-model approach used for the construction of auxiliary signals for failure detection; however, new technical issues must be addressed. A case study is examined.",2009,0, 3057,Efficient ROM correction technique for digital home appliances,"The emergence of fatal bugs in the software that drives microprocessors embedded in digital home appliances necessitates a re-scrutiny of the strategies used to revamp the software functionalities of such systems.
Existing strategies to remedy such problems involve disassembling the entire system in order to replace the ROM that stores the defective program. This paper proposes an efficient and facile technique that corrects the functions of the ROM using a remote controller, based on wireless message transmission and reception. The main advantage of this technique is that, unlike existing techniques, it does not require disassembling the entire system to fix the ROM running the software program. In addition, we extend the proposed technique to apply to a digital convergence appliance embedding two microprocessors, by means of a CD-R instead of the remote controller.",2004,0, 3058,Error exponents in multiple hypothesis testing for arbitrarily varying sources,"The problem of multiple hypothesis testing (HT) for arbitrarily varying sources (AVS) is considered. The achievable error probability exponents (reliabilities) region is derived, and optimal decision schemes are described. The result extends the known ones by Fu and Shen and by Tuncel. The Chernoff bounds for AVS binary and M-ary HT are specified via indication of a Sanov theorem for those sources.",2010,0, 3059,Robust H∞ guaranteed cost satisfactory fault-tolerant control for discrete-time systems with quadratic D stabilizability,"The problem of robust H∞ guaranteed cost satisfactory fault-tolerant control with quadratic D stabilizability against actuator failures is investigated for a class of discrete-time systems with value-bounded uncertainties existing in both the state and control input matrices. Based on a more practical and general model of actuator continuous gain failures, and taking the transient property, robust behaviour on H∞ performance and quadratic cost performance requirements into consideration, sufficient conditions for the existence of a satisfactory fault-tolerant controller are given, and effective design steps with constraints of multiple performance indices are provided. Meanwhile, the consistency of the regional pole index, the H∞ norm-bound constraint and the cost performance indices is established for fault-tolerant control. A simulation example shows the effectiveness of the proposed method.",2010,0, 3060,Mining Bug Repositories--A Quality Assessment,"The process of evaluating, classifying, and assigning bugs to programmers is a difficult and time-consuming task which greatly depends on the quality of the bug report itself. It has been shown that the quality of reports originating from bug trackers or ticketing systems can vary significantly. In this research, we apply information retrieval (IR) and natural language processing (NLP) techniques for mining bug repositories. We focus particularly on measuring the quality of the free-form descriptions submitted as part of bug reports used by open-source bug trackers. Properties of natural language influencing the report quality are automatically identified and applied as part of a classification task. The results from the automated quality assessment are used to populate and enrich our existing software engineering ontology to support a further analysis of the quality and maturity of bug trackers.",2008,0, 3061,Development of diagnostic systems for the fault tolerant operation of Micro-Grids.,"The progressive penetration level of Distributed Generation (DG) is destined to cause deep changes in the existing distribution networks, which can no longer be considered passive terminations of the whole electrical system.
A possible solution is the realization of small networks, namely Micro-Grids, which reproduce in themselves the structure of the main electrical energy production and distribution system. In order to attain an adequate reliability level for the micro-grids, the identification and management of faults with the goal of maintaining micro-grid operation (fault-tolerant operation) is quite important. In the present paper, after an introduction to the aims of several diagnostic systems, the main available diagnostic techniques are examined, with particular reference to those applied to the fault diagnosis of electrical machines; finally, the authors present an approach for the fault-tolerant operation of the micro-grid.",2010,0, 3062,Fault models' analysis at register-transfer level description,"The proliferation of system-on-chip designs is forcing us to consider the possibility of doing all design phases at the highest possible levels of abstraction. Behavioral-level design tools are today commercially available, and offer a solution to this problem. Conversely, test issues are usually addressed at the lowest levels of abstraction and, although in recent years many efforts have been devoted to the definition of strategies for addressing test at the high level, a global solution is yet to come. We present preliminary experimental results about some of the available high-level fault models working at the register-transfer (RT) level. The experimental procedure we adopted is presented and some preliminary results are discussed.",2004,0, 3063,A software methodology for detecting hardware faults in VLIW data paths,"The proposed methodology aims at providing concurrent hardware fault detection properties in data paths for VLIW processor architectures. The approach, carried out on the application software, consists in the introduction of additional instructions for checking the correctness of the computation with respect to failures in one of the data path functional units. The paper presents the methodology and its application to a set of media benchmarks",2001,0, 3064,The automatic implementation of Software Implemented Hardware Fault Tolerance algorithms as a radiation-induced soft errors mitigation technique,"Radiation-induced soft errors significantly increase the failure rate of advanced electronic components and systems. Radiation-sensitive microprocessor-based devices working in a radiation environment are among the most sensitive parts of the machine. This paper focuses mainly on a strictly software mitigation technique called Software Implemented Hardware Fault Tolerance (SIHFT). SIHFT methods are based on the redundancy of variables or procedures implemented in the compiled project. Sophisticated algorithms are used to check the correctness of the control flow in the application. Several articles describe in detail theoretical information about software protection algorithms, but the problem of efficient and error-proof implementation of these methods has always been omitted. Unfortunately, manual implementation of the presented algorithms is difficult and can introduce additional problems with program functionality caused by human errors. The presented solution is based on the automatic implementation of SIHFT algorithms during the compilation process.
Several modifications of the software methods were proposed to make the theoretical algorithms suitable for automatic implementation.",2008,0, 3065,Pattern Recognition of Wood Defects Types Based on Hu Invariant Moments,"The recognition of wood defects is very significant for the reasonable selection and scientific utilization of wood. X-rays were adopted as a measurement method for wood nondestructive testing. The difference in X-ray intensity after exposure is tested in order to judge whether wood defects exist or not. The defect images were then processed effectively. A group of shape-describing feature parameters can be defined by extending Hu invariant moment theory. Those parameters not only have translation, scaling, and rotation invariance, but also lower computational complexity. The feature parameters are input into a neural network after pretreatment, and the wood defects are then recognized. The experimental results show that the recognition rate reaches 86%. It is shown that this method is very successful for the detection and classification of wood defects. This study offers a new method for the automatic recognition of wood defects.",2009,0, 3066,Low-Cost Motor Drive-Embedded Fault Diagnosis - A Simple Harmonic Analyzer,"The reference frame theory and its applications to fault diagnosis of electric machinery as a powerful tool to find the magnitude and phase quantities of fault signatures are explored in this paper. The core idea is to convert the associated fault signature to a dc quantity, followed by calculating the signal average value in the new reference frame to filter out the rest of the signal harmonics, i.e., its ac components. Broken rotor bar and rotor eccentricity faults are experimentally tested both offline using the data acquisition system, and online employing the TMS320F2812 DSP to prove the efficacy of the proposed tool. The proposed method has been theoretically and experimentally proven to detect the fault harmonics and determine the existence and the severity of machine faults. The advantages of this method include the following: (1) no need to employ external hardware or a PC running a high-level program; (2) provides instantaneous fault monitoring using a DSP controller in real time; (3) embedded into the motor drive; thus, readily available drive sensors and the core processor are used without employing additional hardware; (4) no need to store machine current data, and thus no need for a large memory size; (5) very short convergence time capability; (6) immune to non-idealities like sensor dc offsets, imbalance, etc.",2007,0, 3067,Using Relocatable Bitstreams for Fault Tolerance,"The regular structure and addressing scheme for the Virtex-II family of field programmable gate arrays (FPGAs) allow the relocation of partial bitstreams through direct bitstream manipulation. Our bitstream translation program relocates modules on an FPGA by changing the partial bitstream of the module. To take advantage of relocatable modules, three fault-tolerant circuit designs are developed and tested. While operating through a fault, these designs provide support for efficient and transparent replacement of the faulty module with a relocated fault-free module.
The architecture of the FPGA and static logic significantly constrain the placement of relocatable modules, especially when a microprocessor is placed on the FPGA.",2007,0, 3068,Automatic synthesis of dynamic fault trees from UML system models,"The reliability of a computer-based system may be as important as its performance and its correctness of computation. It is worthwhile to estimate system reliability at the conceptual design stage, since reliability can influence the subsequent design decisions and may often be pivotal for making trade-offs or in establishing system cost. In this paper we describe a framework for modeling computer-based systems, based on the Unified Modeling Language (UML), that facilitates automated dependability analysis during design. An algorithm to automatically synthesize dynamic fault trees (DFTs) from the UML system model is developed. We succeed both in embedding information needed for reliability analysis within the system model and in generating the DFT. Thereafter, we evaluate our approach using examples of real systems. We analytically compute system unreliability from the algorithmically developed DFT and we compare our results with the analytical solution of manually developed DFTs. Our solutions produce the same results as manually generated DFTs.",2002,0, 3069,Classification and Impact Analysis of Faults in Automated System Management,"The reliability of automated system management solutions will increase in importance as the use of cloud computing and data centres expands. As part of a study to improve reliability, this paper provides a classification of faults that can occur in automated system management and proposes a method for determining the severity of such faults. A baseline deployment is compared with an alternate proposed configuration to determine the difference in reliability. The results gained show a significant improvement over the baseline. While it is still in development, the method is able to determine and compare the reliability of deployment configurations from early in the design process.",2010,0, 3070,A Hybrid Approach to Fault Detection and Correction in SoCs,"The reliability of Systems-on-Chip (SoCs) is very important with respect to their use in different types of critical applications. Several fault tolerance techniques have been proposed to improve their fault detection and correction capabilities. These approaches can be classified into two basic categories: software-based and hardware-based techniques. In this paper, we propose a hybrid approach to provide detection and correction capabilities for transient faults in processor-based SoCs. This solution improves a previous one, aimed at fault detection only, and combines some modifications of the source code at a high level with the introduction of an Infrastructure Intellectual Property (I-IP). The main advantage of the proposed method lies in the fact that it does not require modifying the microprocessor core. Experimental results are provided to evaluate the effectiveness of the proposed method.",2007,0, 3071,Diagnostic End to End Monitoring & Fault Detection for Braking Systems,"The reliable and effective performance of a braking system is fundamental to the operation of most vehicles. Any failure in the braking system that impacts the ability to retard a vehicle's motion has an immediate and frequently catastrophic effect on the vehicle's safety.
There are very few diagnostic systems that monitor the health of multiple individual components in a braking system and even fewer that can automatically detect early stage failures and component wear with the reliability required for such an important vehicle subsystem. The systems that can accomplish these tasks are restricted in application due to size, weight and their own unique maintenance loads. Current diagnostic techniques call for careful maintenance of braking system components. These techniques tend to be backward thinking in that they are based on previous experience, which is not always a good indicator for future systems. In addition, the need to perform constant maintenance on the braking system puts restrictions on its design and heavy loads on maintenance personnel. This paper describes a technique developed at the Penn State Applied Research Laboratory's Systems Operations and Automation Department for low physical impact fault detection and monitoring of hydraulic vehicular braking systems without compromising reliability. This technique is being applied to severe duty tactical vehicles with brake systems that have maintenance accessibility issues. The brake monitoring system (BMS) directly monitors the linings of the friction surfaces for wear using environment tolerant sensors. Additionally, using pressure sensors, level sensors and high-bandwidth data gathering, the hydraulic system is monitored for transients that are consistent with early stage failures.",2006,0, 3072,Evaluation of security and fault tolerance in mobile agents,"The reliable execution of a mobile agent is a very important design issue in building a mobile agent system and many fault-tolerant schemes have been proposed so far. Security is a major problem of mobile agent systems, especially when money transactions are concerned. Security for the partners involved is handled by encryption methods based on a public key authentication mechanism and by secret key encryption of the communication. In this paper, we examine qualitatively the security considerations and challenges in application development with the mobile code paradigm. We identify a simple but crucial security requirement for the general acceptance of the mobile code paradigm, and evaluate the current status of mobile code development in meeting this requirement. We find that the mobile agent approach is the most interesting and challenging branch of mobile code in the security context. Therefore, we built a simple agent-based information retrieval application, the Traveling Information Agent system, and discuss the security issues of the system in particular.",2008,0, 3073,Modeling and evaluation of fault tolerant mobile agents in distributed systems,"The reliable execution of a mobile agent is a very important design issue in building a mobile agent system and many fault-tolerant schemes have been proposed. Hence, in this paper, we present an evaluation of the performance of the fault-tolerant schemes for the mobile agent environment. Our evaluation focuses on the checkpointing schemes and deals with the cooperating agents. We derive the FANTOMAS (fault-tolerant approach for mobile agents) design, which offers a user transparent fault tolerance that can be activated on request, according to the needs of the task, and also discuss how transactional agents with different types of commitment constraints can commit.
Furthermore, this paper proposes a solution for effective agent deployment using dynamic agent domains.",2005,0, 3074,Research on correction method of traffic simulation model based on linear regression,"The research on parameter calibration of traffic simulation models plays an important role in whether a model can accurately reflect the real traffic situation of the road. Thus it can indirectly provide evidence for traffic management and control. In this paper, the data verified in the traffic simulation software VISSIM is compared with the measured data, showing that the corrected model parameters better reflect the actual situation of the road. The method can also be extended to the parameter correction of simulation models for other roads and intersections.",2010,0, 3075,Judging the Effectiveness of a Passive Haptic Device in Teleoperation Based on the Average Angular Error in Force Generation,"The research presented here focuses on the control limitations imposed by the use of passive actuators such as brakes or clutches in a planar haptic system. Previous research has addressed the issue within the framework of path following applications; however, in a teleoperation system, the goal of a haptic interface centers around reproducing a force from the remote environment. As such, a realistic haptic experience involves creating a force for the human to feel that matches the remote system in magnitude (or scaled magnitude) as well as in direction. To address the issue of matching force direction, the research presented here introduces the concept of average angle error, Avg(θ_error), the average difference between an arbitrary desired haptic force and the closest force to it that the dissipative passive haptic device can produce. A portion of the material presented here will discuss the calculation of the value of Avg(θ_error) for an arbitrary planar passive haptic device, and finally it will be evaluated over the workspace of a representative planar five-bar mechanism with various link lengths.",2008,0, 3076,A new approach of halftoning based on error diffusion with rough set filtering,"The rough set filtering makes use of the concepts of indiscernibility relations and approximation spaces to define an equivalence class of neighboring pixels in a processing mask, and then utilizes the statistical mean of the equivalence classes to replace the gray levels of the central pixel in a processing mask. The error diffusion makes use of a correction factor composed of the weighted errors for these pixels, which is added to the pixel to be processed, to diffuse error over the neighboring pixels in a continuous tone image. Both a system and an algorithm implementing halftoning based on error diffusion with rough sets are introduced in the paper.",2000,0, 3077,Optimizing resources in real-time scheduling for fault tolerant processors,"The safety critical systems used in avionics, nuclear power plants and emergency medical equipment have to meet stringent reliability and temporal demands. Such demands are met with fault tolerant mechanisms, such as hardware and software redundancy. In this paper, we consider a safety critical application, the dual redundant onboard computer (OBC) system of the Indian Satellite Launch Vehicle, and propose a scheme to optimize the onboard computing resources without detracting from the system reliability requirements.
The redundancy is dealt with at the task allocation level and the slack generated is used for allocation of more computational tasks, making the scheme very attractive in terms of efficient management of resources. The scheme of task allocation combined with real-time scheduling using Rate Monotonic (RM) and Earliest Deadline First (EDF) provides more programming flexibility and efficiently utilizes the system resources. The scheme, when implemented, gives an efficient offline task allocation for fault-free conditions and a flexible fault tolerance strategy during processor failure. The proposed scheme is compared with a traditional dual scheme. The implementation is tested with a simulation and evaluated using performance metrics to illustrate the enhanced performance capability of the approach. This scheme, extended to multiprocessors with generic features, can lead to tremendous throughput in terms of performance and costs. The contribution of this work is a system-level algorithm for the implementation of real-time task allocation and scheduling.",2010,0, 3078,A limited-global information model for dynamic fault-tolerant routing in cube-based multicomputers,"The safety level model is a special coded fault information model designed to support fault-tolerant routing in hypercubes. In this model, each node is associated with an integer, called safety level, which is an approximated measure of the number and distribution of faulty nodes in the neighborhood. The safety level of each node in an n-dimensional hypercube (n-cube) can be easily calculated through (n-1) rounds of information exchanges among neighboring nodes. We focus on routing capability using safety levels in a dynamic system; that is, a system in which new faults might occur during a routing process. In this case, the updates of safety levels and the routing process proceed hand-in-hand. Our approach is based on an early work (2001) in a special fault model. In that model, each fault appears at a different time step and, before each fault occurrence, the safety levels in the cube are stabilized. This paper extends our results to a general fault model without any limitation on fault occurrence. Under the assumption that the total number of faults is less than n, we provide an upper bound on the detour number in a routing process. Simulation results are also provided to compare with the proposed upper bound.",2003,0, 3079,Status and application foreground of ultraviolet technology on fault detection of power devices,"The safety of the power grid depends on the reliable running of power devices and transmission circuits. At present, faults in electric devices are often detected by infrared imaging systems, ultrasonic imaging systems, etc. However, these techniques have some inherent shortcomings. Thus a more effective detection technology is needed for the safe running of the power system. As a new detection technology, UV has its special advantages and wide room for development. This paper describes the UV mechanism, advantages, forms and diagnosis methods, and lists application examples from China and abroad. It also sums up the 2007 annual proceedings of the transmission specialty council of the CSEE on UV detection. Finally, it points out some shortcomings of UV technology and forecasts its development trend.",2008,0, 3080,"Scalable, fault-tolerant management of Grid Services","The service-oriented architecture has come a long way in solving the problem of reusability of existing software resources.
Grid applications today are composed of a large number of loosely coupled services. While this has opened up new avenues for building large, complex applications, it has made the management of the application components a nontrivial task. The use of services existing on different platforms and implemented in different languages, together with the presence of a variety of network constraints, further complicates management. This paper investigates problems that emerge when there is a need to uniformly manage a set of distributed services. We present a scalable, fault-tolerant management framework. Our empirical evaluation shows that the architecture adds an acceptable number of additional resources to provide a scalable, fault-tolerant management framework.",2007,0, 3081,Fault simulation and test algorithm generation for random access memories,"The size and density of semiconductor memories are rapidly growing, making them increasingly harder to test. New fault models and test algorithms have been continuously proposed to cover defects and failures of modern memory chips and cores. However, software tool support for automating the memory test development procedure is still insufficient. For this purpose, we have developed a fault simulator (called RAMSES) and a test algorithm generator (called TAGS) for random-access memories (RAMs). In this paper, we present the algorithms and other details of RAMSES and TAGS and the experimental results of these tools on various memory architectures and configurations. We show that efficient test algorithms can be generated automatically for bit-oriented memories, word-oriented memories, and multiport memories, with 100% coverage of the given typical RAM faults.",2002,0, 3082,Search-based Resource Scheduling for Bug Fixing Tasks,"The software testing phase usually results in a large number of bugs to be fixed. The fixing of these bugs requires executing certain activities (potentially concurrent) that demand resources having different competencies and workloads. Appropriate resource allocation to these bug-fixing activities can help a project manager to schedule capable resources to these activities, taking into account their availability and skill requirements for fixing different bugs. This paper presents a multi-objective search-based resource scheduling method for bug-fixing tasks. The inputs to our proposed method include i) a bug model, ii) a human resource model, iii) a capability matching method between bug-fixing activities and human resources and iv) objectives of bug-fixing. A genetic algorithm (GA) is used as a search algorithm and the output is a bug-fixing schedule, satisfying different constraints and value objectives. We have evaluated our proposed scheduling method on an industrial data set and have discussed three different scenarios. The results indicate that GA is able to effectively schedule resources by balancing different objectives. We have also compared the effectiveness of using GA with a simple hill climbing algorithm. The comparison shows that GA is able to achieve statistically better fitness values than hill-climbing.",2010,0, 3083,Differential busbar protection current circuits transients features during nonsimultaneous short faults,"The paper deals with special features of transients in current transformer groups of differential busbar and busway protection. The nature of the unfaulted phase currents is described.
Methods are proposed for determining the deep saturation condition of the current transformers and the extreme distortion condition of the input currents of these protections. The methods described allow obtaining the initial information necessary for the development of stable protection algorithms.",2003,0, 3084,A design-diversity based fault-tolerant COTS avionics bus network,"The paper describes a COTS bus network architecture consisting of the IEEE 1394 and SpaceWire buses. This architecture is based on the multi-level fault tolerance design methodology proposed by S.N. Chau et al. (1999) but has much less overhead than the original IEEE 1394/I2C implementation. The simplifications are brought about by the topological flexibility and high performance of the SpaceWire. The SpaceWire can form a connected graph that embeds multiple spanning trees. This is a significant advantage because it allows the IEEE 1394 bus to select a different tree topology when a fault occurs. It also has sufficient performance to stand in for the IEEE 1394 bus during fault recovery, so that a backup IEEE 1394 bus is no longer required. These two buses are very compatible at the physical level and therefore can easily be combined. Analysis of the effectiveness of the IEEE 1394/SpaceWire architecture shows that it can achieve the same fault tolerance capability as the IEEE 1394/I2C architecture with less redundancy.",2001,0, 3085,A model-based fault-tolerant CSCW architecture. Application to biomedical signals visualization and processing,"The paper describes a methodological approach that uses Petri nets (PNs) and Time Petri nets (TPNs) for modeling, analysis and behavior control of fault-tolerant computer supported synchronous cooperative work (CSSCW) architectures inside which a high level of interactivity between users is required. Modeling allows architectures to be formally studied under different functioning conditions (normal communications and deficient communications). Results show that the model is able to predict interlocking and state inconsistencies in the presence of errors. TPNs are used to extend PN models in order to detect communication errors and avoid subsequent dysfunctions. The approach is illustrated through the improvement of a recently presented collaborative application dedicated to biomedical signal visualization and analysis.",2000,0, 3086,A new transmission line fault locating system,"The paper describes a new fault locating system that has been developed at Mehta Tech, Inc. (USA). The distance calculating technique of the system is based on the reactance method of fault distance estimation, using data from one terminal of a transmission line. The technique compensates for errors caused by factors such as load flow and fault resistance. The system has been in commercial use by some reputable electric utilities, both local and international, for over three years. The paper also presents some field results that have been obtained from some of these electric utilities that are using the new fault locating system.",2001,0, 3087,Confirming the reliability and safety of MV distribution network including DG applying protection applications for earth faults,"The paper describes the effects of distributed generation on the earth fault protection of a medium voltage feeder and protection coordination. Neutral-isolated and compensated systems were considered. The aim was to investigate the behaviour of the production unit during automatic reclosings, especially as regards electrical safety.
Methods possibly feasible for clearing a temporary earth fault without a voltage break were considered. Thus disturbances affecting production and customers are less than with automatic reclosings. The method of the study was dynamic simulation of the earth faults in a medium voltage system. The network model, including a wind power plant, was implemented applying PSCAD simulation software.",2009,0,8352 3088,Object Oriented Concurrent Fault Diagnostic System for 8051 Based Microcontroller System,"The paper discusses a diagnostic intelligent system based on the object oriented paradigm. The aim of this research work is to investigate the object rule integration model and construct an object oriented system that incorporates logic rules in such a way that the user is able to update the rules without changing the method functions of the object model. The system uses knowledge representation structures based on declarative knowledge, which is most suitable for the fault diagnosis process. The knowledge acquisition process is automated so that the domain experts can easily encode their knowledge by themselves. The diagnostic knowledge is distributed in three categories: fault classification, knowledge related to causes of the fault, and check knowledge. First the fault is classified and then the causes for the fault are identified, just as a human expert does. The check knowledge is used to match the observed symptoms with related types of the knowledge objects in the knowledge base. The main objective of the diagnostic system is to carry out the fault diagnosis process more intelligently, in a flexible order and with minimum time. The user interface is developed using visual programming aspects and graphically shows the diagnostic process being carried out. The user interface permits the user to enter symptoms as and when required by the system, or it will fetch the online status of the control signals from the unit under test. The present system is very much suitable for fault diagnosis in microcontroller based systems and can also be used for fault diagnosis in embedded systems or related systems by updating the knowledge base appropriately.",2007,0, 3089,Error performance of dual header pulse interval modulation (DH-PIM) in optical wireless communications,"The performance of dual header pulse interval modulation in distortion-free optical wireless channels is examined in terms of error performance, power requirement and achievable bit rate. Complete derivations of the slot and packet error rates, optical power and bandwidth requirements are presented. The calculated results are confirmed by computer simulation of the proposed scheme. The performance of DH-PIM is compared with PPM, PIM and OOK modulation schemes. It is shown that DH-PIM offers improved error performance compared with OOK and PIM, but is marginally inferior to PPM.",2001,0, 3090,Error Simulation Analysis of Gripping Deviation of Plate Specimen Tensile Test Based on Finite Element Method,"The plate specimen tensile test is a common material mechanical property test in practical engineering, but gripping deviation of the plate specimen introduces an error into the tensile test result. In this paper, a finite element model of an aluminum alloy notched specimen is built and a static finite element analysis is carried out; the influence of the gripping position deviation in the plate specimen tensile test on the test result is studied. The results show that the gripping deviation of the plate specimen introduces a larger error into the tensile test result.
It is necessary to raise the accuracy requirement for the gripping position in the plate specimen tensile test.",2009,0, 3091,Data Unpredictability in Software Defect-Fixing Effort Prediction,"The prediction of software defect-fixing effort is important for strategic resource allocation and software quality management. Machine learning techniques have become very popular in addressing this problem and many related prediction models have been proposed. However, almost every model today faces a challenging issue of demonstrating satisfactory prediction accuracy and meaningful prediction results. In this paper, we investigate what makes high-precision prediction of defect-fixing effort so hard from the perspective of the characteristics of the defect dataset. We develop a method using a metric to quantitatively analyze the unpredictability of a defect dataset and carry out case studies on two defect datasets. The results show that data unpredictability is a key factor for unsatisfactory prediction accuracy and our approach can explain why high-precision prediction for some defect datasets is inherently hard to achieve. We also provide some suggestions on how to collect highly predictable defect data.",2010,0, 3092,On the Impact of Design Flaws on Software Defects,"The presence of design flaws in a software system has a negative impact on the quality of the software, as they indicate violations of design practices and principles, which make a software system harder to understand, maintain, and evolve. Software defects are tangible effects of poor software quality. In this paper we study the relationship between software defects and a number of design flaws. We found that, while some design flaws are more frequent, none of them can be considered more harmful with respect to software defects. We also analyzed the correlation between the introduction of new flaws and the generation of defects.",2010,0, 3093,Higher-order corrections to the pi criterion for the periodic operation of chemical reactors,"The present work develops a method to determine higher-order corrections to the pi criterion, derived from basic results of Center Manifold theory. The proposed method is based on solving the Center Manifold PDE via power series. The advantage of the proposed approach is the improvement of the accuracy of the pi criterion in predicting performance under larger amplitudes. The proposed method is applied to a continuous stirred tank reactor, where the yield of the desired product must be maximized.",2009,0, 3094,On-line error detection and testing of AES,"The paper proposes a low-cost on-line error detection architecture for the Advanced Encryption Standard algorithm. The implementation is optimized for FPGA based embedded applications since it is tuned to specific FPGA logic resources. In order to provide more reliable operation and reduce the possibility of suffering from fault-based side channel attacks, on-line error detection based on parity codes is developed.",2009,0, 3095,Estimation of parametric sensitivity for defects size distribution in VLSI defect/fault analysis,"The parametric sensitivity of the defect size distribution in VLSI defect/fault analysis is evaluated. The use of the special software tool FIESTA for the computational experiment aimed at estimation of the significance of parameters in expressions approximating the actual defect distribution is considered.
The obtained experimental results and their usefulness have been analysed.",2002,0, 3096,Fault tolerant insertion and verification: a case study,"The particular circuit structures that allow the building of a fault tolerant (FT) circuit have been extensively studied in the past, but currently there is a lack of CAD support in the design and evaluation of FT circuits. The aim of the AMATISTA European project (IST project 11762) is to develop a set of tools devoted to the design of FT digital circuits. The toolset is composed of an automatic insertion tool and a simulation tool to validate the FT design. This paper is a case study describing how this set of FTI (fault tolerant insertion) and FTV (fault tolerant verification) tools has been used to increase the reliability in a typical automotive application.",2002,0,4835 3097,A multi-level meta-object protocol for fault-tolerance in complex architectures,"The past decade has seen an increasing use of complex computer systems made of third party components to develop mission critical applications. To ensure the dependability of those systems in a sound and maintainable manner, technologies are needed to add fault-tolerance mechanisms transparently, while maintaining efficiency, high coverage, and evolvability. In this paper, we present a generic framework that addresses this problem and can be used within current industrial software. Our proposal is based on a limited set of core concepts inspired from plant biology and meta-object protocols. It provides separation of concerns for the implementation of adaptive fault tolerance strategies, while maintaining a global inter-level perception of the system runtime behavior. We demonstrate its practicality by using it to control the non-determinism of a CORBA/UNIX system.",2005,0, 3098,"Reverse link bit error rate for cellular CDMA with antenna arrays, Rayleigh fading, and power control error","The performance of code-division multiple-access (CDMA) systems is affected by multiple factors such as large-scale fading, small-scale fading, and cochannel interference (CCI). Most of the published research on the performance analysis of CDMA systems usually accounts for subsets of these factors. We provide an analysis that combines several of the most important factors affecting the performance of CDMA systems. In particular, new analytical expressions are developed for the reverse link bit error probability (BEP) of CDMA systems. These expressions account for adverse effects such as path loss, large-scale fading (shadowing), small-scale fading (Rayleigh fading), and CCI, as well as for correcting mechanisms such as power control (compensates for path loss and shadowing), spatial diversity (mitigates against Rayleigh fading), and voice activity gating (reduces CCI). The new expressions may be used as convenient analysis tools that complement computer simulations.",2000,0, 3099,Analytical redundancy for sensor fault isolation and accommodation in public transportation vehicles,"The paper discusses an instrument fault detection, isolation, and accommodation procedure for public transportation vehicles. After a brief introduction to the topic, the rule set implementing the procedure with reference to the kinds of sensors usually installed on public transportation vehicles is widely discussed. Particular attention is paid to the description of the rules aimed at allowing the vehicle to continue working regularly even after a sensor fault develops.
Finally, both the estimated diagnostic and dynamic performances in the off-line processing of the data acquired in several drive tests are analyzed and commented upon.",2004,0, 3100,Implications of shorted turn faults in bar wound PM machines,"The paper discusses the effect of single turn short-circuits in the windings of fault tolerant permanent magnet machines. It is shown that previously recognised methods for dealing with shorted turns do not work for larger bar-wound machines, and a new method for protecting the windings against single turn faults is therefore proposed. The new method depends on rapid detection of the fault and injection of a current (of appropriate magnitude and phase) into the faulted winding. The paper gives a theoretical analysis supported by finite element simulations of the fault conditions and illustrates these with a case study taken from the aerospace industry.",2004,0, 3101,Redundancy classification for fault tolerant computer design,"The paper discusses the principles of redundancy classification for the design of fault tolerant computer systems. The basic functions of classification (definitive, characteristic and predictive) are presented. It is shown that the proposed classification of redundancy possesses substantial predictive power. The proposed classification is suitable for the analysis of the roles of hardware and software in achieving system fault tolerance.",2001,0, 3102,Analysis and research of association pattern between network performances and faults in voip network,"The paper focuses on the implementation of association patterns between network performance and general network faults. We use the network simulation tool OPNET to accomplish network simulations of different network faults and different situations based on VoIP. Network performance parameters such as end-to-end delay, jitter, traffic dropped, queuing delay, link throughput and link utilization are monitored and collected. The simulation results are processed using the data mining software Weka to discover the association pattern.",2009,0, 3103,On the design of error detection and correction cryptography schemes,"The paper introduces a method of modifying cryptographic encryption and decryption units that includes circuitry for checking that operations have been performed without errors. This technique is based on the addition of error correction codes to storage devices and module checks of arithmetic and logic unit operations.",2000,0, 3104,Estimating error rates in processor-based architectures,The paper investigates a new technique to predict error rates in digital architectures based on microprocessors. Three case studies are presented concerning three different processors. Two of them are included in the instruments of a satellite project. The actual space applications of these two instruments were implemented using the capabilities of a dedicated system. Results of the fault injection and radiation testing experiments and discussions about the potentialities of this technique are presented.,2001,0, 3105,Assessing the Relationship between Software Assertions and Faults: An Empirical Investigation,"The use of assertions in software development is thought to help produce quality software. Unfortunately, there is scant empirical evidence in commercial software systems for this argument to date. This paper presents an empirical case study of two commercial software components at Microsoft Corporation.
The developers of these components systematically employed assertions, which allowed us to investigate the relationship between software assertions and code quality. We also compare the efficacy of assertions against that of popular bug finding techniques like source code static analysis tools. We observe from our case study that with an increase in the assertion density in a file there is a statistically significant decrease in fault density. Further, the usage of software assertions in these components found a large percentage of the faults in the bug database.",2006,0, 3106,Offline Adaptive Forward Error Correction (AFEC) for Reliable Multicast,"The use of FEC/AFEC (adaptive forward error correction) for reliable multicast is not new. However, these techniques are still not widely accepted as the de facto standard for recovering lost data. The primary reason is the added CPU overhead involved in generating the parity data. In this paper, we propose a technique to move the generation of AFEC parity data offline, on the server side, thus avoiding any CPU overhead associated with parity data generation. The challenge on the server side would be to meet the expectations of all the clients for a particular multicast. Since different clients may experience different loss rates at the same time, the challenge is to satisfy their needs in an optimal way without causing network congestion.",2006,0, 3107,Pressure control devices with the microprocessor error correction,"The use of microprocessors in pressure control devices is necessary to improve their accuracy, provide long-term stability, and satisfy modern requirements when the digital signal protocol needs reprogramming. By using complementary MOS structure microprocessors and decreasing the working frequency, the electronic module power consumption may be decreased. Nevertheless, the reduced processor speed allows doing the full cycle of design-measuring procedures 3-5 times per second and correcting the DAC with the same speed. Microprocessor pressure control devices have some common procedures: the calibration procedure, which results in the mathematical model characteristic of the sensitive element and some adjustments of the device; the characteristic approximation procedure, based on the error correction mathematical model of the sensitive element; the procedure transforming measurement results into engineering units; and the forming of the analog and digital interface signals.",2003,0, 3108,Implementing a reflective fault-tolerant CORBA system,"The use of reflection is becoming popular today for the implementation of non-functional mechanisms such as fault tolerance. The main benefits of reflection are separation of concerns between the application and the mechanisms and transparency from the application programmer's point of view. Unfortunately, metaobject protocols (MOPs) available today are not satisfactory with respect to necessary features needed for implementing fault tolerance mechanisms. Previously, we proposed a specialised MOP based on Corba, well adapted for such mechanisms (M.-O. Killijian and J.C. Fabre, 1998). We deliberately focus on the implementation of this metaobject protocol using compile-time reflection and its use for implementing distributed fault tolerance. We present the design and the implementation of a fault-tolerant Corba system using this metaobject together with some preliminary experimental results.
From the lessons learnt from this work, we briefly address the benefits of reflection in other layers of a system for dependability issues.",2000,0, 3109,Using formal verification to eliminate software errors,"The use of software in safety critical railway applications is increasing. Techniques for developing and running such software are based on reducing the probability of an error in the software causing an unsafe system failure; e.g. that the system permits a train to proceed when it shouldn't. At the same time, these techniques cause problems of the opposite nature: that the systems fail to allow trains to proceed even when it is safe to proceed. Such failures, although not directly dangerous, lead to stress and delayed traffic, which in turn can cause safety to be compromised. This paper shows how the use of formal verification can solve this problem. This technique can be used to find and eliminate all errors in the software, before the system is put into service. Formal verification is one of the cornerstones of Prover iLock, an off-the-shelf commercial tool suite used for developing railway signalling applications.",2008,0, 3110,Using simulation for assessing the real impact of test-coverage on defect-coverage,"The use of test-coverage measures (e.g., block-coverage) to control the software test process has become an increasingly common practice. This is justified by the assumption that higher test-coverage helps achieve higher defect-coverage and therefore improves software quality. In practice, data often show that defect-coverage and test-coverage grow over time, as additional testing is performed. However, it is unclear whether this phenomenon of concurrent growth can be attributed to a causal dependency, or if it is coincidental, simply due to the cumulative nature of both measures. Answering such a question is important as it determines whether a given test-coverage measure should be monitored for quality control and used to drive testing. Although it provides no general answer to this problem, a procedure is proposed to investigate whether any test-coverage criterion has a genuine additional impact on defect-coverage when compared to the impact of just running additional test cases. This procedure applies in typical testing conditions where the software is tested once according to a given strategy, and coverage measures are collected along with defect data. This procedure is tested on published data, and the results are compared with the original findings. The study outcomes do not support the assumption of a causal dependency between test-coverage and defect-coverage, a result for which several plausible explanations are provided.",2000,0, 3111,Finding Faults: Manual Testing vs. Random+ Testing vs. User Reports,"The usual way to compare testing strategies, whether theoretically or empirically, is to compare the number of faults they detect. To ascertain definitively that a testing strategy is better than another, this is a rather coarse criterion: shouldn't the nature of faults matter as well as their number? The empirical study reported here confirms this conjecture. An analysis of faults detected in Eiffel libraries through three different techniques (random tests, manual tests, and user incident reports) shows that each is good at uncovering significantly different kinds of faults.
None of the techniques subsumes any of the others, but each brings distinct contributions.",2008,0, 3112,Operating system supports to enhance fault tolerance of real-time systems,"The virtual memory functions of real-time operating systems have been used in real-time systems. These functions enhance the fault-tolerance of real-time systems because their memory protection mechanism isolates faulty real-time tasks. Recent RISC processors provide virtual memory support through a software-managed translation lookaside buffer (TLB). In real-time systems, managing TLB entries is the most important issue because the overhead at TLB miss time greatly affects the overall performance of the system. In this paper, we propose several virtual memory management algorithms by comparing overheads at task switching times and TLB miss times.",2003,0, 3113,Automatic image processing filter generation for visual defects classification system,"The visual inspection system is used in various production systems to maintain the quality of products. However, there are some defects which are not detected with enough reliability by conventional systems. To address these problems, an automatic generation system for the best image processing filters, which extract the proper characteristics of images for such defects, has been introduced to improve the recognition rate. The system is designed to generate two kinds of filters to detect defects with vague edges and widely distributed defect images, using a neural network method and co-occurrence histogram images. Experiments show that the generated filters achieve a better recognition rate.",2009,0, 3114,Error resilience performance evaluation of H.264 I-frame and JPWL for wireless image transmission,"The visual quality obtained in wireless transmission strongly depends on the characteristics of the wireless channel and on the error resilience of the source coding. The wireless extensions of the JPEG 2000 standard (JPWL) and H.264 are the latest international standards for still image and video compression, respectively. However, few results have been reported to compare the rate-distortion (R-D) performance of JPEG 2000 and H.264. Conversely, comparative studies of error resilience between JPWL and H.264 for wireless still image transmission have not been thoroughly investigated. In this paper, we analyse the error resilience of image coding based on JPWL and H.264 I-frame coding in Rayleigh fading channels. Comprehensive objective and perceptual results are presented in relation to the error resilience performance of these two standards under various conditions. Our simulation results reveal that H.264 is more robust to transmission errors than JPWL for wireless still image transmission.",2010,0, 3115,Fault detection in a wastewater treatment plant,"The wastewater treatment plants are very unstable and the waters to be treated are ill defined. The control of those processes needs to be done with advanced methods of control and supervision. Those methods have to take into account the poor knowledge of the processes. The most important situations that can imply problems for the plant are listed. The parameters that allow the comparison between the crisis situation and the normal one are measured. The fuzzy logic method is used to distinguish the different situations in order to take a decision on the working of the plant.
The method is tested in simulation and on a real plant.",2001,0, 3116,FTWeb: a fault tolerant infrastructure for Web services,"The Web services architecture came as an answer to the search for interoperability among applications. There has been a growing interest in deploying on the Internet applications with high availability and reliability requirements. However, the technologies associated with this architecture still do not deliver adequate support for these requirements. The model proposed in this article is located in this context and provides a new layer of software that acts as a proxy between client requests and service delivery by providers. The main objective is to ensure client-transparent fault tolerance by means of the active replication technique. This model supports the following faults: value, omission and stop. This paper describes the features and outcomes obtained through the implementation of this model.",2005,0, 3117,Designing Fault Tolerant Web Services Using BPEL,"The Web services technology provides an approach for developing distributed applications by using simple and well defined interfaces. Due to the flexibility of this architecture, it is possible to compose business processes integrating services from different domains. This paper presents an approach which uses the specification of services orchestration in order to create a fault tolerant model combining active and passive replication techniques. This model supports crash faults. The characteristics and the results obtained by implementing this model are described throughout this paper.",2008,0, 3118,Learning in Presence of Ontology Mapping Errors,"The widespread use of ontologies to associate semantics with data has resulted in a growing interest in the problem of learning predictive models from data sources that use different ontologies to model the same underlying domain (world of interest). Learning from such semantically disparate data sources involves the use of a mapping to resolve semantic disparity among the ontologies used. Often, in practice, the mapping used to resolve the disparity may contain errors and as such the learning algorithms used in such a setting must be robust in the presence of mapping errors. We reduce the problem of learning from semantically disparate data sources in the presence of mapping errors to a variant of the problem of learning in the presence of nasty classification noise. This reduction allows us to transfer theoretical results and algorithms from the latter to the former.",2010,0, 3119,Comparison of the performance of two solid state fault current limiters in the distribution network,"Solid state fault current limiters offer a superior solution to the power distribution system problems caused by high available fault current. In this paper the comparison of two different types of SSFCL is carried out. The designs of the SSFCL devices are developed, and their power losses and harmonics are evaluated. An equivalent circuit model of each device is then derived and implemented, along with the scheme of the considered radial distribution network, in the MATLAB/Simpower software package. The simulation results show that these devices not only limit the fault current but also actively control fault currents in order to reduce the influence of faults in distribution networks.
The limiting effect, interruption capability, losses, harmonics and influence of SSFCL devices on the healthy part of the power distribution network to mitigate voltage sag are studied and compared.",2008,0, 3120,Entropy based feature extraction for motorbike engine faults diagnosing using neural network and wavelet transform,"The sound of a working vehicle provides an important clue for engine fault diagnosis. Endless efforts have been put into the research of fault diagnosis based on sound. It offers concrete economic benefits, as it can lead to high system reliability and save maintenance cost. A number of diagnostic systems for vehicle repair have been developed in recent years. The artificial neural network is in high demand and popularly implemented in many industries, including condition monitoring via fault diagnosis. This paper presents a feature extraction algorithm using the total entropy of a 5-level wavelet transform decomposition. The engine noise signal is decomposed into 5 levels (A5, D5, A4, D4, A3, D3, A2, D2, A1, D1) using the Daubechies ""db4"" wavelet family. From the decomposed signals, the entropy is computed for each level and the features are extracted and used to develop a backpropagation neural network.",2009,0, 3121,Fault Protection for the Space Interferometry Mission,"The Space Interferometry Mission (SIM) is a deep space mission with limited ground contact and challenging instrument fault protection requirements. Some faults that could occur on SIM have symptoms only visible in the scientific measurements, and many of the observations that SIM makes must be repeated to be useful. Because of this, SIM needs to be very reliable. We are addressing these challenges by combining several approaches. The use of redundancy in the instrument will be important in making the complex instrument with several dozen actuators reliable. Probabilistic risk assessment and reliability modeling are being performed to study the most cost effective areas for redundancy. Ground checks are important parts of the fault detection approach. Special approaches such as ""flood mode"" (providing large amounts of raw data) and diagnostic data dumps are also important.",2005,0, 3122,Research of positioning system for virtual manipulator based on visual error compensation,"The space positioning mechanism based on a binocular stereo vision system for the picking manipulator was analyzed, and the positioning process simulation system for the space manipulator was developed with virtual reality technology. Precise positioning of the virtual space manipulator was achieved in the simulation system through combination with the positioning error compensation mechanism of the binocular stereo vision system, providing a reference for research into the exact positioning of picking robots in the actual operating environment.",2009,0, 3123,Modeling and estimation of the spatial variation of elevation error in high resolution DEMs from stereo-image processing,"The spatial variability of elevation errors in high-resolution digital elevation models (DEMs) derived from stereo-image processing is examined. Error models are developed and evaluated by examining the correlation between various DEM parameters and the magnitude of the observed DEM vertical error. DEM vertical errors were estimated using a dataset of more than 51000 points of known elevation obtained from a kinematic Global Positioning System (GPS) ground survey.
Elevation variability and the quality of the stereo-correlation match over small spatial scales were the dominant factors that determined the magnitude of the DEM error at any given location. The error models are strongly correlated with the magnitude of the DEM vertical error and are shown to adequately represent the full range of the observed error. The error models are used to estimate the magnitude of the vertical error for every point in the DEMs. The models are then used to predict the overall error in the DEMs. The results demonstrate that the error models can accurately quantify and predict the spatial variability of the DEM error.",2001,0, 3124,An evaluation of atmospheric correction techniques using the spectral similarity scale,"The spectral similarity scale (SSS) is used to evaluate the ATREM and ACORN hyperspectral atmospheric correction software techniques. The SSS evaluates spectra by comparing the shape and the brightness between pairs of spectra in a hyperspectral data set. An AVIRIS observation of corn crops in Shelton, Nebraska, with ground truth is used for this evaluation. Initial results show that it is possible for atmospheric correction techniques to add many ""false"" spectral features that were not present in the original observation. A correct atmospheric correction of a data set increases the spectral contrast of some data and reveals other subtle spectral features. The ACORN software provides a superior correction to ATREM in terms of removing gaseous spectral features such as those of water.",2001,0, 3125,Improved in vivo abdominal image quality using real-time estimation and correction of wavefront arrival time errors,"The speed of sound varies with tissue type, yet commercial ultrasound imagers assume it is constant. Sound speed variation in abdominal fat and muscle layers is widely believed to be largely responsible for poor image contrast and resolution in some patients. The simplest model of the abdominal wall assumes that it adds a spatially varying time delay to the ultrasound wavefront. We describe an adaptive imaging system consisting of a GE LOGIQ 700 imager connected to a multi-processor computer. Arrival time errors for each beamforming channel, estimated by correlating each channel signal with the beamsum signal, are used to correct the imager's transmit and receive beamforming time delays at the image frame rate. A multi-row transducer provides two dimensional sampling of wavefront arrival time errors. After beamforming time delay correction, we observe significant improvement in abdominal images of healthy male volunteers, including increased contrast of blood vessels, increased brightness of liver tissue, and improved definition of the renal capsule and splenic boundary.",2000,0, 3126,Stability guaranteed active fault tolerant control of networked control systems,The stability guaranteed active fault tolerant control against actuator failures in networked control systems (NCS) is addressed. A detailed design procedure is formulated as a convex optimization problem which can be efficiently solved by existing software. An illustrative example is given to show the efficiency of the proposed method for NCS.,2007,0, 3127,Measurement Errors Caused by the Transient Limiter,"The standard conducted emissions test may use a transient limiter between the LISN and the measuring instrumentation.
This component is non-linear and can introduce significant errors in the measurement, sufficient to completely change the outcome of the test, yet transparent to the test engineer. The different mechanisms by which this may happen are described and modelled, and some mitigation techniques are recommended.",2007,0, 3128,Structural method of explicit fault location in a LAN segment,"A structural method of fault location in a LAN segment is proposed. It combines a method of many-valued fault table analysis using vectors of elementary probes with a method of structural fault location based on a reachability matrix, where the rows of the matrix are used instead of fault table rows. Such an approach reduces the area of suspected faults and the fault location time. Experimental results are valid and correspond to the real behavior of a LAN under defined conditions.",2004,0, 3129,Monitoring faults self-consciously in the real time dual working system based on high performance network,"The time that a system takes to detect exceptions cannot be too long in a real-time system. In this paper, we first construct a stochastic Petri net to analyze the relationship between the time the system takes to diagnose faults and the system's availability. We conclude that the shorter the time it takes, the higher the availability it can achieve. Next, we explain how self-detection of faults can be realized. Finally, several experiments have been designed to prove its effectiveness.",2002,0, 3130,Research on integrated fault tree modeling of the certain missile double-buses tolerated-fault control system,"The traditional static fault trees with AND and OR gates can express the logic relations between events, but they cannot capture the dynamic behavior of a complex system. Aiming at this problem, this paper presents a fault tree modeling method based on the Integrated Fault Tree, which models with the idea of modularization. The whole tree is separated into several independent static trees and dynamic trees. Correspondingly, a modular linear-searching algorithm based on depth-first left-most search is presented. The paper takes as an example the fault tree modeling of the missile body explosive bolt in the double-bus fault-tolerant control system. Lastly, the modularization algorithm is used to validate the rationality of the model.",2010,0, 3131,Adaptive Error Resilience Tools for Improving the Quality of MPEG-4 Video Streams over Wireless Channels,"The transmission of encoded video streams over heterogeneous networks, characterized by congestion losses in wired links and high bit error rates in the wireless channel, is under consideration in this paper. First of all, it is necessary to make a distinction between the different kinds of losses to achieve a high channel utilization. Then, when dealing with wireless losses, the robustness of the encoded stream has to be improved. This article shows an easy way of implementing a global adaptive algorithm for wireless networks. It proposes a new algorithm to adapt the parameters of video error resilience tools to the changing characteristics of the wireless channel. The aim of this algorithm is to guarantee the quality of the video. Simplicity is the key, as the receiver of the video stream will be a handheld device (i.e. PDA, mobile phone, etc.)
with limited processing capacity.",2006,0, 3132,Error Protection and Interleaving for Wireless Transmission of JPEG 2000 Images and Video,"The transmission of JPEG 2000 images or video over wireless channels has to cope with the high probability and burstiness of errors introduced by Gaussian noise, linear distortions, and fading. At the receiver side, there is distortion due to the compression performed at the sender side, and to the errors introduced in the data stream by the channel. Progressive source coding can also be successfully exploited to protect different portions of the data stream with different channel code rates, based upon the relative importance that each portion has on the reconstructed image. Unequal error protection (UEP) schemes are generally adopted, which offer a close-to-optimal solution. In this paper, we present a dichotomic technique for searching the optimal UEP strategy, which borrows ideas from existing algorithms, for the transmission of JPEG 2000 images and video over a wireless channel. Moreover, we also adopt a method of virtual interleaving to be used for the transmission of high bit rate streams over packet loss channels, guaranteeing a large PSNR advantage over a plain transmission scheme. These two protection strategies can also be combined to maximize the error correction capabilities.",2009,0, 3133,Use of Triple Modular Redundancy (TMR) technology in FPGAs for the reduction of faults due to radiation in the readout of the ATLAS monitored drift tube (MDT) chambers,"The Triple Modular Redundancy (TMR) technology allows protection of the functionality of FPGAs against single event upsets (SEUs). Each logic block is implemented three times with a 2-out-of-3 voter at the output. Thus, the correct logical value is available even if there is an upset bit in one location. We applied TMR to the configuration code of a Virtex-II-2000 FPGA, which serves as the on-chamber readout processor of the ATLAS monitored drift tubes (MDTs). We describe the code implementation, present results of performance measurements, and discuss several limitations of the method. Finally, we present a supplementary technology called scrubbing. It permanently checks the configuration memory while the FPGA is operating, and corrects upset configuration bits when necessary.",2010,0, 3134,Effect of tropospheric range delay corrections on differential SAR interferograms,"The tropospheric range delay is estimated from meteorological data or GPS data, and then is used to correct JERS-1 SAR differential interferograms. The topography of the area concerned is considered in mapping the delay.",2001,0, 3135,Failure analysis using IDD current leakage and photo localization for gate oxide defect of CMOS VLSI,"The typical electrical degradation of complementary metal oxide semiconductor (CMOS) performance is due to defects in the gate oxide layer. During the integrated circuit (IC) infant mortality phase, stress tests are introduced at wafer sort and final test in order to assure that only good ICs are delivered to the end customer. Stress tests such as burn-in, gate stress, and the quiescent current (IDDQ) test have demonstrated their competency to screen out this type of early failure. Nevertheless, the latent defect is a time dependent failure, which affects CMOS reliability after a certain time, temperature and application stress.
Consequently, a revised failure analysis technique has to be introduced to compensate for the problems that the dense metal interconnection layers of current technology pose for front-side failure analysis (FA). The motivation of this work is to present fault localization of the elevated IDD current of faulty logic cells during the transition using a photo localization technique, and to clarify the gate oxide defect through circuit simulation. We have confirmed that the IDD scan test and photo localization technique were effective in localizing the faulty IC in the silicon active area from the front side.",2010,0, 3136,Cloud Model-Based Security-Aware and Fault-Tolerant Job Scheduling for Computing Grid,"The uncertainties of grid node security are the main hurdle to making job scheduling secure, reliable and fault-tolerant. A fixed fault-tolerant strategy in job scheduling may utilize excessive resources. In this paper, the job scheduler decides which kind of fault-tolerance strategy will be applied to each individual job for more reliable computation and shorter makespan. We also discuss the fuzziness or uncertainties between the TL and SD attributes arising from the subjective judgment of human beings. The cloud model is a model of the uncertain transition between a qualitative concept and its quantitative representation. Based on the cloud model, we propose a security-aware and fault-tolerant job scheduling strategy for the grid (SAFT), which makes the assessment of SD and SL more flexible and more reliable. Meanwhile, different fault-tolerant strategies are applied in the grid job scheduling algorithm according to the SD and job workload. More importantly, we are able to set up rules and activate each qualitative rule to select a suitable fault-tolerant strategy for a scheduled job from the input values (the SD and job workload) to realize uncertainty reasoning. The results demonstrate that our algorithm achieves shorter makespan and is more efficient at reducing the job failure rate than fixed fault-tolerant strategy selection.",2010,0, 3137,Fault-tolerant multi-server video-on-demand service,"The work describes three fault tolerant video streaming models for multi-server video-on-demand (VoD) services. These models guarantee continuous streaming to clients despite server failures, while utilizing very low network bandwidth and a small client buffer. This is achieved by exploiting the special characteristics of the MPEG and Motion-JPEG codecs. The uniqueness of the models suggested here compared to previous research is that, to the best of our knowledge, we are the first to address server failures, rather than only disk or file failures, while trying to take advantage of the codecs used in sending streaming media. In all models there is some degradation in quality until the failure is detected and compensated. However, there are no pauses in the movie, and the degradation in quality is very small compared to the low bandwidth and buffer overhead required, if any.",2003,0, 3138,Stability analysis of radome error and calibration using neural networks,Theoretical and numerical simulation analyses for the radome refraction effect on stability and induced miss distance of missiles guided by proportional navigation are presented. Quantitative stability conditions are derived with respect to linear and nonlinear radome error. A novel neural network compensation scheme for radome error is also presented.
It is shown that the proposed neural compensator can effectively reduce the influence resulting from radome error. Preliminary results indicate encouraging improvement in the miss distance and magnitude of the acceleration command,2001,0, 3139,Hand-held state monitoring and fault diagnosis system for oil field pouring and pick equipment,"There are many problems with the pouring and pick equipment (PPE) widely applied in oil fields, such as many different kinds of faults, dispersal of the equipment, strict requirements for continuous working, etc. Traditional PC-based fault diagnosis technology (FDT) does not have adequate flexibility, so it cannot satisfy the requirements mentioned above. To solve these problems, a new kind of hand-held state monitoring and fault diagnosis system (SM-FDS) based on embedded technology is put forward. After detailed analysis of the working situation and operating features of PPE, a realization based mainly on vibration indices is chosen; using multi-channel data collection technology, a hand-held circuit check is realized in the real-time data collection process. A signal analysis component library and a graphical user interface (GUI) component library, which are reconfigurable and occupy little space, are established to perform signal feature extraction, on-line analysis and diagnosis with multiple signal analysis methods. According to the specific requirements of hand-held instruments, a monitoring system with an upper-lower computer, dual-CPU architecture and low power consumption design is realized. Industry experiments indicated that this system monitors accurately, works stably, and has a bright application future.",2006,0, 3140,Using fuzzy probabilistic neural network for fault detection in MEMS,"There are different methods for detecting digital faults in electronic and computer systems. For analog faults, however, there are some problems. These faults include many different parametric faults, which cannot be detected by digital fault detection methods. One of the proposed methods for analog fault detection is neural networks. Fault detection is actually a pattern recognition task: faulty and fault-free data are different patterns which must be recognized. In this paper we use a probabilistic neural network for fault detection in MEMS. A fuzzy system is used to improve the performance of the network. Finally, the results of different networks are compared.",2005,0, 3141,On the tracking performance improvement of optical disk drive servo systems using error-based disturbance observer,"There are many control methods to guarantee the robustness of a system. Among them, the disturbance observer (DOB) has been widely used because it is easy to apply and its cost is low due to its simplicity. Generally, an output signal of the system is required to construct a DOB, but for some systems such as magnetic/optical disk drive systems, we cannot measure the position output signal, but only the position error signal (PES). In order to apply a DOB to such systems, we must use an error signal instead of an output signal. We call this the error-based disturbance observer (EDOB) system. We analyze the differences between a conventional DOB system and the EDOB system, and show the effectiveness of the proposed EDOB through simulations and experiments. Also, this paper proposes criteria to enhance the robustness of an EDOB system, and reveals the disturbance rejection property of the EDOB system.
Finally, we propose a new method of a double Q system to improve the track-following performance. This is also verified through experiments for a DVD 12 optical disk drive system.",2005,0, 3142,Application of detection method based on energy function against faults in motion system,"There are many possible faults in motion systems related to drive equipment. Based on the torque monitor signal supplied by the system's drive, a new fault diagnosis method that profits from a nonlinear energy operator is put forward. The principles of using it to perform fault diagnosis are presented, the key points of its application are analyzed, and a detailed diagnosis flow chart is given. Through software simulation using Matlab and diagnosis experiments on an X-Y-Z motion platform under a numerical control algorithm, the validity of this method is proved. From a detailed analysis of different kinds of energy functions and the choices of their parameters, some guidelines for fault diagnosis are drawn.",2008,0, 3143,Detection of Rotor Faults in Brushless DC Motors Operating Under Nonstationary Conditions,"There are several applications where the motor is operating in continuous nonstationary operating conditions. Actuators and servo motors in the aerospace and transportation industries are examples of this kind of operation. Detection of faults in such applications is, however, challenging because of the need for complex signal processing techniques. Two novel methods using windowed Fourier ridges and Wigner-Ville-based distributions are proposed for the detection of rotor faults in brushless dc motors operating under continuous nonstationarity. Experimental results are presented to validate the concepts and illustrate the ability of the proposed algorithms to track and identify rotor faults. The proposed algorithms are also implemented on a digital signal processor to study their usefulness for commercial implementation",2006,0, 3144,Diagnosis of rotor faults in brushless DC (BLDC) motors operating under non-stationary conditions using windowed Fourier ridges,"There are several applications where the motor is operating in continuous non-stationary operating conditions. Actuators in the aerospace and transportation industries are examples of this kind of operation. Diagnostics of faults in such applications is, however, challenging. A novel method using windowed Fourier ridges is proposed in this paper for the detection of rotor faults in BLDC motors operating under continuous non-stationarity. Experimental results are presented to validate the concept and depict the ability of the proposed algorithm to track and identify rotor faults. The proposed algorithm is simple and can be implemented in real-time without much computational burden.",2005,0, 3145,Fault detection using phenomenological models,"There exist many different established approaches to detect system faults. This paper discusses the various system models and the associated fault detection techniques. Specifically, phenomenological models are presented in detail. Fault detection using principal components analysis and the cluster and classify method is illustrated with real operational data from an electrically powered vehicle.",2003,0, 3146,Do Crosscutting Concerns Cause Defects?,"There is a growing consensus that crosscutting concerns harm code quality. An example of a crosscutting concern is a functional requirement whose implementation is distributed across multiple software modules.
We asked the question, ""How much does the amount that a concern is crosscutting affect the number of defects in a program?"" We conducted three extensive case studies to help answer this question. All three studies revealed a moderate to strong statistically significant correlation between the degree of scattering and the number of defects. This paper describes the experimental framework we developed to conduct the studies, the metrics we adopted and developed to measure the degree of scattering, the studies we performed, the efforts we undertook to remove experimental and other biases, and the results we obtained. In the process, we have formulated a theory that explains why increased scattering might lead to increased defects.",2008,0, 3147,Hardware support for fault tolerance in triple redundant CAN controllers,"There is a growing interest in using the controller area network (CAN) protocol for critical control applications. In many studies considering this possibility, it is assumed that CAN controllers, which are the circuits that implement most of the protocol specification, never present faults. Our work permits us to substantiate this assumption by proposing a fault-tolerant CAN controller subsystem, which is made up of three standard CAN controllers and a specifically designed circuit that manages the introduced redundancy. This circuit has been designed to adapt to the specific characteristics of the CAN protocol. This paper is focused on the presentation of the general criteria that have been followed in the design of the circuit. It is to be noted that our approach is perfectly compatible with other fault tolerance mechanisms that could be applied to other parts of the system.",2002,0, 3148,A Relaxed-Ring for Self-Organising and Fault-Tolerant Peer-to-Peer Networks,"There is no doubt about the increase in popularity of decentralised systems over the classical client-server architecture in distributed applications. These systems are developed mainly as peer-to-peer networks where it is possible to observe many strategies to organise the peers. The most popular one for structured networks is the ring topology. Despite the many advantages offered by this topology, the maintenance of the ring is very costly, and it is difficult to guarantee lookup consistency and fault-tolerance at all times. By increasing self-management in the system we are able to deal with these issues. We model ring maintenance as a self-organising and self-healing system using feedback loops. As a result, we introduce a novel relaxed-ring topology that is able to provide fault-tolerance with realistic assumptions concerning failure detection. Limitations related to failure handling are clearly identified, providing strong guarantees to develop applications on top of the relaxed-ring architecture. Besides permanent failures, the paper analyses temporary failures and broken links, which are often ignored.",2007,0, 3149,Computer testing method of defect feature of fabric,"There is at present no perfect method for testing the defect features of fabrics by computer. In this paper, a 3-D gray feature change model is proposed after a great deal of experimentation, and fabric features are tested and measured. All this provides the theoretical and data basis for research on automatic defect recognition. Firstly, a mathematical description of the 3-D gray feature change model is given, and a detailed testing method for fabric features is introduced.
Then, different experiments are designed according to the energy, variance, entropy, and range features, their expressive effect is tested, and 3-D gray feature change graphs are drawn. Finally, according to these testing results, the defect categories to which these features apply are given in detail, and it is concluded that the 3-D gray feature change model can test fabric defect features better.",2009,0, 3150,Glass auto alignment mechanism on LCD Defect repair stage system,"This mechanism provides auto-alignment for an LCD defect repair stage system. While the glass floats on the glass chuck on air, a unit located at the side of the glass chuck blows air at the side of the glass through many holes. This air flow pushes the floating glass toward the side of the glass chuck, so that the sliding glass enters guides that are fixed at the side of the glass chuck according to the glass size. This mechanism has many benefits: low cost, fast alignment time, easy maintenance, no cylinders, etc.",2009,0, 3151,Optimizing fault tolerance in embedded distributed systems,"This article considers a gas-insulated switchgear (GIS) station. Here, all switches are located in SF6 gas volumes, offering a much better isolation and spark-extinguishing property than air, and thus leading to a far more compact size. All volumes have one or two sensors for monitoring gas density. The station's distributed computing system is arranged in several levels. PISAs-embedded computing systems for preprocessing the data from digital/analog converters and for writing them on the process bus (PB)-link to the sensor and actuator hardware. PISAs in the switches also take the actuator commands from the process bus and carry out the switching actions",2000,0, 3152,The use of abduction and recursion-editor techniques for the correction of faulty conjectures,"The synthesis of programs, as well as other synthetic tasks, often ends up with an unprovable, partially false conjecture. A successful subsequent synthesis attempt depends on determining why the conjecture is faulty and how it can be corrected. Hence, it is highly desirable to have an automated means for detecting and correcting faulty conjectures. We introduce a method for patching faulty conjectures. The method is based on abduction and performs its task during an attempt to prove a given conjecture. On input ∀X.G(X), the method builds a definition for a corrective predicate, P(X), such that ∀X.P(X)→G(X) is a theorem. The synthesis of a corrective predicate is guided by the constructive principle of formulae as types, relating inference to computation. We take the construction of a corrective predicate as a program transformation task. The method consists of a collection of construction commands. A construction command is a small program that makes use of one or more program editing commands, geared towards building recursive, equational procedures. A synthesised corrective predicate is guaranteed to be correct, turning a faulty conjecture into a theorem. If conditional, it will be well-defined. If recursive, it will also be terminating",2000,0, 3153,Evaluation of fault-tolerant designs implemented on SRAM-based FPGAs,"The technology of SRAM-based devices is sensitive to single event upsets (SEUs), which may be induced mainly by high-energy heavy ions and neutrons. We present a framework for the evaluation of fault-tolerant designs implemented on SRAM-based FPGAs using emulated SEUs. The SEU injection process is performed by inserting emulated SEUs into the device using its configuration bitstream file.
An Altera FPGA, i.e. the Flex10K200, and the ITC'99 benchmark circuits are used to experimentally evaluate the method. The results show that between 32 and 45 percent of SEUs injected into the device propagate to its output terminals.",2004,0, 3154,Correction Model of Pressure Sensor Based on Support Vector Machine,"The temperature and voltage fluctuation characteristics of a pressure sensor were analyzed; it was found that the sensor output is nonlinear and easily affected by temperature and voltage fluctuations over a wide measuring range, so a correction model of the pressure sensor based on a Support Vector Machine is presented. The ability of the SVM to approximate any nonlinear function is utilized to train the correction model, so that it can be set up at different temperatures and voltage fluctuations, allowing the sensor output to be in a nonlinear mapping relation to the voltage values the sensor actually sensed. The experimental results show that the maximum error comes down from 22.2% to 0.64%; the model not only eliminates the influence of temperature and voltage fluctuations but also obtains the expected linear output at the output terminal of the correction model.",2009,0, 3155,Hilbert-Huang Transform Based Application in Power System Fault Detection,The transient fault signals existing in high-voltage lines and electrical equipment are usually non-linear and non-stationary. Low-frequency oscillation characteristic extraction from fault signals plays an important role in designing online fault monitoring and detection systems. In this paper we propose a procedure to analyze power system fault signals by employing the HHT method. The fault signal is first decomposed into intrinsic mode functions (IMFs) by the empirical mode decomposition (EMD) method. Then the instantaneous frequency and instantaneous amplitude are obtained by the Hilbert transform to compose the Hilbert spectrum. Thus the transient caused by a fault can be analyzed and accurately detected through the instantaneous frequency and the time-frequency-amplitude spectrum analysis. Experiments show promising results.,2009,0, 3156,The Software for a Choice of Coefficient of Criterion of a Minimum of the Resulting Root-Mean-square Error and Small Parameter for Linear Automatic Control Systems,"This work studies the influence of the weight coefficient of the minimum root-mean-square error criterion on the resulting error of a linear system. Various approaches to the choice of the weight coefficient of the criterion are considered, together with software for automating the choice of the optimum value of the small parameters of a system using the given criterion. The program makes it possible to determine the value of a small parameter of an automatic control system numerically and graphically by minimizing the system error, and also to save and print the results.",2006,0,8453 3157,Low-cost diagnostic method for open-switch faults in inverters,"The theory, design, implementation and experimental validation of a novel low-cost online diagnosis and location method for open-switch faults in voltage source inverters are presented. According to the analysis of the operating states of inverters, the collector-emitter voltages of the bottom power switches differ in faulty conditions compared with normal conditions. To reduce cost, the proposed method employs high-speed photocouplers to sense the voltages of the bottom switches, and then a sample logic circuit is designed to implement the detection and location of the fault.
The developed approach minimises the time interval between the fault occurrence and its diagnosis, and is validated by experiment.",2010,0, 3158,Model-based fault-tolerant control reconfiguration for general network topologies,This article describes a fault-tolerant approach to systems with arbitrary network topologies that uses a model-based diagnosis and control reconfiguration mechanism. The authors illustrate this technique using a wireless sensor network as an example,2001,0, 3159,"Low-cost, fault-tolerance applications","This article describes an approach for designing various low-cost, fault-tolerant uniprocessor applications using a multiprocessor. The proposed software-based fault-tolerant model is an economical and effective solution for tolerating transient and intermittent faults that may occur during the run time of a multiprocessor application system. In this article, the proposed technique adopts the strategy of defensive programming based on time redundancy. This article focuses on protecting an application from faults by running multiple copies of the application on a shared-memory multiprocessor. It saves the costs of developing various independent versions of an application program. This is a significant step towards designing a reliable system at a low cost.",2005,0, 3160,An online fault diagnosis method based on object oriented modeling for FCV power train system,"This article mainly deals with an object-oriented online diagnosis system based on analytical redundancy for fuel cell cars. On the basis of a physical modeling language, and using a specific mathematical frame structure and code compilation system, the method transforms the analytical redundancy into real-time operational diagnosis code in an embedded object environment by means of an object-oriented physical modeling language and computer-aided development technology.",2008,0, 3161,Computer models of internal short circuit and incipient faults in transformers,"This article presents computer models that simulate internal short circuit and incipient faults in transformers. To make an accurate diagnostic decision, transformer internal winding faults must be characterized by analyzing large quantities of data, which can be generated through computer simulation or field experiments. A finite element method is presented which models internal short circuit faults (H.Wang et al., 2001). Finite element analysis (FEA) techniques are useful for obtaining an accurate characterization of the electromagnetic behavior of magnetic components, such as transformers. Finite element analysis is applied to calculate the parameters of an equivalent circuit of the transformer with an internal short circuit fault using ANSOFT's Maxwell software. Various short circuit and incipient fault scenarios at different levels of degradation of the transformer winding insulation were simulated. Graphs of the results are presented and discussed.",2003,0, 3162,Fault detection and location on electrical distribution system,"This case study summarizes the efforts undertaken at CP&L (A Progress Energy Company) over the past five years to improve distribution reliability via detection of distribution faults and determination of their location. The analysis methods used by CP&L have changed over the years as improvements were made to the various tools used.
The tools used to analyze distribution faults included a feeder monitoring system (FMS), an automated outage management system (OMS), a distribution SCADA system (DSCADA), and several Excel spreadsheet applications. The latest fault detection system involves an integration of FMS, OMS, and DSCADA systems to provide distribution dispatchers with a graphical display of possible locations for faults that have locked out feeder circuit breakers",2002,0, 3163,Detection of rotor faults in squirrel cage induction machines at standstill for batch tests by means of the Vienna monitoring method,"This contribution addresses the possibility of integrating an induction machine rotor fault monitoring technique into batch tests. Conventional monitoring techniques are designed for the area of rated torque. Generally, load torque is not available at batch test as the test sample shouldn't or can't be clutched. However, at standstill, machine torque and machine currents are large enough to enable rotor monitoring. The short-circuit tests are applicable for diagnostic purposes. This application is demonstrated by means of the Vienna monitoring method (VMM), which is a model based evaluation technique",2000,0, 3164,Effects of camera aperture correction on keying of broadcast video,"This contribution discusses the effects of camera aperture correction in broadcast video on colour-based keying. The aperture correction is used to 'sharpen' an image and is one element that distinguishes the 'TV-look' from 'film-look'. If this sharpening is done excessively, as is the case in many TV productions, then this significantly shifts the colours around object boundaries with high contrast. This paper discusses these effects and their impact on keying and describes a simple low-pass filter to compensate for them. Tests with two segmentation algorithms show that the proposed compensation effectively decreases the keying artefacts on object boundaries.",2008,0, 3165,Error propagation in software architectures,"The study of software architectures is emerging as an important discipline in software engineering, due to its emphasis on large scale composition of software products, and its support for emerging software engineering paradigms such as product line engineering, component based software engineering, and software evolution. Architectural attributes differ from code-level software attributes in that they focus on the level of components and connectors, and that they are meaningful for an architecture. In this paper, we focus on a specific architectural attribute, which is the error propagation probability throughout the architecture, i.e. the probability that an error that arises in one component propagates to other components. We introduce, analyze, and validate formulas for estimating these probabilities using architectural level information.",2004,0, 3166,On-line correction of errors introduced by instrument transformers in transmission-level steady-state waveform measurements,"The successful implementation of a transmission level harmonic measurement system requires accurate and reliable measurement of harmonic voltages and currents. Existing substation instrument transformers are designed for 60 Hz measurements and they have been shown to cause resonance errors in the measurements. In this paper, we propose an on-line error correction method to correct for these resonance errors as well as possible saturation errors.
The error correction is formulated as an output tracking problem where the distorted measurements are used along with the experimentally developed transformer model to reconstruct the transformer input. The method is generic, thereby permitting its use with any measurement system that utilizes a transducer with nonideal properties. It is also cost effective since it can be implemented on a personal computer or a digital signal processing chip",2000,0, 3167,Supervision and fault management of process-tasks and terminology,"The supervision of technical processes is aimed at showing the present state, indicating undesired or unpermitted states, and taking appropriate actions to avoid damage or accidents. The deviations from normal process behavior result from faults and errors, which can be attributed to many causes. They may result in shorter or longer periods of malfunction or failure if no counteractions are taken. One reason for supervision is to avoid these malfunctions or failures. In the following article the basic tasks of supervision are briefly described.",2007,0, 3168,The Copper Surface Defects Inspection System Based on Computer Vision,"Surface defects in copper strips severely affect the quality of the copper, so detecting surface defects in copper strip is of great significance for improving quality. This paper presents a copper strip surface inspection system based on computer vision, which uses a modularized hardware framework and image processing software. The paper adopts a self-adaptive weighted averaging filter to preprocess the image, and uses moment invariants to extract the features of typical defects, whose feature vectors are identified with RBF neural networks. Experiments show that the real-time method can effectively detect copper strip surface defects in the production line.",2008,0, 3169,Calculation of the Half-Power Beamwidths of Pyramidal Horns With Arbitrary Gain and Typical Aperture Phase Error,This letter describes a method for calculating the half-power beamwidths of a pyramidal horn in closed form as a function of its gain and aperture phase error. The derived expressions are independent of the horn's physical dimensions and are valid for horns of arbitrary gain and phase error in the range of practical interest. Comparisons to methods in the literature demonstrate the efficacy of the approach. The formulation is a useful tool in the analysis and design of pyramidal horns when specific gain and half-power beamwidth values are required.,2010,0, 3170,Analysis of the error performance of adaptive array antennas for CDMA with noncoherent M-ary orthogonal modulation in Nakagami fading,"This letter presents an analytical model for evaluating the bit error rate (BER) of a direct sequence code division multiple access (DS-CDMA) system, with M-ary orthogonal modulation and noncoherent detection, employing an array antenna operating in a Nakagami fading environment. An expression of the signal to interference plus noise ratio (SINR) at the output of the receiver is derived, which allows the BER to be evaluated using a closed form expression.
The analytical model is validated by comparing the obtained results with simulation results.",2005,0, 3171,Improving the Statement of the Corrective Security-Constrained Optimal Power-Flow Problem,"This letter proposes a formulation of the corrective security-constrained optimal power-flow problem imposing, in addition to the classical post-contingency constraints, existence and viability constraints on the short-term equilibrium reached just after a contingency. The rationale for doing so is discussed and supported by two examples",2007,0, 3172,Design of Complex FIR Filters With Reduced Group Delay Error Using Semidefinite Programming,"This letter proposes an improved method for designing complex finite impulse response (FIR) filters with reduced group delays using semidefinite programming. A key step in the proposed method is to directly impose a group delay constraint, which is formulated as a linear matrix inequality constraint with some reasonable approximations. The main advantage of the proposed design method is the significant reduction in group delay error at the expense of a slight increase in magnitude error. The effectiveness of the proposed design method is illustrated with an example",2006,0, 3173,Structural defects: general approach and application to textile inspection,"This paper addresses detection of imperfections in repetitive regular structures (textures). Humans can easily find such defects without prior knowledge of the `good' pattern. In this study, it is assumed that structural defects are detected as irregularities, that is, locations of lower regularity. We define pattern regularity features and find defects by robust detection of outliers in the feature space. Two tests are presented to assess the approach. In the first test, diverse texture patterns are processed individually and outliers are searched for in each pattern. In the second test, classified defects in a group of textiles are considered. Defect-free patterns are used to learn distance thresholds that separate defects",2000,0, 3174,Research on Transformer Fault Diagnosis Expert System Based on DGA Database,"This paper analyzes and designs a transformer fault diagnosis system based on a dissolved gas analysis (DGA) database, in which the DGA data are managed by an Oracle database. The fault diagnosis module includes single analysis items and an integrated analysis item, such as an improved three-ratio method, grey relational entropy, fuzzy clustering, artificial neural networks, and so on, which reduce the shortcomings of the diagnosis methods in use today. The system realizes each function of the modules using a layered method; it is able to diagnose problems existing in the oil chromatogram analysis data of a transformer, and the accuracy of the system is verified by a practical example.",2009,0, 3175,Fault Tolerance of Relative Navigation Sensing in Docking Approach of Spacecraft,"This paper analyzes fault tolerance of spacecraft relative navigation in automated rendezvous and docking (AR&D). The relatively low technology readiness of existing relative navigation sensors for AR&D has been identified as one of the NASA crew exploration vehicle project's top tasks. Fault tolerance could be enhanced with the help of FDIR (fault detection, identification and recovery) logic and use of redundant sensors. Because of mass and power constraints, it is important to choose a fault tolerant design that provides the required reliability without adding excessive hardware.
An important design trade is determining whether a redundant sensor can be normally unpowered and activated only when necessary. This paper analyzes reliability trades for such a fault-tolerant system. A Markov Chain model of the system is composed of sub-models for sensor faults and for sensor avionics states. The sensor fault sub-model parameters are based on sensor testing data. The avionics sub-model includes FDIR states; the parameters are determined by Monte Carlo simulations of the near field docking approach. The integrated Markov Chain model allows the probabilities of mission abort and a mishap to be computed. The results of the trade study include the dependence of the probabilities on the backup sensor activation delay.",2008,0, 3176,Commercial fault tolerance: a tale of two systems,"This paper compares and contrasts the design philosophies and implementations of two computer system families: the IBM S/360 and its evolution to the current zSeries line, and the Tandem (now HP) NonStop Server. Both systems have a long history; the initial IBM S/360 machines were shipped in 1964, and the Tandem NonStop System was first shipped in 1976. They were aimed at similar markets, what would today be called enterprise-class applications. The requirement for the original S/360 line was for very high availability; the requirement for the NonStop platform was for single fault tolerance against unplanned outages. Since their initial shipments, availability expectations for both platforms have continued to rise and the system designers and developers have been challenged to keep up. There were and still are many similarities in the design philosophies of the two lines, including the use of redundant components and extensive error checking. The primary difference is that the S/360-zSeries focus has been on localized retry and restore to keep processors functioning as long as possible, while the NonStop developers have based systems on a loosely coupled multiprocessor design that supports a ""fail-fast"" philosophy implemented through a combination of hardware and software, with workload being actively taken over by another resource when one fails.",2004,0, 3177,Comparison of TCP to XCP performance on channels with correlated errors employing error correction and interleaving,"This paper compares the performance of the TCP and XCP transport protocols over a lossy, large bandwidth-delay link. The link employs block-based error correction codes (ECC), along with interleaving, which seems to affect both TCP and XCP bandwidth utilization performance. The results are obtained by numerical simulation of TCP and XCP connections over the channel. Single connection results show that the delay introduced by interleaving significantly affects TCP performance, while XCP performance is somewhat better. However, when large interleavers are employed, both protocols suffer. This is likely due to decorrelation of packet errors introduced by interleaving.",2005,0, 3178,Dynamic Field Estimation Using Wireless Sensor Networks: Tradeoffs Between Estimation Error and Communication Cost,"This paper concerns the problem of estimating a spatially distributed, time-varying random field from noisy measurements collected by a wireless sensor network. When the field dynamics are described by a linear, lumped-parameter model, the classical solution is the Kalman-Bucy filter (KBF). Bandwidth and energy constraints can make it impractical to use all sensors to estimate the field at specific locations.
Using graph-theoretic techniques, we show how reduced-order KBFs can be constructed that use only a subset of the sensors, thereby reducing energy consumption. This can lead to degraded performance, however, in terms of the root mean squared (RMS) estimation error. Efficient methods are presented to apply Pareto optimality to evaluate the tradeoffs between communication costs and RMS estimation error to select the best reduced-order KBF. The approach is illustrated with simulation results.",2009,0, 3179,Calibration errors in augmented reality: a practical study,"This paper confronts theoretical camera models with reality and evaluates the suitability of these models for effective augmented reality (AR). It analyses what level of accuracy can be expected in real situations using a particular camera model and how robust the results are against realistic calibration errors. An experimental protocol is used that consists of taking images of a particular scene with cameras of different quality mounted on a 4DOF micro-controlled device. The scene is made of a calibration target and three markers placed at different distances from the target. This protocol enables us to consider assessment criteria specific to AR, such as alignment error and visual impression, in addition to the classical camera positioning error.",2005,0, 3180,SIMONA flight simulator implementation of a fault tolerant sliding mode scheme with on-line control allocation,This paper considers a sliding mode based allocation scheme for fault tolerant control. The scheme allows redistribution of the control signals to the remaining functioning actuators when a fault or failure occurs. It is shown that faults and even certain total actuator failures can be handled directly without reconfiguring the controller. The results obtained from implementing the controller on the SIMONA flight motion simulator show good performance in both nominal and failure scenarios even in wind and gust conditions.,2008,0, 3181,Error probability and SINR analysis of optimum combining in rician fading,"This paper considers the analysis of optimum combining systems in the presence of both co-channel interference and thermal noise. We address the cases where either the desired user or the interferers undergo Rician fading. Exact expressions are derived for the moment generating function of the SINR which apply for arbitrary numbers of antennas and interferers. Based on these, we obtain expressions for the symbol error probability with M-PSK. For the case where the desired user undergoes Rician fading, we also derive exact closed-form expressions for the moments of the SINR. We show that these moments are directly related to the corresponding moments of a Rayleigh system via a simple scaling parameter, which is investigated in detail. Numerical results are presented to validate the analysis and to examine the impact of Rician fading on performance.",2009,0, 3182,ATM cell error performance of xDSL under impulse noise,"This paper considers the cell error performance of ATM over digital subscriber lines (DSL) in the presence of impulse noise. In recent years there has been increasing interest in xDSL technology due to its relatively broad bandwidth and ease of deployment. ATM is a preferred protocol over DSL. ATM, however, has been designed for low bit error rates. Despite error correction protocols, DSL cannot always overcome bursty errors, particularly impulse noise. Interest in DSL has triggered research into impulse noise.
A new model has been developed based on surveys of experimental data. Using this model, the impact of impulse noise on ATM cell error performance has been traced. Various DSL framings have been found to adversely affect the ATM stream. The interleave depth should be either 1 or large enough to correct the errors. The performance in terms of header and payload errors differs. The time between errored cells exhibits clustering, like the interarrival times. It is possible within one impulse event to have good cells between errored cells. Finally, for higher level applications error-free cells are a more appropriate metric than error-free seconds",2001,0, 3183,Low-density parity-check coding for impulse noise correction on power-line channels,"This paper considers the design of near capacity-achieving error correcting codes for a discrete multi-tone system in the presence of both additive white Gaussian noise and impulse noise. Impulse noise is one of the main channel impairments in the power-line channel. One way to combat impulse noise is to detect the presence of the impulses and to declare an erasure when an impulse occurs. In this paper, we propose a coding system based on irregular low-density parity-check (LDPC) codes and bit-interleaved coded modulation. We show that by carefully choosing the degree distribution of the irregular LDPC code, both the additive noise and the erasures can be handled in a single code. We show that the proposed system can perform close to the capacity of the channel and for the same redundancy is significantly more immune to the impulse noise than the existing methods based on an outer Reed-Solomon code.",2005,0, 3184,Adaptive online correction and interpolation of quadrature encoder signals using radial basis functions,"This paper considers the development of an adaptive online approach for the correction and interpolation of quadrature encoder signals, suitable for application to precision motion control systems. It is based on the use of a two-stage double-layered radial basis function (RBF) neural network. The first RBF stage is used to adaptively correct for imperfections in the encoder signals such as mean and phase offsets, amplitude deviation and waveform distortion. The second RBF stage serves as the inferencing machine to adaptively map the quadrature encoder signals to higher order sinusoids, thus enabling intermediate positions to be derived. Experimental and simulation results are provided to verify the effectiveness of the RBF approach.",2005,0, 3185,"Implementation of error trapping technique in (31,16) cyclic codes for two-bit error correction in 16-bit sound data using Labview Software","This paper considers the implementation of a cyclic code encoder and decoder for multimedia content in the form of sound data using National Instruments LabView software. Cyclic codes can be defined by two parameters, which are the code size n and the information bit size k. LabView is easy-to-use, multipurpose software with many features for designing and prototyping. This is preliminary research on channel coding implementation in LabView. In this research, cyclic codes are used to implement the design. 16-bit sound data are used as test subjects for cyclic code encoding, decoding, and error correction. The results show that the design works well. The design can correct short two-bit errors in the last n-k positions of the codeword.
The authors' next project is to implement a more advanced code for error correction in LabView.",2010,0, 3186,Design of a fault-tolerant coarse-grained,"This paper considers the possibility of implementing low-cost hardware techniques which would allow temporary faults in the datapaths of coarse-grained reconfigurable architectures (CGRAs) to be tolerated. Our goal was to incur less hardware overhead than the commonly used duplication or triplication methods. The proposed technique relies on concurrent error detection using a residue code modulo 3 and re-execution of the last operation once an error is detected. We have chosen the DART architecture as a vehicle to study the efficiency of this approach in protecting its datapaths. Simulation results have confirmed the hardware savings of the proposed approach over duplication.",2010,0, 3187,Alternative Error Correcting Coding in the Mobile TV Broadcasting,"This paper describes an alternative error correcting coding for mobile TV broadcasting. A comparative analysis of Reed-Solomon and matroid nonbinary error correcting codes is given. Also, the parameters of the matroid codes and the results of the PLD design of a matroid coder-decoder are presented",2006,0, 3188,Amending the syntax of the MPEG-4 Simple Scalable Profile to use error resilience tools,"This paper describes an amendment to the ISO/IEC 14496-2 standard, commonly known as MPEG-4 video, which incorporates error resilience tools in the Simple Scalable Profile of the MPEG-4 video codec, such that scalable MPEG-4 is made suitable for deployment in a mobile communication environment. This is known as the Error Resilient Simple Scalable Profile (ER-SSP). When it was defined in 2000, the syntax of the Simple Scalable Profile (SSP) prohibited the use of the error resiliency tools which were available in the base layer of the MPEG-4 codec, and which are essential for use of MPEG-4 in a mobile communication environment. This paper reports on the rationale behind the newly defined Error Resilient Simple Scalable Profile within the MPEG standard, describes the syntax for the new profile, and explains the suitability of incorporating the error resiliency tools of the base layer into the enhancement layer. It is shown that minor modifications are required in the Header Extension to synchronize the decoding process between the two layers. Hence it is shown that base layer error resilience tools are equally applicable to the enhancement layer with nominal syntax changes.",2003,0, 3189,Correction of time-varying radiometric biases in the TRMM Microwave Imager,"This paper describes an empirical correction for a time-varying radiometric calibration error present in the Tropical Rainfall Measuring Mission (TRMM) Microwave Imager (TMI). The Tb error is modeled as a function of the spacecraft orbit position and solar angles based on observed systematic differences between the TMI 10 GHz vertically polarized (10V) channel oceanic Tb's as compared to a radiative transfer model with collocated numerical weather model environmental parameters. This correction technique will be implemented in Version 7 of the TRMM 1B11 data product, which is scheduled to be released in 2010. Available spacecraft in-flight thermal measurements that have relevance to the instrument performance are discussed. Results from thermal modeling of the spacecraft and instrument provide support for the emissive reflector model as the primary cause of the empirical Tb bias estimates.
Residual differences between the thermal model and the bias estimates may provide evidence of smaller secondary effects from solar array shadowing of the instrument.",2010,0, 3190,An experimental study of adaptive forward error correction for wireless collaborative computing,"This paper describes an experimental study of a proxy service to support collaboration among mobile users. Specifically, the paper addresses the problem of reliably multicasting Web resources across wireless local area networks, whose loss characteristics can be highly variable. The software architecture of the proxy service is described, followed by results of a performance study conducted on a mobile computing testbed. The main contribution of the paper is to show that an adaptive forward error correction mechanism, which adjusts the level of redundancy in response to packet loss behavior, can quickly accommodate worsening channel characteristics in order to reduce delay and increase throughput for reliable multicast channels",2001,0, 3191,Multiphase Power Converter Drive for Fault-Tolerant Machine Development in Aerospace Applications,"This paper describes an experimental tool to evaluate and support the development of fault-tolerant machines designed for aerospace motor drives. Aerospace applications essentially involve safety-critical systems which should be able to overcome hardware or software faults and therefore need to be fault tolerant. A way of achieving this is to introduce variable degrees of redundancy into the system by duplicating one or all of the operations within the system itself. Looking at motor drives, multiphase machines, such as multiphase brushless dc machines, are considered to be good candidates in the design of fault-tolerant aerospace motor drives. This paper introduces a multiphase two-level inverter using a flexible and reliable field-programmable gate-array/digital-signal-processor controller for data acquisition, motor control, and fault monitoring to study the fault tolerance of such systems.",2010,0, 3192,A fuzzy-logic based optical sensor for online weld defect-detection,"This paper describes an intelligent optical sensor for real time defect detection in gas metal arc welding processes. The sensor measures the radiation emitted by the plasma surrounding the welding arc, and analyzes the information in real time to determine an index of local quality of the weld. The data processing algorithm encompasses a Kalman filter to reduce the large amount of noise affecting the measured signals, and an intelligent fuzzy system to assess the degree of acceptability of the weld. The fuzzy system is also able to detect the risk of specific problems (e.g., anomalies in the current, voltage or speed of the arc, contamination with other materials, holes) and the position of defects along the welding line. In an extensive experimental comparison, the fuzzy system outperforms a former version of the detection algorithm based on a statistical approach.",2005,0, 3193,Correcting for systematic errors in one-port calibration-standards,"This paper describes how to correct for systematic errors in one-port vector network analyzer calibrations that are caused by modeling errors in the one-port calibration standards. The paper shows how to correct an open-short-load calibration that was made with a broadband load whose reflection coefficient is not zero. The method can also be used to correct for modeling errors in the open and short reflects if those errors are known.
The method is demonstrated by performing an open-short-load calibration with a broadband load whose reflection coefficient is as large as 0.035 at 50 GHz, and then correcting that calibration to produce measurement accuracies comparable to those from a thru-reflect-line calibration.",2003,0, 3194,Fault Management for Secure Embedded Systems,"This paper describes the principles of an embedded system design supporting safety and security using a dedicated architecture. After reviewing the simple specification language deployed, the main attention is focused on the hardware architecture, software, and communication services that fit the application requirements. The gasoline dispenser controller presented here is a real-world example of a safety- and security-critical embedded system application. The paper stresses those features that distinguish the real project from a demonstration case study.",2009,0, 3195,A nonintrusive power system arcing fault location system utilising the VLF radiated electromagnetic energy,"This paper describes research being conducted by the Power and Energy Systems Group at the University of Bath into a novel technique for monitoring the condition of transmission and sub-transmission circuitry using the electromagnetic radiated energy emitted from a power system arcing fault, together with the propagation effects of very low frequency (VLF) radio waves. The emission of an atmospheric radio wave, collectively known as a sferic, from a power system arcing fault will be characterised and is similar to the way a lightning strike induces a sferic. The paper describes the hardware infrastructure necessary to be able to monitor the condition of transmission and sub-transmission circuitry, together with the real-time intelligent algorithm which enables discrimination between power system arcing faults and other sources of impulsive radio noise in the VLF spectrum. The technique described in this paper is not limited to monitoring a specific line or circuit as no physical connection is required to the system; instead the described system monitors a geographic area and therefore it can be economically used on sub-transmission circuitry",2000,0, 3196,Comparison of methods for nonlinearity correction of platinum resistance thermometer,"This paper compares two methods for nonlinearity correction of platinum resistance industrial thermometers in the production phase. Temperature measurement is performed by an industrial thermometer based on a measuring insert with a platinum resistance sensor. The measuring range of such thermometers is between -50°C and +600°C. As the sensor does not have a linear transfer function, correction is necessary. This can be done in two ways: by a predefined table of temperature values and the related resistances, or by calculating the temperature using equations. The comparison of these methods shows how to choose the better solution and include it in the thermometer, taking the microcontroller resources into account, while keeping the approximation error within the allowed boundaries.",2008,0, 3197,Design and control of an intelligent dual-arm manipulator for fault-recovery in a production scenario,"This paper describes the design and control methodology used for the development of a dual-arm manipulator as well as its deployment in a production scenario. Multi-modal and sensor-based manipulation strategies are used to guide the robot on its task to supervise and, when necessary, solve faulty situations in a production line.
For that task the robot is equipped with two arms, aimed at providing the robot with total independence from the production line. In other words, no extra mechanical stoppers are mounted on the line to halt targeted objects; instead the robot will employ both arms to (a) stop with one arm a carrier that holds an object to be inserted/replaced, and (b) use the second arm to handle that object. In addition, visual information from head- and wrist-mounted cameras provides the robot with information such as the state of the production line, the unequivocal detection/recognition of the targeted objects, and the location of the target in order to guide the grasp.",2009,0, 3198,Corrective action with power converter for faulty multiple fuel cells generator used in transportation,"This paper deals with corrective action using a power converter for a 100 kW multiple fuel cell (FC) generator under fault, used for vehicle propulsion or high-power onboard electrical assistance. The objective is to permit, through the power converter and its control strategy, a soft shutdown of a faulty FC stack and to guarantee continuity of operation at a reduced power acceptable to the specifications. The power converter should also perform the power management during the degraded working situation. Two power system architectures are studied and compared by numerical simulation.",2010,0, 3199,Fault Tolerant Control Design for Nonlinear Polynomial Systems,This paper deals with the design of an original fault tolerant control strategy for a class of nonlinear polynomial systems. Our main contribution in this work consists of the synthesis of a polynomial static state feedback controller in the presence of model parameter variations. The controller is based on the feedback linearization formalism and the powerful mathematical tool of the Kronecker product. The control technique is synthesized through a moving horizon state estimator in order to preserve the online closed-loop performance of the system. An example illustrates the effectiveness of the proposed method via the study of the control problem of a series DC motor.,2010,0, 3200,Induction motor stator faults diagnosis by a current Concordia pattern-based fuzzy decision system,"This paper deals with the problem of detection and diagnosis of induction motor faults. Using the fuzzy logic strategy, a better understanding of the heuristics underlying the motor fault detection and diagnosis process can be achieved. The proposed fuzzy approach is based on the stator current Concordia patterns. Induction motor stator currents are measured, recorded, and used for Concordia pattern computation under different operating conditions, particularly for different load levels. Experimental results are presented in terms of accuracy in the detection of motor faults and knowledge extraction feasibility. The preliminary results show that the proposed fuzzy approach can be used for accurate stator fault diagnosis if the input data are processed in an advantageous way, which is the case for the Concordia patterns.",2003,0, 3201,Comparative analysis of electromagnetic interference produced by high power single-phase power factor correction pre-regulators,"This paper deals with the problem of electromagnetic interference (EMI) generated in power converters used for power factor correction (PFC). The topologies discussed in this paper are the boost, the interleaved boost and the dual boost converters.
The main noise sources generated by converters and a comparative analysis of the differential mode noise and common mode noise present in the topologies under study are shown. The EMI filter design to comply with the EMI standards is also addressed. Experimental results of the conducted EMI of the converters operating as PFC are presented at the end of the paper, in order to support the comparisons.",2009,0, 3202,Using current signature analysis technology to reliably detect cage winding defects in squirrel cage induction motors,"This paper demonstrates, through industrial case histories, the application of current signature analysis (CSA) technology to reliably diagnose rotor winding problems in squirrel cage motors. Many traditional CSA methods result in false alarms and/or misdiagnosis of healthy machines due to the presence of current components in the broken cage winding frequency domain that are not the result of such defects. Such components can result from operating conditions and drive components such as mechanical load fluctuations, speed reducing gearboxes, etc. Due to theoretical advancements it is now possible to predict many of these current components, thus making CSA testing less error prone and therefore a much more reliable technology. Reliable detection of the inception of broken cage winding problems, or broken rotor bars, prior to failure allows for remedial actions to be taken to avoid significant costs associated with consequential motor component damage and unplanned down time associated with such in-service failures.",2005,0, 3203,Vehicle localization in outdoor woodland environments with sensor fault detection,"This paper describes a 2D localization method for a differential drive mobile vehicle on real forested paths. The mobile vehicle is equipped with two rotary encoders, Crossbow's NAV420CA inertial measurement unit (IMU) and a NAVCOM SF-2050M GPS receiver (used in StarFire-DGPS dual mode). Loosely-coupled multisensor fusion and sensor fault detection issues are discussed as well. An extended Kalman filter (EKF) is used for sensor fusion estimation where a GPS noise pre-filter is used to avoid introducing biased GPS data (affected by multi-path). Normalized innovation squared (NIS) tests are performed when a GPS measurement is incorporated to reject GPS data outliers and keep the consistency of the filter. Finally, experimental results show the performance of the localization system compared to a previously measured ground truth.",2008,0, 3204,Bit Error Rate Estimation for Improving Jitter Testing of High-Speed Serial Links,"This paper describes a bit error rate (BER) estimation technique for high-speed serial links, which utilizes the jitter spectral information extracted from the transmitted data and some key characteristics of the clock and data recovery (CDR) circuit in the receiver. In addition to improving the accuracy of BER prediction, the estimation technique can be used to accelerate the jitter tolerance test by eliminating the conventional BER measurement process. Experimental results comparing the estimated BER and the BERT-measured BER on a 2.5 Gbps commercial CDR circuit demonstrate the high accuracy of the proposed technique.",2006,0, 3205,Fault-tolerant speed measurement for the control of a DC-motor,"This paper describes a control system that has been designed to handle sensor faults and therefore can be considered a fault-tolerant one. The fault tolerant controller is based on the output of the speed sensor and two speed observers.
The first one is a suboptimal Kalman filter based on the state model of the machine. The second one is based on the estimation of the frequency of one rotor slot harmonic present in the DC current signal, which contains speed information. The two observers guarantee the best dynamic and steady-state performances required by the application and also improve the reliability in the event of sensor loss or sensor recovery. The fault tolerant controller reorganization is based on a control decision block which depends on the steady-state and dynamic performances of the two observers. The results of the control system show the effectiveness of this approach in the event of speed sensor loss and recovery.",2007,0, 3206,"Assessing and Estimating Corrective, Enhancive, and Reductive Maintenance Tasks: A Controlled Experiment","This paper describes a controlled experiment of student programmers performing maintenance tasks on a C++ program. The goal of the study is to assess the maintenance size, effort, and effort distribution of three different maintenance types and to describe estimation models to predict the programmer's effort on maintenance tasks. The results of our study suggest that corrective maintenance is much less productive than enhancive and reductive maintenance. Our study also confirms the previous results which conclude that corrective and reductive maintenance requires large proportions of effort on program comprehension activity. Moreover, the best effort model we obtained from fitting the experiment data can estimate the time of 79% of the programmers with an error of 30% or less.",2009,0, 3207,Mixed DSP/FPGA implementation of an error-resilient image transmission system based on JPEG2000,"This paper describes a demonstrator of an error-resilient image communication system over wireless packet networks, based on the novel JPEG2000 standard. In particular, the decoder implementation is addressed, which is the most critical task in terms of complexity and power consumption, in view of use on a wireless portable terminal for cellular applications. The system implementation is based on a mixed DSP/FPGA architecture, which allows us to parallelize some computational tasks, thus leading to efficient system operation.",2001,0, 3208,Analysis on Fault of Yun-Guang hybrid AC/DC Power Transmission System,"This paper addresses the transient response to an instant ground fault in the Yun-Guang hybrid UHVDC/AC power transmission system. A detailed bipolar UHVDC electromagnetic transient model is established with the PSCAD/EMTDC software. The control strategies for the UHVDC system are investigated and the main factors of commutation failure are analyzed. Extensive simulation results demonstrate that a Yunnan rectifier AC system fault cannot cause UHVDC commutation failure. A metallic ground fault of the Guangdong inverter AC system will result in UHVDC commutation failure, but sufficient transition resistance can reduce its likelihood. When a one-pole ground fault of the UHVDC line occurs, the DC current of the faulted pole shows an obvious overshoot, while the non-faulted pole is only slightly affected. The UHVDC system returns to normal operation once the instant ground fault is cleared.",2008,0, 3209,Modeling and Simulation of Internal Faults in a Synchronous Machines in d-q Axis Models,"This paper aims at presenting a method for modeling and simulation of internal single phase-to-ground faults in the d-q axis model of the stator windings of large synchronous machines. The method of partitioning the stator windings is used in analyzing the internal faults.
This partitioning method is extended to determine the inductances of the affected windings under internal faults. In this paper, we model the machine in d-q axis models under specific conditions by using the obtained inductances and the Park transformation. The method is applied to analyze internal single phase-to-frame faults. Finally, the method is simulated to evaluate an internal fault in the stator windings.",2007,0, 3210,Error rate estimation for a flight application using the CEU fault injection approach,This paper aims at validating the efficiency of a fault injection approach to predict the error rate of applications devoted to operate in a radiation environment. Soft error injection experiments and radiation ground testing were performed on software modules using a digital board built on a digital signal processor which is included in a satellite instrument. The analysis of experimental results highlights the potential of the methodology used to predict the error rate of complex applications.,2002,0, 3211,Multiple Fault Tolerance Patterns for Systems with Arbitrary Deadline,"This paper aims to provide a fault tolerant scheduling algorithm that provides fault tolerance patterns for periodic tasks with arbitrary deadlines. The fault tolerance is achieved by checkpointing, where the number of checkpoints is decided on the basis of the proposed lemmas. These patterns provide minimum tolerance to all the releases and an improved tolerance to some releases depending on the availability of the slack time. They may be binary (i.e., either provide maximum or minimum tolerance to a release) or greedy (i.e., provide an improved tolerance whenever it is possible) in nature. Theorems have been proposed to ensure that the task set is schedulable with at least minimum fault tolerance. The effectiveness of the proposed patterns has been measured through extensive examples and simulations.",2007,0, 3212,Analysis of Applicability of Partial Runtime Reconfiguration in Fault Emulator in Xilinx FPGAs,"This paper analyses the applicability of partial runtime reconfiguration (PRR) in fault emulators based on FPGAs of the Xilinx Virtex family. PRR is used for loading emulator modules and for injecting faults into the emulated circuit. Since the time of reconfiguration may have a significant impact on its usability, this paper deals with this issue. The goal was to accelerate PRR and to evaluate the time needed for fault injection by PRR on these FPGAs. Experimental results show that we have achieved up to eight times faster reconfiguration compared to the original Xilinx method, and a fault injection time of about 77 μs per emulated fault.",2008,0, 3213,Influence of resonant coil tuning in the fault current magnitude,"This paper analyses the behavior of the resonant grounding method in MV electrical distribution systems, and its influence on the fault current magnitude. It is necessary to take into account several aspects such as the specific topology of the system. Thus, a comparative analysis of different systems with different characteristics is presented. The fault current magnitude is studied for different resonant coil tuning values, and the obtained results are compared with the solid grounding method.
Finally, using a real MV network, the effect of resonant coil tuning is analysed.",2004,0, 3214,Experimental field trials of a utility AMR power line communication system analyzing channel effects and error correction methods,This paper analyses the effects of noise in a real automated meter reading (AMR) environment where power line communication (PLC) is used for meter reading. The aim is to give an analysis of periodic noise properties through its effect on a B-FSK based PLC system and to compare different error correction techniques under different constraints of attenuation and noise produced by the real loads installed in the network. Tests have been carried out using two different channel frequencies in the CENELEC A band taking into account the frequency dependence of the line.,2007,0, 3215,Fault Diagnosis and Safety Strategy for Power Electricity-Driven Equipment on Vessels,"This paper analyses the practical faults existing in large power electricity-driven equipment of vessels, and discusses safety strategies and drive control system optimization. Reliability is designed into both hardware and software. Fault tolerance is used to good effect for the control of non-critical faults. Moreover, the embedded system is applied successfully in the vessel's control system, which also involves EMI prevention methods and fault diagnosis technology. Finally, the investigated technology has been applied successfully on real control systems of the vessels.",2007,0, 3216,Towards using particle filtering for increasing error correction accuracy in traffic surveillance applications,"This paper analyzes and compares two error correction algorithms in the context of traffic control applications. They are the Sequential Importance Sampling (SIS) and the Sequential Importance Resampling (SIR). The performances of both algorithms are analyzed and conclusions are drawn. Moreover, experimental results prove that the algorithms' parameters have a great impact on their performances.",2010,0, 3217,Fault tolerant steer-by-wire road wheel control system,"This paper describes a fault tolerant steer-by-wire road wheel control system. With a dual-motor, dual-microcontroller architecture, this system has the capability to tolerate a single point failure without degrading the control system performance. The arbitration bus, mechanical arrangement of motors, and the developed control algorithm allow the system to reconfigure itself automatically in the event of a single point fault, and assure a smooth reconfiguration procedure. The experimental result illustrates the effectiveness of the proposed system.",2005,0, 3218,Fuzzy Detection and Diagnosis of Fault Modes in a Voltage-Fed PWM Inverter Induction Motor,This paper describes a fuzzy-based method of fault detection and diagnosis in a PWM inverter feeding an induction motor. The proposed fuzzy approach is a sensor-based technique using the mains current measurement to detect intermittent loss of firing pulses in the inverter switches. A localization domain made with seven patterns is built with the stator Concordia current vector. One is dedicated to the healthy domain and the last six to each inverter switch. The fuzzy fault detector is designed on the basis of stator current analysis.
Simulated experimental results on 1.5-kW induction motor drives show the feasibility of the proposed approach.,2005,0, 3219,Artificial neural network based method for fault location in distribution systems,"This paper describes a method for fault location in power distribution systems based on artificial neural networks. The topological and electrical characteristics of typical distribution feeders are appropriately taken into account. The input-output relationship of the problem has been studied in detail and a modular architecture has been derived in order to improve the generalization capability of the employed neural networks and to reduce the magnitude of their training sets. Promising results were obtained for cases of substantially high fault resistance values, on which analytical methods tend to fail. An application tool is being implemented with a windows-like graphic interface, allowing the user to choose among different types of neural network.",2005,0, 3220,"Background Calibration of Gain Errors in A/D Converters","This paper describes a method of calibrating gain errors in each channel of Hadamard modulated parallel delta-sigma (ΔΣ) converters. The converters extend traditionally bandwidth-limited ΔΣ modulators to applications such as wireless communications that require high-resolution wide-bandwidth converters. The focus of this paper is on the development and convergence of an adaptive calibration algorithm. A hardware implementation that verifies this paper can be found in an earlier work. The calibration is done in real time and is accomplished by adding an additional channel that is linearly dependent on the channels that are used. By introducing this redundancy, an adaptive recursive least squares (RLS) algorithm is used to correct for gain errors within the channels. To demonstrate the calibration scheme, an eight-channel converter with second-order ΔΣ modulators and an oversampling ratio of 4 was simulated in MATLAB. Simulation results show a 40-dB reduction in unwanted distortion tones caused by 1% gain mismatches.",2010,0, 3221,Investigation of Internal Fault Modeling of Power former,"This paper describes a model of a 75 MVA and 150 kV synchronous machine (powerformer), which can be used to simulate internal fault waveforms for power system protection studies. The method employs a direct phase representation considering the cable capacitance. A method to calculate the inductance and its magnetic axis location of the faulty path is outlined. The machine equations are then solved using a suitable numerical technique. Comparisons are made between the simulated waveforms and recorded waveforms to verify the accuracy of the model.",2005,0, 3222,The use of fault-recorder data for diagnosing timing and other related faults in electricity transmission networks,"This paper describes a model-based diagnostic system for diagnosing faults in electrical transmission systems (Timely's off-line system). This diagnostic system uses data available from digital fault recorders which are collected after a network event (such as a short circuit) has occurred. The data are used to detect incipient faults in network equipment by comparing their operation against that predicted by an extended finite state automaton known as an augmented reactive model (ARM). Thus Timely's off-line system combines signal processing and model based diagnostic techniques to provide a practical model-based system that aids the analysis of the performance of protective equipment after a network event has occurred.
In particular, its use of data derived directly from fault recorder files (such as voltage and impedance magnitudes) means that the system can diagnose much more subtle faults (e.g. timing related faults).",2000,0, 3223,The computer-aided detection of inferior printing quality and errors,"This paper describes a new approach which detects inferior printing quality and errors by using a regular PC and document scanner. Our method relies on the comparison of an inspected document with its referential version. It firstly registers the images of the two documents, and then detects any discrepancies between the aligned pixels or regions. Iterations of the two-step registrations with interim interpolations introduce a sort of elastic image correction. We confirmed experimentally that the error detection rate for those documents with simpler structures, mostly pictures, was about 95%, whereas with more complex documents containing a lot of text this figure is about 90%.",2006,0, 3224,Fault location in medium voltage networks by the help of adapted triangulation principle,"This paper describes a new fault location method for medium voltage networks consisting of power distribution lines with tree topology. The method is based on the adaptation of the triangulation principle. It uses the detection of undervoltage wave propagation as a source of information for the fault location calculation. The paper gives a short description of the triangulation principle adaptation, the needed input data, as well as factors influencing calculation accuracy. The calculation algorithm is then described in more detail and its principle is explained through a simplified short example of a fault location calculation. The proposed method utilizes the Smart Grid concept, because its functionality depends on data exchange between measuring points placed in distribution power lines and the control centre, where the fault location calculation is done.",2010,0, 3225,Reduced resolution update video coding with interblock filtering of prediction error residuals,"This paper describes a new improvement upon a video coding tool known as reduced-resolution update (RRU). In RRU, as described in Annex Q of the H.263 standard, full resolution prediction error residuals are downsampled and coded at a reduced spatial resolution. During RRU coding, as previously proposed, each block of prediction errors is downsampled and interpolated without reference to any of its neighboring blocks. This can lead to severe blockiness in the decoded picture. In H.263 Annex Q RRU, the deblocking filter strength is increased to the maximum value to reduce this blockiness. In the new technique presented here, referred to as RRU+, the downsampling and interpolation filters use residuals from neighboring blocks to prevent the blockiness that results from RRU. Experimental results show that RRU+ provides better PSNR versus bitrate performance than RRU with maximum deblocking filter strength.",2005,0, 3226,Random Access Memory Faults Descriptions and Simulation using VHDL,"This paper describes a new method of describing random access memory faults using the VHDL language. A fault injection technique, which uses behavioral synthesis VHDL descriptions, is proposed. The injection can be easily automated for memory test algorithm verification using only the VHDL language and standard simulation software.
No other applications or simulation tools are needed.",2007,0, 3227,Utilizing data mining algorithms for identification and reconstruction of sensor faults: a Thermal Power Plant case study,"This paper describes a procedure for identifying sensor faults and reconstructing the erroneous measurements. Data mining algorithms are successfully applied for deriving models that estimate the value of one variable based on correlated others. The estimated values can then be used instead of the recorded ones of a measuring instrument with a false reading. The aim is to ensure the correctness of data entered into an optimization software application under development for the Thermal Power Plants of Western Macedonia, Greece.",2007,0, 3228,Calibration of industrial robots by magnifying errors on a distant plane,"This paper describes a robot calibration approach called the virtual closed kinematic chain (ViCKi) method. Traditionally, calibration requires the measurement of the position and orientation of the end effector, and measurement resolution limits the accuracy of the robot model. In ViCKi, we attach a laser to the end effector to create a virtual 7th link. The laser spot produced on a distant plane, the end of this virtual link, magnifies small changes at the end effector, resulting in a high resolution error measurement of the end effector. The accuracy of the robot after using the proposed calibration procedure is measured by aiming at an arbitrary fixed point and measuring the mean and standard deviation of the radius of spread of the projected points. The mean and standard deviation of the radius of spread were improved from 5.64 mm and 1.89 mm to 1.05 mm and 0.587 mm respectively. It is also shown in simulation that the method can be automated by a feedback system that can be implemented in real time.",2007,0, 3229,ONE: Adaptive One-to-N Error Recovery in Wireless Sensor Networks,"This paper describes an adaptive error recovery mechanism for sink-to-sensors reliable data dissemination in multi-hop wireless sensor networks. The proposed error recovery mechanism (ONE) uses a cross-layer variant of the negative acknowledgement (NACK) based Selective Repeat scheme for retransmissions. The mechanism is further extended to multi-hop topologies by adjusting the window parameters and applying an efficient buffer management strategy. We have shown that the adaptive behavior of the scheme enables optimization of system parameters that affect the total number of packets sent in a reliable session. The analysis of the proposed scheme is given and the effect of buffer size, density and loss ratio parameters on the overall performance is shown.",2007,0, 3230,Fault identification and reconfigurable control for bimodal piecewise affine systems,"This paper addresses the design of a fault detection and reconfigurable control structure for bimodal piecewise affine (PWA) systems. The PWA bimodal system will be designed to verify input-to-state stability (ISS) in closed loop. The proposed methodology is divided into two parts. First, a Luenberger-based observer structure is proposed to solve the fault detection and identification (FDI) problem for bimodal PWA systems. The unknown value of the fault parameter is estimated by an observer equation, which is derived using a Lyapunov-based methodology. Then, the ISS property is proved for the observer. Second, a fault-tolerant state feedback controller is synthesized for the PWA model.
The controller is designed to deal with partial loss of control authority identified by the observer. The ISS property is also proved for the controller. Finally, the ISS property for the interconnection of the controller and the observer-based fault identification mechanism is studied. The design procedure is formulated as a set of linear matrix inequalities (LMIs), which can be solved efficiently using available software packages.",2009,0, 3231,Detection of rotor faults in squirrel-cage induction machines at standstill for batch tests by means of the Vienna monitoring method,"This paper addresses the possibility of integrating an induction machine rotor fault monitoring technique into batch tests. Conventional monitoring techniques are designed for operating conditions near nominal speed and load. Generally, load torque is not available in batch tests as the test sample should not or cannot be clutched. However, at standstill, shaft torque and rotor currents are large enough to enable rotor monitoring. Therefore, short-circuit tests are applicable for diagnostic purposes. This application is demonstrated by means of the Vienna Monitoring Method, which is a model-based torque evaluation technique.",2002,0,3163 3232,A generic approach to structuring and implementing complex fault-tolerant software,"This paper addresses the practical implementation of means of tolerating residual software faults in complex software systems, especially concurrent and distributed ones. There are several inherent difficulties in implementing such fault-tolerant software systems, including the controlled use of extra redundancy and the mixture of different design concerns. In an attempt to minimise these difficulties, we present a generic implementation approach, composed of a multi-layered reference architecture, a configuration method and an architectural pattern. We evaluate our implementation approach using an industrial control application whose control software we equip with the ability to tolerate a variety of software faults. The preliminary evidence shows that our approach can simplify the implementation process, reduce repetitive development effort and provide high flexibility through a generic interface for a wide range of fault tolerance schemes.",2002,0, 3233,Analysis of interconnect crosstalk defect coverage of test sets,"This paper addresses the problem of evaluating the effectiveness of test sets to detect crosstalk defects in interconnects of deep sub-micron circuits. The fast and accurate estimation technique will enable: (a) evaluation of different existing tests, like functional, scan, logic BIST, and delay tests, for effective testing of crosstalk defects in interconnects, and (b) development of crosstalk tests if the existing tests are not sufficient, thereby minimizing the cost of interconnect testing. Based on a covering relationship we establish between transition tests in detecting crosstalk defects, we develop an abstract crosstalk fault model for circuit interconnects. Based on this fault model, and the covering relationship, we develop a fast and efficient method to estimate the fault coverage of any general test set. We also develop a simulation-based technique to calculate the probability of occurrence of the defects corresponding to each fault, which enables the fault coverage analysis technique to produce accurate estimates of the actual crosstalk defect coverage of a given test set.
The crosstalk test and fault properties, as well as the accuracy of the proposed crosstalk coverage analysis techniques, have been validated through extensive simulation experiments. The experiments also demonstrate that the proposed crosstalk techniques are orders of magnitude faster than the alternative method of SPICE-level simulation. Finally, we demonstrate the practical applicability of the proposed fault coverage analysis technique by using it to evaluate the crosstalk fault coverage of logic BIST tests for the buses in a DSP core.",2000,0, 3234,Simulated Fault Injection for Quantum Circuits Based on Simulator Commands,"This paper addresses the problem of evaluating the fault tolerance algorithms and methodologies (FTAMS) designed for quantum circuits, by making use of fault injection techniques. These techniques are inspired by their rich classical counterparts, and were adapted to the specific features of quantum computation, including the available error models. Because of their wide spectrum of application, including quantum circuit simulation, and their flexibility in circuit representation (i.e. both behavioral and structural descriptions of a circuit are possible), the hardware description languages (HDLs) appear as the best choice for our simulation experiments. The techniques employed for fault injection are based on simulator commands. The simulation results for the fault-affected circuit were compared to the outputs of a Gold circuit (a faultless circuit) in order to compute the quantum failure rate.",2007,0, 3235,A unit to provide a capability for programmable real-time adjustment of stimulus intensity to a hand-held stimulator for drop foot correction applications,"This paper describes the design of an intelligent drop foot stimulator unit for use in conjunction with a commercial NMES unit, the NT2000. The developed micro-controller unit interfaces to a PC, and a GUI allows the clinician to graphically specify the shape of the stimulation intensity envelope required for a subject undergoing drop foot correction. The developed unit is based on the ADuC812S micro-controller evaluation board from Analog Devices and uses two FSR-based foot-switches to control the application of stimulus. The unit has the ability to display diagnostic data to the clinician indicating how the stimulus intensity envelope is being delivered during walking, using a data capture capability. The developed system has a built-in algorithm to adjust the delivery of stimulus to reflect changes in gait cycle time.",2003,0, 3236,A Rule-Based Expert System for Automatic Error Interpretation Within Ultrasonic Flaw Model Simulators,"This paper describes the development of a novel rule-based expert system application that automatically runs a set of theoretical models used to simulate test procedures for ultrasonic testing methods in nondestructive evaluation and interprets their results. Theoretical modeling is an essential tool in verifying that the test procedures are fit for their intended purpose of defect detection. Four validated models are available to simulate theoretical ultrasonic flaw modeling scenarios. Under certain conditions, the models may break down and produce warning flags indicating that results may not be considered accurate. A considerable level of expertise in the theoretical background of the models is required to interpret these flags.
The expert system addresses any warning flags encountered by adjusting the original simulation parameters and rerunning the test in order to produce a valid simulation. Warning flags are addressed by a rule file, which contains formal rules developed from knowledge-elicitation sessions with suitably qualified engineers. The rule file represents the action an engineer would adopt to counter highlighted warning flags. A description of the system and rule-base design is given, as well as how the system and its performance were validated.",2006,0, 3237,The development of interface adapter in the digital circuit fault diagnosis system based on VXI,"This paper describes the development process of a general interface adapter in a VXI-based digital circuit fault diagnosis system. After giving an overall description of the fault diagnosis system and the VXI bus, the paper presents a method for solving the problems of load matching and interface matching, and realizes identification functions to read and write the interface circuit memory and to control chip selection. Identity is installed in the interface circuit to be selected by assigning an ID number and adding a memory to the interface circuit, which ensures accuracy and effectiveness. The paper also describes the method of self diagnosis in the interface circuit, which is the key to the whole fault diagnosis system.",2009,0, 3238,FPGA Implementation of Wideband IQ Imbalance Correction in OFDM Receivers,"This paper describes the implementation of a digital compensation scheme, called CSAD, for correcting the effects of wideband gain and phase imbalances in dual-branch OFDM receivers. The proposed scheme is implemented on a Xilinx Virtex-4 field programmable gate array (FPGA). The flexible architecture of the implementation makes it readily adaptable for different broadband applications, such as DVB-T/H, WLAN, and WiMAX. The proposed correction scheme is resilient against multipath fading and frequency offset. When applied to DVB-T, it is shown that an 11-bit arithmetic precision is sufficient to achieve the required BER of 2×10^-4 at an SNR of 16.5 dB. Using this bit-precision, the implementation consumes 1686 Virtex-4 slices, equivalent to about 42600 gates.",2008,0, 3239,Quantitative nondestructive evaluation of material defect using GP-based fuzzy inference system,"This paper deals with a quantitative nondestructive evaluation in eddy current testing for steam generator tubes of nuclear power plants by using genetic programming (GP) and a fuzzy inference system. Defects can be detected as a probe impedance trajectory by scanning a pancake type probe coil. An inference system is proposed for identifying the defect shape inside and/or outside tubes. GP is applied to extract and select effective features from a probe impedance trajectory. Using the extracted features, a fuzzy inference system detects the presence, position, and size of a defect in a test sample. The effectiveness of the proposed method is demonstrated through computer simulation studies.",2000,0, 3240,Performance of a self-commutated BTB HVDC link system under a single-line-to-ground fault condition,"This paper deals with a self-commutated back-to-back (BTB) high-voltage direct-current (HVDC) link system for the purpose of power flow control and/or frequency change in transmission systems. Each BTB unit consists of two sets of 16 three-phase voltage-source converters, and their AC terminals are connected in series with each other via 16 three-phase transformers.
Hence, the BTB unit uses a total of 192 switching devices capable of achieving gate commutation. This results in a great reduction of voltage and current harmonics without performing PWM control. Simulation results verify the validity of the proposed system configuration and control scheme not only under a normal operating condition but also under a single-line-to-ground fault condition.",2003,0, 3241,Optimization algorithm for fault location in transmission lines considering current transformers saturation,"This paper deals with fault location calculations that use voltages and currents during transient conditions and pre-fault values. The method uses transmission line measurements at both terminals, and its main contribution is the possibility of fault location even with the lack of some current measurements, as in the case of current transformer saturation or even when the data acquisition process fails. The equations are based on a two-port line representation that can be applied to transposed or untransposed lines, and equivalent impedances at the terminals are not needed. With the proposed formulation, the algorithm does not use simplifying assumptions to calculate the fault distance and the fault resistance simultaneously. For phase-to-ground faults, the possibility of lack of accurate current measurement in one CT is assumed. For double phase-to-ground faults and three phase faults, the lack of measurements in two or three CTs, respectively, is also assumed. The results, considering different line configurations and fault types, are presented, showing the accuracy and efficiency of the proposed method.",2005,0, 3242,Effects of fault arcs on insulating walls in electrical switchgear,This paper deals with test methods to evaluate the effects of high current fault arcs occurring in electrical switchgear compartments stressing insulating walls made of different materials. These partitions are stressed mechanically and thermally. To study these effects several test arrangements were developed and relevant experimental studies performed. The performance of the investigated materials is presented and discussed.,2000,0, 3243,Fuzzy ART neural network algorithm for classifying the power system faults,"This paper introduces an advanced pattern recognition algorithm for classifying transmission line faults, based on the combined use of a neural network and fuzzy logic. The approach utilizes a self-organized, supervised Adaptive Resonance Theory (ART) neural network with a fuzzy decision rule applied on the neural network outputs to improve algorithm selectivity for a variety of real events not necessarily anticipated during training. Tuning of input signal preprocessing steps and enhanced supervised learning are implemented, and their influence on the algorithm classification capability is investigated. Simulation results show improved algorithm recognition capabilities when compared to a previous version of the ART algorithm for each of the implemented scenarios.",2005,0, 3244,Fuzzy fault detection and diagnosis under severely noisy conditions using feature-based approaches,"This paper introduces a fault detection and diagnosis scheme which uses fuzzy reference models to describe the symptoms of both faulty and fault-free plant operation. Recently, some approaches have been combined with fuzzy logic to enhance its performance in particular applications such as fault detection and diagnosis. The reference models are generated from training data which are produced by computer simulation of a typical plant.
A fuzzy matching scheme compares the parameters of a fuzzy partial model, identified using on-line data collected from the real plant, with the parameters of the reference models. The reference models are also compared to each other to take account of the ambiguity which arises at some operating points when the symptoms of correct and faulty operations are similar. Independent component analysis (ICA) is used to extract the exact data from variables under severe noisy conditions. A fuzzy self-organizing feature map is applied to the data obtained from ICA for obtaining more accurate and precise features representing different conditions of the system. The results are then applied to the model-based fuzzy procedure for diagnosis purposes. Results are presented which demonstrate the applicability of the scheme.",2008,0, 3245,Road to the integrated protective relaying fault information system,"This paper introduces an integrated power system protective relaying fault information system. This system interconnects several vital substations to bring the protective relays and fault recorders under control, so that real-time supervision and integrated analysis can be carried out. It has been installed in the State Power Corporation of China and is now undergoing test runs. At the same time, in this paper, it is proposed to integrate the fault information system with other applications related to protective relaying to form a uniform platform for the operation and analysis of protective relaying. Further development of this system in the near future is also put forward.",2003,0, 3246,Optical Fault Masking Attacks,"This paper introduces some new types of optical fault attacks called fault masking attacks. These attacks are aimed at disrupting normal memory operation by preventing changes to the memory contents. The technique was demonstrated on an EEPROM and Flash memory inside PIC microcontrollers. Then it was improved with a backside approach and tested on PIC and MSP430 microcontrollers. These attacks can be used for the partial reverse engineering of semiconductor chips by spotting the areas of activity in reprogrammable non-volatile memory. This can assist in data analysis and other types of fault injection attacks later, thereby saving the time otherwise required for exhaustive search. Practical limits for optical fault masking attacks in terms of sample preparation, operating conditions and chip technology are discussed, together with possible countermeasures.",2010,0, 3247,Error bounds based stochastic approximations and simulations of hybrid dynamical systems,"This paper introduces, develops and discusses an integration-inspired methodology for the simulation and analysis of deterministic hybrid dynamical systems. When simulating hybrid systems, and thus unavoidably introducing some numerical error, a progressive tracking of this error can be exploited to discern the properties of the system, i.e., it can be used to introduce a stochastic approximation of the original hybrid system, the simulation of which would give a more complete representation of the possible trajectories of the system.
Moreover, the error can be controlled to check and even guarantee (in certain special cases) the robustness of simulated hybrid trajectories.",2006,0, 3248,Error Detection and Correction - A novel technique implementing Dual Rail Logic and Rollback recovery Architecture,"This paper investigates a computer architecture that provides fault detection in the execution elements, redundancy and error coding in memory storage elements, and incorporates software that allows rollback to a recovery boundary in the executing program when errors do occur. The architecture is intended for use in an environment where any errors encountered would be in the processor's current computational instructions. The use of dual-rail logic is proposed for the purpose of providing single-bit error detection in computational units. This approach will be a step towards creating a reliable computation environment in space-based applications where the environment is quite hostile to computing systems.",2008,0, 3249,Optimal Parameter Settings for Forward Error Correction Schemes for Multimedia Streaming over Wireless Networks,"This paper investigates algorithms to determine an optimal choice of the FEC parameters (n, k) to mitigate the effects of packet loss due to buffer overflow at a wireless base station on multimedia traffic. We develop an analytic model of the considered network scenario that takes into account the traffic arrival rates, the channel loss characteristics, the size of the buffer at the wireless access point, and the influence of the FEC parameters on the packet loss. Applying the theory of recurrent linear equations, we present a new approach to establish a closed form solution of the underlying Markov model for the buffer occupancy and verify the analytical results via simulations.",2005,0, 3250,An Energy-Efficient Fault-Tolerant Scheduling Scheme for Aperiodic Tasks in Embedded Real-Time Systems,"This paper investigates how to combine fault tolerance with power management in embedded real-time systems. Fault tolerance is achieved by checkpointing, and power management is carried out via dynamic voltage and frequency scaling (DVFS). Also, we take tasks' average switched capacitances into consideration. We present a fault-tolerant schedulability analysis for aperiodic tasks and derive the optimal number of checkpoints. The optimal number of checkpoints can help the task to guarantee the timing constraints and minimize the worst case execution time in the presence of faults. We then propose a scheduling scheme which carries out DVFS on the basis of the schedulability analysis for the problem of static task scheduling and voltage allocation. The problem is addressed and formulated as a linear programming (LP) problem. The simulation results show that an increase of the number of variable voltages can reduce energy consumption. Also, selecting suitable voltages for tasks can lead to drastic energy reduction even if the number of variable voltages is very small.",2009,0, 3251,Impact of Waveform Distorting Fault Current Limiters on Previously Installed Overcurrent Relays,"This paper investigates in detail the impacts of distorted current waveforms, produced by certain types of fault current limiters, on time-overcurrent protection relays. A thyristor-based solid-state fault current limiter is chosen as representative of such a device for a case study which investigates its effects on two coordinated protection relays.
A detailed software model of the current limiter has been developed and implemented on the real-time digital simulator platform, modeling a typical distribution system. Relay models are used to obtain initial results, which are later validated by an actual protective relay connected in a hardware-in-the-loop simulation setup. The results illustrate the increase of relay tripping times due to severe current limitation caused by the fixed firing angle control of the current limiter. It is revealed that different current measurement principles employed by the relays, such as fundamental, peak, or true rms, can lead to miscoordination due to the distorted fault current waveform. It is demonstrated that these undesirable effects can be mitigated by employing appropriate control strategies for the firing angle in the current limiter.",2008,0, 3252,Designing fault tolerant systems into SRAM-based FPGAs,"This paper discusses high level techniques for designing fault tolerant systems in SRAM-based FPGAs, without modification of the FPGA architecture. Triple Modular Redundancy (TMR) has been successfully applied in FPGAs to mitigate transient faults, which are likely to occur in space applications. However, TMR comes with high area and power dissipation penalties. The new technique proposed in this paper was specifically developed for FPGAs to cope with transient faults in the user combinational and sequential logic, while also reducing pin count, area and power dissipation. The methodology was validated by fault injection experiments in an emulation board. We present some fault coverage results and a comparison with the TMR approach.",2003,0, 3253,Scatter estimation and motion correction in PET,"This paper discusses how to estimate scatter when performing motion correction of PET data. It concentrates on head movement, but some results are valid for non-rigid motion as well. We show that rigid motion affects scattered events differently than unscattered events. This has consequences for the scatter estimation procedure. We show with simulations, phantom measurements and patient data that our proposed method obtains fully quantitative motion-corrected PET images.",2005,0, 3254,Fault analysis based on integration of digital relay and DFR data,"This paper discusses the integration of two existing automated analysis applications, DFR data analysis and digital relay data analysis, to achieve comprehensive fault analysis. As inputs to the integrated application, digital relay files and reports are introduced. The proposed strategy and implementation of integration are outlined. An example is used to demonstrate features of the integrated application developed so far.",2005,0, 3255,Development of an Online Real Time Web Accessible Low-Voltage Switchgear Arcing Fault Early Warning System,"This paper discusses the design and development aspects associated with the hardware and software of a Web accessible online real time low-voltage switchgear arcing fault early warning system based on NI C series I/O Modules and the NI cRIO-9004 controller. Based on the higher-order harmonic differential approach, the switchgear arcing fault early warning system was implemented. The NI cRIO-9004 controller, with IP addressable and remote access capabilities, provides options for comprehensive fault recording and diagnostics.
A dedicated test bench was established in the laboratory environment to validate the algorithm and test the performance of this newly developed system.",2007,0, 3256,Optimum Design of Circuit Fault Diagnosis Software Based on Fault Dictionary Method,"This paper discusses the design procedure of digital circuit fault diagnosis software using a data stream optimization method, based on analysis of the file formats of the LASAR circuit simulation result fault dictionary, pin connection table and node truth table.",2008,0, 3257,Evaluation of a software-based error detection technique by RT-level fault injection,"This paper discusses the efficiency of a software hardening technique when transient faults occur in the processor elements. Faults are injected in the RT-Level model of the processor, thus providing a more comprehensive view of the robustness compared with injections limited to the registers in the programmer model (e.g. injections based on an Instruction Set Simulator or using instructions of the processor to modify contents of registers).",2006,0, 3258,Simulation-based techniques for calculating fault resolution and false removal statistics,"This paper discusses the use of diagnostic simulations to generate the Fault Resolution metric for a system or equipment. Simulation-based calculations are free of some of the biases that inhere within traditional, math-based approaches. Moreover, a simulation-based evaluation of the replacement of failed items also provides a basis for the calculation of the effect of diagnostic ambiguity upon false removals, including the estimated costs that can be attributed to removals beyond those that would be expected during a product's intended lifetime.",2000,0, 3259,A hybrid system approach towards redundant fault-tolerant control systems,This paper discusses the verification problem of redundancy management systems (RMS) in fault-tolerant control by using a hybrid system approach: the discrete-event-system (DES) abstracting strategy. The qualitative fault-tolerant criteria can be formally verified if a DES model is abstracted from the continuous/discrete-time dynamical system in a consistent way. The acquisition of the DES model and verification of fault-tolerant criteria are illustrated based on a concrete RMS of a redundant flight control system.,2000,0, 3260,Fault Tolerance of Tornado Codes for Archival Storage,"This paper examines a class of low density parity check (LDPC) erasure codes called Tornado codes for applications in archival storage systems. The fault tolerance of Tornado code graphs is analyzed and it is shown that it is possible to identify and mitigate worst-case failure scenarios in small (96 node) graphs through the use of simulations to find and eliminate critical node sets that can cause Tornado codes to fail even when almost all blocks are present. The graph construction procedure resulting from the preceding analysis is then used to construct a 96-device Tornado code storage system with capacity overhead equivalent to RAID 10 that tolerates any 4 device failures. This system is demonstrated to be superior to other parity-based RAID systems.
Finally, it is described how a geographically distributed data stewarding system can be enhanced by using cooperatively selected Tornado code graphs to obtain fault tolerance exceeding that of its constituent storage sites or site replication strategies.",2006,0, 3261,Accurate template-based correction of brain MRI intensity distortion with application to dementia and aging,"This paper examines an alternative approach to separating magnetic resonance imaging (MRI) intensity inhomogeneity from underlying tissue-intensity structure using a direct template-based paradigm. This permits the explicit spatial modeling of subtle intensity variations present in normal anatomy which may confound common retrospective correction techniques using criteria derived from a global intensity model. A fine-scale entropy driven spatial normalisation procedure is employed to map intensity distorted MR images to a tissue reference template. This allows a direct estimation of the relative bias field between template and subject MR images, from the ratio of their low-pass filtered intensity values. A tissue template for an aging individual is constructed and used to correct distortion in a set of data acquired as part of a study on dementia. A careful validation based on manual segmentation and correction of nine datasets with a range of anatomies and distortion levels is carried out. This reveals a consistent improvement in the removal of global intensity variation in terms of the agreement with a global manual bias estimate, and in the reduction in the coefficient of intensity variation in manually delineated regions of white matter.",2004,0, 3262,An Optimized and Adaptive Error-Resilient Coding for H. 264 Video,"This paper proposes and implements a new optimized and adaptive error-resilient coding based on bit-error detection and directional intra-frame concealment (DIFC). The bit error detection is based on multiblock checksum, chain coverage and remainder coding. The DIFC takes advantage of flexible block sizes to deal with detailed movement areas and makes use of object edge detection to improve the accuracy of spatial interpolation. The results showed that the proposed directional intra-frame concealment has a better performance than the weighted pixel interpolation in H.264 software.",2006,0, 3263,A fault tolerant VoIP implementation based on open standards,"This paper highlights the design and implementation aspects of making voice over IP softswitches more dependable on commercial off-the-shelf telecommunication platforms. As a proof-of-concept, the open source Asterisk Private Branch Exchange application was made fault tolerant by using high availability middleware based on the Service Availability Forum's application interface specifications (AIS). The prototype was implemented on Intel x86 architecture blade servers running Carrier Grade Linux in an active/hot-standby configuration. Primarily, the Asterisk application was re-engineered and adapted to use AIS-defined interfaces and models. In the case of application, component or node failures, the middleware detects and triggers the application failover to the hot-standby node.
The Asterisk application on the hot-standby node detects that it is now the active instance, so it retrieves the checkpoint data and immediately continues to service both existing and new call-sessions, thus improving overall availability.",2006,0, 3264,The Diagnosis System of Mechanical Fault Based on LabVIEW Platform and Its Application,"This paper introduces the content and current status of research on mechanical fault diagnosis, details the construction method and overall structure of a LabVIEW-based mechanical fault diagnosis system, and carries out applied research on rotating machinery fault diagnosis using virtual instrument technology and the LabVIEW-based mechanical fault diagnosis system, according to the characteristics of rotating machinery.",2009,0, 3265,Anomaly detection: A robust approach to detection of unanticipated faults,"This paper introduces a methodology to detect, as early as possible and with a specified degree of confidence and prescribed false alarm rate, an anomaly or novelty (incipient failure) associated with critical components/subsystems of an engineered system that is configured to monitor continuously its health status. Innovative features of the enabling technologies include a Bayesian estimation framework, called particle filtering, that employs features or condition indicators derived from sensor data in combination with simple models of the system's degrading state to detect a deviation or discrepancy between a baseline (no-fault) distribution and its current counterpart. The scheme provides the probability of an abnormal condition and the probability of false alarm. The presence of an anomaly is confirmed for a given confidence level. The efficacy of the proposed anomaly detection architecture is illustrated with test data acquired from components typically found on aircraft and monitored via an appropriately instrumented test rig.",2008,0, 3266,Realization of ultra wideband bandpass filter using new type of split-ring Defected Ground Structure,"This paper introduces a new compact bandpass filter using a new circular split-ring type Defected Ground Structure (DGS). A split-ring DGS, when applied with a continuous microstrip line, yields a very sharp stopband lowpass filter. In the proposed structure, a T-shaped discontinuous microstrip line is included, which provides an ultra-wide passband. The passband of the proposed bandpass filter can be tuned by simply changing the radius of the inner ring of the split-ring DGS unit. The proposed structure provides an improved bandwidth when another split-ring DGS unit is included. A MoM based simulation is performed for the structures.",2010,0, 3267,Modeling and Control of Six-Phase Symmetrical Induction Machine Under Fault Condition Due to Open Phases,"This paper introduces a new fault-tolerant operation method for a symmetrical six-phase induction machine (6PIM) when one or several phases are lost. A general decoupled model of the induction machine with up to three open phases is given. This model illustrates the existence of a pulsating torque when phases are opened. Then, a new control method reducing the pulsating torque and the motor losses is proposed in order to improve the drive performances. The proposed method is compared to two other existing techniques.
The simulation and experimental results obtained on a dedicated test-rig confirm the validity and the efficiency of the proposed method for a fault-tolerant symmetrical 6PIM drive.",2008,0, 3268,Implement fault diagnosis high speed reasoning expert system with CPLD,"This paper introduces a new method to design expert system reasoning with a CPLD for fault diagnosis. Firstly, the reasoning process of a normal expert system is analyzed; secondly, the knowledge and experience of experts are transformed into binary or multi-valued reasoning using a fault tree, and the processes are realized with simple gate circuits; lastly, the new scheme is used in fault diagnosis for HV circuit breakers instead of primary reasoning with software. It is validated that the reasoning speed using this scheme is faster than traditional reasoning modes, and it is applicable to many expert systems based on a single-chip controller or DSP (digital signal processor).",2009,0, 3269,Investigation of the occurrence of: no-faults-found in electronic equipment,"This paper investigates the occurrence of the NFF (no fault found) failure in electronic equipment. The main types of NFF are outlined, and then the results of accessing the Loughborough University reliability database to investigate the root causes of NFF are discussed. The presence of complex components and connectors and the effect of equipment complexity and fraction-usage are examined as causes of NFF. The presence of connectors or complex components on a board, the fraction usage, and the complexity of equipment have little or no effect on the number of NFF events. No relationship was found between equipment usage or equipment complexity and NFF occurrence. The reason for such a high incidence of NFF in electronic equipment has not yet been identified, and further work into this phenomenon is required. Of course factors such as software faults and transient effects, amongst other causes, are likely to be involved",2001,0, 3270,Exploring autonomic options in an unified fault management architecture through reflex reactions via pulse monitoring,This paper investigates the potential of adding autonomic capabilities to the telecommunications fault management architecture and highlights the importance of a reflex-healing dual strategy to facilitate this advanced automation. The reflex reaction is facilitated through the concept of a pulse monitor - essentially the extension of the fault tolerant heartbeat monitor mechanism to incorporate reflex urgency levels and health check summary information.,2004,0, 3271,Optimal morphological filter design for fabric defect detection,"This paper investigates the problem of automated defect detection for textile fabrics and proposes a new optimal morphological filter design method for solving this problem. A Gabor wavelet network (GWN) is adopted as a major technique to extract the texture features of textile fabrics. An optimal morphological filter can be constructed based on the texture features extracted. In view of this optimal filter, a new semi-supervised segmentation algorithm is then proposed. The performance of the scheme is evaluated by using a variety of homogeneous textile images with different types of common defects. The test results exhibit accurate defect detection with low false alarm, thus confirming the robustness and effectiveness of the proposed scheme. In addition, it can be shown that the algorithm proposed in this paper is suitable for on-line applications.
Indeed, the proposed algorithm is a low cost PC based solution to the problem of defect detection for textile fabrics",2005,0, 3272,Real-time bounded-error pose estimation for road vehicles using vision,"This paper is about online, constant-time pose estimation for road vehicles. We exploit both the state of the art in vision based SLAM and the wide availability of overhead imagery of road networks. We show that by formulating the pose estimation problem in a relative sense, we can estimate the vehicle pose in real-time and bound its absolute error by using overhead image priors. We demonstrate our technique on data gathered from a stereo pair on a vehicle traveling at 40 kph through urban streets. Crucially, our method has no dependence on infrastructure, needs no workspace modification, is not dependent on GPS reception, requires only a single stereo pair and runs on an everyday laptop.",2010,0, 3273,Managing software quality with defects,This paper describes two common approaches to measuring and modeling software quality across the project life cycle so that it can be made visible to management. It discusses examples of their application in real industry settings. Both of the examples presented come from CMM Level 4 organizations.,2002,0, 3274,A real-time fault tolerant intra-body network,"This paper designs an intra-body network (IBN) of nodes, consisting of small sensors and processing elements (SPEs) placed at different locations within the body and a personal digital assistant placed externally but in close proximity to the body. The sensors measure specific physiological attributes such as electrophysiological and biochemical changes in the myocardium (action potentials of cells), glucose level, blood viscosity etc. and forward them to the processing element. Communication protocols for configuration and data access are proposed. The privacy of the IBN data, fault tolerance and real-time data acquisition are addressed.",2002,0, 3275,Model for JPALS/SRGPS Flexure and Attitude Error Allocation,"This paper develops a linearized parametric error model for assessing the effects of structural flexure and attitude uncertainties on the shipboard variant of the joint precision approach and landing system (JPALS). The outputs of the error model are position domain error bounds on the estimate of the ship reference point (SRP) coordinates. The model is parameterized in terms of GPS antenna installation geometry and the covariance matrices capturing the statistics of GPS measurement, ship structural flexure, and ship attitude estimation uncertainty. The performance of the model is evaluated via a set of simulation studies. It is shown that when the attitude errors are small and the flexure statistics well characterized, the error model provides an accurate and convenient way of mapping attitude and flexure uncertainties into SRP position uncertainties. Estimation of SRP position errors is a nonlinear problem and when ship attitude uncertainties are large, the nonlinearities can be important. However, the bounds calculated by the error model developed can be inflated to deal with these nonlinearities.
Finally, by analyzing data collected from ship trials, it is shown that perhaps a more challenging issue may be the potential for highly correlated, bias-like structural flexure uncertainties.",2010,0, 3276,A novel RF phase error Built-in-Self-Test for GSM,"This paper discusses a novel RF Built-in-Self-Test (RF-BiST) that aims to replace the traditionally expensive and time-consuming RF parametric phase error test on a GSM/EDGE Digital Radio Processor (DRP) radio transceiver. The verification of the RF BiST in a production environment and a comparison of the internal BiST vs. the current test are presented, which validates the RF BiST as an accepted test method for determining the phase error of GSM devices. The results illustrate that there are great opportunities for reducing test time and costs by moving to the internal digital method of BiST for testing RF/analog IC products.",2008,0, 3277,Evaluating the security threat of firewall data corruption caused by instruction transient errors,"This paper experimentally evaluates and models the error-caused security vulnerabilities and the resulting security violations of two Linux kernel firewalls: IPChains and Netfilter. There are two major aspects to this work: to conduct extensive error injection experiments on the Linux kernel and to quantify the possibility of error-caused security violations using a SAN (Stochastic Activity Network) model. The error injection experiments show that about 2% of errors injected into the firewall code segment cause security vulnerabilities. Two types of error-caused security vulnerabilities are distinguished: temporary, which disappear when the error disappears, and permanent, which persist even after the error is removed, as long as the system is not rebooted. Results from simulating the SAN model indicate that under an error rate of 0.1 error/day during a 1-year period in a networked system protected by 20 firewalls, 2 machines (on the average) will experience security violations. This indicates that error-caused security vulnerabilities can be a non-negligible source of security threats to a highly secure system.",2002,0, 3278,Effects of virtual development on product quality: exploring defect causes,"This paper explores the effects of virtual development on product quality, from the viewpoint of ""conformance to specifications"". Specifically, causes of defect injection and non- or late-detection are explored. Because of the practical difficulties of obtaining hard project-specific defect data, an approach was taken that relied upon accumulated expert knowledge. The accumulated expert knowledge based approach was found to be a practical alternative to an in-depth defect causal analysis on a per-project basis. Defect injection causes seem to be concentrated in the requirements specification phases. Defect dispersion is likely to increase, as requirements specifications are input for derived requirements specifications in multiple, related sub-projects. Similarly, a concentration of causes for the non- or late detection of defects was found in the Integration Test phases.
Virtual development increases the likelihood of defects in the end product because of the increased likelihood of defect dispersion, because of new virtual development related defect causes, and because causes already existing in co-located development are more likely to occur.",2003,0, 3279,Increasing data TLB resilience to transient errors,"This paper first demonstrates that a large fraction of data TLB entries are dead (i.e., not used again before being replaced) for many applications at any given time during execution. Based on this observation, it then proposes two alternate schemes that replicate actively accessed data TLB entries in these dead entries to increase the resilience of the TLB against transient errors.",2005,0, 3280,Non-Intrusive System-Level Fault Tolerance for an Electronic Throttle Controller,"This paper describes the methodology used to add nonintrusive system-level fault tolerance to an electronic throttle controller. The original model of the throttle controller is a hybrid system created at a major automotive company. We use Gurkh as a framework within which we translate the hybrid model into a set of timed automata and perform analysis using formal methods. The first step of the translation process is to transform the hybrid model and its static schedule into Gurkh's preemptive tasking paradigm. Using the UPPAAL tool, we then check the correctness of the resulting set of timed automata by formally verifying reachability and timing properties. We also propose a method for quantifying the quality of the translation by estimating the amount of jitter thereby introduced. The final step is the implementation of a Monitoring Chip based on the formal system model. The chip provides non-intrusive ""out-of-path"" and timing error detection which in turn allows for fault tolerance at a system level.",2006,0, 3281,COMTRADE-Based Fault Information System for TNB Substations,"This paper describes the process of extracting fault information from a common format for transient data exchange (COMTRADE) record. The COMTRADE format record was obtained from the distance relay recording device, provided by the Protection Department of Tenaga Nasional Berhad (TNB). A graphical user interface (GUI) simulation program using MATLAB was developed which implements the one-cycle cosine filter relaying algorithm to digitally filter out the unwanted signals at the system's fundamental frequency from the COMTRADE record. This work aims to help the TNB protection engineer to have a faster solution for parameter extraction from protective relay records obtained from substations in COMTRADE format.",2005,0, 3282,Analog IC fault diagnosis by means of supply current monitoring in test points selected evolutionarily,"This paper describes a technique dedicated to analog integrated circuit testing by means of supply current monitoring. The minimal set of test points that allows the highest possible fault coverage to be achieved is determined with the use of a genetic algorithm. Thanks to the proposed dynamic scheme of phenotype coding, the optimization process is more efficient than for a standard, static genotype structure realization.",2010,0, 3283,Error Compensation in a Horizontal Machining Center Using Artificial Intelligence Strategy,"This paper investigates the application of artificial neural networks to the problem of calculating error compensation values for axis motion on a horizontal machining center. Firstly, traditional compensation methods and artificial intelligence strategies are introduced.
Secondly, some multilayer neural network architectures are examined for applicability to the problem. Using standard modeling techniques, the motion error compensation model for a horizontal machining center is developed. This model is then parameterized by measurement of the parametric error functions using a laser interferometer, electronic levels and a precision square. Relevant programs are developed in the Matlab development environment, and modified termination algorithms are applied to reduce computation times. Finally, experiments are carried out to validate the approaches proposed in this paper. The results show that artificial neural networks are capable of learning the error map of a real machine, and that ANN-based compensation can significantly reduce motion errors.",2009,0, 3284,Availability requirement for a fault-management server in high-availability communication systems,"This paper investigates the availability requirement for the fault management server in high-availability communication systems. This study shows that the availability of the fault management server does not need to be 99.999% in order to guarantee a 99.999% system availability, as long as the fail-safe ratio (the probability that the failure of the fault management server does not bring down the system) and the fault coverage ratio (the probability that a failure in the system can be detected and recovered by the fault management server) are sufficiently high. Tradeoffs can be made among the availability of the fault management server, the fail-safe ratio, and the fault coverage ratio to optimize system availability. A cost-effective design for the fault management server is proposed.",2003,0, 3285,Error-injection-based failure characterization of the IEEE 1394 bus,This paper investigates the behavior of the IEEE 1394 bus in the presence of transient errors in the hardware layers of the protocol. Software-implemented error injection is used to introduce errors into the internals of the 1394 bus hardware chipset. Results from this study indicate that the IEEE 1394 bus protocol provides robust network communication in the presence of single-bit errors in the chipset.,2003,0, 3286,Fault tree based methodology to evaluate the reliability of converter transformers,"This paper is devoted to analyzing the fault tree for predicting the system performance and to analyzing the system reliability. Specifically, the analysis of the reliability of HVDC converter transformers is considered.",2008,0, 3287,Relative error measures for evaluation of estimation algorithms,"This paper is part of a series of publications that deal with the evaluation of estimation algorithms. This series introduces and justifies a variety of metrics useful for evaluating various aspects of the performance of an estimation algorithm, among other things. This paper focuses on relative error measures, i.e., those with respect to some references, including the magnitude of the quantity to be estimated, its prior mean, and/or the measurement error. It proposes several relative metrics that are particularly good for measuring different aspects of estimation performance. They often reveal the inherent error characteristics of an estimator better than widely used metrics of the absolute error.
The metrics are illustrated via an example of target localization with radar measurements.",2005,0, 3288,A large signal dynamic model for single-phase AC-to-DC converters with power factor correction,"This paper presents a model for average current control that can be applied to DC-to-DC converters and AC-to-DC power factor correction (PFC) circuits. The proposed DC-to-DC model consists of two parts: 1) an averaged DC-to-DC converter topology with all the switching elements replaced by dependent sources; 2) an average current control scheme with a pulse width modulation (PWM) model, which determines the duty cycles. Similarly, the AC-to-DC PFC model is obtained by combining an averaged boost converter model with the PFC control scheme using average current control. To verify the proposed model, simulated results were compared to experimental waveforms. The experimental results demonstrate that the model can correctly predict the steady-state and large signal dynamic behavior for average current controlled DC-to-DC and AC-to-DC PFC converters.",2004,0, 3289,An Effective Neural Approach for the Automatic Location of Stator Interturn Faults in Induction Motor,"This paper presents a neural approach to detect and locate automatically an interturn short-circuit fault in the stator windings of the induction machine. The fault detection and location are achieved by a feedforward multilayer-perceptron neural network (NN) trained by back propagation. The location process is based on monitoring the three-phase shifts between the line current and the phase voltage of the machine. The required data for training and testing the NN are experimentally generated from a three-phase induction motor with different interturn short-circuit faults. Simulation, as well as experimental, results are presented in this paper to demonstrate the effectiveness of the method used.",2008,0, 3290,A novel fault classification technique for high speed protective relaying of transmission lines,"This paper presents a new algorithm for faulty phase selection in transmission lines. The algorithm is based on a limited soft processing of the 3-phase and the zero-component current phasors. Current phasors are determined by using a pair of short length data window filters giving orthogonal outputs - phasor components. The soft processing means here that, generally, operations like min and max are used rather than multiplication or summation of two or more arguments.",2010,0, 3291,The application of neural networks and Clarke-Concordia transformation in fault location on distribution power systems,"This paper presents a new approach to fault location on distribution power lines. This approach uses an artificial neural network based learning algorithm and the Clarke-Concordia transformation. The α, β, 0 components of line currents resulting from the Clarke-Concordia transformation are used to detect all types of fault. The neural network is trained to map the nonlinear relationship existing in fault location equations. The proposed approach is able to identify and locate all different types of faults (single line to ground, double line to ground, line-to-line and three-phase short-circuit). This approach is subdivided into several main steps: data acquisition, corresponding to three-phase current signals; mathematical treatment by the Clarke-Concordia transformation; fault identification, obtained by the analysis of fault and pre-fault data; fault location by an artificial neural network based learning algorithm.
The fault position is given as the output of the neural network, whose input is the eigenvalue of the matrix representing the transformed line currents. Results are presented which show the effectiveness of the proposed algorithm for correct fault location on distribution power system networks.",2002,0, 3292,A New Approach for Fault Location Identification in Transmission system using Stability Analysis and SVMs,"This paper presents a new approach to the location of faults in the high voltage power transmission system using support vector machines (SVMs). A knowledge base is developed using transient stability studies for the apparent impedance swing trajectory in the R-X plane. The SVM technique is applied to identify the fault location in the system. Results presented for a sample 3-power-station, 9-bus system illustrate the implementation of the proposed method.",2006,0, 3293,Integrated line and busbar protection scheme based on wavelet analysis of fault generated transient current signals,"This paper presents a new directional relaying principle, which integrates the power line protection with the busbar protection together. In the presented technique, the relay installed at the busbar is responsible for the detection of fault direction with respect to the busbar. The transient signals captured from CTs related to the power lines connected to the busbar are processed with a wavelet technique to determine the fault direction. Simulation studies show that the scheme is insensitive to fault type, fault position, fault path resistance and fault inception angle. It has been shown that the presented scheme is suitable for all types of faults occurring at different sections on the power lines or the busbar and its associated equipment. In addition, the technique is fast in response, simple in principle and easy to implement.",2004,0, 3294,A Low Complexity Error Concealment Method for H.264 Video Coding Facilitating Hardware Realization,"This paper presents a new error concealment algorithm, which is suitable for the H.264/AVC coding standard and hardware implementation. This algorithm can be divided into two major categories. In the spatial domain, we use the reliable neighboring pixel values with an edge detection method to conceal all the lost pixels in the block. In the temporal domain, we propose a variable block size error concealment method. It consists of a block size determination step to determine the type of the lost macro-block and a motion vector recovery step to find the optimal motion vector from the current frame. In the block size determination step, we propose a criterion to determine the size type of the lost block from the neighboring macro-blocks. In the motion vector recovery step, the optimal motion vector for the lost block is chosen from the neighboring blocks in the current frame; it is the one that minimizes the side match distortion of the lost block. This proposed algorithm can not only determine the most correct mode for the lost block in an easy way, but also save much computation time for error concealment, which makes it more suitable for hardware implementation.",2009,0, 3295,SIED: software implemented error detection,"This paper presents a new error detection technique called software implemented error detection (SIED). The proposed method is based on a new control flow check scheme combined with software redundancy. The distinctive advantage of the SIED approach over other fault tolerance techniques is its fault coverage.
SIED is able to cope with faults affecting data and the program control flow. By applying the proposed approach to several benchmark programs, we evaluate the error detection capabilities by means of several fault injection experiments. Experimental results underline very good error detection capabilities for the hardened versions of the selected benchmark programs.",2003,0, 3296,New multi-ended fault location design for two- or three-terminal lines,"This paper presents a new fault location system for multiterminal power transmission lines. The algorithm used by this system is suitable for inclusion in a numerical protection relay, which communicates with remote relay(s) over a protective relaying channel. The data volume communicated between relays is sufficiently small to be easily transmitted using a digital protection channel. The new algorithm does not require data alignment, pre-fault load flow information, or phase selection information, and does not perform iterations to calculate the distance to the fault. Pre-fault load flow, zero-sequence mutual coupling, fault resistance, power system nonhomogeneity, and current infeeds from other line terminals or tapped loads do not affect the fault location accuracy",2001,0, 3297,Wavelet transform based accurate fault location and protection technique for cable circuits,This paper presents a new fault location technique applicable to power cable circuits. The proposed fault locator consists of a transient detection device installed at the busbar to detect the high frequency voltage transient signals. The initial and subsequent reflected travelling waves caused by the fault are recorded. The wavelet transform (WT) is used as a filter bank to decompose the fault signals. The travelling times of fault transients and consequently the location of the fault are calculated using the extracted signals. The simulation studies showed that the wavelet transform is very effective in extracting the transient components from the complicated fault signals.,2000,0, 3298,An adaptive distance relaying algorithm with a morphological fault detector embedded,"This paper presents an adaptive distance relaying algorithm (ADRA) for transmission line protection. In ADRA, a fault detector designed based on mathematical morphology (MM) is used to determine the occurrence of a fault. The Euclidean norm of the detector output is then calculated for fault phase selection and fault type classification. With respect to a specific type of fault scenario, an instantaneous circuit model applicable to a transient fault process is constructed to determine the position of the fault. The distance between the fault position and the relay is calculated by a differential equation of the instantaneous circuit model which is resolved in a recursive manner within each sampling interval. Due to the feature of recursive calculation, the protection zone of the relay varies from a small length to a large one, increasing with the sample window length. ADRA is evaluated on a transmission model based on PSCAD/EMTDC, under a variety of different fault distances, fault types, fault resistances and loading angles.
The simulation results show that in comparison with conventional DFT-based protection methods, by which the fault distance is calculated using phasor measurements of voltage and current signals in a fixed-length window, ADRA requires far fewer samples to achieve the same degree of accuracy in fault distance calculation, which enables much faster tripping, and its protection zone can be extended as more samples are used.",2009,0, 3299,An analysis of fault effects and propagations in ZPU: The world's smallest 32 bit CPU,"This paper presents an analysis of the effects and propagations of transient faults by simulation-based fault injection into the ZPU processor. This analysis is done by injecting 5800 transient faults into the main components of the ZPU processor, which is described in the VHDL language. The sensitivity level of various points of the ZPU processor such as the PC, SP, IR, controller, and ALU against fault manifestation is considered and evaluated. The behavior of the ZPU processor against injected faults is reported. Besides, it is shown that about 50.25% of faults are recovered during simulation time; 46.47% of faults are effective and the remaining 3.28% of faults are latent. Moreover, a comparison of the behavior of the ZPU processor in fault injection experiments against some common microprocessors is done. The results will be used in future research for proposing a fault-tolerant mechanism for the ZPU processor.",2010,0, 3300,Stochastic fault tree analysis with self-loop basic events,"This paper presents an analytical approach for performing fault tree analysis (FTA) with stochastic self-loop events. The proposed approach uses the flow-graph concept and the moment generating function (MGF) to develop a new stochastic FTA model for computing the probability, mean time to occurrence, and standard deviation of the time to occurrence of the top event. The application of the method is demonstrated by solving one example.",2005,0, 3301,Application of PIC microcontroller for online monitoring and fiber fault identification,"This paper presents an application of a PIC microcontroller for online monitoring and fiber fault identification in optical fiber communication. For this purpose the PIC microcontroller is used to control any optical switch connected to it. The proposed system architecture is discussed in detail, considering both the hardware and software elements involved. Finally, the benefits and limitations of such a system are considered, using some initial results obtained in an experimental implementation. The system is developed and tested successfully.",2009,0, 3302,Fault detection effectiveness of spathic test data,"This paper presents an approach for generating test data for unit-level, and possibly integration-level, testing based on sampling over intervals of the input probability distribution, i.e., one that has been divided or layered according to criteria. Our approach is termed ""spathic"" as it selects random values felt to be most likely or least likely to occur from a segmented input probability distribution. Also, it allows the layers to be further segmented if additional test data is required later in the test cycle. The spathic approach finds a middle ground between the more difficult to achieve adequacy criteria and random test data generation, and requires less effort on the part of the tester. It can be viewed as guided random testing, with the tester specifying some information about expected input.
The spathic test data generation approach can be used to augment ""intelligent"" manual unit-level testing. An initial case study suggests that spathic test sets detect more faults than random test data sets, and achieve higher levels of statement and branch coverage.",2002,0, 3303,A fault injection approach based on reflective programming,"This paper presents an approach for the validation of OO applications by software-implemented fault injection (SWIFI) that is based on computational reflection. The primary motivation for the use of reflection is that it allows a clear separation between functional aspects and those related to the instrumentation necessary for fault injection and monitoring. Besides separation of concerns, the use of OO programming and reflection is also intended to provide more flexibility, extensibility, portability and reusability for the instrumentation features. Ease of use, not only for the instrumentation programmer but also for the user, is also a goal. This paper presents FIRE, a prototyping tool that supports the proposed approach. FIRE was implemented using OpenC++1.2 and is aimed at validating C++ applications. Preliminary results on the use of FIRE are also presented",2000,0, 3304,Correction of perspective text image based on gradient method,"This paper presents an approach to address the problem of correcting perspective-distorted text images in which every vertex has the maximum or minimum vertical or horizontal coordinate in one quarter of the image. In order to recognize this kind of perspective-distorted text image by OCR (Optical Character Recognition), we correct it into a barely distorted regular text image based on a gradient method. Our approach first binarizes and denoises the distorted text image. Then a gradient method is applied to obtain the four vertices of the irregular convex quadrilateral formed by the perspective-distorted text block. Thus, the mapping parameters of the correction model can be obtained through the one-to-one relation between the irregular quadrilateral and the rectangle of the original text image. Finally, the perspective-distorted text image is corrected with the perspective transformation model. The experimental results show that our approach can correct this kind of perspective text image effectively.",2010,0, 3305,A BIST approach for very deep sub-micron (VDSM) defects,"This paper presents a BIST approach for the very deep submicron (VDSM) defects in an ASIC. As bridging or open defects are dominant in VDSM, efficient and accurate tests to detect them are now strongly required. We evaluated the BIST patterns for various criteria. These evaluations and additional real chip experiments have indicated that BIST has better detectability of defects than the conventional stored test",2000,0, 3306,A Byzantine Fault Tolerant Protocol for Composite Web Services,"This paper presents a Byzantine fault tolerant protocol for composite Web Services. We extend Castro and Liskov's well-known practical Byzantine fault tolerance method for the server-client model and Tadashi Araragi's method for the agent system to a method for composite Web Services. Different from other Byzantine tolerant methods for single web services, in composite Web Services we have to create replicas on both sides, while in the server-client model of Castro and Liskov's method, replicas are created only on the server side.
We present a modular implementation, and experimental results demonstrate only a moderate overhead due to replication.",2010,0, 3307,Case study of fault-tolerant architectures for 90nm CMOS cryptographic cores,"This paper presents a case study of different fault-tolerant architectures. The emphasis is on the silicon realization. A 128 bit AES cryptographic core has been designed and fabricated as a main topology on which the fault-tolerant architectures have been applied. One of the fault-tolerant architectures is a novel four-layer architecture exhibiting a large immunity to permanent as well as random failures. Characteristics of the averaging/thresholding layer are emphasized. Measurement results show the advantage of the four-layer architecture over triple modular redundancy in terms of reliability.",2007,0, 3308,Validation and evaluation of a software solution for fault tolerant distributed synchronization,"This paper presents a case study on the combined use of different tools and techniques for the validation and evaluation, from the early stages of the design, of a fault tolerant software mechanism named distributed synchronization. The mechanism has been specified using UML state charts and sequence diagrams. A number of stochastic well-formed nets (SWN) models have been derived from the specifications: they have been composed using the tool algebra, and the resulting model has been model-checked using the PROD tool for temporal logic properties, thanks to a GreatSPN-to-PROD translator. The quantitative analysis has been performed using the SWN solvers of the GreatSPN tool.",2002,0, 3309,A Formal Language Approach in Fault Location on Distribution Power Systems,"This paper presents a Clarke-Concordia transformation approach, combined with formal language theory, within fault diagnosis for distribution power lines. The α, β, 0 components of line currents, resulting from the Clarke-Concordia transformation, are used to detect and classify all types of fault. The formal language approach is used to map the non-linear relationship existing in fault location equations. The proposed approach is able to identify and to locate all different types of faults (single line to ground, double line to ground, line-to-line and three-phase short-circuit). This approach is subdivided into several main steps: first is the data acquisition of the corresponding current signals; then the mathematical computation of the Clarke-Concordia components; the fault identification is then performed by the analysis of fault and pre-fault data; finally the fault location estimation using a formal language theory based algorithm. Results are presented to reveal the effectiveness of the proposed algorithm for a correct fault location on distribution power lines.",2008,0, 3310,Combinational test generation for transition faults in acyclic sequential circuits,"This paper presents a combinational test generation method for transition faults in acyclic sequential circuits. In this method, test generation for transition faults in a given acyclic sequential circuit is performed on its extended time-expansion model. The model is composed of two copies of the time-expansion model of the given circuit, joined as two close sequences to generate two vectors for the transition faults with some restrictions.
Experimental results show that the method can achieve higher fault efficiency with lower test generation times than the conventional method.",2008,0, 3311,Performance analysis of three-phase induction motor drives under inverter fault conditions,"This paper presents a comparative analysis involving several fault tolerant operating strategies applied to three-phase induction motor drives. The paper explores the advantages and the inconveniences of using remedial operating strategies under different control techniques, such as field oriented control and direct torque control. Global results are presented concerning the analysis of some key parameters like efficiency and motor line current harmonic distortion, among others.",2003,0, 3312,A Control Strategy for Load Balancing and Power Factor Correction in Three-Phase Four-Wire Systems Using a Shunt Active Power Filter,"This paper presents a control scheme for load balancing and power factor correction in three-phase four-wire systems using a three-phase four-leg active power filter. It is assumed that the active power filter is connected to a load that can be unbalanced and may also draw harmonic currents. The four-leg active power filter is used for harmonic compensation, reactive power compensation, load balancing and neutral current compensation as well as improving the supply side power factor. The proposed control strategy is more effective and flexible and also has lower cost and higher efficiency. The topology and operation principle of the proposed control method are discussed in detail; finally, the feasibility of such a scheme is demonstrated through simulation studies.",2006,0, 3313,An industrial fault injection platform for soft-error dependability analysis and hardening of complex system-on-a-chip,"This paper presents a fault injection platform that is currently being developed and used to perform soft-error dependability analysis and hardening of complex SoCs. Primarily, it is oriented toward safety analysis, safety requirement conformance testing and hardening of complex SoCs. This platform makes use of clusters of hardware emulation resources available for SoC verification to achieve massive fault injection capabilities. It is able to distribute fault injection campaigns across multiple heterogeneous emulation platforms to achieve high fault coverage. It is able to virtually handle almost any circuit size and is designed to support all kinds of designs. We present the first results obtained on a small design, the Leon2 IP, on which exhaustive fault injection has been performed.",2009,0, 3314,Fault tolerant tracking control for nonlinear systems based on derivative estimation,"This paper presents a fault tolerant control (FTC) architecture for trajectory tracking, where generalized actuator faults are diagnosed online and compensated for by the control system. Employing least-squares derivative estimators for identifying the faults inserts time-delays into the control loop. When a reference trajectory is to be tracked, the closed FTC loop represents a nonlinear time-varying time-delay system. Linearizing its dynamics around the reference trajectory makes it possible to determine tolerable delay times, which allows admissible intervals for the values of the FTC parameters to be deduced.
The FTC scheme is illustrated by simulations of an underactuated satellite with faulty actuators.",2010,0, 3315,Decision tree-based methodology for high impedance fault detection,"This paper presents a high impedance fault (HIF) detection method based on decision trees (DTs). The features of HIF, which are the inputs of the DTs, are the well-known ones, including current [in root mean square (rms)], magnitudes of the second, third, and fifth harmonics, and the phase of the third harmonic. The only measurements needed in the proposed method are the current signals sampled at 1920 Hz. This will reduce the cost of hardware compared with methods that use high sampling rates. A new HIF model is also used. The current signal data are from simulations using the Electromagnetic Transients Program (EMTP). The trained DT algorithm can successfully distinguish the HIFs from most normal operations on simulation data, including switching loads, switching shunt capacitors, and load transformer inrush currents. Testing on experimental data is recommended for future work.",2004,0, 3316,FTS: a high-performance CORBA fault-tolerance service,"This paper presents a lightweight CORBA fault-tolerance service called FTS. The service is based on standard portable features of CORBA, and in that respect is fully CORBA compliant, but does not follow the FT-CORBA specifications in areas where the authors felt the latter interfered with their other design goals. The service features a unique architecture, based on a new type of object adaptor, called the Group Object Adaptor (GOA). The service is portable, interoperable, and aims for simplicity and high-performance request processing. Moreover, the service supports network partitions, some aspects of non-deterministic processing, and mixing ORBs of different vendors in the same fault-tolerance infrastructure. The paper also presents an analysis of the differences between the service design and FT-CORBA, with the hope of stimulating a discussion about future improvements to the FT-CORBA standard",2002,0, 3317,Fault Detection of Backlash Phenomenon in Mechatronic System with Parameter Uncertainties Using Bond Graph Approach,"This paper presents a method for fault detection and residual evaluation, applied on an electromechanical test bench system in the presence of the backlash phenomenon and parameter uncertainties. The analytical redundancy relations (ARRs) are generated using an uncertain bond graph model in linear fractional transformation (LFT) form. Through the presented method, one can distinguish between a backlash fault and system parameter variation. Simulation tests presented in this paper show the influence of dead zone magnitude and system parameter variations on residual evaluation",2006,0, 3318,Voltage sags pattern recognition technique for fault section identification in distribution networks,"This paper presents a method to identify a faulted section in a distribution network using voltage sag pattern characteristics. The method starts with fault analysis to establish an analytical voltage sag database. When a fault occurs, the voltage sag at the monitored node is compared with the established voltage sags in the database to find all the possible faulted sections. Finally, the method applies rank reasoning analysis to prioritize all the possible faulted sections. The method has been tested on an urban distribution network feeder. The results show that most faulted sections in the tested distribution network feeder can be located on the first attempt.
All remaining faulted sections can be found on the second attempt.",2009,0, 3319,A Fault-Location Method for Application With Current Differential Relays of Three-Terminal Lines,This paper presents a new method for locating faults on three-terminal power lines. Estimation of the distance to a fault and indication of the faulted section is performed using three-phase current from all three terminals and additionally three-phase voltage from the terminal at which the fault locator is installed. Such a set of synchronized measurements has been taken into consideration with the aim of developing a fault-location algorithm for applications with current differential relays of three-terminal lines. The delivered fault-location algorithm consists of three subroutines designated for locating faults within particular line sections and a procedure for indicating the faulted line section. Testing and evaluation of the algorithm has been performed with fault data obtained from versatile Alternate Transients Program-Electromagnetic Transients Program simulations. The sample results of the evaluation are reported and discussed.,2007,0, 3320,A model of asynchronous machines for stator fault detection and isolation,This paper presents a new model of asynchronous machines. This model allows one to take into account unbalanced stator situations which can be produced by stator faults like short circuits in windings. A mathematical transformation is defined and applied to the classical abc model equations. All parameters which affect this new model can be known online. This makes the model very useful for control algorithms and fault detection and isolation algorithms. The model is checked by comparing simulation data with actual data obtained from laboratory experiments.,2003,0, 3321,Fault-tolerant architecture for nanoelectronic digital logic,"This paper presents a new system architecture for implementing fault-tolerant information processing. The proposed structure relies on simple processing elements (PEs) arranged into a regular locally-interconnected array. Such an approach is a favorable way of implementing circuits with inherently unreliable nanodevices. Different network operations are achieved through binary programmable interconnections. The array can be divided into a set of software-defined segments for implementing functions with different levels of complexity and redundancy, assuring system versatility and flexibility. Examples of basic Boolean operations are presented. The error correction mechanism is explained and its impact on fault-tolerance is briefly analyzed.",2008,0, 3322,DSP-Based Adaptive High impedance Ground Fault Subtransmission Feeder Protection,"This paper presents a new, adaptive strategy for substation feeder protection implemented with state-of-the-art digital signal processing technology. This feeder protection adapts its trip settings to provide improved selectivity and speed of fault detection for varying power system configurations and loadings. This protection accurately senses and de-energizes downed conductors that often could not be detected by conventional non-adaptive feeder protections. Several people are killed every year by contact with live downed conductors. This adaptive protection is designed to significantly reduce this potentially fatal condition. This paper presents the design of this adaptive feeder protection. Several power system faults were simulated under varying load and system conditions.
The results are analyzed to verify the design, to determine the improvement over a conventional static protection, and to verify correct protection operation. Field tests were conducted at Runnymede Transformer Station in Ontario to validate this adaptive feeder protection",2006,0, 3323,Automatic road feature detection and correlation for the correction of consumer satellite navigation system mapping,"This paper presents a novel approach for the use of on-vehicle video analysis aimed at the verification and correction of consumer satellite navigation system mapping information. The proposed system automatically detects road and environment features (e.g. flyover bridges, road junctions, traffic lights and road signs) for real-time comparison to information available from corresponding navigation mapping. This can be used both for secondary feature-based localization of vehicle position and the verification of roadway mapping information against the true environment.",2010,0, 3324,Architectural level support for dynamic reconfiguration and fault tolerance in component-based distributed software,"This paper presents a novel architectural approach to support fault tolerance in component-based distributed software (CBDS) through dynamic reconfiguration. Using the graph-oriented programming (GOP) model, the software architecture of CBDS is specified by a logical graph which is reified as an explicit object distributed over the network. Dynamic reconfiguration is implemented by executing a set of operations defined over the graph. The approach supports fault tolerance by dynamically reconfiguring the CBDS upon detection of faults. We describe the basic model, the system architecture and its prototype implementation on top of CORBA.",2002,0, 3325,Wavelet-based one-terminal fault location algorithm for aged cables without using cable parameters applying fault clearing voltage transients,"This paper presents a novel fault location algorithm which, in spite of using only voltage samples taken from one terminal, is capable of calculating the precise fault location in aged power cables without any need for line parameters. Voltage transients generated after the circuit breaker opening action are sampled and, using wavelet and traveling wave theory, the first and second inceptions of the voltage traveling wave signals are detected. Then the wave speed is determined independently of cable parameters and finally the precise location of the fault is calculated. Because it uses one-terminal data, the algorithm does not need communication equipment or a global positioning system (GPS). The accuracy of the algorithm is not affected by aging, climate and temperature variations, which change the wave speed. In addition, fault resistance, fault inception angle and fault distance do not affect the accuracy of the algorithm. Extensive simulations carried out with the SimPowerSystems toolbox of MATLAB confirm the capability and high accuracy of the proposed algorithm in calculating the fault location under different fault and system conditions.",2010,0, 3326,Fault tolerant permanent magnet motor drives for electric vehicles,"This paper presents a novel five-phase fault tolerant interior permanent magnet (IPM) motor drive with higher performance and reliability for electric vehicle applications. A new machine design along with an efficient control strategy is developed for fault tolerant operation of the electric drive without severely compromising the drive performance.
Fault tolerance is achieved by employing a five-phase fractional-slot concentrated-winding IPM motor drive, with each phase electrically, magnetically, thermally and physically independent of all the others. The proposed electric drive system presents higher torque density, negligible cogging torque, and about ±0.5% torque ripple. Power converter requirements are discussed and control strategies to minimize the impact of a machine or converter fault are developed. Besides, all the requirements of fault tolerant operation, including high phase inductance and negligible mutual coupling between phases, are met. Analytical and finite element analyses and comparison case studies are presented.",2009,0, 3327,Fault Detection and Isolation of a Cryogenic Rocket Engine Combustion Chamber Using a Parity Space Approach,"This paper presents a parity space (PS) approach for fault detection and isolation (FDI) of a cryogenic rocket engine combustion chamber. Nominal and non-nominal simulation data for three engine set points have been provided. The PS approach uses three measurements to generate residuals and a spherical transformation to map these residuals to faults. The radial co-ordinate is used for fault detection whereas the azimuthal and polar co-ordinates are used for fault isolation. Evaluation criteria are missed alarms, false alarms, and fault detection time. Although the approach needs a different residual generation method to become more robust, it works very well when compared with the other FDI approaches.",2009,0, 3328,A fault location algorithm based on distributed neutral-to-ground current sensor measurements,"This paper presents a study of a multi-grounded three-phase four-wire distribution system and discusses a line-to-ground fault location algorithm based on real time monitoring of current levels in the neutral-to-ground path at several locations throughout the primary feeder. The measurement technique, combined with a triangulation algorithm based on an exponential curve fitting approach, suggests a fault location accuracy of approximately ±5 poles.",2010,0, 3329,Fault detection by means of wavelet transform in a PMSM under demagnetization,"This paper presents a study of a permanent magnet synchronous machine (PMSM) running under demagnetization. The demagnetization fault is analyzed through stator current analysis and processing tools at different speeds. Advanced signal analysis by means of wavelet transforms is presented. Simulations were carried out by using two dimensional (2-D) finite element analysis (FEA), and results were compared with experimental ones, showing the effectiveness of the proposed method for demagnetization fault detection and identification.",2007,0, 3330,Fault Detection in dynamic conditions by means of Discrete Wavelet Decomposition for PMSM running under Bearing Damage,This paper presents a study of the permanent magnet synchronous machine (PMSM) running under bearing damage. To carry out the study a two-dimensional (2-D) Finite Element Analysis (FEA) is used. Stator current induced harmonics for the fault condition were investigated. Advanced signal analysis by means of continuous and discrete wavelet transforms was performed. Simulations were carried out and compared with experiments.,2009,0, 3331,Attributes Reduction Applied to Leather Defects Classification,"This paper presents a study on attributes reduction, comparing five discriminant analysis techniques: FisherFace, CLDA, DLDA, YLDA and KLDA.
Attributes reduction has been applied to the problem of leather defect classification using four different classifiers: C4.5, kNN, Naive Bayes and Support Vector Machines. The results of several experiments on the performance of discriminant analysis applied to the problem of defect detection are reported.",2010,0, 3332,A 5 GHz class-AB power amplifier in 90 nm CMOS with digitally-assisted AM-PM correction,This paper presents a technique for correcting AM-PM distortion in power amplifiers. The technique uses a varactor as part of a tuned circuit to introduce a phase shift that counteracts the AM-PM distortion of the PA. The varactor is controlled by the amplitude of the IQ baseband data in a feedforward fashion. The technique has been demonstrated in a class-AB CMOS power amplifier designed for WLAN applications and implemented in a 90 nm CMOS process. The PA delivers 10.5 dBm of average power while transmitting at 54 Mbps (64 QAM). The proposed technique is shown to improve the efficiency of the PA by a factor of 2,2005,0, 3333,Design of the pole placement controller for D-STATCOM in mitigating three phase fault,"This paper presents the design of a pole placement controller for a D-STATCOM for the mitigation of three-phase faults. In the pole placement method the existing poles are shifted to new locations on the real-imaginary axes for a better response. This type of controller is able to control the amount of injected current or voltage or both from the D-STATCOM inverters to mitigate the three-phase fault by referring to the currents that are the input to the pole placement controller. The controller efficiency was tested at different percentages of voltage sag occurring during the three-phase fault. The controller and the D-STATCOM were designed using SIMULINK and the Power System Blockset toolbox available in MATLAB",2005,0, 3334,Distributed Fault Detection for Wireless Sensor Based on Weighted Average,"This paper presents a distributed fault detection algorithm for wireless sensor networks (WSNs) by exploring the weighted average value scheme. Considering the spatial correlations in WSNs, a faulty sensor can diagnose itself through comparing its own sensed data with the average of its neighbors' data. Simulation results show that sensor nodes with permanent faults are identified with high accuracy for a wide range of fault rates, and the false alarm rate is kept low for different levels of the sensor fault model.",2010,0, 3335,Serial wound starter motor faults diagnosis using artificial neural network,"This paper presents a fault diagnosis system for a serial wound starter motor based on a multilayer feed forward artificial neural network (ANN). The starter motor cranks the internal combustion (IC) engine and is of vital importance for all vehicles. That is because, if a starter motor fault occurs, the vehicle cannot be run. Especially in emergency vehicles (ambulance, fire engine, etc.) starter motor faults can cause critical failures. This ANN based fault detection system has been developed for implementation on emergency vehicles. Information on the starter motor current is acquired and then processed by a neural network fault diagnosis (NNFD) system. Multilayer feed forward neural network structures are used. The feed forward neural network is trained using the back propagation algorithm. The NNFD system is effective in detecting six types of starter motor faults.
The NNFD system is able to diagnose the most frequently occurring faults in starter motors.",2004,0, 3336,Color detection for vision machine defect inspection on electronic devices,"This paper presents a recent innovation introduced by Ismeca in our novel vision platform, NativeNET, for the detection of surface defects in electronic device packages due to discoloration, defects which could not be detected before. Up to now, mainly due to cost and processing-time constraints, most inspection vision systems have worked with monochrome images. Moreover, the semiconductor packaging industry needs new smart inspection able to detect more defects.",2010,0, 3337,"A sensorless speed estimator for application in a direct torque controller of an interior permanent magnet synchronous motor drive, incorporating compensation of offset error","This paper presents a sensorless technique for speed estimation of a direct torque controlled (DTC) interior permanent magnet (IPM) synchronous motor drive. The proposed method uses a new speed estimator based on the stator flux linkage vector and the torque angle. It is shown that including the torque angle in the estimation process results in a more accurate transient speed estimator than what is reported in the existing literature. The offset error causes the torque of a motor to oscillate, and these torque ripples deteriorate the performance of the speed estimator. In order to eliminate ripples in the estimated speed, compensation has been made using a programmable cascaded low-pass filter. Results from modeling and experiment verify the effectiveness of the sensorless speed estimator and offset error compensator.",2002,0, 3338,Efficiently utilization of redundancy backup server by forming dynamic clustering in Distributed Systems for tolerating faults,"This paper proposes a novel REDUNDANCY THROUGH BACKUP PROCESS with CLUSTERING scheme which aims to address the following concerns: a reliable, effective and highly maintainable backup can be achieved by constructing a multi-node clustering backbone with a small number of backup cluster-heads, providing redundancy through the backup process for FAULT TOLERANT DISTRIBUTED SYSTEMS. We can successfully reduce the overhead of backup servers and enhance the speed of backup delivery within an allowable time span compared to other redundancy-based fault tolerant distributed operating systems for both WAN and LAN networks. Next, we utilize the CLUSTER STRUCTURING MECHANISM, which determines the cluster size according to the leaving frequency of cluster members. As the number of leaving events is reduced, the cluster topology is more stable and the BACKUP may also be available in a short and manageable time span before the whole distributed system fails. We have done our simulation using MATLAB to show cluster formation with a back-up server election and have investigated the performance of our system for different scenarios.",2010,0, 3339,A novel algorithm of wide area backup protection based on fault component comparison,"This paper proposes a novel wide area back-up protection algorithm that uses synchronized information measured from different buses in a region. The amplitude of the voltage fault component from different buses is compared, and the bus with the maximum magnitude is selected. Hence, a suspected fault line set can be established according to the sub-graph and complete incidence matrix of the selected buses.
Then, the voltage fault component amplitude at each of the two sides of a suspected line can be calculated from the other side, and the computed amplitude can be compared with the measured one. The ratio is 1 when an external fault occurs and greater than 1 when an internal fault occurs; thus, the faulted line can finally be identified. The technique does not need high-precision synchronization of wide area information and can respond to different faults. The simulation of the 10-unit, 39-bus New England system using PSCAD/EMTDC illustrates the effectiveness of this method.",2010,0, 3340,Error detection by duplicated instructions in super-scalar processors,"This paper proposes a pure software technique, ""error detection by duplicated instructions"" (EDDI), for detecting errors during usual system operation. Compared to other error-detection techniques that use hardware redundancy, EDDI does not require any hardware modifications to add error detection capability to the original system. EDDI duplicates instructions during compilation and uses different registers and variables for the new instructions. Especially for faults in the code segment of memory, formulas are derived to estimate the error-detection coverage of EDDI using probabilistic methods. These formulas use statistics of the program, which are collected during compilation. EDDI was applied to eight benchmark programs and the error-detection coverage was estimated. Then, the estimates were verified by simulation, in which a fault injector forced a bit-flip in the code segment of executable machine codes. The simulation results validated the estimated fault coverage and show that approximately 1.5% of injected faults produced incorrect results in the eight benchmark programs with EDDI, while on average, 20% of injected faults produced undetected incorrect results in the programs without EDDI. Based on the theoretical estimates and actual fault-injection experiments, EDDI can provide over 98% fault coverage without any extra hardware for error detection. This pure software technique is especially useful when designers cannot change the hardware but need dependability in the computer system. To reduce the performance overhead, EDDI schedules the instructions that are added for detecting errors such that ""instruction-level parallelism"" (ILP) is maximized. Performance overhead can be reduced by increasing ILP within a single super-scalar processor. The execution time overhead in a 4-way super-scalar processor is less than the execution time overhead in processors that can issue two instructions in one cycle",2002,0, 3341,Systematic Lossy Error Protection of Video Signals,"This paper proposes a scheme called systematic lossy error protection (SLEP) for robust transmission of video signals over packet erasure channels. The systematic portion of the transmission consists of a conventionally encoded video bit stream which is transmitted without channel coding. An additional bit stream generated by Wyner-Ziv encoding of the video signal is transmitted for error resilience. In the event of packet loss, this supplementary bit stream is decoded and allows the recovery of a coarsely quantized video signal, which is displayed in lieu of the lost portions of the primary video signal. The quantization mismatch results in a small, controlled loss in picture quality, but a drastic reduction in picture quality is avoided.
An implementation of the SLEP system using the state-of-the-art H.264/AVC standard codec is described. Specifically, H.264/AVC redundant slices are used in conjunction with Reed-Solomon coding to generate the Wyner-Ziv bit stream. The received video quality is modeled as a function of the bit rates of the primary and redundant descriptions and the error resilience bit rate. The model is used to optimize the video quality delivered by SLEP. Via theoretical analysis and experimental simulation, it is shown that SLEP provides a flexible tradeoff between error resilience and decoded picture quality. By allowing the quality to degrade gracefully over a wider range of packet loss rates, SLEP mitigates the precipitous drop in picture quality suffered by traditional FEC-based systems.",2008,0, 3342,Research of information fusion technology for fault diagnosis based on selectivity criterion,"This paper proposes a selectivity criterion for choosing an appropriate information fusion method when fusing multiple arguments to acquire a logically consistent consensus output. The proposed theory can improve the recognition accuracy for the object based on the redundancy of information, and can resolve the problem of incomplete detected data. Firstly, the characteristics of D-S evidence theory are analyzed, indicating that D-S theory is suited to instances in which the combined arguments give similar estimates of the object. Secondly, a selectivity criterion for D-S evidence theory is deduced. The fusion result will be more rational after treatment by D-S theory if the relation of the combined arguments accords with the restriction of the criterion. Finally, an instance of fault diagnosis is analyzed. A logical diagnostic result can be acquired by use of the D-S evidence theory selectivity criterion. It is shown that the proposed theory is effective.",2010,0, 3343,A current error space vector based hysteresis controller with constant switching frequency and simple online boundary computation for VSI fed IM drive,"This paper proposes a simple current error space vector based hysteresis controller for two-level inverter fed Induction Motor (IM) drives. The proposed hysteresis controller retains all the advantages of conventional current error space vector based hysteresis controllers, such as fast dynamic response, simple implementation and adjacent voltage vector switching. The additional advantage of this proposed hysteresis controller is that it gives a phase voltage frequency spectrum exactly similar to that of a constant switching frequency space vector pulse width modulated (SVPWM) inverter. In this proposed hysteresis controller the boundary is computed online using estimated stator voltages along the alpha and beta axes, thus completely eliminating the look-up tables used for obtaining the parabolic hysteresis boundary proposed in earlier work. The estimation of the stator voltage is carried out using the current errors along the alpha and beta axes and the steady-state model of the induction motor. The proposed scheme is simple and capable of taking the inverter up to six-step mode operation, if demanded by the drive system. The proposed hysteresis controller based inverter fed drive scheme is simulated extensively using the SIMULINK toolbox of MATLAB for steady state and transient performance.
The experimental verification of the steady state performance of the proposed scheme is carried out on a 3.7 kW IM.",2010,0, 3344,Real-time audio error concealment method based on sinusoidal model,"This paper proposes a sinusoidal model based on modified sinusoidal analysis and synthesis for the error concealment of lost frames. The parameters of the sinusoidal model are adjusted according to the properties of the audio signals. It is applied to an MPEG-2 AAC decoder for decoder-based-only error concealment. An analysis and synthesis of the windowed frame preceding the lost one and the residual samples are used to synthesize a new frame. Then an overlap-add mechanism between the synthesized frame and the residual samples is applied to reconstruct the lost frame. This method can provide an average ODG score improvement of 1.3 at frame loss rates of up to 10% without increasing the algorithmic delay of the AAC decoding system. It is implemented on fixed-point DSPs, and the computational complexity is comparable to the optimized version of the decoder.",2008,0, 3345,Finding latent code errors via machine learning over program executions,"This paper proposes a technique for identifying program properties that indicate errors. The technique generates machine learning models of program properties known to result from errors, and applies these models to program properties of user-written code to classify and rank properties that may lead the user to errors. Given a set of properties produced by the program analysis, the technique selects a subset of properties that are most likely to reveal an error. An implementation, the fault invariant classifier, demonstrates the efficacy of the technique. The implementation uses dynamic invariant detection to generate program properties. It uses support vector machine and decision tree learning tools to classify those properties. In our experimental evaluation, the technique increases the relevance (the concentration of fault-revealing properties) by a factor of 50 on average for the C programs, and 4.8 for the Java programs. Preliminary experience suggests that most of the fault-revealing properties do lead a programmer to an error.",2004,0, 3346,Discovering the Fault Origin from Field Traces,"This paper proposes an automatic technique to reduce the time spent detecting the fault origin from field traces, by discovering hidden patterns in the traces.",2008,0, 3347,Fault tolerance of CNC software based on artificial neural network,"This paper proposes an efficient method for realizing the fault tolerance of CNC software by introducing artificial neural networks (ANN) to the design field of CNC software. In addition, functional aspects (velocity, acceleration, chord error, real time, prediction accuracy) from the experiment on a Non-Uniform Rational B-Spline (NURBS) interpolator based on ANN were evaluated in detail. Our experimental results showed that the NURBS interpolation based on ANN not only meets the requirements of these functional aspects, but can also realize fault tolerance technology, which may provide a new strategy for improving the reliability of CNC software.",2010,0, 3348,Automated fault location system for primary distribution networks,"This paper presents the development, simulation results, and field tests of an automated fault location system for primary distribution networks. This fault location system is able to identify the most probable fault locations in a fast and accurate way.
It is based on measurements provided by intelligent electronic devices (IEDs) with a built-in oscillography function, installed only at the substation level, and on a database that stores information about the network topology and its electrical parameters. Simulations evaluate the accuracy of the proposed system, and the experimental results come from a prototype installation.",2005,0, 3349,Automated fault diagnostics testing for automotive Electronic Control Units deploying Hardware-in-the-Loop,This paper presents the implementation of automated testing of diagnostic trouble codes (DTCs) based on the diagnostics specification for each Electronic Control Unit (ECU) in Jaguar Land Rover (JLR) carlines. The automated fault diagnostics testing is carried out using the Virtual Integration and Test Automation Laboratory (VITAL) and a Hardware-in-the-Loop (HIL) simulation environment at JLR. The methodology of automated diagnostics fault testing is described in detail and a framework is developed to support it. The effectiveness of this methodology is illustrated using real-time fault injection and verification.,2010,0, 3350,Fault-Tolerant Partitioning Scheduling Algorithms in Real-Time Multiprocessor Systems,"This paper presents the performance analysis of several well-known partitioning scheduling algorithms in real-time and fault-tolerant multiprocessor systems. Both static and dynamic scheduling algorithms are analyzed. The partitioning scheduling algorithms studied are heuristic algorithms formed by combining any of the bin-packing algorithms with any of the schedulability conditions for the rate-monotonic (RM) and earliest-deadline-first (EDF) policies. A tool is developed which enables the performance of the algorithms to be evaluated experimentally from the task graph. The results show that among the several partitioning algorithms evaluated, the RM-small-task (RMST) algorithm is the best static algorithm and the EDF-best-fit (EDF-BF) is the best dynamic algorithm for non-fault-tolerant systems. For fault-tolerant systems, which require about 49% more processors, the results show that the RM-first-fit decreasing utilization (RM-FFDU) is the best static algorithm and the EDF-BF is the best dynamic algorithm. To decrease the number of processors in fault-tolerant systems, the RMST is modified. The results show that the modified RMST decreases the number of required processors by between 7% and 78% in comparison with the original RMST, the RM-FFDU and other well-known static partitioning scheduling algorithms",2006,0, 3351,Development and Implementation of a Novel Fault Diagnostic and Protection Technique for IPM Motor Drives,"This paper presents the practical implementation of a novel fault diagnostic and protection scheme for interior permanent-magnet (IPM) synchronous motors using the wavelet packet transform (WPT) and an artificial neural network. In the proposed technique, the line currents of different faulted and normal conditions of the IPM motor are preprocessed by the WPT. The second-level WPT coefficients of the line currents are used as inputs to a three-layer feedforward neural network. The proposed protection technique is successfully simulated and experimentally tested on line-fed and inverter-fed IPM motors. The Texas Instruments 32-bit floating-point digital signal processor TMS320C31 is used for the real-time implementation of the proposed protection algorithm. The offline and online test results of both line-fed and inverter-fed IPM motors are given.
These test results show satisfactory performance of the proposed diagnostic and protection technique in terms of speed, accuracy, and reliability.",2009,0, 3352,Self-Adjusting Output Data Compression for RAM with Word Error Detection and Correction,"This paper presents the reliability improvement of a self-adjusting output data compression technique. Our theoretical investigation showed that compression of both address and data makes it possible to achieve single-word error detection and correction, and double-word error detection. A possible built-in self-test architecture is proposed.",2007,0, 3353,Thermal Behavior of a Three-Phase Induction Motor Fed by a Fault-Tolerant Voltage Source Inverter,"This paper presents the results of an investigation regarding the thermal behavior of a three-phase induction motor when supplied by a reconfigured three-phase voltage source inverter with fault-tolerant capabilities. For this purpose, a fault-tolerant operating strategy based on the connection of the faulty inverter leg to the dc link middle point was considered. The experimentally obtained results show that, as far as the motor thermal characteristics are concerned, it is not necessary to reinforce the motor insulation properties, since the motor is already prepared for such operation",2007,0, 3354,Detection of Demagnetization Faults in Permanent-Magnet Synchronous Motors Under Nonstationary Conditions,"This paper presents the results of our study of the permanent-magnet synchronous motor (PMSM) running under demagnetization. We examined the effect of demagnetization on the current spectrum of PMSMs with the aim of developing an effective condition-monitoring scheme. Harmonics of the stator currents induced by the fault conditions are examined. Simulation by means of a two-dimensional finite-element analysis (FEA) software package and experimental results are presented to substantiate the successful application of the proposed method over a wide range of motor operating conditions. Methods based on the continuous wavelet transform (CWT) and discrete wavelet transform (DWT) have been successfully applied to detect and to discriminate demagnetization faults in PMSM motors under nonstationary conditions. Additionally, a reduced set of easy-to-compute discriminating features for both the CWT and DWT methods has been defined. We have shown the effectiveness of the proposed method by means of experimental results.",2009,0, 3355,The Role of ATPG Fault Diagnostics in Driving Physical Analysis,This paper presents the role that ATPG fault diagnostic tools can play in driving physical device analysis. Barriers exist between the logical fault diagnostic domain and the physical device analysis domain. These barriers are being removed through the application of software tools that use design place-and-route data to bridge the gap between logical and physical domains. A case study in the use of ATPG fault diagnostic data to drive a photon prober is presented which illustrates the value of this methodology. A natural collaboration presents itself in the use of ATPG fault diagnostic output to direct physical probing efforts,2006,0, 3356,Six-phase induction machine drive model for fault-tolerant operation,"This paper presents the simulation and experimental results of a 42 V fault-tolerant six-phase induction machine (6PIM) with opened stator phases. The 6PIM has a symmetrical 60-degree displacement winding allowing fault-tolerant modes for an electric power steering (EPS) application.
A closed-loop voltage source inverter (VSI) using vector control feeds it. The 6PIM model is derived from the stator winding distribution on a conductor-by-conductor basis and the geometrical dimensions of the different slots. A case study by simulation and experimental tests on a six-phase squirrel-cage induction machine rated 90 W, 14 V, 50 Hz, 2 poles is described and analyzed to validate the system performance in faulted modes. This paper also presents the torque ripple estimation in degraded mode using the dq0 components.",2009,0, 3357,Pre-silicon verification of the Alpha 21364 microprocessor error handling system,"This paper presents the strategy used to verify the error logic in the Alpha 21364 microprocessor. Traditional pre-silicon strategies of focused testing or unit-level random testing yield limited results in finding complex bugs in the error handling logic of a microprocessor. This paper introduces a technique to simulate error conditions and their recovery in a global environment using random test stimulus closely approximating traffic found in a real system. A significant number of bugs were found using this technique. A majority of these bugs could not be uncovered using a simple random environment, or were counter-intuitive to focused test design.",2001,0, 3358,Scan-based SLAM with trajectory correction in underwater environments,"This paper presents an approach to perform Simultaneous Localization and Mapping (SLAM) in underwater environments using a Mechanically Scanned Imaging Sonar (MSIS), without relying on the existence of features in the environment. The proposal has to deal with the particularities of the MSIS in order to obtain range scans while correcting the motion-induced distortions. The SLAM algorithm manages the relative poses between the gathered scans, thus making it possible to correct the whole Autonomous Underwater Vehicle (AUV) trajectories involved in the loop closures. Additionally, the loop closures can be delayed if needed. The experiments are based on real data obtained by an AUV endowed with an MSIS, a Doppler Velocity Log (DVL) and a Motion Reference Unit (MRU). Also, GPS data is available as ground truth. The results show the quality of our approach by comparing it to GPS and to other previously existing algorithms.",2010,0, 3359,Efficient Spatial-Temporal Error Concealment Algorithm and Hardware Architecture Design for H.264/AVC,"This paper presents an efficient error concealment algorithm for video bitstreams damaged in transmission over error-prone channels. Moreover, the hardware architecture design and chip implementation of the proposed error concealment algorithm are also presented. For spatial error concealment, a mode selection algorithm considering the reuse of intra mode information embedded in the bitstream is developed for the adaptation of bilinear and directional interpolation. It suffers only a 0.08 dB video quality drop on average, but the speedup measured on a general purpose processor is up to 40 times compared with conventional methods. It is also more suitable for low-cost hardware design. For temporal error concealment, the decoded motion vectors of the neighboring blocks of the corrupted macroblock are reused to provide hints to estimate the motion vector of the corrupted macroblock.
Moreover, for real-time applications, a scheme for reusing data and computational results in motion vector estimation is proposed, and 96% of the computation and memory bandwidth can be saved compared with conventional methods, with a 0.18 dB quality drop on average. With the UMC 90 nm 1P9M process, the proposed error concealment engine can process HDTV1080P 30 frames per second video data, and the power consumption is 15.77 mW at a 125 MHz operating frequency.",2010,0, 3360,An efficient error concealment implementation for MPEG-4 video streams,"This paper presents an efficient error concealment implementation for damaged MPEG-4 video bitstreams. The chosen spatial and temporal concealment algorithms are designed to fit in real-time decoders and are advantageously combined in a hybrid spatial/temporal approach to provide visually more plausible pictures than basic concealment techniques. In addition, the encoder's impact on the visual quality of the reconstruction in the presence of channel errors is highlighted",2001,0, 3361,Efficient diagnosis of single/double bridging faults with Delta Iddq probabilistic signatures and Viterbi algorithm,"This paper presents an efficient method to diagnose single and double bridging faults. This method is based on Delta Iddq probabilistic signatures, as well as on the Viterbi algorithm, mainly used in telecommunications systems for error correction. The proposed method is a significant improvement over an existing one based on maximum likelihood estimation. The (adapted) Viterbi algorithm takes into account useful information not considered previously. Simulation and experimental results are presented to validate the approach",2000,0, 3362,Discrete wavelet and neural network for transmission line fault classification,This paper presents an efficient wavelet and neural network (WNN) based algorithm for fault classification in a single-circuit transmission line. The first-level discrete wavelet transform is applied to decompose the post-fault current signals of the transmission line into a series of coefficient components (approximation and detail). The values of the approximation coefficients obtained can accurately discriminate between all fault types in the transmission line and reduce the amount of data fed to the ANN. These coefficients are further used to train an Artificial Neural Network (ANN) fitting function. The trained FFNN distinguishes and classifies the fault type very accurately and very quickly. A typical generation system connected by a single-circuit transmission line to many load nodes at the receiving end was simulated using MATLAB simulation software; only one node was used for this work. The generated data were used by the MATLAB software to test the performance of the proposed technique. The simulation results obtained show that the new algorithm is reliable and accurate.,2010,0, 3363,Error Concealment in the Network Abstraction Layer for the Scalability Extension of H.264/AVC,"This paper presents an error concealment method applied to the network abstraction layer (NAL) for the scalability extension of H.264/AVC. The method detects the loss of NAL units for each group of pictures (GOP) and arranges a valid set of NAL units from the available NAL units. In case there is more than one possibility to arrange a valid set of NAL units, the method uses information about the motion vectors of the preceding pictures to decide whether the erroneous GOP will be shown with a higher frame rate or a higher spatial resolution.
This method works without parsing the NAL unit payload or using estimation and interpolation to create the lost pictures. Therefore, it requires very little computing time and power. Our error concealment method works under the condition that the NAL units of the key pictures, which are the prediction reference pictures for the other pictures in a GOP, are not lost. The proposed method is the first method suitable for real-time video streaming that provides drift-free error concealment at low computational cost.",2006,0, 3364,Analyzing fault tolerance on parallel genetic programming by means of dynamic-size populations,"This paper presents experimental research on the size of individuals when dynamic-size populations are employed with Genetic Programming (GP). By analyzing the evolution of the individuals' size, some ideas are presented for reducing the length of the best individual while also improving its quality. This research has been performed by studying both the individuals' size and the quality of solutions, considering fixed-size populations and also dynamic sizes by means of the plague operator. We propose an improvement to the plague operator, which we have called Random Plague, that positively affects the quality of solutions and also influences the individuals' size. The results are then considered from a quite different point of view: the presence of processor failures when parallel execution over distributed computing environments is employed. We show that the results strongly encourage the use of Parallel GP on non-fault-tolerant computing resources: experiments show the fault-tolerant nature of Parallel GP.",2007,0, 3365,Framework for testing the fault-tolerance of systems including OS and network aspects,"This paper presents an extensible framework for testing the behavior of networked machines running the Linux operating system in the presence of faults. The framework allows the injection of a variety of faults, such as faults in the computing core or peripheral devices of a machine or faults in the network connecting the machines. The system under test, as well as the fault load and workload run on this system, are configurable. The core of the framework is a User Mode Linux, which runs on top of a real-world Linux machine as a single process and simulates a single machine. A second process paired with each virtual machine is used for fault injection. The framework will be supported by utility programs to automate testing and evaluate test results",2001,0, 3366,Yield prediction using critical area analysis with inline defect data,"This paper presents methodologies for using critical area analysis with inline defect data to predict random defect limited yield and for partitioning yield losses by process step. The procedure involves (1) calculating critical areas, (2) modeling defect size distributions, and (3) combining critical area information and defect size distributions to estimate yield loss. We introduce a method to model the defect size distribution from inline defect data. We develop two yield prediction methods that can overcome the difficulties caused by the inaccuracies in determining defect size when using laser scatterometry detection. We compare the predicted yield with the actual yield and show that the two are in good agreement.",2002,0, 3367,New principles to detect faults during power swing,"This paper presents new principles to detect power line faults during power swings. Faults during power swings can be classified as nonearth faults and earth faults.
Since the difference in fault resistance between the two types of faults can be very large, different principles are applied to detect the different types of faults during power swings. This paper presents a new principle to detect nonearth faults during power swings by describing the waveform of the swing center's voltage (WSCV). It also presents a new principle to detect earth faults during power swings by introducing a new concept, the synthetic negative sequence vector (SNSV), which is derived by combining the three-phase negative sequence components. Studies show that WSCV and SNSV can be used as ideal fault indicators to detect faults during power swings, even for conditions where three-phase faults occur at swing centers or the earth fault resistance reaches 300 ohm at the remote end of the transmission line during power swings. The technique is also able to provide fast and reliable fault detection. Results from simulation studies using EMTP are presented in the paper",2001,0, 3368,Undergraduate research experiences in power system fault analysis using Matlab Simulation tools,"This paper presents the research experiences of an undergraduate student in the areas of power system fault analysis and fault location. The student obtains valuable skills by using Matlab and SimPowerSystems software, and an in-depth understanding of short-circuit analysis theories and procedures. It is shown that research involving formula derivation, system modeling, algorithm coding, and result visualization can greatly enhance the student's interest in the power system subject.",2010,0, 3369,Model-Based Fault Diagnosis in Electric Drive Inverters Using Artificial Neural Network,"This paper presents research in model-based fault diagnostics for power electronics inverter based induction motor drives. A normal model and various faulted models of the inverter-motor combination were developed, and voltage and current signals were generated from those models to train an artificial neural network for fault diagnosis. Instead of simple open-loop circuits, our research focuses on closed-loop circuits. Our simulation experiments show that this model-based fault diagnostic approach is effective in detecting single-switch open-circuit faults as well as post-short-circuit conditions occurring in power electronics inverter based electrical drives.",2007,0, 3370,Relating vehicle-level and network-level reliability through high-level fault injection,"This paper presents some recent results to improve the evaluation of reliability due to network connections in automotive environments. The evaluation is based on the adoption of performance thresholds aimed at detecting performance loss for particular types of fault occurrence. For this activity we modeled the vehicle network at the functional level and then integrated it into a complete vehicle model describing both electronic and mechanical behavior; in this way, it is possible to build an automated fault injection environment to forecast the effects of faults at the network level on the vehicle dynamics. Furthermore, an on-line threshold manager permits a single simulation to be interrupted when a fault activates an error threshold, reducing the overall campaign simulation time.",2003,0, 3371,Systems Reliability Analysis and Fault Diagnosis Based on Bayesian Networks,"This paper presents the application of Bayesian networks (BN) to the reliability analysis and fault diagnosis of systems. For systems, it is essential to do reliability analysis.
It is also necessary to do fault diagnosis when a system has failed, but it is better to do fault diagnosis before the system fails. Bayesian networks have many special characteristics, one of which is that they are parallel structures, so the solving time is much shorter than that of many common methods. Bayesian networks permit not only computing the reliability indices of a system but also presenting the effect of individual components or groups of components on the system reliability, distinguishing the weakest parts of the system. Two examples prove the validity and superiority of the method in the application of system reliability analysis and fault diagnosis.",2009,0, 3372,Identification of a chemical process for fault detection application,"This paper presents the application results concerning the fault detection of a dynamic process using linear system identification and model-based residual generation techniques. The first step of the considered approach consists of identifying different families of linear models for the monitored system in order to describe the dynamic behaviour of the considered process. The second step of the scheme requires the design of output estimators (e.g., dynamic observers or Kalman filters) which are used as residual generators. The proposed fault detection and system identification schemes have been tested on a chemical process in the presence of sensor, actuator and component faults and disturbances. The results and concluding remarks are finally reported.",2004,0, 3373,Bit-error probability analysis of compact antenna arrays with maximal-ratio combining in correlated Nakagami fading,"This paper presents the bit-error probability performance of a compact space diversity receiver for the reception of binary non-coherent and coherent phase-shift-keying (PSK) and frequency-shift-keying (FSK) signals through a correlated Nakagami (1960) fading channel. Analytical expressions for the average bit-error-rate (BER) are derived as a function of the covariance matrix of the multipath component signals at the antenna elements. Closed-form expressions for the spatial cross-correlation are obtained under a Gaussian angular power profile assumption. This analysis provides a direct means of computing the average BER as a function of the array characteristics and the operating environment for a wide range of multipath fading conditions. The effects of the antenna array spacing, mutual coupling, and the operating environment (the angular spread and mean angle-of-arrival) on the BER performance are illustrated",2000,0, 3374,Application of ANN to power system fault analysis,"This paper presents the development of a computer architecture using an Artificial Neural Network (ANN) as an approach for predicting faults in a large interconnected transmission system. Transmission line faults can be classified using the bus voltage and line fault current. Monitoring the performance of these two factors is very useful for power system protection devices. The ANN is designed to be incorporated with the matrix-based software tool MATLAB Version 6.0, which deals with fault diagnosis in power systems. In the MATLAB software modules, balanced and unbalanced faults can be simulated.
The data generated from this software are to be used as training and testing sets in the NeuralWare simulator.",2002,0, 3375,A fault locator for radial subtransmission and distribution lines,"This paper presents the design and development of a prototype fault locator that estimates the location of shunt faults on radial subtransmission and distribution lines. The fault location technique is based on the fundamental frequency components of the voltages and currents measured at the line terminal. Extensive sensitivity studies of the fault location technique have been done; some of the results are included in the paper. The hardware and software of the fault locator are described. The fault locator was tested in the laboratory using simulated fault data and a real-time playback simulator. Some results from the tests are presented. Results indicate that the fault location technique has acceptable accuracy, is robust and can be implemented with the available technology",2000,0, 3376,Design and implementation of a low cost fault tolerent three phase energy meter,"This paper presents the design and implementation of a microprocessor-based intelligent three-phase energy meter that measures energy accurately, even in the presence of harmonics. Moreover, if a fault occurs in any one of the potential transformer (PT) secondary circuits or in the potential coil of the measuring device, the unit can sense the fault and register the actual energy. The microprocessor provides a simple, accurate, reliable and economical solution to these problems. A framework of the hardware circuitry and an assembly language program for the evaluation of energy values are given, and the issues that must be addressed to execute the proposed algorithm on a microprocessor are discussed. Illustrative laboratory test results confirm the validity and accurate performance of the proposed method in real time.",2007,0, 3377,A high-speed Reed-Solomon decoder for correction of both errors and erasures,"This paper presents the design of an (n,k) Reed-Solomon decoder for both errors and erasures. The key-equation solver is based on Sarwate's reformulated inversionless Berlekamp-Massey algorithm. The decoder has been implemented on an FPGA and the maximum clock frequency can be 150 MHz for a (255, 239) code on a Xilinx Virtex-II device",2006,0, 3378,Fault detection and control in superheater using electronic simulator,This paper presents the design of an electronic simulator that reproduces the real behaviour of the superheater in normal condition and in fault condition. With this electronic simulator the possibility of controlling the superheater under the action of perturbations was studied together with the structure of the control system to be implemented on the real system.,2010,0, 3379,Design and test of HTS coils for resistive fault current limiter,"This paper presents the design, manufacturing and test results of HTS coils for single-phase resistive current limiters using two different Bi-2223/Ag tapes. Two coils were separately wound on cylindrical G-10 tubes with a helical path. One of the coils was wound in a bifilar conductor arrangement aiming at reducing the self-inductance. The maximum flux density is 40 mT on the HTS tape for both coils, and the calculated inductance is 5.76 μH for the non-inductively wound coil and 28.3 μH for the conventional one. The two coils were individually tested under DC and AC currents.
The DC power loss was calculated from the product of the measured voltage and current values, and the AC power loss was determined by liquid nitrogen mass boil-off measurement. The acquired AC voltage waveforms were analyzed through the separation of distinct contributions due to magnetic flux and due to a resistive component in phase with the current. An inflection point common to the DC and AC power loss curves was observed close to Ic and attributed to the AC transition current.",2004,0, 3380,Fault tolerant sliding modes control allocation with control surface priority weighting,"This paper presents an on-line sliding mode control allocation scheme for fault tolerant control with the capability of prioritizing certain control surfaces over others. This capability is achieved through the choice of novel fixed priority weights tuned during the design process. During faults and even total failures, the effectiveness level of the actuators is used by the control allocation scheme to redistribute the control signals to the remaining actuators. The simulation results show the capability of the controller in nominal conditions, and its capability to directly handle faults and even total failures with almost no degradation in performance, without reconfiguring the controller.",2010,0, 3381,Soft Error Resilient System Design through Error Correction,"This paper presents an overview of the built-in soft error resilience (BISER) technique for correcting soft errors in latches, flip-flops and combinational logic. The BISER technique enables more than an order of magnitude reduction in the chip-level soft error rate with minimal area impact, 6-10% chip-level power impact, and 1-5% performance impact (depending on whether combinational logic error correction is implemented or not). In comparison, several classical error-detection techniques introduce 40-100% power, performance and area overheads, and require significant efforts for designing and validating corresponding recovery mechanisms. Design trade-offs associated with the BISER technique and other existing soft error protection techniques are also analyzed",2006,0, 3382,Efficient error correction code configurations for quasi-nonvolatile data retention by DRAMs,"This paper presents analyses of various configurations of error correction codes for the purpose of reducing the parity area for quasi-nonvolatile data retention by DRAMs. By combining long and short error correction codes, we show that the parity area can be reduced to less than 1% of the total memory size, yet the system can offer reliability and adaptability comparable to an earlier design that requires 12.5% parity area. We also claim that even without using any area for parity data, the adaptive control of the DRAM refresh rate can reduce the total risk of data loss. Finally, we discuss an efficient decoder design for long RS codes",2000,0, 3383,Short circuit fault detection in PMSM by means of empirical mode decomposition (EMD) and wigner ville distribution (WVD),"This paper presents and analyzes short circuit failures of a Permanent Magnet Synchronous Motor (PMSM). The study includes steady-state conditions and speed transients, in both simulation and realistic experimental conditions. The stator current is analyzed by the empirical mode decomposition (EMD) method, which generates a collection of intrinsic mode functions (IMFs). Finally, the Hilbert-Huang transform (HHT) is used to compute the instantaneous frequencies resulting from the IMFs obtained from the stator currents.
Moreover, IMF 1 and IMF 2 have been analyzed by means of the Wigner-Ville distribution (WVD). Experimental laboratory results validate the analysis and demonstrate that this kind of time-frequency analysis can be applied to detect and identify short circuit failures in synchronous machines.",2008,0, 3384,Analysis of asymmetrical faults in power systems using dynamic phasors,"This paper presents the application of the dynamic phasor modeling technique to unbalanced polyphase power systems. The proposed technique is a polyphase generalization of the dynamic phasor approach, and it is applicable to nonlinear power system models. In a steady state, the dynamic phasors reduce to standard phasors from AC circuit theory. The technique produces results that are very close to those obtained from time-domain simulations. Simulations in terms of dynamic phasors typically allow larger integration steps than the standard time-domain formulation. We present simulations of unbalanced faults involving a three-phase synchronous generator connected to an infinite bus through a transmission line, and we demonstrate that models based on dynamic phasors provide very accurate descriptions of observed transients",2000,0, 3385,Fault tolerance enhancement using autonomy adaptation for autonomous mobile robots,"This paper presents how autonomy adaptation can be used to enhance the fault tolerance of autonomous mobile robots. To that end, we propose a global and structured methodology which allows specific fault tolerance mechanisms to be integrated into an adaptive control architecture. When a problem is detected, the autonomous behavior of the robot is automatically adapted to overcome it. The human operator can be inserted in the control loop, temporarily or permanently, to replace the damaged functionalities and to ensure the success of the mission. Experimental results on a mobile robot are presented to illustrate the autonomy adaptation.",2010,0, 3386,Decoupled-DFIG Fault Ride-Through Strategy for Enhanced Stability Performance During Grid Faults,"This paper proposes a decoupled fault ride-through strategy for a doubly fed induction generator (DFIG) to enhance network stability during grid disturbances. The decoupled operation proposes that a DFIG operates as an induction generator (IG) with the converter unit acting as a reactive power source during a fault condition. The transition power characteristics of the DFIG have been analyzed to derive the capability of the proposed strategy under various system conditions. The optimal crowbar resistance is obtained to exploit the maximum power capability from the DFIG during decoupled operation. Methods have been established to ensure proper coordination between the IG mode and reactive power compensation from the grid-side converter during decoupled operation. The viability and benefits of the proposed strategy are demonstrated using different test network structures and different wind penetration levels. Control performance has been benchmarked against existing grid code standards and commercial wind generator systems, based on the optimal network support required (i.e., voltage or frequency) by the system operator from a wind farm installed at a particular location.",2010,0, 3387,A hybrid algorithm of on-line fault diagnosis,"This paper proposes a hybrid algorithm for on-line fault diagnosis, which draws on breaker, protection and digital recorder data layer by layer for diagnosis.
Breaker tripping information is used first to determine the power failure area that contains the faulted components. Thereafter, protection action information is used to determine the exact faulted elements. Finally, digital recorder information is used to evaluate the protection actions. The analysis of protection information is the main task and characteristic of this paper. There are two steps in the processing of protection information. First, we find the intersection of the regions of the operated protections to locate the exact faulted elements with a rule-based reasoning technique. Then, for each element in the suspicious fault components list, the posterior fault probability is calculated as its fault credence degree, and the elements in the list can be ranked by fault credence degree. The theory of evidence is used for calculating the fault credence degree of the suspicious fault components. It is stricter and more reasonable than previous approaches. Simulation analyses of actual systems verify the effectiveness of the algorithm.",2004,0, 3388,"A secure modular exponential algorithm resists to power, timing, C safe error and M safe error attacks","This paper proposes a method for protecting public key schemes from timing and fault attacks. In general, this is accomplished by implementing critical operations using ""branch-less"" path routines. More particularly, the proposed method provides a modular exponentiation algorithm without any redundant computation and without any store operation with an uncertain destination, so that it can protect the secret key from many known attacks.",2005,0, 3389,Sliding mode estimation schemes for unstable systems subject to incipient sensor faults,"This paper proposes a new method for the analysis and design of sliding mode observers for fault reconstruction which is applicable to unstable systems. The proposed design addresses one of the restrictions in the existing literature (in which the open-loop system needs to be stable). Simulation results from an open-loop unstable system representing a fighter jet model show good fault estimation, even when simulated on the full nonlinear model.",2008,0, 3390,Multi-agent platform for fault tolerant control systems,"This paper proposes a new multi-agent platform for Fault Tolerant Control (FTC) systems. Several multi-agent platforms exist to deal with different problems, but none of them deals with fault tolerant control systems using the Matlab/Simulink environment, which is nowadays the scientific workbench for this kind of research. When dealing with large-scale complex networked control systems (NCS), designing FTC systems is a very difficult task due to the large number of sensors and actuators spatially distributed and network connected. To solve this issue, the FTC platform presented in this paper uses simple and verifiable principles coming mainly from a decentralized design based on causal-modelling partitioning of the NCS and distributed computing using the multi-agent systems paradigm, allowing the use of agents with well-established FTC methodologies or new ones developed taking into account the NCS specificities.",2007,0, 3391,Performance evaluation and failure rate prediction for the soft implemented error detection technique,"This paper presents two error models to evaluate the safety of a software error detection method. The proposed models analyze the impact on program overhead, in terms of memory code area and increased execution time, when the studied error detection technique is applied.
For faults affecting the processor's registers, analytic formulas are derived to estimate the failure rate before program execution. These formulas are based on probabilistic methods and use statistics of the program, which are collected during compilation. The studied error detection technique was applied to several benchmark programs, and then the program overhead and failure rate were estimated. Experimental results validate the estimated performance and show the effectiveness of the proposed evaluation formulas.",2004,0, 3392,ANN based power system fault classification,"This paper presents a wavelet-based backpropagation algorithm for classifying power system faults which is quite reliable, fast and computationally efficient. The proposed technique consists of a preprocessing unit based on the discrete wavelet transform (DWT) in combination with an artificial neural network (ANN). The DWT acts as an extractor of distinctive features from the input current signals, which are collected at the source end. The information is then fed into the ANN for classifying faults. It can be used on-line following the operation of digital relays or off-line using the data stored in the digital recording apparatus. Extensive simulation studies carried out using MATLAB show that the proposed algorithm provides an acceptable degree of accuracy in fault classification under different fault conditions.",2008,0, 3393,A fault-tolerant mechanism for reducing pollution attacks in peer-to-peer networks,"This paper proposes a fault-tolerant mechanism to suppress and eliminate file pollution attacks; the idea originates from the view that a P2P file sharing network is now a form of global file storage. The fault-tolerant mechanism harnesses an approach adapted from the EVENODD coding scheme used in disk storage technology, which effectively tolerates disk failures in RAID (redundant array of independent disks) systems. Fluid models are extended in this paper to analyze and evaluate the proposed anti-pollution mechanism. The accuracy (correctness) of the model has been proved by performing several simulation experiments, whose results show that the proposed mechanism can effectively suppress pollution and can shorten by 40~60% the polluted time to which a P2P file sharing network may be exposed.",2009,0, 3394,Compensation of analog rotor position errors due to nonideal sinusoidal encoder output signals,"This paper proposes a compensation algorithm for analog rotor position errors due to nonideal sinusoidal encoder signals. Position sensors such as resolvers or incremental encoders are being replaced by sinusoidal encoders that offer much higher resolution. However, periodic position errors are generated by the gain and offset errors between the sine and cosine signals. In this paper, the effects of the gain and offset errors are analyzed using the dq-axis components. The analog position errors can be easily corrected by an integral operation on the d-axis component. Therefore, the proposed algorithm does not need additional hardware or much computation time. The validity of the proposed algorithm is verified through experimental results.",2010,0, 3395,Inverter Nonlinearity Compensation in the Presence of Current Measurement Errors and Switching Device Parameter Uncertainties,"This paper proposes a compensation strategy for the unwanted disturbance voltage due to inverter nonlinearity. We employ an emerging learning technique called support vector regression (SVR).
SVR constructs a motor dynamic voltage model as a linear combination of the current samples in real time. The model exhibits fast observer dynamics and robustness to observation noise. The disturbance voltage is then estimated by subtracting the constructed voltage model from the current controller output. The proposed method compensates for all of the inverter nonlinearity factors at the same time. All the processes in estimating distortions are independent of the dead time and power device parameters. From the analysis of the effect of current measurement errors, we confirmed that the sampling error had little negative impact on the proposed estimation method. Experiments demonstrate the superiority of the proposed method in suppressing voltage distortions caused by inverter nonlinearity",2007,0,8693 3396,Network-enabled strategy and platform for corrective actions against power system instability,"This paper proposes a new strategy and platform for computer-network-assisted corrective actions against voltage instability in power distribution systems. The objective of the proposed strategy is to save the distribution system from imminent voltage collapse due to contingencies occurring in the power system by executing corrective operations. The operations include coordinated local/distributed generation, capacitor switching, load tap changing, load shedding, etc. This paper shows the use of state-of-the-art digital signal processing technology for the determination of correct stability controls, and the application of modern computer networking technology for monitoring power-grid operating states and transmitting data and stability control commands. This paper presents the design of algorithms for corrective actions against voltage instability in power distribution systems, and the hardware realization and real-time execution of corrective actions using a new two-layer architecture for network monitoring, data transmission, and control delivery.",2008,0, 3397,FEDC: Control Flow Error Detection and Correction for Embedded Systems without Program Interruption,"This paper proposes a new technique called CFEDC to detect and correct control flow errors (CFEs) without program interruption. The proposed technique is based on the modification of application software and minor changes in the underlying hardware. To demonstrate the effectiveness of CFEDC, it has been implemented on an OpenRISC 1200 as a case study. Analytical results for three workload programs show that this technique detects all CFEs and corrects on average about 81.6% of CFEs. These figures are achieved with zero error detection/correction latency. According to the experimental results, the overheads are generally low as compared to other techniques; the performance overhead and the memory overhead are on average 8.5% and 9.1%, respectively. The area overhead is about 4% and the power dissipation increases by about 1.5% on average.",2008,0, 3398,Feature set evaluation and fusion for motor fault diagnosis,"This paper proposes a novel approach to feature fusion in motor fault diagnosis with the main aim of improving the performance and reliability of clustering and identification of fault patterns. In addition, the significance of individual feature sets in specific fault scenarios, which is normally gained by engineers through experience, is investigated by using flexible non-Gaussian modeling of the historical data.
Furthermore, a comparison is made by applying individual feature sets and their fusion to the probabilistic distributions of trained models using a Maximum a Posteriori (MAP) approach. To carry out the task, current waveforms are collected non-invasively from three-phase DC motors. Waveforms are then compressed into time, frequency and wavelet feature sets to form the input to the clustering algorithm. The result demonstrates the suitability of specific feature sets in different motor modes and the efficiency of the fusion, which is carried out with a Winner-Takes-All (WTA) approach.",2010,0, 3399,Effective fault treatment for improving the dependability of COTS and legacy-based applications,"This paper proposes a novel methodology and an architectural framework for handling multiple classes of faults (namely, hardware-induced software errors in the application, process and/or host crashes or hangs, and errors in the persistent system stable storage) in a COTS and legacy-based application. The basic idea is to use an evidence-accruing fault tolerance manager to choose and carry out one of multiple fault recovery strategies, depending upon the perceived severity of the fault. The methodology and the framework have been applied to a case study system consisting of a legacy system, which makes use of a COTS DBMS for persistent storage facilities. A thorough performability analysis has also been conducted via combined use of direct measurements and analytical modeling. Experimental results demonstrate that effective fault treatment, consisting of careful diagnosis and damage assessment, plays a key role in leveraging the dependability of COTS and legacy-based applications",2004,0, 3400,Educational visualizations of syntax error recovery,"This work focuses on the visualization of syntax error recovery within the compilation process. We have observed that none of the existing tools, which display some views of the compilation, gives a solution to this aspect. We present an educational tool called VAST which makes it possible to visualize the different views of the compilation process. Besides, VAST can display different syntax error recovery strategies.",2010,0, 3401,A Comparison of Software Fault Imputation Procedures,"This work presents a detailed comparison of three imputation techniques, Bayesian multiple imputation, regression imputation and k nearest neighbor imputation, at various missingness levels. Starting with a complete real-world software measurement dataset called CCCS, missing values were injected into the dependent variable at four levels according to three different missingness mechanisms. The three imputation techniques are evaluated by comparing the imputed and actual values. Our analysis includes a three-way analysis of variance (ANOVA) model, which demonstrates that Bayesian multiple imputation obtains the best performance, followed closely by regression",2006,0, 3402,Design of a robust 8-bit microprocessor to soft errors,"This work presents a fault-tolerant version of the mass-produced 8-bit microprocessor M68HC11. It is able to tolerate single event transients (SETs) and single event upsets (SEUs). Based on triple modular redundancy (TMR) and time redundancy (TR) fault tolerance techniques, a protection scheme was implemented at a high level in the sensitive areas of the microprocessor by using only standard gates in order to save design time.
Furthermore, fault-tolerant IC design issues are discussed, and the area and performance results were compared with those of a non-protected microprocessor version",2006,0, 3403,A fault location algorithm for transmission lines with tapped leg-PMU based approach,"This work presents a new fault location algorithm for transmission lines with tapped legs. For transmission lines connected with a short interim tapped leg along the way, conventional multi-terminal fault location algorithms are inappropriate. The proposed fault location algorithm only uses the synchronized phasors measured at the two terminals of the original lines to calculate the fault location. Thus, existing two-terminal fault locators can still be used by adopting the proposed algorithm. With the proposed algorithm, the computation of the fault location does not need a model of the tapped leg. The proposed algorithm can be easily applied to any type of tapped leg, such as generators, loads or combined systems. EMTP simulation of a 100 km, 345 kV transmission line has been used to evaluate the performance of the proposed algorithm. The tested cases include various fault types, fault locations, fault resistances, fault inception angles, etc. The study also considers the effect of various types of tapped leg. Simulation results indicate that the proposed algorithm can achieve up to 99.95% accuracy for most tested cases.",2001,0, 3404,Noise Correction using Bayesian Multiple Imputation,"This work presents a novel procedure to detect and correct noise in a continuous dependent variable. The presence of noise in a dataset represents a significant challenge to data mining algorithms, as incorrect values in both the independent and dependent variables can severely corrupt the results of even robust learners. The problem of noise is especially severe when it is located in the dependent variable. In the worst case, severe noise in one of the independent variables can be handled by eliminating that attribute from the dataset, provided that the practitioner knows that noise is present. In the setting of supervised learning, the dependent variable is the most critical attribute in the dataset and therefore cannot be eliminated even if significant noise is present. Noise handling procedures in relation to the dependent variable are therefore absolutely critical to the success of a supervised learning initiative. In contrast to a binary dependent variable or class, noise in a continuous dependent variable presents many additional difficulties. Our procedure to detect and correct noise in a continuous dependent variable uses Bayesian multiple imputation, which was initially developed to combat the problem of missing data. Our case study considers a real-world software measurement dataset called CCCS, which has a numeric dependent variable with inherent noise. The results of our experiments are very encouraging and clearly demonstrate the utility of our procedure",2006,0, 3405,Next day load curve forecasting using hybrid correction method,"This work presents an approach to the short-term load forecasting problem, based on a hybrid correction method. Conventional artificial neural network based short-term load forecasting techniques have limitations, especially when weather changes are seasonal. Hence, we propose a load correction method using a fuzzy logic approach in which fuzzy logic, based on similar days, corrects the neural network output to obtain the next-day forecasted load.
A Euclidean norm with weighted factors is used for the selection of similar days. A load correction method for the generation of new similar days is also proposed. The neural network has the advantage of dealing with the nonlinear parts of the forecasted load curves, whereas the fuzzy rules are constructed based on expert knowledge. By combining these two methods, the test results show that the proposed forecasting method provides a considerable improvement in forecasting accuracy; in particular, it reduces the neural network forecast error over the test period by 23% through the application of the fuzzy logic correction. The suitability of the proposed approach is illustrated through an application to actual load data of the Okinawa Electric Power Company in Japan.",2005,0, 3406,"A complete scheme for fault detection, classification and location in transmission lines using neural networks","This work presents an artificial neural network (ANN) approach to simulate a complete scheme for distance protection of a transmission line. In order to perform this simulation, the distance protection task was subdivided into different neural network modules for fault detection, fault classification as well as fault location in different protection zones. A complete integration amongst these different modules is then essential for the correct behaviour of the proposed technique. The three-phase voltages and currents sampled at 1 kHz, in pre and post-fault conditions, were utilised as inputs for the proposed scheme. The Alternative Transients Program (ATP) software was used to generate data for a 400 kV transmission line in a faulted condition. The NeuralWorks software was used to set up the ANN topology, train it and obtain the weights as an output. The NeuralWorks software provides a flexible environment for research and the application of techniques involving ANNs. Moreover, the supervised backpropagation algorithm was utilised during the training process",2001,0, 3407,Model-based fault identification of power generation turbine engines using optimal pursuit,"This work presents the results of applying an advanced fault detection and isolation technique to two ground-based power generation turbine engines. The advanced technique uses a physics-based model with an optimal pursuit solution method. The technique automatically finds the best fault scenario to match measured (or test) data. The best fault scenario provides information about parameter deviations (i.e., fault detection) and fault-contributing components (i.e., isolation). Unlike some other techniques, this technique is independent of the thresholds used in fault detection. The technique is effective even under the condition where data are scarce and widely spaced in time. Operational data from two Ishikawajima-Harima Heavy Industries (IHI) IM270 engines were made available to Scientific Monitoring, Inc. (SMI). The purpose of the data is to apply SMI's model-based fault identification expertise to industrial power generation turbines. The investigation was conducted with extremely limited knowledge of the engines and their maintenance histories. The measured variables, provided in the data set, only include speed, air inlet temperature, power, exhaust temperature, fuel flow, and compressor discharge pressure. With these limited engine data, we modified an existing, generic model for turbine engines and developed the optimal pursuit method to ""hunt"" for suspicious fault states.
The detection results were confirmed by the engine manufacturer IHI. The detection accuracy of this technique can be improved with additional data and knowledge about the IHI-IM270 engine. This technique can be readily generalized to fault/state detection of other types of equipment or assets in all industries.",2004,0, 3408,Error analysis in strapdown INS for aircraft assembly lines,"This work proposes a methodology for assessing the use of commercial inertial measurement units in manual part alignment operations with high precision requirements, found in aircraft assembly lines. The underlying application is a situation where two separate parts must be mounted at different locations within a facility, guaranteeing a precise angular alignment between them. If, for construction reasons, there is no direct sight between both elements, optical methods must be ruled out and the problem can be tackled by means of inclinometers. However, their precision is often limited compared to standard requirements in aeronautics applications. In this paper we propose a method based on two inertial navigation systems, and provide the tools necessary for determining the precision requirements for the gyroscopes and IMUs involved, based on error dynamics analysis and simulation.",2008,0, 3409,A Flexible Software-Based Framework for Online Detection of Hardware Defects,"This work proposes a new, software-based, defect detection and diagnosis technique. We introduce a novel set of instructions, called access-control extensions (ACE), that can access and control the microprocessor's internal state. Special firmware periodically suspends microprocessor execution and uses the ACE instructions to run directed tests on the hardware. When a hardware defect is present, these tests can diagnose and locate it, and then activate system repair through resource reconfiguration. The software nature of our framework makes it flexible: testing techniques can be modified/upgraded in the field to trade-off performance with reliability without requiring any change to the hardware. We describe and evaluate different execution models for using the ACE framework. We also describe how the proposed ACE framework can be extended and utilized to improve the quality of post-silicon debugging and manufacturing testing of modern processors. We evaluated our technique on a commercial chip-multiprocessor based on Sun's Niagara and found that it can provide very high coverage, with 99.22 percent of all silicon defects detected. Moreover, our results show that the average performance overhead of software-based testing is only 5.5 percent. Based on a detailed register transfer level (RTL) implementation of our technique, we find its area and power consumption overheads to be modest, with a 5.8 percent increase in total chip area and a 4 percent increase in the chip's overall power consumption.",2009,0, 3410,Numerical method for pre-arcing times: Application in HBC fuses with heavy fault-currents,"This work deals with the calculation of pre-arcing time predictions for fuse links used in industrial protection circuits in the case of heavy fault-currents. An enthalpy method to solve the heat-transfer equation, including two phase changes, is presented. The mathematical model couples thermal and electrical equations based on the principle of energy conservation and Ohm's law, respectively.
In order to determine the current density and temperature evolution in the fuses, three typical fuse links have been chosen for the calculations, with circular, rectangular and trapezoidal reduced sections at their centre. The silver physical properties, mathematical equations and the numerical method are reported. Calculation results show that for the fuse link with the rectangular reduced section a greater heat transfer took place compared to the other ones.",2007,0, 3411,Empirically determined response matrices for on-line orbit and energy correction at Thomas Jefferson National Accelerator Facility,"Thomas Jefferson National Accelerator Facility (Jefferson Lab) uses digital feedback loops (less than 1 Hz update rate) to correct drifts in the Continuous Electron Beam Accelerator Facility's (CEBAF) electron beam orbit and energy. Previous incarnations of these loops used response matrices that were computed by a numerical model of the machine. Jefferson Lab is transitioning this feedback system to use empirically determined response matrices whereby the software introduces small orbit or energy deviations using the loop's actuators and measures the system response with the loop's sensors. This method is in routine use for orbit correction. This paper describes the problem that both software lock systems are designed to address. A brief discussion of the design and operational experience of the numerical model based orbit and energy lock software system follows. Next, the empirical orbit lock system is described. Finally, future plans for an empirical energy lock system are presented",2001,0, 3412,Toward Fault-Tolerant Atomic Data Access in Mutable Distributed Hash Tables,"Though peer-to-peer networks based on distributed hash tables have scalability and availability features, most distributed hash tables lack support for atomic data access, an ability that is important for building data structures on distributed systems. This paper gives the design and implementation of an atomically accessible mutable distributed hash table - FamDHT. FamDHT uses the Paxos distributed consensus protocol to ensure that simultaneous accesses to data are executed under a total-order agreement. FamDHT is built on top of the Bamboo DHT, a robust, open-source DHT implementation. At the end of the paper, an evaluation is given",2006,0, 3413,Recent case studies in bearing fault detection and prognosis,"This paper updates current efforts by the authors to develop fully-automated, online incipient fault detection and prognosis algorithms for drivetrain and engine bearings. The authors have developed and evolved ImpactEnergy, a feature extraction and analysis driven system that integrates high frequency vibration/acoustic emission data, collected using accelerometers and other sensors such as a laser interferometer, to assess the health of bearings and gearboxes in turbine engines. ImpactEnergy combines advanced diagnostic features derived from waveform analysis, high-frequency enveloping, and more traditional time domain processing like root mean square (RMS) and kurtosis with classification techniques to provide bearing health information. The adaptable algorithm suite has been applied across numerous air vehicle relevant programs for the Air Force, Navy, Army, and DARPA. The techniques presented in this paper are tested and validated in a laboratory environment by monitoring multiple bearings on test rigs that replicate the operational loads of a turbomachinery environment.
The capability of the software on full-scale test rigs at major OEM (original equipment manufacturer) locations will be shown with specific data results. The team will review developments across these multiple programs and discuss specific implementation efforts to transition to the fleet in a variety of manned and unmanned platforms",2006,0, 3414,Schematic-Based Fault Dictionary: A Case Study,"This paper uses the opamp1 benchmark circuit to show that existing fault models are not accurate enough in the context of defect-based tests. A simple DFT circuit is added to increase coverage to 100% without affecting normal circuit operation. Furthermore, it is shown that this DFT circuit changes the operation modes of insensitive transistors.",2007,0, 3415,Skew Estimation and Correction of Text Using Bounding Box,"This paper, in addition to reporting some existing techniques, proposes a new technique for skew correction. It includes a novel document skew detection algorithm based on a bounding box technique. The algorithm works quite efficiently for detecting the skew and then correcting it. The method has been tested on various text documents, and very promising results have been achieved, with more than 97% accuracy. A comparative study has been reported to provide a detailed analysis of the proposed method together with some other existing methods in the literature.",2008,0, 3416,On Skew Estimation and Correction of Text,"This paper, in addition to reporting some existing techniques, proposes some new techniques for skew correction. It includes two novel document skew detection algorithms based on histogram statistics and connected component analysis. The histogram-based algorithm is very efficient for detecting the skew angle; we analyze the lines as peaks and valleys on the histogram. The connected component analysis is based on finding the connected components within a single line and considers them as one blob to estimate the skew angle. The two methods have been tested on various text documents, and very promising results have been achieved, with more than 99% accuracy. A comparative study has been reported to provide a detailed analysis of the proposed methods.",2007,0, 3417,Redundant motion vectors for improved error resilience in H.264/AVC coded video,"This proposal presents a new error-robust strategy for encoding redundant pictures for the H.264/AVC standard. The method is based on providing motion vectors as redundant data, i.e., providing extra protection to the motion information of the encoded stream. The proposed system is implemented based on the existing redundant coding algorithm of the scalable extension of H.264/AVC. The performance of the algorithm is evaluated using various objective quality measurements under both error-free and error-prone Internet protocol (IP) packet network environments. The proposed algorithm increases the bandwidth utilization with slight degradation in the primary picture quality for error-free conditions, compared to the existing redundant coding method of JSVM (joint scalable video model). Furthermore, the simulation results under packet loss environments show that the proposed algorithm outperforms the existing redundant picture coding of JSVM.",2008,0, 3418,Aberration correction for biological acoustic impedance microscope,"This report deals with the scanning acoustic microscope for imaging the cross-sectional characteristic acoustic impedance of biological soft tissues.
A focused pulse wave is transmitted to the object placed on the rear surface of a plastic substrate. The reflected signals from the target and reference are interpreted into local acoustic impedance. A two-dimensional profile is obtained by scanning the transducer. This method, using a spherical transducer, produces a significant aberration, because the sound speed of the substrate is different from that of water, which is used as the coupling medium. For this reason the spatial resolution is reduced. The spatial resolution was improved by using a 3D deconvolution technique, considering the impulse response of the acoustic system. In addition, as the incidence is not vertical, not only longitudinal waves but also transversal waves are generated in the substrate. Calibration for acoustic impedance was carried out after the deconvolution process, considering the above-mentioned oblique incidence.",2009,0, 3419,Threshold computation for robust fault detection in a class of continuous-time nonlinear systems,"This report presents a study on the problem of residual evaluation and threshold computation for the detection of faults in continuous-time nonlinear systems. A generalized framework for designing a norm-based residual evaluation scheme is proposed. The problem of threshold computation is formulated as an optimization problem and its solution is developed using the well-known LMI technique. Three commonly used thresholds, i.e., Jth,RMS,2, Jth,Peak,Peak and Jth,Peak,2, are computed. A numerical example is given to show the effectiveness of the proposed approach.",2009,0, 3420,Robust fault detection in nonlinear systems: A Three-Tank Benchmark application,"This report presents fault detection in nonlinear systems with application to the nonlinear Three-Tank System Benchmark. A robust fault detection filter based on H∞ performance has been designed. Similarly, three kinds of thresholds, Jth,RMS,2, Jth,Peak,Peak and Jth,Peak,2, are also designed. All the algorithms are successfully tested with the real process. The experimental results given at the end of the paper demonstrate the effectiveness of the proposed approach.",2010,0, 3421,Fault-Location System for Multiterminal Transmission Lines,"This research presents the development and implementation in a computational routine of algorithms for fault location in multiterminal transmission lines. These algorithms are part of a fault-location system, which is capable of correctly identifying the fault point based on voltage and current phasor quantities, calculated by using measurements of voltage and current signals from intelligent electronic devices located on the transmission-line terminals. The algorithms have access to the electrical parameters of the transmission lines and to information about the transformers' loading and their connection type. This paper also presents the development of phase component models for the power system elements used by the fault-location algorithms.",2010,0, 3422,Vigilance and Error Detection in an Automated Command and Control Environment,"This study focused on improving vigilance performance through developing methods to arouse subjects to the possibility of errors in a data manipulation information warfare attack. The study suggests that by continuously applying arousal stimuli, subjects would retain initially high vigilance levels, thereby avoiding the vigilance decrement phenomenon and improving error detection. The research focused on which methods were the most effective, as well as the impact of age upon the arousability of the subjects.
Further, the implications of vigilance and vigilance decrement for correct detections as well as productivity were explored. The study used a simulation experiment to provide a vigilance task in a reality-based information warfare environment. The results of the study suggest that stimulus type and age do impact arousal, although stimulus type had the greater effect. Also, moderate support was found to indicate that arousal does affect vigilance and vigilance decrement. However, the final analysis revealed that it was the arousal-vigilance interaction that had the greatest impact on correct detection and productivity.",2006,0, 3423,Phase distance relaying algorithm for unbalanced inter-phase faults,"This study presents a new fault impedance estimation algorithm for inter-phase faults for the purpose of phase distance relaying of extra high voltage (EHV) and ultra high voltage (UHV) transmission lines. The principle is based on the assumption that the fault arc path is predominantly resistive, and that the phase angles of the unmeasured fault point voltage and fault arc path current are equal. This is used to construct the fault impedance estimation equation, which naturally prevents the effects of fault arc path resistance, load current, power swing and load encroachment. PSCAD (simulation software) simulation shows the validity of the proposed algorithm.",2010,0, 3424,Extended character defect model for recognition of text from maps,"Topographic maps contain a small amount of text compared to other forms of printed documents. Furthermore, the text and graphical components typically intersect with one another, thus making the extraction of text a very difficult task. Creating training sets of a suitable size from the actual characters in maps would therefore require the laborious processing of many maps with similar features and the manual extraction of character samples. This paper extends the types of defects represented by Baird's document image degradation model in order to create pseudo-randomly generated training sets that closely mimic the various artifacts and defects encountered in characters extracted from maps. Two Hidden Markov Models are then trained and used to recognize the text. Tests performed on extracted street labels show an improvement in performance from 88.4%, when only the original Baird's model is used, to a character recognition rate of 93.2% when the extended defect model is used for training.",2010,0, 3425,Improved topology error detection via optimal measurement placement,"Topology errors are known to significantly affect the performance of state estimators. Methods have been developed for detecting and identifying topology errors based on the measurement residuals. However, the capability to detect and identify topology errors is related to the measurement locations and network configuration. In some cases, topology errors associated with certain branches cannot be detected. The topology error detection capability can be improved by adding extra measurements at strategic locations. In this paper, this will be accomplished by using both phasor measurements and traditional measurements. It will be shown that a few extra measurements can drastically improve the capability to detect and identify topology errors in a given system.
The proposed meter placement algorithm will be described and illustrative case studies will be presented.",2008,0, 3426,TPT-RAID: a High Performance Box-Fault Tolerant Storage System,"TPT-RAID is a multi-box RAID wherein each ECC group comprises at most one block from any given storage box, and can thus tolerate a box failure. It extends the idea of an out-of-band SAN controller into the RAID: data is sent directly between hosts and targets and among targets, and the RAID controller supervises ECC calculation by the targets. By preventing a communication bottleneck in the controller, excellent scalability is achieved while retaining the simplicity of centralized control. TPT-RAID, whose controller can be a software module within an out-of-band SAN controller, moreover conforms to a conventional switched network architecture, whereas an in-band RAID controller would either constitute a communication bottleneck or would have to also be a full-fledged router. The design is validated in an InfiniBand-based prototype using iSCSI and iSER, and required changes to relevant protocols are introduced.",2007,0, 3427,RazorII: In Situ Error Detection and Correction for PVT and SER Tolerance,"Traditional adaptive methods that compensate for PVT variations need safety margins and cannot respond to rapid environmental changes. In this paper, we present a design (RazorII) which implements a flip-flop with in situ detection and architectural correction of variation-induced delay errors. Error detection is based on flagging spurious transitions in the state-holding latch node. The RazorII flip-flop naturally detects logic and register SER. We implement a 64-bit processor in 0.13 m technology which uses RazorII for SER tolerance and dynamic supply adaptation. RazorII-based DVS allows elimination of safety margins and operation at the point of first failure of the processor. We tested and measured 32 different dies and obtained 33% energy savings over traditional DVS using RazorII for supply voltage control. We demonstrate SER tolerance on the RazorII processor through radiation experiments.",2009,0,8834 3428,Design and implementation of error detection and correction circuitry for multilevel memory protection,"Traditional memories use only two levels per cell (0/1), which limits their storage capacity to 1 bit per cell. By doubling the cell capacity, we increase the density of the memory at the expense of its reliability. There are several types of memories that employ multi-level techniques. The subject of this paper is the design of a multi-level dynamic random access memory (MLDRAM). The problem of its reliability is investigated and a practical solution is proposed. The solution is based on the organization of the error-correcting code (ECC) that is tuned to the MLDRAM implementation. Conventional memories employ single-error-correcting and double-error-detecting (SEC-DED) ECCs. While such codes have been considered for MLDRAMs, their use is inefficient, due to likely double-bit errors in a single cell. For this reason, we propose an induced ECC architecture that uses ECC in such a way that no common error corrupts two bits. Induced ECC allows a significant increase in the reliability of the MLDRAM, by making use of improved check-bit generation circuitry that allows us to use less space for the parity-bit generation circuitry.
The proposed solutions make the MLDRAM more tolerant to any kind of fault, and consequently more practical for mass production",2002,0, 3429,Incorporating fault dependency and debugging delay in software reliability analysis,"Traditional software reliability growth models show a trend toward generalization. The original restrictive assumptions are relaxed to adapt to different practical software testing environments. In our current research, the assumptions of immediate fault removal and fault independence are relaxed. In this paper, a modeling framework for software reliability is proposed to incorporate both fault dependency and debugging time delay. Various models are derived based on different assumptions on the debugging lag. This approach is illustrated with a real dataset from a software project.",2008,0, 3430,Software test selection patterns and elusive bugs,"Traditional white and black box testing methods are effective in revealing many kinds of defects, but the more elusive bugs slip past them. Model-based testing incorporates additional application concepts in the selection of tests, which may provide more refined bug detection, but does not go far enough. Test selection patterns identify defect-oriented contexts in a program. They also identify suggested tests for risks associated with a specified context. A context and its risks form a kind of conceptual trap designed to corner a bug. The suggested tests will find the bug if it has been caught in the trap.",2005,0,8739 3431,On modeling crosstalk faults,"Traditionally, digital testing of integrated semiconductor circuits has focused on manufacturing defects. There is another class of failures that happens due to circuit marginalities. Circuit-marginality failures are on the rise due to shrinking process geometries, diminishing supply voltage, sharper signal-transition rates, and aggressive styles in circuit design. There are many different marginality issues that may render a circuit nonoperational. Capacitive cross coupling between interconnects is known to be a leading cause of marginality-related failures. In this paper, we present novel techniques to model and prioritize capacitive crosstalk faults. Experimental results are provided to show the effectiveness of the proposed modeling technique on large industrial designs.",2005,0, 3432,On the Use of Dynamic Binary Instrumentation to Perform Faults Injection in Transaction Level Models,"Transaction Level Modelling (TLM) has been widely accepted as a system modelling framework focused on the communication among system components. This approach allows efficient and accurate estimation and rapid design space exploration. Besides functional simulation for the validation of hardware/software designs, there are additional reliability requirements that need advanced simulation techniques to analyze the system behaviour in the presence of faults. Several traditional VHDL fault injection mechanisms like mutants or saboteurs have been adapted to SystemC model descriptions. The main drawback of these approaches is the necessity of source code modification to carry out the fault injection campaigns. In this paper, we propose the use of Dynamic Binary Instrumentation (DBI) to perform fault injection in SystemC TLM models. DBI is a technique to intercept software routine calls, allowing argument and return value corruption and data structure modification at runtime.
This technique requires neither source code modifications nor recompilation of models to generate module mutants or to insert saboteurs in the signal communication path.",2009,0, 3433,Fault tolerance and configurability in DSM coherence protocols,"To make complex computer systems more robust and fault tolerant, data must be replicated for high availability. The level of replication must be configurable to control overhead costs. Using an application suite, the authors test several distributed shared memory coherence protocols under different workloads and analyze the operation costs, fault tolerance, and configurability of each",2000,0, 3434,Automatic Fault Localization for SystemC TLM Designs,"To meet today's time-to-market demands, catching bugs as early as possible during the design of a system is absolutely essential. In Electronic System Level (ESL) design, where SystemC has become the de-facto standard due to Transaction Level Modeling (TLM), many approaches for verification have been developed. They determine an error trace which demonstrates the difference between the required and the actual behavior of the system. However, the subsequent debugging process is very time-consuming, in particular due to TLM-related faults caused by complex process synchronization and concurrency. In this paper, we present an automatic fault localization approach for SystemC TLM designs. The approach determines components that can be changed such that the intended behavior of the design is obtained, removing the contradiction given by the error trace. Techniques based on Bounded Model Checking (BMC) are used to find the components. We demonstrate the quality of our approach by experimental results.",2010,0, 3435,Signpost: matching program behavior against known faults,"To reduce debugging time and effort, Signpost uses a program's behaviour to query a knowledge base and automatically retrieve articles that describe known bugs and approaches to solving them.",2003,0, 3436,Knowledge Oriented Network Fault Resolution Method Based on Active Information Resource,"To reduce the loads imposed on network administrators, we have proposed AIR-NMS, which is a network management support system (NMS) based on Active Information Resource (AIR). In AIR-NMS, various information resources (e.g., state information of a network, practical knowledge of network management) are combined with software agents which have the knowledge and functions for supporting the utilization of the resources, and thus individual resources are given activities as AIRs. Through the organization and cooperation of AIRs, AIR-NMS provides the administrators with practical measures against a wide range of network faults. To make AIR-NMS fit for practical use, this paper proposes a method for achieving the effective installation and utilization of the network management knowledge needed in AIR-NMS.",2010,0, 3437,"The Impact of Tower Shadow, Yaw Error, and Wind Shears on Power Quality in a Wind-Diesel System","To study the impact of the aerodynamic aspects of a wind turbine (WT) (i.e., tower shadow, wind shears, yaw error, and turbulence) on the power quality of a wind-diesel system, all electrical, mechanical, and aerodynamic aspects of the WT must be studied. Moreover, the contribution of the diesel generator system and its controllers should be considered.
This paper describes how the aerodynamic and mechanical aspects of a WT can be simulated using TurbSim, AeroDyn, and FAST, while the electrical parts of the WT, the diesel generator, its controllers, and the electrical loads are modeled by Simulink blocks. Simulation results obtained from the model are used to observe the power and voltage variations at the WT generator terminals under different operating conditions. Furthermore, the effects of tower shadow, wind shears, yaw error, and turbulence on the power quality in a stand-alone wind-diesel system utilizing a fixed-speed WT are studied.",2009,0, 3438,Survivability Analysis of Wireless Sensor Network with Transient Faults,"To the best of our knowledge, the survivability of WSNs has never been studied considering the effects of failures on the network. We perceive network survivability as a composite measure consisting of both network failure duration and failure impact on the network. In this paper we study network survivability in the unstable state: in this state, the network is affected by failures that occur temporarily and instantaneously and may occur several times. In other words, the main characteristics of these failures are their frequency and their temporary nature. We propose a survivability model for the network in the unstable state that is based on network availability. This availability model presents the frequent availability of a route. In this paper, we first derive an availability model for the network in the unstable state that shows the frequent availability of a node. We also use a Markov model for the nodes to show their transmission state according to the availability model. Then we use a computational method to prove our model.",2008,0, 3439,Assessing Fault Sensitivity in MPI Applications,"Today, clusters built from commodity PCs dominate high-performance computing, with systems containing thousands of processors now being deployed. As node counts for multi-teraflop systems grow to thousands and with proposed petaflop systems likely to contain tens of thousands of nodes, the standard assumption that system hardware and software are fully reliable becomes much less credible. Concomitantly, understanding application sensitivity to system failures is critical to establishing confidence in the outputs of large-scale applications. Using software fault injection, we simulated single bit memory errors, register file upsets and MPI message payload corruption and measured the behavioral responses for a suite of MPI applications. These experiments showed that most applications are very sensitive to even single errors. Perhaps most worrisome, the errors were often undetected, yielding erroneous output with no user indicators. Encouragingly, even minimal internal application error checking and program assertions can detect some of the faults we injected.",2004,0, 3440,An online defects inspection system for satin glass based on machine vision,"Today, quality control is a nodal point in many industries, and in the glass industry in particular; in most cases human inspection cannot keep up with the pressing market requirements, therefore computer vision inspection systems are preferable to reduce costs and to improve product quality, but several problems must be solved. In this paper a prototype system, able to reproduce all the functionalities of an automatic glass inspection system, is designed and realized; it guarantees good results and considerable reliability with low incidence on manufacturing costs.
The final in-line computer vision system is under development in cooperation with a specialized electronics company.",2009,0, 3441,Enhancing the fault tolerance of workflow management systems,"Today's commercial workflow systems, although useful, do not scale well, have limited fault tolerance, and don't interoperate well with other workflow systems. The authors discuss current research directions and potential future extensions that might enable workflow services to meet the needs of mission-critical applications",2000,0, 3442,ARROW - A Generic Hardware Fault Injection Tool for NoCs,"Today's NoCs are reaching a level where it is getting very hard to ensure 100% functionality. Consequently, fault tolerance has become an important aspect of today's design techniques and, like the system itself, it has to be validated and tested. A vulnerable point of attack for faults in distributed systems like NoCs is certainly the interconnect. In this paper, we will give an overview of today's challenges in interconnect technology and the potential resulting physical faults. Unfortunately, describing faults at the physical level is far too detailed, and so it is necessary to abstract and to map all these faults to a logical level. In more complex systems, the logical level may also become too detailed. As a result, an even more abstract layer, which is defined as the functional level, has to be introduced. To be able to verify fault tolerance, an experiment-based test approach like fault injection is necessary. Arrow is a generic hardware fault injection tool, written in the hardware description language VHDL, especially designed for digital fault injection on the interconnect of NoCs, making use of fault models from the logical level.",2009,0, 3443,Detecting motor bearing faults,"Three-phase induction motors are the workhorses of industry because of their widespread use. They are used extensively for heating, cooling, refrigeration, pumping, conveyors, and similar applications. They offer users simple, rugged construction, easy maintenance, and cost-effective pricing. These factors have promoted standardization and the development of a manufacturing infrastructure that has led to a vast installed base of motors; more than 90% of all motors used in industry worldwide are ac induction motors. Causes of motor failures are bearing faults, insulation faults, and rotor faults. Early detection of bearing faults allows replacement of the bearings, rather than replacement of the motor. The same types of bearing defects that plague larger machines, such as 100 hp ones, are mirrored in lower-hp machines which have the same type of bearings. Even though the replacement of defective bearings is the cheapest fix among the three causes of failure, it is the most difficult one to detect. Motors that are in continuous use cannot be stopped for analysis. We have developed a circuit monitor for these motors. Incipient bearing failures are detectable by the presence of characteristic machine vibration frequencies associated with the various modes of bearing failure. We will show that the circuit monitors that we developed can detect these frequencies using wavelet packet decomposition and a radial basis neural network. This device monitors an induction motor's current and detects a bearing failure.",2004,0, 3444,Vehicle License Plate tilt correction based on the Weighted Least Square Method,"Tilt correction is a very crucial part of automatic Vehicle License Plate (VLP) recognition.
In this paper, according to the Weighted Least Square Method (WLSM), the VLP region is fitted to a straight line and then the line slope a1 is obtained, from which the rotation angle α is calculated. Finally the whole image is rotated by -α and the image tilt correction is performed. The experimental results show that the proposed method can quickly and accurately obtain the tilt angle and has great robustness and adaptability. Compared with the Least Square Method (LSM), with the proposed method the tilt angle is more precise and the value of the objective function is smaller; compared with the Hough Transformation (HT) and Radon Transformation (RT), the proposed method features a faster processing time and a more precise tilt angle, which makes it particularly well adapted to real-time tilt correction in Intelligent Transportation Systems (ITS).",2010,0, 3445,Nonrandom quantization errors in timebases,"Timebase distortion causes nonlinear distortion of waveforms measured by sampling instruments. When such instruments are used to measure the RMS amplitude of the sampled waveforms, such distortions result in errors in the measured RMS values. This paper looks at the nature of the errors that result from nonrandom quantization errors in an instrument timebase circuit. Simulations and measurements on a sampling voltmeter show that the errors in measured RMS amplitude have a nonnormal probability distribution, such that the probability of large errors is much greater than would be expected from the usual quantization noise model. A novel timebase compensation method is proposed which makes the measured RMS errors normally distributed and reduces their standard deviation by a factor of 25. This compensation method was applied to a sampling voltmeter and the improved accuracy was realized",2001,0,6954 3446,Improving Fault Tolerance by Using Reconfigurable Asynchronous Circuits,"To achieve fault tolerance several tasks have to be performed, from fault detection up to recovery procedures. Sophisticated methods for each sub-task have been and are still being developed, but rarely is a complete solution proposed at the circuit level. This paper fills the gap by proposing a concept that combines all required steps to implement fault tolerant digital circuits. The approach is based on asynchronous Four-State Logic (FSL), which belongs to the family of Quasi Delay Insensitive (QDI) circuits. Contrary to conventional approaches, which use synchronous logic plus additional hardware and/or software to achieve fault tolerance, we use the inherent properties of FSL for fault detection, fault localization and fault recovery. Only deadlock detection and error mitigation require an enhancement of the conventional FSL design. For this purpose, a monitoring unit has to be added, and self-healing cells were developed that can be handled as conventional logic within the design flow. The feasibility of the approach is verified by a first prototype implementation of a fault tolerant adder circuit.",2008,0, 3447,Effective error concealment algorithm of whole frame loss for H.264 video coding standard by recursive motion vector refinement,"To benefit network transmission, the bit stream of a whole frame coded by H.264 is usually grouped into a single packet. However, errors and packet loss during transmission will result in distortion of the reconstructed video. To deal with this problem, error concealment is widely used to recover the lost frames and to prevent error propagation.
In this paper, we propose an accurate motion vector refinement algorithm to further refine the well-known motion copy error concealment method and thus improve the concealed results. By using the motion vector correlation in the spatial and temporal domains, a possible motion vector difference variation area is discovered in the previous frame by a recursive motion vector difference examination approach. Afterwards, a more precise motion vector difference can be derived from the selected motion vector difference variation area. With the help of our proposed algorithm, refined motion vectors can be obtained, thus improving the accuracy of the motion vectors derived by a motion copy error concealment algorithm. Simulation results show that our proposed algorithm can achieve better performance compared to well-known schemes.",2010,0, 3448,Using inspection data for defect estimation,"To control projects, managers need accurate and timely feedback on the quality of the software product being developed. I propose subjective team estimation models calculated from individual estimates and investigate the accuracy of defect estimation models based on inspection data",2000,0, 3449,Effects of Internal Hole-Defects Location on the Propagation Parameters of Acoustic Wave in the Maple Wood,"To detect the effects of the location of internal hole-defects on the propagation parameters of acoustic waves in wood, forty maple wood samples used as study objects are tested in this paper using a PLG (Portable Lumber Grader) instrument. The propagation velocity and vibration frequency of the acoustic wave in intact wood and defective wood were compared, and then the correlations between the propagation velocity or vibration frequency and the elastic modulus were discussed. The analysis results showed that: (1) there were significant positive correlations between the propagation velocity or vibration frequency of the acoustic wave and the elastic modulus of the intact and defective maple wood samples; (2) the propagation velocity and vibration frequency of the acoustic wave in defective wood samples were lower than those of intact wood samples; and (3) the changes of the acoustic wave propagation parameters were different when the locations of the internal hole-defects of the wood samples were different.",2010,0, 3450,Strategies of fault tolerant operation for three-level PWM inverters,"This paper proposes fault tolerant operation strategies for three-level neutral point clamped pulsewidth modulation inverters in high power, safety-critical applications. Likely faults are identified and fault tolerant schemes based on the inherent redundancy of voltage vectors are presented. Simulation verification is performed to show fault handling capabilities. Prototyping and principle investigation are performed on a 150-kW inverter and testing results are presented.",2006,0, 3451,The nonredundant error correction differential detection in soft-decision decoding,"This paper proposes the application of nonredundant error correction (NEC) in soft-decision decoding. Combined with differential demodulation of π/4-DQPSK, we emphasize the analysis of the new algorithm. A practical implementation scheme is presented. The BER performance and correction capability of NEC are investigated by computer simulation in AWGN and Rician fading channels. The simulation results show that the performance improvement of the proposed algorithm over conventional differential demodulation is 1.4 dB at a BER of 10^-4 in the AWGN channel.
In the Rician channel there is the same notable performance improvement. The algorithm enables NEC to be used in soft-decision decoding, which is widely applied in practical satellite communication systems. It overcomes the limitation that NEC could only be used in hard-decision systems. Therefore, the system capacity and communication quality are notably improved. It is worthwhile for satellite communication, for which power and spectrum are limited",2000,0, 3452,On-line error detection and correction in storage elements with cross-parity check,"This paper proposes the cross-parity check as a method for the on-line detection of multiple bit-errors in storage elements of microprocessors such as registers or register files. Transient or 'soft' errors caused by radiation or electromagnetic influences are the focus of this work. Especially for register files or register groups, an easily implementable error correction method is proposed, which can be performed by software routines or additional hardware. The method is based on the logical interpretation of cross-parity vectors.",2002,0, 3453,A Reliable Reconfiguration Controller for Fault-Tolerant Embedded Systems on Multi-FPGA Platforms,"This paper proposes the design of a controller managing the fault tolerance of multi-FPGA platforms, contributing to the creation of a reliable system featuring high flexibility and resource availability. A fault management strategy that exploits the devices' reconfiguration capabilities is proposed; the Reconfiguration Controller, the focus of this paper, is the main component in charge of implementing such a strategy. The innovative points raised by this work are 1) the identification of a distributed control architecture, allowing the avoidance of single points of failure, 2) the management of both recoverable and non-recoverable faults, and 3) the definition of an overall reliable multi-FPGA system.",2010,0, 3454,Performance Improvement of the IPMSM Position Sensor-less Vector Control System by the On-line Motor Parameter Error Compensation and the Practical Dead-time Compensation,"This paper proposes performance improvement methods for the IPMSM position sensor-less vector control system. The stability of the position sensor-less control is influenced by the motor parameter error and the dead-time compensation error. In particular, this becomes a problem in the case of extreme temperature variation and at low speed. The influence on the position estimation is analyzed for the proposed method, which uses an armature current flux adaptive observer. For performance improvement and simplification of the control system, offline parameter measurement of the stator resistance, permanent magnet flux and d,q-axis inductances, and dead-time compensation are proposed. The on-line parameter error compensation method, which uses only the observer signal without any additional sensor or signal injection, is applied to reduce the position error. In addition, a practical dead-time compensation method is proposed based on experimental measurement. Experimental results of the test system using the proposed methods showed good performance.",2007,0, 3455,An effective end-to-end solution for error robust video communications,"This paper provides an end-to-end solution for robust transmission of an H.264 video stream over a wireless IP network.
On the decoder side, directional weighted spatial concealment with estimated edge direction and subblock-based motion compensated temporal concealment with weighted boundary match are employed to improve texture continuity. On the encoder side, a global rate-distortion optimization is proposed with a three-state Markov channel model and drift-aware pixel-level recursive estimation of the channel distortion. Simulations demonstrate considerable improvements in image quality.",2007,0, 3456,Identification of residual generators for fault detection and isolation of a spacecraft simulated model,"This paper provides preliminary results regarding the design of residual generators in order to realise a complete diagnosis scheme for a complex process model with faults, uncertainties and disturbances. Under the hypothesis of exploiting input-output polynomial descriptions, it is possible to determine the residual generator functions. In particular, this work shows that the mathematical computation of these filters can be obtained by following a black-box identification approach. Application examples of these filters, when used for the fault detection and isolation of a simulated spacecraft system, are finally reported. The robustness and reliability issues of the suggested residual generation approach and design are investigated by means of extensive simulations.",2007,0, 3457,A stereo-vision based compensation method for pose error of a 6-DOF parallel manipulator with 3-2-1 orthogonal chains,"This paper puts forward an algorithm to compensate the pose error of a 6-DOF (six degrees of freedom) parallel manipulator based on stereo vision. Firstly, under the principle of rigid motion, the position and orientation are expressed by the coordinates of three feature points which are not on the same line. At the same time, the position and orientation of the moving platform at the current moment are also computed. Then, according to the kinematics model of the parallel manipulator and its pose at the current moment, the input values of the micro-displacement drivers are computed. In the end, these input values are utilized to compensate the pose error of the moving platform. The simulation results demonstrate that the compensated curve of the parallel manipulator fits well with its actual curve.",2008,0, 3458,Strain sensitivity of a modified single-defect photonic crystal nanocavity for mechanical sensing,"This paper reports theoretical and experimental investigations of the strain-sensitive effect of a 2D photonic crystal (PhC) nanocavity resonator for mechanical sensing applications. By using the finite element method (FEM) with ANSYS software and finite difference time domain (FDTD) simulation with CrystalWave software, the strain sensitivity of a high quality factor PhC nanocavity has been studied. Linear relationships between the strain and the shift of the resonant wavelength of the cavity have been obtained. The sensitivities to longitudinal and transverse strains have been determined to be 1.9 nm/m and 0.25 nm/m, respectively. The sample structures were fabricated and characterized. The initial results show that a cavity peak with a Q factor estimated to be 3800 was obtained.",2010,0, 3459,Scalable Fault-Tolerant Distributed Shared Memory,"This paper shows how a state-of-the-art software distributed shared-memory (DSM) protocol can be efficiently extended to tolerate single-node failures.
In particular, we extend a home-based lazy release consistency (HLRC) DSM system with independent checkpointing and logging to volatile memory, targeting shared-memory computing on very large LAN-based clusters. In these environments, where global coordination may be expensive, independent checkpointing becomes critical to scalability. However, independent checkpointing is only practical if we can control the size of the log and checkpoints in the absence of global coordination. In this paper we describe the design of our fault-tolerant DSM system and present our solutions to the problems of checkpoint and log management. We also present experimental results showing that our fault tolerance support is light-weight, adding only low messaging, logging and checkpointing overheads, and that our management algorithms can be expected to effectively bound the size of the checkpoints and logs of real applications.",2000,0, 3460,Two fault tolerant control strategies for shunt active power filter systems,"This paper shows how to integrate fault detection, fault identification and fault compensation into two different types of fault tolerant active power filter systems. These proposed strategies can compensate the open-circuit and short-circuit failures occurring in the power converter devices used on active power filter systems. The fault compensation is achieved by reconfiguring the power converter topology by using isolating and connecting devices. These devices are used to redefine the post-fault converter topology. This allows for continuous operation of the system after isolation of the faulty power switches of the converter. Experimental results demonstrate the validity of the proposed fault tolerant control strategies.",2002,0, 3461,Fault-tolerant control system for joint module of light weight space robotic system,"This paper studies a fault-tolerant control system designed for a joint module of a light weight space robotic system. Considering the extremely tight constraints on small mass, low power consumption, low volume, stringent cost control, high system reliability, etc., the joint electronics is divided into two layers, a central processing layer and a peripheral interface layer, using a different redundancy design on each layer. Fault modes of the joint electronics and their effects on the performance of servo control of joint position are deeply analyzed. Based on the results of the analysis, fault-tolerant mechanisms are proposed. The features of the hardware and software of the fault-tolerant control system are also presented. A prototype is developed, and its performance is tested. With these results, the feasibility of the system design is verified.",2009,0, 3462,Probability of Error Analysis for Hidden Markov Model Filtering With Random Packet Loss,"This paper studies the probability of error for maximum a posteriori (MAP) estimation of hidden Markov models, where measurements can be either lost or received according to another Markov process. Analytical expressions for the error probabilities are derived for the noiseless and noisy cases. Some relationships between the error probability and the parameters of the loss process are demonstrated via both analysis and numerical results.
In the high signal-to-noise ratio (SNR) regime, approximate expressions which can be more easily computed than the exact analytical form for the noisy case are presented.",2007,0, 3463,An Improved Method of Differential Fault Analysis on the SMS4 Cryptosystem,"This paper studies the security of the block cipher SMS4 against differential fault analysis. It makes use of the byte-oriented fault model and differential analysis. On the basis of the byte-oriented model, the 128-bit secret key for SMS4 can be recovered from 2 faulty ciphertexts in our method. Compared with all previous techniques, our work improves the efficiency of fault injection and decreases the number of faulty ciphertexts. It provides a new approach for fault analysis on other block ciphers.",2007,0, 3464,Fault information model and maintenance cycle forecasting for ship's power system,"This paper suggests an information model of the fault variable rate. Based on the new model, the reliability of the ship's power system can be estimated, and the maintenance cycle can be predicted. The calculation method and the application of the results are discussed in detail. From the model analysis and the case study, the conclusion is that estimating the reliability and calculating the maintenance cycle with the fault variable rate model is a practical new approach. The state information can be obtained in terms of a single-sample function, and the supervision of the system is simple and effective. By arranging the maintenance time with reference to the new method, the operation and management of the ship's power system can be established on the basis of scientific analysis. Because the fault variable rate model is especially suitable for the single sample of a singly-produced system, it can be used in the reliability analysis of other large-scale complicated systems, such as complicated control systems",2002,0, 3465,Optimization of error detecting codes for the detection of crosstalk originated errors,"This work applies weight-based codes to the detection of crosstalk-originated errors. This type of fault, whose importance grows with device scaling, may originate errors that are undetectable by the commonly used error detecting codes in VLSI ICs. Conversely, such errors can be easily detected by weight-based codes that, however, have smaller encoding capabilities. In order to reduce the cost of these codes, a graph-theoretic optimization is used. Moreover, new applications of these codes are explored regarding the synthesis of self-checking FSMs and the detection of errors related to the clock distribution network",2001,0, 3466,Fault Management for Self-Healing in Ubiquitous Sensor Network,"This work concerns the development of a sensor fault model for detecting and isolating sensor, actuator, and various other faults in USNs (Ubiquitous Sensor Networks). USNs are developed to create relationships between humans, objects and computers in various fields. Research on the management of sensor nodes is very important because a ubiquitous sensor network has numerous sensor nodes. However, self-healing technologies are insufficient to restore a sensor node when an error event occurs in a USN environment. A layered healing architecture for each node layer (3-tier) is needed, because most sensor devices in USNs have different capacities. In this paper, we design a fault model and architecture for the sensor and the sensor node separately for self-healing in USNs.
In order to evaluate our approach, we implement a prototype of the USN fault management system. We compare the resource use of self-healing components in general distributed computing (wired networks) and in the USN.",2008,0, 3467,Random and systematic defect analysis using IDDQ signature analysis for understanding fails and guiding test decisions,"This work demonstrates IDDQ signature analysis of random and systematic defects, including yield detractors and reliability defects. Application is demonstrated both for understanding failure root cause and for guiding test decisions. IDDQ signatures contain rich information about the circuit, defect behavior and processing conditions. This paper describes capturing that information by classifying signatures into different categories and using the classifications to learn about the nature of the defects occurring on a variety of ASIC chips.",2004,0, 3468,A precise sample-and-hold circuit topology in CMOS for low voltage applications with offset voltage self correction,"This work describes a new topology for low-voltage CMOS sample-and-hold circuits with self-correction of the offset voltage caused by mismatches in the differential input pair of the operational amplifier. The charge injection of the NMOS switches, although not properly modeled by the simulators, is an important factor, and it is minimized in this topology. The results were obtained using the ACCUSIM II simulator on the AMS CMOS 0.8 μm CYE process, and they reveal that the circuit has a reduced error of just 0.03% at the output.",2002,0,5235 3469,Development of a fault tolerant flight control system,"This work discusses the design and development of a fault tolerant flight control system as part of the research requirement of the author's Master's degree thesis. The requirements of safety critical systems, reliable systems, fault tolerant systems, avionics and embedded systems were considered for this project. Byzantine resilience and common mode fault avoidance are considered beyond the scope of this work at this time. The fault tolerant system designed for this work was set up as a triple modular redundant system to tolerate the existence of one fault within the system. The system was implemented with the PC/104 embedded PC platform. Microsoft Flight Simulator was used as a test platform to generate input data and to demonstrate successful operation by showing a flight under the control of the flight control system. The end results show that a fault tolerant system can be developed to successfully tolerate one fault while the system is in operation.",2004,0, 3470,An Immune Fault Detection System for Analog Circuits with Automatic Detector Generation,This work focuses on fault detection in electronic analog circuits. A fault detection system for analog circuits based on cross-correlation and an artificial immune system is proposed. It is capable of detecting faulty components in analog circuits by analyzing the circuit's impulse response. The use of cross-correlation for preprocessing the impulse response drastically reduces the size of the detector used by the real-valued negative selection algorithm (RNSA). The proposed method can automatically generate very efficient detectors by using quadtree decomposition.
Results have demonstrated that the proposed system is able to detect faults in a Sallen-Key bandpass filter and in a continuous-time state variable filter.,2006,0, 3471,Cross-layer WLAN measurement and link analysis for low latency error resilient wireless video transmission,This work introduces a cross-layer measurement and link analysis strategy for video transport over IEEE 802.11. Field trial measurement data is presented for streamed H.264 video over ad-hoc 802.11g links. The data is used to analyze the interactions between the physical/network/transport and application layers. The development of cross-layer optimized low-latency error-resilient video transmission schemes is discussed.,2005,0, 3472,Techniques to reduce the soft error rate of a high-performance microprocessor,"Transient faults due to neutron and alpha particle strikes pose a significant obstacle to increasing processor transistor counts in future technologies. Although fault rates of individual transistors may not rise significantly, incorporating more transistors into a device makes that device more likely to encounter a fault. Hence, maintaining processor error rates at acceptable levels will require increasing design effort. This paper proposes two simple approaches to reduce error rates and evaluates their application to a microprocessor instruction queue. The first technique reduces the time instructions sit in vulnerable storage structures by selectively squashing instructions when long delays are encountered. A fault is less likely to cause an error if the structure it affects does not contain valid instructions. We introduce a new metric, MITF (Mean Instructions To Failure), to capture the trade-off between performance and reliability introduced by this approach. The second technique addresses false detected errors. In the absence of a fault detection mechanism, such errors would not have affected the final outcome of a program. For example, a fault affecting the result of a dynamically dead instruction would not change the final program output, but could still be flagged by the hardware as an error. To avoid signalling such false errors, we modify a pipeline's error detection logic to mark affected instructions and data as possibly incorrect rather than immediately signaling an error. Then, we signal an error only if we determine later that the possibly incorrect value could have affected the program's output.",2004,0, 3473,Detection of CMOS defects under variable processing conditions,"Transient signal analysis is a digital device testing method that is based on the analysis of voltage transients at multiple test points. In this paper, the power supply transient signals from simulation experiments on an 8-bit multiplier are analyzed at multiple test points in both the time and frequency domains. Linear regression analysis is used to separate and identify the signal variations introduced by defects and those introduced by process variation. Defects were introduced into the simulation model by adding material (shorts) or removing material (opens) from the layout. 246 circuit models were created and 1440 simulations performed on defect-free, bridging-defective and open-defective circuits in which process variation was modeled by varying circuit and transistor parameters within a range of 25% of the nominal parameters.
The results of the analysis show that it is possible to distinguish between defect-free and defective devices with injected process variation",2000,0, 3474,An adaptive spatial error concealment (SEC) with more accurate MB type decision in H.264/AVC,"Transmission of compressed video over error-prone channels may result in packet losses or errors, which can significantly degrade the image quality. Therefore, an error concealment scheme is applied at the video receiver side to mask the damaged video. Considering that there are 3 types of MBs (Macro Blocks) in a natural video frame, i.e., Textural MBs, Edged MBs, and Smooth MBs, this paper proposes an adaptive spatial error concealment that can choose among 3 different methods for these 3 different MB types. To choose the appropriate method, 2 factors are taken into consideration. Firstly, the standard deviation of our proposed edge statistical model is exploited. Secondly, some new features of the latest video compression standard H.264/AVC, i.e., the intra prediction mode, are also considered for criterion formulation. Compared with previous approaches, which are based only on deterministic measurement, the proposed method achieves more accurate MB type decisions, which leads to better image recovery. Subjective and objective image quality evaluations in experiments confirmed this.",2008,0, 3475,Error resilient coding of H.264 using intact long-term reference frames,"Transmission of compressed video over packet-lossy networks is a challenging task due to the well-known error propagation problem, which may substantially degrade the transmitted video quality. In this paper, we propose an error-resilient coding scheme for the newest coding standard H.264 by using intact long-term reference frames. Specifically, two reference frames are available for inter-coding: one is the immediate past frame, marked as the short-term reference frame, and the other is an older and error-free one, marked as the long-term reference frame. To mitigate error propagation, a number of macroblocks (MBs) in each inter-frame are selected to be predicted from the long-term reference frame. Compared to the widely used MB intra-refresh, the proposed scheme is equally robust but more coding-efficient. Numerical simulation results verify the validity of our scheme.",2008,0, 3476,Simulating the Multipath Channel With a Reverberation Chamber: Application to Bit Error Rate Measurements,"We illustrate the use of the reverberation chamber to simulate fixed wireless propagation environments including effects such as narrowband fading and Doppler spread. These effects have a strong impact on the quality of the wireless channel and the ability of a receiver to decode a digitally modulated signal. Different channel characteristics such as power delay profile and RMS delay spread are varied inside the chamber by incorporating various amounts of absorbing material. In order to illustrate the impact of the chamber configuration on the quality of a wireless communication channel, bit error rate measurements are performed inside the reverberation chamber for different loadings, symbol rates, and paddle speeds; the results are discussed. Measured results acquired inside a chamber are compared with those obtained both in an actual industrial environment and in an office.",2010,0, 3477,A Framework for Fault-Tolerant Control of Discrete Event Systems,"We introduce a framework for fault-tolerant supervisory control of discrete-event systems.
Given a plant, possessing both faulty and nonfaulty behavior, and a submodel for just the nonfaulty part, the goal of fault-tolerant supervisory control is to enforce a certain specification for the nonfaulty plant and another (perhaps more liberal) specification for the overall plant, and further to ensure that the plant recovers from any fault within a bounded delay so that following the recovery the system state is equivalent to a nonfaulty state (as if no fault ever happened). The specification for the overall plant is more liberal compared to the one for the nonfaulty part since a degraded performance may be allowed after a fault has occurred. We formulate this notion of fault-tolerant supervisory control and provide a necessary and sufficient condition for the existence of such a supervisor. The condition involves the usual notions of controllability, observability and relative-closure, together with the notion of stability. An example of a power system is provided to illustrate the framework. We also propose a weaker notion of fault-tolerance where, following the recovery, the system state is simulated by some nonfaulty state, i.e., behaviors following the recovery are also the behaviors from some nonfaulty state. Also, we formulate the corresponding notion of weakly fault-tolerant supervisory control and present a necessary and sufficient condition (involving the notion of language-stability) for its existence. We also introduce the notion of nonuniformly-bounded fault-tolerance (and its weak version) where the delay-bound for recovery is not uniformly bounded over the set of faulty traces, and show that when the plant model has finitely many states, this more general notion of fault-tolerance coincides with the one in which the delay-bound for recovery is uniformly bounded.",2008,0, 3478,A New Hardware/Software Platform and a New 1/E Neutron Source for Soft Error Studies: Testing FPGAs at the ISIS Facility,"We introduce a new hardware/software platform for testing SRAM-based FPGAs under heavy-ion and neutron beams, capable of tracing the bit-flips in the configuration memory back to the physical resources affected in the FPGA. The validation was performed using, for the first time, the neutron source at the RAL-ISIS facility. The ISIS beam features a 1/E spectrum, which is similar to the terrestrial one with an acceleration factor between 10^7 and 10^8 in the energy range 10-100 MeV. The results gathered on Xilinx SRAM-based FPGAs are discussed in terms of cross section and circuit-level modifications.",2007,0, 3479,Transition Delay Fault Testing of Microprocessors by Spectral Method,"We introduce a novel spectral method of delay test generation for microprocessors at the register-transfer level (RTL). Vectors are first generated by an available ATPG tool for transition faults on inputs and outputs of the RTL modules of the circuit. These vectors are analyzed using Hadamard matrices to obtain Walsh function components and random noise levels for each primary input. A large number of vector sequences is then generated such that all sequences have the same Walsh spectrum but they differ due to the random noise in them. At the gate level, a fault simulator and an integer linear program (ILP) compact these vector sequences. The initial RTL vector generation also reveals the hard-to-test parts of the circuit. An XOR observability tree was used to improve the testability of those parts. We give results for an accumulator-based processor named Parwan.
The RTL technique produced higher gate-level transition fault coverage in shorter CPU time as compared to a gate-level transition fault ATPG.",2007,0, 3480,Task feasibility analysis and dynamic voltage scaling in fault-tolerant real-time embedded systems,"We investigate dynamic voltage scaling (DVS) in real-time embedded systems that use checkpointing for fault tolerance. We present feasibility-of-scheduling tests for checkpointing schemes for a constant processor speed as well as for variable processor speeds. DVS is then carried out on the basis of the feasibility analysis. We incorporate practical issues such as faults during checkpointing and state restoration, rollback recovery time, memory access time and energy, and DVS overhead. Simulation results are presented for real-life checkpointing data and embedded processors.",2004,0, 3481,The effects of atmospheric correction schemes on the hyperspectral imaging of littoral environments,"We investigate the effects of several atmospheric correction schemes (ACS) on spectral determination, biophysical parameter estimation, and unsupervised classification from hyperspectral data collected over a coastal New Jersey tidal marsh. The ACS examined include two modes from ACORN 4.0 (modes 1.0 and 1.5), ATREM, FLAASH, Tafkaa-6S, and Tafkaa-tabular. Results from the comparative analysis of derived spectra reveal a high degree of similarity among all methods for terrestrially derived spectra but considerably less so for spectra obtained from aquatic environments. Likewise, the similarities in the terrestrial spectra translate to significant correlations among the ACS for two of four computed biophysical reflectance indices (NDVI and Red Edge Wavelength). Of the remaining two indices, PRI and SIPI, computed values from both ACORN modes were not correlated with those of other ACS. Results from ISODATA classification for land and water environments show a consistently high level of agreement (> 90% overall) between both Tafkaa methods. FLAASH also shows a relatively high level of agreement with the Tafkaa methods (> 70%) for land classes, but a relatively low agreement for water classes (< 40%). ACORN mode 1.5 consistently demonstrated the lowest agreement among correction methods",2004,0, 3482,Path vs. subpath vs. link restoration for fault management in IP-over-WDM networks: performance comparisons using GMPLS control signaling,"We investigate three restoration techniques (path, subpath, and link restoration) for fault management in an IP-over-WDM network. We have implemented all of these techniques on the ns-2 simulation platform using generalized multiprotocol label switching (GMPLS) control signaling. These techniques can handle practical situations such as simultaneous multiple fiber failures, which are difficult to design for and recover from by nonrestoration techniques. We then present performance measurement results for the three restoration techniques by applying them to a typical nationwide mesh network running IP over WDM. We investigate interesting trade-offs in the performance of the restoration techniques on restoration success rate, average restoration time, availability, and blocking probability.",2002,0, 3483,Dependability of distributed control system fault tolerant units,"We investigate two types of fault tolerant units (FTUs) suitable for dependable distributed control systems and numerically evaluate their reliability and mean time to failure (MTTF).
A simple simulation-based methodology to numerically evaluate the dependability functions of a wide variety of fault tolerant units is presented. The method is based on simulation of stochastic Petri nets. A set of 15 FTU configurations belonging to five groups is analyzed. Groups 1 and 2 belong to the node-oriented category, whereas groups 3 through 5 belong to the application-oriented category. The methodology allows a quick and accurate evaluation of the dependability functions of any distributed control system design in terms of the type of FTU (i.e., node or application), replicas per group, replicas per FTU, and shared replicas.",2002,0, 3484,Practical fault localization for dynamic web applications,"We leverage combined concrete and symbolic execution and several fault-localization techniques to create a uniquely powerful tool for localizing faults in PHP applications. The tool automatically generates tests that expose failures, and then automatically localizes the faults responsible for those failures, thus overcoming the limitation of previous fault-localization techniques that a test suite be available upfront. The fault-localization techniques we employ combine variations on the Tarantula algorithm with a technique based on maintaining a mapping between statements and the fragments of output they produce. We implemented these techniques in a tool called Apollo, and evaluated them by localizing 75 randomly selected faults that were exposed by automatically generated tests in four PHP applications. Our findings indicate that, using our best technique, 87.7% of the faults under consideration are localized to within 1% of all executed statements, which constitutes an almost five-fold improvement over the Tarantula algorithm.",2010,0, 3485,Impact of Transversal Defects on Confinement Loss of an All-Solid 2-D Photonic-Bandgap Fiber,"We numerically investigate the impact of transversal defects on the minimum confinement loss value in the case of a solid-core photonic-bandgap fiber with parabolic germanium-doped inclusions. We show that a standard deviation of only 5% of either the diameter, the refractive-index contrast, or the position of the high-index inclusions could double the minimal value of the losses as a function of frequency. Moreover, we demonstrate that, in our case, accurately controlling the position of the doped inclusions along the first ring around the core is more important than having accurate control of their diameter and index contrast. Oddly enough, we also point out that the defects could lead to a decrease of the minimum loss value. Furthermore, we demonstrate that a structure made of inclusions of two different refractive indices could have lower loss than both structures made of only one of these two types of inclusion.",2007,0, 3486,Combining FT-MPI with H2O: Fault-Tolerant MPI Across Administrative Boundaries,"We observe increasing interest in aggregating geographically distributed, heterogeneous resources to perform large scale computations. MPI remains the most popular programming paradigm for such applications; however, as the size of computing environments increases, fault tolerance aspects become critically important. We argue that the fault tolerance model proposed by FT-MPI fits well in geographically distributed environments, even though its current implementation is confined to a single administrative domain. We propose to overcome these limitations by combining FT-MPI with the H2O resource sharing framework.
Our approach allows users to run fault tolerant MPI programs on heterogeneous, geographically distributed shared machines, without sacrificing performance and with minimal involvement of resource providers.",2005,0, 3487,Model-based synthesis of fault trees from Matlab-Simulink models,"We outline a new approach to safety analysis in which concepts of computer HAZOP are fused with the idea of software fault tree analysis to enable a continuous assessment of an evolving programmable design developed in Matlab-Simulink. We also discuss the architecture of a tool that we have developed to support the new method and enable its application in complex environments. We show that the method and the tool enable the integrated hardware and software analysis of a programmable system and that in the course of that analysis they automate and simplify the development of fault trees for the system. Finally, we propose a demonstration of the method and the tool and we outline the experimental platform and aims of that demonstration.",2001,0, 3488,A Markov framework for error control techniques based on selective retransmission in video transmission over wireless channels,"We present a framework, based on Markov models, for the analysis of error control techniques in video transmission over wireless channels. We focus on retransmission-based techniques, which require a feedback channel but also enable adaptive error control. Traditional studies of these methodologies usually consider a uniform stream of data packets. Instead, video transmission poses the non-trivial challenge that the packets have different sizes, and, even more importantly, are incrementally encoded; thus, a carefully tailored model is required. We therefore proceed on two different sides. First, we consider a low-level description of the system, where two main inputs are combined, namely, a video packet generation process and a wireless channel model, both described by Markov chains with a tunable number of states. Second, from a high-level perspective, we represent the whole system evolution with another Markov chain describing the error control process, which can feed the packet generation process back with retransmissions. The framework is able to evaluate hybrid automatic repeat request with selective retransmission, but can also be adapted to study pure automatic repeat request or forward error correction schemes. In this way, we are able to comparatively evaluate different solutions for video transmission, as well as to quantitatively assess their performance trends in a variety of scenarios. Thus, our framework can be used as an effective tool to understand the behavior of error control techniques applied to video transmission over wireless channels, and eventually to identify design guidelines for such systems.",2010,0, 3489,Robustness analysis of soft error accumulation in SRAM-FPGAs using FLIPPER and STAR/RoRA,"We describe a methodology for analyzing the robustness of circuits implemented by SRAM-based FPGAs against the accumulation of soft errors within the configuration memory. A detailed analysis of the fault injection data is presented.",2008,0, 3490,Fault-tolerant incremental diagnosis with limited historical data,"We describe a novel incremental diagnostic system based on a statistical model that is trained from empirical data. The system guides the user by calculating what additional information would be most helpful for the diagnosis.
We show that our diagnostic system can produce satisfactory classification rates, using only small amounts of available background information, such that the need to collect vast quantities of initial training data is reduced. Further, we show that incorporation of inconsistency-checking mechanisms in our diagnostic system reduces the number of incorrect diagnoses caused by erroneous input.",2008,0, 3491,A Delay Fault Model for At-Speed Fault Simulation and Test Generation,"We describe a transition fault model, which is easy to simulate under test sequences that are applied at-speed, and provides a target for the generation of at-speed test sequences. At-speed test application allows a circuit to be tested under its normal operating conditions. However, fault simulation and test generation for the existing fault models become significantly more complex due to the need to handle faulty signal transitions that span multiple clock cycles. The proposed fault model alleviates this shortcoming by introducing unspecified values into the faulty circuit when fault effects may occur. Fault detection potentially occurs when an unspecified value reaches a primary output. Due to the uncertainty that an unspecified value propagated to a primary output will be different from the fault-free value, an inherent requirement in this model is that a fault would be potentially detected multiple times in order to increase the likelihood of detection. Experimental results demonstrate that the model behaves as expected in terms of fault coverage and numbers of detections of target faults. A variation of an n-detection test generation procedure for stuck-at faults is used for generating test sequences under this model",2006,0, 3492,Tests and tolerances for high-performance software-implemented fault detection,"We describe and test a software approach to fault detection in common numerical algorithms. Such result checking or algorithm-based fault tolerance (ABFT) methods may be used, for example, to overcome single-event upsets in computational hardware or to detect errors in complex, high-efficiency implementations of the algorithms. Following earlier work, we use checksum methods to validate results returned by a numerical subroutine operating subject to unpredictable errors in data. We consider common matrix and Fourier algorithms which return results satisfying a necessary condition having a linear form; the checksum tests compliance with this condition. We discuss the theory and practice of setting numerical tolerances to separate errors caused by a fault from those inherent in finite-precision floating-point calculations. We concentrate on comprehensively defining and evaluating tests having various accuracy/computational burden tradeoffs, and we emphasize average-case algorithm behavior rather than using worst-case upper bounds on error.",2003,0, 3493,"A note on inconsistent axioms in Rushby's ""systematic formal verification for fault-tolerant time-triggered algorithms""",We describe some inconsistencies in John Rushby's axiomatization of time-triggered algorithms that he presented in these transactions and that he formally specifies and verifies in the mechanical theorem-prover PVS.
We present corrections for these inconsistencies that have been checked for consistency in PVS,2006,0, 3494,Design and validation of portable communication infrastructure for fault-tolerant cluster middleware,"We describe the communication infrastructure (CI) for our fault-tolerant cluster middleware, which is optimized for two classes of communication: for the applications and for the cluster management middleware. This CI was designed for portability and for efficient operation on top of modern user-level message passing mechanisms. We present a functional fault model for the CI and show how platform-specific faults map to this fault model. Based on this fault model, we have developed a fault injection scheme that is integrated with the CI and is thus portable across different communication technologies. We have used fault injection to validate and evaluate the implementation of the CI itself as well as the cluster management middleware in the presence of communication faults.",2002,0, 3495,A 3-bit soft-decision IC for powerful forward error correction in 10-Gb/s optical communication systems,"We describe the design concept and performance of a 3-bit soft-decision IC, which opens a vista for new terabit-capacity optical communication systems by dramatically improving the capability of forward error correction (FEC). The proposed soft-decision IC is composed of five functional blocks, i.e., a soft-decider, an error filter, a 3-bit encoder, a 3:48 de-multiplexer, and a clock recovery circuit. The biggest challenge was the soft-decision block regenerating the common data using seven deciders with separate thresholds. We employed a novel SiGe BiCMOS process and a custom BGA package made from low-temperature co-fired ceramics to achieve a high sensitivity of 20 mVpp with a wide phase margin of 270° for 12.4-Gb/s nonreturn-to-zero (NRZ) data signals. The error filter and the 3-bit encoder, which are incorporated in the IC, prevent the degradation of the FEC performance due to signal noise or fluctuations. The 3:48 de-multiplexer provides an accessible interface with the FEC encoder/decoder LSI. The clock recovery circuit, based on phase-locked-loop technology, fulfilled the jitter tolerance requirements corresponding to ITU-T G.825, even for 55% duty cycle optical return-to-zero (RZ) signals. The 3-bit soft-decision IC, in cooperation with a block turbo encoder/decoder, achieved a record net coding gain of 10.1 dB with 24.6% redundancy, which is only 0.9 dB away from the Shannon limit for a code rate of 0.8 for a binary symmetric channel.",2005,0, 3496,Service Fault Localization Using Probing Technology,"We describe the proactive probing and probing-on-demand algorithms that are used to monitor the health of a serviceable component system proactively and to localize faulty components intelligently. This application-level performance monitoring mechanism can greatly reduce the overhead of satisfying service-level objectives. When abnormal events occur, we are able to follow relevant dependent paths to localize possible faulty serviceable components based on the outcomes of incremental on-demand probing. In this paper, we continue analyzing the impact of the algorithms in the process of probing and diagnosis",2006,0, 3497,Design of turbo-coded modulation for the AWGN channel with Tikhonov phase error,"We design 1-b/symbol/Hz parallel concatenated turbo-coded modulation (PCTCM) for the additive white Gaussian noise (AWGN) channel with Tikhonov phase error.
Constituent recursive convolutional codes are optimized so that the turbo codes have low error floors and low convergence thresholds. The pairwise error probability based on the maximum-likelihood decoding metric is used to select codes with low error floors. We also present a Gaussian approximation method that accurately predicts convergence thresholds for PCTCM codes on the AWGN/Tikhonov channel. Simulation results show that the selected codes perform within 0.6 dB of constellation-constrained capacity, and have no detectable error floor down to bit-error rates of 10^-6.",2005,0, 3498,A rate-distortion approach to wavelet-based encoding of predictive error frames,"We develop a framework for efficiently encoding predictive error frames (PEF) as part of a rate scalable, wavelet-based video compression algorithm. We investigate the use of rate-distortion analysis to determine the significance of coefficients in the wavelet decomposition. Based on this analysis, we allocate the bit budget assigned to a PEF to the coefficients that yield the largest reduction in distortion, while maintaining the embedded and rate scalable properties of our video compression algorithm",2000,0, 3499,Detection of prescribed error events: application to perpendicular recording,"We discuss an error detection technique geared to a prescribed set of error events. The traditional method of error detection and correction attempts to detect/correct as many erroneous bits as possible within a codeword, irrespective of the pattern of the error events. The proposed approach, on the other hand, is less concerned about the total number of erroneous bits it can detect, but focuses on specific error events of known types. We take perpendicular recording systems as an application example. Distance analysis and simulation can easily identify the dominant error events for the given noise environment and operating density. We develop a class of simple error detection parity check codes that can detect these specific error events. The proposed coding method, when used in conjunction with post-Viterbi error correction processing, provides a substantial performance gain compared to the uncoded perpendicular recording system.",2005,0, 3500,Intelligent fault diagnosis technique based on causality diagram,"We discuss knowledge expression, reasoning and probability computing in the causality diagram, which is developed from the belief network and overcomes some of its shortcomings. A causality diagram model used for system fault diagnosis is brought forward, and the model construction method and reasoning algorithm are also presented. Finally, an application example in the fault diagnosis of a nuclear power plant is given, which shows that the method is effective.",2004,0, 3501,Performance comparison of different motion correction algorithms used in cardiac SPECT imaging,We evaluate the performance and accuracy of different motion correction algorithms used in nuclear cardiac imaging. Three algorithms used in automated tracking techniques are the diverging square (DS); diverging circles (DC); and self-adaptive masking (SM) methods. The observation of the tracking process over the images acquired (specifically for this study with introduced patient motion) in a nuclear cardiac laboratory showed that the DC and SM algorithms perform better when compared with the DS algorithm.
It was shown that the SM method has good tolerance of low signal-to-noise ratio (SNR) images and is also robust in the presence of abrupt motion,2001,0, 3502,The evaluation of corrective reconstruction method for reduced acquisition time and various anatomies of perfusion defect using Channelized Hotelling Observer for myocardial perfusion SPECT,"We evaluated the effect of conventional and corrective image reconstruction methods on reduced acquisition time for detecting a myocardial perfusion (MP) defect in MP SPECT using the Channelized Hotelling Observer (CHO). Using the 4D Extended Cardiac-Torso (XCAT) phantom, we simulated realistic transmural and endocardial MP defects in various locations and sizes. Realistic Tc-99m Sestamibi MP projection data were generated using an analytical projector that included the effects of attenuation (A), collimator-detector response (D) and scatter (S) for various count levels simulating different acquisition times. They were reconstructed using 3D FBP without correction and a 3D OS-EM method with ADS correction followed by a smoothing filter with various cut-off frequencies. The CHO followed by receiver operating characteristic (ROC) methodology was applied to the reconstructed images to evaluate the detectability of an MP defect in each method for different defect anatomies and count levels. Areas under the ROC curve (AUC) were computed to assess the changes in MP defect detection. The results showed that 3D OS-EM with ADS corrections exhibited significantly smaller changes in AUC value and gave overall higher AUC values than FBP at all cut-off frequencies of the post-smoothing filter, count levels and MP defect sizes. The difference in AUC increased towards less smoothed images, where 3D OS-EM with correction was able to provide similar AUC values with a 20-40% reduction in acquisition time compared to FBP. The AUC values for smaller MP defects were lower for both reconstruction methods, with smaller differences. We concluded that 3D OS-EM with ADS corrections provides higher performance in the MP defect detection task. It allows an increased reduction of acquisition time without loss of MP defect detection in MP SPECT compared to the conventional FBP method, especially towards less smoothed images.",2010,0, 3503,Effects of spot and background defects on quantitative data from spotted microarrays,"We apply simulations generating realistic spotted microarray image data to study the effects of broken spots and background defects on the numerical values obtained from such images. Our simulation uses gene, spot morphology, and background properties derived from a published microarray study to generate replicates of images. We generate simulated datasets for several cases with high and low severity of spot breaks and high and low severity of background defects. One hundred replicate images were generated for each case. Each spot on each replicate image was quantified for ""gene expression,"" using a spot-finding technique similar to those found in commercial software. Over the 100 replicates, we computed statistics for each gene. We found that spot defects had little common effect on the numerical values, except to slightly reduce the values for high-expressing genes.
Background defects had significant effects all around, including increasing mean values in low-expressing genes, increasing variance in low- and high-expressing genes, and altering the skewness of the distribution of numerical values, frequently changing negatively-skewed distributions to positively-skewed ones. We conclude that background defects are likely to have the greatest effect on the accuracy of microarray data and should be avoided through experimental protocols enforcing careful handling of slides.",2003,0, 3504,More bang for the bug: An account of 2003's attack trends,"We can find considerable information security debris in the wake of 2003's attack trends and new security flaws. New and serious vulnerabilities were discovered, disclosed, and subsequently exploited in many ways - from simple, straightforward methods to more advanced and innovative exploitation techniques. This paper examines a handful of the more than 3,000 unique vulnerabilities and 115,000 security incidents reported in 2003 (according to CERT Coordination Center's report for quarters one through three) and does its best to predict information security woes for 2004. The author's analysis focuses on the distinguishing characteristics of the 2003 attack trends rather than on specific vulnerabilities or a precisely defined taxonomy of security bugs.",2004,0, 3505,Transmitter Comparison and Unequal Bit Error Probabilities in Coherent QPSK Systems,"We compare different QPSK transmitters and find that some simple configurations can give rise to a significant difference in BER between the two transmitted bits. The optimum receiver filter bandwidth is affected by this phenomenon.",2007,0, 3506,Resilient routing layers and p-cycles: tradeoffs in network fault tolerance,"We compare p-cycles and the recently introduced resilient routing layers as candidate schemes for network-level fault protection. Using computational routing trials, we show that RRL has shorter backup path lengths and more successful double-link fault protection. On the other hand, p-cycles may require less forwarding state. Several tradeoffs of interest for network designers are described.",2005,0, 3507,Distributed multiuser optimization: Algorithms and error analysis,"We consider a class of multiuser optimization problems in which user interactions are seen through congestion cost functions or coupling constraints. Our primary emphasis lies on the convergence and error analysis of distributed algorithms in which users communicate through aggregate user information. Traditional implementations are reliant on strong convexity assumptions, require coordination across users in terms of consistent stepsizes, and often rule out early termination by a group of users. We consider how some of these assumptions can be weakened in the context of projection methods motivated by fixed-point formulations of the problem. Specifically, we focus on (approximate) primal and primal-dual projection algorithms. We analyze the convergence behavior of the methods and provide error bounds in settings with limited coordination across users and regimes where a group of users may prematurely terminate, affecting the convergence point.",2009,0, 3508,Fast algorithm for distortion-based error protection of embedded image codes,"We consider a joint source-channel coding system that protects an embedded bitstream using a finite family of channel codes with error detection and error correction capability.
The performance of this system may be measured by the expected distortion or by the expected number of correctly decoded source bits. Whereas a rate-based optimal solution can be found in linear time, the computation of a distortion-based optimal solution is prohibitive. Under the assumption of the convexity of the operational distortion-rate function of the source coder, we give a lower bound on the expected distortion of a distortion-based optimal solution that depends only on a rate-based optimal solution. Then, we propose a local search (LS) algorithm that starts from a rate-based optimal solution and converges in linear time to a local minimum of the expected distortion. Experimental results for a binary symmetric channel show that our LS algorithm is near optimal, whereas its complexity is much lower than that of the previous best solution.",2005,0, 3509,Designing fault-secure parallel encoders for systematic linear error correcting codes,"We consider the open problem of designing fault-secure parallel encoders for various systematic linear ECC. The main idea relies on generating not only the check bits for error correction but also, separately and in parallel, the check bits for error detection. Then, the latter are compared against error detecting check bits which are regenerated from the error correcting check bits. The detailed design is presented for encoders for CRC codes. The complexity evaluation of FPGA implementations of encoders with various degrees of parallelism shows that their fault-secure versions compare favorably against their unprotected counterparts both with respect to complexity and the maximal frequency of operation. Future research will include the design of FS decoders for CRC codes as well as the generalization of the presented ideas to the design of FS encoders and decoders for other systematic linear ECC like nonbinary BCH codes and Reed-Solomon codes.",2003,0, 3510,Data verification and reconciliation with generalized error-control codes,"We consider the problem of data reconciliation, which we model as two separate multisets of data that must be reconciled with minimum communication. Under this model, we show that the problem of reconciliation is equivalent to a variant of the graph coloring problem and provide consequent upper and lower bounds on the communication complexity of reconciliation. Further, we show by means of an explicit construction that the problem of reconciliation is, under certain general conditions, equivalent to the problem of finding error-correcting codes for a general class of errors. Under this equivalence, reconciling with little communication is linked to codes with large size, and vice versa. We show analogous results for the problem of multiset verification, in which we wish to determine whether two multisets are equal using minimum communication. As a result, a wide body of literature in coding theory may be applied to the problems of reconciliation and verification.",2003,0, 3511,Intelligent probing: A cost-effective approach to fault diagnosis in computer networks,"We consider the use of probing technology for cost-effective fault diagnosis in computer networks. Probes are test transactions that can be actively selected and sent through the network. This work addresses the probing problem using methods from artificial intelligence. We call the resulting approach intelligent probing. The probes are selected by reasoning about the interactions between the probe paths.
Although finding the optimal probe set is prohibitively expensive for large networks, we implement algorithms that find near-optimal probe sets in linear time. In the diagnosis phase, we use a Bayesian network approach and a local-inference approximation scheme that avoids the intractability of exact inference for large networks. Our results show that the quality of this approximate inference degrades gracefully under increasing uncertainty and improves as the quality of the probe set increases.",2002,0, 3512,Real-time unequal error protection algorithms for progressive image transmission,"We consider unequal error protection strategies for the efficient progressive transmission of embedded image codes over noisy channels. In progressive transmission, the reconstruction quality is important not only at the target transmission rate but also at the intermediate rates. An adequate error protection strategy may, thus, consist of optimizing the average performance over the set of intermediate rates. The performance can be the expected number of correctly decoded source bits or the expected distortion. For the rate-based performance, we prove some interesting properties of an optimal solution and give an optimal linear-time algorithm to compute it. For the distortion-based performance, we propose an efficient linear-time local search algorithm. For a binary symmetric channel and two state-of-the-art source coders (SPIHT and JPEG2000), we compare the progressive ability of our proposed solutions to that of the strategies that optimize the end-to-end performance of the system. Experimental results showed that the proposed solutions had a slightly worse performance at the target transmission rate and a better performance at most of the intermediate rates, especially at the lowest ones.",2003,0, 3513,Experience with a Concurrency Bugs Benchmark,"We describe a benchmark of publicly-available multi-threaded programs with documented bugs in them. This project was initiated a few years ago with the goal of helping research groups in the fields of concurrent testing and debugging to develop tools and algorithms that improve the quality of concurrent programs. We present a survey of usage of the benchmark, concluding that the benchmark had an impact on the research in the field of testing and debugging of concurrent programs. We also present new possible directions to foster a discussion about new goals to be set for this initiative.",2008,0, 3514,Correction of Location and Orientation Errors in Electromagnetic Motion Tracking,"We describe a method for calibrating an electromagnetic motion tracking device. Algorithms for correcting both location and orientation data are presented. In particular, we use a method for interpolating rotation corrections that has not previously been used in this context. This method, unlike previous methods, is rooted in the geometry of the space of rotations. This interpolation method is used in conjunction with Delaunay tetrahedralization to enable correction based on scattered data samples. We present measurements that support the assumption that neither location nor orientation errors are dependent on sensor orientation. We give results showing large improvements in both location and orientation errors.
The methods are shown to impose a minimal computational burden.",2007,0, 3515,Fast Single-Turn Sensitive Stator Interturn Fault Detection of Induction Machines Based on Positive- and Negative-Sequence Third Harmonic Components of Line Currents,"Unambiguous detection of stator interturn faults in induction machines at their incipient stage, i.e., faults involving only a few turns, has recently received great attention. Traditionally, interturn faults are detected using negative-sequence current and impedance. However, their effectiveness under supply imbalance conditions is questionable. Recently, the line-current third harmonic (±3f) has also been used in an attempt to achieve this goal. Nevertheless, issues such as inherent structural asymmetry and voltage imbalance also influence the +3f. In this paper, the positive- and negative-sequence third harmonics (±3f) of line current under different operating conditions have been explored by combining space and time harmonics. The suggested fault signature was obtained by removing residual components from the tested quantities. Simulation and experimental results using 1 s of data indicate that the proposed ±3f signatures are capable of very effectively detecting even a single-turn fault, and of distinguishing it from voltage imbalance and structural asymmetry.",2010,0,5244 3516,Clues for Modeling and Diagnosing Open Faults with Considering Adjacent Lines,"Under modern manufacturing technologies, the open defect is one of the significant issues in maintaining the reliability of DSM circuits. However, models and techniques for the test and diagnosis of open faults have not been established yet. In this paper, we give an important clue for modeling an open fault by considering the effects of adjacent lines. Firstly, we use computer simulations to analyze the defective behaviors of a line with an open defect. From the simulation results, we propose a new open fault model that is excited depending on the logic values at the adjacent lines assigned by a test. Next, we propose a diagnosis method that uses the pass/fail information to deduce the candidate open fault. Finally, experimental results show that the proposed method is able to diagnose open faults with good resolution. It takes about 6 minutes to diagnose an open fault in a large circuit (2M gates).",2007,0, 3517,Unequal Error Protection of Multiple Programs Based on Length-Variable Transport Stream Packets,"Unequal error protection (UEP), which provides important data with more protection, has been proven to produce better quality in image communication. Previous UEP schemes were mostly proposed for single-image or single-program scenarios; few have been developed for multiple programs. Inspired by the MPEG-2 transport stream (TS), in this paper we transform the conventional TS packets to be length-variable and propose a new UEP scheme, which is suitable for multiple-program scenarios. A theoretical model for this scheme is built in this paper, and experimental results also demonstrate the effectiveness of the scheme.",2009,0, 3518,An Intelligent Rule-Based System For Fault Detection And Diagnosis On A Model-Based Actuator Device,"Unmanned aerial vehicles, due to their large operational potential, may be required to travel over long distances and through various weather conditions, which might lead to potential degradation or even failure of their electrical and/or mechanical actuator parts.
Control in trajectory derivations and path following processes is highly dependent on these actuators and sensors. Depending on their efficiency, the outcome will be a near-optimum solution to every problem. Consequently, even a minor failure can degrade the performance of the process and might drive the system to an uncontrollable state. Therefore, an efficient mechanism should be capable of making these faults detectable and acting accordingly, so that a consistent actuator performance index, whether qualitative or quantitative, is continuously maintained. In this paper, potential electro-mechanical actuator failures are firstly detected and then diagnosed for the application of unmanned aerial vehicles. It includes several scenarios of actuator faults and results which demonstrate the fault conditions and the effectiveness of the Kalman-based detection and diagnosis algorithms. It involves a diagnosis strategy to minimize errors produced due to malfunction in components or inaccuracies in the model. The residuals used are generated using empirical actuator models which are chosen under specific operating regimes.",2007,0, 3519,Analysis and management system for power system fault information based on intranet network,"Using Internet network techniques, database techniques and an object-oriented design method, a comprehensive analysis and management system for power system fault information is developed, and a feasible network connection project for fault information connection is put forward. The fault analysis functions, e.g., fault diagnosis and location, supervision and judgment of equipment operation, harmonic analysis and waveform treatment, etc., are developed. A fault information management system is also set up, and easy Web-based inquiry and browsing are provided. The practical application shows that the proposed system makes the information widely shared and promotes operation automation.",2002,0, 3520,Coseismic fault rupture detection and slip measurement by ASAR precise correlation using coherence maximization: application to a north-south blind fault in the vicinity of Bam (Iran),"Using the phase differences between satellite radar images recorded before and after an earthquake, interferometry allows mapping the projection along the line of sight (LOS) of the ground displacement. Acquisitions along multiple LOS theoretically allow deriving the complete deformation vector; however, due to the orbit inclination of current radar satellites, precision is poor in the north-south direction. Moreover, large deformation gradients (e.g., fault ruptures) prevent phase identification and unwrapping and cannot be measured directly by interferometry. Subpixel correlation techniques using the amplitude of the radar images allow measuring such gradients, both in slant-range and in azimuth. In this letter, we use a correlation technique based on the maximization of coherence for a radar pair in interferometric conditions, using the complex nature of the data. In the case of highly coherent areas, this technique allows estimating the relative distortion between images. Applied to ASAR images acquired before and after the December 26, 2003 Bam earthquake (Iran), we show that the near-field information retrieved by this technique is useful to constrain geophysical models.
In particular, we confirm that the major gradients of ground displacement do not occur across the known fault scarp but approximately 3 km west of it, and we also estimate directly the amplitude of right-lateral slip, while retrieving this value from interferometry requires passing through the use of a model for the earthquake fault and slip.",2006,0, 3521,Vacuity Analysis by Fault Simulation,"Vacuum cleaning is a mandatory process when an implementation is verified with respect to a specification modeled by means of formal properties. In fact, vacuum cleaning looks for properties that, passing vacuously (e.g., an implication whose antecedent is always false), may lead verification engineers to a false sense of safety. Current approaches to vacuum cleaning generally exploit formal methods to provide an interesting witness proving that a property does not pass vacuously. However, such approaches are as complex as model checking, and they require defining and model checking further properties, thus increasing the verification time. This paper proposes an alternative approach, based on fault simulation, that requires neither the definition of new properties nor the use of model checking. Experimental results show the high efficiency of this approach.",2008,0, 3522,Reliability Growth of Open Source Software Using Defect Analysis,We examine two active and popular open source products to observe whether or not open source software has a different defect arrival rate than software developed in-house. The evaluation used two common reliability growth models, concave and S-shaped, and this analysis shows that open source has a different profile of defect arrival. Further investigation indicated that low-level design instability is a possible explanation of the different defect growth profile.,2008,0, 3523,Execution model for outsourced corrective maintenance,"We focus on corrective maintenance carried out in the outsourced mode under strict service level agreements and present the characteristics of the problem and the activities performed. We detail the information requirements for various maintenance services, such as emergency maintenance, production support, and corrective maintenance. We present the concept of a system execution model with its constituent nodes and arcs and present the steps to build the same. We present a case study of a large commercial outsourcing project to demonstrate how the execution model can help in making quick decisions that reduce the turnaround times of corrective maintenance requests.",2005,0, 3524,Consistency Error Modeling-based Localization in Sensor Networks,"We have developed a new error modeling and optimization-based localization approach for sensor networks in the presence of distance measurement noise. The approach is solely based on the concept of consistency. The error models are constructed using non-parametric statistical techniques; they not only indicate the most likely error, but also provide the likelihood distribution of particular errors occurring. The models are evaluated using learn-and-test techniques and serve as the objective functions for the task of localization. The localization problem is formulated as a task of maximizing consistency between measurements and calculated distances. We evaluated the approach in (i) both GPS-based and GPS-less scenarios; (ii) 1-D, 2-D and 3-D spaces, on sets of acoustic ranging-based distance measurements recorded by deployed sensor networks.
The experimental evaluation indicates that localization errors of only a few centimeters are consistently achieved when the average and median distance measurement errors are more than a meter, even when the nodes have only a few distance measurements. The relative performance in terms of location accuracy compares favorably with respect to several state-of-the-art localization approaches. Finally, several insightful observations about the required conditions for accurate localization are deduced by analyzing the experimental results.",2006,0, 3525,"Argus: Low-Cost, Comprehensive Error Detection in Simple Cores","We have developed Argus, a novel approach for providing low-cost, comprehensive error detection for simple cores. The key to Argus is that the operation of a von Neumann core consists of four fundamental tasks - control flow, dataflow, computation, and memory access - that can be checked separately. We prove that Argus can detect any error by observing whether any of these tasks are performed incorrectly. We describe a prototype implementation, Argus-1, based on a single-issue, 4-stage, in-order processor to illustrate the potential of our approach. Experiments show that Argus-1 detects transient and permanent errors in simple cores with much lower impact on performance (<4% average overhead) and chip area (<17% overhead) than previous techniques.",2007,0,7344 3526,Identification of the atomic scale defects involved in radiation damage in HfO2 based MOS devices,"We have identified the structure of three atomic scale defects which almost certainly play important roles in radiation damage in hafnium oxide based metal oxide silicon technology. We find that electron trapping centers dominate the HfO2 radiation response. We find two radiation induced trapped electron centers in the HfO2: an O2- coupled to a hafnium ion and an HfO2 oxygen vacancy center which is likely both an electron trap and a hole trap. We find that, under some circumstances, Si/dielectric interface traps similar to the Si/SiO2 Pb centers are generated by irradiation. Our results show that there are very great atomic scale differences between radiation damage in conventional Si/SiO2 devices and the new Si/dielectric devices based upon HfO2.",2005,0, 3527,The effect of nonlinear signal transformations on bias errors in elastography,"We have reported several artifacts in elastography (1991). These include mechanical artifacts, such as stress concentration, and signal processing artifacts, such as zebras, which are caused by bias errors incurred during the estimation of the peak of correlation functions using a curve-fitting method. We investigate the bias errors and show that bias errors in curve-fitting methods are substantially increased because of nonlinear operations on the echo signals that reduce other errors. We also show that, for typical sampling rates, the bias errors can be ignored in the absence of these nonlinear operations.",2000,0, 3528,Effect of atmospheric correction for different land use on Landsat 7 ETM+ satellite imagery,"Various changes in the atmosphere of the earth and different illuminations resulting from rough terrain change the spectral reflection values of satellite images. Studies making use of real reflection values belonging to the object will provide more accurate data.
The atmospheric correction process to be applied in this study is used to prevent the negative effects resulting from the atmosphere and different illuminations, in order to represent the reflections from the ground on the image in the best way possible. Using atmospheric correction, differentiations in reflection values sensed by different sensors or platforms resulting from the atmosphere and some technical problems will be prevented. In this study, the aim is to determine the changes in the spectral reflection values concerning land use following the atmospheric correction to be applied on Landsat image data. For this reason, atmospheric correction was applied on Landsat image data. The relations of each band with each other before and after the correction were determined. The changes between spectral reflection values of all bands before and after correction regarding three different land uses, namely forest, agricultural area and residential area, were examined visually and statistically.",2009,0, 3529,Order Bispectrum Based Gearbox Fault Diagnosis During Speed-up Process,"Fault detection in varying-speed machinery is more difficult due to non-stationary vibration. Therefore, most conventional signal processing methods, which assume time invariance and are carried out over constant time intervals, are frequently unable to provide meaningful results. In order to process non-stationary vibration signals, such as speed-up or speed-down vibration signals, effectively, the order bispectrum analysis technique is presented. This new method combines the computed order tracking technique with bispectrum analysis. Firstly, the vibration signal is sampled at constant time increments, and software is then used to resample the data at constant angle increments. Therefore, the time-domain transient signal is converted into an angle-domain stationary one. In the end, the resampled signals are processed by the bispectrum analysis method. The experimental results show that order bispectrum analysis can effectively diagnose the faults of the gear crack",2006,0, 3530,Populating a Release History Database from version control and bug tracking systems,"Version control and bug tracking systems contain large amounts of historical information that can give deep insight into the evolution of a software project. Unfortunately, these systems provide only insufficient support for a detailed analysis of software evolution aspects. We address this problem and introduce an approach for populating a release history database that combines version data with bug tracking data and adds missing data not covered by version control systems such as merge points. Then simple queries can be applied to the structured data to obtain meaningful views showing the evolution of a software project. Such views enable more accurate reasoning of evolutionary aspects and facilitate the anticipation of software evolution. We demonstrate our approach on the large open source project Mozilla that offers great opportunities to compare results and validate our approach.",2003,0, 3531,Software bugs and evolution: a visual approach to uncover their relationship,"Versioning systems such as CVS exhibit a large potential to investigate and understand the evolution of large software systems. Bug reporting systems such as Bugzilla help to understand which parts of the system are affected by problems. In this article, we present a novel visual approach to uncover the relationship between evolving software and the way it is affected by software bugs.
By visually putting the two aspects close to each other, we can characterize the evolution of software artifacts. We validate our approach on 3 very large open source software systems",2006,0, 3532,Fault tolerance analysis of Extended Pruned Vertically Stacked Optical Banyan networks with link failures and variable group size,"Vertically stacked optical banyan (VSOB) networks are attractive for serving as optical switching systems due to the good properties of banyan network structures (such as the small depth and self-routing capability), and it is expected that using the VSOB structure will lead to a better fault-tolerant capability because it is composed of multiple identical copies of banyan networks. In the Extended Pruned Vertically Stacked Optical Banyan (EP-VSOB) network, the number of pruned planes has always been considered as (N), and a few extra planes (regular banyan) have been added to these pruned planes. In this paper, we present the results of a blocking analysis of the EP-VSOB network incorporating link-failures, in which the number of pruned planes can be 2^x, where 0 ≤ x ≤ log2 N, in addition to the variable extra planes. This generalization helps us trade off between different constraints and performance metrics. Our simulation results show that for some given performance requirements (e.g. cost, speed or blocking probability), we can choose a network that has a lower switch count compared to N-plane pruned crosstalk-free optical banyan networks. Our result also reveals the fact that by accepting a small link-failure probability, the blocking behavior of the EP-VSOB network is very similar to that of a fault-free one, which demonstrates our expectation of the good fault-tolerant property of VSOB networks. Simulation results also show that the blocking probability does not always increase with the increase of link-failures; blocking probability decreases for a certain range of link-failures, and then increases again.",2010,0, 3533,Reconstruction corrections in advanced neutron tomography algorithms,"Volume reconstruction methods operate by splitting the three-dimensional (3-D) reconstruction space into two-dimensional (2-D) data slices. Any of the several available algorithms (like FBP) can be used to reconstruct 2-D cross-sectional images, which can then be stacked to provide a 3-D representation of the internal volume of the object. The reconstruction process takes advantage of the improvement in raw data quality: the data pre-processing (that involves digital filtering) ensures that the reconstruction from low-contrast images can be performed with good detail, allowing a better signal-to-noise ratio. This work describes the software treatment of data coming from the tomography system installed at the TRIGA reactor in the ENEA's Casaccia Research Centre, Rome, Italy.",2005,0, 3534,Study on the error evaluation method of calculated energy production by WAsP when the terrain gets complex,"WAsP is not able to simulate the flow of the wind field in complex terrain because of model limitations, which is one of the main error sources of complex terrain wind resource assessment. This paper studies the error of calculated energy production by WAsP when it is applied to complex terrain. An error evaluation method is presented and verified by the actual production data from an existing wind farm.
The result shows that the presented error evaluation method is very effective and has a good practical value.",2009,0, 3535,Lazy garbage collection of recovery state for fault-tolerant distributed shared memory,"We address the problem of garbage collection in a single-failure fault-tolerant home-based lazy release consistency (HLRC) distributed shared-memory (DSM) system based on independent checkpointing and logging. Our solution uses laziness in garbage collection and exploits consistency constraints of the HLRC memory model for low overhead and scalability. We prove safe bounds on the state that must be retained in the system to guarantee correct recovery after a failure. We devise two algorithms for garbage collection of checkpoints and logs, checkpoint garbage collection (CGC), and lazy log trimming (LLT). The proposed approach targets large-scale distributed shared-memory computing on local-area clusters of computers. The challenge lies in controlling the size of the logs and the number of checkpoints without global synchronization while tolerating transient disruptions in communication. Evaluation results for real applications show that it effectively bounds the number of past checkpoints to be retained and the size of the logs in stable storage",2002,0,6119 3536,Energy-efficient Interleaving for Error Recovery in Broadcast Networks,"We analyze the performance of MAC-layer Reed-Solomon error recovery in the CDMA2000 1xEV-DO broadcast and multicast services (BCMCS) environment, with respect to the size of the error control block (ECB) and the air-channel condition, and establish the relationship between ECB size, error-recovery capacity and energy consumption. Real-time traffic, such as voice and video streaming, is very sensitive to delay, but can stand a certain level of packet loss. We therefore propose an energy-efficient size of the ECB to reduce the average energy consumption during error recovery while minimizing the reduction in service quality for real-time multimedia applications. Extensive simulation results suggest that a significant amount of energy can be saved with negligible performance degradation by selecting the appropriate ECB size for the bit error-rate of the forward traffic channel, instead of always choosing the largest possible ECB, with the sole aim of increasing error recovery performance",2006,0, 3537,Analysis of Timing Error Detectors for Orthogonal Space-Time Block Codes,"We analyze the properties of a class of low complexity timing error detectors for the purpose of timing error tracking in orthogonal space-time block coding receivers. For symmetrical signal constellations, under the assumptions of ideal data decisions and channel knowledge at the receiver, expressions for the S-curve, estimation error variance and the detector signal-to-noise ratio are derived. Simulations are used to confirm the analytical results and to evaluate the effects of data decision errors on the estimator properties. Symbol-error-rate performance is evaluated for a system operating in a frequency-flat Rayleigh fading environment, where the timing synchronization loss is found to be less than 0.3 dB. 
In addition to receivers with perfect channel state information, results for pilot-based channel estimation are included in order to examine the effects of channel estimation errors on timing synchronization.",2007,0, 3538,The comparison of pixel size and atmospheric correction method on matched filter detection for a hyperspectral image,"Two atmospheric correction methods are used to obtain the reflectance for a hyperspectral data image resampled to varying spatial resolution. The physics-based FLAASH approach as well as the in-scene based QUAC method retrieve the reflectance spectra for the scene, and the ability to use the results to detect materials of interest in the image is determined. Using a spectral matched filter to score the results, both FLAASH and QUAC perform well at matching the ground truth spectrum of a bright meter-sized material at ground sampling distances of 2.4-24 m. For a dark material, QUAC performance degrades with lower resolutions.",2009,0, 3539,Taxonomy of faults in wireless networks,"Two critical issues in wireless networks are fault management and robustness. Addressing these issues makes the network fault tolerant and secure, and also provides better reliability and availability of resources. While researchers have focused on network management, fault detection and recovery, a classification of faults unique to wireless networks is needed. In this paper, we attempt to present a specific taxonomy of faults in wireless networks. Although some faults are similar to those encountered in wired networks, others are characteristic of wireless environments due to the nature of the medium and the mobility of the nodes.",2005,0, 3540,Finding Errors in Interoperating Components,"Two or more components (e.g., objects, modules, or programs) interoperate when they exchange data, such as XML data. Currently, there is no approach that can detect a situation at compile time when one component modifies XML data so that it becomes incompatible for use by other components, delaying discovery of errors to runtime. Our solution, a verifier for interoperating components for finding logic faults (Viola), builds abstract programs from the source code of components that exchange XML data. Viola symbolically executes these abstract programs, thereby obtaining approximate specifications of the data that would be output by these components. The computed and expected specifications are compared to find errors in XML data exchanges between components. We describe our approach, implementation, and give our error checking algorithm. We used Viola on open source and commercial systems and discovered errors that were not detected during their design and testing.",2007,0, 3541,Research on application of virtual instrument and grey theory in the fault diagnostic system,"A virtual instrument is an integration of the latest PC technology, advanced testing techniques and strong software packages, and the development of virtual instrument systems is the trend of modern technology in the domain of automatic test and fault diagnosis equipment. A fault diagnostic system for shipboard chemical defense equipment is developed, applying virtual instrument technology, grey system theory, and database techniques, in order to entirely update the means of equipment testing. Modularization and universalization are proposed in its database-based design concept, realizing the design of the software and hardware.
The ODBC technique is applied for the interconnection of databases to ensure the generality and flexibility of the system. The system made the best use of the virtual instrument platform and the grey diagnosis method, broke through conventional check-diagnosis patterns for warship chemical defense equipment, and solved the problems of state prediction and trouble-mode recognition of warship chemical defense equipment. It has been proved by experiments that the system has the merits of both high accuracy and economical practicability. Also it can reduce the application development cycle and cost.",2007,0, 3542,Application of ESKD supported virtual instrument in distance engine fault diagnosis,"Virtual instruments are helpful for distance fault diagnosis and fault exclusion in large-scale equipment such as engines. In order to achieve automatic distance control of engine fault diagnosis, a virtual instrument fault diagnosis method based on an Expert System supported Knowledge Database (ESKD) with a double-database cooperating mechanism was presented. Internet technology was also used for distance interactive operation and transmission of measurement data. Then an intelligent engine fault diagnosis and exclusion system was designed. The specific experiment on engine fault diagnosis shows that the system has high accuracy and practicality, and is easy to operate and use.",2010,0, 3543,An extreme value injection approach with reduced learning time to make MLNs multiple-weight-fault tolerant,"We propose an efficient method for making multilayered neural networks (MLN) fault-tolerant to all multiple weight faults in an interval by intentionally injecting two extreme values in the interval in a learning phase. The degree of fault-tolerance to a multiple weight fault is measured by the number of essential multiple links. First, we analytically discuss how to choose effectively the multiple links to be injected, and present a learning algorithm for making MLNs fault tolerant to all multiple (i.e., simultaneous) faults in the interval defined by two multi-dimensional extreme points. Then it is shown that after the learning algorithm successfully finishes, MLNs become fault tolerant to all multiple faults in the interval. The time in a weight modification cycle is almost linear in the fault multiplicity. The simulation results show that the computing time drastically reduces as the multiplicity increases.",2002,0, 3544,Error resilient transcoding of Scalable Video bitstreams,"We propose in this paper a novel error resilient transcoding scheme that can be placed at the boundary between wired and wireless networks via heterogeneous network links. This error resilient transcoder shall seamlessly complement the standard scalable video coding (SVC) bitstream to offer additional error resilient adaptation capability for receiving devices. The novel error resilient transcoding scheme consists of three different modules; each is designed to meet various levels of complexity need. The three modules are all based on the loss-aware rate-distortion optimization (LA-RDO) mode decision algorithm we have previously developed for SVC. However, each individual module can be tailored to different complexity requirements depending on whether and how the LA-RDO mode decision is implemented. Another innovation of this approach is the design of a fast rate control algorithm in order to maintain consistent bitrates between input and output of the transcoder.
This rate control algorithm only needs picture-level bit information for training target quantization parameters. Simulation results demonstrate that, compared with standard SVC, the proposed approach is able to achieve up to 4 dB gain for the enhancement layer video and up to 1 dB gain for the base layer video.",2008,0, 3545,Optimal fault-tolerant routing scheme for generalized hypercube,"We propose the node path vector (NPV) to capture complete shortest routing information for a generalized hypercube system. We also introduce the concept of a relay node technique to reduce the computation complexity in obtaining NPV. An optimal fault-tolerant routing scheme (OFTRS) is further proposed to derive an optimal or sub-optimal routing path for any communication pair in a generalized hypercube system. Compared to previous work, OFTRS does not omit any routing information for optimal and sub-optimal paths even in a generalized hypercube system with a large number of faulty nodes and links, while the previous schemes can potentially omit 60% of routing paths. Thus it considerably improves the quality of fault-tolerant routing. In addition, our proposed scheme is distributed and relies only on non-faulty neighboring nodes, thus it has high applicability. Finally, the algorithm guarantees to route through the optimal or sub-optimal path as long as a path between the source-destination pair exists.",2005,0, 3546,Improved error concealment algorithms based on H.264/AVC non-normative decoder,"We propose several improved error concealment (EC) algorithms based on the H.264/AVC non-normative decoder. The major differences are that motion compensated EC is introduced for intra frames, whereas spatial EC is introduced for inter frames. As for the EC of intra frames, scene change detection, motion activity detection and MV retrieval are hierarchically performed to decide whether spatial or temporal information is to be used. As for the EC of inter frames, scene change is also detected to avoid merging the scenes from different video shots. Therefore, the main idea of the proposed algorithms is that both spatial and temporal correlations are utilized for the EC of intra and inter frames. Both subjective and objective simulations under Internet conditions show that the proposed algorithm greatly outperforms that in the H.264/AVC non-normative decoder.",2004,0, 3547,BOAs: backoff adaptive scheme for task allocation with fault tolerance and uncertainty management,"We propose the backoff adaptive scheme (BOAs) as a new technique for the automatic allocation of tasks amongst a team of heterogeneous mobile robots. It is an optimal, decentralized decision making scheme that utilizes explicit communication between the agents. A structured and unified framework is also proposed for task specification. This scheme is fault tolerant (to robot malfunctions) and allows for uncertainty in the nature of task specification in terms of the actual number of robots required. Team demography may change without the need for the respecification of tasks. The adaptive feature in BOAs further improves the flexibility of the team. Realistic simulations are carried out to verify the effectiveness of the scheme.",2004,0, 3548,The robust middleware approach for transparent and systematic fault tolerance in parallel and distributed systems,"We propose the robust middleware approach to transparent fault tolerance in parallel and distributed systems.
The proposed approach inserts a robust middleware between algorithms/programs and system architecture/hardware. With the robust middleware, hardware faults are transparent to algorithms/programs so that ordinary algorithms/programs developed for fault-free networks can run on faulty parallel/distributed systems without modifications. Moreover, the robust middleware automatically adds fault tolerance capability to ordinary algorithms/programs so that no hardware redundancy or reconfiguration capability is required and no assumption is made about the availability of a complete subnetwork (at a lower dimension or smaller size). We also propose nomadic agent multithreaded programming as a novel fault-aware programming paradigm that is independent of network topologies and fault patterns. Nomadic agent multithreaded programming is adaptive to fault/traffic/workload patterns, and can take advantage of various components of the robust middleware, including the fault tolerance features and multiple embeddings, without relying on specialized robust algorithms",2003,0, 3549,The Unbounded-Error Communication Complexity of Symmetric Functions,"We prove an essentially tight lower bound on the unbounded-error communication complexity of every symmetric function, i.e., f(x,y)=D(|x ∧ y|), where D:{0,1,...,n}→{0,1} is a given predicate and x,y range over {0,1}^n. Specifically, we show that the communication complexity of f is between Θ(k/log^5 n) and Θ(k log n), where k is the number of value changes of D in {0,1,...,n}. The unbounded-error model is the most powerful of the basic models of communication (both classical and quantum), and proving lower bounds in it is a considerable challenge. The only previous nontrivial lower bounds for explicit functions in this model appear in the groundbreaking work of Forster (2001) and its extensions. Our proof is built around two novel ideas. First, we show that a given predicate D gives rise to a rapidly mixing random walk on Z_2^n, which allows us to reduce the problem to communication lower bounds for typical predicates. Second, we use Paturi's approximation lower bounds (1992), suitably generalized here to clusters of real nodes in [0,n] and interpreted in their dual form, to prove that a typical predicate behaves analogously to PARITY with respect to a smooth distribution on the inputs.",2008,0, 3550,High-speed error correcting code LSI with throughput of 5 to 48 Gbps,"We proved that the hardware implementation of the proposed code and the new packet synchronization system was effectively realized by using a unique circuit configuration. A three-dimensional size-five coder and decoding-synchronization system was implemented on FPGA. The developed FPGA was applied to a high-speed MPEG communication device, which can transmit a movie signal of 20 Mbps.",2003,0, 3551,On undetectable faults in partial scan circuits,We provide a definition of undetectable faults in partial scan circuits under a test application scheme where a test consists of primary input vectors applied at-speed between scan operations. We also provide sufficient conditions for a fault to be undetectable under this test application scheme.
We present experimental results on finite-state machine benchmarks to demonstrate the effectiveness of these conditions in identifying undetectable faults.,2002,0, 3552,"Impact of Channel Errors on Decentralized Detection Performance of Wireless Sensor Networks: A Study of Binary Modulations, Rayleigh-Fading and Nonfading Channels, and Fusion-Combiners","We provide new results on the performance of wireless sensor networks in which a number of identical sensor nodes transmit their binary decisions, regarding a binary hypothesis, to a fusion center (FC) by means of a modulation scheme. Each link between a sensor and the fusion center is modeled as independent and identically distributed (i.i.d.), either as slow Rayleigh-fading or as nonfading. The FC employs a counting rule (CR) or another combining scheme to make a final decision. The main results obtained are the following: 1) in slow fading, a) the correctness of using an average bit error rate of a link, averaged with respect to the fading distribution, for assessing the performance of a CR and b) with proper choice of threshold, on/off keying (OOK), in addition to energy saving, exhibits asymptotic (large number of sensors) performance comparable to that of FSK; and 2) for a large number of sensors, a) for slow fading and a counting rule, given a minimum sensor-to-fusion link SNR, we determine a minimum sensor decision quality, in order to achieve zero asymptotic errors and b) for Rayleigh-fading and nonfading channels and PSK (FSK) modulation, using large deviation theory, we derive asymptotic error exponents of counting rule, maximal ratio (square law), and equal gain combiners.",2008,0, 3553,Error analysis of 3D motion estimation algorithms in the differential case,"We put forth in this paper a geometrically motivated 3D (three-dimensional) motion error analysis, which is capable of supporting investigation of global effects such as inherent ambiguities. The error expression that we derive allows us to predict the exact conditions likely to cause ambiguities and how these ambiguities vary with motion types such as lateral or forward motion. Our formulation, though geometrically motivated, is employed to model the effect of noise.",2003,0, 3554,A general design technique for fault diagnostic systems,"We put forward a design method for fault diagnostic systems (FDSs) by proposing a fault model and using the incremental hybrid learning algorithm which tightly combines symbolic learning and neural networks. It is capable of overcoming several shortcomings in existing diagnostic systems, such as the lack of universality, the imbalance in the use of fault prior knowledge and dynamic data, and the dilemma of stability and plasticity. Experiments showed the FDS implemented by this kind of method had a good diagnostic ability",2001,0, 3555,A fault tolerant web services architecture based on reflection,"Web services have been enjoying great popularity in recent years. The high usability of the Web service is becoming a new focus for research. How to provide generic fault tolerant mechanisms in Web services is worth researching. A fault tolerant web services architecture named Fault Tolerant Web Services is proposed in the article. In the architecture, the fault tolerant mechanisms are transparent and easy to use. The users can tailor their own fault tolerance mechanisms, and the application programmers almost needn't care about the fault tolerant mechanisms. The architecture is set forth in detail in the article.
The workflow of the system is described by three states of a fault tolerant Web service.",2008,0, 3556,Comparing Web Services Performance and Recovery in the Presence of Faults,"Web-services are supported by a complex software infrastructure that must ensure high performance and availability to the client applications. The Web services industry holds a well-established platform for performance benchmarking (e.g., TPC-App and SPEC jAppServer2004 benchmarks). In addition, several studies have been published recently by main vendors focusing on web services performance. However, as peak performance evaluation has been the main focus, the characterization of the impact of faults in such systems has been largely disregarded. This paper proposes an approach for the evaluation and comparison of performance and recovery time in web services infrastructures. This approach is based on fault injection and is illustrated through a concrete example of benchmarking three alternative software solutions for web services deployment.",2007,0, 3557,Automatic generation of diagnostic expert systems from fault trees,"When a fault tolerant computer-based system fails, diagnosis and repair must be performed to bring the system back to an operational state. The use of fault tolerance design implies that several components or subsystems may have failed, and that perhaps many of these faults have been tolerated before the system actually succumbed to failure. Diagnosis procedures are then needed to determine the most likely source of failure and to guide repair actions. Expert systems are often used to guide diagnostics, but the derivation of an expert system requires knowledge (i.e., a conceptual model) of failure symptoms. In this paper, we consider the problem of diagnosing a system for which there may be little experience, given that it might be a one-of-a-kind system or because access to the system may be limited. We conjecture that the same fault tree model used to aid in the design and analysis of the system can provide the conceptual model of system component interactions needed in order to define a diagnostic process. We explore the use of a fault tree model (along with the probabilities of failure for the basic events) along with partial knowledge of the state of the system (i.e., the system has failed, and perhaps some components are known to be operational or failed) to produce a diagnostic aid.",2003,0, 3558,Error Reporting Logic,"When a system fails to meet its specification, it can be difficult to find the source of the error and determine how to fix it. In this paper, we introduce error reporting logic (ERL), an algorithm and tool that produces succinct explanations for why a target system violates a specification expressed in first order predicate logic. ERL analyzes the specification to determine which parts contributed to the failure, and it displays an error message specific to those parts. Additionally, ERL uses a heuristic to determine which object in the target system is responsible for the error. Results from a small user study suggest that the combination of a more focused error message and a responsible object for the error helps users to find the failure in the system more effectively.
The study also yielded insights into how the users find and fix errors that may guide future research.",2008,0, 3559,Surviving Attacks and Intrusions: What can we Learn from Fault Models,"When designing or analyzing applications or infrastructures with high reliability, safety, security, or survivability demands, the fundamental questions are: what is required of the application and can the infrastructure support these requirements. In the design and analysis of fault-tolerant systems, fault models have served us well to describe the theoretical limits. But with the inclusion of malicious acts, the direct application of fault models has exposed limited applicability. However, we can take advantage of the powerful fault models if we defer their direct application from the events that lead to faults, that is, the fault causes, and instead focus on the effects. This way one can avoid questions referring to the meaning of fault models in the context of previously unsuitable faults like Trojan horses or Denial of Service (DoS) attacks. Instead, we can use fault models at the level of abstraction where the application maps onto the infrastructure. In this paper fault models are discussed in the context of system survivability and malicious acts. It is shown that these models can be used to balance the demands put on the application and the capabilities of the underlying infrastructure. Active and imposed fault descriptions are defined that allow matching the mechanisms that provide survivability to the application with the infrastructure-imposed limitations. By defining a system as a collection of functionalities, individual functionalities and their associated fault descriptions can be analyzed in isolation.",2009,0, 3560,fMRI Activation during Observation of Others' Reach Errors,"When exposed to novel dynamical conditions (e.g., externally imposed forces), neurologically intact subjects easily adjust motor commands on the basis of their own reaching errors. Subjects can also benefit from visual observation of others' kinematic errors. Here, using fMRI, we scanned subjects watching movies depicting another person learning to reach in a novel dynamic environment created by a robotic device. Passive observation of reaching movements (whether or not they were perturbed by the robot) was associated with increased activation in fronto-parietal regions that are normally recruited in active reaching. We found significant clusters in parieto-occipital cortex, intraparietal sulcus, as well as in dorsal premotor cortex. Moreover, it appeared that part of the network that has been shown to be engaged in processing self-generated reach error is also involved in observing reach errors committed by others. Specifically, activity in left intraparietal sulcus and left dorsal premotor cortex, as well as in right cerebellar cortex, was modulated by the amplitude of observed kinematic errors.",2010,0, 3561,Shading extraction and correction for scanned book images,"When one scans document pages from a bound book, shading artifacts commonly occur in the book spine area. In this letter, we propose a general-purpose method for image shading correction based on an assumption that the reflectance function of the page surface is piecewise constant and the illumination function is smooth. The proposed method is able to completely correct more general types of shading artifacts which are nonuniformly distributed along the book spine.
Comparison experiments on a synthetic and a variety of real scanned book images demonstrate the feasibility and effectiveness of the proposed method.",2008,0, 3562,"Simulation-based validation and defect localization for evolving, semi-formal requirements models","When requirements models are developed in an iterative and evolutionary way, requirements validation becomes a major problem. In order to detect and fix problems early, the specification should be validated as early as possible, and should also be revalidated after each evolutionary step. In this paper, we show how the ideas of continuous integration and automatic regression testing in the field of coding can be adapted for simulation-based, automatic revalidation of requirements models after each incremental step. While the basic idea is fairly obvious, we are confronted with a major obstacle: requirements models under development are incomplete and semi-formal most of the time, while classic simulation approaches require complete, formal models. We present how we can simulate incomplete, semi-formal models by interactively recording missing behavior or functionality. However, regression simulations must run automatically and do not permit interactivity. We therefore have developed a technique where the simulation engine automatically resorts to the interactively recorded behavior in those cases where it does not get enough information from the model during a regression simulation run. Finally, we demonstrate how the information gained from model evolution and regression simulation can be exploited for locating defects in the model.",2005,0, 3563,Research on Suppression of Secondary Arc Current under Different Fault Locations for 500kV Transmission Line,"When a single-phase grounding fault occurs in a high-voltage transmission line, secondary arc current and recovery voltage must be suppressed in order to ensure that single-phase autoreclosing operates reliably and successfully. This paper adopts the suppression measure for secondary arc current, a shunt reactor with a neutral small reactor, which is applied widely in many countries, and then uses PSCAD software to simulate the suppression effect at different fault point locations on an example 500 kV double-ended-source high-voltage transmission line. According to the simulation results, the suppression effects at different locations are distinct. Moreover, this measure can suppress secondary arc current effectively, ensure successful single-phase autoreclosing operation and finally achieve security and stability of the power system.",2010,0, 3564,Detecting Duplicate Bug Report Using Character N-Gram-Based Features,"We present an approach to identify duplicate bug reports expressed in free-form text. Duplicate reports need to be identified to avoid a situation where duplicate reports get assigned to multiple developers. Also, duplicate reports can contain complementary information which can be useful for bug fixing. Automatic identification of duplicate reports (from thousands of existing reports in a bug repository) can increase the productivity of a Triager by reducing the amount of time a Triager spends in searching for duplicate bug reports of any incoming report. The proposed method uses a character N-gram-based model for the task of duplicate bug report detection.
Previous approaches are word-based, whereas this study investigates the usefulness of low-level features based on characters, which have certain inherent advantages (such as natural-language independence, robustness towards noisy data and effective handling of domain specific term variations) over word-based features for the problem of duplicate bug report detection. The proposed solution is evaluated on a publicly-available dataset consisting of more than 200 thousand bug reports from the open-source Eclipse project. The dataset consists of ground-truth (pre-annotated dataset having bug reports tagged as duplicate by the Triager). Empirical results and evaluation metrics quantifying retrieval performance indicate that the approach is effective.",2010,0, 3565,Effective congestion and error control for scalable video coding extension of the H.264/AVC,"We present an effective congestion and error control mechanism for the scalable video coding (SVC) extension of H.264/AVC video dissemination over the Internet. The congestion control is used to determine the appropriate number of SVC video layers based on the bandwidth inference congestion (BIC) control protocol for layered multicast scenarios, and the error control is achieved by unequal forward error correction (FEC) layered protection using block erasure coding. Through real Internet streaming experiments, we demonstrate the effectiveness of the proposed layered SVC delivery, in terms of subscription layer, average packet loss rate and PSNRs, under several layered-definition scalabilities.",2008,0, 3566,Fault-tolerant Ethernet middleware for IP-based process control networks,"We present an efficient middleware-based fault-tolerant Ethernet (FTE) developed for process control networks. Our approach is unique and practical in the sense that it requires no change to commercial off-the-shelf hardware (switch, hub, Ethernet physical link, and network interface card) and software (commercial Ethernet NIC card driver and standard protocol such as TCP/IP) yet it is transparent to IP-based applications. The FTE performs failure detection and recovery for handling multiple points of network faults and supports communications with non-FTE-capable devices. Our experimentation shows that FTE performs efficiently, achieving less than 1-ms end-to-end swap time and less than 2-sec failover time, regardless of the concurrent application and system loads. In this paper, we describe the FTE architecture, the challenging technical issues addressed, our performance evaluation results, and the lessons learned in design and development of such an open-network-based fault-tolerant network",2000,0, 3567,A new method of non-stationary signal analysis for control motor bearing fault diagnosis,"We present an equal phase sampling method (EPSM) based technique to diagnose nonstationary servomotor bearing faults. The conventional equal time sampling method (ETSM) is based on the time periodical feature of a constant rotating speed system. As the servomotor changes its rotating speed and direction frequently, it will lose its time periodical feature. However, it still keeps its space periodical feature. The new method of nonstationary signal analysis based on EPSM can eliminate the influence of rotating speed and direction changes.
The mathematical simulation and experiment results prove that this method is very suitable and effective for servomotor bearing fault diagnosis.",2003,0, 3568,Using memory errors to attack a virtual machine,"We present an experimental study showing that soft memory errors can lead to serious security vulnerabilities in Java and .NET virtual machines, or in any system that relies on type-checking of untrusted programs as a protection mechanism. Our attack works by sending to the JVM a Java program that is designed so that almost any memory error in its address space will allow it to take control of the JVM. All conventional Java and .NET virtual machines are vulnerable to this attack. The technique of the attack is broadly applicable against other language-based security schemes such as proof-carrying code. We measured the attack on two commercial Java virtual machines: Sun's and IBM's. We show that a single-bit error in the Java program's data space can be exploited to execute arbitrary code with a probability of about 70%, and multiple-bit errors with a lower probability. Our attack is particularly relevant against smart cards or tamper-resistant computers, where the user has physical access (to the outside of the computer) and can use various means to induce faults; we have successfully used heat. Fortunately, there are some straightforward defenses against this attack.",2003,0, 3569,Analytical bounds on the error performance of the DVB-T system in time-invariant channels,"We present an upper bound on the BER performance of the DVB-T system for time-invariant channels. A unified approach is taken comprising all feasible combinations of convolutional encoder rates and constellations, for both non-hierarchical and hierarchical transmission modes. The estimated BER is thoroughly compared with the simulated BER obtained with BerbeX, a DVB-T compliant software package, for the three channels specified in the DVB-T standard",2000,0, 3570,A novel framework for robust video streaming based on H.264/AVC MGS coding and unequal error protection,"We present a novel framework to provide robust video streaming service over time-varying error-prone networks. The scheme is based on the medium granularity scalability (MGS) video coding of the H.264/AVC standard, which adopts a hierarchical prediction structure for the group-of-pictures (GOP). We determine the optimal allocation of protection strength for different network abstraction layer (NAL) units according to their individual importance to the end-to-end video quality. To analyse the importance of the NAL units, we emulate the error concealment if one frame is considered as lost and take into account the propagation distortion within the GOP. An efficient algorithm is proposed to account for the non-convex rate-distortion characteristics associated with the NAL units in the hierarchical GOP. With this framework, we can provide robust video streaming for the range of packet loss rates from 0% to 40% with about 30% additional channel bit-rate for the channel coding. The simulation results demonstrate high flexibility and efficiency of the proposed framework, which can effectively prevent frequent loss of frames.",2009,0, 3571,Empirical interval estimates for the defect content after an inspection,"We present a novel method for estimating the number of defects contained in a document using the results of an inspection of the document. The method is empirical, being based on observations made during past inspections of comparable documents.
The method yields an interval estimate, that is, a whole range of values which is likely to contain the true value of the number of defects in the document. We also derive point estimates from the interval estimate. The method is validated using a known empirical inspection dataset and clearly outperforms existing approaches for estimating the defect content after inspections.",2002,0, 3572,"Performance improvement in high capacity, ultra-long distance, WDM systems using forward error correction codes","We present a performance study of a forward error correction (FEC) code using theoretical models, Monte-Carlo computer simulations, and a long-haul WDM transmission experiment. With a 14% redundancy code, the Q-factor was increased by 6.2 dB for both linear and non-linear impairments.",2000,0, 3573,A desktop environment for assessment of fault diagnosis based fault tolerant flight control laws,"We present a simulation-based software environment conceived to allow an easy assessment of fault diagnosis based fault tolerant control techniques. The new tool is primarily intended for the development of advanced flight control applications with fault accommodation abilities, where the requirements for increased autonomy and safety play a premier role.",2008,0, 3574,Software Defects Prediction using Operating Characteristic Curves,We present a software defect prediction model using operating characteristic curves. The main idea behind our proposed technique is to use geometric insight in helping construct an efficient and fast prediction method to accurately predict the cumulative number of failures at any given stage during the software development process. Our predictive approach uses the number of detected faults instead of the software failure-occurrence time in the testing phase. Experimental results illustrate the effectiveness and the much improved performance of the proposed method in comparison with the Bayesian prediction approaches.,2007,0, 3575,Beat: Boolean expression fault-based test case generator,"We present a system which generates test cases from Boolean expressions. The system is based on the integration of several fault-based test case selection strategies developed by us. Our system generates test cases that are guaranteed to detect all single operator faults and all single operand faults when the Boolean expression is in irredundant disjunctive normal form. Apart from being an automated test case generation tool developed for software testing practitioners, this system can also be used as a training or self-learning tool for students as well as software testing practitioners.",2003,0, 3576,Systematic Error of the Nose-to-Nose Sampling-Oscilloscope Calibration,"We use traceable swept-sine and electrooptic-sampling-system-based sampling-oscilloscope calibrations to measure the systematic error of the nose-to-nose calibration, and compare the results to simulations. Our results show that the errors in the nose-to-nose calibration are small at low frequencies, but significant at high frequencies.",2007,0, 3577,Single-trial fMRI Shows Contralesional Activity Linked to Overt Naming Errors in Chronic Aphasic Patients,"We used fMRI to investigate the roles played by perilesional and contralesional cortical regions during language production in stroke patients with chronic aphasia.
We applied comprehensive psycholinguistic analyses based on well-established models of lexical access to overt picture-naming responses, which were evaluated using a single-trial design that permitted distinction between correct and incorrect responses on a trial-by-trial basis. Although both correct and incorrect naming responses were associated with left-sided perilesional activation, incorrect responses were selectively associated with robust right-sided contralesional activity. Most notably, incorrect responses elicited overactivation in the right inferior frontal gyrus that was not observed in the contrasts for patients' correct responses or for responses of age-matched control subjects. Errors were produced at slightly later onsets than accurate responses and comprised predominantly semantic paraphasias and omissions. Both types of errors were induced by pictures with greater numbers of alternative names, and omissions were also induced by pictures with late-acquired names. These two factors, number of alternative names per picture and age of acquisition, were positively correlated with activation in left and right inferior frontal gyri in patients as well as control subjects. These results support the hypothesis that some right frontal activation may normally be associated with increasing naming difficulty, but in patients with aphasia, right frontal overactivation may reflect ineffective effort when left hemisphere perilesional resources are insufficient. They also suggest that contralesional areas continue to play a role, dysfunctional rather than compensatory, in chronic aphasic patients who have experienced a significant degree of recovery.",2010,0, 3578,Comparing Error Detection Techniques for Web Applications: An Experimental Study,"Web applications are highly sensitive to the occurrence of user-visible failures. Despite the usage of system-level monitoring tools, there are still some application-level errors that escape those tools and end up being seen in the Web pages of the final users. Complementary error detection mechanisms should then be used to overcome this problem. In this paper, we present an experimental study where we measured the effectiveness of four different error-detection mechanisms under different fault-loads. To this end, we used two benchmarks (JPetstore and TPC-W) and a software fault-injector. The results show that although system-level monitoring tools are very effective in most of the cases, there are other detection mechanisms that present a better latency and coverage when dealing with errors at the application level. Particularly, the usage of external monitoring schemes seems to be of utmost importance.",2008,0, 3579,Error Correcting Output Coding-Based Conditional Random Fields for Web Page Prediction,"Web page prefetching has been used efficiently to reduce the access latency problem of the Internet; its success mainly relies on the accuracy of Web page prediction. As powerful sequential learning models, conditional random fields (CRFs) have been used successfully to improve the Web page prediction accuracy when the total number of unique Web pages is small. However, because the training complexity of CRFs is quadratic in the number of labels, when applied to a Web site with a large number of unique pages, the training of CRFs may become very slow and even intractable. In this paper, we decrease the training time and computational resource requirements of CRFs training by integrating the error correcting output coding (ECOC) method.
Moreover, since the performance of ECOC-based methods crucially depends on the ECOC code matrix in use, we employ a coding method, search coding, to design a code matrix of good quality.",2008,0, 3580,Cross-Cultural Differences of Entrepreneurs' Error Orientation: Comparing Chinese Entrepreneurs and German Entrepreneurs,"When starting up firms, entrepreneurs can easily find themselves deviating from predetermined goals or criteria. How entrepreneurs cope with errors is critical to the development of their firms. The research used the free software, Mx, and explored the error orientation of entrepreneurs in four industries, including IT and software, catering and hotel, machinery and parts, and construction, to find the differences in entrepreneurs' error orientation by comparing Chinese entrepreneurs and German entrepreneurs. The results suggest that Chinese entrepreneurs pay more attention to the ability of solving problems caused by errors and the ability of learning from errors. German entrepreneurs pay more attention to communicating with others when an error occurs. Entrepreneurs of either cultural background value the ability of coping with errors. The implication of the results for the IT industry is also included.",2010,0, 3581,Systematic error compensation for RPC model using semi-parametric estimation,"When the conventional method of parameter estimation is used to construct the RPC model, uncertain factors can result in inconsistency between the RPC model and reality. The inconsistency shows up as evident systematic error in the constructed RPC model. To tackle the problem, a non-parametric component is introduced on the basis of the parametric model to account for the unknown factors and their effects. The new method, namely semi-parametric estimation, can effectively compensate for the effect of systematic errors. This paper studied the construction of the RPC model of remote sensing images using the semi-parametric estimation method. The experiment with SPOT-5 imagery demonstrated that the semi-parametric estimation method could improve the precision of fitting the rigorous imaging model by the RPC model.",2010,0, 3582,Fault-tolerant clock synchronization for embedded distributed multi-cluster systems,"When time-triggered (TT) systems are to be deployed for large embedded real-time (RT) control systems in cars and airplanes, one way to overcome bandwidth limitations and achieve complexity reduction is the organization in clusters of strongly interacting computing nodes with well-defined interfaces. In this case, clock synchronization of different cluster times supports meaningful exchange of time-related data between clusters and allows coordinated control. This paper addresses fault-tolerant clock synchronization of clusters for TT systems that are already internally synchronized. By addressing systematic and stochastic errors of cluster times differently, the influence of systematic errors is eliminated and the quality of synchronization only depends on stochastic errors. Since systematic errors of cluster times are at least an order of magnitude larger than stochastic errors for typical RT embedded control systems, the presented algorithm achieves a significant improvement over known synchronization algorithms.
An implementation of the proposed clock synchronization algorithm on top of the Time-Triggered Architecture, together with experiments, shows that synchronization is achieved with accuracy values of less than one microsecond.",2003,0, 3583,Fault Detection System Activated by Failure Information,"We propose a fault detection system activated by an application when the application recognizes the occurrence of a failure, in order to realize self-managing systems that automatically find the source of a failure. In existing detection systems, there are three issues for constructing self-managing applications: i) the detection results are not sent to the applications, ii) they cannot identify the source failure from all of the detected failures, and iii) configuring the detection system for a networked system is laborious. To overcome these issues, the proposed system takes three approaches: i) the system receives failure information from an application and returns a result set to the application, ii) the system identifies the source failure using relationships among errors, and iii) the system obtains information about the monitored system from a database. The relationship is expressed by a tree, called the error relationship tree. The database provides information about system entities such as hardware devices, software objects, and network topology. When the proposed system starts looking for the source of a failure, causal relations from the error relationship tree are referred to, and the correspondence between error definitions and actual objects is derived using the database. We show the design of the detection operation activated by the failure information and the architecture of the proposed system.",2007,0, 3584,"A fault tolerant, peer-to-peer replication network","We propose a fault tolerant, peer-to-peer replication network for synchronizing files across multiple hosts. The proposed topology is constructed by applying existing technologies and tools to ensure that files are kept synchronized even after subsequent modifications. One of its main advantages lies in the fact that there is no central authority to coordinate the process; hosts are connected in a peer-to-peer fashion, thus avoiding a single point of failure. Our proposal is intended for use in networks of personal computers where a small number of hosts have to be synchronized.",2010,0, 3585,Acceleration of Byzantine Fault Tolerance by Parallelizing Consensuses,We propose a new method that accelerates existing Byzantine Fault Tolerance (BFT) protocols for asynchronous distributed systems by parallelizing the involved consensuses. BFT realizes a reliable system against Byzantine failures and is usually solved by repeatedly executing consensus for a set of requests. Our method consistently parallelizes the consensus by introducing a new extra consensus on the order of processing agreed requests. We show the correctness of our method and analyze its performance in comparison with an existing non-parallelizing method and a naively parallelizing method. The results indicate that our parallelizing method is approximately 20% faster than those methods in configurations where many replicas are running in order to increase reliability.,2009,0, 3586,Design of fault detection filters for periodic systems,"We propose a numerically reliable computational approach to design fault detection filters for periodic systems.
This approach is based on a new numerically stable algorithm to compute least-order annihilators without explicitly building time-invariant lifted system representations. The main computation in this algorithm is the orthogonal reduction of a periodic matrix pair to a periodic Kronecker-like form, from which the periodic realization of the detector is directly obtained.",2004,0, 3587,Closed-form error analysis of the non-identical Nakagami-m relay fading channel,"We present closed-form expressions for the average bit error probability (ABEP) of BPSK, QPSK and M-QAM of an amplify-and-forward average power scaling dual-hop relay transmission, over non-identical Nakagami-m fading channels, with integer values of m. Additionally, we evaluate in closed form the ABEP under a sufficiently large signal-to-noise ratio for the source-relay link, valid for arbitrary m. Numerical and simulation results show the validity of the proposed mathematical analysis and point out the effect of the two hops' unbalanced fading conditions on the error performance.",2008,0, 3588,Improved exponential bounds and approximation for the Q-function with application to average error probability computation,"We present new exponential bounds for the Gaussian Q-function or, equivalently, the complementary error function erfc(.). More precisely, the new bound is in the form of a sum of exponential functions that, in the limit, approaches the exact value. Then, a quite accurate and simple approximate expression given by the sum of two exponential functions is reported. Moreover, some new simple bounds for the inverse erfc(.) are derived. The results are applied to the general problem of evaluating the average error probability in fading channels. An example of application to the computation of the pairwise error probability of space-time codes is also presented.",2002,0, 3589,Post-Error Correcting Code Modeling of Burst Channels Using Hidden Markov Models With Applications to Magnetic Recording,"We present two approaches for modeling burst channels using hidden Markov models (HMMs). The first method is based on the maximum-likelihood approach and improves on the computational efficiency of earlier methods. We present new algorithms for scaling and for determining the model parameters by using smart search techniques. We then generalize a gap length analysis and apply it to modeling HMMs. The algorithms are low-complexity and memory-efficient. Finally, we present simulation results for modeling errors in magnetic storage channels and show how this can be used, together with Wolf's method, for evaluating decoder failure rates from real observed data.",2007,0, 3590,Solving dynamic fault trees using a new Hybrid Bayesian Network inference algorithm,"We present a hybrid Bayesian network (HBN) framework to analyse dynamic fault trees. By incorporating a new approximate inference algorithm for HBNs involving dynamically discretising the domain of all continuous variables, accurate approximations for the failure distribution of both static and dynamic fault tree constructs are obtained. Unlike in other approaches, no numerical integration techniques or simulation methods are required. Moreover, no exact expression for the posterior marginal is needed and no conditional probability tables need to be completed. Sensitivity analysis, uncertainty analysis, diagnosis, and common cause failure analysis can all be easily performed within this framework.
Posterior estimates of parameterised marginal failure distributions can also be obtained using available raw failure data together with prior information from expert judgement.",2008,0, 3591,Hardware evolution of analog circuits for in-situ robotic fault-recovery,"We present a method for evolving and implementing artificial neural networks (ANNs) on field programmable analog arrays (FPAAs). These FPAAs offer the small size and low power usage desirable for space applications. We use two cascaded FPAAs to create a two-layer ANN. Then, starting from a population of random settings for the network, we are able to evolve an effective controller for several different robot morphologies. We demonstrate the effectiveness of our method by evolving two types of ANN controllers: one for biped locomotion and one for restoration of mobility to a damaged quadruped. Both robots exhibit nonlinear properties, making them difficult to control. All candidate controllers are evaluated in hardware; no simulation is used.",2005,0, 3592,"Intelligent, Fault Tolerant Control for Autonomous Systems","We present a methodology for intelligent control of an autonomous and resource-constrained embedded system. Geared towards mastering permanent and transient faults by dynamic reconfiguration, our approach uses rules for describing device functionality, valid environmental interactions, and goals the system has to reach. Besides rules, we use functions that characterize a goal's target activity profile. The target activity profile controls the frequency with which our system tries to reach the corresponding goal. In the paper we discuss a first implementation of the given methodology, and introduce useful extensions. In order to underline the feasibility and effectiveness of the presented control system, we present a case study that has been carried out on a prototype system.",2007,0, 3593,Data Mining Techniques for Building Fault-proneness Models in Telecom Java Software,"This paper describes a study performed in an industrial setting that attempts to build predictive models to identify parts of a Java system with a high fault probability. The system under consideration is constantly evolving as several releases a year are shipped to customers. Developers usually have limited resources for their testing and inspections and would like to be able to devote extra resources to faulty system parts. The main research focus of this paper is two-fold: (1) use and compare many data mining and machine learning techniques to build fault-proneness models based mostly on source code measures and change/fault history data, and (2) demonstrate that the usual classification evaluation criteria based on confusion matrices may not be fully appropriate to compare and evaluate models.",2007,1, 3594,Comparing fault-proneness estimation models,"Over the last years, software quality has become one of the most important requirements in the development of systems. Fault-proneness estimation could play a key role in quality control of software products. In this area, much effort has been spent in defining metrics and identifying models for system assessment. Using these metrics to assess which parts of the system are more fault-prone is of primary importance. This paper reports a research study begun with the analysis of more than 100 metrics and aimed at producing suitable models for fault-proneness estimation and prediction of software modules/files.
The objective has been to find a compromise between the fault-proneness estimation rate and the size of the estimation model in terms of the number of metrics used in the model itself. To this end, two different methodologies have been used and compared, and some synergies exploited. The methodologies were logistic regression and discriminant analysis. The corresponding models produced for fault-proneness estimation and prediction have been based on metrics addressing different aspects of computer programming. The comparison has produced satisfactory results in terms of fault-proneness prediction. The produced models have been cross-validated using data sets derived from source code provided by two application scenarios.",2005,1, 3595,Assessing the applicability of fault-proneness models across object-oriented software projects,"A number of papers have investigated the relationships between design metrics and the detection of faults in object-oriented software. Several of these studies have shown that such models can be accurate in predicting faulty classes within one particular software product. In practice, however, prediction models are built on certain products to be used on subsequent software development projects. How accurate can these models be, considering the inevitable differences that may exist across projects and systems? Organizations typically learn and change. From a more general standpoint, can we obtain any evidence that such models are economically viable tools to focus validation and verification effort? This paper attempts to answer these questions by devising a general but tailorable cost-benefit model and by using fault and design data collected on two mid-size Java systems developed in the same environment. Another contribution of the paper is the use of a novel exploratory analysis technique, MARS (multivariate adaptive regression splines), to build such fault-proneness models, whose functional form is a priori unknown. The results indicate that a model built on one system can be accurately used to rank classes within another system according to their fault proneness. The downside, however, is that, because of system differences, the predicted fault probabilities are not representative of the system predicted. However, our cost-benefit model demonstrates that the MARS fault-proneness model is potentially viable from an economical standpoint. The linear model is not nearly as good, thus suggesting a more complex model is required.",2002,1, 3596,The Application of Gray-Prediction Theory in the Software Defects Management,"Software defects are a part of software products that every software development company has to face. How to deal with them suitably is very important for a software company's survival. This article uses collected software defect data. A prediction model is then established according to GM(1,1), the core theory of gray prediction. Finally, we obtain the predicted values. The results show that a software company can improve software quality, control the development process and allocate resources effectively.",2009,1, 3597,Predicting faults using the complexity of code changes,"Predicting the incidence of faults in code has been commonly associated with measuring complexity. In this paper, we propose complexity metrics that are based on the code change process instead of on the code. We conjecture that a complex code change process negatively affects its product, i.e., the software system.
We validate our hypothesis empirically through a case study using data derived from the change history for six large open source projects. Our case study shows that our change complexity metrics are better predictors of fault potential in comparison to other well-known historical predictors of faults, i.e., prior modifications and prior faults.",2009,1, 3598,A Two-Step Model for Defect Density Estimation,"Identifying and locating defects in software projects is a difficult task. Further, estimating the density of defects is more difficult. Measuring software in a continuous and disciplined manner brings many advantages such as accurate estimation of project costs and schedules, and improving product and process qualities. Detailed analysis of software metric data gives significant clues about the locations and magnitude of possible defects in a program. The aim of this research is to establish an improved method for predicting software quality via identifying the defect density of fault-prone modules using machine-learning techniques. We constructed a two-step model that predicts defect density by taking module metric data into consideration. Our proposed model utilizes classification and regression type learning methods consecutively. The results of the experiments on public data sets show that the two-step model enhances the overall performance measures as compared to applying only regression methods.",2007,1, 3599,Static analysis tools as early indicators of pre-release defect density,"During software development it is helpful to obtain early estimates of the defect density of software components. Such estimates identify fault-prone areas of code requiring further testing. We present an empirical approach for the early prediction of pre-release defect density based on the defects found using static analysis tools. The defects identified by two different static analysis tools are used to fit and predict the actual pre-release defect density for Windows Server 2003. We show that there exists a strong positive correlation between the static analysis defect density and the pre-release defect density determined by testing. Further, the predicted pre-release defect density and the actual pre-release defect density are strongly correlated at a high degree of statistical significance. Discriminant analysis shows that the results of static analysis tools can be used to separate high- and low-quality components with an overall classification rate of 82.91%.",2005,1, 3600,Predictive data mining model for software bug estimation using average weighted similarity,"Software bug estimation is an essential activity for effective and proper software project planning. All software-bug-related data are kept in software bug repositories. Software bug (defect) repositories contain a lot of useful information related to the development of a project. Data mining techniques can be applied on these repositories to discover useful interesting patterns. In this paper a predictive data mining technique is proposed to predict the software bug estimate from a software bug repository. A two-step prediction model is proposed. In the first step, the summary and description of the bug for which an estimate is required are matched against the summary and description of bugs available in bug repositories. A weighted similarity model is suggested to match the summary and description for a pair of software bugs.
In the second step, the fix durations of all the similar bugs are calculated and stored, and their average is computed, which indicates the predicted estimate for the bug. The proposed model is implemented using open source technologies and is explained with the help of an illustrative example.",2010,1, 3601,Defect prediction for embedded software,"As ubiquitous computing becomes the reality of our lives, the demand for high-quality embedded software in shortened intervals increases. In order to cope with this pressure, software developers seek new approaches to manage the development cycle: to finish on time, within budget and with no defects. Software defect prediction is one area to focus on in order to lower the cost of testing as well as to improve the quality of the end product. Defect prediction has been widely studied for software systems in general; however, there are very few studies which specifically target embedded software. This paper examines defect prediction techniques from an embedded software point of view. We present the results of combining several machine learning techniques for defect prediction. We believe that the results of this study will guide us in finding better predictors and models for this purpose.",2007,1, 3602,What Software Repositories Should Be Mined for Defect Predictors?,"The information about which modules in a software system's future version are potentially defective is a valuable aid for quality managers and testers. Defect prediction promises to indicate these defect-prone modules. Constructing effective defect prediction models in an industrial setting involves deciding from what data source the defect predictors should be derived. In this paper we compare defect prediction results based on three different data sources of a large industrial software system to answer the question of what repositories to mine. In addition, we investigate whether a combination of different data sources improves the prediction results. The findings indicate that predictors derived from static code and design analysis provide slightly yet still significantly better results than predictors derived from version control, while a combination of all data sources showed no further improvement.",2009,1, 3603,A model for early prediction of faults in software systems,"The quality of a software component can be measured in terms of fault proneness. Quality estimations are made using fault proneness data available from previously developed projects of a similar type and training data consisting of software measurements. To predict faulty modules in software data, different techniques have been proposed, which include statistical methods, machine learning methods, neural network techniques and clustering techniques. Predicting faults early in the software life cycle can be used to improve software process control and achieve high software reliability. The aim of the proposed approach is to investigate whether metrics available in the early lifecycle (i.e. requirement metrics), metrics available in the late lifecycle (i.e. code metrics) and metrics available in the early lifecycle (i.e. requirement metrics) combined with metrics available in the late lifecycle (i.e. code metrics) can be used to identify fault-prone modules using a decision-tree-based model in combination with K-means clustering as a preprocessing technique. This approach has been tested with CM1 real-time defect datasets of NASA software projects.
The high accuracy of the testing results shows that the proposed model can be used for the prediction of the fault proneness of software modules early in the software life cycle.",2010,1, 3604,LOFT: A Latency-Oriented Fault Tolerant Transport Protocol for Wireless Sensor-Actuator Networks,"Wireless sensor-actuator networks, or WSANs, refer to a group of sensors and actuators which collect data from the environment and perform application-specific actions in response. To act responsively and accurately, an efficient and reliable data transport protocol is crucial for the sensors to inform the actuators about the environmental events. Unfortunately, the low-power multi-hop communications in WSANs are inherently unreliable; the frequent sensor and link failures as well as the excessive delays due to congestion further aggravate the problem. In this paper, we propose a latency-oriented fault tolerant data transport protocol for WSANs. We argue that reliable data transport in such a real-time system should resist transmission failures, and should also consider the importance and freshness of the reported data. We articulate this argument and provide a cross-layer two-step data transport protocol for on-time and fault-tolerant data delivery from sensors to actuators. Our protocol adopts smart priority scheduling that differentiates the event data of non-uniform importance. It balances the workload of sensors by checking their queue utilization and copes with node and link failures by an adaptive replication algorithm. We evaluate our protocol through extensive simulations, and the results demonstrate that it achieves the desirable reliability for WSANs.",2007,0, 3605,Fault Tolerance in FPGA Architecture Using Hardware Controller - A Design Approach,"With advancement in process technology, the feature size is decreasing, which leads to higher defect densities. More sophisticated techniques at increased costs are required to avoid defects. If nano-technology fabrication is applied, the yield may go down to zero, as avoiding defects during fabrication will not be a feasible option. Hence, future architectures have to be defect tolerant. In regular structures like FPGAs, redundancy is commonly used for fault tolerance. In this work we present a solution in which the configuration bit-stream of the FPGA is modified by a hardware controller that is present on the chip itself. The technique uses redundant devices to replace faulty devices and increases the yield. The design is implemented using the Altera Quartus II FPGA EC121Q240C6.",2009,0, 3606,A scatter correction using thickness iteration in dual-energy radiography,"With area detectors, scattered radiation causes the dominant error in properly separating different materials. Several methods for scatter correction in dual-energy imaging have been suggested and have improved results. Such methods, however, need additional lead blocks or detectors, and additional exposures to estimate the scatter fraction for every correction. We suggest a scatter correction that uses a database of the fraction and distribution of scattered radiation. To verify this method we performed MCNP simulations. Scatter information was generated for each thickness combination of an aluminum-water phantom. Based on the uncorrected signals, the thickness of each material can be calculated by a conventional dual-energy algorithm. The scatter information for the corresponding thickness from the look-up table is then used to correct the original signals.
The iterative scatter correction reduced the relative-thickness error in the results. This scatter correction method can be applied to two-material dual-energy radiography such as mammography, contrast imaging, or industrial inspection.",2004,0,5896 3607,Techniques and experience in on-line transformer condition monitoring and fault diagnosis in ElectraNet SA,"With evolving maintenance strategies in the electricity industry internationally, there has been increasing pressure to develop improved techniques for condition monitoring. Specifically, there has been a trade-off between the speed and accuracy of testing. Traditionally, transformer condition monitoring involved high-accuracy tests which, due to their duration, could only be performed on a discrete periodic basis. ElectraNet SA has experienced many limitations associated with this form of condition monitoring, and there has been a trend towards high-speed on-line monitoring techniques for power transformers. Though these new techniques do not provide the level of accuracy found in traditional forms of testing, they overcome many of their limitations. This paper describes ElectraNet SA's techniques and experience with power transformer monitoring.",2000,0, 3608,Performance comparison of MLP and RBF neural networks for fault location in distribution networks with DGs,"With high penetration of distributed generations (DGs), the power distribution system is regarded as a multisource system in which the fault location scheme must be direction sensitive. This paper presents an automated fault location method using a radial basis function neural network (RBFNN) for a distribution system with DG units. In the proposed method, the fault type is first determined by normalizing the fault currents of the main source and then the fault location is predicted by using the RBFNN. Several case studies have been considered to verify the accuracy of the RBFNN. A comparison is also made between the RBFNN and the conventional multilayer perceptron neural network for locating faults in a power distribution system with DGs. The test results showed that the RBFNN can accurately determine the location of faults in a distribution system with several DG units.",2010,0, 3609,On the significance of fault tree analysis in practice,"With increasing system complexity and extensive use of computerized control of industrial processes and plants, it is essential to have a systematic approach for identifying failures that can expose people and the environment to unacceptable risks. With a focus on a drive system used to control a linear motor, the fault tree analysis method is utilized to reveal design weaknesses and to find mitigations that can improve the system safety characteristics. Starting with a set of top-level hazards, elements with high risk impact are identified, and appropriate mitigations are suggested.",2009,0, 3610,Double sampling data checking technique: an online testing solution for multisource noise-induced errors on on-chip interconnects and buses,"With processors and systems-on-chip using nanometer technologies, several design and test efforts have recently been developed to eliminate and test for many emerging DSM noise effects. In this paper, we show the emergence of multisource noise effects, where multiple DSM noise sources combine to produce functional and timing errors even when each separate noise source itself does not. We show the dynamic nature of multisource noise, and the need for online testing to detect such noise errors.
We propose an online approach based on a low-cost double-sampling data-checking circuit to test for such noise effects in on-chip buses. Based on the proposed circuit, an effective and efficient testing methodology has been developed to facilitate online testing for generic on-chip buses. The applicability of this methodology is demonstrated through embedding the online detection circuit in a bus design. The validated design shows the effectiveness of the proposed testing methodology for multisource noise-induced errors in global interconnects and buses.",2004,0, 3611,Optimal scheduling of imprecise computation tasks in the presence of multiple faults,"With the advance of applications such as multimedia, image/speech processing and real-time AI, real-time computing models that allow the timeliness-versus-precision trade-off to be expressed are becoming increasingly popular. In the imprecise computation model, a task is divided into a mandatory part and an optional part. The mandatory part should be completed by the deadline even under a worst-case scenario; however, the optional part refines the output of a mandatory part within the limits of the available computing capacity. A non-decreasing reward function is associated with the execution of each optional part. Since the mandatory parts have hard deadlines, provisions should be taken against faults which may occur during execution. An FT-Optimal framework allows the computation of a schedule that simultaneously maximizes the total reward and tolerates transient faults of mandatory parts. We extend the framework to a set of tasks with multiple deadlines, multiple recovery blocks and precedence constraints among them. To this aim, we first obtain the exact characterization of imprecise computation schedules which can tolerate up to k faults without missing any deadlines of mandatory parts. Then, we show how to generate FT-Optimal schedules in an efficient way. Our solution works for both linear and general concave reward functions.",2000,0, 3612,ORBIT: Effective Issue Queue Soft-Error Vulnerability Mitigation on Simultaneous Multithreaded Architectures Using Operand Readiness-Based Instruction Dispatch,"With the advance of semiconductor processing technology, soft errors have become an increasing cause of failures of microprocessors fabricated using smaller and more densely integrated transistors with lower threshold voltages and tighter noise margins. With diminishing performance returns on wider issue superscalar processors, the microprocessor design industry has opted for using simultaneous multithreaded (SMT) architectures in commercial processors to exploit thread-level parallelism (TLP). SMT techniques enhance overall system performance but also introduce greater susceptibility to soft errors: concurrently executing multiple threads exposes many program runtime states to soft-error strikes at any given time. The issue queue (IQ) is a key microarchitecture structure to exploit instruction-level and thread-level parallelism. On SMT processors, the IQ buffers a large number of instructions from multiple threads and is more susceptible to soft-error strikes. In this paper, we explore the use of operand-readiness-based instruction dispatch (ORBIT) as an effective mechanism to mitigate IQ soft-error vulnerability on SMT processors. We observe that IQ soft-error vulnerability is largely affected by instructions waiting for their source operands.
The overall IQ soft-error vulnerability can be effectively reduced by minimizing the number of waiting instructions and their residency cycles in the IQ. We develop six techniques that aim to improve IQ reliability with negligible performance degradation on SMT processors. Moreover, we extend our techniques with prediction methods that can anticipate the readiness of source operands ahead of time. The ORBIT schemes integrated with reliability-awareness and readiness prediction achieve more attractive reliability/performance trade-offs. The best of the proposed schemes (e.g. Predict_DelayACE) reduces IQ vulnerability by 79% with only 1% throughput IPC and 3% harmonic IPC reduction across all studied workloads.",2008,0, 3613,Autonomous Fault Recovery Technology for Achieving Fault-Tolerance in Video on Demand System,"With advances in compression technology, storage devices and networks, the video on demand (VoD) service is becoming popular. The system needs to provide continuous service and heterogeneous service levels for users. However, these requirements cannot be satisfied in conventional VoD systems, which are constructed on redundant content servers and centralized management. In this paper, an autonomous VoD system is proposed to meet the requirements. The system is constructed on a faded information field architecture. Under the proposed architecture, autonomous fault detection and fault recovery technologies are proposed to achieve fault-tolerance for continuous service. The effectiveness of the proposed technologies is proved through simulation. The results show an average 30% improvement in recovery time, and users' video service can be recovered without stopping, compared with a conventional VoD system.",2006,0, 3614,Use of substation IED data for improved alarm processing and fault location,"With the advent of technology, modern substations are being equipped with different types of IEDs (Intelligent Electronic Devices) such as Digital Protective Relays (DPRs), Digital Fault Recorders (DFRs), Phasor Measurement Units (PMUs), etc. These devices are capable of recording huge amounts of data, and thus integration and appropriate use of those data can be beneficial to the power industry. There are several issues to be solved in this regard: (1) which data to use and when (for what application), (2) the accuracy of such data (in the measurement process from the place of data capture to where it is used), (3) the extraction of useful information from captured data and (4) the use of the information in applications. This paper focuses on these issues and also on some new applications which can use those substation IED data.",2008,0, 3615,A Web-Based Fault Diagnosis of Analogue Circuits System,"With the coming of the network era and continuously increasing information requirements, the fault diagnosis of devices is advancing from the traditional single-device, on-site mode to a distributed and remote mode, which can greatly improve device maintenance. Several NI technologies are used in this paper to realize remote virtual instrument applications and a computer-supported collaborative work environment for remote device fault diagnosis.
A simulated experiment on the fault diagnosis of an analogue circuits system is performed, and the feasibility of the scheme is proved.",2010,0, 3616,Benchmarking a Semantic Web Service Architecture for Fault-tolerant B2B Integration,"With the development and maturity of Service-Oriented Architectures (SOA) to support business-to-business transactions, organizations are implementing Web services to expose their public functionalities associated with internal systems and business processes. In many business processes, what Web services need to provide is a high level of availability, since the globalization of the Internet enables business partners to easily switch to other competitors when services are not available. Along with the development of SOA, considerable technological advances are being made to use the semantic Web to achieve the automated processing and integration of data and applications. This paper describes the implementation and benchmarking of an architecture that semantically integrates Web services with a peer-to-peer infrastructure to increase service availability through fault-tolerance.",2006,0, 3617,Estimation of Defect Proneness Using Design Complexity Measurements in Object-Oriented Software,"Software engineering is continuously facing the challenges of the growing complexity of software packages and increased levels of data on defects and drawbacks from the software production process. This makes a clarion call for inventions and methods which can enable more reusable, reliable, easily maintainable and high-quality software systems with deeper control over the software generation process. Quality and productivity are indeed the two most important parameters for controlling any industrial process. Implementation of a successful control system requires some means of measurement. Software metrics play an important role in the management aspects of the software development process such as better planning, assessment of improvements, resource allocation and reduction of unpredictability. Processes involving early detection of potential problems, productivity evaluation and evaluation of external quality factors such as reusability, maintainability, defect proneness and complexity are of utmost importance. Here we discuss the application of CK metrics and an estimation model to predict the external quality parameters for optimizing the design process and production process for desired levels of quality. Estimation of defect proneness in object-oriented systems at the design level is developed using a novel methodology in which models of the relationship between CK metrics and a defect-proneness index are obtained. A multifunctional estimation approach captures the correlation between CK metrics and the defect proneness level of software modules.",2009,1, 3618,Reducing false alarms in software defect prediction by decision threshold optimization,"Software defect data has an imbalanced and highly skewed class distribution. The misclassification costs of the two classes are neither equal nor known. It is critical to find the optimum bound, i.e. threshold, which would best separate defective and defect-free classes in software data. We have applied decision threshold optimization on the Naïve Bayes classifier in order to find the optimum threshold for software defect data.
ROC analyses show that decision threshold optimization significantly decreases false alarms (on average by 11%) without changing probability of detection rates.",2009,1, 3619,A Multivariate Analysis of Static Code Attributes for Defect Prediction,"Defect prediction is important in order to reduce test times by allocating valuable test resources effectively. In this work, we propose a model using multivariate approaches in conjunction with Bayesian methods for defect predictions. The motivation behind using a multivariate approach is to overcome the independence assumption of univariate approaches about software attributes. Using Bayesian methods gives practitioners an idea about the defectiveness of software modules in a probabilistic framework rather than hard classification methods such as decision trees. Furthermore, the software attributes used in this work are chosen among the static code attributes that can easily be extracted from source code, which prevents human errors or subjectivity. These attributes are preprocessed with feature selection techniques to select the most relevant attributes for prediction. Finally, we compared our proposed model with the best results reported so far on public datasets and we conclude that using multivariate approaches can perform better.",2007,1, 3620,"Defect Prediction using Combined Product and Project Metrics - A Case Study from the Open Source ""Apache"" MyFaces Project Family","The quality evaluation of open source software (OSS) products, e.g., defect estimation and prediction approaches of individual releases, gains importance with increasing OSS adoption in industry applications. Most empirical studies on the accuracy of defect prediction and software maintenance focus on product metrics as predictors that are available only when the product is finished. Only a few prediction models consider information on the development process (project metrics) that seems relevant to quality improvement of the software product. In this paper, we investigate defect prediction with data from a family of widely used OSS projects based both on product and project metrics as well as on combinations of these metrics. The main results of the data analysis are that (a) a set of project metrics available prior to product release had strong correlation to potential defect growth between releases and (b) a combination of product and project metrics enables a more accurate defect prediction than the application of one single type of measurement. Thus, the combined application of project and product metrics can (a) improve the accuracy of defect prediction, (b) enable better guidance of the release process from a project management point of view, and (c) help identify areas for product and process improvement.",2008,1, 3621,A Value-Added Predictive Defect Type Distribution Model Based on Project Characteristics,"In software project management, there are three major factors to predict and control: size, effort, and quality. Much software engineering work has focused on these. When it comes to software quality, there are various possible quality characteristics of software, but in practice, quality management frequently revolves around defects, and delivered defect density has become the current de facto industry standard. Thus, research related to software quality has been focused on modeling residual defects in software in order to estimate software reliability.
Currently, the software engineering literature still does not have a complete defect prediction model for a software product, although much work has been performed to predict software quality. On the other hand, the number of defects alone is not sufficient information to provide the basis for planning quality assurance activities and assessing them during execution. That is, for project management to be improved, we need to predict other possible information about software quality such as in-process defects, their types, and so on. In this paper, we propose a new approach for predicting the distribution of defects and their types based on project characteristics in the early phase. For this approach, the model for prediction was established using the curve-fitting method and regression analysis. Maximum likelihood estimation (MLE) was used in fitting the Weibull probability density function to the actual defect data, and regression analysis was used in identifying the relationship between the project characteristics and the Weibull parameters. The research model was validated by cross-validation.",2008,1, 3622,Predicting defects with program dependencies,"Software development is a complex and error-prone task. An important factor during the development of complex systems is the understanding of the dependencies that exist between different pieces of the code. In this paper, we show that for Windows Server 2003 dependency data can predict the defect-proneness of software elements. Since most dependencies of a component are already known in the design phase, our prediction models can support design decisions.",2009,1, 3623,Predicting Defects for Eclipse,"We have mapped defects from the bug database of Eclipse (one of the largest open-source projects) to source code locations. The resulting data set lists the number of pre- and post-release defects for every package and file in the Eclipse releases 2.0, 2.1, and 3.0. We additionally annotated the data with common complexity metrics. All data is publicly available and can serve as a benchmark for defect prediction models.",2007,1, 3624,Code construction and FPGA implementation of a low-error-floor multi-rate low-density Parity-check code decoder,"With their superior error correction capability, low-density parity-check (LDPC) codes have attracted wide-scale interest in the satellite communication, wireless communication, and storage fields. In the past, various structures of single code-rate LDPC decoders have been reported. However, to cover a wide range of service requirements and diverse interference conditions in wireless applications, LDPC decoders that can operate at both high and low code rates are desirable. In this paper, a 9-k code length multi-rate LDPC decoder architecture is presented and implemented on a Xilinx field-programmable gate array device. Using pin selection, three operating modes, namely, the irregular 1/2 code mode, the regular 5/8 code mode, and the regular 7/8 code mode, are supported. Furthermore, to suppress the error floor level, a characterization of the conditions for short cycles in an LDPC code matrix expanded from a small base matrix is presented, and a cycle elimination algorithm is developed to detect and break such short cycles. The effectiveness of the cycle elimination algorithm has been verified by both simulation and hardware measurements, which show that the error floor is suppressed to a much lower level without incurring any performance penalty.
The implemented decoder is tested in an experimental LDPC orthogonal frequency division multiplexing system and achieves the superior measured performance of a block error rate below 10^-7 at a signal-to-noise ratio of 1.8 dB.",2006,0, 3625,Two dimensional image reconstruction of log cross-section defect based on stress wave technique,"Wood is a material that is produced biologically in the growing tree, making it vulnerable to the attack of fungi. This will reduce the quality of wood, especially for logs. 2D image reconstruction contributes greatly to log cross-section defect testing and helps promote the utilization rate of wood resources. First, this paper studied the stress-wave computerized tomography technique, and introduced the straight-line tracing technique and the Algebraic Reconstruction Technique (ART) algorithm. Then, a medium model was constructed for numerical simulation analysis. The reconstruction of the medium model was conducted using the straight-line tracing ART algorithm, and the impact of the number of iterations on image reconstruction accuracy was analyzed. Finally, this paper validated the feasibility of two-dimensional image reconstruction of internal log defects using this method by physical model testing. Empirical and medium-model results showed that the convergence of the straight-line tracing ART algorithm was fast and the reconstructed image quality was good. The two-dimensional image reconstruction of internal log defects could basically be realized using the straight-line tracing algebraic reconstruction method, and the feasibility and practicality of the theory and technique proposed in this paper were validated by practical testing.",2010,0, 3626,Recognizing the Patterns of Wood Inner Defects Based on Wavelet Neural Networks,"Wood nondestructive detection technology is a new interdisciplinary technology, which has been successfully applied in wood production, wood processing, wood structure detection and many other fields. In this paper, ultrasonic nondestructive testing for wood defects is studied based on the energy spectrum variation of the ultrasonic signals by means of the wavelet transform, wavelet node coefficients and artificial neural networks. The original signals of different elm specimens are decomposed by wavelet packet, and the signal energy variation of the crunodes in the 5th-layer wavelet packet of both defect specimens and normal specimens without any defect is obtained. The experiment results show that the energy change of a defect wood specimen mostly depends on the degree of the wood defects, and the defect degree is proportional to the energy change. By comparing the energy variation of every signal crunode in the 5th-layer wavelet packet, it is clear that the variation of crunode (5,0) among the 32 crunodes is the biggest, and that this crunode contains most of the defect character information. The energy variations of the 32 crunodes in the 5th layer and the wavelet radix of crunode (5,0) are respectively regarded as the character inputs of the artificial neural networks (ANN). Two ANN networks are analyzed according to their ability to identify wood defect patterns through network training.
The identification results show that taking the wavelet radix of crunode (5,0) as the character input is more effective in recognizing the patterns of wood inner defects.",2007,0, 3627,Cesar-FD: An Effective Stateful Fault Detection Mechanism in Drug Discovery Grid,"Workflow management systems are widely accepted and used in the wide area network environment, especially in e-science application scenarios, to coordinate the operation of different functional components and to provide more powerful functions. The error-prone nature of the wide area network environment makes the fault-tolerance requirements of workflow management more and more urgent. In this paper, we propose Cesar-FD, a stateful fault detection mechanism, which builds up states related to the runtime and external environments of the workflow management system by aggregating multiple messages and provides more accurate notifications asynchronously. We demonstrate the use of this mechanism in the drug discovery grid environment through two use cases. We also show that it can be used to detect faulty situations more accurately.",2009,0, 3628,Improving Automatic Detection of Defects in Castings by Applying Wavelet Technique,"X-ray-based inspection systems are a well-accepted technique for identification and evaluation of internal defects in castings, such as cracks, porosities, and foreign inclusions. In this paper, some images showing typical internal defects in castings derived from an X-ray inspection system are processed by some traditional methods and the wavelet technique in order to facilitate automatic detection of these internal defects. An X-ray inspection system used to detect the internal defects of castings and the typical internal casting defects are first addressed. Second, the second-order derivative and morphology operations, row-by-row adaptive thresholding, and the two-dimensional (2-D) wavelet transform methods are described as potentially useful processing techniques. The first method can effectively detect air-holes and foreign-inclusion defects, and the second one can be suitable for detecting shrinkage cavities. Wavelet techniques, however, can effectively detect the three typical defects with a selected wavelet base and multiresolution levels. Results indicate that the 2-D wavelet transform is a powerful method to analyze images derived from X-ray inspection for automatically detecting typical internal defects in the casting.",2006,0, 3629,A cache-defect-aware code placement algorithm for improving the performance of processors,"Yield improvement through exploiting fault-free sections of defective chips is a well-known technique (Koren and Singh (1990) and Stapper et al. (1980)). The idea is to partition the circuitry of a chip in such a way that fault-free sections can function independently. Many fault tolerant techniques for improving the yield of processors with a cache memory have been proposed. In this paper, we propose a defect-aware code placement technique which offsets the performance degradation of a processor with a defective cache memory. To the best of our knowledge, this is the first compiler-based technique which offsets the performance degradation due to cache defects. Experiments demonstrate that the technique can compensate for the performance degradation even when 5% of cache lines are faulty.
In some cases the technique was able to offset the impact even in the presence of 25% faulty cache lines.",2005,0, 3630,Zzyzx: Scalable fault tolerance through Byzantine locking,"Zzyzx is a Byzantine fault-tolerant replicated state machine protocol that outperforms prior approaches and provides near-linear throughput scaling. Using a new technique called Byzantine Locking, Zzyzx allows a client to extract state from an underlying replicated state machine and access it via a second protocol specialized for use by a single client. This second protocol requires just one round-trip and 2f + 1 responsive servers; compared to Zyzzyva, this results in 39-43% lower response times and a factor of 2.2-2.9 higher throughput. Furthermore, the extracted state can be transferred to other servers, allowing non-overlapping sets of servers to manage different state. Thus, Zzyzx allows throughput to be scaled by adding servers when concurrent data sharing is not common. When data sharing is common, performance can match that of the underlying replicated state machine protocol.",2010,0, 3631,Using Faults-Slip-Through Metric as a Predictor of Fault-Proneness,"Background: The majority of software faults are present in a small number of modules, therefore accurate prediction of fault-prone modules helps improve software quality by focusing testing efforts on a subset of modules. Aims: This paper evaluates the use of the faults-slip-through (FST) metric as a potential predictor of fault-prone modules. Rather than predicting the fault-prone modules for the complete test phase, the prediction is done at the specific test levels of integration and system test. Method: We applied eight classification techniques to the task of identifying fault prone modules, representing a variety of approaches, including a standard statistical technique for classification (logistic regression), tree-structured classifiers (C4.5 and random forests), a Bayesian technique (Naive Bayes), machine-learning techniques (support vector machines and back-propagation artificial neural networks) and search-based techniques (genetic programming and artificial immune recognition systems) on FST data collected from two large industrial projects from the telecommunication domain. Results: Using area under the receiver operating characteristic (ROC) curve and the location of (PF, PD) pairs in the ROC space, the faults slip-through metric showed impressive results with the majority of the techniques for predicting fault-prone modules at both integration and system test levels. There were, however, no statistically significant differences between the performance of different techniques based on AUC, even though certain techniques were more consistent in the classification performance at the two test levels. Conclusions: We can conclude that the faults-slip-through metric is a potentially strong predictor of fault-proneness at integration and system test levels.
The faults-slip-through measurements interact in ways that are conveniently accounted for by the majority of the data mining techniques.",2010,1, 3632,The Application of Safety Simulation Technology in the Fault Diagnosis of the Chemical Process,"With the development of information and computational technology, the safety simulation technique is becoming more and more useful in the chemical process hazard assessment, hazard identification, and safety control system design and operating personnel training, etc. The fault diagnosis of the gravity water tank is studied by using dynamic simulation of HYSYS (Hyprotech System for Engineers). The simulation results show that the method need not design a problem-specific observer to estimate unmeasured state variables, and can identify and diagnose faults simultaneously as well. The parameters of the chemical process are updated via on-line correction.",2008,0, 3633,Comprehension and evaluation on significant error risk of CPA audit in the information-processing environment,"With the development of IT, kinds of information systems and software have been applied in the audited enterprises. That means the audited enterprises have set up their computer-based information-processing environment. In such an environment, the uncertainty factors of the certified public accountants (CPA) audit are increasing and the audit risk is enhanced. It is more suitable for the audit risk factors to be described in significant error risk (SER) and detection risk in the information-processing environment. The key of controlling the whole audit risk is to comprehend and evaluate the significant error risk objectively. This essay explains what the SER is and why the SER is the key in controlling CPA audit risk and introduces a quantitative method to help auditors to assess it objectively. Because risk has obviously grey and ambiguous characteristics, it will be helpful for auditors to use the method that involved grey principle components and fuzzy comprehensive evaluation to assess the SER. Only by understanding the risk factors in the information-processing environment of the audited enterprises deeply and evaluating the SER objectively can help auditors reach the audit aims and control the audit risk under an acceptable level.",2008,0, 3634,Airborne laser depth sounding: improvements in position- and depth estimates by local corrections for sea surface slope,"With the development of laser and data acquisition technology, the sounding density in laser depth sensing can be increased. When the distance between each laser shot is less than half the length of the sea surface wave, it is possible to estimate the local angle of the sea surface waves. The estimated slope angle is used to calculate the angular deflection of the light beam that penetrates the water surface. This allows a better estimation of the beam center position on the sea bottom than can be done by averaging of a flat sea surface. The possibility to improve the position and depth accuracy is examined with experimental data from a laser depth sounding system by comparing the repeatability of the depth estimations from different flights over the same area. The experimental data is collected from two different sites with bottom depths between 3 m and 9 m. The surface maximum wavelength and significant waveheight varies up to 8 m and 0.6 m respectively. The different measurements on a site are done using the same flight direction and a shot distance of approximately 1 m.
It is seen that the combined position and depth corrections for local surface slope angles can reduce the maximum depth difference between different measurements over the same area by up to 10 cm. The use of different spatial filtering of the water surface data along with different algorithms to calibrate the position and depth corrections is demonstrated",2000,0, 3635,Double Redundant Fault-Tolerance Service Routing Model in ESB,"With the development of the Service Oriented Architecture (SOA), the Enterprise Service Bus (ESB) is becoming more and more important in the management of mass services. The main function of it is service routing which focuses on delivery of messages among different services. At present, some routing patterns have been implemented to finish the messaging, but they are all static configuration service routing. Once one service fails in its operation, the whole service system will not be able to detect such a fault, so the whole business function will also fail finally. In order to solve this problem, we present a double redundant fault tolerant service routing model. This model has its own double redundant fault tolerant mechanism and algorithm to guarantee that if the original service fails, another replica service that has the same function will return the response message instead automatically. The service requester will receive the response message transparently without taking care where it comes from. Besides, the state of the failed service will be recorded for service management. At the end of this article, we evaluated the performance of the double redundant fault tolerant service routing model. Our analysis shows that, by importing double redundant fault tolerance, we can improve the fault-tolerant capability of the services routing apparently. It will solve the limitation of existent static service routing and ensure the reliability of messaging in SOA.",2009,0, 3636,1st workshop on fault-tolerance for HPC at extreme scale FTXS 2010,"With the emergence of many-core processors, accelerators, and alternative/heterogeneous architectures, the HPC community faces a new challenge: a scaling in number of processing elements that supersedes the historical trend of scaling in processor frequencies. The attendant increase in system complexity has first-order implications for fault tolerance. Mounting evidence invalidates traditional assumptions of HPC fault tolerance: faults are increasingly multiple-point instead of single-point and interdependent instead of independent; silent failures and silent data corruption are no longer rare enough to discount; stabilization time consumes a larger fraction of useful system lifetime, with failure rates projected to exceed one per hour on the largest systems; and application interrupt rates are apparently diverging from system failure rates.",2010,0,5408 3637,A Novel Fault Diagnosis System for MANET Based on Hybrid GA-BP Algorithm,"With the fast development of mobile ad hoc networks (MANETs), fault diagnosis has become a critical need to guarantee robust service for various applications. Many techniques have been suggested to solve this problem, but they still cannot satisfy the special need of MANETs. In this paper, we propose a new fault diagnosis system using hybrid GA-BP neural network.
The results of simulation demonstrate that the performance of this system is excellent.",2008,0, 3638,Fault Diagnosis Method Study on Automobile Electrical Controlled System Based on Fusing of ANN and D-S Evidence Theory,"With the improvement of automobile electric degree, more and more people begin to pay attention to the fault diagnosis method and theories of electric controlled system. The precision and accuracy of on-board diagnosis methods, which follow the OBDII standard and have been widely used at present, need further improvement. So, in this paper, taking the engine idling instability as the example, we put forward a multi-sensor diagnosis method which fuses neural network and D-S evidence theory; this method is mainly used for the on-board diagnosis system data's fusing process and analysis. The experimental result shows that this method can make use of various faults' redundant and complementary information sufficiently, and then promote the recognition ability obviously. With electric controlled technology widely used in automobile, the performance of automobile products has been promoted largely, but these also make fault diagnosis more difficult; traditional methods such as experience or simple instruments could not meet the flexible diagnosis demand. At present, the On-Board diagnosis with OBDII standard has been applied for electric controlled system's fault diagnosis, but it covers only 70%-80% of faults, and the diagnosis results are mainly presented by fault code or data flow, and still need others' help, and the accuracy degree still needs further improvement. Therefore, looking for a more precise and intelligent method for the electric controlled system becomes the key direction in automobile fault diagnosis field.",2008,0, 3639,Fault-tolerant static scheduling for grids,"While fault-tolerance is desirable for grid applications because of the distributed and dynamic nature of grid resources, it has seldom been considered in static scheduling. We present a fault-tolerant static scheduler for grid applications that uses task duplication and combines the advantages of static scheduling, namely no overhead for the fault-free case, and of dynamic scheduling, namely low overhead in case of a fault. We also give preliminary experimental results on our scheme.",2008,0, 3640,A novel stuck-at based method for transistor stuck-open fault diagnosis,"While most of the fault diagnosis tools are based on gate level fault models, for instance the stuck-at model, many faults are actually at the transistor level. The stuck-open fault is one example. In this paper we introduce a method which extends the use of available gate level stuck-at fault diagnosis tools to stuck-open fault diagnosis. The method transforms the transistor level circuit description to a gate level description where stuck-open faults are represented by stuck-at faults, so that the stuck-open faults can be diagnosed directly by any of the stuck-at fault diagnosis tools. The transformation is only performed on selected gates and thus has little extra computational cost. This method also applies to the diagnosis of multiple stuck-open faults within a gate.
Successful diagnosis results are presented using wafer test data and an internal diagnosis tool from Philips",2005,0, 3641,Architecting Fault Tolerant Systems,"While typical solutions focus on fault tolerance (and specifically, exception handling) during the design and implementation phases of the software life-cycle (e.g., Java and Windows NT exception handling), more recently the need for explicit exception handling solutions during the entire life cycle has been advocated by some researchers. Several solutions have been proposed for fault tolerance via exception handling at the software architecture and component levels. This paper describes how the two concepts of fault tolerance and software architectures have been integrated so far. It is structured in two parts (overview on fault tolerance and exception handling, and integrating fault tolerance into software architecture) and is based on a survey study on architecting fault tolerant systems where more than fifteen approaches have been analyzed and classified. This paper concludes by identifying those issues that remain still open and require deeper investigation.",2007,0, 3642,Why automation needs error avoidance guidelines and evaluation?,"Why do we need automation? Many technologies cite three major reasons: to eliminate the dull, the dangerous, and the dirty routines. It is difficult to argue with this answer, but many things are automated for other reasons - to simplify a complex task, to reduce the work force, to entertain - or simply because it can be done. However, none of the above matters relate to the findings of this paper, whereby automation leads to humans making mistakes.",2010,0, 3643,Distributed Intermittent Fault Diagnosis in Wireless Sensor Networks Using Clustering,"Wireless sensor networks (WSNs) are an important tool for monitoring distributed remote environments. Faults occurring to sensor nodes are common due to the sensor device itself and the harsh environment where the sensor nodes are deployed. It is well known that the distributed fault detection (DFD) scheme checks out the failed nodes by exchanging data and mutually testing among neighbor nodes in this network. But the fault detection accuracy of a DFD scheme would decrease rapidly when the number of neighbor nodes to be diagnosed is small and the node's failure ratio is high. In this paper, a DFD scheme using clustering is proposed which satisfies three important diagnosis properties such as consistency, completeness and accuracy. These properties have also been proved. Simulation results demonstrate that the proposed DFD scheme increases the fault detection accuracy in comparison with a DFD scheme without clustering.",2010,0, 3644,Fault Aware Wireless Sensor Networks,"Wireless Sensor Networks (WSNs) collect information about the physical environment aiding a wide variety of applications ranging from target detection to monitoring of harmful chemical gases. The drive to scale down the system size and cost has resulted in constraints in the quality of components. Reliable and accurate performance of sensors is necessary in critical applications. In this paper, we present statistical data analysis and signal processing techniques at the sensor node level to detect sensor faults and to eliminate noise. We also present the simulation of the proposed algorithm using real sensor data and demonstrate that the algorithm can distinguish between sensor faults and environmental events.
Furthermore, we describe the real-time implementation of the developed algorithm. The information regarding the faulty sensor is broadcast to all nodes and the central processing base station node thereby achieving autonomous node level operation and a complete fault aware system.",2007,0, 3645,Efficient QoS control using unequal error protection (UEP) in physical layer,"A new concept of unequal error protection (UEP) in the physical layer of a mobile cellular communication system is presented. With various potential extensible schemes of the proposed scheme, UEP for multiple applications having different QoS requirements (ftp type best effort application and video streaming type near-real time application are used for this paper) is described and associated computer simulation results are provided. The key idea of the proposed scheme is applying a different amount of error protection in the physical layer to each application in accordance with its QoS attributes. Through the computer simulations, we have shown that the proposed scheme offers increased QoS controllability for each application and enhanced goodput for delay-insensitive applications compared to the performance of the conventional systems without physical layer UEP.",2002,0, 3646,A new control method for single-phase PWM inverters to realize zero steady-state error and fast response,A new control method for single-phase full-bridge PWM inverters is proposed in this paper. The proposed controller has a capability to realize a zero steady-state output voltage error with fast response. The zero steady-state output voltage error is achieved by using a controller that is derived by using the virtual LC resonant circuit. Fast response is obtained by using a virtual resistance that is connected in parallel with the filter capacitor. The validity of the proposed method is verified by experimental results.,2003,0, 3647,Bridge fault diagnosis using stuck-at fault simulation,A new diagnostic fault simulator is described that diagnoses both feedback and nonfeedback bridge faults in combinational circuits while using information from fault simulation of single stuck-at faults. A realistic fault model is used which considers the existence of the Byzantine Generals problem. Sets representing nodes possibly involved in a defect are partitioned based on logic and fault simulation of failing vectors. The approach has been demonstrated for two-line bridge faults on several large combinational benchmark circuits containing Boolean primitives and has achieved over 98% accuracy for nonfeedback bridge faults and over 85% accuracy for feedback bridge faults with good diagnostic resolution,2000,0, 3648,A dynamic selective neural network ensemble method for fault diagnosis of steam turbine,"A new dynamic selective neural network ensemble method for fault diagnosis of steam turbine is proposed. Firstly, a great number of diverse BP neural network models are produced. Secondly, the error matrix is calculated and the K-nearest neighbor algorithm is used to predict the generalization errors of different neural networks on each testing sample. Thirdly, the individual networks whose generalization errors are in a threshold will be dynamically selected and a conditional generalized variance minimization method is used to choose the most suitable ensemble members again. Finally, the predictions of the selected neural networks with weak correlations are combined through majority voting. 
The practical applications in fault diagnosis of steam turbine show the proposed approach gives promising results on performance even with smaller learning samples, and it has higher accuracy and efficiency compared with other methods.",2009,0, 3649,High performance error concealment algorithm by motion vector refinement for MPEG-4 video,"A new error concealment algorithm is proposed using recursive motion vector refinement. The proposed method utilizes the top/bottom motion vectors of lost macroblocks in current and reference frames and refines motion vectors recursively. Simulation results based on the MPEG-4 codec present a superior subjective and objective performance of the proposed technique compared with conventional temporal concealment techniques.",2005,0, 3650,Low-error approximation of artificial neuron sigmoid function and its derivative,"A new low-error approximation of the sigmoid function based on the piecewise linear method is proposed. The approximation results, in comparison with those of the state-of-the-art, show the lowest mean absolute and relative errors.",2009,0, 3651,A March-based algorithm for location and full diagnosis of all unlinked static faults,"A new March-based fault location and full diagnosis algorithm is proposed for word-oriented static RAMs. A March algorithm of complexity 31N, where N is the number of memory words, is defined for fault detection and partial diagnosis. Then March-like algorithms of complexity 3N to 5N are used to locate the aggressor words of coupling faults (CF) and achieve full diagnosis for all unlinked static CFs. Another March-like algorithm of complexity 16logB+18, where B is the number of bits in the word, is applied to locate the aggressor bit in the aggressor word. A software tool is developed for automated generation of fault syndromes for detection, partial and full diagnosis of all static unlinked faults",2006,0, 3652,A new approach to fault-tolerant wormhole routing for mesh-connected parallel computers,"A new method for fault-tolerant routing in arbitrary dimensional meshes is introduced. The method was motivated by certain routing requirements of an initial design of the Blue Gene supercomputer project currently underway in IBM Research. Among the requirements were to provide deterministic deadlock free wormhole routing in a 3-dimensional mesh, in the presence of many faults (up to a few percent of the many thousands of nodes in the machine), while using two virtual channels. It was also desired to minimize the number of ""turns"" in each route, i.e., the number of times that the route changes direction. There has been much work on routing methods for meshes that route messages around faults or regions of faults. The new method is to declare certain good nodes to be ""lambs"": a lamb is used for routing but not processing, so a lamb is neither the source nor the destination of a message. The lambs are chosen so that every ""survivor node"", a node that is neither faulty nor a lamb, can reach every survivor node by at most two rounds of dimension-ordered (such as e-cube) routing. An algorithm for finding a set of lambs is presented. The results of simulations on 2D and 3D meshes of various sizes with various numbers of random node faults are given. For example, on a 32×32×32 3D mesh with 3% random faults, and using two rounds of e-cube routing for each message, the average number of lambs is less than 68, which is less than 7% of the number 983 of faults.
The computational complexity of finding the minimum number of lambs for a given fault set is also explored, and this problem is shown to be NP-hard for 3-dimensional meshes with two rounds of e-cube routing.",2002,0,36 3653,Analytical approach to internal fault simulation in power transformers based on fault-related incremental currents,"A new method for simulating faulted transformers is presented in this paper. Unlike other methods proposed in the literature, this method uses the data obtained from any sound transformer simulation to obtain the damaged condition by simply adding a set of calculated currents. These currents are obtained from the definition of the fault. The model is fully based on determining the incremental values exhibited by the currents in phases and lines from the prefault to the postfault condition. As a consequence, data obtained from simulation of the sound transformer may be readily used to define the damaged condition. The model is described for light and severe faults, introducing this latter feature as a further add-on feature to the low-level faults simulation. The technique avoids the use of complex routines and procedures devoted to specially simulate the internal fault. Of prompt application to relay testing, the proposed analytical model also gives an insight into the fault nature by means of the investigation of symmetrical components. In contrast with its low complexity, the method has shown to present large accuracy for simulating the fault performance.",2006,0, 3654,Study on prediction-correction homotopy method of tracking Hopf bifurcation point,"A new method named prediction-correction homotopy method is proposed to calculate Hopf bifurcation points associated with the parameter dependent differential algebraic equations (DAE) which are used to model power systems dynamics. It uses the secant prediction method to track the Hopf bifurcation homotopy path. Compared with the tangent prediction method, the computation load is much less because it is not related to the matrix inversion computation. At the same time, the automatic step-size control strategy ensures the calculation accuracy and speed to effectively implement the step-by-step correction. Finally, it is proved that this algorithm can be used accurately and effectively through WSCC 3-machine 9-bus system and New England 39-bus system.",2010,0, 3655,Real-time MRI for assessment of PET/CT attenuation correction protocols,"A geometrical mismatch between emission and transmission occurs in cardiac PET/CT because of respiratory and cardiac motions. Proposed solutions to this problem include fast CT during free breathing, with or without image alignment, and slow CT. To study this problem, including the variability of human breathing patterns and compliance with instructions, we performed real-time FISP MRI measurements of two free-breathing volunteer subjects. We have developed a method for simulating PET and CT coronal images from these image sequences, and for locating the left cardiac free wall semi-automatically. The PET geometry represents an average over the whole MR examination. The simulated CT geometry represented 28.8 mm axial extent and speeds of 83 mm/sec (fast) and 12 mm/sec (slow), representing a Sensation-64 CT scanner running at fast and slow settings. We considered 400 start times for each CT geometry. The free wall was located in each CT image and was compared with the location expected in PET. The error in a given CT scan depends on the geometry, i.e.
fast or slow scan, and also on the state of breathing at the time of the scan. A more accurate result can be realized by selecting the best-aligned of two or three fast CT scans. The average errors in fast, slow, best of 2, and best of 3 fast CT scans are respectively 5, 4, 4, and 3 mm. The errors in these scans have 98% likelihood of being less than, respectively, 14, 9, 9, and 7 mm.",2008,0, 3656,A taxonomy and catalog of runtime software-fault monitoring tools,"A goal of runtime software-fault monitoring is to observe software behavior to determine whether it complies with its intended behavior. Monitoring allows one to analyze and recover from detected faults, providing additional defense against catastrophic failure. Although runtime monitoring has been in use for over 30 years, there is renewed interest in its application to fault detection and recovery, largely because of the increasing complexity and ubiquitous nature of software systems. We present a taxonomy that developers and researchers can use to analyze and differentiate recent developments in runtime software fault-monitoring approaches. The taxonomy categorizes the various runtime monitoring research by classifying the elements that are considered essential for building a monitoring system, i.e., the specification language used to define properties; the monitoring mechanism that oversees the program's execution; and the event handler that captures and communicates monitoring results. After describing the taxonomy, the paper presents the classification of the software-fault monitoring systems described in the literature.",2004,0, 3657,Study on Data Mining for Grounding Fault Line Selection in 6kV Ineffectively Grounded System of Coal Mine,"A great amount of fault wave has been recorded by the devices for detecting phase-to-ground faults in ineffectively grounded systems. However, a better method hasn't been found for effectively taking advantage of these data to improve the result of fault line selection. Data mining techniques can be used for fault line selection in ineffectively grounded system to gain knowledge from the existing data and to improve the technique of fault line selection. This paper briefly describes the principles, methods and implementation of data mining techniques, classifies the fault samples of ineffectively grounded systems by using clustering analysis method, employs different fault line selection methods according to the types of faults, and consequently provides a set of criteria for modeling of typical ineffectively grounded systems and verifying the validity of real-time fault line selections. The validity of the methods has been convinced by the calculation using the data obtained from the real performance of a substation in coal mine. It has been shown to be promising to employ the data mining techniques in ineffectively grounded systems fault detection. This paper provides very good methods for resolving the difficulties with onsite tests, enhancing the techniques of fault line selection and establishing the fault detection management systems.",2010,0, 3658,Design of a video system to detect and sort the faults of cigarette package,"A hardware platform detecting and sorting the faults of cigarette package is designed by using the technology of machine vision. Algorithm and systemic software are also designed according to its characteristics.
With stability and accuracy it is of some guiding value to the high speed on-line package detection.",2008,0, 3659,"GPR studies in the piano di pezza area of the ovindoli-pezza fault, Central Apennines, Italy: Extending palaeoseismic trench investigations with high resolution GPR profiling","A high-resolution 200 MHz frequency GPR profile with 0.1 m station spacing was collected using a Pulse EKKO 100 system, adjacent to a trench site previously excavated and interpreted by Pantosti et al., 1996. The profile intersected the fault scarp on one of the alluvial fans on the northern slope of the Piano di Pezza; this fault scarp was thought to be associated with repeated earthquake activity along the Ovindoli-Pezza fault. The trench excavation exposed a 4 metre sedimentary section formed principally of palaeosols and alluvial fan gravel dipping gently toward the south west, into the Piano di Pezza basin. The deposits exposed in the trench are strongly deformed in a 7 metre complex fault zone, located beneath the main topographic scarp. The scarp is 6.8 metres high at this location. Two palaeo-earthquake events were recognised in the trench, the most recent (circa. 1300 A.D.?) with a vertical displacement of 2.8 to 3.0 metres, and thought to be responsible for the current scarp formation, surface rupture, and deformation of the sedimentary section exposed in the upper part of the trench. The second event (circa. 1900 B.C.?) is thought to be represented in the lower part of the trench, and estimated to have a minimum vertical displacement of 2.5 metres. The GPR data were processed using Win EKKO radar data processing software and included the application of SEC gain and time-depth conversion utilising a field derived CMP analysis which provided a velocity of 0.06 m/ns. In the upper part of the GPR section a strong continuous radar reflector, thought to represent the top of the carbonate scarp, is clearly imaged beneath 1.5 m of surficial cover, and is abruptly down faulted vertically by approximately 2.8 metres, to the south west. The complex, 7-metre wide deformation zone, recognised in the trench and associated with the most recent catastrophic event, is clearly imaged in the hanging wall of the fault and exhibits a diminishing deformation intensity towards the south west, terminating with a vertical sequence of antithetic faults with a local cumulative offset of 1.5 metres, thus forming a low angle, south west tilted graben structure. In the lower part of the GPR section, the down thrown carbonate surface forms the north eastern margin of a rotated basement block, above which an asymmetric graben has developed within the sedimentary section, with maximum antithetic fault offset of the order of 0.6 metres. This graben is imaged beneath the sedimentary units exposed by the trenching operations, and is interpreted as representing the base of an approximately 12 metres cumulative vertical offset from the original carbonate surface. The GPR data has thus provided a framework for estimating the extent of extensional and episodic catastrophic activity along the fault, and elucidated the geometric relationship between the faulting in response to recent tectonic events.",2004,0, 3660,Fault Diagnosis Expert System for Broadcasting Station Interface Circuit Board on VXI Bus,"A high-tech information electronic equipment of some given type is designed in order to proceed automatically fault detection and improve the efficiency and accuracy of diagnosis.
This thesis, which is a part of the program, introduces the research of algorithm of fault diagnosis expert system of the broadcasting station of the equipment and algorithm realization and example proving on the hardware platform. It's quicker and more convenient to locate fault on the circuit boards on this equipment. It's proved that this expert system can solve the problems of high cost and long intervals of maintenance and keep the equipment in a stable status",2006,0, 3661,Federate Fault Tolerance in HLA-Based Simulation,"A large scale HLA-based simulation (federation) is composed of a large number of simulation components (federates), which may be developed by different participants and executed at different locations. These federates are subject to failures due to various reasons. What is worse, the risk of federation failure increases with the number of federates in the federation. In this paper, a fault tolerance mechanism is proposed to tolerate the crash-stop failures of federates. By exploiting the decoupled federate architecture, federate failures can be masked from the federation and recovery can take place without interrupting the executions of other federates. A basic state recovery protocol is first proposed to recover the state of the failed federate relying on the checkpoint and message logging taken before the failure. Then, an optimized protocol is further developed to accelerate the state recovery procedure. Experiments are carried out to verify that the proposed mechanism provides correct failure recovery. The experimental results also indicate that the optimized protocol can outperform the basic one considerably.",2010,0, 3662,Research of Fault Diagnosis Method Based on Immune Neural Network,"A fault diagnosis method based on immune neural network is proposed in this paper. In this method the weights of the neural network are encoded as the antibody, and the network error is considered as the antigen. Firstly the weights of the network are searched globally using immune algorithm, then searched locally using BP algorithm. The simulation is done through the experiment of the pump-jack, and the diagnosis method proposed in this paper is compared with the fault diagnosis method based on BP neural network. The result shows that the fault diagnosis method based on immune neural network has the capability in escaping local minimum and improving the algorithm speed.",2009,0, 3663,Design of a Fault Diagnostic System for an ABS Based on Dual-CPU Structure,"A fault diagnostic system with dual-CPU is designed and applied to the pneumatic anti-lock braking system (ABS) developed independently. ABS faults are classified according to the locations of possible faults through analysis of the basic configurations and operation principles of ABS. Diagnostic solutions for possible faults are raised and the corresponding diagnostic circuits and codes are designed. Fault diagnosis ISO 9141 protocol is adopted and corresponding communication circuits and interfaces are designed to transmit fault codes of ABS. A simple off-board tester is used to communicate with electrical control unit (ECU) and it attains to show, read, clear fault codes and look up information of the codes. It performs well according to road tests in various conditions.",2006,0, 3664,An improved on-line monitoring technique for a fault-tolerant computing node,"A fault tolerant computing node is an indispensable component of reliable distributed computer systems devised for life critical applications.
The on-line monitoring technique is frequently used for error detection in such systems and assumes the use of an external hardware monitor connected to the system bus. It does the control flow checking based on signatures assigned to each block of an application. However, this technique cannot be used with contemporary processors with the built-in cache. Therefore, an improved on-line monitoring technique which overcomes this problem is proposed in the paper.",2004,0, 3665,Design and implementation of a dual-server fault-tolerant real-time processing system,"A fault-tolerant computer system is one such that if problems arise then the system itself has the capacity to find and correct or eliminate them and ensure that the whole system runs normally. The dual-server real-time processing system discussed in this paper is a dual-module hardware redundant system. This system has two Sun workstations as dual-server. First we review the design of the whole system, then we focus on discussing how to resolve the problems of communication, synchronization, and problem detection and correction between two Sun workstations. The result shows that this system resolves those problems existing in the dual-server redundant system. This system can be widely used in many real-time processing fields such as power supply scheduling, technical production control and so on",2000,0, 3666,Fault-tolerant earliest deadline first scheduling with resource reclaim,"A fault-tolerant real-time scheduling algorithm through time redundancy, with a schedulability bound based on the worst execution time of tasks and the time reserved for fault-tolerant operation of tasks, has a high rejection rate and low resource utilization. This paper presents fault-tolerant earliest deadline scheduling with resource reclaim, based on FT-EDF, to improve resource utilization and task throughput. This algorithm makes use of the fact that the actual execution time of the task is lower than the worst execution time, to reclaim and reuse the resource released by the completed task.",2002,0, 3667,Actuator fault compensation for a helicopter model,"A fault-tolerant system is one that can continue its operation without significant impact on performance in the presence of hardware and/or software errors. In this paper, the design of a fault-tolerant flight controller to control a UH-60 helicopter is investigated. A 9th-order state space representation of the helicopter model operating at the forward mode with 80 knots is presented; then a fault-tolerant optimal feedback controller is designed and tested.",2003,0, 3668,"Animation can show only the presence of errors, never their absence","A formal specification animator executes and interprets traces on a specification. Similar to software testing, animation can only show the presence of errors, never their absence. However, animation is a powerful means of finding errors, and it is important that we adequately exercise a specification when we animate it. The paper outlines a systematic approach to the animation of formal specifications. We demonstrate the method on a small example, and then discuss its application to a non-trivial, system-level specification.
Our aim is to provide a method for planned, documented and maintainable animation of specifications, so that we can achieve a high level of coverage, evaluate the adequacy of the animation, and repeat the process at a later time",2001,0, 3669,Hierarchical Fault Diagnosis: Application to an Ozone Plant,"A framework for diagnosis in hierarchical finite-state machines is presented and applied to an ozone plant. In this approach, the model of the system is broken into simpler substructures called D-holons. At any instant, instead of the complete system model, only the D-holons associated with the ongoing phase of operation are used for diagnosis, which reduces memory requirement. Furthermore, within the above setup, a semimodular diagnosis method is presented and used to reduce design computations. Following the proposed framework, a diagnosis system is designed for the ozone plant. It is shown that the proposed approach significantly reduces the complexity of constructing and storing the diagnosis system.",2007,0,3670 3670,Hierarchical fault diagnosis: application to an ozone plant,"A framework for online passive fault diagnosis in hierarchical finite-state machines (HFSM) is presented and applied to an ozone generation plant. This approach takes advantage of system structure to reduce computational complexity. Here, the system model is broken into simpler substructures called D-holons. A diagnoser is constructed for each D-holon. At any given time, only a subset of the diagnosers are active, and as a result, instead of the entire model of the system, only the models of D-holons associated with active diagnosers are used for diagnosis. Furthermore, a set of sufficient conditions is provided under which the diagnosis process becomes semi-modular. The ozone generation plant under study, consisting of two units, is modeled as an HFSM. It is shown that a proper choice of sensors results in modular diagnosis (one diagnoser for each unit). Following the proposed framework, a hierarchical fault diagnosis system is designed for the plant. It is shown that the proposed approach significantly reduces the complexity of constructing and storing the diagnosis system.",2004,0, 3671,An efficient automatic redeye detection and correction algorithm,"A fully automatic redeye detection and correction algorithm is presented to address the redeye artifacts in digital photos. The algorithm contains a redeye detection part and a correction part. The detection part is modeled as a feature based object detection problem. Adaboost is used to simultaneously select features and train the classifier. A new feature set is designed to address the orientation-dependency problem associated with the Haar-like features commonly used for object detection design. For each detected redeye, a correction algorithm is applied to do adaptive desaturation and darkening over the redeye region.",2004,0, 3672,Faults Analysis on Regulating System Fed by Voltage Type Inverter and the Strategies of System Tolerance,A new method of analysis on system faults based on software without any increase of hardware is brought up. The faults characters are obtained. Different faults are sorted according to the characteristics of driving system fed by voltage type inverter.
Experiments show the soft comprehensive protection can let the system operate within the limits of system tolerance,2006,0, 3673,The research of the fault types recognition in transformer by differential protection based on LIBSVM,"A new method to distinguish various fault types of transformer by differential protection based on LIBSVM is discussed. This paper uses PSCAD/EMTDC to build transformer and differential protection model which is applied to simulate various transformer fault types; the simulation data and characteristic value extracted are preprocessed; then fault identification model based on LIBSVM is established. This paper finds the optimal settings by different kernel functions and parameters through a lot of experiments. Experimental results show that LIBSVM can identify various fault types correctly, it verifies the accuracy of transformer fault types recognition by differential protection based on LIBSVM.",2010,0, 3674,Transformer Fault Prediction Based on GA and Variable Weight Gray Verhulst Model,"A new prediction method combined variable weight Gray Verhulst model and gray integrated relation grade was proposed in this paper to solve the problem of power transformer fault prediction. Because power transformer gases concentration sequence was S-shaped, Gray Verhulst model was chosen to forecast the gases concentrations. Variable weight Gray Verhulst model was proposed based on 2 improved Gray Verhulst models with 2 difference select rules of parameter p in background function. Genetic algorithm chose parameter p for variable weight Gray Verhulst model. Power transformer fault diagnosis using gray integrated relation grade had 93.7% diagnostic accuracy. Experiments on power transformer fault prediction show that Variable weight Gray Verhulst model had higher prediction accuracy, and the fault prediction method proposed in this paper has the same forecasting result with the true values and is reliable and effective.",2009,0, 3675,A Digital Image Scrambling Method Based on AES and Error-Correcting Code,"A new scrambling method of true color images based on AES algorithm is proposed. It takes the structure of true color images into full consideration, and decreases the complexity. The analytical method of cryptography is first used in this paper to analyze the security of disordered images; at the same time, an error-correcting code is designed to prevent passive attacks based on bit modification. Experimental results show that the method is safe, efficient and has great ability of error correcting.",2008,0, 3676,Stepped hairpin shape defected ground structure (SH-DGS) study,"A new shape of defected ground structure (DGS) is introduced. The advantage over the conventional dumb-bell shape DGS is that the physical dimensions and frequency response can be directly determined using its equivalent LC circuit. The equations can be extracted from Stepped Impedance Hairpin Resonator (SIHR) low pass filter study. They are similar in terms of electrical characteristics. Conventional dumb-bell DGS uses an iterative trial and error method for the same purpose. It is time consuming and tends to lead to inaccurate design. This new DGS simplifies and speeds up the design process significantly. Measurement and simulation data shows good relationship except at high frequency whereby the measured circuit suffers greater transmission loss.
Higher dielectric constant material is suggested to reduce this effect.",2007,0, 3677,A new software-based technique for low-cost fault-tolerant application,A new software approach providing fault detection and correction capabilities by using software techniques is described. The approach is suitable for developing commercial-off-the-shelf processor-based architectures for safety-critical applications. Data and code duplications are exploited to provide fault detection and correction capabilities. Preliminary results coming from fault injection experiments support the effectiveness of the method.,2003,0, 3678,Spatial Error Concealment Using Directional Extrapolation,"A new spatial error concealment algorithm based on directional extrapolation is presented. Practical communication channels are not error free. Data corruption or loss may be caused by network congestion, signal fade, various noise, etc. Error concealment is an important and practical scheme to solve such problems. Spatial error concealment is aimed at masking the effect of missing blocks by making use of the properties of smoothness and edge continuity in the spatial domain of an image or intra frame. This paper presents a spatial directional extrapolation algorithm that addresses concealment of missing image blocks on a pixel-by-pixel basis. After determining the direction of edge which traverses the to-be-recovered pixel, the pixel is recovered by extrapolation along the corresponding direction. Experimental results demonstrate the performance improvement of the proposed method over conventional algorithms.",2005,0, 3679,A Theoretical Framework for Probability Coefficients: A New Methodology for Fault Detection,"A new spectral method that eliminates the need of inner product evaluations in the determination of the signature of a combinational circuit realizing a given Boolean function is described. The signature is obtained using probability coefficients of the function instead of conventional spectral signature. Theoretical relations for achievable computational advantage in terms of required additions in computing all 2^n probability coefficients of ""n"" variable function have been developed. It is shown that for n ≥ 5, only 50% additions are needed to compute all probability coefficients as compared to spectral coefficients. The fault detection techniques based on spectral signature can be used with probability signature also to offer computational advantage.",2008,0, 3680,Fault tolerant computation of large inner products,A new technique for applying fault tolerance to modulus replication RNS computations by adding redundancy to the independent computational channels is introduced. This technique provides a low-overhead solution to fault tolerant large inner product computations,2001,0, 3681,Tunable bandpass microwave filters based on defect commandable photonic bandgap waveguides,A new type of tunable filter based on a commandable defect in the bandgap of a periodic CPW structure is proposed. The defect level is tuned either mechanically by adding a covering slab over the device or electrically by polarising diodes located at the defect. The validity of this concept is experimentally demonstrated at 4 GHz.
This kind of filter is well suited for applications in the 10 to 60 GHz frequency range.",2003,0, 3682,LMS-based calibration of pipelined ADCs including linear and nonlinear errors,"A least mean square (LMS) based calibration algorithm is proposed to calibrate most known error sources in 1.5 bit/stage pipelined ADCs, known to be immune to moderate comparator offsets. The error sources include linear gain errors, reference voltage errors, systematic offset errors and amplifier non-linear errors of each pipeline stage. LMS is used to estimate the error parameters in the digital domain. After estimation, the proposed algorithm calibrates the pipelined ADC using the estimated parameter errors. Simulation results show that the proposed algorithm can improve the ENOB from 6.6 bits to 13.9 bits for a 14 bit 1.5 bits/stage pipelined ADC.",2007,0, 3683,A fault tolerant approach in cluster computing system,"A long-term trend in high performance computing is the increasing number of nodes in parallel computing platforms, which entails a higher failure probability. Hence, fault tolerance becomes a key property for parallel applications running on parallel computing systems. The message passing interface (MPI) is currently the programming paradigm and communication library most commonly used on parallel computing platforms. MPI applications may be stopped at any time during their execution due to an unpredictable failure. In order to avoid complete restarts of an MPI application because of only one failure, a fault tolerant MPI implementation is essential. In this paper, we propose a fault tolerant approach in cluster computing system. Our approach is based on reassignment of tasks to the remaining system and message logging is used for message losses. This system consists of two main parts, failure diagnosis and failure recovery. Failure diagnosis is the detection of a failure and failure recovery is the action needed to take over the workload of a failed component. This fault tolerant approach is implemented as an extension of the message passing interface.",2008,0, 3684,Are There Language Specific Bug Patterns? Results Obtained from a Case Study Using Mozilla,"A lot of information can be obtained from configuration management systems and post-release bug databases like Bugzilla. In this paper we focus on the question whether there are language specific bug patterns in large programs. For this purpose we implemented a system for extracting the necessary information from the Mozilla project files. A comparison of the extracted information with respect to the programming language showed that there are bug patterns specific to programming languages. In particular we found that Java files of the Mozilla project are less error prone than C and C++ files. Moreover, we found out that the bug lifetime when using Java was almost double the lifetime of bugs in C or C++ files.",2009,0, 3685,Odometry error correction by sensor fusion for autonomous mobile robot navigation,A low cost navigation system is developed which fuses inertial sensor information provided by gyroscopes and odometry information. Both Kalman filters and a rule set based fusion strategy are used to correct the odometry errors in orientation for mobile robots. The fusion system even improves orientation estimation using a gyroscope with an extremely high drift rate. This navigation system is implemented on the autonomous mobile robot B21.
The performance of this fusion method is tested on different surfaces and orientation errors do not increase as the robot travels along. Experimental results demonstrate that even in harsh environments with obstacles the performance of the designed navigation system reduces odometry tracking errors. The effects of nonsystematic odometry errors caused by unpredictable large bumps or objects encountered on the floor are reduced,2001,0, 3686,Recovery and performance balance of a COTS DBMS in the presence of operator faults,"A major cause of failures in large database management systems (DBMS) is operator faults. Although most of the complex DBMS have comprehensive recovery mechanisms, the effectiveness of these mechanisms is difficult to characterize. On the other hand, the tuning of a large database is very complex and database administrators tend to concentrate on performance tuning and disregard the recovery mechanisms. Above all, database administrators seldom have feedback on how good a given configuration is concerning recovery. This paper proposes an experimental approach to characterize both the performance and the recoverability in DBMS. Our approach is presented through a concrete example of benchmarking the performance and recovery of an Oracle DBMS running the standard TPC-C benchmark, extended to include two new elements: a fault load based on operator faults and measures related to recoverability. A classification of operator faults in DBMS is proposed. The paper ends with the discussion of the results and the proposal of guidelines to help database administrators in finding the balance between performance and recovery tuning.",2002,0, 3687,Modeling open source software bugs with complex networks,"A major concern in open source software development is regarding bugs emerging during the life cycle of a software project. Understanding the topological characteristics of interrelated bugs can provide useful insights into software project management and potentially facilitate the development of new complex network models. In this paper, we analyze the bug network in Gentoo Linux, one of the most popular Linux distributions. We model software bugs as nodes and the blocking relationships among them as edges. As the resulting Gentoo bug network cannot be adequately explained by some commonly-used complex network models, we propose a new model, which can better explain this network.",2010,0, 3688,Dynamic error correction of a digitizer for time-domain metrology,"A method for numerical correction of distortion in a digitizer used for metrology applications is described. Investigation of the digitizer's error behavior in the phase plane leads to the development of an analytic error model that describes the digitizer's distortion behavior. Of particular significance is the model's ability to describe nonlinear error in the fundamental spectral component manifested as amplitude and frequency-dependent gain and phase error. When fitted only to the harmonic distortion content of the digitizer's output data, the model generates an amount of fundamental that correctly accounts for the error in the digitizer's gain that is not due to linear system response. The model is therefore able to improve not just the total harmonic distortion (THD) performance of the digitizer but its ac root mean square measurement accuracy as well. At 1 MHz, the model linearizes the digitizer to 70 μV/V over a range of 1 to 8 V and reduces harmonic distortion by >20 dB.
It is believed that this is the first time that results of this nature have been reported in the literature.",2004,0, 3689,Development and control of an integrated and distributed inverter for a fault tolerant five-phase switched reluctance traction drive,"A concept of an integrated and distributed inverter for switched reluctance machines is introduced. The application at hand is an outer-rotor direct drive designed for railway traction applications. A five-phase switched reluctance machine (SRM) was developed and is used to demonstrate the function of the integrated and distributed inverter. The distribution is achieved by supplying each phase coil with its own modular inverter. Each inverter module is placed evenly around the end of the stator stack next to its dedicated coil. This increases the redundancy of the drive significantly. The likelihood of phase-to-phase faults is reduced, because no overlapping end-turns are necessary. Also, the integration of machine and inverter is simplified, because the semiconductors can be evenly distributed around the machine. The concept reduces the number of terminals between drive and vehicle to communication, power supply and cooling, independent of the number of machine phases. With the integrated and distributed inverter new control strategies can be developed to influence machine vibration and radiated noise. In this paper the design of the prototype, the direct torque control of the five-phase machine and the behavior in case of a fault inside a module are analyzed.",2010,0, 3690,Evaluation on fault current limiting and power system stabilization by a SMES with a series phase compensator,"A configuration of a combined controller of a SMES with a series phase compensator and its control scheme for power system stabilization and fault current limiting have been proposed. In this paper, the effectiveness of the proposed controller is evaluated from the viewpoints of current limiting, steady-state stability improvement and transient stability improvement based on several numerical analyses including PSCAD/EMTDC simulations",2001,0, 3691,Towards Decentralized Fault Detection in UAV Formations,"A decentralized fault detection (DFD) scheme warranting mission integrity for leader-to-follower formations of almost-lighter-than-air vehicles (ALTAVs) is proposed. Such DFD allows compensating for concurrent communication network and component-level faults that may significantly impact mission integrity despite the presence of on-board fault detection, isolation and recovery for actuator and sensor faults. The formation of unmanned ALTAVs is stabilized beforehand by a distributed formation guidance scheme that uses neighboring vehicle information. Each ALTAV is equipped with an observer based on simplified models of the neighboring ALTAVs. The observer and the guidance law use the same local information. Residues associated with neighboring vehicle states carry the information as to their normal or abnormal flight conditions, thereby determining whether or not there is a fault in the neighboring vehicles. The observer design is formulated as a mixed H2/H∞ controller synthesis compensating for exogenous disturbance estimates and accounting for noisy measurements.
The minimum detectable fault and the associated threshold are shown to be increasing functions of the relative distance between the formation leader and the follower vehicle at hand, which means that the value of the threshold may vary from one follower to another depending on its relative distance from the leader. Simulations based on high-fidelity nonlinear 6DOF ALTAV models show that the proposed team-level DFD scheme combined with a simple guidance adaptation technique enables quick hard fault/failure detection while preserving leader-to-follower formation geometry requirements.",2007,0, 3692,Natural Language Processing Based Detection of Duplicate Defect Patterns,"A defect pattern repository collects different kinds of defect patterns, which are general descriptions of the characteristics of commonly occurring software code defects. Defect patterns can be widely used by programmers, static defect analysis tools, and even runtime verification. Following the idea of web 2.0, defect pattern repositories allow these users to submit defect patterns they found. However, submission of duplicate patterns would lead to redundancy in the repository. This paper introduces an approach to suggest potential duplicates based on natural language processing. Our approach first computes field similarities based on the Vector Space Model, then employs Information Entropy to determine the field importance, and next combines the field similarities to form the final defect pattern similarity. Two strategies are introduced to make our approach adaptive to special situations. Finally, groups of duplicates are obtained by adopting Hierarchical Clustering. Evaluation indicates that our approach could detect most of the actual duplicates (72% in our experiment) in the repository.",2010,0, 3693,Error analyses of a 3-phase electricity meter with class-0.5 accuracy,"A design of a 3-phase TOU electricity meter with IEC687 class 0.5 accuracy is proposed. The emphasis is upon its error analyses. Current transformers and voltage dividers are chosen to be the current and voltage transducers respectively. Two TI ADS8345 16-bit A/D converters are used to digitize 3-phase current and 3-phase voltage signals independently. The Dallas DS89C450 microcontroller processes data with 16-bit fixed-point calculation. Experimental results show that the accuracy of the meter lies within ranges specified in the IEC687 standard. Error analyses are compared with the measured results, and show that the main cause of error is variation of the phase shift of the CT.",2004,0, 3694,Research on smart sensor network in fault diagnose system,"A design of a smart sensor network based on a TMS320F2812 digital signal processor, ADXL001 acceleration sensor, AD7656 A/D converter, DS18B20 and CAN bus was introduced to establish an embedded system which can monitor the vehicle's working condition. The software and hardware of the system were described. By implementing wavelet de-noising and fault diagnosis programs in the DSP, the SNR was improved. Through the software and hardware design, the sensor's self-check and self-adjust functions were achieved. The embedded computer's workload was reduced. This system is suited to the inspection of more and more complicated vehicle systems.
Compared with traditional condition monitoring systems, this system not only has a smaller volume, but also higher reliability and intelligence.",2010,0, 3695,Automatic red-eye detection and correction,""Red-eye"" is a phenomenon that causes the eyes of flash photography subjects to appear unnaturally reddish in color. Though commercial solutions exist for red-eye correction, all of them require some measure of user intervention. A method is presented to automatically detect and correct redeye in digital images. First, faces are detected with a cascade of multi-scale classifiers. The red-eye pixels are then located with several refining masks computed over the facial region. The masks are created by thresholding per-pixel metrics, designed to detect red-eye artifacts. Once the redeye pixels have been found, the redness is attenuated with a tapered color desaturation. A detector implemented with this system corrected 95% of the red-eye artifacts in 200 tested images.",2002,0, 3696,Superview 3D image warping for visibility gap errors,"3D image warping is an image-based rendering technique that allows nearby views of a 3D environment to be efficiently extrapolated from a single 2D reference image that includes depth information. This method allows the real-time rendering of a virtual environment on low powered consumer devices such as PDAs, cellular phones, e-Books, etc. However, one major drawback with this method is the frequent occurrence of visibility gap errors caused by a limited field of view. Visibility gaps are regions for which the new viewpoint has no information and often occur during a viewpoint rotation or lateral translation. The resulting warped image contains significant areas of unsightly holes. This paper proposes using a reference image larger than the display size with a greater field of view (superview) to reduce or eliminate visibility gap errors. However, warping a larger reference image increases rendering time. In order to address the problem of the reduced frame-rate, acceleration methods such as image sub-sampling, pixel averaging and clipping are presented. It is concluded that use of an oversize reference image in conjunction with sub-sampling and clipping of the reference image produces a better quality warped image for a given frame-rate.",2003,0, 3697,Defect avoidance in a 3-D heterogeneous sensor [acoustic/seismic/active pixel/IR imaging sensor array],"A 3D heterogeneous sensor using a stacked chip is investigated. Optical active pixel sensor (APS) and IR bolometer detectors are combined to create a multispectral pixel for aligned color and infrared imaging. An acoustic and seismic micromachined sensor array obtains sound spectral and directional information. For the optical/IR imagers, fault tolerant APS cells and software methods are used for defect avoidance. For the acoustic/seismic array, spare detectors are combined with signal processing to compensate for changes in detector positions due to defects. The sensor fault distribution in turn impacts the defect avoidance in the fault tolerant TESH networked processors analyzing the sensor array.",2004,0, 3698,A new design technique for optimum logic filter using matrix type-B error correcting coding,"A brief examination in digital communications shows that the receiver has to decide and distinguish between a number of discrete signals in background noise. For this case an optimum filter is designed and several techniques have been developed, such as
the MAP (Maximum a posteriori probability), the maximum likelihood - ML, the matched filter, the Kalman filter, etc. In this paper we introduce a new design technique, which we call the optimum logic filter (OLF), using sophisticated matrix type-B error correcting coding.",2005,0, 3699,A phase correction technique applied to 700MHz 6GHz complex demodulators in multi-band wireless systems,"A broadband IQ phase imbalance correction technique applied to complex demodulators is presented. A phase correction range of ±5° and an accuracy of 0.3° were obtained with 4-bit resolution. The applications of the broadband IQ phase imbalance correction scheme are compensation of the phase errors originating in the LO poly-phase networks and image rejection mixers with image rejection above 40 dB. Power consumptions of 14 mW and 20 mW were obtained for the IQ phase imbalance correction technique in the 0.7 GHz - 4 GHz and 4 GHz - 6 GHz frequency ranges, respectively, in 0.18 μm CMOS technology.",2008,0, 3700,Using a Gigabit Ethernet cluster as a distributed disk array with multiple fault tolerance,"A cluster of PCs can be seen as a collection of networked low cost disks; such a collection can be operated by proper software so as to provide the abstraction of a single, larger block device. By adding suitable data redundancy, such a disk collection as a whole could act as a single, highly fault tolerant, distributed RAID device, providing capacity and reliability along with the convenient price/performance typical of commodity clusters. We report on the design and performance of DRAID, a distributed RAID prototype running on a Gigabit Ethernet cluster of PCs. DRAID offers storage services under a single I/O space (SIOS) block device abstraction. The SIOS feature implies that the storage space is accessible by each of the stations in the cluster, rather than through one or a few end-points, with a potentially higher aggregate I/O bandwidth and better suitability to parallel I/O.",2003,0, 3701,Experimental study of inter-laminar core fault detection techniques based on low flux core excitation,"A comparison between two inter-laminar stator core insulation failure detection techniques for generators based on low flux core excitation is presented in this paper. The two techniques under consideration are: 1) the iron core probe based method developed recently and 2) the existing air core probe based method. A qualitative comparison of the two techniques is presented along with an experimental comparison on a 120 MW generator stator core. The test results are compared in terms of fault detection sensitivity, signal to noise ratio, and ease of interpretation, which are the main requirements for stator core inspection. In addition to the comparison, the performance of the iron core probe technique for machines with short wedge depression depth is presented along with the recent improvements in the algorithm. It is shown that the main requirements for stator core inspection are significantly enhanced with the new iron core probe-based core fault detector.",2005,0,1876 3702,Neural network methods for error canceling in human-machine manipulation,"A neural network technique is employed to cancel hand motion error during microsurgery. A cascade-correlation neural network trained via extended Kalman filtering was tested on 15 recordings of hand movement collected from 4 surgeons. The neural network was trained to output the surgeon's desired motion, suppressing erroneous components.
In experiments this technique reduced the root mean square error (rmse) of the erroneous motion by an average of 39.5%. This was 9.6% greater than the reduction achieved in earlier work, which followed the complementary approach of estimating the error rather than the desired component. Preliminary results are also presented from tests in which training and testing data were taken from different surgeons.",2001,0, 3703,Digital processing of touch signal - error probability,"A new algorithm for digital processing of a touch signal is proposed. We consider the calculation of the error probability in the process of deciding on the existence of the optical infrared beam between the infrared transmitter and receiver. The error probability is calculated as a function of: (a) the ambient illumination, simulated by the reflector placed at a certain distance in front of the center of the touch interface; (b) the window functions implemented in the algorithm for processing of the touch signal. The error probability is calculated for some classical, time-symmetrical as well as for some original, time-asymmetrical window functions.",2001,0, 3704,Intermittent scan chain fault diagnosis based on signal probability analysis,"A new algorithm to diagnose intermittent scan chain faults in scan-based designs is proposed in this paper. An intermittent scan chain fault sometimes is triggered and sometimes is not triggered during scan chain shifting, which makes it very difficult to locate the fault sites. In this paper, we provide answers to three questions: (1) Why do intermittent scan chain faults happen? (2) Why is diagnosis of this type of fault necessary? (3) How can this type of fault be diagnosed? The experimental results presented demonstrate that the proposed diagnosis algorithm is effective for large industrial designs with multiple intermittent scan chain faults.",2004,0, 3705,Immunet: a cheap and robust fault-tolerant packet routing mechanism,"A new and efficient mechanism to tolerate failures in interconnection networks for parallel and distributed computers, denoted as Immunet, is presented in this work. In the presence of failures, Immunet automatically reacts with a hardware reconfiguration of the surviving network resources. Immunet has four important advantages over previous fault-tolerant switching mechanisms. Its low hardware costs minimize the overhead that the network must support in the absence of faults. As long as the network remains connected, Immunet can tolerate any number of failures regardless of their spatial and temporal combinations. The resulting communication infrastructure provides optimized adaptive minimal routing over the surviving topology. The system behavior under successive failures exhibits graceful performance degradation. Immunet reconfiguration can be totally transparent to the applications running on the parallel system as they will only be affected by the loss of those data packets circulating through the broken components. The rest of the packets will suffer only a tolerable delay induced by the time employed to perform the automatic network reconfiguration. Descriptions of the hardware network architecture and detailed synthetic and execution-driven simulations will demonstrate the benefits of Immunet.",2004,0, 3706,Circular pattern extraction in wafer fault mining,A method for circular pattern measurement in a wafer fault pattern retrieval system is developed.
It helps engineers match a circular wafer fault pattern against historical or other patterns for manufacturing fault detection and tracking. An enclosing polygon based on the vertices of boundary edges is constructed to describe the border of the circular pattern. Experimental results show that the proposed approach improves the accuracy and effectiveness of circularity measurement.,2008,0, 3707,Diagnostic expert systems from dynamic fault trees,"A methodology for developing a diagnostic map for systems that can be analyzed via a dynamic fault tree is proposed in this paper. This paper shows how to automatically design a diagnostic decision tree from a dynamic fault tree used for reliability analysis. In particular, the methodology makes use of Markov chains since they are mathematical models used for reliability analysis. We use approximate sensitivity as an intermediary step to obtain the Vesely-Fussell measure from the Markov chain. We used the Vesely-Fussell measure of importance as the cornerstone of our methodology, because it provides an accurate measure of components' relevance from a diagnosis perspective. The outcome of this paper is a diagnostic decision tree, which was generated for a real dynamic system. The diagnostic decision tree produced is a map that can be used by repair and maintenance crews to diagnose a system without having previous knowledge or experience about the diagnosed system. The methodology we develop is capable of producing diagnostic decision trees that reduce the number of tests or checks required for a system's diagnosis.",2004,0, 3708,"A Case Study of an Electrolytic Tinning Line, with an Analysis of Faults in the Power Rectifiers",A model is presented in this paper to simulate the tin plating coating process. The control of the output current provided by high current rectifiers (HCR) is the best way to assure the quality of the final product. The model includes the interaction of power rectifiers and thickness of the final coating. An ARMAX model has been used for this purpose. An application of the model is presented to identify faults in the set of power rectifiers,2005,0, 3709,A motion correction system for brain tomography based on biologically motivated models,"A motion correction system for brain tomography is presented, which employs two calibrated video cameras and three vision models (CIECAM'97, RETINA, and BMV). The system is evaluated on the pictures monitoring a subject's head while simulating PET scanning (n=12) and on the face images of subjects with different skin colours (n=31). The results on 2D images with known moving parameters show that motion parameters of rotation and translation along X, Y, and Z directions can be obtained very accurately via the method of facial landmark detection. A feasibility study of the developed algorithms is evaluated by a computer simulation of brain scanning procedures.",2008,0, 3710,On-line digital correction of the harmonic distortion in analog-to-digital converters,"A digital calibration technique for the on-line (real-time) correction of the Integral Nonlinearity (INL) in any Analog-to-Digital Converter (ADC) is presented in this paper. MATLAB simulations substantiating the technique achieved around 20 dB improvement on a modeled 12-bit ADC with an initial Spurious-Free Dynamic Range of 80 dBFS. As a bonus, this technique can also correct for the offset error.
The price paid for these benefits is the inclusion of an identical extra ADC, and some additional DSP circuitry",2001,0, 3711,"A Single-chip DBS Tuner-Demodulator SoC using Discrete AGC, Continuous I/Q Correction and 200MS/s Pipeline ADCs","A digital low-IF satellite TV tuner-demodulator SoC was realized in 0.13 μm CMOS using low power 200 MS/s eight bit pipeline ADCs. A discrete-steps delayed AGC loop using FET switched-resistors resulted in a 10 dB noise figure at max gain and +25 dBm IIP3 at min gain. The image rejection correction is continuously performed in the digital domain using an inverse gain and phase mismatch adjustment. An FFT engine was used for carrier/symbol rate estimation and channel blind scanning. SoC specifications include: 0.2 dB implementation loss, <1.3° rms integrated phase noise, <-50 dBc spurs, <0.2 s channel acquisition time, 1.2 W power dissipation from a dual 1.8/3.3 V supply and a 1.8 × 4.8 mm² die area.",2007,0, 3712,Minimum-Error Splitting Algorithm for a Dual Layer LCD Display - Part II: Implementation and Results,"A dual layer liquid crystal display (LCD) is able to achieve a high dynamic range by stacking two liquid crystal panels one on top of the other over an enhanced backlight unit. However, the finite distance between the two panels inevitably introduces a parallax error when the display is observed off-axis, and the dynamic range limitations of the individual panels introduce a reconstruction error near sharp edges in the input image. In Part I, we formulated the image splitting as a constrained optimization problem in which a joint minimization of the parallax error and the visibility of the reconstruction error is performed.",2008,0, 3713,A New Method for Earth Fault Line Detection Based on Two-Dimensional Wavelet Transform in Distribution Automation,"A novel method based on the two-dimensional wavelet transform to detect single-phase faults in distribution systems is proposed in this paper. After constructing analytic signals of the zero-sequence current, the two-dimensional wavelet transform is applied. Thus, analysis of the combined amplitude and phase signal is realized. Compared with the use of amplitude or phase alone, the combined signal carries more details of the transient signal. Theoretical analysis and MATLAB based simulation show that the presented method can accurately and effectively select the faulty line under a single phase-to-ground fault",2005,0, 3714,Size reduction and harmonic suppression of parallel coupled-line bandpass filters using defected ground structure,"A novel miniaturized parallel coupled-line bandpass filter with suppression of the second, third and fourth harmonic frequencies is demonstrated in this paper. The new filter is based on the slow-wave effect of the defected ground structure (DGS) to achieve size minimization, while the spurious responses are eliminated by the band-rejection property. These features offer the classical parallel coupled-line bandpass filter simultaneous compactness and wide stopband performance. Using DGS does not require the filter parameters to be recalculated and, this way, the classical design methodology for coupled-line microstrip filters can still be used. The simulations and measurements of a 2.0 GHz prototype bandpass filter are presented. The measured results agree well with the simulation data. Compared with the conventional parallel coupled-line bandpass filters, the second, third and fourth measured spurious responses are suppressed to -45, -43 and -34 dB, respectively.
In addition, the size of the prototype filter circuitry can be reduced by up to 20%.",2009,0, 3715,Neural network based on-line stator winding turn fault detection for induction motors,"A novel on-line neural network based diagnostic scheme for induction machine stator winding turn fault detection is presented. The scheme consists of a feed-forward neural network combined with a self-organizing feature map (SOFM) to visually display the operating condition of the machine on a two-dimensional grid. The operating point moves to a specific region on the map as a fault starts developing and can be used to alert the motor protection system to an incipient fault. This is a useful tool for commercial condition monitoring systems. Experimental results are provided, with data obtained from a specially wound test motor, to illustrate the robustness of the proposed turn fault detection scheme. The new method is not sensitive to unbalanced supply voltages or asymmetries in the machine and instrumentation",2000,0, 3716,A novel transmission line test and fault location methodology using pseudorandom binary sequences,"A novel pulse echo test methodology, using pseudorandom binary sequence (PRBS) excitation, is presented in this paper as an alternative to time domain reflectometry (TDR) for transmission line fault location and identification. The essential feature of this scheme is the cross correlation (CCR) of the fault response echo with the PRBS test stimulus, which results in a unique signature for identification of the fault type, if any, or load termination present as well as its distance from the point of test stimulus injection. This fault identification method can be used in a number of key industrial applications incorporating printed circuit boards, overhead transmission lines and underground cables in inaccessible locations which rely on a pathway for power transfer or signal propagation. As an improved method, PRBS fault identification can be performed over several cycles at low amplitude levels online to reject normal signal traffic and extraneous noise pickup for the purpose of multiple fault coverage, resolution and identification. In this paper a high frequency co-axial transmission line model is presented for transmission line behavioural simulation with PRBS stimulus injection under known load terminations to mimic fault conditions encountered in practice for proof of concept. Simulation results, for known resistive fault terminations, with measured CCR response demonstrate the effectiveness of the PRBS test method in fault type identification and location. Key experimental test results are also presented for a co-axial cable, under laboratory controlled conditions, which substantiate the accuracy of the PRBS diagnostic CCR method of fault recognition and location using a range of resistive fault terminations. The accuracy of the method is further validated through theoretical calculation via known co-axial cable parameters, fault resistance terminations and link distances in transmission line experimental testing.",2008,0, 3717,Stator-Interturn-Fault Detection of Doubly Fed Induction Generators Using Rotor-Current and Search-Coil-Voltage Signature Analysis,"A novel technique for detecting stator interturn faults in a doubly fed induction generator (DFIG) is proposed by analyzing its rotor current and search-coil voltage. So far, fault-diagnostic techniques proposed for stator-interturn-fault detection in DFIGs are based on analysis of the stator current or vibration of the generator.
Results from these methods are ambiguous because they either fail to account for the condition when the DFIG is operating under imbalanced load or are based on experimental results alone without any theoretical basis. Our recent observations suggested that harmonics induced in the rotor circuit are very promising for detecting stator interturn faults in DFIGs. Hence, in this paper, an in-depth investigation is conducted to determine the origin of various harmonic components in rotor currents and their feasibility for detecting stator interturn faults unambiguously. Detailed analysis is presented, which explains the mechanism by which the stator-interturn-fault-related harmonics are induced in the rotor circuit. The theory is verified with simulation and extensive experimental results. To confirm the feasibility of the proposed technique for detecting stator interturn faults and obtain results on the speed sensitivity of fault detection, a prototype of a digital-signal-processor-based fault-diagnostic system has been developed, which is capable of producing a very fast trip signal in about 2 s.",2009,0, 3718,Design of the UWB bandpass filter by coupled microstrip lines with U-shaped defected ground structure,"A novel ultra-wideband (UWB) bandpass filter (BPF) is proposed by using coupled microstrip lines and a U-shaped defected ground structure (DGS). The input and output feeding lines are connected to the ends on the same side of the coupled microstrip lines on one side of a substrate, and the U-shaped DGS is introduced behind the coupled lines on the other side of the substrate. The coupling of this kind of structure, without the DGS, is very weak. However, the U-shaped DGS generates a strong coupling, and improves the passband property of the coupled lines. The filter is simulated by 3D EM commercial software, and the results demonstrate good UWB performance. The passband defined by |S11| < -10 dB is about 3.44-10.6 GHz, and the insertion loss is about 0.11 dB at the central frequency of 7 GHz. The group delay variation, which is an important factor for UWB systems, is less than 0.5 ns within the operating band.",2008,0, 3719,Ultra-Wideband Bandpass Filter With Improved Upper Stopband Performance Using Defected Ground Structure,"A novel ultra-wideband (UWB) bandpass filter (BPF) with improved upper stopband performance using a defected ground structure (DGS) is presented in this letter. The proposed BPF is composed of seven DGSs that are positioned under the input and output microstrip lines and the coupled double step impedance resonator (CDSIR). By using the CDSIR and an open loop defected ground structure (OLDGS), we can achieve UWB BPF characteristics, and by using the conventional DGSs under the input and output microstrip lines, we can improve the upper stopband performance. Simulated and measured results are found in good agreement with each other, showing a wide passband from 3.4 to 10.9 GHz, minimum insertion loss of 0.61 dB at 7.02 GHz, a group delay variation of less than 0.4 ns in the operating band, and a wide upper stopband with more than 30 dB attenuation up to 20 GHz. In addition, the proposed UWB BPF has a compact size (0.27λg × 0.29λg, where λg is the guided wavelength at the central frequency of 6.85 GHz).",2010,0, 3720,A novel soft-switching bridgeless power factor correction circuit,"A novel zero-voltage-transition (ZVT) bridgeless power factor correction circuit (PFC) was proposed.
An auxiliary circuit, consisting of a resonant inductor, two blocking diodes, one wheeling diode and an assist MOSFET, was used to reduce the turn-on switching loss of the two main switches of the bridgeless PFC circuit. Soft commutation of the main switches is achieved without imposing additional voltage stress on the main switches. Feedback gate-drive signals were obtained by using an existing converter IC controller. In this paper, a detailed description of the operation of the proposed circuit will be given. Based on the analysis, guidelines for the component selection are presented. A prototype of a 100-kHz, 600-W, universal-line bridgeless PFC circuit was built to verify the proposed scheme.",2007,0, 3721,Design of system structure for triple-modular fault-tolerant controller,"A parallel embedded controller with triple-modular fault tolerance based on real-time Fast Ethernet is presented for the design of a triple-modular fault-tolerant controller. A triple instruction synchronous execution (TISE) strategy is used to realize synchronous execution, data communication, and clock synchronization among the triple redundant buses (TriBUS). The fault-tolerant mode for fault diagnosis of contacts is investigated in depth. All of this prepares the ground for the software and hardware of a triple-modular fault-tolerant controller based on Fast Ethernet.",2008,0, 3722,Pattern recognition-a technique for induction machines rotor fault detection broken bar fault,"A pattern recognition technique based on a Bayes minimum-error classifier is developed to detect broken rotor bar faults in induction motors at the steady state. The proposed algorithm uses only stator currents as input without the need for any other variables. First, the rotor speed is estimated from the stator currents, then appropriate features are extracted. The produced feature vector is normalized and fed to the trained classifier to see if the motor is healthy or has broken bar faults. Only the number of poles and rotor slots is needed as prior information. The theoretical approach together with experimental results derived from a 3 hp AC induction motor show the strength of the proposed method. In order to cover many different motor load conditions, data are obtained from 10% to 130% of the rated load for both a healthy induction motor and an induction motor with a rotor having 4 broken bars",2001,0, 3723,Localization of IP Links Faults Using Overlay Measurements,"Accurate fault detection and localization is essential to the efficient and economical operation of ISP networks. In addition, it affects the performance of Internet applications such as VoIP and online gaming. Fault detection algorithms typically depend on spatial correlation to produce a set of fault hypotheses, the size of which increases with the existence of lost and spurious symptoms, and the overlap among network paths. The network administrator is left with the task of accurately locating and verifying these fault scenarios, which is a tedious and time-consuming task. In this paper, we formulate the problem of finding a set of overlay paths that can debug the set of suspected faulty IP links. These overlay paths are chosen from the set of existing measurement paths, which will make overlay measurements meaningful and useful for fault debugging. We study the overlap among overlay paths using various real-life Internet topologies of the two major service carriers in the U.S.
We found that with a reasonable number of concurrent failures, it is possible to identify the location of the IP link faults with a 60% to 95% success rate. Finally, we identify some interesting research problems in this area.",2008,0, 3724,Aberration measurement and correction with a high resolution 1.75D array,"Accurate measurement of tissue induced aberrations is necessary for effective adaptive ultrasound imaging. We acquired single channel RF data on a 6.7 MHz, 8 × 128 array (Tetrad Co.) operating at F/1.0 in azimuth and F/2.89 in elevation. This array was interfaced to a Siemens Elegra scanner, allowing for data acquisition during routine clinical scanning. Breast images in three patients and four volunteers (for a total of 16 scans), and thyroid and liver images in six volunteers (10 scans each) were taken. A least squares algorithm was employed to estimate the arrival time error induced by the tissue and to generate corrected images. In general, mild (20-40 nsec r.m.s.) and spatially stable aberration profiles were measured",2001,0, 3725,Three-phase rectifiers with power factor correction,"AC-DC three-phase rectifiers that can operate with input power factor correction are either boost-type converters or buck-type converters that are typically implemented using six converter switches. In this paper, alternative rectifier topologies that use fewer converter switches are reviewed. The features of each topology are discussed and the topology's advantages and disadvantages are stated. A new single-switch three-phase buck converter is presented in the paper and its feasibility is confirmed with simulated and experimental results",2005,0, 3726,A hot-swap controller amplifier module for active magnetic bearings with supreme reliability - Electronic circuitry and error detection strategies,Active magnetic bearings (AMBs) represent an innovative contact free suspension technology with many advantages such as online adaptability of bearing stiffness and damping which makes AMBs especially suited for high speed rotor applications. A disadvantage is the rather high system complexity which leads to reduced overall reliability. To enhance AMB reliability a new AMB concept has been developed using independently working hot-swap controller amplifier modules (HCA) for bearing control. Within this work the HCA electronics and the developed fault diagnosis are described which allow a complete functional test of all components during system operation as well as a safe deactivation of the HCA even in case of a component fault. Measurement results demonstrate the functionality of the developed HCA prototype.,2010,0, 3727,Efficient Active Probing for Fault Diagnosis in Large Scale and Noisy Networks,"Active probing is an effective tool for monitoring networks. By measuring probing responses, we can perform fault diagnosis actively and efficiently without instrumentation on managed entities. In order to reduce the traffic generated by probing messages and the measurement infrastructure costs, an optimal set of probes is desirable. However, the computational complexity for obtaining such an optimal set is very high. Existing works assume single-fault scenarios, apply only to small-size networks, or use simplistic methods that are vulnerable to noise. In this paper, by exploiting the conditional independence property in Bayesian networks, we prove a theorem on the information provided by a set of probes.
Based on this theorem and the structural properties of Bayesian networks, we propose two approaches which can effectively reduce the computation time. A highly efficient adaptive probing algorithm is then presented. Compared with previous techniques, experiments have shown that our approach is more efficient in selecting an optimal set of probes without degrading diagnosis quality in large scale and noisy networks.",2010,0, 3728,Modern fault tolerant architectures based on partial dynamic reconfiguration in FPGAs,Activities aimed at developing a methodology for fault-tolerant system design on FPGA platforms are presented. Basic principles of partial reconfiguration are described together with the fault tolerant architectures based on partial dynamic reconfiguration and triple modular redundancy or a duplex system. Several architectures using online checkers for error detection which initiate the reconfiguration of the faulty unit are introduced as well. The modification of fault tolerant architectures into partially reconfigurable modules and the main advantages of partial dynamic reconfiguration when used in fault tolerant system design are demonstrated. All presented architectures are compared with each other and proven fully functional on the ML506 development board with a Virtex-5 for different types of RTL digital components.,2010,0, 3729,Effects of finite weight resolution and calibration errors on the performance of adaptive array antennas,"Adaptive antennas are now used to increase the spectral efficiency in mobile telecommunication systems. A model of the received carrier-to-interference plus noise ratio (CINR) in the adaptive antenna beamformer output is derived, assuming that the weighting units are implemented in hardware. The finite resolution of weights and calibration is shown to reduce the CINR. When hardware weights are used, the phase or amplitude step size in the weights can be so large that it affects the maximum achievable CINR. It is shown how these errors make the interfering signals leak through the beamformer, and we show how the output CINR is dependent on the power of the input signals. The derived model is extended to include the limited dynamic range of the receivers, by using a simulation model. The theoretical and simulated results are compared with measurements on an adaptive array antenna testbed receiver, designed for the GSM-1800 system. The theoretical model was used to identify the 1 dB resolution in the weight magnitude as the performance-limiting part of the testbed. Furthermore, the derived models are used in illustrative examples and can be used by system designers to balance the phase and magnitude resolution and the calibration requirements of future adaptive array antennas",2001,0, 3730,"Impact of wind farms on electromagnetic transients in 132kV network, with particular reference to fault detection","Adaptive autoreclosure has been extensively researched as a protection methodology for overhead lines, with well-known advantages over conventional autoreclosure. However, the effect of modern wind farms, specifically power electronics, on existing adaptive autoreclosure methods is unknown. Using the DIgSILENT software, a small part of the UK Generic Distribution System network is constructed as a test system and connected to built-in DFIG and full converter wind farm models. EMT simulations are carried out whilst varying the parameters known to affect single phase-ground fault voltage signatures.
The Discrete Wavelet Transform is subsequently applied to these waveforms. Results show that adaptive autoreclosing schemes may need particular attention when designed for DFIG connected lines, although the traditional approach of signal processing and AI is validated since the effects of fault parameters have far more significance than the generating technology concerned.",2009,0, 3731,Hardware Fault Tolerance implemented in software at the compiler level with special emphasis on array-variable protection,"Advanced and sophisticated microprocessor-based systems are often applied in safety or mission critical subsystems. The problem of designing radiation-tolerant devices becomes very important, especially in places such as accelerators and synchrotrons, where the results of the experiments depend on the reliability of control mechanisms. In this paper, we propose a new technique for safe and reliable computing in the presence of radiation-induced errors. In our solution, Software Implemented Hardware Fault Tolerance (SIHFT) algorithms are implemented automatically during the compilation process. This approach makes it possible to use standard optimization algorithms during the compilation. In addition, the responsibility for implementing fault tolerance is transferred to the compiler, making it transparent to the programmers. Special emphasis has been placed on the array protection algorithm.",2008,0, 3732,Reduction of variance in spectral estimates for correction of ultrasonic aberration,"A variance reduction factor is defined to describe the rate of convergence and accuracy of spectra estimated from overlapping ultrasonic scattering volumes when the scattering is from a spatially uncorrelated medium. Assuming that the individual volumes are localized by a spherically symmetric Gaussian window and that centers of the volumes are located on orbits of an icosahedral rotation group, the factor is minimized by adjusting the weight and radius of each orbit. Conditions necessary for the application of the variance reduction method, particularly for statistical estimation of aberration, are examined. The smallest possible value of the factor is found by allowing an unlimited number of centers constrained only to be within a ball rather than on icosahedral orbits. Computations using orbits formed by icosahedral vertices, face centers, and edge midpoints with a constraint radius limited to a small multiple of the Gaussian width show that a significant reduction of variance can be achieved from a small number of centers in the confined volume and that this reduction is nearly the maximum obtainable from an unlimited number of centers in the same volume.",2006,0, 3733,A method for hunting bugs that occur due to system conflicts,"A very important class of bugs that occurs in VLSI projects, and especially in System on Chip (SoC) type projects, is bugs caused by two or more processes on chip trying to access a shared resource simultaneously. These kinds of bugs are both hard to find and very likely to cause a respin if not found, since it is very hard to work around them in software (SW). In this paper we present a framework to define such conflict cases and a tool for automatically generating test cases from this definition. We have implemented this framework and tool, generated test suites, and simulated them on the Design Under Verification.
Our method immediately proved its effectiveness by catching an unknown problem in a project that had already established a reasonable regression test suite that is simulated periodically.",2008,0, 3734,A watchdog processor to detect data and control flow errors,"A watchdog processor for the MOTOROLA M68040 microprocessor is presented. Its main task is to protect the transmission of data between the processor and the system memory from transient faults caused by SEUs, and to ensure a correct instruction flow, by monitoring only the external bus, without modifying the internal architecture of the M68040. A description of the principal procedures is given, together with the method used for monitoring the instruction flow.",2003,0, 3735,Multi resolution analysis for bearing fault diagnosis,"A wavelet-based vibration analysis was done for defense applications. Vibration measurements were carried out on the MSL engine test bed in order to verify that it meets the specifications at different load conditions. Such measurements are often carried out in connection with troubleshooting in order to determine whether vibration levels are within acceptable limits at given engine speeds and maneuvers. A state-of-the-art portable vibration data recorder is used for data acquisition of real-world signals. This paper is intended to take the reader through the various stages in the signal processing of the vibration data using modern digital technology. Vibration signals are post-analyzed using the wavelet transform for data mining of the vibration signal observed from accelerometers. Wavelet Transform (WT) techniques are applied to decipher vibration characteristics due to engine excitation to diagnose the faulty bearing. The time-scale analysis by the WT achieves accuracy comparable to the Fast Fourier Transform (FFT) while having a lower computational cost with fast predictive capability. The result from the wavelet analysis is validated using the LabVIEW software.",2010,0, 3736,Time Stamped Fault Tolerance in Real Time Systems,"A wide application area of real time systems is safety critical systems. These systems should be highly reliable. Fault tolerance is one of the approaches to achieve reliability. In this paper, a fault tolerance model for real time systems is proposed. This model incorporates the concept of time stamped fault tolerance. This model is based on distributed computing along with feed forward artificial neural network methodology. The proposed technique is based on execution of design-diverse variants on replicated hardware, and assigning weights to the results produced by the variants. Thus the proposed method encompasses both forward and backward recovery mechanisms, but the main focus is on forward recovery. The main essence of the proposed technique is the inclusion of time in the decision mechanism",2005,0, 3737,Optimal design of fault-tolerant software systems using N-version approach and pseudo-Boolean optimization methods,"A wide range of fault-tolerant techniques has been proposed to increase the reliability of software systems; some of those techniques are algorithmic fault-tolerance, concurrent error-detection, recovery block and multiple computations. The implementation of these methods helps to avoid only faults of a physical (hardware) nature. When designing software support, design faults should be attended to, because of their dormant character (those faults originate due to the mistakes and oversights of humans that occur while they design software systems).
Here, the author describes how N-version programming, as an approach to fault-tolerant software system design, permits the stated problems to be solved successfully",2000,0, 3738,FTEP: A fault tolerant election protocol for multi-level clustering in homogeneous wireless sensor networks,"A wireless sensor network has the potential to monitor stimuli around it. Sensor networks have severe energy constraints, low data rates with high redundancy, and many-to-one flows. Thus, data centric mechanisms that perform in-network aggregation of data are needed. Clustering is one of the data centric mechanisms in which various cluster heads perform in-network aggregation of data. Thus, there is more load on cluster heads than on regular nodes. Therefore, for load balancing, the role of cluster head should be rotated among other regular nodes. Moreover, cluster heads may fail and disrupt communication. Handling such faulty cluster heads is vital to the correct and efficient working of these networks. In this paper, we propose a new dynamic and distributed cluster-head election algorithm with fault-tolerance capabilities based upon a two-level clustering scheme. If the energy level of the current cluster head falls below a certain limit, or any cluster head fails to communicate, then the election process is started. Based on energy levels, the election process appoints a cluster head and a back-up node to handle cluster head failure. The back-up node automatically takes over the role of cluster head once it detects failure of the current cluster head. All sensors are homogeneous in nature and work in dual mode. Simulation results show significant energy savings when compared with other clustering schemes such as energy efficient multi-level clustering (EEMC).",2008,0, 3739,Abnormal Process Condition Prediction (Fault Diagnosis) Using G2 Expert System,"Abnormal operating conditions (faults) in industrial processes have the potential to cause loss of production, loss of life and/or damage to the environment. The accidents, which could cost industry billions of dollars per year, can be prevented if abnormal process conditions are predicted and controlled in advance. Due to the increased process complexity and instability in operating conditions, the existing control system may have a limited ability to provide practical assistance to both operators and engineers. Advanced software applications, based on expert systems, have the potential to assist engineers in monitoring, detecting, and diagnosing abnormal conditions, thus providing safeguards against unexpected process conditions.",2007,0, 3740,Interturn Fault Diagnosis in Induction Motors Using the Pendulous Oscillation Phenomenon,"A robust interturn fault diagnostic approach based on the concept of magnetic field pendulous oscillation, which occurs in induction motors under faulty conditions, is introduced in this paper. This approach enables one to distinguish and classify an unbalanced voltage power supply and machine manufacturing/construction imperfections from an interturn fault. The experimental results for the two case studies of a set of 5-hp and 2-hp induction motors verify the validity of the proposed approach.
Moreover, it can be concluded from the experimental results that if the circulating current level in the shorted loop increases beyond the phase current level, an interturn fault can be easily detected using the proposed approach, even in the presence of motor manufacturing imperfection effects",2006,0, 3741,Fault detection in reactive ion etching systems using one-class support vector machines,"A robust method to detect faults in reactive ion etching systems using optical emission spectroscopy data is proposed. The approach is based on one-class support vector machines (SVMs). Unlike previously proposed fault detection methods, this approach requires only data collected during normal equipment operation for training. The results obtained suggest that this technique can detect equipment faults with exceptional accuracy. The SVM used detected all faults, yielding a detection accuracy of 100% with zero false alarms",2005,0, 3742,Fault tolerant remote terminal units (RTUs) in SCADA systems,"A SCADA system uses a number of remote terminal units (RTUs) for collecting field data and sending it back to a master station, via a communication system. The master station displays the acquired data and allows the operator to perform remote control tasks. An RTU is a microprocessor based standalone data acquisition control unit. As the RTUs work in harsh environments, the processor inside the RTU is susceptible to random faults. If the processor fails, the equipment or process being monitored by it will become inaccessible. This paper proposes a fault tolerant scheme to address the RTU failure issue. According to the scheme, every RTU will have at least two processing elements. In case of either processor's failure, the surviving processor will take over the tasks of the failed processor. With this approach, an RTU remains functional despite the failure of a processor inside it. Reliability and availability modeling of the proposed fault tolerant scheme is presented.",2010,0, 3743,Performance Enhancement of Microstrip Open Loop Resonator Band Pass Filter By Defected Ground structures,"A selective bandpass filter is presented by using a pair of open loop resonators, each having a perimeter of about half a wavelength. For the improvement of bandwidth, we have used a DGS under the coupled area. A prototype filter circuit with a center frequency of 1.8 GHz has been fabricated on a PTFE substrate. Measured results show a bandwidth of 200 MHz with an insertion loss of 0.2 dB at a center frequency of 1.95 GHz. We have used defected ground structures under the feed lines to add extra stopband rejection at the 2nd and 3rd harmonics of the filter, with attenuation of more than 25 dB, without affecting the center frequency and insertion loss of the originally designed filter.",2007,0, 3744,Effects of phase-locked-loop circuit on a self-commutated BTB system under line faults,"A self-commutated BTB (back-to-back) system is an AC-DC-AC power flow controller between two utility grids, and PLL (phase-locked-loop) circuits are typically used for detecting their phase information. The performance of the BTB system during line faults is strongly affected by the dynamic behavior of the PLL circuit in terms of the AC-current fluctuation, the DC-voltage fluctuation, and the DC magnetic flux deviation in the converter-transformers. However, no paper or article has explicitly discussed their mutual relationship.
The aim of this paper is to establish a design procedure for the PLL circuit that is suitable for the BTB system. This paper also deals with the DC magnetic flux deviation in the converter-transformers under double-line-to-ground (DLG) faults.",2008,0, 3745,Concave and Convex Area on Planar Curve for Shape Defect Detection and Recognition,"A shape representation based on concave and convex areas along a closed curve is presented. Curvature is estimated along the input curve and searched for critical points. Splitting the critical points into concave and convex critical points, the concave and convex areas are computed. This technique is tested on shape defect detection of starfruit and also on shape recognition. In the first case, the defect is measured with concave energy, yielding a stable measure that is proportional to the defect. In shape recognition, the starfruit's stem is identified and removed from the starfruit shape, as it would contribute to false computations in the defect measurement",2006,0, 3746,Sources of Error in AC Losses Measurement Using V-I Method,"A significant cryogenic load is generated in rapidly-cycling superconducting magnets due to ac losses. This load should not be excessive, otherwise the cryogenic system will not be able to remove the heat and the magnet will quench. To verify the magnet design and determine the real cryogenic load, a test of ac losses has to be performed. Electrical and calorimetric methods can be used for this purpose. The calorimetric method is slower because it requires stabilization of the cryogenic system. The electrical method, called V-I, is faster and it can be applied to any magnet independently of its cooling scheme. With this method ac losses are measured directly and results are provided on-line. However, the electrical method is considered more error prone, mainly due to the fact that the power factor of superconducting magnets is close to zero.",2009,0, 3747,Optimization Design of Power Factor Correction Converter Based on Genetic Algorithm,"A small-signal model is used to design the controller parameters of the conventional Power Factor Correction (PFC) converter. The dynamics of the converter are nonlinear; therefore, it is hard to achieve the desired performance. A genetic algorithm is used in this paper to optimize the control parameters of the PFC converter; in this way, quasi-optimal control parameters can be obtained under the predefined fitness criterion. In order to obtain the fitness of an individual, a Simulink model of the PFC converter is established and called from a MATLAB script file, in which the genetic algorithm is programmed to search for the optimal control parameters. Simulation results indicate that the overshoot of the voltage transient response and the Total Harmonic Distortion (THD) are reduced by using the optimized parameters. The proposed method provides a general method for the optimal design of the structure and control parameters of the converter.",2010,0, 3748,Multilevel-converter-based VSC transmission operating under fault AC conditions,"A study of a floating-capacitor (FC) multilevel-converter-based VSC transmission system operating under unbalanced AC conditions is presented. The control strategy is based on the use of two controllers, i.e. a main controller, which is implemented in the synchronous d-q frame without involving positive and negative sequence decomposition, and an auxiliary controller, which is implemented in the negative sequence d-q frame with the negative sequence current extracted.
Automatic power balancing during AC faults is achieved without communication between the two converters by automatic power modulation upon the detection of abnormal DC voltages. The impact of unbalanced floating capacitor voltages of the FC converter on power devices is described. A software-based method, which adds square waves whose amplitudes vary with the capacitor voltage errors to the nominal modulation signals for fast capacitor voltage balancing during faults, is proposed. Simulations on a 300 kV DC, 300 MW VSC transmission system based on a four-level FC converter show good performance of the proposed control strategy during unbalanced conditions caused by a single-phase to ground fault.",2005,0, 3749,Fault-Tolerant Topologies and Switching Function Algorithms for Three-Phase Matrix Converter based AC Motor Drives Against Open and Short Phase Failures,"A study of fault-tolerant matrix converters is presented, covering remedial topological structures and control techniques against both open faults and short faults occurring in ac-ac matrix converter drives. Topologies of the matrix converter drives have been proposed to allow the matrix converter based drives to tolerate both open and short phase failures. Switching function algorithms with closed form expressions, based on switching matrices, have been developed to provide the matrix converter drives with continuous and disturbance-free operation after open phase faults and short phase failures. The developed switching function matrices, containing symmetric and antisymmetric mode matrices with appropriate sequences of input voltages, allow redefined output waveforms to be synthesized under open-phase and short-phase faults. In addition, the matrix converter topology and the switching scheme are proposed to produce three-phase balanced sinusoidal output currents even after short-phase failures. Simulation and experimental results show the feasibility of the proposed topologies and the developed switching function techniques in the case of both open-phase and short-phase faults.",2007,0, 3750,Use of interleaving and error correction to infrared patterns for the improvement of position estimation systems,"A system capable of estimating the position of a mobile target based on the error rate of the received infrared patterns has been presented recently in [1]. In the present work, we explore the effect of using different infrared patterns on the estimation accuracy, the instant noise immunity and the system speed. We apply interleaving and forward error correction techniques to correct the burst errors stemming from instant noise. We achieved a 25%-50% stability improvement, obtained 20% fewer failures, and enhanced the convergence speed by a factor of 4-12.",2008,0, 3751,Early error detection in industrial strength cache coherence protocols using SQL,"A table-driven approach for designing industrial strength cache coherence protocols based on relational database technology is described. Protocols are specified using several interacting multi-input, multi-output controller state machines represented as database tables. Protocol scenarios specified using SQL constraints are solved to automatically generate database tables, and to statically check protocol properties including absence of deadlocks and other protocol invariants. The debugged tables are mapped to hardware using SQL operations while preserving protocol properties.
The approach is deployed at Fujitsu System Technology Division in the design of their next generation multiprocessor and has discovered several errors early in the design cycle.",2003,0, 3752,An error tolerance scheme for 3D CMOS imagers,"A three-dimensional (3D) CMOS imager constructed by stacking a pixel array of backside illuminated sensors, an analog-to-digital converter (ADC) array, and an image signal processor (ISP) array using micro-bumps (ubumps) and through-silicon vias (TSVs) is promising for high throughput applications. However, due to the direct mapping from pixels to ISPs, the overall yield relies heavily on the correctness of the ubumps, ADCs and TSVs - a single defect leads to the loss of information for a tile of pixels. This paper presents an error tolerance scheme for the 3D CMOS imager that can still deliver high quality images in the presence of bump, ADC, and/or TSV failures. The error tolerance is achieved by properly interleaving the connections from pixels to ADCs so that the corrupted data, if any, can be recovered in the ISPs. A key design parameter, the interleaving stride, is decided by analyzing the employed error correction algorithm. Architectural simulation results demonstrate that the error tolerance scheme enhances the effective yield of an exemplar 3D imager from 46% to 99%.",2010,0, 3753,Coupling Three-Dimensional Mesh Adaptation with an A Posteriori Error Estimator,"A three-dimensional unstructured mesh adaptation technique coupled to a posteriori error estimation techniques is presented. In contrast to other work [1,2], the adaptation in three dimensions is demonstrated using advanced unstructured meshing techniques to realize automatic adaptation. The applicability and usability of this complete automation are presented with a real-world example.",2005,0, 3754,Fault-Tolerant Distributed Systems in a Mobile Agent Model,"A transactional agent is a mobile agent that manipulates objects under some type of commitment condition. We assume that computers may stop due to faults while networks remain reliable. In the client-server model, servers are made fault-tolerant through replication and checkpointing technologies. However, an application program cannot be performed if a client is faulty. In the transactional agent model, a program can be performed on another operational computer even if a computer is faulty. There are several kinds of faulty computers: current, destination, and sibling computers, where a transactional agent now exists, will move to, and has visited, respectively. We discuss how the transactional agent is tolerant of these types of computer faults",2006,0, 3755,Modeling of stress-enhancement at defects inside cable insulation,"AC breakdowns are commonly used as a performance indicator for power cables, and yet the data generated can be misinterpreted if the test section of cable is found to have a manufacturing defect. As an example, in a recent publication, a dielectric breakdown value on a 17-year field aged cable was treated as a ""suspension"" by the authors after the discovery of a conductor shield skip at the failure location; and yet, others have taken the same data set and have analyzed the uncensored data as indicative of the material performance. In an effort to resolve the differences, the same data set is considered here, with an assumption that the degree of aging is not significantly impacted by any enhancement of field-aging stresses. Statistical analysis is performed to determine if the questionable breakdown value can be considered an outlier.
A 2-dimensional finite element analysis based upon the shape of the defect enables an estimate of the local stress enhancement factor, and a ""corrected"" breakdown value is calculated. The original authors' conservative treatment of the questionable breakdown value as a ""suspension"" is supported by analysis with inclusion of a stress-corrected breakdown value, and the two approaches yield similar failure distributions. Use of the uncorrected value in discussions related to failure probabilities on the low-stress side of the distribution is shown to substantially underestimate failure stresses",2006,0, 3756,A software security testing method based on typical defects,"According to CERT/CC, ten known defects are responsible for 75% of security breaches in today's software applications. Those defects are named typical security defects. Based on that, a security testing method is given. In the method, a modeling technique with threat trees is described. Finally, a threat tree traversal algorithm (Tri-T algorithm) based on depth-first search is designed and is used in an example to generate the test sequence.",2010,0, 3757,One Step More to Understand the Bug Report Duplication Problem,"According to recent work, duplicate bug reports negatively impact software maintenance and evolution productivity due to, among other factors, the increased time spent on report analysis and validation. Therefore, a considerable amount of time is lost mainly with duplicate bug report analysis. In this sense, this work presents an exploratory study using data from bug trackers of private and open source projects, in order to understand the possible factors (e.g. software lifetime, size, number of bug reports, etc.) that cause bug report duplication and its impact on software development. This work also discusses bug report characteristics that could help identify duplicates.",2010,0, 3758,Fault diagnosis system based on smart bearing,"According to statistics, many rotating machinery faults are caused by the bearings, so the smart bearing technique is important for ensuring their safety. For the smart bearing, a representative definition is that sensing devices for different uses are integrated into the traditional bearing in order to realize self-diagnosis. Under conditions of variable speed, variable load, and heavy load, present diagnostic technology cannot satisfy the requirements of fault feature extraction. This paper presents a new multi-parameter smart bearing consisting of two vibration acceleration sensing devices, two speed sensing devices, and two temperature sensing devices. In addition, the heavy noise can be decreased for extraction of the weak fault signals by the embedded integrated mode of bearing and sensor.",2008,0, 3759,The design of general-purpose automatic testing and fault diagnosis system based on VXI bus,"According to the principles of generalization, modularization, and standardization, we have designed a general testing system for large-scale and complicated electronic equipment based on the VXI bus. The paper introduces the design principles and the structure of the hardware and software of the general testing system. A Bayesian network representation method is adopted to represent the uncertain information in the system.
The system is of great significance for improving the test and diagnostic capability for electromechanical devices.",2009,0, 3760,An approach to analog fault diagnosis using genetic algorithms,"A procedure for the multifrequency fault diagnosis of analog circuits is presented. It permits, by means of an optimization procedure based on a genetic algorithm, selection of the set of frequencies that best leads to locating parametric faults in analog circuits. By exploiting symbolic analysis techniques, a program implementing the proposed procedure has been developed. An example of the application is also included.",2004,0, 3761,Quasi-static modeling of defected ground structure,"A quasi-static equivalent-circuit model of a dumbbell-shaped defected ground structure is developed. The equivalent-circuit model is derived from the equivalent inductance and capacitance developed due to the perturbed return current path on the ground and the narrow gap, respectively. The theory is validated against the commercial full-wave solver CST Microwave Studio. Finally, the calculated results are compared with the measured results. Good agreement between the theory, the commercially available numerical analyses, and the experimental results validates the developed theoretical model.",2006,0, 3762,On-board fault-tolerant SAR processor for spaceborne imaging radar systems,"A real-time high-performance and fault-tolerant FPGA-based hardware architecture for the processing of synthetic aperture radar (SAR) images has been developed for advanced spaceborne radar imaging systems. In this paper, we present the integrated design approach, from top-level algorithm specifications, system architectures, design methodology, functional verification, performance validation, down to hardware design and implementation.",2005,0, 3763,Sausage appearance defect inspection system based on machine vision,A real-time sausage appearance defect inspection system is presented in this paper. The general hardware and software structure is described and a model method for sausage defect detection is proposed. This inspection system gives consistent and repeatable satisfactory results under real industrial conditions with sufficient speed.,2010,0, 3764,"Mismatch Shaping Techniques to Linearize Charge Pump Errors in Fractional-N PLLs","A recent charge pump linearization technique demonstrated 8 dB reduction in spurious tones caused by charge pump current mismatch in delta-sigma fractional-N phase locked loops. In this paper, two purely digital mismatch shaping techniques are proposed to modify and further improve the charge pump linearization technique. Both techniques suppress spurious tones by randomizing the residual charge pump mismatch error power. The second technique further spectrally shapes the residual charge pump mismatch errors to suppress close-in phase noise. No spurs are observed and -120 dBc/Hz phase noise is achieved at frequency offsets lower than 10 kHz in simulation. A theoretical proof of the spectral shaping of the charge pump mismatch error power is also presented.",2010,0, 3765,"Small Errors in ""Toward Formalizing Domain Modeling Semantics in Language Syntax""",A recent paper on domain modeling had State Charts with semantic errors.,2005,0, 3766,An affine combination of two LMS adaptive filters - statistical analysis of an error power ratio scheme,A recent paper studied the statistical behavior of an affine combination of two LMS adaptive filters that simultaneously adapt on the same inputs.
The filter outputs are linearly combined to yield a performance that is better than that of either filter. Various decision rules can be used to determine the time-varying combining parameter λ(n). A scheme based on the ratio of the error powers of the two filters was proposed in that work. Monte Carlo simulations demonstrated nearly optimum performance for this scheme. The purpose of this paper is to analyze the statistical behavior of this error power scheme. Expressions are derived for the mean behavior of λ(n) and for the weight mean-square deviation. Monte Carlo simulations show excellent agreement with the theoretical predictions.,2009,0, 3767,Fault insertion testing of a novel CPLD-based fail-safe system,"According to the standard IEC 61508, fault insertion testing is required for the verification of fail-safe systems. Usually these systems are realized with microcontrollers. Fail-safe systems based on a novel CPLD-based architecture require a different method to perform fault insertion testing than microcontroller-based systems. This paper describes a method to accomplish fault insertion testing of a system based on the novel CPLD-based architecture using the original system hardware. The goal is to verify the realized safety integrity measures of the system by inserting faults and observing the behavior of the system. The described method exploits the fact that the system contains two channels, each containing a CPLD. During a test, one CPLD is configured using a modified programming file. This file is obtained by compiling a VHDL description that was modified using saboteurs or mutants. This allows a fault to be injected into this CPLD. The other CPLD is configured as a fault-free device. The entire system has to detect the injected fault using its safety integrity measures. Consequently it has to enter and/or maintain a safe state.",2009,0, 3768,Optics Correction Based on MOEMS and PDS,"According to the thermal-optical effect and the elasto-optical effect, the relationship of the refractive index with the temperature and the stress of the optical window in the aerodynamic environment is obtained. Based on the constructed grid model, ray-tracing in the optical window is carried out. Then a novel method to correct the wave-front and reduce the blurring caused by the aero-optical effect is introduced. It combines adaptive optics using a micro-opto-electro-mechanical system (MOEMS) with the phase diversity speckle (PDS) technique. The development of nano and micro fabrication techniques now offers the possibility of a real-time correction system using the combination of MEMS and nano/micro optics technology. Furthermore, the method of phase diversity speckle is less susceptible to systematic errors introduced by optical hardware, and it also works well for extended objects. In this paper, the key technique of wave-front correction under aero-optical conditions is developed based on MOEMS and PDS, and the feasibility of this technique in a high-speed missile guidance system is analyzed theoretically",2006,0, 3769,Wavelet-aided SVM tool for impulse fault identification in transformers,"Accurate diagnosis of faults in transformers can significantly enhance the safety, reliability, and economics of power systems. In the case of a fault, it has been established that the pattern of the fault currents contains a typical signature of the nature and location of the fault for a given winding.
This paper describes a new approach using the wavelet transform (WT) for extraction of features from the impulse test response of a transformer in the time-frequency domain and a support vector machine in regression mode to classify the patterns inherent in the features extracted through the WT of different fault currents. This paper also describes an approach to identify the type and location of transformer faults accurately by analyzing experimental impulse responses that contain noise. Here, experimental impulse responses have been preprocessed with the help of wavelet-packet filters to remove the unwanted noise from the signal and thereby enhance the analyzing capability of the continuous wavelet transform.",2006,0, 3770,A fast algorithm for SVPWM in three phase power factor correction application,"A novel algorithm for space vector pulse width modulation in three phase power factor correction applications is proposed. The durations of the active vectors that form the sector containing the desired reference voltage vector are calculated directly by matrix pre-decomposition, without looking up sinusoidal and tangential tables, based on a TMS320F240. Therefore the running speed and control precision of the program can be improved greatly. As a result the switching frequency and power density of the rectifier are increased considerably. A new method for sector detection is given as well. Simulated and experimental results are provided at the end of the paper to verify the proposed algorithm.",2004,0, 3771,A novel approach to minimizing the risks of soft errors in mobile and ubiquitous systems,"A novel approach to minimizing the risks of soft errors at the modeling level of mobile and ubiquitous systems is outlined. From a pure dependability viewpoint, critical components, whose failure is likely to impact system functionality, attract more attention from protection/prevention mechanisms (against soft errors) than others do. Tolerance of soft errors can be much improved if critical components can be identified at an early design phase and measures are taken to lower their criticalities at that stage. This improvement is achieved by presenting a criticality ranking (among the components) formed by combining a prediction of soft errors, their consequences, and the propagation of failures at the system modeling phase; and by pointing out the ways to apply changes in the model to minimize the risks of degradation of desired functionalities. Case study results are given to illustrate and validate the approach.",2009,0, 3772,A novel clock-fault detection and self-recovery circuit based on time-to-voltage converter,"A novel clock-fault detection and self-recovery circuit is proposed to remedy the effects of clock abnormality and irregularity. The clock self-recovery circuit is based on the principle of the time-to-voltage converter, and the whole structure is very simple in contrast with the general clock recovery methodology. This circuit can also save chip area and has the features of frequency auto-adaptability and clock deletion recovery. The simulation shows that the circuit is power efficient and very suitable for integration to realize clock self-recovery, and the maximal frequency of clock self-recovery is 350 MHz.",2008,0, 3773,A novel color interpolation algorithm by pre-estimating minimum square error,"A novel color interpolation algorithm for the color filter array (CFA) in digital still cameras (DSCs) is presented.
The paper introduces pre-estimation of the minimum square error to address color interpolation for the CFA. In order to estimate the missing pixels in the Bayer CFA pattern, the weights of adjacent color pattern pairs are decided by matrix computation. We adopt the color model (KR, KB) used in many color interpolation algorithms for the CFA. The proposed algorithm can achieve better performance, as shown in the experimental results. Compared with previous methods, the proposed color interpolation algorithm can provide a high quality image in DSCs.",2005,0, 3774,Desensitization of Camera-Aided Manipulation to Target Specification Errors,"Applications of vision-based remotely operated robotic systems range from planetary exploration to hazardous waste remediation. For space applications, where communication time lags are large, the target selection and robot positioning tasks may be performed sequentially, differing from conventional telerobotic maneuvers. For these point-and-move systems, the desired target must be defined in the image plane of the cameras either by an operator or through image processing software. Ambiguity of the target specification will naturally lead to end-effector positioning errors. In this paper, the target specification error covariance is shown to transform linearly to the end-effector positioning error. In addition, a methodology for optimal estimation of camera-view parameters of a vision-based robotic system based on target specification errors is presented. The proposed strategy is based on minimizing the end-effector error covariance matrix. Experimental results are presented demonstrating an increase in end-effector positioning accuracy of up to 32% compared to traditional view parameter estimation.",2004,0, 3775,Bio-Inspired & Traditional Approaches to Obtain Fault Tolerance,"Applying some observable phenomena from cells, focusing on their organization, function, control and healing mechanisms, a simple fault tolerant implementation can be obtained. Traditionally, fault tolerance has been added explicitly to a system by including redundant hardware and/or software, which takes over when an error has been detected. These concepts and ideas have been applied before with triple modular redundancy. Our approach is to design systems where redundancy is incorporated implicitly into the hardware and to mix bio-inspired and traditional approaches to deal with fault tolerance. These ideas are shown using a discrete cosine transform (application) as an organ, its MAC (function) interconnected as a cell, and a parity redundancy checker (error detector) as an immune system to obtain a fault-tolerant design",2006,0, 3776,Automated FEM mesh optimization for nonlinear problems based on error estimation [IC packaging applications],"Applying the fundamental work of Zienkiewicz and Boroomand, this paper introduces a methodology for automated mesh optimization based on estimates of the discretization error. The objective of this effort has been the use of the methodology with a commercial FEM code (ANSYS) and its application to thermal stress analyses of flip chip modules (FC) and chip size packages (CSP) with solder joints, which means the presence of nonlinear material models (plasticity and creep).",2004,0, 3777,A Parametric Model Approach to Arc Fault Detection for DC and AC Power Systems,"Arc faults can arise in any electrical system as a consequence of loose connections, inadvertent damage during maintenance and/or insulation aging.
In this paper, a new approach to the detection of arc faults is discussed. By using the proposed parametric models of the fault signals, an arc fault can be identified quickly and reliably, so that action can be taken to prevent the arc from escalating into a serious fire and causing loss of life and/or property. One model was obtained for DC power systems and another model was obtained for AC power systems. These proposed models were able to distinguish arc faults from normal operation of the various electrical loads that were tested",2006,0, 3778,Arc fault management by solid state switches for enhanced automotive safety,"Arc faults can arise in automotive 42 V systems as a consequence of loose connections, inadvertent damage during maintenance or insulation aging, leading to fire and/or loss of vehicle control. This paper describes the control of solid-state switches to manage arc faults and permit short-term operation at a power level below the insulation ignition threshold, thereby enhancing automotive safety",2005,0, 3779,Arc Fault Detection and Discrimination Methods,"Arc waveform characteristics can be evaluated with various methods to recognize the presence of hazardous arc fault conditions. The discussion covers the arc phenomenon and how it is generated in a low voltage electrical distribution circuit, as well as how to distinguish hazardous conditions from conditions that could falsely mimic the presence of an arc fault. Many waveform characteristics and conditions support the detection of hazardous arc faults and foster a more robust design, capable of withstanding unwanted tripping conditions.",2007,0, 3780,Transmission line model influence on fault diagnosis,"Artificial neural networks have been used to develop software applied to fault identification and classification in transmission lines with satisfactory results. The input data to the neural network are the sampled values of voltage and current waveforms. The values come from the digital fault recorders, which monitor the transmission lines and make the data available in their analog channels. It is extremely important, for the learning process of the neural network, to build databases that represent the fault scenarios properly. The aim of this paper is to evaluate the influence of transmission line models on fault diagnosis, using constant and frequency-dependent parameters.",2004,0, 3781,Development of load models for fault induced delayed voltage recovery dynamic studies,"As a result of a multiple contingency fault and breaker failure event at two Metro Atlanta 230 kV substations in 1999, Southern Company experienced a wide area voltage depression lasting around 15 seconds. The event resulted in a 1900 MW load loss. Dynamic simulations utilizing standard static stability load models were not successful in replicating the event. However, the actual response of the transmission system was replicated utilizing dynamic simulations with aggregate load models that included the effect of induction motors and distribution system impedances. Since the event in 1999, load has grown exceptionally in the North Georgia region. As a result of the load growth, capital projects are being implemented in Southern Company to appropriately manage the exposure to both NERC reliability standard category B and D fault induced delayed voltage recovery events. Therefore, it is critical that the load models used in dynamic studies correctly represent the behavior of actual load.
This is necessary to ensure that the timing and effectiveness of capital projects are appropriately quantified. Both the formulation of the aggregate load models used to replicate the 1999 event and ongoing efforts to refine the load model used to assess future exposure to fault induced delayed voltage recovery are discussed in this paper.",2008,0, 3782,A Fault Tolerant Method for Residue Arithmetic Circuits,"As a result of shrinking device dimensions, the occurrence of transient errors is increasing. This causes system reliability to be reduced. Thus, fault-tolerant methods are becoming increasingly important, particularly in safety-critical applications. In this paper a novel fault-tolerant method is proposed by combining time redundancy with information redundancy to reduce hardware complexity. Residue codes are selected as the source of information redundancy and the proposed technique is compared with some well-known fault tolerant schemes considering required hardware and delay. This method can be applied to various types of arithmetic circuits. Simulation results for a multiplier circuit show that, by using quadruple residue redundancy in comparison with simple residue redundancy when multiplying two 64-bit numbers, the number of gates can be reduced to 90% while incurring only 9% extra delay. Therefore, this technique can effectively reduce hardware complexity and consequently leads to large savings on the ALU as a whole, while introducing only a reasonable delay.",2009,0, 3783,Probabilistic analysis of CAN with faults,"As CANs (controller area networks) are being increasingly used in safety-critical applications, there is a need for accurate predictions of failure probability. In this paper we provide a general probabilistic schedulability analysis technique which is applied specifically to CANs to determine the effect of random network faults on the response times of messages. The resultant probability distribution of response times can be used to provide probabilistic guarantees of real-time behaviour in the presence of faults. The analysis is designed to have as little pessimism as possible but never be optimistic. Through simulations, this is shown to be the case. It is easy to apply and can provide useful evidence for justification of an event-triggered bus in a critical system.",2002,0, 3784,Design and evaluation of hybrid fault-detection systems,"As chip densities and clock rates increase, processors are becoming more susceptible to transient faults that can affect program correctness. Up to now, system designers have primarily considered hardware-only and software-only fault-detection mechanisms to identify and mitigate the deleterious effects of transient faults. These two fault-detection systems, however, are extremes in the design space, representing sharp trade-offs between hardware cost, reliability, and performance. In this paper, we identify hybrid hardware/software fault-detection mechanisms as promising alternatives to hardware-only and software-only systems. These hybrid systems offer designers more options to fit their reliability needs within their hardware and performance budgets. We propose and evaluate CRAFT, a suite of three such hybrid techniques, to illustrate the potential of the hybrid approach. For fair, quantitative comparisons among hardware, software, and hybrid systems, we introduce a new metric, mean work to failure, which is able to compare systems for which machine instructions do not represent a constant unit of work.
Additionally, we present a new simulation framework which rapidly assesses reliability and does not depend on manual identification of failure modes. Our evaluation illustrates that CRAFT, and hybrid techniques in general, offer attractive options in the fault-detection design space.",2005,0, 3785,A novel Klein-Nishina based scatter correction method for SPECT and planar imaging,"An algorithm correcting for the fraction of scattered events in SPECT and planar images has been developed. The algorithm utilises a pixel-based multi-channel analyser for data acquisition and works locally, pixel-by-pixel or on clusters of pixels. The differential Klein-Nishina cross-section, modified to fit the energy resolution and the sensitivity of a specific gamma camera, was first determined. Furthermore, in a selected energy window, covering part of the Compton distribution, the modified Klein-Nishina distribution was scaled to the same count level as the experimental spectra. The scaled Klein-Nishina distribution was subsequently used for estimating the amount of scattered photons in the upper half of the photo-peak. To ensure a stable peak position, a pixel peak-alignment routine was used. After subtraction of the estimated scatter distribution, the number of unscattered photons is obtained. Assuming the photo-peak to be symmetrical in the absence of scatter, the unscattered photon distribution in the upper half of the photo-peak may be mirrored (folded) into the lower part of the photo-peak, thereby estimating the ""clean"" photo-peak in that part of the window",2001,0, 3786,Coverage gain estimation for multi-burst forward error correction in DVB-H networks,"An approach for increasing the reception robustness of mobile broadcast streaming services has been developed for mobile broadcast systems based on time-slicing, such as DVB-H, employing multi-burst FEC at the link or application layer. Multiple bursts will be encoded jointly in order to overcome burst errors caused by signal level variations. The approach shows high potential, which can be characterized by a link margin gain due to reduced CNR requirements to cope with fast fading and shadowing. Nevertheless, the achieved gain depends on several system parameters (encoding period and coding rate), the physical environment (correlation of shadowing and multi-path fading) and on the mobility of the users (velocity and trajectory). The paper deals with the coverage estimation and network gain due to multi-burst FEC for vehicular users in a realistic urban scenario. Since the user behavior has to be considered, the gain cannot be directly included in the link budget. Thus, a methodology has been developed in order to estimate the coverage of multi-burst FEC services based on dynamic system-level simulations. Results are shown by means of simulations in realistic scenarios and field measurements in urban environments.",2009,0, 3787,Digital correction of circuit imperfections in cascaded sigma-delta modulators composed of 1st-order sections,"An approach to remove the effects of amplifier finite gain and C-ratio mismatches in the 1-1-1 (MASH) and the 1-1-1-1 cascaded sigma-delta modulator is presented. By correcting the digital outputs with estimates of the parasitic errors due to analog circuit imperfections, uncancelled quantization noise terms can be removed. A 1-1-1-1 cascaded modulator, implemented as a fully differential switched-capacitor circuit, has been fabricated in a 1.2 μm, double-poly, n-well CMOS process.
Measurements of the modulator verify that for an amplifier gain of 60 dB and C-ratio mismatch errors of 0.52% and 0.054%, the error correction offers an overall improvement in SNDR of 12-22 dB. A 12 μVrms sine wave can be restored with a positive SNDR",2000,0, 3788,Single-event-upset-like fault injection: a comprehensive framework,An approach to reproduce radiation ground testing results for the study of microprocessor vulnerability to single event upset (SEU) is described in this paper. Resulting cross-sections fit very well with measured ones.,2005,0, 3789,Fault-Tolerance of Robust Feed-Forward Architecture Using Single-Ended and Differential Deep-Submicron Circuits Under Massive Defect Density,"An assessment of the fault-tolerance properties of single-ended and differential signaling is shown in the context of a high defect density environment, using a robust error-absorbing circuit architecture. A software tool based on Monte-Carlo simulations is used for the reliability analysis of the examined logic families. A benefit of the differential circuit over the standard single-ended one is shown in the case of complex systems. Moreover, the reliability of different circuits is analyzed and the optimal granularity of redundant blocks is discussed.",2006,0, 3790,An advanced model for automatic fault management in distribution networks,"An automatic computer model, called the FI-model, for fault location, fault isolation and supply restoration is presented. The model works as an integrated part of substation SCADA and medium voltage distribution network automation systems, including protective relays and AM/FM/GIS. In the model, three different techniques are used for fault location: computed fault distance; analysis of fault indicator readings; and statistical analysis of the line section fault frequencies. Once the faulty section is identified, it is automatically isolated by the remote control of line switches, and supply is restored to the remaining parts of the feeder",2000,0, 3791,Automatic EEG Artifact Removal: A Weighted Support Vector Machine Approach With Error Correction,"An automatic electroencephalogram (EEG) artifact removal method is presented in this paper. Compared to past methods, it has two unique features: 1) a weighted version of the support vector machine formulation that handles the inherent unbalanced nature of component classification and 2) the ability to accommodate structural information typically found in component classification. The advantages of the proposed method are demonstrated on real-life EEG recordings with comparisons made to several benchmark methods. Results show that the proposed method is preferable to the other methods in the context of artifact removal by achieving a better tradeoff between removing artifacts and preserving inherent brain activities. Qualitative evaluation of the reconstructed EEG epochs also demonstrates that after artifact removal inherent brain activities are largely preserved.",2009,0, 3792,Flexible docking mechanism using combination of magnetic force with error-compensation capability,"An auto-recharging system for a mobile robot can help the robot to perform its tasks continuously and without human intervention. For implementing such a system, a docking mechanism is required. This paper presents a new docking mechanism with a localization error-compensation capability. The proposed mechanism uses the combination of a mechanical structure and magnetic forces between the docking connectors.
The structure improves the allowable ranges of lateral and directional docking errors within which the robot is able to dock into the docking station. Consequently, this mechanism reduces the dependency on robot control and allows easy docking with only a mechanical configuration. In this paper, the superiority of the proposed mechanism is verified with experimental results.",2008,0, 3793,A Transfer Fault Diagnosing Method for Protocol Conformance Test Based on FSMs,"An efficient fault diagnosis is very helpful for saving the time and resources spent revising the found faults in the protocol conformance test procedure. However, it is very difficult to diagnose transfer faults accurately and quickly. This paper proposes a test precondition (called the input-correct precondition) which is satisfied through local testing in the system development procedure. We further present a transfer fault diagnosing method based on the finite state machine (FSM), which is applied to diagnose the possible transfer faults in an implementation that satisfies the input-correct precondition. The diagnosing method attempts to provide useful transfer fault hints or locate some transfer faults accurately in terms of the protocol conformance test results.",2009,0, 3794,Entropy-based optimum test points selection for analog fault dictionary techniques,"An efficient method to select an optimum set of test points for dictionary techniques in analog fault diagnosis is proposed. This is done by searching for the minimum of the entropy index based on the available test points. First, the two-dimensional integer-coded dictionary is constructed whose entries are measurements associated with faults and test points. The problem of optimum test points selection is, thus, transformed to the selection of the columns that isolate the rows of the dictionary. Then, the likelihood for a column to be chosen based on the size of its ambiguity set is evaluated using the minimum entropy index of test points. Finally, the test point with the minimum entropy index is selected to construct the optimum set of test points. The proposed entropy-based method to select a local minimum set of test points is polynomially bounded in computational cost. The comparison between the proposed method and other reported test point selection methods is carried out by statistical experiments. The results indicate that the proposed method more efficiently and more accurately finds the locally optimum set of test points and is practical for large scale analog systems.",2004,0, 3795,An efficient and secure fault-tolerant conference-key distribution scheme,"An original approach to establish a computationally secure and robust conference key between multiple users is presented, which is built on known secret sharing schemes and requires authenticated and encrypted point-to-point channels between servers and users. By running the protocol, every honest user of a given conference can get a common key, even if a minority of the servers malfunction or misbehave. This scheme does not rely on any unproven cryptographic assumptions or on the availability of any tamper-proof hardware. By using zero knowledge proofs, any corrupted information and incorrect results can be detected. By distributing the sensitive security information across several servers and never reconstructing any key at a single location, the compromise of a few servers will not compromise the privacy of any key.
Analysis shows that under the decisional Diffie-Hellman assumption, a passive adversary gets zero knowledge about the conference key, and in the random oracle model, an active adversary cannot impersonate successfully. We have implemented the scheme in a distributed environment. By conducting a number of experiments in the fault-free case and various fault scenarios, we show that it has acceptable performance and practicability.",2004,0, 3796,Methodology for characterization of high-speed multi-conductor metal interconnections and evaluation of measurement errors,"Analysis and design of interconnects in high speed integrated circuits and systems involves models in the form of multiconductor transmission lines. The fundamental parameters of those models are matrices of capacitance (C), inductance (L), resistance (R), and conductance (G). We present a methodology for measurement of entries in the capacitance matrix. The entries of capacitance matrices can be calculated using numerical solvers of electrostatic fields established under the assumption of suitable biasing of interconnect structures. Numerical calculations of complete field equations are very complex and expensive in terms of computer time; therefore, several approximations are made in constructing the interconnect-dedicated software packages available on the market. Because of these approximations it is necessary to validate the calculations via measurements. Calculation of the off-diagonal entries of the capacitance matrix from measurements of ""two-terminal"" capacitances is strongly corrupted by measurement errors. The method involves direct capacitance measurement in multi-conductor structures and provides an analysis of accuracy.",2002,0, 3797,Analysis of Single-Phase-to-Ground Fault Generated Initial Traveling Waves,"Analysis of fault generated traveling waves is the basis for implementing traveling wave based protection and fault location. However, the structure at the fault point is asymmetrical under a single-phase-to-ground fault condition in a multi-phase power system, so the traveling wave analysis method for a single circuit cannot be applied. The paper first analyzes the initial traveling wave generated at the fault point by a fault through resistance, according to superposition theory and using a phase-to-modal transformation method, and then considers the characteristics of the fault generated traveling waves at the relay point. Finally, EMTP is used to verify the correctness of the analysis of single-phase-to-ground fault generated initial traveling waves",2005,0, 3798,Fast fault location algorithm based on node betweenness,"Analysis of a real communication network's performance after a node fault has always been a research hotspot in network resilience and fault node location. In this article, we apply four different fault generation policies to network datasets including real communication networks and simulated networks. Experiments focus on the relationship between fault nodes and network structure indications. We found that the betweenness based strategy is more harmful to the network structure.
After modifying the traditional SNMP polling mechanism, a heuristic fault node location algorithm based on node betweenness is proposed in this paper to improve the fault location performance of traditional SNMP polling in terms of the time to find fault nodes in real networks of star topology and mixed topology.",2010,0, 3799,A Business-Oriented Fault Localization Approach Using Digraph,"Analyzed here is a fault localization approach based on directed graphs from the viewpoint of business software. The fault propagation model solves the problem of obtaining the dependency relationship of faults and symptoms semi-automatically. The main idea includes: get the deployment graph of the managed business from the topology of the network and software environment; generate the adjacency matrix of the graph; compute the transitive closure of the adjacency matrix and obtain a so-called dependency matrix; independent faults are located on the main diagonal of the dependency matrix, and the elements of a column form the fault's symptom domain, which is divided into immediate effects and transitive effects. In the real world, those elements denote symptoms, but the two classes of affected nodes have different probabilities of occurrence; that is, immediate symptoms are more likely to be observed than transitive symptoms, and the symptom of the fault itself is most likely to be observed. Based on this hypothesis, a new fault localization algorithm is proposed. The simulation results show the validity and efficiency of this fault localization approach.",2009,0, 3800,Fault injection in VHDL descriptions and emulation,Analyzing the potential faulty behaviors of a circuit at an early stage of the design becomes a major concern due to the increasing probability of faults. It is proposed to carry out such an analysis using fault injections in RT-level VHDL descriptions and hardware prototyping of the circuit under design. Injection of erroneous transitions is automated and results are presented,2000,0, 3801,Design of soft error resilient linear digital filters using checksum-based probabilistic error correction,"Any error detecting or correcting code must meet specific code distance criteria to be able to detect and correct specified numbers of errors. Prior work in the area of error detection and correction in linear digital systems using real number checksum codes has shown that at least two checksums are necessary for error correction in linear digital filters and that a fair amount of computation on the two checksums must be performed before ""perfect"" error compensation can be achieved. In this paper, it is shown that a single checksum can be used to perform probabilistic error correction in linear digital filters with the objective of improving filter SNR in the presence of repetitive injected errors. This approach is designed to partially correct the errors. Comparison against a system with no error correction shows up to 7 dB SNR improvement using the proposed method",2006,0, 3802,Position location error analysis by AOA and TDOA using a common channel model for CDMA cellular environments,"AOA and TDOA are known to be promising methods and have been developed separately. Combination or cooperation of the two methods has not been possible due to the lack of applicable channel models. The COST-207 model is utilized to provide the angular information for the AOA method. The angular characteristic of the channel can be obtained from the given temporal channel model, hence fusion of the approaches is now possible.
Different properties of the methods are revealed to verify the proposition in the literature. As a future research area, the fusion of the two different methods for better reliability and accuracy is mentioned",2000,0, 3803,Non-Contact Measurement for Mechanical Fault Detection in Production Line,"Appliance manufacturing companies increasingly ask for an automatic on-line inspection system to accurately monitor the characteristics of all their products. It is well known that vibration tests enable discrimination between good and faulty products and hence the analysis of the vibration signals can be used for quality control of household appliances on the production lines. Laser Doppler Vibrometry (LDV) is now an established technique for vibration measurements in industrial applications where non-contact operations are essential. Despite the advantages of the LDV, speckle noise occurs when rough surfaces are measured and the object is moving. Therefore, spike removal is a crucial point for a reliable system of mechanical defect detection. This paper deals with the integration of pattern recognition techniques into an automatic test system for data acquisition and classification, in order to detect mechanical faults of washing machines (WM) in the production line. In particular, as the electrical motor is one of the most critical parts of the assembled system, the goal is to detect the faults related to the motor by the use of a Laser Doppler Vibrometer pointing at the tub of the washing machine. First, data acquisition and its problems are introduced. Then, the adopted pre-processing techniques for speckle noise reduction are illustrated. Finally, feature extraction and real examples are shown to test the system.",2007,0, 3804,Sensorless Position Estimation in a Fault Tolerant Surface-Mounted Permanent Magnet AC Motor Drive with Redundancy,"Although mechanical rotor position sensors are one of the least reliable parts in motor drives, past research in fault-tolerant PM drives has largely focused on motor and inverter topologies. While there has been substantial research on sensorless control algorithms, these are generally not directly applicable to the electrically isolated phase windings used in fault-tolerant PM drives. This paper proposes a position sensorless algorithm which is suitable for fault-tolerant PM drives and is capable of operating under fault conditions. The algorithm is based on examining the phase winding incremental flux linkages, and then estimating multiple incremental rotor positions using every pair of phases of the motor drive. Computer simulation results are provided to demonstrate the accuracy of the position estimation under both healthy and faulted conditions",2006,0, 3805,Syntactic fault patterns in OO programs,"Although program faults are widely studied, there are many aspects of faults that we still do not understand, particularly about OO software. In addition to the simple fact that one important goal during testing is to cause failures and thereby detect faults, a full understanding of the characteristics of faults is crucial to several research areas. The power that inheritance and polymorphism brings to the expressiveness of programming languages also brings a number of new anomalies and fault types. In prior work we presented a fault model for the appearance and realization of OO faults that are specific to the use of inheritance and polymorphism. Many of these faults cannot appear unless certain syntactic patterns are used.
The patterns are based on language constructs, such as overriding methods that directly define inherited state variables and non-inherited methods that call inherited methods. If one of these syntactic patterns is used, then we say the software contains an anomaly and possibly a fault. We describe the syntactic patterns for each OO fault type. These syntactic patterns can potentially be found with an automatic tool. Thus, faults can be uncovered and removed early in development.",2002,0, 3806,Fault-based testing in the absence of an oracle,"Although testing is the most popular method for assuring software quality, there are two recognized limitations, known as the reliable test set problem and the oracle problem. Fault-based testing is an attempt by Morell to alleviate the reliable test set problem. In this paper, we propose to enhance fault-based testing to address the oracle problem as well. We present an integrated method that combines metamorphic testing with fault-based testing using real and symbolic inputs",2001,0, 3807,Analysis and Proposition of Error-Based Model to Predict the Minimum Reliability of Software,"Although several software reliability models exist today, for software in its useful life, where no upgrading, debugging or modification is done and where no efforts are taken to avoid errors, there exist several contradicting opinions. Some people say that in the useful life the reliability is time dependent; others say that it is independent of time. In the literature, many software reliability models have been developed to be compatible with hardware reliability models, which are directly a function of time. This paper analyzes these aspects with the existing software reliability models, determines their deficiencies and effectiveness, and finally proposes an innovative error-based generalized model for predicting the minimum reliability of software.",2009,0, 3808,AC induction motor direct torque control system with DSP controller and fault diagnosis,"An AC induction motor direct torque control system with a DSP controller is described. The system hardware and software structures, the fault diagnosis principle and the testing results are presented in this paper. The system described in this paper has a very quick dynamic response, a wide speed regulating range and high reliability",2001,0, 3809,Novel PCB sensor based on Rogowski coil for transmission line fault detection,"An accurate transformation of high voltages and large currents to small signals is essential for power protection, measurement, and control. The traditional iron-core based Rogowski coil is usually applied to transform transient and large currents into small signals. In this paper, high precision Rogowski coils are designed as sensors using printed circuit boards (PCBs). The novel PCB sensor is applied to the detection of fault generated traveling waves. The construction with two circuit board strips is presented, and the sensors' characteristics are analyzed in detail. A new transmission line fault detection method using the PCB sensor is also proposed. EMTP simulation and prototype test results show that the PCB sensor has a small transmission delay and is well suited to traveling wave based fault detection and location.",2009,0, 3810,Ice profiling sonars: a comparison of error budgets,"An accurate understanding of the volume of ice in the Arctic is a critical component of understanding the Earth's heat budget.
Estimation of this volume is a complex problem involving both spatial coverage issues and the accuracy of the ice thickness measurements themselves. Much of the data from which such estimations are made has been taken from 637-class US Navy submarines. Unfortunately, there will be little or no data from this class of submarine in the future as they are all being decommissioned. New observations will be made with new sonars, and the ability to accurately and robustly compare observations between the new and old measurement systems is necessary to minimize the confusion due to differences between them. This paper describes the sonar system used to collect the existing data sets from the 637-class submarines and a new sonar for making similar observations from autonomous underwater vehicles (AUVs) in the future, and establishes a reference framework for evaluating error budgets in this type of sonar system. A careful intercomparison between the old and the new sonars should be done to establish a robust basis for extending the observational time series",2001,0, 3811,Advanced Fault Analysis Software System (or AFAS) for Distribution Power Systems,"An advanced fault analysis software system (or AFAS) is currently being developed at Concurrent Technologies Corporation (CTC) to automatically detect and locate low and high impedance, momentary and permanent faults in distribution power systems. Microsoft Visual Studio is used to integrate advanced software packages and analysis tools (including DEW, AEMPFAST, PSCAD, and CTC's DSFL) under the AFAS platform. AFAS is an intelligent, operational, decision-support fault analysis tool that utilizes PSCAD to simulate fault transients of distribution systems to improve the fault location accuracy of the DSFL tool and to enhance DSFL capabilities for predicting low and high impedance, momentary and permanent faults. The implementation and evaluation results of this software tool are presented.",2007,0, 3812,Application of Run Sequences Comparison in Software Fault Diagnosis,"After analyzing the software failure mechanism, a method of software run sequence comparison is presented, which increases the effectiveness of software fault diagnosis. The method applies the nearest-neighbor idea, searching for the nearest neighbor normal run of the fault run by using the edit distance. By comparing the fault run sequence with its nearest neighbor, the difference between them can be found, which generates the program suspect range report. An evaluation function is also presented. Finally, an experiment verifies the approach.",2009,0, 3813,Instruction-Based Self-Testing of Delay Faults in Pipelined Processors,"Aggressive processor design methodology using high-speed clocks and deep submicrometer technology is necessitating the use of at-speed delay fault testing. Although nearly all modern processors use pipelined architectures, no method has been proposed in the literature to model these for the purpose of test generation. This paper proposes a graph theoretic model of pipelined processors and develops a systematic approach to path delay fault testing of such processor cores using the processor instruction set. The proposed methodology generates test vectors under the extracted architectural constraints. These test vectors can be applied in the functional mode of operation, hence self-test becomes possible. Self-test in a functional mode can also be used for online periodic testing.
Our approach uses a graph model for architectural constraint extraction and path classification. Test vectors are generated using constrained automatic test pattern generation (ATPG) under the extracted constraints. Finally, a test program consisting of an instruction sequence is generated for the application of the generated test vectors. We applied our method to two example processors, namely a 16-bit 5-stage VPRO pipelined processor and a 32-bit pipelined DLX processor, to demonstrate the effectiveness of our methodology",2006,0,
3814,Online error detection and correction of erratic bits in register files,"Aggressive voltage scaling needed for low power in each new process generation causes large deviations in the threshold voltage of minimally sized devices of the 6T SRAM cell. Gate oxide scaling can cause large transient gate leakage (a trap in the gate oxide), which is known as the erratic bits phenomenon. Register file protection is necessary to prevent errors from quickly spreading to different parts of the system, which may cause applications to crash or silently corrupt data. This paper proposes a simple and cost-effective mechanism that increases the resiliency of register files to erratic bits. Our mechanism detects registers that have erratic bits, recovers from the error and quarantines the faulty register. After the quarantine period, it is able to detect with low overhead whether the register is fully operational.",2009,0,
3815,Software-Intensive Equipment Fault Diagnosis Research Based on D-S Evidential Theory,"Aiming at the limitations of current fault diagnosis for software-intensive equipment (SIE), and considering the advantages of D-S evidential theory in dealing with multi-source information, this paper presents a method of fault diagnosis at the decision level based on D-S evidential theory. The method establishes the system structure of fault diagnosis, constructs a reasonable basic probability assignment algorithm, and carries out multi-criteria fusion using the D-S fusion model and method. The method is proved effective for fault location by an example. It makes the diagnostic information more definite and improves the accuracy of diagnosis.",2009,0,
3816,Application of fuzzy classification by evolutionary neural network in incipient fault detection of power transformer,"Aiming at the incipient fault detection of power transformers, the paper proposes a novel fuzzy classification by evolutionary neural network. The method models the membership functions of all fuzzy sets using a three-layer feedforward neural network, and trains a group of neural networks by combining a modified Evolutionary Strategy with the Levenberg-Marquardt optimization method in order to accelerate convergence and avoid falling into local minima. Each trained neural network thus denotes an ""expert"" model. The classification results obtained from all ""expert"" models are integrated according to the absolute-majority-voting rule. A large number of samples are tested, and the results demonstrate that the novel method is much better in neural network structure, classification accuracy, generalization capability, fault tolerance and robustness than other traditional methods.",2004,0,
3817,Texture feature extraction and its application in fault signal recognition,"Aiming at the online fault diagnosis, texture features, which are usually used in image processing, are applied for the first time to early fault signal recognition problems.
After defining the parameter R based on the gray-level co-occurrence matrix, a method for extracting this texture-feature parameter is presented. Then, a novel fault signal recognition algorithm based on the texture-feature parameter R is proposed. Using this algorithm, the patterns of a power cable in the normal state and in the fault states of single-phase open circuit, single-phase short circuit to ground, two-phase short circuit to ground, and three-phase short circuit can be recognized correctly and effectively, as proved by simulation experiments.",2010,0,
3818,Logical method for detecting faults by fault detection table,"Algebro-logic vector method for diagnosing faults in systems and their components, based on the use of a fault detection table and a transaction graph, is proposed. The method decreases the verification time of the software model.",2010,0,
3819,Time vs. space in fault-tolerant distributed systems,"Algorithms for solving agreement problems can be classified into two categories: (1) those relying on failure detectors (FDs), which we call FD-based, and (2) those that rely on a group membership service (GMS), which we call GMS-based. This paper discusses the advantages and limitations of these two approaches and proposes an extension to the GMS approach that combines the advantages of both approaches without their drawbacks. This extension leads us to distinguish between time-triggered suspicions of processes and space-triggered exclusions",2001,0,
3820,Application of Fault Tree in Software Safety Analysis,"Along with the development of information technology, computer applications are increasing and software reliability and safety are receiving more and more attention. Fault tree analysis is therefore used to analyze the failure probability of every module of a system and thereby find the key modules that most affect system safety; the structural importance coefficient is also used for quantitative analysis of the importance of every module in the system.",2009,0,
3821,Impedance correction of one-port coaxial loads using reference air line,An impedance measurement method for one-port coaxial loads using multiple reference air lines as the impedance standard and an offset open and short as the reflection standard is presented in this paper. Measured impedances for a termination and a sliding load are presented and the measurement uncertainty is evaluated.,2002,0,
3822,A comprehensive evaluation of capture-recapture models for estimating software defect content,"An important requirement to control the inspection of software artifacts is to be able to decide, based on more objective information, whether the inspection can stop or whether it should continue to achieve a suitable level of artifact quality. A prediction of the number of remaining defects in an inspected artifact can be used for decision making. Several studies in software engineering have considered capture-recapture models to make such a prediction. However, few studies compare the actual number of remaining defects to the one predicted by a capture-recapture model on real software engineering artifacts. The authors focus on traditional inspections and estimate, based on actual inspection data, the degree of accuracy of relevant state-of-the-art capture-recapture models for which statistical estimators exist.
In order to assess their robustness, we look at the impact of the number of inspectors and the number of actual defects on the estimators' accuracy based on actual inspection data. Our results show that models are strongly affected by the number of inspectors, and therefore one must consider this factor before using capture-recapture models. When the number of inspectors is too small, no model is sufficiently accurate and underestimation may be substantial. In addition, some models perform better than others in a large number of conditions, and plausible reasons are discussed. Based on our analyses, we recommend using a model taking into account that defects have different probabilities of being detected, and the corresponding Jackknife Estimator. Furthermore, we calibrate the prediction models based on their relative error, as previously computed on other inspections. We identified theoretical limitations to this approach, which were then confirmed by the data",2000,0,
3823,"Improved pose measurement and tracking system for motion correction of awake, unrestrained small animal SPECT imaging","An improved optical landmark-based pose measurement and tracking system has been developed to provide 3D animal pose data for a single photon emission computed tomography (SPECT) imaging system for awake, unanesthetized, unrestrained laboratory animals. The six-degree-of-freedom animal position and orientation measurement data are time-synchronized with the SPECT list-mode data to provide for motion correction after the scan and before reconstruction. The tracking system employs infrared (IR) markers placed on the animal's head along with synchronized, strobed IR LEDs to illuminate the reflectors and freeze motion while minimizing reflections. A new-design trinocular stereo image acquisition system using IEEE 1394 CMOS cameras acquires images of the animal with markers contained within a transparent enclosure. The trinocular configuration provides improved accuracy, range of motion, and robustness over the binocular stereo previously used. Enhanced software detects obstructions, automatically segments the markers, rejects reflections, performs marker correspondence, and calculates the 3D pose of the animal's head using image data from three cameras. The new hardware design provides more compact camera positioning with enhanced animal viewing through the 360 degree SPECT scan. This system has been implemented on a commercial gantry and tested using live mice, and has been shown to be more reliable with higher accuracy than the previous system. Experimental results showing the improved motion tracking are given.",2007,0,
3824,Investigation of induction machine phase open circuit faults using a simplified equivalent circuit model,"An induction motor model based on the per-phase equivalent circuit is used to simulate operation with an open circuit fault. The model considers both spatial field harmonics and saturation effects to correctly model the motor's behaviour under faulty conditions, where the non-linearities may produce problematic torque pulsations due to the unbalanced nature of the winding distribution. A model is presented which takes into account the most prominent nonlinearities while keeping simulation times low in order to enable the development of fault tolerant control strategies. In addition, this paper presents a study of the behaviour of an induction motor drive with a phase open circuit fault.
A new fault remedial control strategy for this type of fault will also be described. Experimental tests on an instrumented vector-controlled rig have been used to verify the simulation results.",2008,0,
3825,Diversified Process Replicæ for Defeating Memory Error Exploits,"An interpretation of the notion of software diversity is based on the concept of diversified process replicæ. We define p_r as the replica of a process p which behaves identically to p but has some ""structural"" diversity from it. This makes it possible to detect memory corruption attacks in a deterministic way. In our solution, p and p_r differ in their address space, which is properly diversified, thus defeating absolute and partial overwriting memory error exploits. We also give a characterization of and a preliminary solution for shared memory management, one of the biggest practical issues introduced by this approach. How to deal with synchronous signal delivery is discussed as well. A user-space proof-of-concept prototype has been implemented. Experimental results show a 68.93% throughput slowdown in the worst case, but only a 1.20% slowdown in the best case.",2007,0,
3826,Distortion correction of LDMOS power amplifiers using hybrid RF second harmonic injection/digital predistortion linearization,"An LDMOS RF power amplifier for RF multichannel wireless systems with improved IMD performance characteristics is presented. The application of two combined linearization methods is tested with the help of the circuit simulation software ADS. The injection of the fundamental signal's second harmonic into the RF amplifier and a digital predistortion technique are combined in order to achieve IMD improvement. By proper selection of the phase and amplitude of the injected second harmonic signal, it is possible to reduce IMD products that have already been reduced by the well-established method of digital predistortion",2006,0,
3827,Continual on-line training of neural networks with applications to electric machine fault diagnostics,"An online training algorithm is proposed for neural network (NN) based electric machine fault detection schemes. The algorithm obviates the need for large data memory and long training times, a limitation of most AI-based diagnostic methods for commercial applications, and in addition does not require training prior to commissioning. Experimental results are provided for an induction machine stator winding turn-fault detection scheme that uses a feedforward NN to compensate for machine and instrumentation nonidealities, to illustrate the feasibility of the new training algorithm for real-time implementation",2001,0,
3828,An approach to detecting duplicate bug reports using natural language and execution information,"An open source project typically maintains an open bug repository so that bug reports from all over the world can be gathered. When a new bug report is submitted to the repository, a person, called a triager, examines whether it is a duplicate of an existing bug report. If it is, the triager marks it as a duplicate and the bug report is removed from consideration for further work. In the literature, there are approaches exploiting only natural language information to detect duplicate bug reports. In this paper we present a new approach that also involves execution information. In our approach, when a new bug report arrives, its natural language information and execution information are compared with those of the existing bug reports.
Then, a small number of existing bug reports are suggested to the triager as the most similar to the new bug report. Finally, the triager examines the suggested bug reports to determine whether the new bug report duplicates an existing one. We calibrated our approach on a subset of the Eclipse bug repository and evaluated it on a subset of the Firefox bug repository. The experimental results show that our approach can detect 67%-93% of duplicate bug reports in the Firefox bug repository, compared to 43%-72% using natural language information alone.",2008,0,
3829,Probability density of the phase error of a digital interferometer with overlapped FFT processing,"Advanced interferometric direction finding systems often use the Fast Fourier Transform (FFT) for computing the relative phase shifts of the signals corresponding to pairs of antenna elements. An exact formula was recently derived for the probability density of the phase error for the case when the overlap ratio of the FFT input data blocks is 0. This paper extends the previous work to derive an exact formula for the probability density for the general case of an arbitrary overlap ratio in [0, 1).",2010,0,
3830,Simulation of Elman Neural network Extension Strategy Generator to Pattern Deformation Error in Flexibility Material Treating Field,"After analyzing the factors of flexible-material processing (such as quilting) that influence pattern deformation, the edges of the original image and the deformed image are extracted and converted into coordinates. The data are then fed into a previously built Elman neural network for training, with the original image used as the teacher signal. Finally, by analyzing the simulation results, a matter-element extension model is built as the database of the extension decision strategy generator.",2008,0,
3831,A NURBS-based error concealment technique for corrupted images from packet loss,"An error concealment technique using non-uniform rational B-splines (NURBS) is proposed. NURBS has been employed by many CAD/CAM systems as a fundamental geometry representation. Despite the fact that NURBS has gained tremendous popularity in the CAD/CAM and computer graphics community, its application to image problems has received little attention. Meanwhile, image contents may be corrupted or lost during transmission, and although quite a few techniques exist, developing an effective approach to conceal such errors remains one of the hottest research topics. The aim of this study is therefore to develop an image reconstruction technique using NURBS. The key idea is to use NURBS to represent the uncorrupted portion of the image data. To accomplish this, a single-hidden-layer neural network is employed to learn the appropriate NURBS control points. After learning, NURBS is then used to render the corrupted image data. Experimental results indicate that the proposed approach exhibits promising performance.",2002,0,
3832,A high-performance fault-tolerant software framework for memory on commodity GPUs,"As GPUs are increasingly used to accelerate HPC applications by allowing more flexibility and programmability, their fault tolerance is becoming much more important than before, when they were used only for graphics. The current generation of GPUs, however, does not have standard error detection and correction capabilities, such as SEC-DED ECC for DRAM, which is almost always exercised in HPC servers.
We present a high-performance software framework to enhance commodity off-the-shelf GPUs with DRAM fault tolerance. It combines data coding for detecting bit-flip errors with checkpointing for recovering computations when such errors are detected. We analyze the performance of data coding on GPUs and present optimizations geared toward memory-intensive GPU applications. We present performance studies of the prototype implementation of the framework and show that the proposed framework can be realized with negligible overheads in compute-intensive applications such as the N-body problem and matrix multiplication, and with as low as 35% overhead in a highly efficient memory-intensive 3-D FFT kernel.",2010,0,
3833,FTC-Charm++: an in-memory checkpoint-based fault tolerant runtime for Charm++ and MPI,"As high performance clusters continue to grow in size, the mean time between failures shrinks. Thus, the issues of fault tolerance and reliability are becoming challenging factors for application scalability. The traditional disk-based method of dealing with faults is to checkpoint the state of the entire application periodically to reliable storage and restart from the most recent checkpoint. The recovery of the application from faults involves (often manually) restarting the application on all processors and having it read the data from disks on all processors. The restart can therefore take minutes after it has been initiated. Such a strategy requires that the failed processor can be replaced so that the number of processors at checkpoint time and recovery time are the same. We present FTC-Charm++, a fault-tolerant runtime based on a scheme for fast and scalable in-memory checkpoint and restart. At restart, when there is no extra processor, the program can continue to run on the remaining processors while minimizing the performance penalty due to lost processors. The method is useful for applications whose memory footprint is small at the checkpoint state, while a variation of this scheme - in-disk checkpoint/restart - can be applied to applications with large memory footprints. The scheme does not require any individual component to be fault-free. We have implemented this scheme for Charm++ and AMPI (an adaptive version of MPI). This work describes the scheme and shows performance data on a cluster using 128 processors.",2004,0,
3834,On-chip debugging-based fault emulation for robustness evaluation of embedded software components,"As manufacturers integrate more off-the-shelf components in embedded products, their robustness evaluation becomes more necessary. This requirement is however difficult to meet using non-intrusive evaluation methods, especially in the case of systems-on-a-chip (SoCs). The research presented in this paper investigates the use of on-chip debugging (OCD) mechanisms to evaluate the ability of SoC-embedded software components to withstand the occurrence of external faults. These faults are emulated by corrupting the information that components can receive through their public interfaces. Once a fault has been injected, the reaction of the targeted components is studied using OCD monitoring capabilities. The ability of these capabilities to run in parallel with the rest of the SoC internal mechanisms is exploited in order to carry out the previous tasks without requiring the source code of the component under study and without interfering (neither spatially nor temporally) with the system's nominal execution.
Results show the potential and limitations of the approach and suggest directions for future investigation.",2005,0,
3835,Automated antenna detection and correction methodology in VLSI designs,"As more and more devices are packed on a single chip and the complexity of VLSI designs increases, antenna detection and correction is becoming an increasingly challenging task. This paper presents a methodology which employs a combination of prevention and correction of antennae at various stages of the ASIC (Application Specific Integrated Circuit) design flow, such as cell library development, block design flow and chip design flow. The methodology advocates adding protection diodes only in a certain number of cells in the library. We have implemented this methodology in our ASIC design flow and are able to solve antenna issues in designs with negligible impact on die size (24% increase in die size in less than 5% of the designs) and performance (0.3%-0.6% worst-case impact on delay). By employing this methodology, we found that the number of antennae in the final layout was reduced to a very small number, and even to zero in some cases, and we were able to save the time involved in correcting antennae.",2003,0,
3836,Computers detecting inferior printing quality and errors,"As observed, the described approach to computer-aided error detection in printed matter proves to be exact and reliable. Our prototype printing error detection program has also undergone extreme testing in an industrial environment. The problem of excessive detection duration will have to be solved by an optimization of the registration and interpolation procedures. For coarse image registration, a solution in the frequency domain seems appealing, where the translational and rotational information can be extracted from the phase and amplitude spectra, respectively. Fine image registration must be optimized even more. One of the ways we are presently researching tries to minimize the quantity of data included in the computation at each iteration. We can do this by using images with lower resolution or by taking only a segment of an image into registration. Afterwards, the obtained registration parameters would be applied to the original-size images. For the time being, a second optimization attempt with partial images seems the most promising.",2007,0,
3837,ESoftCheck: Removal of Non-vital Checks for Fault Tolerance,"As semiconductor technology scales into the deep-submicron regime, the occurrence of transient or soft errors will increase. This will require new approaches to error detection. Software checking approaches are attractive because they require little hardware modification and can be easily adjusted to fit different reliability and performance requirements. Unfortunately, software checking adds a significant performance overhead. In this paper we present ESoftCheck, a set of compiler optimization techniques to determine the vital checks, that is, the minimum number of checks that are necessary to detect an error and roll back to a correct program state. ESoftCheck identifies the vital checks on platforms where registers are hardware-protected with parity or ECC, when there are redundant checks, and when checks appear in loops. ESoftCheck also provides knobs to trade reliability for performance based on the support for recovery and the degree of trust in the operations.
Our experimental results on a Pentium 4 show that ESoftCheck can obtain a 27.1% performance improvement without losing fault coverage.",2009,0,
3838,Systematic defects in software cost-estimation models harm management,"As software development becomes an increasingly important enterprise, managerial requirements for cost estimation increase, yet developers continue a rather long history of failing to cost software systems development adequately. Here, it is contended that poor results are due, in part, to some traditionally recognized problems and, in part, to a defect in the models themselves. Identifying the defect of software cost models is the purpose of this paper",2001,0,
3839,An empirical investigation of fault types in space mission system software,"As space mission software becomes more complex, the ability to effectively deal with faults is increasingly important. The strategies that can be employed for fighting a software bug depend on its fault type. Bohrbugs are easily isolated and removed during software testing. Mandelbugs appear to behave chaotically. While it is more difficult to detect these faults during testing, it may not be necessary to correct them; a simple retry after a failure occurrence may work. Aging-related bugs, a sub-class of Mandelbugs, can cause an increasing failure rate. For these faults, proactive techniques may prevent future failures. In this paper, we analyze the faults discovered in the on-board software for 18 JPL/NASA space missions. We present the proportions of the various fault types and study how they have evolved over time. Moreover, we examine whether or not the fault type and attributes such as the failure effect are independent.",2010,0,
3840,Soft errors in SRAM-FPGAs: A comparison of two complementary approaches,"As SRAM-based FPGAs are introduced in safety- or mission-critical applications, the availability of suitable electronic design automation (EDA) tools for predicting system dependability becomes mandatory for designers. Nowadays designers can opt either for workload-independent EDA tools, which provide information about a system's dependability regardless of the workload the system is supposed to process when deployed in the mission, or for workload-dependent approaches. In this paper, we compare two tools for predicting the effects of soft errors in circuits implemented using SRAM-based FPGAs: a workload-independent one (STAR) and a workload-dependent one (FLIPPER). Experimental results show that the two tools are complementary and can be used fruitfully for obtaining accurate predictions.",2007,0,
3841,Decoding STAR code for tolerating simultaneous disk failure and silent errors,"As storage systems grow in size and complexity, various hardware and software component failures inevitably occur, resulting in disk failures as well as silent errors. Existing techniques and schemes overcome failures and silent errors in a separate fashion. In this paper, we advocate using the STAR code as a unified and systematic mechanism to simultaneously tolerate failures on one disk and silent errors on another. By exploring the unique geometric structure of the STAR code, we propose a novel efficient decoding algorithm - EEL.
Both theoretical and experimental performance evaluations show that EEL consistently outperforms a naive Try-and-Test approach by large factors in overall decoding throughput.",2010,0,
3842,Optimizing device size for soft error resilience in sub-micron logic circuits,"As technology nodes are scaled down, soft errors induced by particle strikes are becoming a troublesome reliability issue in logic circuits. Various sizing techniques commonly used in the past to reduce the soft error rate are expensive in terms of area, performance, and energy consumption, and require changes to adapt to sub-micron technologies. This study introduces two novel sizing methods that selectively upsize transistor networks of a circuit. Our first methodology formulates soft error rate minimization as a mathematical optimization problem and searches for the best area distribution such that the maximum reliability gain is obtained. This methodology ensures that optimal solutions are achieved within the area budget provided to the designer. However, generating the optimal solution requires very high CPU time. Therefore, we also propose a heuristic-based methodology which upsizes only selected transistor networks in sensitive gates, based on the soft error sensitivity of each gate. With the sensitive-gate selection and area distribution algorithms proposed in this technique, we show through experimental results that our heuristic-driven method gives satisfactory reliability improvement compared to our first method, while requiring relatively little computation time.",2010,0,
3843,New Approach for Defect Inspection on Large Area Masks,"Besides the mask market for IC manufacturing, which mainly uses 6 inch masks, the market for so-called large area masks is growing very rapidly. Typical applications of these masks are mainly wafer bumping for current packaging processes, color filters on TFTs, and Flip Chip manufacturing. To expose bumps and similar features on 200 mm wafers under proximity exposure conditions, 9 inch masks are used, while in 300 mm wafer bumping processes 14 inch masks are handled. Flip Chip manufacturing needs masks up to 28 by 32 inch. This current maximum mask dimension is expected to hold for the next 5 years in industrial production. On the other hand, shrinking feature sizes, just as in the case of IC masks, demand enhanced sensitivity of the inspection tools. A defect inspection tool for those masks is valuable both for the mask maker, who has to deliver a defect-free mask to his customer, and for the mask user, who has to supervise the mask's condition during its lifetime. This is necessary because large area masks are mainly used for proximity exposures, during which the mask is vulnerable since it contacts the resist on top of the wafers. Therefore a regular inspection of the mask after 25, 50, or 100 exposures has to be done during its whole lifetime. Thus critical resist contamination and other defects, which lead to yield losses, can be recognized early. In the future, shrinking feature dimensions will require even more sensitive and reliable defect inspection methods than they do presently. Besides the sole inspection capability, the tools should also provide highly precise measurement capabilities and extended review options.",2007,0,
3844,An efficient method for compensating the truncation DC-error in a multi-stage digital filter,"Binary two's complement operations in a digital circuit increase the word length of operation results.
In this case, LSB truncation is performed to meet the system requirement. However, truncating the word adds an undesired DC-error to the signal and degrades system performance. To solve this problem, a complementary method is suggested. The proposed scheme improves system performance with a simple method that involves inverting the sign of the truncated signal value. The suggested truncation DC-error reduction method is applied to a multi-stage digital FIR filter implementation on an FPGA and the results are analyzed.",2004,0,
3845,"Bio-Robustness and Fault Tolerance: A New Perspective on Reliable, Survivable and Evolvable Network Systems","Biological structures and organizations in nature, from genes, molecules, immune systems, and biological populations to ecological communities, are built to withstand perturbations, and biological robustness is therefore ubiquitous. Furthermore, it is intuitively obvious that the counterpart of bio-robustness in engineered systems is fault tolerance. With the objective of stimulating inspiration for building reliable and survivable computer networks, this paper reviews the state-of-the-art research on bio-robustness at different biological scales (levels), including gene, molecular networks, immune systems, population, and community. Besides identifying the biological/ecological principles and mechanisms relevant to biological robustness, we also review major theories related to the origins of bio-robustness, such as evolutionary game theory, self-organization and emergent behaviors. Evolutionary game theory, which we present in a relatively comprehensive introduction, provides an ideal framework to model the reliability and survivability of computer networks, especially wireless sensor networks. We also present our perspectives on the reliability and survivability of computer networks, particularly wireless sensor and ad hoc networks, based on the principles and mechanisms of bio-robustness reviewed in the paper. Finally, we propose four open questions, including three in engineering and one in DNA code robustness, to demonstrate the bidirectional nature of the interactions between bio-robustness and engineering fault tolerance.",2008,0,
3846,Onboard autonomy and fault protection concept of the BIRD satellite,"BIRD (Bi-Spectral Infra-Red Detection) has been demonstrating new technologies since its successful launch on 22 October 2001 with the PSLV-C3 from Shar, India, into a sun-synchronous low Earth orbit at 560 km. Besides the successful in-orbit test of the detection and evaluation of vegetation fires with micro satellites, BIRD has also been demonstrating a number of advanced spacecraft bus technologies, especially in the field of satellite autonomy and fault detection and protection. A number of ingenious features make it possible to operate the 92 kg satellite in a comfortable and safe way. Special features include the autonomous management of onboard computer failures, surveillance of and response to critical parameter limit violations, and handling of system attitude anomalies. A robust redundancy philosophy and an optimised ground-spacecraft interaction concept have contributed to the success of the BIRD mission, which was designed to operate for one year and has now completed its second year of operation.
The paper describes the related new technologies and the results of the experience with BIRD.",2003,0,
3847,Fault tolerant Block Based Neural Networks,"Block Based Neural Networks (BBNNs) have been shown to be a practical means for implementing evolvable hardware on reconfigurable fabrics for solving a variety of problems that take advantage of the massive parallelism offered by a neural network approach. This paper proposes a method for obtaining a fault tolerant implementation of BBNNs by using a biologically inspired layered design. At the lowest level, each block has its own online detection and correction logic, combined with sufficient spare components to ensure recovery from permanent and transient errors. Another layer of hierarchy combines the blocks into clusters, where a redundant column of blocks can be used to replace blocks that cannot be repaired at the lowest level. The hierarchical approach is well suited to a divide-and-conquer approach to genetic programming whereby complex problems are subdivided into smaller parts. The overall approach can be implemented on a reconfigurable fabric.",2010,0,
3848,Fault Detection of Bloom Filters for Defect Maps,"Bloom filters can be used as a data structure for defect maps in nanoscale memory. Unlike in most other applications of Bloom filters, both false positives and false negatives induced by a fault cause a fatal error in the memory system. In this paper, we present a technique for detecting faults in Bloom filters for defect maps. Spare hashing units and a simple coding technique for bit vectors are employed to detect faults during normal operation. Parallel write/read is also proposed to detect faults with high probability even without spare hashing units.",2008,0,
3849,Analyzing and Extending MUMCUT for Fault-based Testing of General Boolean Expressions,"Boolean expressions are widely used to model decisions or conditions of a specification or source program. MUMCUT, which is designed to detect seven common fault types under the assumption that the Boolean expressions under test are in Irredundant Disjunctive Normal Form (IDNF), is an efficient fault-based test case selection strategy in terms of fault-detection capability and the size of the selected test suite. Following up our previous work, which reported the fault-detection capability of MUMCUT when applied to general-form Boolean expressions, in this paper we characterize the types of single faults committed in general Boolean expressions that a MUMCUT test suite fails to detect, analyze why a MUMCUT test suite fails to detect these types of faults, and provide some extensions to enhance the detection capability of MUMCUT for these fault types.",2006,0,
3850,Interactive Software and Hardware Faults Diagnosis Based on Negative Selection Algorithm,"Both the hardware and the software of computer systems are subject to faults. However, traditional methods, which ignore the relationship between software faults and hardware faults, are ineffective for diagnosing complex faults involving both software and hardware. On the basis of defining the interactive effect to describe the process of interactive software and hardware faults, this paper presents a new matrix-oriented negative selection algorithm to detect faults. Furthermore, the row vector distance and matrix distance are constructed to measure elements between the self set and the detector set.
The experiment on a temperature control system indicates that the proposed algorithm has good fault detection ability, and that the method is applicable to diagnosing interactive software and hardware faults with small samples.",2008,0,
3851,A Study of Fault Coverage of Standard and Windowed Watchdog Timers,"Both standard and windowed watchdog timers were designed to detect flow faults and ensure the safe operation of the systems they supervise. This paper studies the effect of transient failures on microprocessors, and utilizes two methods to compare the fault coverage of the two watchdog timers. The first method injects a fault while a processor is reading an image from RAM and sending it to the VGA RAM for display. This method is implemented on an FPGA, and visually demonstrates the existence of fast watchdog resets, which cannot be detected by standard watchdog timers, and faulty resets, which occur undetected within the safe window of windowed watchdog timers. The second method is a simulation in which the fault coverage for each watchdog timer system is calculated. This simulation tries to take into consideration many factors which could affect the outcome of the comparison.",2007,0,
3852,Application of Particle Swarm Optimization and RBF Neural Network in Fault Diagnosis of Analogue Circuits,"BP neural network has the shortcomings of over-fitting and local optima, which affect its practicability. The RBF neural network is a feedforward neural network with global approximation capability. However, the parameters of the RBF neural network need to be determined. Particle swarm optimization is presented to choose the parameters of the RBF neural network. The particle swarm optimization-RBF neural network method has high classification performance and is applied to the fault diagnosis of analogue circuits. Finally, the results of fault diagnosis cases show that the particle swarm optimization-RBF neural network method achieves higher classification accuracy than the BP neural network.",2009,0,
3853,Fault-Tolerant BPEL Workflow Execution via Cloud-Aware Recovery Policies,"BPEL is the de facto standard for business process modeling in today's enterprises and is a promising candidate for the integration of business and scientific applications that run in Grid or Cloud environments. In these distributed infrastructures, the occurrence of faults is quite likely. Without sophisticated fault handling, workflows are frequently abandoned due to software or hardware failures, leading to a waste of CPU hours. The fault handling mechanisms provided by BPEL are well suited for handling faults of the business logic, but infrastructure-induced errors should be handled automatically to avoid over-complicating workflow design and to keep concerns separated. This paper identifies classes of faults that can be resolved automatically by the infrastructure, and provides a policy-based approach to configure this automatic behavior without the need to add explicit fault handling mechanisms to the BPEL process. The proposed approach provides automatic redundancy of services using a Cloud infrastructure to allow substitution of defective services. An implementation based on the ActiveBPEL engine and Amazon's Elastic Compute Cloud is presented.",2009,0,
3854,Rotor fault detection in inverter drive systems using inverter input current analysis,"Broken rotor bar/end-ring faults are common in squirrel cage induction motors and have been thoroughly investigated in the case of AC grid supply.
In this paper, this type of fault is studied in the case of an inverter-driven motor. The induction motor is driven using the scalar Volts/Hz control method. The motor stator current and inverter input current spectra are studied for frequency components due to the fault. In steady state, the inverter input current spectrum enables fault detection and severity classification, even under low load and low fault severity, more easily than motor stator current data, and it offers an advantage in variable-frequency operation.",2010,0,
3855,Automated duplicate detection for bug tracking systems,"Bug tracking systems are important tools that guide the maintenance activities of software developers. The utility of these systems is hampered by an excessive number of duplicate bug reports - in some projects, as many as a quarter of all reports are duplicates. Developers must manually identify duplicate bug reports, but this identification process is time-consuming and exacerbates the already high cost of software maintenance. We propose a system that automatically classifies duplicate bug reports as they arrive, to save developer time. This system uses surface features, textual semantics, and graph clustering to predict duplicate status. Using a dataset of 29,000 bug reports from the Mozilla project, we perform experiments that include a simulation of a real-time bug reporting environment. Our system is able to reduce development cost by filtering out 8% of duplicate bug reports while allowing at least one report for each real defect to reach developers.",2008,0,
3856,Strategy to Detect Bug in Pre-silicon Phase,"Bugs still escape to post-silicon despite the huge effort put into validating the design in the pre-silicon phase. This could cost an immediate stepping, while some other bugs may have a software workaround. Running more tests may still miss the bugs. Therefore it is necessary to have an effective strategy during the pre-silicon phase. This paper presents a strategy to derive the test points from the validation objective and to set the domain to test based on the micro-architecture, before entering the simulation environment. This strategy utilizes coverage-based validation (CBV), with test points and domains coded as coverage points, while the test generator directs the transactions into the domain under test. This provides comprehensive validation coverage of the design under test.",2009,0,
3857,Built-In Soft Error Resilience for Robust System Design,"Built-in soft error resilience (BISER) is an architecture-aware circuit design technique for correcting soft errors in latches, flip-flops and combinational logic. BISER enables more than an order of magnitude reduction in chip-level soft error rate with minimal area impact, 6-10% chip-level power impact, and 1-5% performance impact (depending on whether combinational logic error correction is implemented or not). In comparison, several traditional error-detection techniques introduce 40-100% power, performance and area penalties, and require significant effort for designing and validating corresponding recovery mechanisms. In addition, BISER enables system design with configurable soft error protection features. Such features are extremely important for future designs targeting applications with a wide range of power, performance and reliability constraints.
Design trade-offs associated with BISER and other existing soft error protection techniques are also analyzed.",2007,0,
3858,An Error Analysis of the Scattering Matrix Renormalization Transform,"By characterizing the condition number as an indicator of accuracy, an error analysis is performed to determine how different auxiliary terminations affect the accuracy of the scattering matrix renormalization transform. Such an error analysis plays an essential role in enhancing the accuracy and widening the applicability of the port renormalization method for measuring multiport devices with a two-port vector network analyzer.",2009,0,
3859,Impact of data cache memory on the single event upset-induced error rate of microprocessors,"Cache memories embedded in most complex processors significantly contribute to the global single event upset-induced error rate. Three different approaches allowing the study of this contribution by fault injection are investigated in this paper.",2003,0,
3860,Predictive distribution reliability analysis considering post fault restoration and coordination failure,"Calculation of predicted distribution reliability indexes can be implemented using a distribution analysis model and the algorithms defined by the Distribution System Reliability Handbook, EPRI Project 1356-1 Final Report. The calculation of predicted reliability indexes is fairly straightforward until post-fault restoration and coordination failure are included. This paper presents the methods used to implement predictive reliability, with consideration for post-fault restoration and coordination failure, in a distribution analysis software model",2001,0,
3861,An Efficient Error-Bounded General Camera Model,"Camera models are essential infrastructure in computer vision, computer graphics, and visualization. The most frequently used camera models are based on the single-viewpoint constraint. Removing this constraint brings the advantage of improved flexibility in camera design. However, prior camera models that eliminate the single-viewpoint constraint are inefficient. We describe an approximate model for coherent general cameras, which projects efficiently with user-chosen accuracy. The rays of the general camera are partitioned into simple cameras that approximate the camera locally. The simple cameras are modeled with k-ray cameras, a novel class of non-pinhole cameras. The rays of a k-ray camera interpolate between k construction rays. We analyze several variants of k-ray cameras. The resulting compound camera model is efficient because the number of simple cameras is orders of magnitude lower than the original number of camera rays, and because each simple camera offers closed-form projection.",2006,0,
3862,Fault diagnosis method based on D-S evidence theory,"Capabilities of a prognostics and system health management (PHM) system to detect, isolate and forecast faults directly determine the effectiveness of maintenance work. With the development of sensor technology and signal processing methods, fault diagnosis aimed at precisely detecting and identifying faults has become a typical multi-sensor fusion problem, and new challenges have arisen with regard to obtaining reliable fault diagnostic results based on multi-source information. From the viewpoint of evidence theory, information obtained from each sensor can be considered a piece of evidence, and as such, multi-sensor based machine diagnosis can be viewed as a problem of evidence fusion.
In this paper we investigate the use of Dempster-Shafer (D-S) evidence theory as a tool for modeling and fusing multi-source information from machines. We present a preliminary review of evidence theory and explain how the multi-sensor machine diagnosis problem can be framed in the context of this theory. We propose a method for enhancing the effectiveness of basic probability assignment functions in modeling, and a method for combining pieces of evidence. By introducing an importance index, the issues of evidence importance in the practical application of D-S evidence theory are addressed. Finally, we report a case study to demonstrate the efficacy of our method.",2010,0,
3863,Unusual capacitance emission transients in CIGS caused by large defect entropy changes,"Capacitance transient data from bias-pulse experiments on CdS/CIGS solar cells show an unusual behavior at high temperatures. Above 350 K, a minority carrier trap with a larger activation energy than a majority carrier trap emits faster than the lower activation-energy minority trap. A simple enthalpy model for trap emission cannot explain this counterintuitive behavior, but the more complete Gibbs free energy model that includes entropy can explain it. We show that entropy plays a major role in carrier emission from traps in CIGS.",2005,0,
3864,Issues insufficiently resolved in Century 20 in the fault-tolerant distributed computing field,"As the 21st Century has just opened up, it is a fitting time to reflect on the evolution of fault-tolerant distributed computing technology that occurred in the last century. The author's view of that evolution is sketched in this paper, with emphasis on the major issues that were insufficiently resolved in the 20th Century. Such issues are naturally among what the author believes to be the prime subjects that need to be addressed in this decade by the research community. A substantial part of this paper deals with the issues that need to be resolved to advance the real-time fault-tolerant distributed computing branch into a mature practicing field",2000,0,
3865,Cycle error correction in asynchronous clock modeling for cycle-based simulation,"As the complexity of SoCs increases, hardware/software co-verification becomes an important part of system verification. C-level cycle-based simulation could be an efficient methodology for system verification because of its fast simulation speed. However, cycle-based simulation has a limitation in using asynchronous clocks, which causes inherent cycle errors. In order to reuse the output of a C-level cycle-based simulation for the verification of a lower-level model, the C-level model should be cycle-accurate with respect to the lower-level model. In this paper, a cycle error correction technique is presented for two asynchronous clock models. An example design is devised to show the effectiveness of the proposed method. Our experimental results show that the fast speed of cycle-based simulation can be fully exploited without sacrificing cycle accuracy",2006,0,
3866,System-level modeling and validation increase design productivity and save errors,"As the complexity of system-on-chip (SoC) devices rises to include scores, in some cases hundreds, of distinct blocks, system validation becomes a critical concern. A variety of techniques have emerged to help designers verify that individual blocks of a device meet performance specifications. But what about functional intent? Are performance goals achieved?
In this paper, we make the case for high-level system validation before RTL implementation, and present a flow to approach this increasingly essential task.",2005,0,
3867,Workshop: defect detection in distributed software development,"As the complexity of today's products increases, single projects, single departments or even single companies can no longer develop total products, leading to concurrent and distributed development. Today and worldwide, industries are facing complex product development and its vast array of associated problems relating to project organization, project control and product quality. Many processes will become distributed as well. The defect detection process, so important for measuring and eventually achieving product quality, is typically one of the first to experience problems caused by the distributed nature of the project. The distribution of defect detection activities over several parties introduces risks like the inadequate review of work products, the occurrence of ""blind spots"" with respect to test coverage, or the over-testing of components. Lifecycle-wide coordination of defect detection is therefore needed to ensure the effectiveness and efficiency of defect detection activities.",2003,0,
3868,Soft errors in advanced computer systems,"As the dimensions and operating voltages of computer electronics shrink to satisfy consumers' insatiable demand for higher density, greater functionality, and lower power consumption, sensitivity to radiation increases dramatically. In terrestrial applications, the predominant radiation issue is the soft error, whereby a single radiation event causes a data bit stored in a device to be corrupted until new data is written to that device. This article comprehensively analyzes soft-error sensitivity in modern systems and shows it to be application dependent. The discussion covers ground-level radiation mechanisms that have the most serious impact on circuit operation, along with the effect of technology scaling on soft-error rates in memory and logic.",2005,0,
3869,Practical experience with fault location in MV cable network,"As the major cause of interruption time is a fault in the MV network, the Dutch grid operator Alliander puts effort into faster location of faults in these networks, which consist entirely of buried cables. After several pilots, a fault location system was implemented in 10 substations equipped with the SASensor protection and control system. The SASensor system uses a high sampling rate and collects event recording data as soon as a set point for the current is exceeded. Therefore, events like inrush currents and transients due to self-extinguishing single-phase faults are also recorded. The data processing routine in the substation is able to distinguish between a real fault and other events. The process is fully automated. All events are shown on a PC screen and, in case of a fault, printed on paper. The system performs according to the requirements on accuracy and processing time.",2009,0,
3870,"Cache and memory error detection, correction, and reduction techniques for terrestrial servers and workstations","As the size of the SRAM cache and DRAM memory grows in servers and workstations, cosmic-ray errors are becoming a major concern for system designers and end users. Several techniques exist to detect and mitigate the occurrence of cosmic-ray upsets, such as error detection, error correction, cache scrubbing, and array interleaving.
This paper covers the tradeoffs of these techniques in terms of area, power, and performance penalties versus increased reliability. In most system applications, a combination of several techniques is required to meet the necessary reliability and data-integrity targets.",2005,0,
3871,Algorithm-based checkpoint-free fault tolerance for parallel matrix computations on volatile resources,"As the size of today's high performance computers increases from hundreds, to thousands, and even tens of thousands of processors, node failures in these computers are becoming frequent events. Although checkpoint/rollback-recovery is the typical technique to tolerate such failures, it often introduces a considerable overhead. Algorithm-based fault tolerance is a very cost-effective method to incorporate fault tolerance into matrix computations. However, previous algorithm-based fault tolerance methods for matrix computations are often derived using algorithms that are seldom used in the practice of today's high performance matrix computations, and they have mostly focused on platforms where failed processors produce incorrect calculations. To fill this gap, this paper extends existing algorithm-based fault tolerance to the volatile computing platform, where a failed processor stops working, and applies it to scalable high performance matrix computations with two-dimensional block-cyclic data distribution. We show the practicality of this technique by applying it to the ScaLAPACK/PBLAS matrix-matrix multiplication kernel. Experimental results demonstrate that the proposed approach is able to survive process failures with a very low performance overhead",2006,0,
3872,An Evaluation of Two Bug Pattern Tools for Java,"Automated static analysis is a promising technique to detect defects in software. However, although considerable effort has been spent on developing sophisticated detection capabilities, their effectiveness and efficiency have not been treated in equal detail. This paper presents the results of two industrial case studies in which two bug pattern tools for Java are applied and evaluated. First, the economic implications of the tools are analysed. It is estimated that only 3-4 potential field defects need to be detected for the tools to be cost-efficient. Second, the capabilities of detecting field defects are investigated. No field defects have been found that could have been detected by the tools. Third, the identification of fault-prone classes based on the results of such tools is investigated and found to be possible. Finally, methodological consequences are derived from the results and experiences in order to improve the use of bug pattern tools in practice.",2008,0,
3873,Fabric defects detecting and rank scoring based on Fisher criterion discrimination,"Automatic texture defect detection is highly important for many fields of visual inspection. This paper studies the application of advanced computer image processing techniques for solving the problem of automated defect detection for textile fabrics. The approach is used for the quality inspection of local defects embedded in homogeneous textured surfaces. First of all, the size of the basic texture units of the fabric image is acquired by calculating the autocorrelation function in the weft and warp directions. Then the sizes of the basic texture units are taken as the criterion to segment the fabric image. While scanning the fabric texture image, the basic units are segmented.
At the same time, the Fisher criterion discriminator is used to assign each unit to a class. Afterwards, the fabric defects are measured according to the relationship between the pixel indices and the scale of the image, and graded by comparison with the American Four-Point System. Experiments with real fabric image data show that the method is effective.",2009,0, 3874,Autonomic Fault-Management and resilience from the perspective of the network operation personnel,"Autonomic networks are an emerging technology which promises to reduce the complexity of human-driven network management processes and enable a variety of so-called self-* features such as self-configuration, self-optimization, etc., inside the network devices and the network as a whole. Autonomic behaviors are widely understood as a control loop implemented by an autonomic entity that automates management processes and controls diverse aspects of a set of resources. Though automation is necessary and achievable, autonomic decision-making elements of the network cannot fully perform decisions on every task of the network without requiring some degree of a human-in-the-loop in some of the decisions. From the operator's perspective, controllability of the control loop and decision notification from the autonomic network is a vital issue that needs to be addressed. In this paper we present our considerations on how an Autonomic Fault-Management control loop (detect an incident - find the root cause behind it - remove the root cause) can be controlled by the network operation personnel.",2010,0, 3875,Fault Tolerant Planning for Critical Robots,"Autonomous robots offer alluring perspectives in numerous application domains: space rovers, satellites, medical assistants, tour guides, etc. However, a severe lack of trust in their dependability greatly reduces their possible usage. In particular, autonomous systems make extensive use of decisional mechanisms that are able to take complex and adaptive decisions, but are very hard to validate. This paper proposes a fault tolerance approach for decisional planning components, which are almost mandatory in complex autonomous systems. The proposed mechanisms focus on development faults in planning models and heuristics, through the use of diversification. The paper presents an implementation of these mechanisms on an existing autonomous robot architecture, and evaluates their impact on performance and reliability through the use of fault injection.",2007,0, 3876,Implementation of Fault Tolerant Mechanism in the BACnet/IP Protocol,"BACnet (Building Automation and Control Network) is the international standard communication protocol designed specifically for building automation, control and management. BACnet provides the BACnet/IP protocol for interfacing BACnet to the Internet. In the BACnet/IP protocol, the device named BBMD (BACnet Broadcast Management Device) plays the role of maintaining the communication link between field devices in a building and remote controllers outside the building. When a fault occurs in the BBMD, field devices within a building cannot communicate with control devices outside the building, and remote control of building facilities cannot be guaranteed.
This study introduces an implementation method for a fault tolerant mechanism in the BACnet/IP protocol.",2007,0, 3877,A novel 20Hz power injection protection scheme for stator ground fault of pumped storage generator,"Based on the analysis of the start-up of a pumped storage generator under motoring conditions, it is found that the conventional 20 Hz injection component is influenced by the low frequency component when ground faults take place in the stator windings of a generator started by a static frequency converter (SFC). The conventional relay is simply blocked to avoid mal-operation during the 20 Hz working condition. To overcome this disadvantage, a novel scheme is proposed to improve the grounding-resistance (R) calculation accuracy and enhance the ability to limit the impact of the 20 Hz component from the generator. The simulation results proved the effectiveness and practicability of the scheme.",2008,0, 3878,Application of BPNN and CBR on Fault Diagnosis for Missile Electronic Command System,"Given the complexity of the mobile missile electronic command system (MMECS), applying a single method to system fault diagnosis can hardly achieve satisfactory results. A fault diagnosis system combining the BP neural network (BPNN) method and the case-based reasoning (CBR) method is presented. The framework of the mixed neural network and the case representation are put forward. The problem of redundant reasoning is solved; moreover, the system can interpret its diagnoses by providing the successful case. Finally, with the example of a voice interrupt, the system's correctness and validity were proved. It is shown that the system is suitable for both operator training and online decision making for the army",2006,0, 3879,Reconstruction of time-varying fields in wireless sensor networks using shift-invariant spaces: Iterative algorithms and impact of sensor localization errors,"Based on the concept of hybrid shift-invariant spaces, we develop a distributed protocol for the reconstruction of time-varying physical fields in wireless sensor networks. The localized nature of these spaces allows for a clustered network architecture that leads to low communication overhead. Capitalizing on the sparsity of the reconstruction matrix, we propose an iterative reconstruction algorithm whose complexity per time-slot is linear in the number of sensors. We furthermore analyse the impact of sensor localization errors on the mean square error of the reconstructed field and provide numerical simulations illustrating our results.",2010,0, 3880,The improved magnetic shield type high Tc superconducting fault current limiter and the transient characteristic simulation,"Based on the equivalent circuit of a power transformer, considering both the nonlinear magnetizing characteristic of an iron core and the nonlinear resistance of a superconducting secondary winding (V-I characteristic), the simulation of a magnetic shield type high Tc superconducting FCL was carried out using the Simulink module of Matlab. The transient response, especially the primary short circuit current, is obtained. The influence of the saturation of the iron core and the specified structure of the magnetic circuit are analyzed. Based on the simulation, an improved magnetic shield type HTS FCL, a cross-section adjustable power transformer with a multi-turn superconducting secondary winding, is proposed.
Its advantages include stable apparent impedance during the occurrence of a fault, shorter recovery time, and lower volumetric energy dissipation during the controlled S-N transition.",2003,0, 3881,A QoS-aware fault tolerant middleware for dependable service composition,"Based on the framework of service-oriented architecture (SOA), complex distributed systems can be dynamically and automatically composed by integrating distributed Web services provided by different organizations, making dependability of the distributed SOA systems a big challenge. In this paper, we propose a QoS-aware fault tolerant middleware to attack this critical problem. Our middleware includes a user-collaborated QoS model, various fault tolerance strategies, and a context-aware algorithm for determining the optimal fault tolerance strategy for both stateless and stateful Web services. The benefits of the proposed middleware are demonstrated by experiments, and the performance of the optimal fault tolerance strategy selection algorithm is investigated extensively. As illustrated by the experimental results, fault tolerance for the distributed SOA systems can be made efficient, effective and optimized by the proposed middleware.",2009,0, 3882,Meshing Simulation and Experimental Analysis of Transmission Error for Modified Spiral Bevel Gear,"Based on the meshing theory of spiral bevel gears, the transmission error (TE) curves of two different pairs of spiral bevel gears (one pair of modified spiral bevel gears, the other of general spiral bevel gears) are obtained. According to the characteristics of the theoretical TE curve, a spiral bevel gear transmission error measurement system is designed, and comparative experiments for the two pairs of gears were carried out on the system under different loads. The real TE of the two pairs of gears was acquired. The experimental results show that the modified spiral bevel gear pair is better than the general one in dynamic behavior, which verified that the new method of modifying the tooth face with modified tools is useful and effective in reducing gear vibration and noise.",2010,0, 3883,Fault-tolerant strategy based on dynamic model matching for dual-redundant computer in the space robot,"Based on the requirements of space robot control systems such as high reliability, low power consumption, small size and real-time operation, this paper presents a dual-redundant fault-tolerant strategy for a space robot controller. According to the peculiarities of the space robot control system and its workflow, this strategy is based on the fault-tolerant hardware architecture and the linear running model of space robot software. In this architecture, the two computers communicate with each other via a 485 bus and a heartbeat line, and share field data through a CAN bus. The adopted criteria include the heartbeat signal, the pattern matching, the dynamic synchronization data and the results returned from the other device. The fault-tolerant strategy with redundancy criteria is formulated in this paper.
This design not only ensures the timing performance of switching between the two computers, but also reduces the communication load and improves the reliability and effectiveness of the dual-redundant fault-tolerant system.",2008,0, 3884,Polynomial intensity correction for multimodal image registration,"Based on the standard sum of squared differences (SSD) criterion, we develop a fast and robust multimodal image registration algorithm that incorporates a polynomial model to capture the complex relationship between the intensity values of two images. The resulting energy function can be efficiently optimized using the Gauss-Newton method in either joint or constrained fashion. Comparisons using simulated data reveal that our method has a similar capture range to that of the mutual information. Furthermore, we demonstrate accurate MR-PET registration results for images with abnormal structures.",2009,0, 3885,Simulation analysis of error correction method of headlamp detection based on driving direction,"At present, headlamp detection error is eliminated by straightening the vehicle under test; that is, a straightening device aligns the vehicle body axis with the headlight detector at the required distance. However, the headlamp illumination direction should follow the vehicle's driving direction, which is the direction perpendicular to the rear axle, so the detection benchmark for the headlamp should be the vehicle's driving direction. This paper proposes a new method of error correction for headlamp detection based on identification of the axle placement bias angle. Firstly, this paper establishes a tire model, carries out research on modeling and simulating the tire bias angle, and then performs an error analysis of the bias angle identification. Simulation analysis shows that when influence factors such as tire size, shape, and convex volume are varied, the device still achieves high accuracy, so the method is feasible. This paper also establishes an experimental system for detecting the axle bias angle, and performs testing and error analysis on the system. The results verify the simulation conclusions.",2010,0, 3886,The application of FMEA in defect reduction for the spindle motor assembly process for hard disk drives,"At the present time, the Failure Mode and Effect Analysis (FMEA) technique is frequently used in the manufacturing industry to deal with undesirable situations. These can occur throughout the various phases of the product life cycle. The objectives of this research are to analyze and identify potential problems in the spindle motor assembly process for 2.5-inch hard disk drives. The process often encounters the problem of hub flange height failure. The result of this research may reveal the failures within five steps of the focusing process. The Risk Priority Number (RPN) of each failure may now be calculated. As a result of this, the five highest RPNs may be selected in order to take corrective action. It could be concluded that, by applying the FMEA technique, all risks in the process have been pinpointed and all recommended corrective actions have been taken. As a final step, the process can be improved by a two-factor factorial experiment with the significance level at 0.05 in order to determine the optimum setup conditions.
By implementing FMEA, the defect rate could be reduced from 6,294.36 DPPM to 3,788.27 DPPM.",2008,0, 3887,Fault diagnosis of certain missile based on dynamic test,"The ATS (automatic test system) is one of the major military maintenance equipment items for a certain missile. However, the available missile ATS basically still follows the traditional static test method and has some shortcomings. This paper proposes fault diagnosis based on dynamic testing to upgrade the missile ATS. The missile's dynamic test is realized by simulating the missile's flight course. According to the missile's characteristics displayed at each stage of real flight (boost, cruise, fall, and swoop), we issue control commands to the course channel, pitching channel and incline channel at the right moments respectively. At the same time, the missile's pose change is simulated by controlling the turntable's motion. In this way we can simulate the missile's flight and attack course realistically. A novel residual error energy characteristic vector method based on WPT (wavelet packet transform) is proposed and used to extract fault features. The fault residual error energy characteristic vector is then fed into a BP neural network to diagnose faults. Experiments prove that this method is effective.",2007,0, 3888,Automated Addition of Fault-Tolerance to SCR Toolset: A Case Study,"Automated addition of fault-tolerance to existing programs is highly desirable, as it allows the designer to focus on the system behavior in the absence of faults and leave the fault-tolerance aspect to automated techniques that guarantee correctness by construction. Automated addition of fault-tolerance is expected to be more successful if it is done under the hood, i.e., where the designer can continue to utilize existing tools and the addition of fault-tolerance is orthogonal to the tools that they use. This will reduce the learning curve for adding fault-tolerance as well as enable the addition of fault-tolerance across different design tools. With this motivation, in this paper, we focus on automated addition of fault-tolerance to the SCR tools. We illustrate our approach using two case studies: an Altitude Switch Controller and an Automobile Cruise Controller.",2008,0, 3889,Variation-tolerant hierarchical voltage monitoring circuit for soft error detection,"As device feature size continues to scale down to the nanometer regime, the decreasing critical charge fundamentally reduces noise margins of devices and in turn increases the susceptibility of ICs to external noise sources such as particle strikes. While protection techniques for memory such as ECC are mature and effective, protections for logic errors remain imperfect. Full-blown redundancy solutions for microprocessors such as mirrored cores and triple-modular redundancy incur significant overhead and are clearly limited to the niche market of mission-critical servers. The fundamental inefficiency of such redundancy lies in the repetition of all operations to detect the discrepancy caused by events much rarer than cycle-to-cycle activities. Clearly, for the vast majority of general-purpose systems, a detection mechanism that has low standby energy consumption is called for. In this paper, we propose a circuit-level solution to detect errors by monitoring the supply rail disturbance caused by a particle strike. Combined with checkpointing and rollback support, such a circuit can provide a high level of protection against particle-strike induced soft errors.
At 17%, the power overhead of the design is reasonable and much lower than that of prior art. The design is also tolerant to process, voltage, and temperature (PVT) variations.",2009,0, 3890,FPGA defect tolerance: impact of granularity,"As device sizes shrink, FPGAs are increasingly prone to manufacturing defects. The ability to tolerate multiple defects is anticipated to be very important at 45nm and beyond. One possible approach to this growing problem is to add redundancy to create a defect-tolerant FPGA architecture. Using area, delay and yield metrics, this paper compares two redundancy strategies: a coarse-grain approach using spare rows and columns and a fine-grain approach using spare wires. For low defect levels and large array sizes, the coarse-grain approach offers a lower area overhead, but it is relatively intolerant to an increase in defect count. In contrast, the fine-grain approach has a fixed overhead of up to 50%, but the architecture can tolerate an increasing number of defects as array size grows. To achieve a similar level of yield recovery, the coarse-grain approach requires an area overhead in excess of 100%",2005,0, 3891,Modeling and Simulation of Multi-operation Microcode-Based Built-In Self Test for Memory Faults,"As on-chip embedded memory area increases and memory density grows, the problem of faults is growing exponentially. This necessitates the definition of novel test algorithms which can detect these new faults. March tests belong to the newer line of testing algorithms which can detect these exponentially escalating faults. Most of these new March algorithms consist of as many as five or six operations per March element. This work presents an architecture which can implement these new March tests with the number of operations per element chosen according to the growing needs of embedded memory testing.",2010,0, 3892,Estimating Error Rate in Defective Logic Using Signature Analysis,"As feature size approaches molecular dimensions and the number of devices per chip reaches astronomical values, VLSI manufacturing yield significantly decreases. This motivates interest in new computing models. One such model is called error tolerance. Classically, during the postmanufacturing test process, chips are classified as being bad (defective) or good. The main premise in error-tolerant computing is that some bad chips that fail classical go/no-go tests and do indeed occasionally produce erroneous results actually provide acceptable performance in some applications. Thus, new test techniques are needed to classify bad chips into categories based upon their degree of acceptability with respect to predetermined applications. One classification criterion is error rate. In this paper, we first describe a simple test structure that is a minor extension to current scan-test and built-in self-test structures and that can be used to estimate the error rate of a circuit. We then address three theoretical issues. First, we develop an elegant mathematical model that describes the key parameters associated with this test process and incorporates bounds on the error in estimating error rate and the level of confidence in this estimate. Next, we present an efficient testing procedure for estimating the error rate of a circuit under test. Finally, we address the problem of assigning bad chips to bins based on their error rate.
We show that this can be done in an efficient, hence cost-effective, way and discuss the quality of our results in terms of such concepts as increased effective yield, yield loss, and test escape",2007,0, 3893,Error Correction of Noisy Block Cipher Using Cipher and Plaintext Characteristics,"Contemporary proven cryptographic algorithms, like the advanced encryption standard (AES), are used in many secure data storage systems. Cipher data, when written or read, might be subject to noise. Classical error detection and correction methods are not suitable for encrypted data. In this paper, error detection and correction is performed at the receiver end, without any changes to the encryption algorithm. One of the properties of encrypted information is that all encrypted blocks have a minimum Hamming distance from each other. This property is exploited to obtain the exact correct block. When error correction based on the encrypted data cannot be performed, natural language properties of the plaintext data are used to eliminate noise. The plaintext blocks surrounding the noisy plaintext block are used to generate possible candidates. In case a unique solution is not achieved, n-gram properties of the plaintext language are used to rank the possibilities and promote the best fit.",2009,0, 3894,New tools and methodology for advanced parametric and defect structure test,"Continuing scaling trends in semiconductor technology, as well as the test requirements of new technologies being incorporated with mainstream silicon integrated circuits, have increased the complexity of parametric and defect structure testing. New testers are required which can drastically improve the throughput of parametric test, as well as efficiently test new array based process diagnostic structures. Addressing these needs requires merging the traditionally separate functions of digital and parametric test equipment. We describe the development of a new hybrid test system, which combines the features of parametric and digital testers, and in addition introduces a high degree of parallelism in its parametric test functions. The test system was developed for high throughput inline test (parallel test) of defect structures, semiconductor parametric macros, and advanced array based process monitors down to pA current levels, as well as traditional all digital yield macros, such as SRAMs.",2010,0, 3895,Fault Tolerant Network Routing through Software Overlays for Intelligent Power Grids,"Control decisions of intelligent devices in critical infrastructure can have a significant impact on human life and the environment. Ensuring that the appropriate data is available is crucial for making informed decisions. Such considerations are becoming increasingly important in today's cyber-physical systems that combine computational decision making on the cyber side with physical control on the device side. The job of ensuring the timely arrival of data falls onto the network that connects these intelligent devices. This network needs to be fault tolerant. When nodes, devices or communication links fail along a default route of a message from A to B, the underlying hardware and software layers should ensure that this message will actually be delivered as long as alternative routes exist. Existence and discovery of multi-route pathways is essential in ensuring delivery of critical data. In this work, we present methods of developing network topologies of smart devices that will enable multi-route discovery in an intelligent power grid.
This will be accomplished through the utilization of software overlays that (1) maintain a digital structure for the physical network and (2) identify new routes in the case of faults.",2010,0, 3896,A real-time PES supporting runtime state restoration after transient hardware-faults,"Controlling safety-critical real-time applications that cannot immediately be transferred to a safe state requires highly reliable programmable electronic systems (PESs). This demand for fault-tolerance is usually satisfied by applying redundant processing structures inside each PES and, additionally, configuring multiple PESs redundantly. Instead of minimising the failure probability of single PESs, it is also desirable to provide a redundant configuration of PESs with the capability to re-start single units at runtime. This requires copying a PES's internal state at runtime, since a re-started unit must equalise its internal state with that of its redundant counterparts before the redundant processing can be rejoined. As a result, redundancy attrition due to transient faults is prevented, since failed channels can be brought back on line. This article states the problems concerned with runtime state restoration of real-time systems, discusses the advantages and disadvantages of existing techniques and introduces a hardware-supported state restoration concept",2006,0, 3897,Techniques and algorithms for fault grading of FPGA interconnect test configurations,"Conventional fault simulation techniques for field programmable gate arrays (FPGAs) are very complicated and time consuming. The alternative, FPGA fault emulation technique, is incomplete and can be used only after the FPGA chip is manufactured. In this paper, we present efficient algorithms for computing the fault coverage of a given FPGA test configuration. The faults considered are opens and shorts in FPGA interconnects. The presented technique is able to report all detectable and undetectable faults and, compared with conventional methods, is orders of magnitude faster.",2004,0, 3898,Fault grading FPGA interconnect test configurations,"Conventional fault simulation techniques for FPGAs are very complicated and time consuming. The other alternative, the FPGA fault emulation technique, is incomplete, and can be used only after the FPGA chip is manufactured. In this paper, we present efficient algorithms for computing the fault coverage of a given FPGA test configuration. The faults considered are opens and shorts in FPGA interconnects. Compared to conventional methods, our technique is orders of magnitude faster, while being able to report all detectable and undetectable faults.",2002,0, 3899,An approach of fault detection based on multi-mode,"Conventional multi-scale principal component analysis (MSPCA) only detects faults, but it cannot detect fault types. To address these problems, a multi-mode fault detection method that incorporates MSPCA into an adaptive resonance theory (ART) neural network is presented. Firstly, this method applies a wavelet transform to the sample data, and principal component analysis is used to analyze the data at each scale. Then ART is used to classify the reconstructed data. The method can detect faults effectively, and ART2 can easily classify faults using wavelet denoising, successfully separating the faults in the system. Finally, multi-mode fault detection is developed for an autocorrelated system application through computer simulation experiments.
Theory and simulation experiments show that this method has broad application prospects.",2008,0, 3900,Provably all-convex optimal minimum-error convex fitting algorithm using linear programming,"Convexity is a key property for achieving global optimality in mathematical programming. Previous convex fitting works can only guarantee convexity at table entries or sampled points using semi-definite programming (SDP). In this work, we demonstrate that convexity can be guaranteed not only at listed tabular entries but also over the whole functional domain, with minimum perturbation, using simple linear programming (LP). Extensive experimental results on an industrial cell library demonstrate that our method can reach global convexity 9X faster than the SDP approach. Its application to circuit tuning is also presented.",2010,0, 3901,Software defect rediscoveries: a discrete lognormal model,"Corrective software maintenance, which consists of fixing defects that escape detection and manifest as field failures, is expensive, yet vital to ensuring customer satisfaction. To allocate and use maintenance resources effectively, it is necessary to understand the defect occurrence phenomenon in the field. A preliminary analysis of the defect occurrence data suggests that software defects vary in rate from corner cases, which may occur only once, to the pervasive, which occur many times. This suggests that the distribution of occurrence counts is heavy-tailed. Theoretical reasons and mounting evidence indicate that the distribution of defect occurrence rates is lognormal. We hypothesize that the distribution of occurrence rates is lognormal, and further hypothesize that the distribution of the number of occurrence counts follows the discrete-lognormal, also known as the Poisson-lognormal. We confirm that hypothesis, using a variety of data from widely used networking software. We also discuss how the discrete-lognormal applies to subsets of defects, where subsets are formed according to year of occurrence, products, orthogonal defect classification (ODC) age, and severities. We use straightforward interpretations of the parameters of the lognormal to understand how values differ for different types of defects and for different software characteristics",2005,0, 3902,Networked vehicles for automated fault detection,"Creating fault detection software for complex mechatronic systems (e.g. modern vehicles) is costly both in terms of engineer time and hardware resources. With the availability of wireless communication in vehicles, information can be transmitted from vehicles to allow historical or fleet comparisons. New networked applications can be created that, e.g., monitor whether the behavior of a certain system in a vehicle deviates from the system behavior observed in a fleet. This allows a new approach to fault detection that can help reduce development costs of fault detection software and enable vehicle-individual service planning. The COSMO (consensus self-organized modeling) methodology described in this paper creates a compact representation of the data observed for a subsystem or component in a vehicle; this representation can be sent to a server in a back office and compared to similar representations for other vehicles. The back-office server can collect representations from a single vehicle over time or from a fleet of vehicles to define a norm of the vehicle condition. The vehicle condition can then be monitored, looking for deviations from the norm.
The method is demonstrated for measurements made on a real truck driven in varied conditions with ten different generated faults. The proposed method is able to detect all cases without prior information on what a fault looks like or which signals to use.",2009,0, 3903,Estimation of error in large area underwater photomosaics using vehicle navigation data,"Creating geometrically accurate photomosaics of underwater sites using images collected from an AUV or ROV is a difficult task due to dimensional errors which grow as a function of 3D image distortion and the mosaicking process. Although photomosaics are accurate locally, their utility for accurately representing a large survey area is jeopardized by this error growth. Evaluating the error in a mosaic is the first step in creating globally accurate photomosaics of an unstructured environment with bounded error. Using vehicle navigation data and sensor offsets, it is possible to estimate the error present in large area photomosaics independent of the mosaic construction method. This paper presents a study of the error sources and an estimation of the error growth across an underwater photomosaic. World coordinate locations of the individual image centers are projected into the image coordinate space of the mosaic. The spatial error is then shown as the divergence between the position of the corresponding image centers in the mosaic and the positions determined by the navigation projection. Accurate world coordinate system position estimates of the image centers are obtained from the on-board navigation sensors and the EXACT acoustic navigation system. Several large area mosaics using imagery collected by the JASON ROV are shown as examples",2001,0, 3904,Test of data retention faults in CMOS SRAMs using special DFT circuitries,"Data retention faults in CMOS SRAMs are tested by sensing the voltage at the data bus lines. Sensing the voltage at one of the data bus lines with proper DFT (design for testability) reading circuitry allows the fault-free memory cells to be discriminated from the defective cell(s). Two required DFT circuitries for applying this technique are proposed. The cost of the proposed approach in terms of area, test time and performance degradation is analysed. A CMOS memory array with the proposed DFT circuitries has been designed and fabricated. The experimental results show the feasibility of this technique.",2004,0, 3905,Integrated error management for media-on-demand services,"Data servers for multimedia applications like news-on-demand represent a severe bottleneck, because a potentially (very) high number of users concurrently retrieve data with high data rates. In the Intermediate Storage Node Concept (INSTANCE) project, we develop a new architecture for media-on-demand servers that maximizes the number of concurrent clients a single server can support. Traditional bottlenecks, like copy operations, multiple copies of the same data element in main memory, and checksum calculation in communication protocols, are avoided by applying three orthogonal techniques: zero-copy-one-copy memory architecture, network level framing, and integrated error management. We describe the design, implementation, and evaluation of our integrated error management mechanism.
Our results show that the reuse of parity information from a RAID system as forward error correction information in the transport protocol reduces the server workload and enables smooth playout at the client",2001,0, 3906,Portable faultloads based on operator faults for DBMS dependability benchmarking,"Databases play a central role in the information infrastructure of most organizations. The characterization of DBMS (database management system) dependability is therefore of utmost importance. Existing performance benchmarks for the transactional and database areas include two major components: a workload and a set of performance measures. The definition of a benchmark to characterize dependability needs a new component - the faultload. Operator faults represent a major cause of failures in large DBMSs. This paper proposes three approaches for the definition of portable faultloads based on operator faults to benchmark the dependability of DBMSs and shows a benchmarking example of a commercial (Oracle) and an open source (PostgreSQL) database.",2004,0, 3907,Waveform analysis of the bridge type SFCL during load changing and fault time,"The DC reactor type superconducting fault current limiter (SFCL) has drawn the interest of researchers in developing such a device, and more research work is being carried out in order to make it practically feasible. We point out one issue with such a device during load changing that has not yet been properly examined. It is very difficult to introduce a DC bias voltage to the reactor coil of the bridge type SFCL, and some researchers are developing such a device without using a DC bias current. In such a case, a voltage drop occurs at the load terminal during load increases, caused by the DC reactor's inductance. Using the Electro-Magnetic Transients in DC systems (EMTDC) electric network simulation software, we carried out an analysis of the first few half cycles of the voltage and current waveforms after the load is increased. We also performed the same analysis for fault conditions. The peak value of the waveforms is considered in calculating the voltage drop at the load terminal during the load changing time. The analysis can be used in selecting an appropriate inductance value for designing such an SFCL.",2003,0, 3908,On techniques for handling soft errors in digital circuits,"Dealing with soft errors due to particle strikes is the next major challenge in implementing digital systems. This study thoroughly investigates the effect of device size on circuit soft error rate and identifies methods to reduce the soft error rate in combinational circuits. In particular, we propose three novel methods that upsize only selected gates and/or transistor networks. In order to obtain the most appropriate technique for soft error rate reduction in small technology node circuits, we conduct experiments and compare the results for several upsizing techniques including all gates, selected gates and transistor networks based on their fault sensitivities, and parallel networks with soft error rate saturation consideration. Consequently, it is discovered that some upsizing scenarios yield large improvements whereas others do not or even increase the soft error rate. The use of a fault sensitivity analysis approach with parallel transistor network upsizing based on the contribution of each sensitive gate can reasonably reduce overall circuit sensitivity.
Experimental results show an average reduction in soft error rate of about 20% with a very small area overhead of 2% for benchmark circuits using our technique.",2010,0, 3909,"Effectiveness and limitations of various software techniques for ""soft error"" detection: a comparative study",Deals with different software-based strategies allowing the on-line detection of bit flip errors arising in microprocessor-based digital architectures as a consequence of interaction with radiation. Fault injection experiments demonstrate the detection capabilities and the limitations of each of the studied techniques,2001,0, 3910,Error detection support in a cellular modeling end-user programming environment,"Debugging tools for end-user programming have largely concentrated on generic and general-domain applications, such as spreadsheets. Complex, domain-specific tasks such as cellular modeling, however, introduce new types of errors, non-existent in a general domain. These errors may be generated at typographical, semantic, modeling, or cognitive levels. Our work explores requirements and potential for error detection and debugging systems within an end-user programming (EUP) environment geared toward modeling cellular biology.",2002,0, 3911,Attribute Selection in Software Engineering Datasets for Detecting Fault Modules,"Decision making has traditionally been based on managers' experience. At present, there are a number of software engineering (SE) repositories, and furthermore, automated data collection tools allow managers to collect large amounts of information, not without associated problems. On the one hand, such a large amount of information can overload project managers. On the other hand, a problem found in generic project databases, where the data is collected from different organizations, is the large disparity of their instances. In this paper, we characterize several software engineering databases by selecting attributes, with the final aim that project managers can have a better global vision of the data they manage. We make use of different data mining algorithms to select attributes from the different publicly available datasets (PROMISE repository), and then use different classifiers to detect faulty modules. The results show that in general, the smaller datasets maintain the prediction capability with a lower number of attributes than the original datasets.",2007,0, 3912,Case-Based Reasoning for Fault Diagnosis and Prognosis,"Case-based reasoning (CBR) is a mature technology used in help-desk customer services. To apply this technology to the diagnosis and prognosis of dynamic systems, many important factors must be considered. This paper reports on our development of a Matlab system for multi-agent fault diagnosis for predictive health monitoring. In particular, we apply this technology to the SIMULINK simulation of a dynamic chiller system. Our system can detect single component faults and also multiple component faults in the system and advise on how to fix the problem using case-based reasoning.",2006,0, 3913,Proactive Cellular Network Faults Prediction Through Mobile Intelligent Agent Technology,"Cellular network fault prediction models using mobile intelligent agents are presented in this paper.
Cellular networks are uncertain and dynamic in their behaviour, and therefore we use different artificial intelligence techniques to develop platform-independent, autonomous, reasoning and robust agents that can report on any unforeseen anomaly within the cellular network service provider. The specific design and implementation is done using the Java Agent Development Framework (JADE). The partial results obtained from the experiments conducted are presented and discussed in this paper.",2007,0, 3914,Pitfalls of hierarchical fault simulation,"Certain circuit structures, such as self-loops, asynchronous resets, and clock division, may not be visible in a hierarchical (mixed) simulation system. Since the simulator does not know about their existence, it cannot cope with them like it normally would in a flat circuit. If this leads to a logic-simulation problem, users can usually discover them easily during the validation process. However, if it only causes fault-simulation inaccuracy, it is hard to find the problem. In this paper, we show examples illustrating their existence. The examples negate an assumption that has been used in many papers on mixed-mode simulation. The examples have been abstracted from real industrial designs of microprocessors.",2004,0, 3915,Analysis of pervasive multiple-component defects in a large software system,"Certain software defects require corrective changes repeatedly in a few components of the system. One type of such defects spans multiple components of the system, and we call such defects pervasive multiple-component defects (PMCDs). In this paper, we describe an empirical study of six releases of a large legacy software system (of approx. size 20 million physical lines of code) to analyze PMCDs with respect to: (1) the complexity of fixing such defects and (2) the persistence of defect-prone components across phases and releases. The overall hypothesis in this study is that PMCDs inflict a greater negative impact than do other defects on defect-correction efficacy. Our findings show that the average number of changes required for fixing PMCDs is 20-30 times as much as the average for all defects. Also, over 80% of PMCD-containing defect-prone components still remain defect-prone in successive phases or releases. These findings support the overall hypothesis strongly. We compare our results, where possible, to those of other researchers and discuss the implications for maintenance processes and tools.",2009,0, 3916,Current practice and a direction forward in checkpoint/restart implementations for fault tolerance,"Checkpoint/restart is a general idea for which particular implementations enable various functionalities in computer systems, including process migration, gang scheduling, hibernation, and fault tolerance. For fault tolerance, in current practice, implementations can be at user-level or system-level. User-level implementations are relatively easy to implement and portable, but suffer from a lack of transparency, flexibility, and efficiency, and in particular are unsuitable for the autonomic (self-managing) computing systems envisioned as the next revolutionary development in system management. In contrast, a system-level implementation can exhibit all of these desirable features, at the cost of a more sophisticated implementation, and is seen as an essential mechanism for the next generation of fault tolerant - and ultimately autonomic - large-scale computing systems.
Linux is becoming the operating system of choice for the largest-scale machines, but development of system-level checkpoint/restart mechanisms for Linux is still in its infancy, with all extant implementations exhibiting serious deficiencies for achieving transparent fault tolerance. This paper provides a survey of extant implementations in a natural taxonomy, highlighting their strengths and inherent weaknesses.",2005,0, 3917,Checkpointing and error recovery in a uniprocessor system with on-chip cache,The checkpointing and rollback error recovery technique used in fault-tolerant systems allows recovery from errors without a need for a global restart of computation. This paper presents two efficient and low-cost schemes to handle soft (transient) errors in a uniprocessor system with on-chip cache memory. These user-transparent schemes are implemented in hardware and require negligible hardware overhead in the designs of the processor and cache memory. The first scheme uses a write-through policy for the on-chip cache and establishes a checkpoint on each write-through; the second scheme improves on the first by including a second-level cache in the memory hierarchy. A simple mathematical model is developed and a trade-off analysis between the two schemes is presented,2001,0, 3918,Fault-tolerant parallel applications with dynamic parallel schedules,"Commodity computer clusters are often composed of hundreds of computing nodes. These generally off-the-shelf systems are not designed for high reliability. Node failures therefore drive the MTBF of such clusters to unacceptable levels. The software frameworks used for running parallel applications need to be fault-tolerant in order to ensure continued execution despite node failures. We propose an extension to the flow graph based Dynamic Parallel Schedules (DPS) development framework that allows non-trivial parallel applications to pursue their execution despite node failures. The proposed fault-tolerance mechanism relies on a set of backup threads located in the volatile storage of alternate nodes. These backup threads are kept up to date by duplication of the transmitted data objects and periodic checkpointing of thread states. In case of a failure, the current state of the threads that were on the failed node is reconstructed on the backup threads by re-executing operations. The corresponding valid re-execution order is automatically deduced from the data flow graph of the DPS application. Multiple simultaneous failures can be tolerated, provided that for each thread either the active thread or its corresponding backup thread survives. For threads that do not store a local state, an optimized mechanism eliminates the need for duplicate data object transmissions. The overhead induced by the fault tolerance mechanism consists mainly of duplicate data object transmissions that can, for compute-bound applications, be carried out in parallel with ongoing computations. The increase in execution time due to fault tolerance therefore remains relatively low. It depends on the communication-to-computation ratio and on the parallel program's efficiency.",2005,0, 3919,Fault evaluation for security-critical communication devices,"Communications devices for government or military applications must keep data secure, even when their electronic components fail. Combining information flow and risk analyses could make fault-mode evaluations for such devices more efficient and cost-effective.
Conducting high-grade information security evaluations for computer communications devices is intellectually challenging, time-consuming, costly, and error prone. We believe that our structured approach can reveal potential fault modes because it simplifies evaluating a device's logical design and physical construction. By combining information-flow and risk-analysis techniques, evaluators can use the process to produce a thorough and transparent security argument. In other work, we have applied static analysis techniques to the evaluation problem, treating a device's schematic circuitry diagram as an information flow graph. This work shows how to trace information flow in different operating modes by representing connectivity between components as being conditional on specific device states. We have also developed a way to define the security-critical region of components with particular security significance by identifying components that lie on a path from a high-security data source to a low-security sink. Finally, to make these concepts practical, we have implemented them in an interactive analysis tool that reads schematic diagrams written in the very high speed integrated circuit (VHSIC) hardware description language.",2006,0, 3920,Combined Fault Tolerance and Scheduling Techniques for Workflow Applications on Computational Grids,"Complex scientific workflows are now increasingly executed on computational grids. In addition to the challenges of managing and scheduling these workflows, reliability challenges arise because of the unreliable nature of large-scale grid infrastructure. Fault tolerance mechanisms like over-provisioning and checkpoint-recovery are used in current grid application management systems to address these reliability challenges. In this work, we propose new approaches that combine these fault tolerance techniques with existing workflow scheduling algorithms. We present a study on the effectiveness of the combined approaches by analyzing their impact on the reliability of workflow execution, workflow performance and resource usage under different reliability models, failure prediction accuracies and workflow application types.",2009,0, 3921,Design Diverse-Multiple Version Connector: A Fault Tolerant Component Based Architecture,"Component based software engineering (CBSE) is a new paradigm for constructing systems using reusable components ""as it is"". To achieve high dependability in such systems, there must be an appropriate fault tolerance mechanism in them at the architectural level. This paper presents a fault tolerant component based architecture that relies on the C2 architectural style and is based on design diverse and exception handling fault tolerance strategies. The proposed fault tolerant component architecture employs special-purpose connectors called design diverse-multiple version connectors (DD-MVC). These connectors allow design diverse n-versions of components to run in parallel. The proposed architecture has a fault tolerant connector (FTC), which detects and tolerates different kinds of errors. The proposed architecture adjusts the tradeoff between dependability and efficiency at run time and exhibits the ability to tolerate anticipated and unanticipated faults effectively.
The applicability of the proposed architecture is demonstrated with a case study.",2008,0, 3922,Using a system-level bit-error-rate model to predict on-orbit performance,"Component single-event upset (SEU) rates are used to model and predict system bit-error rate (BER) performance in trade and specification verification analyses. Simplified trade studies involving component cost, delivery-time reductions, and part substitution effects are important benefits.",2003,0, 3923,Formal semantics of models for computational engineering: a case study on dynamic fault trees,"Computational modeling tools are critical to engineering. In the absence of a sufficiently complete, mathematically precise, abstract specification of the semantics of the modeling framework supported by such a tool, rigorous validation of the framework and of models built using it is impossible; there is no sound basis for program implementation, verification or documentation; the scientific foundation of the framework remains weak; and significant conceptual errors in framework definition and implementation are likely. Yet such specifications are rarely defined. We present an approach based on the use of formal specification and denotational semantics techniques from software engineering and programming language design. To illustrate the approach, we present elements of a formal semantics for a dynamic fault tree framework that promises to aid reliability analysis. No such specification of the meaning of dynamic fault trees has been defined previously. The approach revealed important shortcomings in the previous, informal definitions of the framework, and thus led to significant improvements, suggesting that formally specifying framework semantics is critical to effective framework design",2000,0, 3924,Simulation and fault detection of three-phase induction motors,"Computer simulation of electric motor operation is particularly useful for gaining insight into the motors' dynamic behaviour and electro-mechanical interaction. A suitable model enables motor faults to be simulated and the change in corresponding parameters to be predicted without physical experimentation. This paper presents both a theoretical and experimental analysis of asymmetric stator and rotor faults in induction machines. A three-phase induction motor was simulated and operated under normal healthy operation, with one broken rotor bar, and with voltage imbalances on one phase of the supply. The results illustrate good agreement between the simulated and experimental results.",2002,0, 3925,Advanced fault management as a part of Smart Grid solution,"The concept of advanced fault management as a part of the Smart Grid solution is based on full coordination of local automation, locally controlled switchgear and relay protection, with maximum exploitation for minimizing fault duration and undelivered energy. Island operation is proposed as a possible solution for energization of important consumers in order to maximally protect such consumers from outages. All considerations are elaborated through examples. Some ideas for further improvements are also suggested.",2008,0, 3926,CIFTS: A Coordinated Infrastructure for Fault-Tolerant Systems,"Considerable work has been done on providing fault tolerance capabilities for different software components on large-scale high-end computing systems. Thus far, however, these fault-tolerant components have worked insularly and independently, and information about faults is rarely shared.
Such lack of system-wide fault tolerance is emerging as one of the biggest problems on leadership-class systems. In this paper, we propose a coordinated infrastructure, named CIFTS, that enables system software components to share fault information with each other and adapt to faults in a holistic manner. Central to the CIFTS infrastructure is a Fault Tolerance Backplane (FTB) that enables fault notification and awareness throughout the software stack, including fault-aware libraries, middleware, and applications. We present details of the CIFTS infrastructure and the interface specification that has allowed various software programs, including MPICH2, MVAPICH, Open MPI, and PVFS, to plug into the CIFTS infrastructure. Further, through a detailed evaluation we demonstrate the nonintrusive low-overhead capability of CIFTS that lets applications run with minimal performance degradation.",2009,0, 3927,A fault-tolerant directory-based cache coherence protocol for CMP architectures,"Current technology trends of increased scale of integration are pushing CMOS technology into the deep-submicron domain, enabling the creation of chips with a significantly greater number of transistors but also more prone to transient failures. Hence, computer architects will have to consider reliability as a prime concern for future chip-multiprocessor designs (CMPs). Since the interconnection network of future CMPs will use a significant portion of the chip real estate, it will be especially affected by transient failures. We propose to deal with this kind of failure at the level of the cache coherence protocol instead of ensuring the reliability of the network itself. In particular, we have extended a directory-based cache coherence protocol to ensure correct program semantics even in the presence of transient failures in the interconnection network. Additionally, we show that our proposal has virtually no impact on execution time with respect to a non fault-tolerant protocol, and entails only modest hardware and network traffic overhead.",2008,0, 3928,Numerical prediction of static form errors in the end milling of thin-walled workpiece,"Cutting deformation is the key factor influencing the precision and quality of a machined thin-walled workpiece, and keeping the maximum surface form errors under the permissible limits is the ultimate purpose of form error prediction. Cutting forces are analyzed and classified into six types according to combinations of cutting depth, a cutting-force model for thin-walled workpiece machining is developed, and a finite-element model is then presented to analyze the surface dimensional errors in peripheral milling of aerospace thin-walled workpieces. An efficient flexible iterative algorithm is proposed to calculate the deflections and the maximum surface form errors, in contrast with the rigid iterative algorithm used in the literature. Meanwhile, some key techniques such as the finite-element modeling of the tool-workpiece system; the determinant algorithm to judge instantaneous immersion boundaries between a cutter element and the workpiece; an iterative scheme for the calculation of tool-workpiece deflections considering the previously converged cutting position; and the method for calculating the position and magnitude of the maximum surface form errors are developed and presented in detail.
The presented simulation model can control the surface errors within the permissible error region without calculating the errors over the entire workpiece, hence the computing speed is greatly increased. The proposed approach is validated and proved to be efficient by comparing the obtained numerical results with the test results.",2006,0, 3929,Ion implant data log analysis for process control and fault detection,"Data mining techniques have been introduced to the semiconductor industry in recent years. In this paper, we report on progress in developing a network-based implant data log (IDL) data analysis system which can be used to access data from multiple tools and multiple recipes. The system that is currently under development can be used to generate individual control charts from any of over 100 process variables for a user-selectable process recipe or from all implants in the database. Multi-variable models are being developed to compare relationships among process variables. These models calculate predicted values of process variables, and the differences between the model and the actual variables are used as indicators of process drift, hardware malfunctions, recipe integrity, and in some cases mis-processing.",2002,0, 3930,A defect estimation approach for sequential inspection using a modified capture-recapture model,"Defect prediction is an important process in the evaluation of software quality. Accurately predicting the rate of software defects can not only facilitate software review decisions, but can also improve software quality. In this paper, we provide a defect estimation approach which uses defect data from sequential inspections to increase the accuracy of estimating defects. To demonstrate potential improvements, the results of our approach were compared to those of two other popular estimation approaches, the capture-recapture model and the re-inspection model. By using the proposed approach, software organizations may increase the accuracy of their defect predictions and reduce the effort of subsequent inspections.",2005,0, 3931,Efficiency enhancement of microstrip patch antenna with defected ground structure,"Defected ground structures (DGS) have been developed to improve the characteristics of many microwave devices. Although the DGS has advantages in microwave filter design and in microstrip antenna design for applications such as cross-polarization reduction and mutual coupling reduction, it can also be used for antenna size reduction. The etching of a defect in the ground plane is a unique technique for antenna size reduction. The DGS can be modeled as an equivalent LC resonator circuit. The values of the inductance and capacitance depend on the area and size of the defect. By varying the various dimensions of the defect, the desired resonance frequency can be achieved. In this paper, the effect of a dumbbell-shaped DGS on the size reduction of a microstrip patch antenna is investigated. Then a cavity-backed structure, in which electric walls are placed surrounding the patch, is used to increase the efficiency of the microstrip patch antenna. The simulation is carried out with the IE3D full-wave EM simulator.",2008,0, 3932,"Building an adaptable, fault tolerant, and highly manageable web server on clusters of non-dedicated workstations","Clustered server architecture is increasingly being viewed as a successful and cost-effective approach to building a high-performance Web server.
Existing server-clustering schemes have typically concentrated on the following issues: scalability, high availability, and user transparency. In this paper, we argue that the design goals of the Web server cluster should include adaptability, fault tolerance, and high manageability. In the presence of the Internet's highly unpredictable workload, the server system should be self-adapting to changing circumstances. We address this problem by building a Web server on a cluster of non-dedicated workstations. Such a server can easily recruit non-dedicated nodes dynamically in response to load bursts. Based on such a scheme, we designed and implemented an innovative approach that enables an ongoing request to be smoothly migrated to another node in response to either a node failure or an overload. We also designed and implemented a management system that enables the Web site manager to manage and maintain the distributed server as a single large system",2000,0, 3933,Using Golomb rulers for optimal recovery schemes in fault tolerant distributed computing,"Clusters and distributed systems offer fault tolerance and high performance through load sharing. When all computers are up and running, we would like the load to be evenly distributed among the computers. When one or more computers break down, the load on these computers must be redistributed to other computers in the cluster. The redistribution is determined by the recovery scheme. The recovery scheme should keep the load as evenly distributed as possible even when the most unfavorable combinations of computers break down, i.e. we want to optimize the worst-case behavior. In this paper we define recovery schemes which are optimal for a number of important cases. We also show that the problem of finding optimal recovery schemes corresponds to the mathematical problem called Golomb rulers. These provide optimal recovery schemes for up to 373 computers in the cluster.",2003,0, 3934,Network based high accuracy realtime GPS positioning for GCP correction of high resolution satellite imagery,"A cm-level, high-accuracy real-time GPS positioning system has been realized nationwide in Japan based on networked Real Time Kinematic GPS (RTK-GPS), and has been commercially operated by Mitsubishi Electric Co. as PAS (Positioning Augmentation Services) since September 2003. We have also recently devised a more convenient method with submeter or decimeter accuracy using a low-cost, compact code-ranging GPS receiver utilizing correction data fed from the PAS system. Both of these methods will significantly reduce the time and cost of producing precise satellite imagery based on Ground Control Points (GCP), as well as of updating maps in geocentric coordinates, which is necessary to use satellite imagery operationally with GPS",2004,0, 3935,Stochastic Fault Trees for cross-layer power management of WSN monitoring systems,"Critical systems require supervising infrastructures to keep their unreliability under control. We propose that safety-critical systems be modeled through a fault-tolerant architecture based on Stochastic Fault Trees (SFTs), and we refer to a scenario where the monitoring infrastructure is a Wireless Sensor Network (WSN). SFTs associate the failure time of leaf events with a non-Markovian (GEN) cumulative distribution function (CDF) and support the evaluation of system unreliability over time.
In the reference scenario, the SFT model dynamically updates system unreliability according to samples delivered by the WSN; it maintains a dynamic measure of the safe time horizon within which the system is expected to operate under a given threshold of unreliability; and it provides the WSN with a measure of the contribution of each basic event to system unreliability.",2009,0, 3936,Embedded-software-based approach to testing crosstalk-induced faults at on-chip buses,"Crosstalk effects on long interconnects are becoming significant for high-speed circuits. This paper addresses the problem of testing crosstalk-induced faults at on-chip buses in system-on-a-chip (SOC) designs. We propose a method to self-test on-chip buses at speed, by executing an automatically synthesized program using on-chip processor cores. The test program, executed at system operational speed, can activate and capture the worst-case crosstalk effects on buses and achieve a complete coverage of crosstalk-induced logical and delay faults. This paper discusses the method and the framework for synthesizing such a test program. Based on the bus protocol, the instruction set architecture of an on-chip processor core, and the system specification, the method generates deterministic tests in the form of instruction sequences. The synthesized test program is highly modularized and compact. The experimental results show that, for testing interconnects between a processor core and any other on-chip core, a 3 K-byte program is sufficient to achieve complete coverage of crosstalk-induced logical and delay faults",2001,0, 3937,Design of Integrated Fault Diagnostic System (FDS),"Early diagnosis of plant faults/deviations is a critical factor for optimized and safe plant operation. Although smart controllers and diagnosis systems are available and widely used in chemical plants, some faults cannot be detected. The major reason is the lack of learning techniques that can learn from operational running data and previous abnormal cases. In addition, operator and maintenance engineer opinions and observations are not well used, and useful diagnosis knowledge is ignored. Providing a link between operation management, maintenance management, and fault diagnostic and monitoring systems will enable closing this gap, so that diagnostic and monitoring results can be used more effectively for real-time operation support and optimized plant maintenance. In addition, operation and maintenance findings and discovered knowledge can be used effectively for plant condition monitoring. This research work presents the framework and mechanism for such an integrated fault diagnostic system, which is called FDS. The proposed idea will support operation and maintenance planning as well as overall plant safety",2006,0, 3938,Trend analysis using real time fault simulation for improved fault diagnosis,"Early fault detection is critical for safe and optimum plant operation and maintenance in any chemical plant. Quick corrective action can help in minimizing quality and productivity offsets and can assist in averting hazardous consequences in abnormal situations. In this paper, fault diagnosis based on trend analysis is considered, where integrated equipment behaviors and the operation trajectory are analyzed using a trend-matching approach. A qualitative representation of these trends using IF-THEN rules based on a neuro-fuzzy approach is used to find root causes and possible consequences for any detected abnormal situation.
An experimental plant is constructed to provide real-time fault simulation data for fault detection method verification.",2007,0, 3939,Predicting the number of fatal soft errors in Los Alamos national laboratory's ASC Q supercomputer,"Early in the deployment of the Advanced Simulation and Computing (ASC) Q supercomputer, a higher-than-expected number of single-node failures was observed. The elevated rate of single-node failures was hypothesized to be caused primarily by fatal soft errors, i.e., board-level cache (B-cache) tag (BTAG) parity errors caused by cosmic-ray-induced neutrons that led to node crashes. A series of experiments was undertaken at the Los Alamos Neutron Science Center (LANSCE) to ascertain whether fatal soft errors were indeed the primary cause of the elevated rate of single-node failures. Observed failure data from Q are consistent with the results from some of these experiments. Mitigation strategies have been developed, and scientists successfully use Q for large computations in the presence of fatal soft errors and other single-node failures.",2005,0, 3940,Ground control for the geometric correction of PAN imagery from Indian remote sensing (IRS) satellites,"The efficacy of a hand-held global positioning system (GPS) receiver in stand-alone mode of GPS measurements was investigated by taking measurements on different dates. These measurements were compared to those observed by DGPS and total station for the validation of hand-held GPS receiver accuracy. Ground control point (GCP) coordinates were derived from 1:25,000 and 1:50,000 scale topographic maps, and by using two different types of GPS receivers: a stand-alone hand-held receiver and a dual-frequency DGPS receiver. GCPs derived from each source were used independently for the GCP-based geometric correction of IRS PAN sensor images by using an affine mapping function. GCPs derived from the two map scales yielded root mean square (RMS) errors from 15 to 35 m, respectively. However, GCPs derived by DGPS or a stand-alone hand-held GPS receiver gave RMS errors in the range of 3 to 6 m, which is very close to the spatial resolution of PAN sensor imagery (5.8 m). The mean values of GCP coordinates observed with the help of a hand-held GPS receiver in stand-alone mode might prove a cost-effective solution for the determination of GCP coordinates in the geometric correction of current high-resolution imagery from IRS satellites.",2003,0, 3941,Performance of Cooperative Sensing at the MAC Level: Error Minimization Through Differential Sensing,"Efficient operation of cognitive personal area networks (CPANs) necessitates accurate and efficient sensing of the primary user activity. This is accomplished in a cooperative manner by a number of nodes in the CPAN; the results of sensing are combined by the CPAN coordinator to form a comprehensive and timely channel map. The error of the sensing process is affected by various factors, including the ratio of the number of sensing nodes to the number of channels. In this paper, we present a probabilistic model of the sensing process and derive an analytical solution for the minimum number of sensing nodes that keeps the sensing error below prescribed limits.
Then, we discuss three differential sensing policies in which separate sets of sensing nodes target idle and active channels only, and show that the policy in which idle channels are given priority, but not exclusive treatment, achieves the best performance, as measured by the number of channels for which the information in the channel map is erroneous and the mean duration of that erroneous information.",2009,0, 3942,Linux Bugs: Life Cycle and Resolution Analysis,"Efforts to improve application reliability can fade if the reliability of the underlying operating system on which the application resides is not seriously considered. An important first step in improving the reliability of an operating system is to gain insights into why and how bugs originate, the contributions of the different modules to the bugs, their distribution across severities, the different ways in which bugs may be resolved, and the impact of bug severity on the resolution time. To gain this insight, we conducted an extensive analysis of the publicly available bug data on the Linux kernel over a period of seven years. Our observations suggest that the Linux kernel may draw significant benefits from the continual reliability improvement efforts of its developers. These efforts, however, are disproportionately targeted towards popular configurations and hardware platforms, due to which the reliability of these configurations may be better than that of those that are not commonly used. Thus, a key finding of our study is that it may be prudent to restrict use to common configurations and platforms when using open source systems such as Linux in applications with stringent reliability expectations.",2008,0, 3943,Selecting time-frequency representations for detecting rotor faults in BLDC motors operating under rapidly varying operating conditions,"Electric motors work continuously under operating conditions that rapidly vary with time. Motor diagnostics in a non-stationary environment is challenging due to the need for sophisticated signal processing techniques. In this paper, the use of quadratic TFRs is presented as a solution for the diagnostics of rotor faults in brushless DC (BLDC) motors operating under constantly changing load and speed conditions. Four time-frequency representations are considered in this paper: the short-time Fourier transform (STFT), the Wigner-Ville distribution (WVD), the Choi-Williams distribution (CWD), and the Zhao-Atlas-Marks distribution (ZAM). The drawbacks of these distributions and methods to overcome them are also presented. The TFRs are implemented in real time and their computational loads are compared in order to study their suitability for implementation in a commercial system.",2005,0, 3944,Algorithm Based fault tolerant state estimation of power systems,"Electric power industry deregulation has transformed state estimation from an important application into a critical one. State estimation in power systems involves very tedious computations, since large systems can consist of thousands of buses and lines. In this paper, we apply algorithm-based fault tolerance techniques to the Gauss-Newton iterative algorithm that solves a weighted least-squares (WLS) state estimator of a power system. These techniques efficiently detect computational errors due to transient and permanent faults in the hardware.
We show that the modified state estimation algorithm has low computational overhead and is free from false alarms",2004,0, 3945,Hardware-in-the-loop simulation of fault tolerant control for an electric power steering system,"Electric power steering (EPS) systems are rapidly replacing traditional hydraulic power steering systems due to fuel and cost savings. The reliability of a column-mounted EPS is improved by adding an alternate control scheme that is tolerant to a torque sensor failure (FTC). To accomplish this, a motor-model-based observer is used to estimate the total torque on the motor shaft. An independent estimate of the road reaction torque is generated from vehicle navigation signals and subtracted from the total to estimate the torque sensor output. A hardware-in-the-loop (HIL) simulation is described where the EPS model, road vehicle dynamics, and the developed control scheme are simulated on an Opal RT real-time platform. As the steering assist motor is the integral component of the control and estimation schemes, a physical DC motor is placed in the loop in lieu of the motor model. This simulation validates the developed method under more realistic operating conditions than software simulation alone; such a HIL simulation can be useful as a development tool, since it is more repeatable and cost-effective than a full in-vehicle test.",2008,0, 3946,A new approach to fault location in a single core underground cable system using combined fuzzy logic and wavelet analysis,"Electric utilities often face the problem of finding the exact location of a fault in a distribution cable. These faults often occur at the worst possible time and cause the maximum amount of inconvenience to the utility's customers. Most fault location techniques in use today require the judgment of skilled operators and can produce less than desired results for rapid fault location, and thus may inflict additional damage to the cable. When a fault occurs in a cable, there are specific relationships between fault voltage, fault current, the resulting fault impedance, and location. Since fuzzy logic is an effective way to map an input space to an output space, it can be employed in fault location as an efficient, economic, and adaptable method (compared with other artificial intelligence systems) by simply representing the aforementioned specific relationships as 'if-then' rules combined with a set of common-sense rules. This paper presents the results of investigations into a new fault location technique using an advanced signal processing technique based on wavelet technology to extract useful information, which is then applied to fuzzy logic in order to identify and locate, through pattern recognition, the disturbances in a distribution system comprising single-core underground cables.",2004,0, 3947,Fault current level issues for urban distribution network with high penetration of distributed generation,"Electrical power production from distributed generation (DG) is playing an increasing role in the supply of electricity in liberalised electricity markets. The popularity of DG is on the rise for a number of reasons, including deregulation of the power system, the increasing difficulty of installing new transmission and distribution infrastructure, and recent technological advances in the area of DG energy sources. Currently, DG is attractive to both distribution utilities and electricity users, as it can provide meaningful advantages to both.
The increasing demand on the urban distribution network (UDN) imposed by new DG, such as renewable sources and Combined Heat and Power (CHP), will impact the operation of the UDN in a number of areas, including fault current levels and voltage levels. In general, most DG connections are small CHP plants employing a reciprocating engine or gas turbine as a prime mover directly coupled to a synchronous generator, with electrical output up to 1 MVA and a 0.415 kV generating voltage; they are mainly connected to low-voltage busbars of 0.415 kV and, in some cases, to 10.5 kV busbars through a transformer. In general, all newly connected DG causes some increase in fault level. For the significant volumes of DG connection that would most likely occur in the UDN, fault level issues would increase, as the UDN tends to have the lowest fault level headroom. The aim of this paper is to present the consequences and operating limitations of connecting DG to the UDN. The methodology used is based on the latest edition of the IEC 60909 standard to calculate the maximum fault level in a UDN with DG connected at MV/LV levels. The application of the methodology is demonstrated using ERAC power system analysing software on a fictitious UDN, resembling part of a typical UDN where continuity of power supply is very important. A discussion is also included on potential measures available to reduce the fault level.",2009,0, 3948,Optimization of reflow-thermal profile by design of experiments with response surface methodology for minimizing solder-ball defects,"Electronics-assembly companies are always trying to reduce solder-ball defects as their customers keep tightening specifications, since the solder balls could cause electrical short circuits and damage to the product. Solder-ball defects affect not only quality risk but also scrap costs. Consequently, electronics-assembly companies need to keep focusing on minimizing this type of defect. One initiative to overcome this problem is to minimize the spattering of solder while the product is passing through the reflow-soldering oven. Reflow soldering is a complex process which requires specific optimal soldering conditions based on experimental data. The trial-and-error method of determining the optimal conditions would require a prohibitively large number of experiments, however, and so the Response Surface Methodology (RSM) has been employed in this research to overcome this problem.",2008,0, 3949,The Design Of Embedded Bus monitoring And Fault Diagnosis System Based On Protocol SAE J1939,"An embedded bus monitoring and fault diagnosis system based on the SAE J1939 protocol is designed in this paper. The system uses a 32-bit embedded processor as the hardware platform, a customized WinCE 6.0 operating system, and EVC as the tool to develop the embedded application. The functions of CAN communication, protocol definition, etc. were realized. A good human-computer interface was developed, and the system has already been applied on buses.",2010,0, 3950,Low power embedded DRAMs with high quality error correcting capabilities,"Embedded memories are part of almost any embedded system. To ensure error-free operation, error detecting/correcting codes or alternative schemes for on-line consistency checking can be used. The trade-offs with respect to error detection capabilities and hardware cost have been investigated in previous work. However, very often embedded systems also have to work with small batteries (e.g.
mobile devices), and power consumption becomes a second crucial issue. In this paper, a low-power design for on-line consistency checking is proposed. The proposed scheme is analyzed with respect to its power consumption and compared to error detecting/correcting schemes based on error correcting codes. The results show that on-line consistency checking based on the modulo-2 address characteristic can ensure both low error detection latencies and reduced power consumption in contrast to alternative schemes.",2005,0, 3951,FD-HGAC: a hybrid heuristic/genetic algorithm hardware/software co-synthesis framework with fault detection,"Embedded real-time systems are becoming increasingly complex. To combat the rising design cost of those systems, co-synthesis tools that map tasks to systems containing both software and specialized hardware have been developed. As system transient fault rates increase due to technology scaling, embedded systems must be designed in fault-tolerant ways to maintain system reliability. This paper presents and analyzes FD-HGAC, a tool using a genetic algorithm and heuristics to design real-time systems with partial fault detection. Results of numerous trials show that the tool produces systems with an average of 22% detection coverage that incurs no cost or performance penalty.",2005,0, 3952,An Architecture for Runtime State Restoration after Transient Hardware-Faults in Redundant Real-Time Systems,"Employing programmable electronic systems (PESs) in safety-critical real-time applications that cannot immediately be transferred to safe states requires especially high degrees of fault tolerance. Conventionally, this demand is satisfied not only by configuring multiple PESs redundantly, but also by applying redundant processing structures inside each PES. Instead, it is also desirable to provide the capability to rehabilitate a PES's faulty state by copying the internal state from its redundant counterparts at runtime. Thus, redundancy attrition due to transient faults is prevented, since failed channels can be brought back on line. Here, the problems concerned with state restoration at runtime are stated, the advantages and disadvantages of existing techniques are discussed, and a hardware-supported concept is introduced",2006,0, 3953,An approach to detecting domain errors using formal specification-based testing,"Domain testing, a technique for testing software or portions of software dominated by numerical processing, is intended to detect domain errors that usually arise from incorrect implementations of desired domains. This paper describes our recent work aiming to provide support for revealing domain errors using formal specifications. In our approach, formal specifications serve as a means for domain modeling. We describe a strong domain testing strategy that guides testers to select a set of test points so that potential domain errors can be effectively detected, and apply our approach in two case studies for test case generation.",2004,0, 3954,Efficient software checking for fault tolerance,"Dramatic increases in the number of transistors that can be integrated on a chip make processors more susceptible to radiation-induced transient errors. For commodity chips, which are cost- and energy-constrained, software approaches can play a major role in fault detection because they can be tailored to fit different requirements of reliability and performance.
However, software approaches add a significant performance overhead because they replicate the instructions and add checking instructions to compare the results. In order to make software checking approaches more attractive, we use compiler techniques to identify the ""unnecessary"" replicas and checking instructions. In this paper, we present three techniques. The first technique uses Boolean logic to identify code patterns that correspond to outcome-tolerant branches. The second technique identifies address checks before loads and stores that can be removed with different degrees of fault coverage. The third technique identifies the checking instructions and shadow registers that are unnecessary when the register file is protected in hardware. By combining the three techniques, the overheads of software approaches can be reduced by an average of 50%.",2008,0, 3955,Optimizing Dual-Core Execution for Power Efficiency and Transient-Fault Recovery,"Dual-core execution (DCE) is an execution paradigm proposed to utilize chip multiprocessors to improve the performance of single-threaded applications. Previous research has shown that DCE provides a complexity-effective approach to building a highly scalable instruction window and achieves significant latency-hiding capabilities. In this paper, we propose to optimize DCE for power efficiency and/or transient-fault recovery. In DCE, a program is first processed (speculatively) in the front processor and then reexecuted by the back processor. Such reexecution is the key to eliminating the centralized structures that are normally associated with very large instruction windows. In this paper, we exploit the computational redundancy in DCE to improve its reliability and its power efficiency. The main contributions include: 1) DCE-based redundancy checking for transient-fault tolerance and a complexity-effective approach to achieving full redundancy coverage and 2) novel techniques to improve the power/energy efficiency of DCE-based execution paradigms. Our experimental results demonstrate that, with the proposed simple techniques, the optimized DCE can effectively achieve transient-fault tolerance or significant performance enhancement in a power/energy-efficient way. Compared to the original DCE, the optimized DCE has similar speedups (34 percent on average) over single-core processors while reducing the energy overhead from 93 percent to 31 percent.",2007,0, 3956,Spike noise in soft under layer for perpendicular recording and its impact on error rate,"Dual-layer perpendicular recording uses a magnetically soft under layer (SUL) beneath the recording layer to conduct the flux from the recording head. The required thickness of the SUL is determined by the SUL moment, the write pole moment, and the pole dimensions. Typically, the SUL thickness is from tens to a few hundred nanometers. Domains are often present in this thickness range. In this study, we examine in detail the properties of spike noise and assess the effect of spike noise on error rate.",2005,0, 3957,Scheduling in Grid: Rescheduling MPI applications using a fault-tolerant MPI implementation,"Due to advancements in grid technologies, resources spread across the globe can be accessed using standard general-purpose protocols. Simulations and scientific experiments that were earlier restricted by the limited availability of resources are now carried out vigorously in the grid. Grid environments are dynamic in nature.
The resources in a grid are heterogeneous in nature and are not under central control, so scheduling in the grid is complex. The initial schedule obtained for an application may not be good, as it involves the selection of resources at a future time. Resource characteristics such as CPU availability, memory availability, and network bandwidth keep changing. Rescheduling becomes necessary under these conditions. The research experiment uses the fault-tolerant functionalities of MPICH-V2 to migrate MPI processes. Load-balancing modules, which decide when and where to migrate a process, are added to the MPICH-V2 system. Simulations are done to show that process migration is a viable rescheduling technique for computationally intensive applications. The research experiment also gives brief descriptions of some existing fault-tolerant MPI implementations.",2007,0, 3958,Fault-tolerant distributed shared memory on a broadcast-based architecture,"Due to advances in fiber-optics and VLSI technology, interconnection networks that allow multiple simultaneous broadcasts are becoming feasible. Distributed-shared-memory implementations on such networks promise high performance even for applications with small granularity. This paper presents the architecture of one such implementation, called the simultaneous optical multiprocessor exchange bus, and examines the performance of augmented DSM protocols that exploit the natural duplication of data to maintain a recovery memory in each processing node and provide basic fault tolerance. Simulation results show that the additional data duplication necessary to create fault-tolerant DSM causes no reduction in system performance during normal operation and eliminates most of the overhead at checkpoint creation. Under certain conditions, data blocks that are duplicated to maintain the recovery memory are utilized by the underlying DSM protocol, reducing network traffic and increasing the processor utilization significantly.",2004,0, 3959,"Improved read performance in a cost-effective, fault-tolerant parallel virtual file system (CEFT-PVFS)","Due to the ever-widening performance gap between processors and disks, I/O operations tend to become the major performance bottleneck of data-intensive applications on modern clusters. If all the existing disks on the nodes of a cluster are connected together to establish high-performance parallel storage systems, the cluster's overall performance can be boosted at no additional cost. CEFT-PVFS (a RAID 10 style parallel file system that extends the original PVFS), as one such system, divides the cluster nodes into two groups, stripes the data across one group in a round-robin fashion, and then duplicates the same data to the other group to provide storage service of high performance and high reliability. Previous research has shown that the system reliability is improved by a factor of more than 40 with mirroring while maintaining a comparable write performance. This paper presents another benefit of CEFT-PVFS, in which the aggregate peak read performance can be improved by as much as 100% over that of the original PVFS by exploiting the increased parallelism. Additionally, when the data servers, which typically are also computational nodes in a cluster environment, are loaded in an unbalanced way by applications running in the cluster, the read performance of PVFS will be degraded significantly.
On the contrary, in CEFT-PVFS, a heavily loaded data server can be skipped and all the desired data can be read from its mirroring node. Thus the performance will not be affected unless both the server node and its mirroring node are heavily loaded.",2003,0, 3960,Efficient techniques for reducing error latency in on-line periodic built-in self-test,"Due to the high cost of failure, verification and testing now account for more than half of the total lifetime cost of an integrated circuit (IC). Increasing emphasis needs to be placed on finding design errors and physical faults as early as possible in the life of a digital system, new algorithms need to be devised to create tests for logic circuits, and more attention should be paid to synthesis for test and on-line testing. On-line testing requires embedding logic that continuously checks the system for correct operation. Built-in self-test (BIST) is a technique that modifies the IC by embedding test mechanisms directly into it. BIST is often used to detect faults before the system is shipped and is potentially a very efficient way to implement on-line testing. Error latency is the elapsed time between the activation of an error and its detection. Reducing the error latency is often considered a primary goal in on-line testing.",2010,0, 3961,SEPIC converter to perform power factor correction in a ballast for fluorescent lamps,"Due to their inherent advantages, electronic ballasts for gas discharge lamps have found widespread application in recent years. But, up to now, each ballast had to be tailored to fit a given lamp, resulting in a multitude of ballasts that differ only in their operating parameters. One important requirement for operating different lamps with the same ballast was the availability of cost-effective microcontrollers which allow a great amount of flexibility. The second barrier to overcome was the restriction of a fixed bulk voltage given by a standard boost converter. This paper will point out how a SEPIC converter used as a power factor correction circuit introduces a new degree of freedom to the ballast, resulting in multi-lamp ability. An introduction to the operating principles of the SEPIC will be given, and the extra features which are inherent to the SEPIC will be discussed.",2005,0, 3962,Silicon Debug for Timing Errors,"Due to various sources of noise and process variations, ensuring that a circuit operates correctly at its desired operational frequency has become a major challenge. In this paper, we propose a timing-reasoning-based algorithm and an adaptive test-generation algorithm for diagnosing timing errors in the silicon-debug phase. We first derive three metrics that are strongly correlated with the probability of a candidate being an actual error source. We analyze the problem of circuit timing uncertainties caused by delay variations and test sampling. Then, we propose a candidate-ranking heuristic which is robust with respect to such sources of timing uncertainty. Based on the initial ranking result and the timing information, we further propose an adaptive path-selection and test-generation algorithm to generate additional diagnostic patterns for further improvement of the first-hit rate.
The experimental results demonstrate that combining the ranking heuristic and the adaptive test-generation method results in a very high resolution for timing diagnosis.",2007,0, 3963,Approach to diesel engine fault diagnosis based on crankshaft angular acceleration measurement and its realization,"The diesel engine is a complex power-generating machine that plays an important role in industry, and its failure rate is high. How to utilize new science and technology to carry out diesel engine fault diagnosis is a lasting topic. The instantaneous angular acceleration of the diesel crankshaft contains useful information for diesel engine fault diagnosis and power balance evaluation. In this paper, the theory and method of diesel engine fault diagnosis based on angular acceleration measurement are studied. At the same time, the high-speed microcontroller AVR8535, which has the unique function of automatically capturing the rising or falling edge of a square wave, is studied and utilized in the diesel engine's crankshaft angular acceleration measuring system. The system's software and hardware were designed, supplying a complete solution for diesel engine fault diagnosis based on instantaneous angular acceleration.",2005,0, 3964,Study on fault-diagnosis models of different neural networks and ensemble,"Different diagnosis models, including the multilayer perceptron (MLP), radial basis function (RBF), and two types of support vector machines (SVMs), were designed, analyzed, and compared based on the fault diagnosis of an analogue circuit instance. The experimental results show that the SVM model has a higher classification rate than the MLP and RBF models, while the MLP model has a better ability to deal with uncertain signals. Considering that different models correspond to different strategies, we combine the four models (MLP, RBF, and two SVMs) to form a diagnosis ensemble, which can achieve more accurate results than any individual model in the ensemble. The ensemble technique can provide a theoretical basis for further study on the fault diagnosis of analogue circuits.",2010,0, 3965,Design and Implementation of Fault Tolerance in the BACnet/IP Protocol,"Digital communication networks have become a core technology in advanced building automation systems. The building automation and control network (BACnet) is a standard data communication protocol designed specifically for building automation and control systems. BACnet provides the BACnet/IP (B/IP) protocol for data communication through the Internet. Every B/IP device uses a B/IP broadcast management device (BBMD) to deliver remote or global BACnet broadcast messages. In this paper, we propose a fault-tolerant BBMD for the B/IP protocol. The fault-tolerant BBMD improves the connectivity of B/IP networks because a backup BBMD automatically inherits the role of a defective primary BBMD. The fault-tolerant BBMD was designed to provide backward compatibility with existing B/IP devices. In this paper, we implemented the fault-tolerant BBMD and examined its validity using an experimental model.",2010,0, 3966,Design of IIR digital filters with prescribed phase error and reduced group-delay error,"Digital filters are often required to have constant group delays in many applications. Existing designs of IIR digital filters with no explicit constraints on the filters' group-delay responses usually lead to large group-delay errors, especially near the band edges.
In design methods with constraints on the group-delay error, considerable reductions of the group-delay error have been obtained, but the phase error may not be small enough. In this paper, we design the IIR filter by imposing constraints on its frequency-response error and phase error, and by using a sigmoid upper-bound function to shape the phase error. With this method, we have obtained both small phase error and small group-delay error. In order to implement the design method, we combine the Steiglitz-McBride strategy with a relaxation scheme to convert the nonconvex design problem into a series of feasible quadratically constrained quadratic programming problems. Two example filters with specifications given in the literature are provided to compare the proposed design method with several existing methods. Design results demonstrate the effectiveness of the proposed method and the good properties of the designed filters.",2010,0, 3967,Software reliability allocation of digital relay for transmission line protection using a combined system hierarchy and fault tree approach,"A digital relay is a special-purpose signal processing unit in which samples of physical parameters such as current, voltage, and other quantities are taken. With the proliferation of computer technology in terms of computational ability as well as reliability, computers are being used for such digital signal processing purposes. As far as computer hardware is concerned, it has been growing steadily in terms of power and reliability. Since power plant technology is now globally switching over to such computer-based relaying, software reliability naturally emerges as an area of prime importance. Recently, some computer-based digital relay algorithms have been proposed based on frequency-domain analysis using wavelet-neuro-fuzzy techniques for transmission line faults. A software reliability allocation scheme is devised for the performance evaluation of a multi-functional, multi-user digital relay that performs detection, classification, and location of transmission line faults.",2008,0, 3968,Performance validation of fault-tolerance software: a compositional approach,"Discusses the lessons learned in the modeling of a software fault tolerance solution built by a consortium of universities and industrial companies for an Esprit project called TIRAN (TaIlorable fault-toleRANce framework for embedded applications). The requirements of high flexibility and modularity for the software have led to a modeling approach that is strongly based on compositionality. Since the interest was in assessing both the correctness and the performance of the proposed solution, we have addressed these two aspects at the same time, and, by means of an example, we show how this was a central aspect of our analysis.",2001,0, 3969,A novel fuzzy logic approach to transformer fault diagnosis,"Dissolved gas in oil analysis is a well-established in-service technique for incipient fault detection in oil-insulated power transformers. A great deal of experience and data in dissolved gas in oil analysis (DGA) is now available within the utilities. Traditionally, diagnostic interpretations were done solely by human experts using past knowledge and standard techniques such as the ratio method. In this paper, a novel fuzzy logic approach is adopted to develop a computer-based intelligent interpretation of transformer faults using Visual Basic and C++ programming.
The proposed fuzzy-logic-based software has been tested and tuned using over 800 dissolved gas in oil analysis (DGA) case histories. This highly reliable tool has then been utilized in the detection and verification of 20 transformer faults. The proposed diagnostic tool is very useful to both expert and novice engineers in DGA result interpretation",2000,0, 3970,Time Coordination of Distance Protections Using Probabilistic Fault Trees With Time Dependencies,"Distance protection of the electrical power system is analyzed in the paper. Electrical power transmission lines are divided into sections equipped with protective relaying systems. Numerical protection relays use specialized digital signal processors as the computational hardware, together with the associated software tools. The input analogue signals are converted into a digital representation and processed according to the appropriate mathematical algorithms. The distance protection is based on local and remote relays. The hazard is the event of remote circuit breaker tripping provided that the local circuit breaker can be opened. Coordination of the operation of protection relays in the time domain is an important and difficult problem. Incorrect values of the time delays of protective relays can cause the hazard. In the paper, the time settings are performed using probabilistic fault trees with time dependencies (PFTTD). A PFTTD is built for the above-mentioned hazard. PFTTD are used in the selection of time delays of primary (local) and backup (remote) protections. Results of computations of hazard probabilities as a function of time delay are given.",2010,0, 3971,Fast detector of symmetrical fault during power swing for distance relay,"A distance relay should be blocked during a power swing to ensure reliability, but should still trip as soon as possible after an internal fault occurs during the swing. It is very difficult to detect symmetrical faults reliably and quickly during a power swing when complex power swing conditions and fault conditions are considered. This paper presents a new fast detector of symmetrical faults during power swing. Based on the sudden reduction of the absolute value of the rate of change of the power swing centre voltage (PSCV), the presented detector can detect a symmetrical fault reliably and sensitively within two cycles. This detector is easy to set and is immune to the swing period, fault arc, fault location, and power angle. EMTP simulations and real-time digital simulator system (RTDS) tests prove the presented detector is fast, sensitive, and reliable.",2005,0, 3972,A Deterministic Methodology for Identifying Functionally Untestable Path-Delay Faults in Microprocessor Cores,"Delay testing is crucial for most microprocessors. Software-based self-test (SBST) methodologies are appealing, but devising effective test programs addressing the truly functionally testable paths and assessing their actual coverage are complex tasks. In this paper, we propose a deterministic methodology, based on the analysis of the processor instruction set architecture, for determining rules arbitrating the functional testability of path-delay faults in the data path and control unit of processor cores. Moreover, the performed analysis gives guidelines for generating test programs. A case study on a widely used 8-bit microprocessor is provided.",2008,0, 3973,On the Automatic Generation of Test Programs for Path-Delay Faults in Microprocessor Cores,"Delay testing is mandatory for guaranteeing the correct behavior of today's high-performance microprocessors.
Several methodologies have been proposed to tackle this issue, resorting to additional hardware or to software self-test techniques. Software techniques are particularly promising, as they resort to Assembly programs in the normal mode of operation, without requiring circuit modifications; however, the problem of generating effective and efficient test programs for path-delay fault detection is still open. This paper presents an innovative approach for the generation of path-delay self-test programs for microprocessors, based on an evolutionary algorithm and on ad-hoc software simulation/hardware emulation heuristic techniques. Experimental results show how the proposed methodology allows generating suitable test programs in reasonable time.",2007,0, 3974,Simulation of faults in DFIG-based wind farms,"Demand for wind power has increased considerably due to technological advances and favorable government policies. As a result, large wind farms with multi-megawatt capacity are connected to sub-transmission and transmission systems. With high penetrations of wind energy, the performance of the overall system is affected by the technical impacts introduced by wind turbine generators (WTG). Fault current contributions from WTGs will have a significant impact on the protection and control of the wind farm as well as the interconnected system. This paper initially describes the modeling aspects of the Doubly Fed Induction Generator (DFIG) under steady-state and fault conditions. Further, a 9 MW wind farm with six 1.5 MW DFIG units is modeled in Matlab/Simulink, and voltage and current waveforms are presented and discussed for symmetrical and asymmetrical faults created within the wind farm and in the power system.",2009,0, 3975,The Optimal Morlet Wavelet and Its Application on Mechanical Fault Detection,"De-noising and extraction of weak signals are very important to mechanical fault detection, where signals often have a very low signal-to-noise ratio (SNR). In this paper, a denoising method based on the optimal Morlet wavelet is applied to feature extraction for mechanical vibration signals. The wavelet shape parameters are optimized based on a kurtosis maximization criterion. The effectiveness of the proposed technique on the extraction of impulsive features of mechanical fault signals has been proved by practical experiments.",2009,0, 3976,Pruning single event upset faults with petri nets,"Dependability of embedded systems is becoming a serious concern even for mass-market systems. Usually, designs are verified by means of fault injection campaigns, but the length of a thorough test often collides with the severe requirements on design cycle times. The number of fault injection experiments is thus usually reduced by performing random fault injections, by focusing on selected fault models, or by focusing on components that depend on specific architectures and workloads. This forces the validation campaign to begin only when the system is fully designed, since specific details about the implementation or the workload are required. In this work, we propose to perform early fault pruning analysis on a formal model of the system, in order to identify the most critical components and computation cycles as soon as possible.",2009,0, 3977,A fault tolerant journalized stack processor architecture,"Dependable architectures play an important role in many areas that impact our lives.
Dependability is achieved by using a set of analysis and design techniques that increases the complexity and consequently the cost of systems. In this paper, to meet the low-cost requirement of IP cores, we propose a simple dependable stack processor architecture using a re-execution model in which instructions are re-executed when an error is detected in consecutive sequences of instruction execution. The architecture is based on applying two memory journals as intermediate stages between the processor and main memory in write operations. Then, we present the results obtained by using the developed emulation tools.",2009,0, 3978,Efficient fault tolerant scheduling on Controller Area Network (CAN),"Dependable communication is becoming a critical factor due to the pervasive usage of networked embedded systems that increasingly interact with human lives in many real-time applications. The Controller Area Network (CAN) has gained wide acceptance as a standard in a large number of industrial applications, mostly due to its efficient bandwidth utilization, its ability to provide real-time guarantees, and its fault-tolerant capability. However, the native CAN fault-tolerant mechanism assumes that all messages transmitted on the bus are equally critical, which has an adverse impact on the message latencies, results in the inability to meet user-defined reliability requirements, and, in some cases, even leads to violation of timing requirements. As the network potentially needs to cater to messages of multiple criticality levels (and hence varied redundancy requirements), scheduling them in an efficient fault-tolerant manner becomes an important research issue. We propose a methodology which enables the provision of appropriate guarantees in CAN scheduling of messages with mixed criticalities. The proposed approach involves the definition of fault-tolerant feasibility windows of execution for critical messages, and off-line derivation of optimal message priorities that fulfill the user-specified level of fault tolerance.",2010,0, 3979,Dead-time correction for a rotating rod normalization in a cylindrical PET system,"Depending on the geometry between the rotating coincidence rod source and the PET detectors, and on the rod activity, a variable amount of block dead-time is found in a PET system during normalization. This dead-time is driven by the relative location between the rod source and the crystals in the block detector. Normalization scans were acquired on a GE Discovery ST PET-CT system with 3 rod activities, with durations such that the total acquired counts (T+S+R) were held constant. To develop a model of the dead-time, acquisitions at six static source locations, centered over each crystal in a single block detector, were acquired for each of the rod activity levels. The resultant block busy data were analyzed such that the profile of block busy as the rod traversed all lines-of-response (with respect to the said block) was found. The profile was fit with a Gaussian function and parameterized by the FWHM and amplitude of the Gaussian. For image analysis, a 20 cm uniform cylinder and a whole-body patient scan were analyzed in reconstructed image space. The datasets were reconstructed with each normalization correction, then with the normalization corrected for dead-time. A model has been found and is described allowing application of dead-time correction to a rotating rod normalization scan based on measured block-busy in the normalization raw data. The model corrects for dead-time effects in the normalization.
Analysis of the image quality impact shows that bias and variance effects are reduced.",2003,0, 3980,Application of neural networks and filtered back projection to wafer defect cluster identification,"During an electrical testing stage, each die on a wafer must be tested to determine whether it functions as it was originally designed. In the case of a clustered defect on the wafer, such as scratches, stains, or localized failed patterns, the tester may not detect all of the defective dies in the flawed area. To avoid the defective dies proceeding to final assembly, an existing tool is currently used by a testing factory to detect the defect cluster and mark all the defective dies in the flawed region or close to the flawed region; otherwise, the testing factory must assign five to ten workers to check the wafers and hand-mark the defective dies. This paper proposes two new wafer-scale defect cluster identifiers to detect the defect clusters, and compares them with the existing tool used in the industry. The experimental results verify that one of the proposed algorithms is very effective in defect identification and achieves better performance than the existing tool.",2002,0, 3981,Identification of wafer defect clusters using a self-organizing multilayer perceptron,"During an electrical testing stage, each die on a wafer must be tested to determine whether it functions as it was originally designed. In the case of a clustered defect on the wafer, such as scratches, stains, or localized failed patterns, the tester may not detect all of the defective dies in the flawed area. To avoid the defective dies proceeding to final assembly, a testing factory must assign five to ten workers to check the wafers and hand-mark the defective dies in the flawed region or close to the flawed region. This work proposes an automatic wafer-scale defect cluster identifier using a multilayer perceptron to detect the defect cluster and mark all the defective dies. The proposed work is also compared with an existing tool used in the industry. The experimental results verify that our proposed algorithm is very effective in defect identification and achieves better performance than the existing tool.",2002,0, 3982,Diminution of errors in the technological process of hybrid integrated circuits by the implementation of the microcontrollers,"During technological processing, performance and costs are affected by errors due to technological variations and the instability of the information system components that monitor the process. The hybrid integrated circuit (HIC) technological process for the automobile industry is entirely automatic. Studies of this process determined the main reasons for errors (nonuniformity of materials, variation of environmental conditions, high-level noise that appears during data acquisition, and the reaction of machines to system commands). One solution that is both reliable and very cheap is assured by an upgrade of the information system using local elements based on the Intel 80C852 microcontroller. This solution realizes correction of the errors during local data acquisition and cancellation of data transmission errors in the technological process. Also, intelligent elements built with the Intel 80C852 microcontroller were added to the functional adjustments to realize monitoring of the environmental conditions and of the statistical data picked up during the working period of the final products.
This paper presents a local data acquisition module built with an adaptive filter, the associated software with very short execution time, the generalized product model, the most frequent error types at several points of the technological process, the central place for adjustment, and comparative results obtained after the modernization process.",2001,0, 3983,Cloud-Rough Model Reduction with Application to Fault Diagnosis System,"During a system fault period, the explosively growing signals, which contain both fuzziness and randomness, are usually too redundant for the dispatcher to make the right decision. So intelligent methods must be developed to aid users in maintaining and using this abundance of information effectively. An important issue in a fault diagnosis system (FDS) is to allow the discovered knowledge to be as close as possible to natural language, to satisfy user needs with tractability and to offer FDS robustness. At this juncture, cloud theory is introduced. The mathematical description of the cloud effectively integrates the fuzziness and randomness of linguistic terms in a unified way. A cloud-rough model is put forward. Based on it, a method of knowledge representation in FDS is developed which bridges the gap between quantitative knowledge and qualitative knowledge. In relation to the classical rough set, the cloud-rough model can deal with attribute uncertainty and perform soft discretization of continuous attributes. A novel approach, including discretization, attribute reduction, value reduction and data complement, is presented. The data redundancy is greatly reduced based on an integrated use of cloud theory and rough set theory. An illustration with a power distribution FDS shows the effectiveness and practicality of the proposed approach.",2006,0,2158 3984,A middleware aided robust and fault tolerant dynamic reconfigurable architecture,"Dynamic reconfiguration enhances embedded systems with run-time adaptive functionality and is an improvement in terms of resource utilization and system adaptability. SRAM-based FPGAs provide a dynamic reconfigurable platform with high logic density. The requirements for such a highly flexible embedded system based on FPGAs are robustness and reliability, to prevent operation interruptions or even system failures. The complexity of a dynamic reconfigurable system with adaptive processing modules demands high effort from the user. Therefore a high-level abstraction of the communication issues by an appropriate middleware is required to support application development. To achieve such a flexible embedded system we present our network-on-chip (NoC) approach, system-on-chip wire (SoCWire), and outline its performance and suitability for robust dynamic reconfigurable systems. Furthermore we introduce a suitable embedded middleware concept to support the system reconfiguration and the software application development process.",2009,0, 3985,Fault-Tolerant Scheduling for Periodic Tasks based on DVFS,"The dynamic voltage and frequency scaling (DVFS) technique is emerging in various battery-operated embedded systems to reduce the energy consumption and prolong the working life of the system. However, DVFS technology has been proved to have some direct and negative effects on the reliability of the system. Most existing schedulers of real-time tasks based on DVFS only focus on minimizing energy consumption without taking fault tolerance into account.
To solve this problem, in this study we developed a novel energy-aware fault-tolerant (EAFT) technique tailored for real-time periodic tasks. The EAFT heuristic balances the allocation of slack between reducing energy consumption and re-executing failed tasks. The simulation results showed that the proposed reliability-aware schemes could guarantee the system reliability and significantly save energy compared with the existing allocation schemes.",2008,0, 3986,Harmonic distortion and measurement principles based on digital fault recorder (DFR) analysis,"Each type of device causing harmonics has a particular shape of harmonic current and voltage (amplitude and phase displacement). This work provides a methodology for analyzing the distortion from the data records of digital fault recorders in order to quantify the distortion in current and voltage. This can be done by decomposing the signal into its constituent components in the frequency domain, because it is not practical to obtain and represent all the system detail for analysis, which can lead to inaccurate estimation of distortion in voltages and currents. A simple but realistic approach for resonance analysis is presented.",2009,0, 3987,Energy efficient soft-decision error control in wireless sensor networks,"Energy-efficient reliable communication over an unpredictable wireless medium is a major challenge for resource-constrained wireless sensor nodes employed in process/environment control applications. In this paper, we suggest a soft-decision decoding (SDD) based advanced Forward Error Correction (FEC) scheme for low-power distributed sensor nodes. The proposed BCH (Bose-Chaudhuri-Hocquenghem) based adaptive Chase-2 decoding scheme offers attractive energy benefits as compared to hard-decision decoding (HDD). The reduced decoding complexity is obtained by limiting the codeword search space and using fewer algebraic operations than standard Chase-2. A realistic environmental scenario incorporating path loss, Rayleigh fading and additive white Gaussian noise has been considered to investigate the performance of the proposed scheme. A detailed comparative analysis is carried out with HDD of BCH codes using parameters of the widely used IEEE 802.15.4-compliant MicaZ node. The simulation results indicate that for low-power WSNs, the proposed SDD-based adaptive scheme could offer a better tradeoff between energy and reliability than HDD schemes.",2010,0, 3988,Fault diagnosis and failure prognosis for engineering systems: A global perspective,"Engineering systems, such as aircraft, industrial processes, manufacturing systems, transportation systems, and electrical and electronic systems, are becoming more complex and are subjected to failure modes that adversely impact their reliability, availability, safety and maintainability. Such critical assets are required to be available when needed, and maintained on the basis of their current condition rather than on the basis of scheduled or breakdown maintenance practices. Moreover, on-line, real-time fault diagnosis and prognosis can assist the operator to avoid catastrophic events. Recent advances in Condition-Based Maintenance and Prognostics and Health Management (CBM/PHM) have prompted the development of new and innovative algorithms for fault, or incipient failure, diagnosis and failure prognosis aimed at improving the performance of critical systems.
This paper introduces an integrated systems-based framework (architecture) for diagnosis and prognosis that is generic and applicable to a variety of engineering systems. The enabling technologies are based on suitable health monitoring hardware and software; data processing methods that focus on extracting features or condition indicators from raw data via data mining and sensor fusion tools; accurate diagnostic and prognostic algorithms that borrow from Bayesian estimation theory (specifically particle filtering), fatigue or degradation modeling, and real-time measurements to declare a fault with prescribed confidence and a given false alarm rate while predicting accurately and precisely the remaining useful life of the failing component/system. Potential benefits to industry include reduced maintenance costs, improved equipment uptime and safety. The approach is illustrated with examples from the aircraft and industrial domains.",2009,0, 3989,Spatial error concealment for H.264 using sequential directional interpolation,"Error concealment at the decoder restores erroneous macroblocks (MBs) caused by channel errors. In this paper, we propose a novel spatial error concealment algorithm based on the prediction modes of intra-blocks, which are included in an H.264-coded stream and are highly correlated with the direction of the local edge within the block. The key contribution is to sequentially interpolate each pixel in a lost MB by utilizing edge directions and strengths efficiently estimated from the neighboring blocks, preserving local edge continuity for more visually acceptable images. The proposed scheme is simple to implement and more reliably recovers highly detailed content in corrupted MBs. The experimental results show that the proposed method reduces processing time by 14%~39% compared with existing methods, and outperforms them in PSNR by 0.5~1 dB as well as in subjective visual evaluation.",2008,0, 3990,Software Implementation of a Novel Approach to Improving Burst Errors Correction Capability of Hamming Code,"Error correction coding has been a crucial part of data transmission and storage. In high-reliability applications, the single-error-correction, double-error-detection Hamming code may not provide adequate protection against burst errors. This makes multiple-error correction highly desirable. This paper proposes a novel approach to improving the burst error correction capability of the Hamming code, while keeping the code rate as high as possible. Software implementations of Hamming encoding and decoding are proposed in this paper, and the simplicity and effectiveness of their implementations are demonstrated with an example.",2007,0, 3991,Quantitative Analysis of the Error Propagation Phenomenon in Distributed Information Systems,"Error propagation analysis is of fundamental importance to assure safe operation and management of abnormal situations in any distributed information system. In this paper, a quantitative method is proposed to analyze all possible error propagation scenarios based on different topologies, error types and probabilities.
The stated thesis was verified by experiments conducted on a simulation model. From a safety point of view, the results provide some insight into robustness: knowledge of how to design the most error-resistant architectures.",2009,0, 3992,Directional high-resistance earth fault detector based on zero-sequence components and wavelet transform,"Distance relays may fail to detect some earth faults, since the fault resistance is usually much higher in earth faults than in phase-to-phase (ph-ph) faults. Typically a traditional zero-sequence-based method is used to detect a high-resistance earth fault. However, this method fails to detect some high-resistance earth faults. On the other hand, it may trip under normal conditions where there is a slight unbalance. This paper introduces a new method utilizing both zero-sequence components and the wavelet transform to detect high-resistance earth faults. This method does not trip under normal conditions due to unbalance, whereas it is very sensitive to high-resistance earth faults. It will be shown by using proper simulations that the proposed method is fast and reliable.",2010,0, 3993,Formal Modelling and Analysis of Business Information Applications with Fault Tolerant Middleware,"Distributed information systems are critical to the functioning of many businesses; designing them to be dependable is a challenging but important task. We report our experience in using formal methods to enhance processes and tools for development of business information software based on service-oriented architectures. In our work, which takes place in an industrial setting, we focus on the configuration of middleware, verifying application-level requirements in the presence of faults. In pilot studies provided by SAP, we used the Event-B formalism and the open Rodin tools platform to prove properties of models of business protocols and expose weaknesses of certain middleware configurations with respect to particular protocols. We then extended the approach to use models automatically generated from diagrammatic design tools, opening the possibility of seamless integration with current development environments. Increased automation in the verification process, through domain-specific models and theories, is a goal for future work.",2009,0, 3994,Design and implementation of fault tolerant CORBA-based middleware platform,"Distributed middleware can reduce the development cycle of distributed application systems, and provides a good development environment and transparent support for distributed applications. Developing a fault-tolerant distributed application platform based on middleware not only greatly simplifies the development of the fault-tolerant distributed system itself, but also improves the flexibility, scalability and reliability of distributed applications, and makes the fault-tolerant system easier to deploy and manage. From this perspective, we have designed and implemented a fault-tolerant platform based on active replication by using CORBA. Comparing the results across different systems, we find that the middle tier is the bottleneck of the platform's performance.",2010,0, 3995,A comprehensive approach for reliability worth assessment of the automated fault management schemes,"The application of distribution automation for fault management in electricity distribution networks is one of the main potential remedial actions to reduce customers' outage times and hence improve service reliability.
For this purpose, various automation schemes have been developed and introduced in different countries and by different vendors. However, the challenge for electric utilities, especially in today's competitive electricity market, is to identify and evaluate potential reliability reinforcement schemes. Accordingly, appropriate schemes must be determined and prioritized for implementation. In this context, reliability cost/worth assessment plays an important role. A comprehensive approach is proposed in this paper to quantitatively assess the effects of various automated fault management schemes on the distribution system reliability.",2010,0, 3996,Aspect oriented software fault tolerance and analytically redundant design framework,"Diversity- or redundancy-based software fault tolerance does not come for free; rather it introduces additional complexity to the core functionality in the form of redundancy development, management and controlled execution. This results in tangling of the core functionality with the fault tolerance concerns. This paper presents a novel design framework using static and dynamic advice provided by aspect oriented programming. The proposed strategy introduces, manages and exercises different fault tolerance strategies such that modularization is achieved by separating these concerns from the core functionality. A mathematical model of an inverted pendulum control system has been used as a case study to demonstrate the effectiveness of the proposed design framework.",2010,0, 3997,Sequential correction of perspective warp in camera-based documents,"Documents captured with hand-held devices, such as digital cameras, often exhibit perspective warp artifacts. These artifacts pose problems for OCR systems, which at best can only handle in-plane rotation. We propose a method for recovering the planar appearance of an input document image by examining the vertical rate of change in scale of features in the document. Our method makes fewer assumptions about the document structure than do previously published algorithms.",2005,0, 3998,Development of Efficient Algorithm for Fault Location on Series Compensated Parallel Transmission Lines,"The development of an efficient algorithm for series compensated parallel transmission lines is presented. The new algorithm is developed using instantaneous fault data collected from both ends of the line, in order to estimate the fault location and fault resistance accurately. Firstly, a single transmission line with a series compensation unit located in the middle is modelled using the ATP program and tested with the new algorithm on a number of different fault cases. Then the algorithm is further improved for the testing of series compensated parallel transmission lines. An example of the estimation of fault location and resistance on the single line model is reported and discussed, and a comprehensive summary of fault location estimates for the single line model is presented in this paper. At present, the algorithm is being tested for parallel transmission lines and the results obtained from trial runs are satisfactory. Results of further testing of the parallel transmission line case will be presented in a future publication.",2005,0, 3999,March-based RAM diagnosis algorithms for stuck-at and coupling faults,"Diagnosis techniques play a key role in the rapid development of semiconductor memories, catching design and manufacturing failures and improving the overall yield and quality.
Investigation of efficient diagnosis algorithms is very important due to the expensive and complex fault/failure analysis process. We propose March-based RAM diagnosis algorithms which not only locate faulty cells but also identify their types. The diagnosis complexity is O(17N) and O((17+10B)N) for the bit-oriented and word-oriented diagnosis algorithms, respectively, where N represents the address number and B is the data width. Using the proposed algorithms, stuck-at faults, state coupling faults, idempotent coupling faults and inversion coupling faults can be distinguished. Furthermore, the coupled and coupling cells can be located in the memory array. Our word-oriented diagnosis algorithm can distinguish all of the inter-word and intra-word coupling faults, and locate the coupling cells of the intra-word inversion and idempotent coupling faults. With an additional 2B-1 operations, the algorithm can further locate the intra-word state coupling faults. With improved diagnostic resolution and test time, the proposed algorithms facilitate the development and manufacturing of semiconductor memories.",2001,0, 4000,March DSS: A New Diagnostic March Test for All Memory Simple Static Faults,"Diagnostic march tests are powerful tests that are capable of detecting and identifying faults in memories. Although march SS was published for detecting simple static faults, no test has been published for identifying all faults possibly present in memory cells. In this paper, we target all published simple static faults. We identify faults that cannot be distinguished due to their analog behavior. We present a new methodology for generating irredundant diagnostic march tests for any desired subset of the simple static faults using the necessary and sufficient conditions for fault detection. Using that methodology, along with a verification tool, and trial and error, we were able to build a new diagnostic test for all distinguishable faults named march DSS. March DSS is the first test that is capable of identifying all distinguishable memory static faults. Compared to the latest most comprehensive published diagnostic march test, march DSS provides significant improvement in terms of fault coverage, time complexity, and power consumption. By targeting the same faults, we were able to provide a new test equivalent to the latest published test with 46% improvement in time complexity.",2007,0, 4001,Current harmonics analysis as a method of electrical faults diagnostic in switched reluctance motors,"A diagnostic method for electrical faults in switched reluctance motors (SRMs) based on current harmonics analysis is presented in this paper. The classification of electrical faults in SRMs is discussed, and the simulation model and laboratory setup are presented. For an SRM of 6/4 design, conclusions are drawn from tests conducted under both simulated and laboratory conditions for regular operation and selected electrical faults. The type of electrical fault present in the machine was determined from the content of higher-order harmonics in the source current. The conclusions are included.",2007,0, 4002,On-line identification of faults in fault-tolerant imagers,"Detection of defective pixels that develop on-line is a vital part of fault-tolerant schemes for repairing imagers during operation. This paper presents a new algorithm for the identification of stuck-low, stuck-high and partially stuck pixels in both regular and fault-tolerant APS systems.
The algorithm does not require specialized illumination but instead operates on a sequence of regular images and uses statistical information extracted from each image to decide the state of each pixel. Simulations show that, unlike previous techniques, it can find all faulty pixels without misidentifying good pixels as faulty. Under typical conditions, the algorithm successfully converges on the correct result within 238 images for a fault-tolerant APS, and 16 images for a regular APS. More extensive simulations have shown that these results can be extended to high-resolution sensors and complex defect models that include hot pixels, without a significant decline in performance.",2005,0, 4003,An Integrated Framework for Checking Concurrency-Related Programming Errors,"Developing concurrent programs is intrinsically difficult. They are subject to programming errors that are not present in traditional sequential programs. Our current work is to design and implement a hybrid approach that integrates static and dynamic analyses to check concurrency-related programming errors more accurately and efficiently. The experiments show that the hybrid approach is able to detect concurrency errors in unexecuted parts of the code compared to dynamic analysis, and produces fewer false alarms compared to static analysis. Our future work includes but is not limited to optimizing performance, improving accuracy, as well as locating and confirming concurrency errors.",2009,0, 4004,Migrating Fault Trees To Decision Trees For Real Time Fault Detection On International Space Station,"Fault Tree Analysis shows the possible causes of a system malfunction by enumerating the suspect components and their respective failure modes that may have induced the problem. Complex systems often use fault trees to analyze the faults. Fault diagnosis, when an error occurs, is performed by engineers and analysts through extensive examination of all data gathered during the mission. The International Space Station (ISS) control center operates on the data feedback from the system, and decisions are made based on threshold values by using fault trees. Since those decision-making tasks are safety critical and must be done promptly, the engineers who manually analyze the data face a time challenge. To automate this process, this paper presents an approach that uses decision trees to capture the contents of fault trees and detect faults by running the telemetry data through the decision trees in real time. Decision trees (also called classification trees) are binary trees built from data samples that can classify objects into different classes. In our case, the decision trees can classify different fault events or normal events. Given a set of data samples, decision trees can be built and trained, and then by running the new data through the trees, classification and prediction can be made. In this way, diagnostic knowledge for fault detection and isolation can be represented as diagnostic rules; we call this tree the diagnostic decision tree (DDT). By showing the fault path in decision trees, we can also point out the root cause when a fault occurs. Since all the procedures and algorithms are available to build decision trees, the trees built are cost-effective and time-effective. Because the diagnostic decision trees are based on available data and previous knowledge of subsystem logic, the DDT can also be trained to predict faults and detect unknown faults.
Based on this, the need for on-board real-time diagnostics can readily be met. Diagnostic Decision Trees are built from the fault trees as static trees that serve as the fundamental diagnostic trees, and the dynamic DDTs are built over time from vehicle telemetry data. The dynamic DDTs will add prediction functionality and will be able to detect unknown faults. Crew or maintenance engineers can use the decision tree system without having previous knowledge or experience of the diagnosed system. To our knowledge, this is the first paper to propose a solution to build diagnostic decision trees from fault trees, which converts reliability analysis models to diagnostic models. We show through mapping and ISS examples that the approach is feasible and effective. We also present future work and development.",2005,0, 4005,A Tool Supporting Evaluation of Non-markovian Fault Trees,"Fault Trees are widely employed in industrial practice to support safety and reliability analysis. Various works have improved the classic formulation by replacing fixed probabilities of leaf events with Markovian distributions over time. We present the operation principles, user interface and implementation architecture of a tool supporting editing and evaluation of Fault Trees where the time of occurrence of leaf events follows a generalized probability density function.",2008,0, 4006,Detecting Double Faults on Term and Literal in Boolean Expressions,"Fault-based testing aims at selecting test cases to guarantee the detection of certain prescribed faults in programs. The detection conditions of single faults have been studied and used in areas like developing test case selection strategies, establishing relationships between faults and investigating the fault coupling effect. It is common, however, for programmers to commit more than one fault. Our previous studies on the detection conditions of faults in Boolean expressions show that (1) some test case selection strategies developed for the detection of single faults can also detect all double faults related to terms, but (2) these strategies cannot guarantee to detect all double faults related to literals. This paper supplements our previous studies and completes our series of analyses of the detection conditions of all double fault classes in Boolean expressions. Here we consider the fault detection conditions of combinations of two single faults, in which one is related to a term and the other to a literal. We find that all such faulty expressions, except two, can be detected by some test case selection strategies for single fault detection. Moreover, the two exceptional faulty expressions can be detected by existing strategies when used together with a supplementary strategy which we earlier developed to detect double literal faults.",2007,0, 4007,A novel approach for ground fault detection,"Faulted lines must be repaired and returned to service in the shortest possible time to provide reliable service to the customers. Fault detection is an important aspect of system protection, as it involves personnel as well as equipment safety. Research on fault detection techniques is mainly based on conventional methods, which are not suitable for detecting high impedance ground faults. This paper discusses a novel approach for detecting high impedance ground faults using state-of-the-art signal processing technology. Some laboratory test results are reported.
The reported work offers promise in realizing dependable and secure high impedance ground fault detection using the latest technology.",2004,0, 4008,Design and Implementation of an Integrated Fault-Supervising System for Large HPCs,"Faults and failures are the biggest obstacles that prevent high-performance computing systems (HPCs) from fully exerting their functions and performance. To minimize or eliminate their influence, HPCs must be supervised to obtain the relevant information in time, according to which effective actions can be taken as soon as possible. To probe into this solution, an integrated fault-supervising system (IFS) designed for a large HPC system is presented in this paper, with large numbers of distributed sensors and intelligent control units to acquire fault information rapidly. Furthermore, it is capable of automatic emergency processing in certain circumstances according to the acquired information, and it supports both local and remote management with convenient and visual interfaces. Up to now, the supervising system has operated well for a few years and has helped the target system reach over 90% availability, which indicates to a degree that the design is successful and that the supervising system deserves further research and could have a bright future in a wider range of applications.",2008,0, 4009,Measuring application error rates for network processors,"Faults in computer systems can occur for a variety of reasons. In many systems, an error has a binary effect, i.e. the output is either correct or it is incorrect. However, networking applications exhibit different properties. For example, even though a portion of the code behaves incorrectly due to a fault, the application can still work correctly. The integrity of a network system is often unchanged during faults. Therefore, measuring the effects of faults on network processor applications requires new measurement metrics to be developed. In this paper, we highlight essential application properties and data structures that can be used to measure the error behavior of network processors. Using these metrics, we study the error behavior of seven representative networking applications under different cache access fault probabilities.",2004,0, 4010,Fault-Tolerant Prediction-Based Scheme for Target Tracking Application,"Fault tolerance is an important function in target tracking applications using wireless sensor networks. In this paper, we propose an efficient fault-tolerant approach for target tracking that prevents the loss of the target. Instead of using a single prediction mechanism, our approach uses a multi-level incremental prediction technique that adjusts the prediction precision of the target movement. The node responsible for target detection uses multiple pieces of historical information to calculate multi-level predictions, which have different precision levels according to the number of information pieces used. Thanks to our parametric prediction model, our approach increases the prediction success rate and decreases the target loss frequency compared to basic approaches that use simple prediction models.",2009,0, 4011,Fault-tolerant average execution time optimization for general-purpose multi-processor system-on-chips,"Due to semiconductor technology development, fault tolerance is important not only for safety-critical systems but also for general-purpose (non-safety-critical) systems.
However, instead of guaranteeing that deadlines are always met, it is important for general-purpose systems to minimize the average execution time (AET) while ensuring fault tolerance. For a given job and a soft (transient) error probability, we define mathematical formulas for AET that include bus communication overhead for both voting (active replication) and rollback-recovery with checkpointing (RRC). For a given multi-processor system-on-chip (MPSoC), we define integer linear programming (ILP) models that minimize AET including bus communication overhead when: (1) selecting the number of checkpoints when using RRC, (2) finding the number of processors and the job-to-processor assignment when using voting, and (3) defining the fault-tolerance scheme (voting or RRC) per job and defining its usage for each job. Experiments demonstrate significant savings in AET.",2009,0, 4012,New techniques for speeding-up fault-injection campaigns,"Fault-tolerant circuits are currently required in several major application sectors, and a new generation of CAD tools is required to automate the insertion and validation of fault-tolerant mechanisms. This paper outlines the characteristics of a new fault-injection platform and its evaluation in a real industrial environment. The fault-injection platform is mainly used for assessing the correctness and effectiveness of the fault tolerance mechanisms implemented within ASIC and FPGA designs. The platform works on register transfer-level VHDL descriptions which are then synthesized, and is based on commercial tools for VHDL parsing and simulation. It also details techniques devised and implemented within the platform to speed up fault-injection campaigns. Experimental results are provided, showing the effects of the different techniques, and demonstrating that they are able to reduce the total time required by fault-injection campaigns by at least one order of magnitude.",2002,0, 4013,Fault-tolerant logic gates using neuromorphic CMOS Circuits,"Fault-tolerant design methods for VLSI circuits, which have traditionally been addressed at system level, will not be adequate for future very-deep submicron CMOS devices where serious degradation of reliability is expected. Therefore, a new design approach has been considered at a low level of abstraction in order to implement robustness and fault-tolerance in these devices. Moreover, the fault-tolerant properties of multi-layer feed-forward artificial neural networks have been demonstrated. Thus, we have implemented this concept at circuit level, using spiking neurons. Using this approach, the NOT, NAND and NOR Boolean gates have been developed in the AMS 0.35 μm CMOS technology. A very straightforward mapping between the value of a neural weight and one physical parameter of the circuit has also been achieved. Furthermore, the logic gates have been simulated using SPICE corner analysis, which emulates manufacturing variations that may cause circuit faults. Using this approach, it can be shown that fault-absorbing neural networks that realize the desired function can be built.",2007,0, 4014,Fiber Optical Gyro Fault Diagnosis based on Wavelet Transform and Neural Network,"Fault diagnosis plays an important role in assuring the reliability of integrated navigation. This paper proposes an intelligent method that combines the wavelet transform with a neural network to enhance efficiency. The wavelet transform and the neural network are combined in series. A symmetric wavelet was constructed based on Daubechies wavelets.
Through eight-layer Daubechies wavelet decomposition, detail information for eight layers was obtained. Then the 8-dimensional eigenvector was used as a fault sample to train a three-layer RBF neural network. Since the RBF network is good at classification, it can detect a fault on-line after training. At the same time, it can classify faults and raise alarms. Gyro signals were chosen as the simulation inputs; the results indicated the method's applicability and effectiveness.",2008,0, 4015,Rapid detection of faults for safety critical aircraft operation,"Fault diagnosis typically assumes a sufficiently large fault signature and enough time for a reliable decision to be reached. However, for a class of safety-critical faults on commercial aircraft engines, prompt detection within a millisecond range is paramount to allow accommodation to avert undesired engine behavior. At the same time, false positives must be avoided to prevent inappropriate control action. To address these issues, several advanced features were developed that operate on the residuals of a model-based detection scheme. We show that these features pick up system changes reliably within the required time. A bank of binary classifiers determines the presence of the fault via a maximum likelihood hypothesis test. We show performance results for four different faults at various levels of severity, and demonstrate performance throughout the entire flight envelope on a high-fidelity aircraft engine model.",2004,0, 4016,Automating power system fault diagnosis through multi-agent system technology,"Fault diagnosis within electrical power systems is a time-consuming and complex task. SCADA systems, digital fault recorders, travelling wave fault locators and other monitoring devices are drawn upon to inform the engineers of incidents, problems and faults. Extensive research by the authors has led to the conclusion that there are two issues which must be overcome. Firstly, the data capture and analysis activity is unmanageable in terms of time. Secondly, the data volume leads to engineers being overloaded with data to interpret. This paper describes how multi-agent system technology, combined with intelligent systems, can be used to automate the fault diagnosis activity. Within the multi-agent system, knowledge-based and model-based reasoning are employed to automatically interpret SCADA system data and fault records. These techniques and the design of the multi-agent system architecture that integrates them are described. Consequently, the use of engineering assistant agents as a means of providing engineers with decision support, in terms of timely and summarised diagnostic information tailored to meet their personal requirements, is discussed.",2004,0, 4017,Versatile and Efficient Techniques for Speeding-Up Circuit Level Simulated Fault-Injection Campaigns,"Fault injection at circuit level has proved to be cumbersome and time-consuming when employed to characterize the soft error sensitivity of digital circuits, hence a new generation of CAD tools is required to automate fault insertion and the validation of the soft error mitigation mechanisms of circuits. This paper outlines the characteristics of a new fault-injection platform, HSECT-SPI (HIT Soft Error Characterization Toolkit-Spice Based), and its evaluation on some benchmark circuits implemented with distinct processes and soft error hardening techniques.
It also details some techniques devised and implemented within the platform to automate and speed up the circuit-level fault-injection experiments. Experimental results are provided, showing that the platform is efficient and accurate and can direct the design of soft-error-immune circuits with a speed gain of at least three orders of magnitude.",2008,0, 4018,Generating non-uniform distributions for fault injection to emulate real network behavior in test campaigns,"Fault injection is an efficient technique to evaluate the robustness of computer systems and their fault tolerance strategies. In order to obtain accurate results from fault injection based tests, it is important to mimic real conditions during a test campaign. When testing dependability attributes of network applications, the real faulty behavior of networks must be closely emulated. We show how probability distributions can be used to inject communication faults that closely resemble the behavior observed in real network environments. To demonstrate the strengths of this strategy we develop a reusable and extensible entity called FIEND, integrate it into a fault injector and use the resulting tool to run test experiments injecting non-uniformly distributed faults in a network application taken as an example.",2009,0, 4019,Real Time Fault Injection Using Enhanced OCD -- A Performance Analysis,"Fault injection is frequently used for the verification and validation of dependable systems. When targeting real-time microprocessor-based systems, the process becomes significantly more complex. This paper proposes two complementary solutions to improve real-time fault injection campaign execution, both in terms of performance and capabilities. The methodology is based on the use of the on-chip debug mechanisms present in modern electronic devices. The main objective is the injection of faults in microprocessor memory elements with minimum delay and intrusiveness. Different configurations were implemented and compared in terms of performance gain and logic overhead.",2006,0, 4020,Fault injection approach based on dependence analysis,"Fault injection is used to validate a system in the presence of faults. Jaca, a software injection tool developed in previous work, is used to inject faults at interfaces between classes of a system written in Java. We present a strategy for fault injection validation based on dependence analysis. The dependence analysis approach is used to help in reducing the number of experiments necessary to cover the system's interfaces. For the experiments we used a system that consists of two integrated components: an ODBMS performance test benchmark, Wisconsin 007, and an ODBMS, Ozone. The results of some experiments and their analysis are presented.",2005,0, 4021,LFI: A practical and general library-level fault injector,"Fault injection, a critical aspect of testing robust systems, is often overlooked in the development of general-purpose software. We believe this is due to the absence of easy-to-use tools and to the extensive manual labor required to perform fault injection tests. This paper introduces LFI (library fault injector), a tool that automates the preparation of fault scenarios and their injection at the boundary between shared libraries and applications. LFI extends prior work by automatically profiling fault behaviors of libraries via static analysis of their binaries, thus reducing the dependence on human labor and perfect documentation.
We present techniques for automatically generating injection scenarios and we describe a simple language for expressing such scenarios. LFI does not require access to libraries' source code and works for Linux, Windows, and Solaris on x86 and SPARC platforms.",2009,0, 4022,Fault Injection Resilience,"Fault injections constitute a major threat to the security of embedded systems. Errors occurring in cryptographic algorithms have been shown to be extremely dangerous, since powerful attacks can exploit just a few of them to recover the full secrets. Most resistance techniques against perturbation attacks have so far relied on the detection of faults. We present in this paper another strategy, based on resilience against fault attacks. The core idea is to allow an erroneous result to be output, but with the assurance that this faulty information conveys no information about the secrets concealed in the chip. We first underline the benefits of FIR: false positives are never raised; secrets are not erased needlessly in case of non-compromising fault injections, which increases the card lifespan if the fault is natural rather than malevolent; and FIR enables a high potential of resistance even in the context of multiple faults. Then we illustrate two families of fault injection resilience (FIR) schemes suitable for symmetric encryption. The first family is a protocol-level scheme that can be formally proved resilient. The second family mobilizes a special logic-level architecture of the cryptographic module. We notably detail how a countermeasure of this latter family, namely the dual-rail with precharge logic style, can protect against both active and passive attacks, thereby bringing combined global protection of the device. The cost of this logic is evaluated as lower than that of detection schemes. Finally, we also give some ideas about the modalities of adding FIR to some certification schemes.",2010,0, 4023,An emotional decision making help provision approach to distributed fault tolerance in MAS,"Faults are inevitable, especially in multi-agent systems (MAS), because of their distributed nature. This paper introduces a new approach for fault tolerance using help provision and emotional decision-making. Tasks that are split into criticality-assigned real-time subtasks according to their precedence graph are distributed among specialized agents with different skills. If a fault occurs for an agent such that it cannot continue its task, the agent requests help from the others with the same skill to redo or continue its task. Requested agents help the faulty agent based on their nervousness about their own tasks compared to the faulty agent's task. It is also possible for an agent to discover, by polling, the death of another agent that has accepted one of its tasks. An implementation using the JADE platform is presented in this paper and the results are reported.",2004,0, 4024,Essential Fault-Tolerance Metrics for NoC Infrastructures,"Fault-tolerant design of network-on-chip communication architectures requires the addressing of issues pertaining to different elements described at different levels of design abstraction - these may be specific to architecture, interconnection, communication and application issues.
Assessing the effectiveness of a particular fault-tolerant implementation can be a challenging task for designers, constrained by tight system performance specifications and other requirements. In this paper, we provide a top-down view of fault-tolerance methods for NoC infrastructures, and present a range of metrics used for estimating their quality. We illustrate the use of these metrics by simulating a few simple but realistic fault-tolerant scenarios.",2007,0, 4025,Fault-Tolerant Soft Starter Control of Induction Motors With Reduced Transient Torque Pulsations,"Fault-tolerant operation of induction motors fed by soft starters when experiencing a thyristor/silicon-controlled rectifier open-circuit or short-circuit switch fault is presented in this paper. The present low-cost fault mitigation solution can be easily retrofitted, without significant cost increase, into existing off-the-shelf three-phase soft starters to enhance the reliability and fault-tolerant capability of such soft starter systems. In the event of either a thyristor open-circuit or short-circuit switch fault in any one of the phases, the fault-tolerant soft starters are capable of operating in a two-phase control mode using a novel resilient closed-loop control scheme. The performance resulting from the present fault-tolerant soft starter control has demonstrated reduced motor starting transient torque pulsations as well as reduced motor inrush current magnitude. The present fault-tolerant approach is applicable to any soft starters that control small to large integral horsepower induction motors. Simulation results along with supporting experimental results for a 1.492-kW, 460-V, four-pole, three-phase induction motor are presented here to demonstrate the soundness and effectiveness of the present fault-tolerant approach.",2009,0, 4026,A Fault-Tolerant Routing Algorithm of P2P Network Based on Hierarchical Structure,"Fault-tolerant routing in P2P networks has been a hot topic. Raising the performance of fault-tolerant routing can greatly enhance the stability and efficiency of a P2P network. Through research, we find that most routing errors are caused by the highly dynamic characteristics of P2P networks, in which peers frequently join, leave and fail. We propose an improved algorithm, named FTARH (Fault-tolerant Algorithm of Routing with Hiberarchy), to raise the performance of fault-tolerant routing in P2P networks based on hierarchical structure theory. The new algorithm makes full use of the surplus capacity of super peers to enhance the performance of fault-tolerant routing across the entire P2P network.",2010,0, 4027,Fault-tolerant scheduling using primary-backup approach for optical grid applications,"Fault-tolerant scheduling is an important issue for optical grid applications because of various grid resource failures. To improve the availability of the DAGs (directed acyclic graphs), a primary-backup approach is considered when making DAG scheduling decisions. Experiments demonstrate the effectiveness and the practicability of the proposed scheme.",2009,0,7577 4028,Experiences with formal specification of fault-tolerant file systems,"Fault-tolerant, replicated file systems are a crucial component of today's data centers. Despite their huge complexity, these systems are typically specified only in brief prose, which makes them difficult to reason about or verify.
This paper describes the authors' experience using formal methods to improve our understanding of and confidence in the behavior of replicated file systems. We wrote formal specifications for three real-world fault-tolerant file systems and used them to: (1) expose design similarities and differences; (2) clarify and mechanically verify consistency properties; and (3) evaluate design alternatives. Our experience showed that formal specifications for these systems were easy to produce, useful for a deep understanding of system functions, and valuable for system comparison.",2008,0, 4029,Novel classifier fusion approaches for fault diagnosis in automotive systems,"Faulty automotive systems significantly degrade the performance and efficiency of vehicles, and oftentimes are the major contributors to vehicle breakdowns; they result in large expenditures for repair and maintenance. Therefore, intelligent vehicle health-monitoring schemes are needed for effective fault diagnosis in automotive systems. Previously, we developed a data-driven approach using a data reduction technique, coupled with a variety of classifiers, for fault diagnosis in automotive systems. In this paper, we consider the problem of fusing classifier decisions to reduce diagnostic errors. Specifically, we develop three novel classifier fusion approaches: class-specific Bayesian fusion, joint optimization of the fusion center and of individual classifiers, and dynamic fusion. We evaluate the efficacies of these fusion approaches on automotive engine data. The results demonstrate that dynamic fusion, joint optimization, and class-specific Bayesian fusion outperform traditional fusion approaches. We also show that learning the parameters of individual classifiers as part of the fusion architecture can provide better classification performance.",2007,0, 4030,A fault-tolerant system-on-programmable-chip based on domain-partition and blind reconfiguration,"Field programmable gate arrays (FPGAs) are widely used in building Systems-on-Programmable-Chips (SOPCs) since they contain plenty of reconfigurable heterogeneous resources providing the facility to implement various intellectual property cores. However, with the shrinking device feature size and the increasing die area, nowadays FPGAs can be deeply affected by the errors induced by electromigration and radiation, which results in challenges for building reliable SOPCs. In this paper, a SOPC implementing a smart 1553B bus node is presented to investigate the challenges and illustrate a feasible approach for building a complex system aimed at high reliability and low recovery latency on a commercial FPGA. First, a general reliability model, the DomainPartition (DP) model, is introduced to formulate the SOPCs which contain multiple alternative configurations providing the fault recovery capability. The assignment of the alternative configurations for maximizing the reliability is then determined according to a first-order optimal solution under the DP framework. Finally, the blind reconfiguration technique is used to reduce the recovery latency. The experiments based on a Monte Carlo simulation approach are carried out to evaluate the reliability and the latency.
The obtained results show that higher reliability is attainable with less overhead than with the generic triple-modular redundancy method.",2010,0, 4031,Errors in deterministic wireless fingerprinting systems for localisation,"Fingerprinting is a technique that records vectors of received power from several transmitters, and later matches these to a new measurement to position the new user. This paper examines data used in earlier fingerprinting experiments in a WiFi network to characterize the eventual positioning errors. The implied relationship between real distance and 'vector' distance between fingerprints is tested and found to be poor. However, because fingerprinting algorithms use nearest neighbour techniques, these nearby fingerprints were examined and found to be better behaved.",2008,0, 4032,Fault Localization for Firewall Policies,"Firewalls are the mainstay of enterprise security and the most widely adopted technology for protecting private networks. Ensuring the correctness of firewall policies through testing is important. In firewall policy testing, test inputs are packets and test outputs are decisions. Packets with unexpected (expected) evaluated decisions are classified as failed (passed) tests. Given failed tests together with passed tests, policy testers need to debug the policy to detect fault locations (such as faulty rules). Such a process is often time-consuming. To help reduce the effort of detecting fault locations, we propose an approach to reduce the number of rules for inspection based on information collected during the evaluation of failed tests. Our approach ranks the reduced rules to decide which rules should be inspected first. We performed experiments on applying our approach. The empirical results show that our approach can reduce the number of rules required for inspection in fault localization by 56%.",2009,0, 4033,Minimizing the error of time difference of arrival method in mobile networks,"Estimating the position of a mobile set is of great importance in new mobile services. However, in most cases, the positioning error should be less than 100 meters. This accuracy is hard to reach, especially in urban areas. The main problem is that there are a lot of obstacles, like buildings, between the BTS and the mobile set. Thus the time measured between the BTS and the mobile set is somewhat greater than the time it takes the wave to travel directly between the two points. This paper introduces an optimized solution for TDOA as one of the most efficient ways of finding the location of a mobile phone. Considering the standards and limitations of both GSM and UMTS, the authors present a solution for optimizing mobile phone positioning. In this new method, the positioning is done via common TDOA methods. Moreover, the non-linear effects of the environment, such as non-line-of-sight (NLOS) error, are minimized by comparing the measured data with pre-measured points to reduce the error in finding the MS location. The simulations show that, compared to direct measurement, the optimized protocol reduces the error of the location method by 33%.",2005,0, 4034,Efficiency of probabilistic testability analysis for soft error effect analysis: A case study,"Evaluating the potential functional effects of soft errors (single or multiple bit-flips) in digital circuits is becoming a critical design constraint. The usual approaches, based on fault injection techniques, suffer several limitations. New approaches, better suited to large circuits with complex workloads, are therefore desirable.
An innovative approach was recently proposed, based on probabilistic testability analysis. This paper compares the presented results with results obtained from extensive fault injection campaigns.",2009,0, 4035,"Fighting bugs: remove, retry, replicate, and rejuvenate","Even if software developers don't fully understand the faults or know their location in the code, software rejuvenation can help avoid failures in the presence of aging-related bugs. This is good news because reproducing and isolating an aging-related bug can be quite involved, similar to other Mandelbugs. Moreover, monitoring for signs of software aging can even help detect software faults that were missed during the development and testing phases. If, on the other hand, a developer can detect a specific aging-related bug in the code, fixing it and distributing a software update might be worthwhile. In the case of the Patriot missile-defense system, a modified version of the software was indeed prepared and deployed to users. It arrived at Dhahran on 26 February 1991 - a day after the fatal incident.",2007,0, 4036,Minimizing Latency in Fault-Tolerant Distributed Stream Processing Systems,"Event stream processing (ESP) applications target the real-time processing of huge amounts of data. Events traverse a graph of stream processing operators where the information of interest is extracted. As these applications gain popularity, the requirements for scalability, availability, and dependability increase. In terms of dependability and availability, many applications require a precise recovery, i.e., a guarantee that the outputs during and after a recovery would be the same as if the failure that triggered recovery had never occurred. Existing solutions for precise recovery induce prohibitive latency costs, either by requiring continuous checkpointing or logging (in a passive replication approach) or perfect synchronization between replicas executing the same operations (in an active replication approach). We introduce a novel technique to guarantee precise recovery for ESP applications while minimizing the latency costs as compared to traditional approaches. The technique minimizes latencies via speculative execution in a distributed system. In terms of scalability, the key component of our approach is a modified software transactional memory that provides not only the speculation capabilities but also optimistic parallelization for costly operations.",2009,0, 4037,Time Series Forecasting Model with Error Correction by Structure Adaptive Support Vector Machine,"Accurate power load forecasting, especially short-term load forecasting, is of great significance given today's energy shortage. In power systems, due to the complexity of the historical load data and the influence of many random, uncertain factors, the observed historical data show both linear and nonlinear characteristics. A hybrid methodology is proposed to take advantage of the unique strengths of the autoregressive integrated moving average (ARIMA) model and support vector machine (SVM) networks in linear and nonlinear modeling; it is an error correction method that creates synergies in the overall forecasting process. The ARIMA model is used to generate a linear forecast in the first stage, and then an SVM is developed as a nonlinear pattern recognizer to correct the estimation error in the ARIMA forecast. The effectiveness of the hybrid model has been tested on one example.
The experimental results show that the hybrid model can improve the forecasting accuracy more effectively than ARIMA-BP.",2008,0, 4038,Probing Human Error as Causal Factor in Incidents with Major Accident Potential,"Existing literature suggests that human error is directly linked to the majority of industrial incidents and accidents. The increasing use of computers and software in safety-critical systems makes systems more integrated and dependent, and this can increase the risk if barrier integrity is reduced. This paper demonstrates how the Human Factors Assessment and Classification System (HFACS) can be applied to analyze incidents with major accident potential. One report from a Norwegian offshore Oil & Gas incident with major accident potential was analyzed to classify the causal factors for the purpose of making a relative comparison of these factors. The results revealed that failures on the organizational level were the most prevalent, representing almost three-quarters of all causal factors, while unsafe acts represented fourteen percent. Thus, organizational factors appear essential in risk management, and failures on the organizational level require more attention to mitigate risks in the future.",2009,0, 4039,Towards a control-theoretical approach to software fault-tolerance,"Existing schemes for software fault-tolerance are based on the ideas of redundancy and diversity. Although experimentally validated, existing fault-tolerant schemes are mainly ad hoc and lack a theoretically rigorous foundation. They substantially increase software complexity and incur high development costs. They also pose challenges for real-time concurrent software systems, where timing requirements may be stringent and faults in concurrent processes can propagate to one another. In this work we treat software fault-tolerance as a robust supervisory control (RSC) problem and propose an RSC approach to software fault-tolerance. In this approach the software component under consideration is treated as a controlled object that is modeled as a generalized Kripke structure or finite-state concurrent system, and an additional safety guard, or supervisor, is synthesized and coupled to the software component to guarantee the correctness of the overall software system, which is intended to satisfy a temporal logic (CTL*) formula even if faults occur in the software component. The proposed RSC approach requires only a single version of the software and rests on a theoretically rigorous foundation. It is essentially an approach of model construction and is thus complementary to the approach of model checking. It is a contribution to the theory of supervisory control and software fault-tolerance, as well as to the emerging area of software cybernetics that explores the interplay between software and control.",2004,0, 4040,Image Dependent Spatial Shape Error Concealment for Multiple Shapes,"Existing shape error concealment techniques consider a single closed shape at a time. However, in many applications there are commonly multiple shapes in a scene/object, and they are not necessarily closed. Existing techniques attempt to conceal errors whenever they find a decoded shape is broken. However, in a multiple-shape context, this is not that straightforward, as it becomes crucial to determine whether a segment is broken due to data losses or is just the beginning of a new segment.
This paper presents an image-dependent shape error concealment technique for multiple shapes (ISCM) which exploits textural information, using a rubberband function, to determine the proper localisation of the shape errors and to recover them effectively. Comparative experimental analysis confirms both the superior error concealment performance and the improved robustness of the ISCM technique.",2009,0, 4041,Improved message logging versus improved coordinated checkpointing for fault tolerant MPI,"Fault tolerance is a very important concern for critical high performance applications using the MPI library. Several protocols provide automatic and transparent fault detection and recovery for message passing systems, with different impacts on application performance and on the capacity to tolerate a high fault rate. In a recent paper, we demonstrated that the main differences between pessimistic sender-based message logging and coordinated checkpointing are: 1) the communication latency and 2) the performance penalty in case of faults. Pessimistic message logging increases the latency, due to additional blocking control messages. When faults occur at a high rate, coordinated checkpointing implies a higher performance penalty than message logging due to a higher stress on the checkpoint server. We extend this study to improved versions of message logging and coordinated checkpoint protocols which respectively reduce the latency overhead of pessimistic message logging and the server stress of coordinated checkpointing. We detail the protocols and their implementation in the new MPICH-V fault tolerant framework. We compare their performance against the previous versions, and we compare the novel message logging protocols against the improved coordinated checkpointing one using the NAS benchmark on a typical high performance cluster equipped with a high speed network. The contribution of this work is twofold: a) an original message logging protocol and an improved coordinated checkpointing protocol and b) the comparison between them.",2004,0, 4042,A Fault Detection Mechanism for Fault-Tolerant SOA-Based Applications,"Fault tolerance is an important capability for SOA-based applications, since it ensures the dynamic composition of services and improves the dependability of SOA-based applications. Fault detection is the first step of fault tolerance, so this paper focuses on fault detection and puts forward a fault detection mechanism, based on the theories of artificial neural networks and probability change-point analysis rather than static service descriptions, to detect the services that fail to satisfy performance requirements at runtime. This paper also gives a reference model of the fault-tolerance control center of the enterprise service bus.",2007,0, 4043,A Classification-Based Approach to Fault-Tolerance Support in Parallel Programs,"Fault tolerance is an important requirement for long-running parallel programs. This paper presents a different approach to fault-tolerance support in message-passing parallel programs based on their structural and behavioral characteristics, commonly known as patterns. A classification of these patterns and their applicable fault-tolerance strategies is intended to help an application developer incorporate appropriate fault-tolerance strategies into an application. Fault-tolerance strategies for two of the patterns are discussed, and one specific strategy is elaborated and analyzed.
The presented strategies have been incorporated into a fault-tolerance support framework called FT-PAS. One objective of the framework is to separate the fault-tolerance-related details from an application developer's main objectives (separation of concerns). The paper presents the additional key features of the framework and concludes with a discussion of current and future research directions.",2009,0, 4044,Position estimator including saturation and iron losses for encoder fault detection of doubly-fed induction machine,"Fault tolerance is gaining interest as a means to increase reliability and availability, for example in distributed energy systems. Doubly fed induction generators need rotor position information for high performance control. As part of a multi-sensor fault tolerant generator system, fault tolerance of the position sensor is presented, using a position estimator. A known position estimator is improved and thoroughly described, including the effects of machine saturation and iron losses. The fault detection and isolation is described, and the reconfiguration from encoder to sensorless operation is shown. Measurements are provided and show good and fast results. Steady state properties of encoder and sensorless operation are compared.",2008,0, 4045,"Fault-tolerant routing in a PRDT(2,1)-based NoC","Fault tolerance is one of the most dominant issues for NoC systems. This paper presents a new fault tolerant routing algorithm for the network topology PRDT(2,1). The proposed algorithm converts a fault region into a rectangular shape without disabling a large number of non-faulty nodes. It provides strong self-adaptability while utilizing as few virtual channels as possible. As a result, the connection remains unchanged as long as the network is not broken by fault regions. In addition, it has been shown that the proposed algorithm can guarantee the connectivity and deadlock-freedom of the network. Simulation results show that the proposed routing algorithm is capable of gracefully degraded operation.",2010,0, 4046,Emulation of Transient Software Faults for Dependability Assessment: A Case Study,"Fault Tolerance Mechanisms (FTMs) are extensively used in software systems to counteract software faults, in particular faults that manifest transiently, namely Mandelbugs. In this scenario, Software Fault Injection (SFI) plays a key role in the verification and improvement of FTMs. However, no previous work has investigated whether SFI techniques are able to emulate Mandelbugs adequately. This is an important concern for assessing critical systems, since Mandelbugs are a major cause of failures, and FTMs are specifically tailored for this class of software faults. In this paper, we analyze an existing state-of-the-art SFI technique, namely G-SWFIT, in the context of a real-world fault-tolerant system for Air Traffic Control (ATC). The analysis highlights limitations of G-SWFIT regarding its ability to emulate the transient nature of Mandelbugs, because most of the injected faults are activated in the early phase of execution, and they deterministically affect process replicas in the system. We also observe that G-SWFIT leaves 35% of the states of the considered system untested. Moreover, by means of an experiment, we show how emulation of Mandelbugs is useful for improving SFI. In particular, we emulate concurrency faults, which are a critical sub-class of Mandelbugs, in a fully representative way.
We show that proper fault triggering can increase the confidence in FTMs' testing, since it is possible to reduce the amount of untested states down to 5%.",2010,0, 4047,The Design and Implementation of the Computer Aided Fault Tree Analysis System Based on UML and J2EE Technology,"Fault Tree Analysis (FTA) is a research hotspot both in academia and in modern industry. The FTA method is a means of evaluating system reliability and safety; it is used to predict and examine faults, analyze the weak links of a system, guide operation and maintenance, and optimize system design. With the advance of computer science, computer-aided technology has become an important approach to fault tree analysis. In order to explore an effective FTA model for the modern industrial domain in China, this paper studies FTA and its development, analyzes the problems existing in FTA as well as the deficiencies in implementing it, and proposes an integrated FTA system - the Computer Aided FTA System (CAFTA). Finally, the paper describes the process of designing and implementing the system based on UML and J2EE technology.",2009,0, 4048,Fault Recovery Designs for Processor-Embedded Distributed Storage Architectures with I/O-Intensive DB Workloads,"Fault recovery has become an essential capability for systems that process large data-intensive workloads. Processor-embedded distributed storage architectures offload user-level processing from the host servers to the peripherals. Our earlier work investigated the performance benefits of such architectures for disk- and MEMS-based smart storage devices. In this paper, we focus on the issue of fault recovery. We propose recovery schemes for TPC-H based workloads, and evaluate several recovery scenarios applicable to both disk- and MEMS-based smart storage architectures.",2005,0, 4049,Efficient multiway graph partitioning method for fault section estimation in large-scale power networks,"Fault section estimation (FSE) of large-scale power networks can be implemented effectively by the distributed artificial intelligence (AI) technique. In this paper, an efficient multiway graph partitioning method is proposed to partition large-scale power networks into the desired number of connected subnetworks with balanced working burdens for performing FSE. The number of elements at the frontier of each subnetwork is also minimised in the method. The suggested method consists of three basic steps: forming the weighted depth-first-search tree of the studied power network; partitioning the network into connected, balanced subnetworks; and minimising the number of frontier nodes of the subnetworks through iterations so as to reduce the interaction of FSE in adjacent subnetworks. The relevant mathematical model and partitioning procedure are presented. The method has been implemented with the sparse storage technique and tested on the IEEE 14-bus, 30-bus and 118-bus systems, respectively. Computer simulation results show that the proposed multiway graph partitioning method is effective for large-scale power system FSE using the distributed AI technique",2002,0, 4050,FPGA-based fault simulator,"Fault simulation allows evaluation of the reliability properties of developed designs. The complexity of designs is growing, which makes software-based simulation methods unusable. Hardware-based fault simulation can bring the desired speedup. Partial dynamic reconfiguration is one way of injecting faults.
Reconfiguration time is often considered a main weakness of this technique. This paper describes an FPGA-based fault simulator in which reconfiguration is performed by an embedded processor core, which eliminates this drawback. Error-detection-code based CED circuits are used in the experiments; the results of the experiments are reported",2006,0, 4051,Instruction-Level Fault Tolerance Configurability,"Fault tolerance (FT) is becoming increasingly important in computing systems. FT features are based on some form of redundancy, which adds a significant cost to a system, either increasing the required amount of hardware resources or degrading performance. To enable a user to choose between stronger FT or performance, some schemes have been proposed which can be configured for each application to use the available redundancy to increase either reliability or performance. We propose instruction-level, rather than application-level, configurability of this kind, since some applications (for example, multimedia) can have different reliability requirements for their different parts. We propose to apply weaker (or no) FT techniques to the less critical parts. This yields a certain time or resource gain, which can be used to apply stronger FT techniques to the more critical parts, thereby increasing the overall FT. We show how some existing FT techniques can be adapted to support instruction-level FT configurability, and how a programmer can specify the desired FT of particular instructions or blocks of instructions in assembly or in a high-level programming language. In some cases the compiler can assign the FT level to instructions automatically. Experimental results demonstrate that reducing the FT of non-critical instructions can lead to significant performance gains compared to a redundant execution of all the instructions. The fault coverage of this scheme is also evaluated, demonstrating that it is very application-specific. For some applications the fault coverage is quite acceptable, but unacceptable for others.",2007,0, 4052,Instruction Precomputation for Fault Detection,"Fault tolerance (FT) is becoming increasingly important in computing systems. This work proposes and evaluates the instruction precomputation technique to detect hardware faults. Applications are profiled off-line, and the most frequent instruction instances with their operands and results are loaded into the precomputation table when executing. The precomputation-based error detection technique is used in conjunction with another method that duplicates all instructions and compares the results. In the precomputation-enabled version, whenever possible, an instruction compares its result with a precomputed value, rather than executing twice. Another precomputation-based scheme does not execute the precomputed instructions at all, assuming that precomputation provides sufficient reliability. Precomputation improves the fault coverage (including permanent and some other faults) and performance of the duplication method. The proposed method is compared to an instruction memoization-based technique. The performance improvements of the precomputation- and memoization-based schemes are comparable, while precomputation has better long-lasting fault coverage and is considerably cheaper.",2009,0, 4053,Safety verification of a fault tolerant reconfigurable autonomous goal-based robotic control system,"Fault tolerance and safety verification of control systems are essential for the success of autonomous robotic systems.
A control architecture called Mission Data System (MDS), developed at the Jet Propulsion Laboratory, takes a goal-based control approach. In this paper, a method for converting goal network control programs into linear hybrid systems is developed. The linear hybrid system can then be verified for safety in the presence of failures using existing symbolic model checkers. An example task is simulated in MDS and successfully verified using HyTech, a symbolic model checking software for linear hybrid systems.",2007,0, 4054,Crash fault detection in celerating environments,"Failure detectors are a service that provides (approximate) information about process crashes in a distributed system. The well-known ""eventually perfect"" failure detector, ◊P, has been implemented in partially synchronous systems with unknown upper bounds on message delay and relative process speeds. However, previous implementations have overlooked an important subtlety with respect to measuring the passage of time in ""celerating"" environments, in which absolute process speeds can continually increase or decrease while maintaining bounds on relative process speeds. Existing implementations either use action clocks, which fail in accelerating environments, or use real-time clocks, which fail in decelerating environments. We propose the use of bichronal clocks, which are a composition of action clocks and real-time clocks. Our solution can be readily adopted to make existing implementations of ◊P robust to process celeration, which can result from hardware upgrades, server overloads, denial-of-service attacks, and other system volatilities.",2009,0, 4055,A fault diagnosis model for embedded software based on FMEA/FTA and bayesian network,"Failure modes and effects analysis (FMEA) and fault tree analysis (FTA) are two effective fault analysis techniques, and their integration is applied widely in many industrial domains. But when they are used for fault diagnosis, their inference ability is limited; in particular, they are not well suited to using fault-related symptoms for posterior inference. To solve this problem, this paper combines FMEA and FTA based on a Bayesian Network (BN) to form a fault diagnosis analysis model. A case study shows that this model has good FMEA/FTA fusion ability and posterior inference ability for embedded software fault diagnosis.",2009,0, 4056,FPGA Implementation of Pipelined Architecture for Optical Imaging Distortion Correction,"Fast and efficient operation is a major challenge for complex image processing algorithms executed in hardware. This paper describes novel algorithms for correcting optical geometric distortion in imaging systems, together with the architectures used to implement them in FPGA-based hardware. The proposed architecture produces a fast, almost real-time solution for the correction of image distortion, implemented in VHDL on a single Xilinx XC3S1000-4 FPGA device. Using dedicated SRLC16 shift registers to build the synchronous FIFOs is an ideal utilization of the available device resources. The experimental results show that the barrel distortion can be quickly corrected with a very low residual error.
The design can also be applied to other image processing algorithms in optical systems",2006,0, 4057,Spectrum-Based Multiple Fault Localization,"Fault diagnosis approaches can generally be categorized into spectrum-based fault localization (SFL, correlating failures with abstractions of program traces) and model-based diagnosis (MBD, logic reasoning over a behavioral model). Although MBD approaches are inherently more accurate than SFL, their high computational complexity prohibits application to large programs. We present a framework to combine the best of both worlds, coined BARINEL. The program is modeled using abstractions of program traces (as in SFL) while Bayesian reasoning is used to deduce multiple-fault candidates and their probabilities (as in MBD). A particular feature of BARINEL is the usage of a probabilistic component model that accounts for the fact that faulty components may fail intermittently. Experimental results on both synthetic and real software programs show that BARINEL typically outperforms current SFL approaches at a cost complexity that is only marginally higher. In the context of single faults this superiority is established by formal proof.",2009,0, 4058,Fault diagnosis of pneumatic actuator using adaptive network-based fuzzy inference system models and a learning vector quantization neural network,"Fault diagnosis in pneumatic actuators is a very difficult task due to their inherent high nonlinearity and uncertainty. Developing models of nonlinear systems with adaptive network-based fuzzy inference systems (ANFISs) has recently received attention. Models that are built upon ANFISs overcome the disadvantages of ordinary fuzzy modeling and can be very suitable for generalized modeling of nonlinear plants. We set up a group of ANFIS models corresponding to various situations of a pneumatic actuator that are relatively common in practice, including normal, low and high supply pressure. Considering that a learning vector quantization (LVQ) neural network has a powerful classification ability, we then utilize an LVQ neural network as a fault diagnosis scheme, abstracting the data of the ANFIS models as the input vectors for nonlinear plants. The effectiveness is demonstrated via experiments on a pneumatic actuator.",2004,0, 4059,Built-in fault injection in hardware - the FIDYCO example,"Experimental fault-injection plays a key role in the process of fault tolerance validation. In this paper we discuss the limitations of conventional experimental setups and investigate how highly complex FPGAs can aid in overcoming these. Based on a thorough analysis of the potential aims of fault-injection experiments, we derive a set of conditions for the design of an FPGA-based fault-injection toolset. We present the fault-injection tool FIDYCO as an example implementation of this concept. Our FPGA-based toolset has three main advantages: First, the availability of a physical target system allows experiments to be performed in real time. Second, the programmable nature of the FPGA target platform facilitates controllability and observability comparable to that of simulation-based approaches.
Third, the tight integration of the fault injector and the device under test on the same hardware platform allows for higher precision of fault injection and diagnostic resolution.",2004,0, 4060,Fault current limiter allocation and sizing in distribution system in presence of distributed generation,"Exposure of the distribution network to distributed generation (DG) increases the fault current level. This gives rise to fault currents that are normally greater than the interrupting capability of breakers and fuses. The introduction of solid state fault current limiters (SSFCLs) is an effective way of suppressing such high short-circuit fault currents in distribution systems. In this paper, the effect of the proposed SSFCL on the reduction of fault current is investigated. Then a genetic algorithm is employed to search for the optimal number, locations and size of the proposed SSFCLs. The numerical and simulation results show the efficiency of the proposed GA-based FCL allocation and sizing method in terms of minimizing the distribution protection system cost.",2009,0, 4061,A novel error recovery scheme for H.264 video and its application in conversational services,"Extensive research has been conducted to enhance the robustness of Internet video transmission. It has been found that a feedback channel is an effective tool for error recovery and is most suitable for interactive and conversational environments. Multiframe prediction is another useful approach to enhance both coding efficiency and robustness. In this paper, we propose an algorithm which combines a feedback channel and multiframe prediction to achieve rapid error recovery for H.264 video. Simulation results show that our scheme can best utilize the feedback information. At the end of this paper, its application in conversational services such as Internet videophone is discussed.",2004,0, 4062,Basic vibration signal processing for bearing fault detection,"Faculty in the College of Engineering at the University of Alabama developed a multidisciplinary course in applied spectral analysis that was first offered in 1996. The course is aimed at juniors majoring in electrical, mechanical, industrial, or aerospace engineering. No background in signal processing or Fourier analysis is assumed; the requisite fundamentals are covered early in the course and followed by a series of laboratories in which the fundamental concepts are applied. In this paper, a laboratory module on fault detection in rolling element bearings is presented. This module is one of two laboratory modules focusing on machine condition monitoring applications that were developed for this course. Background on the basic operational characteristics of rolling element bearings is presented, and formulas are given for the calculation of the characteristic fault frequencies. The shortcomings of conventional vibration spectral analysis for the detection of bearing faults are examined in the context of a synthetic vibration signal that students generate in MATLAB. This signal shares several key features of vibration signatures measured on bearing housings. Envelope analysis and the connection between bearing fault signatures and amplitude modulation/demodulation are explained. Finally, a graphically driven software utility (a set of MATLAB m-files) is introduced. This software allows students to explore envelope analysis using measured data or the synthetic signal that they generated.
The software utility and the material presented in this paper constitute an instructional module on bearing fault detection that can be used as a stand-alone tutorial or incorporated into a course.",2003,0, 4063,Issues of Fail-Over Switching for Fault-Tolerant Ethernet Implementation,"Fail-over switching is a key factor in approaches to fault-tolerant Ethernet (FTE) implementation. This paper presents some key points in the fail-over switching process of the popular two-port FTE. A new fail-over switching algorithm for multiple ports is then presented, based on our new hybrid approach. A simulation tool has been developed to aid identification of the fail-over switching time. Recommendations on the selection of the design parameters for FTE implementation are provided. These results are directly applicable to conventional hardware-based approaches.",2009,0, 4064,A Grouping-Based Strategy to Improve the Effectiveness of Fault Localization Techniques,"Fault localization is one of the most expensive activities of program debugging, which is why recent years have witnessed the development of many different fault localization techniques. This paper proposes a grouping-based strategy that can be applied to various techniques in order to boost their fault localization effectiveness. The applicability of the strategy is assessed over two techniques - Tarantula and a radial basis function neural network-based technique - across three different sets of programs (the Siemens suite, grep and gzip). The results suggest that the grouping-based strategy is capable of significantly improving fault localization effectiveness and is not limited to any particular fault localization technique. The proposed strategy does not require any information beyond what was already collected as input to the fault localization technique, and does not require the technique to be modified in any way.",2010,0, 4065,A BBN-Based Approach for Fault Localization,"Fault localization techniques help programmers find the locations and causes of faults and accelerate the debugging process. The relation between a fault and a failure is usually complicated, making it hard to deduce how a fault causes a failure. Analysis of variance is broadly used in many related studies. In this paper, a Bayesian belief network (BBN) for fault reasoning is constructed based on the suspicious pattern, whose nodes consist of the suspicious pattern and the callers of the methods that constitute the suspicious pattern. The construction algorithm of the BBN, the associated probabilities, and the formula for the conditional probabilities of each arc of the BBN are defined. A reasoning algorithm based on the BBN is proposed, through which the faulty module can be found and the probability of each module containing the fault can be calculated. An evaluation method is proposed, and experiments were executed to evaluate this fault localization technique. The data demonstrate that this technique can achieve an average accuracy of 0.761 and an average recall of 0.737. This fault localization technique is very effective and has high practical value.",2009,0, 4066,Modern fault location technique for the utility,"Fault location (FL) is one of the most important diagnostic tasks in transmission and distribution networks.
Due to the long distances of power lines, the search and repair can take a long time, while modern microprocessor technology can help automate the FL process and give a precise (up to a span) location of the fault. The fault extinction time is thus reduced dramatically. In this report, the basic sources of FL inaccuracy in existing techniques are discussed. New algorithms and solutions are offered in order to reach higher accuracy of the fault position estimate in application. The proposed FL and power system modeling techniques have been tested and put into operation, which confirmed their conformance to the expected characteristics. The application issues of FL implementation in autonomous intelligent electronic devices (IEDs) are outlined on the basis of the response of a large utility.",2009,0, 4067,Maximum Throughput Obtaining of IEEE 802.15.3 TDMA Mechanism under Error-Prone Channel,"IEEE 802.15.3 efficiently uses time division multiple access (TDMA) to support the quality of service (QoS) for multimedia traffic or the transfer of multi-megabyte data for music and image files. In the TDMA mechanism, for an allocated channel time (channel time allocation, CTA) and a known bit error rate of the channel, the throughput can be maximized by dynamically adjusting the frame size. In this paper a throughput model under non-ideal channel conditions is formulated, from which the adaptive frame size can be calculated. In addition a feasible implementation of this adaptive scheme is presented. The mathematical analysis and simulation results demonstrate the effectiveness of our adaptive scheme.",2009,0, 4068,Fault Tolerance via Diversity for Off-the-Shelf Products: A Study with SQL Database Servers,"If an off-the-shelf software product exhibits poor dependability due to design faults, then software fault tolerance is often the only way available to users and system integrators to alleviate the problem. Thanks to low acquisition costs, even using multiple versions of software in a parallel architecture, a scheme formerly reserved for a few highly critical applications, may become viable for many applications. We have studied the potential dependability gains from these solutions for off-the-shelf database servers. We based the study on the bug reports available for four off-the-shelf SQL servers plus later releases of two of them. We found that many of these faults cause systematic noncrash failures, a category ignored by most studies and standard implementations of fault tolerance for databases. Our observations suggest that diverse redundancy would be effective for tolerating design faults in this category of products. Only in very few cases would demands that triggered a bug in one server cause failures in another one, and there were no coincident failures in more than two of the servers. Use of different releases of the same product would also tolerate a significant fraction of the faults. We report our results and discuss their implications, the architectural options available for exploiting them, and the difficulties that they may present.",2007,0, 4069,Automatic Detection of Defects in Solar Modules: Image Processing in Detecting,"An image acquisition device that can capture infrared images of solar modules is designed using the principle of semiconductor electroluminescence, and image processing is applied in the detection system, which can automatically detect defects including black pieces, fragmentation, broken grid, cracks and so on.
First, the defects in the infrared image are classified; then the defect types and locations are marked out after filtering, single-chip division, gray-scale transformation, binarization, and feature description and extraction; finally, the results are fed back to the database. This method increases the types of defects that can be identified (such as invisible cracks), which are difficult to detect by manual testing; it can also eliminate the human errors that manual testing may introduce, reduce labor costs and defect rates, and further improve detection efficiency and production-line productivity.",2010,0, 4070,Diagnosis of Bridging Defects Based on Current Signatures at Low Power Supply Voltages,"Improvement of diagnosis methodologies is a key factor for fast failure analysis and yield improvement. As bridging defects are a common defect type in CMOS circuits, diagnosing this class of defect becomes relevant for present and future technologies. Bridging defects cause two additional current components, the bridge and the downstream current. This work presents the effect of the downstream current on current signatures and its impact on the diagnosis of such defects. The authors demonstrate that the impact of the downstream current is minimized at low power supply (Vdd) values. Therefore, current measurements at low power supply voltages are proposed to enhance bridge diagnosis. Experimental evidence of this behaviour is presented for real devices. Furthermore, current signatures measured at very low supply voltages are used for the diagnosis of fifteen failing 0.18 μm technology devices, which are demonstrated to contain a bridging defect.",2007,0, 4071,An analysis of the fault correction process in a large-scale SDL production model,"Improvements in the software development process depend on our ability to collect and analyze data drawn from various phases of the development life cycle. Our design metrics research team was presented with a large-scale SDL production model plus the accompanying problem reports that began in the requirements phase of development. The goal of this research was to identify and measure the occurrences of faults and the efficiency of their removal by development phase in order to target software development process improvement strategies. The number and severity of problem reports were tracked by development phase and fault class. The efficiency of the fault removal process using a variety of detection methods was measured. Through our analysis of the system data, the study confirms that catching faults in the phase of origin is an important goal. The faults that migrated to later phases were on average eight times more costly to repair. The study also confirms that upstream faults are the most critical faults; more importantly, it identifies detailed design as the major contributor of faults, including critical faults.",2003,0, 4072,Fault tolerant control for unstable systems: a linear time varying approach,"In (passive) fault tolerant control design, the objective is to find a fixed compensator which maintains a suitable performance - or at least stability - in the event that a fault should occur. A major theoretical obstacle to obtaining this objective is that even if the system models corresponding to the occurrence of various faults are simultaneously stabilizable by a linear, time-invariant compensator, this compensator might have to be of a very high order, as shown in a recent publication.
In this paper, we propose a design procedure for a time-varying compensator which overcomes this obstacle for any finite number of faults with a controller order of no more than the plant order. The performance of this compensator might be poor, but a heuristic procedure for improving the performance is also shown, and an example demonstrates that this improvement can be truly significant.",2004,0, 4073,An Intelligent System for Wrong Data Detection and Correction for Demand Forecasting Purpose,"In a company, the manual checking of load data resulting from measurements is a repetitive process. It takes a long time and is subject to errors, since after one or two hours performing that task, it is difficult for the licensee's technician to identify the existing deviations in the large databases. Another aspect is that the data in those bases are usually distributed through tables on the screen. This fact renders the task more complicated due to the lack of a better notion of the data as sets, with possible rearrangements, which would be available if those same data were displayed as graphics. The aim of the current work is to replace the load monitoring previously done through manual checking with a computer system specially conceived to meet the needs of Energias Brasil Bandeirante. Because of the proposed methodological development and the implemented computer software, the data checking is practically automatic, eliminating errors that could not otherwise be identified. The computer software brings a new paradigm for checking flaws in the data, making it possible to relate them in many dimensions. The software performs a repetitive task based on pattern recognition techniques. Moreover, the program indicates possible flaws in measurement, easing the correction of figures and helping in the correct measurement of the company's load. This paper aims to present the methodology that was developed to automate the load monitoring and its computer implementation",2006,0, 4074,Hybrid based approach for fault tolerance in a multi-agent system,"In a complex manufacturing environment the ability to handle unexpected events is of crucial importance. Due to their distributed nature, multi-agent systems provide a convenient way to enable failure tolerance. This paper presents a multi-agent architecture where the agents are combined with an ontology and an underlying low level based on the IEC 61499 standard in order to enhance fault tolerance in such complex systems. We propose a hybrid approach for fault tolerance, where different types of agents use different heartbeat mechanisms to detect exceptional events. The comparison with other related approaches shows its applicability while at the same time reducing the message flow.",2009,0, 4075,Experimental Results with Forward Erasure Correction and Real Video Streaming in Hybrid Wireless Networks,"In a heterogeneous MANET, based on wireless LANs linked together by satellite, the overall channel efficiency is impaired by multiple effects, because of multipath fading in the terrestrial segment and atmospheric fading on the satellite link. In this paper we address this issue by applying forward erasure correction codes (FZC) to MPEG-4 video sequences exchanged by the hosts of a hybrid network, made of a satellite link and a wireless LAN using 802.11b devices. A standard video streaming application runs on one end of the satellite link while, at the other end, a wireless ad hoc network receives the multicast video stream.
This work aims at demonstrating the improvement in the quality of service (QoS) of the video transmitted in the hybrid network. The main parameters measured are the packet loss, the delivery delay, and the overhead in bandwidth occupancy imposed by the use of FZC. The received video is then evaluated by using a MOS (mean opinion score) procedure",2005,0, 4076,Tripping of wind turbines during a system fault,"In a power system with significant wind power penetration, the tripping of wind turbines during a system fault condition can be a major concern to the system operator. With wind turbines employing different under-voltage protection settings in the system, a fault on a transmission line can lead to the tripping of a large number of wind farms, depending on the voltages at the wind farm buses during the fault and the protection settings of the wind farms. As a result, an N-1 contingency may evolve into an N-(K+1) contingency, where K is the number of wind farms tripped due to low voltage conditions during a fault. The severity of such a condition depends on the output of the wind farms during a fault, and if it coincides with a high-wind, off-peak loading condition, it might lead to system instabilities. Therefore, it is important for the system operator to be aware of such limiting events during system operation and be prepared to take proper control actions. This can be achieved by incorporating the wind farm tripping status into the N-1 static security assessment procedure. In this paper a methodology based on the Z-bus algorithm is proposed to determine the tripping status of wind farms for a worst-case fault, so that it can be used in the contingency evaluation procedure. The proposed algorithm is implemented in MATLAB and tested on a 23-bus test system.",2008,0, 4077,A feasible schedulability analysis for fault-tolerant hard real-time systems,"Hard real-time systems require predictable performance despite the occurrence of failures. In this paper, the authors propose a new fault-tolerant priority assignment algorithm based on worst-case response time schedulability analysis for fault-tolerant hard real-time systems. This algorithm can be used, together with the schedulability analysis, to effectively improve system fault resilience when the two traditional fault-tolerant priority assignment policies cannot do so. Also, a fault-tolerant priority configuration search algorithm for the proposed analysis is presented.",2005,0, 4078,Motion correction for augmented fluoroscopy - application to liver embolization,"Hepatic embolization is a procedure designed to cut off the blood supply to liver tumors, either hepatocellular carcinomas (HCC) or metastases from other parts of the body. While it often serves as a palliative treatment, it can also be indicated as a precursor to liver resection and liver transplants. The procedure itself is conducted under fluoroscopic X-ray guidance. Contrast agent is administered to opacify the vasculature and to indicate the arterial branches that feed the treatment target. These supply routes are then blocked by embolic agents, cutting off the tumor's blood supply. While methods exist to enhance fluoroscopic images and reduce the dependency on contrast agent, they are typically confounded by patient respiratory motion and are hence not effective for abdominal interventions.
This paper presents an appearance-based tracking algorithm that quickly and accurately compensates for the liver's bulk motion due to respiration, thereby enabling the application of fluoroscopic augmentations (i.e. image overlays) for hepatic embolization procedures. To quantify the accuracy of our algorithm, we manually identified vascular and artificial landmarks in fluoroscopy sequences acquired from three patients during free breathing. The average post-motion-compensation landmark misalignment was 1.9 mm, with the maximum landmark misalignment not exceeding 5.5 mm.",2008,0, 4079,M2 meter as a part of closed-loop adaptive optical system for high-power laser beam correction,"Here, we demonstrate how a closed-loop adaptive optical system can be used to obtain a well-focused beam. A bimorph mirror is used as the wavefront corrector. The closed-loop software analyses the focal spot that is measured by the M2 meter and then calculates the voltages to be applied to the mirror electrodes. This adaptive system can correct the low-order, slowly changing aberrations without any measurements of the wavefront. The optimal correction of the high-power beam aberrations can be found by the use of genetic and hill-climbing algorithms.",2005,0, 4080,Heterogeneous Error Protection of H.264/AVC Video Using Hierarchical 16-QAM,"Heterogeneous error protection (HEP) of H.264/AVC coded video is investigated using hierarchical quadrature amplitude modulation (HQAM), which takes into consideration the non-uniformly distributed importance of intracoded frames (I-frames) and predictive coded frames (P-frames) as well as the sensitivity of the coded bitstream to transmission errors. The HQAM constellations are used to give different degrees of error protection to the most important information of the video content. The performance of the transmission system is evaluated under additive white Gaussian noise (AWGN). The simulation results indicate that the strategy produces higher quality reconstructed video data compared with uniform protection.",2009,0, 4081,A potentially significant on-wafer high-frequency measurement calibration error,"High accuracy radio-frequency (RF) measurements typically require a calibration to remove the undesired effects of the measurement apparatus. The calibration consists of measuring some combination of known standards such as short, open, load, through, and delay. When measurements are performed on-wafer for silicon RF integrated circuits (RFICs), a two-step calibration/de-embedding technique is typically used. First, the measurement system is calibrated to a reference plane located at the probe tips through measurement of calibration standards fabricated on an impedance-standard substrate. Second, on-wafer de-embedding standards are measured in an attempt to shift the reference plane to the terminals of the device under test (DUT). While significant effort has gone into the development of improved on-wafer de-embedding schemes, discrepancies between actual and de-embedded data still exist. In this article, we first discuss a specific case (a spiral inductor on silicon) for which there was a significant discrepancy between measurement and analysis. The problem is found to be with the measurement. This problem is detailed, and a technique we call ""synthetic calibration"" is described that can be used with any electromagnetic (EM) analysis to quantify calibration error for any proposed set of calibration standards.
Due to the high expense and time required for wafer fabrication, it is important to successfully complete such a calibration validation prior to tape-out.",2005,0, 4082,Linux Highly Available (HA) Fault-Tolerant Servers,"High availability is becoming increasingly important as our businesses depend more and more on computers. Unfortunately, many off-the-shelf solutions for high availability (HA) are expensive and require expertise. This paper explains the design and implementation of an inexpensive high-availability solution for our business-critical needs that does not require expensive additional hardware or software. Along with a discussion of high availability, this paper also discusses the data integrity of the files and databases of the services that are to be made highly available. Using HTTP as the example service and MySQL as the database to be replicated for data integrity, a two-node cluster has been configured to implement the concept.",2007,0, 4083,The Effects of Traffic Patterns on Power Consumption of Torus-Connected NoCs with Faults,"High performance, reliability, transient and permanent fault-tolerance, and low energy consumption are major objectives of Networks-on-Chip (NoCs). Since different applications impose various communication requirements on NoCs, a number of research studies have revealed that the performance advantages of routing schemes are more noticeable in power consumption under different traffic patterns. However, the power consumption of NoCs has not been thoroughly investigated in the presence of faulty regions. To the best of our knowledge, this research is the first attempt to examine the effects of the most popular traffic patterns (i.e., Uniform, Local, and Hot-Spot) on the power consumption of NoCs in the presence of permanent faults.",2009,0, 4084,Fault analysis on double three-phase to six-phase converted transmission line,"High phase order transmission systems are being considered a viable alternative for increasing the power transmission capability of overhead electric power transmission over existing rights-of-way. This paper presents the fault analysis of a six-phase transmission system. In this context, fault analysis has been conducted on the Goudey-Oakdale 2-bus test system. The results of these investigations are presented in the form of typical time responses. PSCAD/EMTDC is used for the simulation studies",2005,0, 4085,Performability/Energy Tradeoff in Error-Control Schemes for On-Chip Networks,"High reliability against noise, high performance, and low energy consumption are key objectives in the design of on-chip networks. Recently some researchers have considered the impact of various error-control schemes on these objectives and on the tradeoff between them. In all these works performance and reliability are measured separately. However, we will argue in this paper that the use of error-control schemes in on-chip networks results in degradable systems; hence, performance and reliability must be measured jointly using a unified measure, i.e., performability. Based on the traditional concept of performability, we provide a definition for ""Interconnect Performability"". Analytical models are developed for interconnect performability and expected energy consumption. A detailed comparative analysis of the error-control schemes using the performability analytical models and SPICE simulations is provided, taking into consideration voltage swing variations (used to reduce interconnect energy consumption) and variations in wire length.
Furthermore, the impact of noise power and time constraints on the effectiveness of error-control schemes is analyzed.",2010,0, 4086,Redundant functional faults reduction by saboteurs synthesis [logic verification],"High-level descriptions of digital systems are perturbed by using high-level fault models in order to perform functional verification. Fault lists should be accurately created in order to avoid wasted time during ATPG and fault simulation. However, automatic fault injection tools can insert redundant faults which are not symptoms of design errors. Such redundant faults should be removed from the fault list before starting the verification session. This paper proposes an automatic strategy for high-level fault injection which removes redundant bit coverage faults. An efficient implementation of a bit coverage saboteur is proposed, which allows one to use synthesis for redundant fault removal. Experimental results highlight the effectiveness of the methodology. By using the proposed injection strategy, functional ATPG time is reduced and fault coverage is increased.",2003,0, 4087,Fault-tolerant defect prediction in high-precision foundry,"High-precision foundry production is subjected to rigorous quality controls in order to ensure a proper result. Such exams, however, are extremely expensive and only achieve good results in an a posteriori fashion. In previous works, we presented a defect prediction system that achieved a 99% success rate. Still, this approach did not sufficiently take into account the geometry of the casting part models, resulting in higher raw material requirements to guarantee an appropriate outcome. In this paper, we present a fault-tolerant software solution for casting defect prediction that is able to detect possible defects directly in the design phase by analysing the volume of three-dimensional models. To this end, we propose advanced algorithms to recreate the topology of each foundry part, analyze its volume and simulate the casting procedure, all of them specifically designed for a robust implementation on the latest graphics hardware that ensures an interactive design process.",2010,0, 4088,Soft error mitigation for SRAM-based FPGAs,"FPGA-based designs are more susceptible to single-event upsets (SEUs) compared to ASIC designs, since SEUs in the configuration bits of FPGAs result in permanent errors in the mapped design. Moreover, the number of sensitive configuration bits is two orders of magnitude greater than the number of user bits in typical FPGA-based circuits. In this paper, we present a highly reliable, low-cost mitigation technique which can significantly improve the availability of designs mapped into FPGAs. Experimental results show that, using this technique, the availability of an FPGA-mapped design can be increased to more than 99%.",2005,0, 4089,Automated SEU fault emulation using partial FPGA reconfiguration,"FPGAs are subject to SEU faults. Fault emulation methods are used to verify the behavior of the system in the presence of faults. In this paper an automated fault emulation approach is presented. An original, fully automated extraction of SEU fault sources is introduced, and the injection procedure for various types of faults in FPGA configuration and user memory is explained. Faults are injected during run-time using an embedded microprocessor. Only the resources affected by the faults are reconfigured.
A prototype fault injection tool was developed, and the approach is demonstrated on two different FPGA applications: microprocessor BIST and AES BIST.",2010,0, 4090,Designing fault-tolerant techniques for SRAM-based FPGAs,"FPGAs have become prevalent in critical applications in which transient faults can seriously affect the circuit's operation. We present a fault tolerance technique for transient and permanent faults in SRAM-based FPGAs. This technique combines duplication with comparison (DWC) and concurrent error detection (CED) to provide a highly reliable circuit while maintaining hardware, pin, and power overheads far lower than with classic triple-modular-redundancy techniques.",2004,0, 4091,On the development of transfer function method for fault identification in large power transformers on load,"Frequency Response Analysis (FRA) is now established as a proven tool for identifying some typical mechanical faults in power transformers. However, all FRA measurements so far have been off-line. An on-line technique is very much desired by the utilities, as this will prevent continued operation of a transformer that is carrying a fault towards the failure stage. In this paper, the challenges and some progress made so far toward the development of an on-line FRA technique are presented. The paper also reports on the development of a high frequency magnetic coupler for this purpose. Results of a sensitivity analysis on a large reactor coil are presented, showing significant but characteristic changes in the FRA signatures in response to parameter variations. Observations of the results obtained show that an on-line FRA tool is achievable, but there are still some issues that need to be addressed.",2000,0, 4092,Automatic detection of vowel pronunciation errors using multiple information sources,"Frequent pronunciation errors made by L2 learners of Dutch often concern vowel substitutions. To detect such pronunciation errors, ASR-based confidence measures (CMs) are generally used. In the current paper we compare and combine confidence measures with MFCCs and phonetic features. The results show that the best results are obtained by using MFCCs, then CMs, and finally phonetic features, and that substantial improvements can be obtained by combining different features.",2009,0, 4093,Novel flux linkage control of switched reluctance motor drives using observer and neural network-based correction methods,"From the perspective of control of switched reluctance motor (SRM) drives, current control is typically used as an inner-loop control method. In this paper, a novel observer- and artificial neural network (ANN)-based flux linkage control of SRM drives is presented and examined as an alternative approach to current control. The main advantage of flux linkage control is computational simplicity, due to the insensitivity of the controller gains to machine operating conditions, while current control depends on controller gains which are very sensitive to the self-inductance of SRMs. Flux linkage control needs a reliable flux linkage estimator for desirable control of SRMs. The integration method, which estimates flux linkage from measured phase voltages, currents and resistances, is commonly used, but it is sensitive to measurement error and white noise. Another way to measure the flux linkage is to use a look-up table, which is very sensitive to the input currents because it is based on current and position data. In this paper, a simple observer-based voltage and ANN-based current correction method is proposed to overcome the measurement error.
Furthermore, ANNs with two layers and five neurons are applied to produce an acceptable flux linkage estimate at each corrected current and measured position, instead of a look-up table. Finally, simulation results are presented to validate the method's performance.",2005,0, 4094,SoC Symbolic Simulation: a case study on delay fault testing,"Functional test methodologies such as software-based self-test appear to suit SoC delay fault testing well. State-of-the-art solutions in this area are quite far from maturity and few works consider software-based diagnosis for delay faults. In this paper we evaluate the benefits and costs of using symbolic simulation for SoCs, focusing in particular on embedded processor core testing. Symbolic simulation principles are key to enabling fast analysis and speeding up delay fault diagnosis; to cope with SoC behavior, the traditional 6-valued symbolic algebra was expanded in order to tackle X and Z logic states. As a case study we consider a large design including many core types and suitable DFT for performing a high-quality test without scan chains.",2008,0, 4095,Research on Fan Machinery Fault Diagnosis System Based on Fusional Neural Network,"A fusional neural network, founded on information fusion and artificial neural networks, is proposed in this paper. With this novel algorithm, the fan machinery fault diagnosis system model is built. Meanwhile, the output diagnosis values are loaded into the sample library of the neural network to form a self-adapting system. It is proved that the accuracy of the fault diagnosis conclusions can be improved by using the fusional neural network.",2008,0, 4096,Fault-Tolerant Permanent Magnet Motor Drive Topologies for Automotive X-By-Wire Systems,"Future automobiles will be equipped with x-by-wire systems to improve reliability, safety, and performance. The fault-tolerant capability of these systems is crucial due to their safety critical nature. Three fault-tolerant inverter topologies for permanent magnet brushless direct current motor drives suitable for automotive x-by-wire systems are analyzed. A figure of merit taking into account both cost and postfault performance is developed for these drives. Simulation results of the two most promising topologies for various inverter faults are presented. The drive topology with the highest postfault performance and cost effectiveness is built and evaluated experimentally.",2010,0,7651 4097,Transient Fault Prediction Based on Anomalies in Processor Events,"Future microprocessors will be highly susceptible to transient errors as the sizes of transistors decrease due to CMOS scaling. Prior techniques advocated full-scale structural or temporal redundancy to achieve fault tolerance. Though they can provide complete fault coverage, they incur significant hardware and/or performance cost. It is desirable to have mechanisms that can provide partial but sufficiently high fault coverage with negligible cost. To meet this goal, we propose leveraging speculative structures that already exist in modern processors. The proposed mechanism is based on the insight that when a fault occurs, it is likely that the incorrect execution would result in an abnormally higher or lower number of mispredictions (branch mispredictions, L2 misses, store set mispredictions) than a correct execution.
We design a simple transient fault predictor that detects the anomalous behavior in the outcomes of the speculative structures to predict transient faults.",2007,0, 4098,History Index of Correct Computation for Fault-Tolerant Nano-Computing,"Future nanoscale devices are expected to be more fragile and sensitive to external influences than conventional CMOS-based devices. Researchers predict that it will no longer be possible to test a device and then throw it away if it is found to be defective, as every circuit is expected to have multiple hard and soft defects. Fundamentally new fault-tolerant architectures are required to produce reliable systems that will survive manufacturing defects and transient faults. This paper introduces the History Index of Correct Computation (HICC) as a run-time reconfiguration technique for fault-tolerant nano-computing. This approach identifies reliable blocks on-the-fly by monitoring the correctness of their outputs and forwarding only good results, ignoring the results from unreliable blocks. Simulation results show that history-based TMR modules offer a better response to fault tolerance at the module level than do conventional fault-tolerant approaches when the faults are nonuniformly distributed among redundant units. A correct computation rate of 99% is achieved despite a 13% average injected fault rate, when one of the redundant units and the decision unit are fault-free as well as when both have a low injected fault rate of 0.1%. A correct computation rate of 89% is achieved when faults are nonuniformly distributed at an average fault rate of 11% and the fault rate in the decision unit is 0.5%. The robustness of the history-based mechanism is shown to be better than both majority voting and a Hamming detection and correction code.",2009,0, 4099,General review of fault diagnostic in wind turbines,"Global wind electricity-generating capacity increased by 28.7 percent in 2008 to 120,798 megawatts. This represents a twelve-fold increase from a decade ago, when world wind-generating capacity stood at less than 5 GW [1]. With wind becoming a key part of the electrical mix in Denmark (20% with 3.1 GW), Spain (8% with 10 GW), and Germany (6% with 18.4 GW), wind turbine reliability is having a bigger effect on overall electrical grid system performance and reliability [1]. This shows the impact of faults and downtime on the reliability of wind turbines, especially for offshore wind farms, which, although among the most environmentally friendly and efficient methods of generating electricity in the world, have high maintenance costs because of their remote location; these can amount to as much as 25 to 30% of the total energy production cost [2]. The aim of this paper is to present an overview of fault detection in wind turbines and to study and analyze the faults and their root causes. The paper also explores different techniques used in early fault detection to form base information for future work to build a general fault diagnostic scheme for wind turbines.",2010,0, 4100,A study of student strategies for the corrective maintenance of concurrent software,"Graduates of computer science degree programs are increasingly being asked to maintain large, multi-threaded software systems; however, the maintenance of such systems is typically not well-covered by software engineering texts or curricula.
We conducted a think-aloud study with 15 students in a graduate-level computer science class to discover the strategies that students apply, and to what effect, in performing corrective maintenance on concurrent software. We collected think-aloud and action protocols, and annotated the protocols for a number of behavioral attributes and maintenance strategies. We divided the protocols into groups based on the success of the participant in both diagnosing and correcting the failure. We evaluated these groups for statistically significant differences in these attributes and strategies. In this paper, we report a number of interesting observations that came from this study. All participants performed diagnostic executions of the program to aid program comprehension; however, the participants that used this as their predominant strategy for diagnosing the fault were all unsuccessful. Among the participants that successfully diagnosed the fault and displayed high confidence in their diagnosis, we found two commonalities. They all recognized that the fault involved the violation of a concurrent-programming idiom. And, they all constructed detailed behavioral models (similar to UML sequence diagrams) of execution scenarios. We present detailed analyses to explain the attributes that correlated with success or lack of success. Based on these analyses, we make recommendations for improving software engineering curricula by training students to apply these strategies more effectively.",2008,0, 4101,Computationally efficient algorithms for multiple fault diagnosis in large graph-based systems,"Graph-based systems are models wherein the nodes represent the components and the edges represent the fault propagation between the components. For critical systems, some components are equipped with smart sensors for on-board system health management. When an abnormal situation occurs, alarms will be triggered by these sensors. This paper considers the problem of identifying the set of potential failure sources from the set of ringing alarms in graph-based systems. However, the computational complexity of solving the optimal multiple fault diagnosis (MFD) problem is exponential. Based on Lagrangian relaxation and subgradient optimization, we present a heuristic algorithm to find approximately the most likely candidate fault set. A computationally cheaper heuristic algorithm - the primal heuristic - has also been applied to the problem so that real-time MFD in systems with several thousand failure sources becomes feasible in a fraction of a second. This paper also considers systems with asymmetric and multivalued alarms (tests).",2003,0, 4102,Dynamic programming and the graphical representation of error-correcting codes,"Graphical representations of codes facilitate the design of computationally efficient decoding algorithms. This is an example of a general connection between dependency graphs, as arise in the representations of Markov random fields, and the dynamic programming principle. We concentrate on two computational tasks: finding the maximum-likelihood codeword and finding its posterior probability, given a signal received through a noisy channel. These two computations lend themselves to a particularly elegant version of dynamic programming, whereby the decoding complexity is particularly transparent. We explore some codes and some graphical representations designed specifically to facilitate computation.
We further explore a coarse-to-fine version of dynamic programming that can produce an exact maximum-likelihood decoding many orders of magnitude faster than ordinary dynamic programming.",2001,0, 4103,A distributed graphical environment for interactive fault simulation and analysis,"Graphical user interfaces can provide interactive and intuitive visual communication to fault simulation and analysis application programs, enhancing the capabilities of engineers to conduct studies with ease and flexibility. Unfortunately, such benefits often come at the price of efficient CPU utilization and more complicated maintenance activities. Employing a distributed architecture can mitigate these costs by trading the one-time cost of more complex design activities for the long-term benefits of ease of use and efficient resource utilization. TUFTsim, a multi-level concurrent simulation system, has been designed to address these concerns and demonstrates that an extensible, distributed architecture can be created without incurring excessive cost in processor and resource consumption. In addition, the investment in this architecture has also yielded benefits with respect to long-term maintenance and other software engineering considerations.",2002,0, 4104,Color error-diffusion halftoning,"Grayscale halftoning converts a continuous-tone image (e.g., 8 bits per pixel) to a lower resolution (e.g., 1 bit per pixel) for printing or display. Grayscale halftoning by error diffusion uses feedback to shape the quantization noise into high frequencies where the human visual system (HVS) is least sensitive. In color halftoning, the application of grayscale error-diffusion methods to the individual colorant planes fails to exploit the HVS response to color noise. Ideally the quantization error must be diffused to frequencies and colors to which the HVS is least sensitive. Further, it is desirable for the color quantization to take place in a perceptual space so that the colorant vector selected as the output color is perceptually closest to the color vector being quantized. This article discusses the design principles of color error diffusion that differentiate it from grayscale error diffusion, focusing on color error diffusion halftoning systems using the red, green, and blue (RGB) space for convenience.",2003,0, 4105,Group communication protocols under errors,"Group communication protocols constitute a basic building block for highly dependable distributed applications. Designing and correctly implementing a group communication system (GCS) is a difficult task. While many theoretical algorithms have been formalized and proved correct, only a few research projects have experimentally assessed the dependability of GCS implementations under complex error scenarios. This paper describes a thorough error-injection experimental campaign conducted on Ensemble, a popular GCS. By employing synthetic benchmark applications, we stress selected components of the GCS - the group membership service and the FIFO-ordered reliable multicast - under various error models, including errors in the memory (text and heap segments) and in the network messages. The data show that about 5-6% of the failures are due to an error escaping Ensemble's error-containment mechanism and manifesting as a fail silence violation. This constitutes an impediment to achieving high dependability, the natural objective of GCSs.
Our results are derived for a particular system (Ensemble), and more investigation involving other GCSs is required to generalize the conclusions. Nevertheless, through an accurate analysis of the failure causes and the error propagation patterns, this paper offers insights into the design and the implementation of robust GCSs.",2003,0, 4106,FlexiTP: A Flexible-Schedule-Based TDMA Protocol for Fault-Tolerant and Energy-Efficient Wireless Sensor Networks,"FlexiTP is a novel TDMA protocol that offers a synchronized and loose slot structure. Nodes in the network can build, modify, or extend their scheduled number of slots during execution, based on their local information. Nodes wake up for their scheduled slots; otherwise, they switch into power-saving sleep mode. This flexible schedule allows FlexiTP to be strongly fault tolerant and highly energy efficient. FlexiTP is scalable for a large number of nodes because its depth-first-search schedule minimizes buffering, and it allows communication slots to be reused by nodes outside each other's interference range. Hence, the overall scheme of FlexiTP provides end-to-end guarantees on data delivery (throughput, fair access, and robust self-healing) while also respecting the severe energy and memory constraints of wireless sensor networks. Simulations in ns-2 show that FlexiTP ensures energy efficiency and is robust to network dynamics (faults such as dropped packets and nodes joining or leaving the network) under various network configurations (network topology and network density), providing an efficient solution for data-gathering applications. Furthermore, under high contention, FlexiTP outperforms Z-MAC in terms of energy efficiency and network performance.",2008,0, 4107,Analytical results for reconfiguration of E-112-track switch torus arrays with multiple fault types,"For a redundant (array) system, one who suspects that the total system reliability might be worse than that of a nonredundant system because of additional circuit faults might naturally ask, ""Is the redundancy really useful?"". The main purpose of the paper is to answer this question for redundant array systems using E-112 track switches (Tadayoshi Horita and Itsuo Takanami, 2000) and a reconfiguration method. The answer is ""the redundancy using these methods is useful!"". Furthermore, the condition under which the assumption that additional circuits such as tracks and switches are fault-free becomes valid is given.",2001,0, 4108,Compiler-managed register file protection for energy-efficient soft error reduction,"For embedded systems where neither energy nor reliability can be easily sacrificed, we present an energy-efficient soft error protection scheme for register files (RFs). Unlike previous approaches, our method explicitly optimizes for energy efficiency and exploits the fundamental tradeoff between reliability and energy. While even a simple compiler-managed RF protection scheme is more energy efficient than hardware schemes, this work formulates and solves further compiler optimization problems to significantly enhance the energy efficiency of RF protection schemes by an additional 24%.",2009,0, 4109,A Compiler-Microarchitecture Hybrid Approach to Soft Error Reduction for Register Files,"For embedded systems, where neither energy nor reliability can be easily sacrificed, this paper presents an energy-efficient soft error protection scheme for register files (RFs).
Unlike previous approaches, the proposed method explicitly optimizes for energy efficiency and can exploit the fundamental tradeoff between reliability and energy. While even a simple compiler-managed RF protection scheme can be more energy efficient than hardware schemes, this paper formulates and solves further compiler optimization problems to significantly enhance the energy efficiency of RF protection schemes by an additional 30% on average, as demonstrated in our experiments on a number of embedded application benchmarks.",2010,0, 4110,Using Fault Modeling in Safety Cases,"For many safety-critical systems a safety case is built as part of the certification or acceptance process. The safety case assembles evidence to justify that the design and implementation of a system avoid hazardous software behavior. Fault modeling and analysis can provide a rich source of evidence that the design meets safety goals. However, there is currently little guidance available to bridge the gap between the fault modeling that developers perform and the mandated safety case. In this experience report we describe results and open issues from an investigation of how evidence from software-tool-supported fault modeling and analysis of a spacecraft power system could assist in safety-case construction. The ways in which the software fault models can provide evidence for the safety case appear to be applicable to other critical systems.",2008,0, 4111,Defining goal-driven fault management metrics in a real world environment: a case-study from Nokia,"For measurements to be worthwhile, they must be linked to the goals of an organization. Automated data collection systems are effective, but they can encourage collecting only data that is easily available. The Fixed Switching unit from Nokia has automated data collection systems supporting its processes. It started measurement-based improvement of the fault management process by defining goal-driven metrics on top of the existing systems. Fault management includes the analysis and correction of faults, and the delivery of corrections. The work consisted of translating high-level goals into measurement goals, applying the Goal-Question-Metric paradigm to identify questions and indicators, and identifying needed data. Starting with a workshop setting helped to get practitioners involved. Having specified measurement goals helped in defining concrete metrics that are targeted at specific audiences. The defined metrics serve the purposes of monitoring fault traffic and process improvement. Metrics need to evolve, and the metrics implemented first use only data from the fault tracking system, which provides the best reporting facilities at the moment. However, metrics that require qualitative checking of data cannot be automated.",2000,0, 4112,Automotive fault diagnosis - part II: a distributed agent diagnostic system,"For pt.I see Crossman, J.A. et al., ibid., p.1063-75. We describe a novel diagnostic architecture, the distributed diagnostics agent system (DDAS), developed for automotive fault diagnosis. The DDAS consists of a vehicle diagnostic agent and a number of signal diagnostic agents, each of which is responsible for the fault diagnosis of one particular signal using either a single or multiple signals, depending on the complexity of signal faults. Each signal diagnostic agent is developed using a common framework that involves signal segmentation, automatic signal feature extraction and selection, and machine learning.
The signal diagnostic agents can concurrently execute their tasks; some agents possess information concerning the cause of faults for other agents, while other agents merely report symptoms. Together, these signal agents present a full picture of the behavior of the vehicle under diagnosis to the vehicle diagnostic agent. DDAS provides three levels of diagnostic decisions: signal-segment fault, signal fault, and vehicle fault. DDAS is scalable and versatile and has been implemented for fault detection of electronic control unit (ECU) signals; experimental results are presented and discussed.",2003,0, 4113,Robust paradigm for diagnosing hold-time faults in scan chains,"Hold-time violations are a common cause of failure in scan chains. A robust new paradigm for diagnosing such failures is presented. As compared to previous methods, the main advantage of this paradigm is the ability to tolerate non-ideal conditions, for example, in the presence of certain core logic faults or for those faults that manifest themselves intermittently. The diagnosis problem is first formulated as a 'delay insertion process'. Upon this formulation, two algorithms - a 'greedy' algorithm and a so-called 'best-alignment-based' algorithm - are proposed. Experimental results on a number of practical designs and ISCAS'89 benchmark circuits are presented to demonstrate the paradigm's effectiveness.",2007,0, 4114,An efficient fault diagnosis technique for home unified service system,"Home services have been emerging as a new source of profit for operators and a variety of home services are now being deployed. Service failures in millions of home networks pose new challenges to service operators due to the lack of operations and maintenance. In this paper, we investigate the use of a classic fault diagnosis technique, known as the codebook technique, for fault diagnosis in the home unified service system (HUSS), which uses a platform-based technique to provide a variety of home services. To reduce the management cost and obtain the desired level of robustness, we propose two efficient minimum codebook algorithms to find the minimum set of symptoms. We evaluate the technique through a test bed of HUSS and intensive simulations. The evaluation results show that it can localize service failures effectively and efficiently.",2010,0, 4115,Defect Tolerance in Homogeneous Manycore Processors Using Core-Level Redundancy with Unified Topology,"Homogeneous manycore processors are emerging for tera-scale computation. Effective defect tolerance techniques are essential to improve the yield of such complex integrated circuits. In this paper, we propose to achieve fault tolerance by employing redundancy at the core level instead of at the microarchitecture level. When faulty cores exist on-chip in this architecture, how to reconfigure the processor with the most effective topology is a relevant research problem. We present novel solutions for this problem, which not only maximize the performance of the manycore processor, but also provide a unified topology to the operating system and application software running on the processor. Experimental results show the effectiveness of the proposed techniques.",2008,0, 4116,A Fault-Tolerant Active Pixel Sensor for Mitigating Hot Pixel Defects,"Hot pixel defects are unavoidable in many solid-state image sensors. Affected pixels accumulate dark signal over the course of an exposure, grossly diminishing dynamic range and often rendering measurements unusable.
Experiments suggest the mechanisms causing hot pixels are highly localized and the defect will be confined to a single pixel. A redundant, fault-tolerant active pixel sensor architecture that has previously been applied to other defect types is investigated for the suppression of hot pixels. A recovery scheme using minimal computational power is also described.",2007,0, 4117,Questioning Human Error Probabilities in Railways,"Human errors are regarded as one of the main causes of railway accidents these days. In spite of this fact, the consideration of human error probabilities in quantified risk analyses has been very rudimentary up to now. A lack of comprehensive data and analyses in the literature leads to the use of estimations and values from other industries. This paper discusses the transferability of human error probabilities for railways and identifies problems in handling methods and values. A model of working systems is used to demonstrate the particularities of railway workplaces and to derive a structure for performance shaping factors that influence the human error probability. A holistic approach is proposed to support the determination of appropriate human error probabilities for railways.",2008,0, 4118,Correction of humidity effect for detection of human body odor,"Humidity strongly affects the sensitivity of odor sensors. It is therefore a major problem in the use of an electronic nose (E-nose) for most applications, including detecting human body odor from armpits, where humidity can vary to a large extent due to various human activities. In this paper, we propose both hardware and software approaches to correct the humidity effect. The E-nose was designed to efficiently measure volatile organic compounds generated by the human body and performed best when both the hardware and software corrections were employed. The principal component analysis (PCA) method was used for pattern recognition and discrimination of human body odor. After humidity correction, our specially designed E-nose not only shows the capability of detecting human body odor, but is also able to classify two different persons who have the same lifestyle and activities.",2008,0, 4119,A Fast Algorithm for Robust Mixtures in the Presence of Measurement Errors,"In experimental and observational sciences, detecting atypical, peculiar data from large sets of measurements has the potential of highlighting candidates of interesting new types of objects that deserve more detailed domain-specific follow-up study. However, measurement data is nearly never free of measurement errors. These errors can generate false outliers that are not truly interesting. Although many approaches exist for finding outliers, they have no means to tell to what extent the peculiarity is not simply due to measurement errors. To address this issue, we have developed a model-based approach to infer genuine outliers from multivariate data sets when measurement error information is available. This is based on a probabilistic mixture of hierarchical density models, in which parameter estimation is made feasible by a tree-structured variational expectation-maximization algorithm. Here, we further develop an algorithmic enhancement to address the scalability of this approach, in order to make it applicable to large data sets, via a K-dimensional-tree based partitioning of the variational posterior assignments.
This creates a non-trivial tradeoff between a more detailed noise model to enhance the detection accuracy, and the coarsened posterior representation to obtain computational speedup. Hence, we conduct extensive experimental validation to study the accuracy/speed tradeoffs achievable in a variety of data conditions. We find that, at low-to-moderate error levels, a speedup factor that is at least linear in the number of data points can be achieved without significantly sacrificing the detection accuracy. The benefits of including measurement error information in the modeling are evident in all situations, and the gain roughly recovers the loss incurred by the speedup procedure in large error conditions. We analyze and discuss in detail the characteristics of our algorithm based on results obtained on appropriately designed synthetic data experiments, and we also demonstrate its working in a real application example.",2010,0, 4120,Gaze correction in video communication with single camera,"In face-to-face video communication, we commonly use only a single camera placed on top of a monitor screen. This general configuration causes a poor eye-contact problem that decreases the feeling of natural conversation, since the user stares at the monitor screen rather than directly at the camera lens. In this paper we present a new approach for a natural feeling in video communication using image-based modeling and rendering techniques. Our facial modeling approach has two components. The first component estimates the eye position from an input image to find the gaze-correction angle and generates the basic model to represent the user's facial shape. The second component is a model-based shape approximation from motion which generates the user's simple facial shape to be used in rendering. To render a good eye-contact image, we propose a 3D mesh warping technique, a method to rotate the input image using the correction angle and the facial model. Our approach is effective and convenient since it uses the characteristic information about a facial scene. Preliminary experimental results with real facial images show the enhanced naturalness that face-to-face video communication has to offer.",2002,0, 4121,An integrated failure detection and fault correction model,"In general, software reliability models have focused on modeling and predicting failure occurrence and have not given equal priority to modeling the fault correction process. However, there is a need for fault correction prediction, because there are important applications that fault correction modeling and prediction support. These are the following: predicting whether reliability goals have been achieved, developing stopping rules for testing, formulating test strategies, and rationally allocating test resources. Because these factors are related, we integrate them in our model. Our modeling approach involves relating fault correction to failure prediction, with a time delay estimated from a fault correction queuing model.",2002,0, 4122,Using Evolutionary Testing to Find Test Scenarios for Hard to Reproduce Faults,"In industrial practice, developers are often unable to reproduce errors that are encountered by end-users or testers. Evidently, reproducibility is important for investigating the root cause of the error, since without knowing what causes the error, a developer cannot repair the software.
This paper reports on the successful application of evolutionary testing by Rila Solution EAD to solve two real reproducibility problems they encountered. Rila's software application, the ChatPC, suffered from memory and data corruption faults that were reported by users in the field but could not be reproduced in-house after considerable effort. This paper presents two case studies that show how evolutionary testing resulted in finding execution scenarios that could reliably reproduce the mentioned faults observed in the application.",2010,0, 4123,An Improved Knowledge Connectivity Condition for Fault-Tolerant Consensus with Unknown Participants,"For self-organized networks that possess highly decentralized and self-organized natures, neither the identity nor the number of processes is known to all participants at the beginning of the computation, because no central authority exists to initialize each participant with some context information. Hence, consensus, which is essential to solving the agreement problem, cannot be achieved in such networks in the ways used for traditional fixed networks. To address this problem, Consensus with Unknown Participants (CUP), a variant of the traditional consensus problem, was proposed in the literature by relaxing the requirement for the original knowledge owned by every process about all participants in the computation. Correspondingly, the CUP problem considering process crashes was also introduced, called the Fault-Tolerant Consensus with Unknown Participants (FT-CUP) problem. In this paper, we propose a knowledge connectivity condition sufficient for solving the FT-CUP problem, which is improved from the one proposed in our previous work.",2010,0, 4124,High-performance fault-tolerant CORDIC processor for space applications,"For space applications, high performance and high reliability are two indicators that are becoming more and more important to a processor. After an analysis of the different architectures and fault-tolerant techniques, an implementation of a CORDIC processor employing 3N codes for error detection in a 3-stage granularly pipelined architecture is presented. A high data throughput of 300 MFLOPS is achieved, and the circuit complexity increased by only 9.5% after applying the fault-tolerant technique.",2006,0, 4125,An EMF activity tree based BPEL defect pattern testing method,"For testing BPEL defects efficiently, a novel BPEL defect pattern testing architecture based on the EMF activity tree technology is proposed. The EMF activity tree, which is similar to an abstract syntax tree, is used to describe the BPEL service process structure. The mapping method from the DOM object tree of a BPEL file to the EMF activity tree and the recursive algorithm to generate an EMF activity tree are presented in detail. A typical EMF activity tree is shown and the visitor-design-pattern-based traversal method is described. Finally, directions for enhancing this technology are discussed.",2010,0, 4126,A New Data Format and a New Error Control Scheme for Optical-Storage Systems,"To meet the requirements of high-density discs, coding efficiency and error-correction performance become more and more important. We present a new data format and a new error control scheme for NVD (Next-generation Versatile Disc), which is one of the key technologies of optical-storage systems. The new data format of NVD, which significantly reduces data redundancy, increases the encoding efficiency by about 4% over that of DVD.
Meanwhile, using two error-correction-code decoders and a more powerful interleaving process than DVD, the new error control scheme of NVD largely improves the burst error correction capability, though NVD has fewer error correction codes.",2007,0, 4127,The application of RBF network based on the principle of immune in the condenser's fault diagnosis,"For the specific problem of determining the center values and widths of the RBF neural network functions, a new network training method is proposed. It is based on immune theory, combines an immune genetic algorithm with an RBF neural network, and is applied to the failure diagnosis of a condenser. Verification results show that the proposed method can effectively improve the accuracy and speed of failure diagnosis and has practical application value in engineering.",2010,0, 4128,Mutation-Based Testing of Format String Bugs,"Format string bugs (FSBs) make an implementation vulnerable to numerous types of malicious attacks. Testing an implementation against FSBs can avoid consequences due to exploits of FSBs such as denial of service, corruption of application states, etc. Obtaining an adequate test data set is essential for testing of FSBs. An adequate test data set contains effective test cases that can reveal FSBs. Unfortunately, traditional techniques do not address the issue of adequate testing of an application for FSBs. Moreover, source code mutation has not been applied to testing FSBs. In this work, we apply the idea of mutation-based testing to generate an adequate test data set for testing FSBs. Our work addresses FSBs related to ANSI C libraries. We propose eight mutation operators to force the generation of an adequate test data set. A prototype mutation-based testing tool named MUFORMAT is developed to generate mutants automatically and perform mutation analysis. The proposed operators are validated by using four open source programs having FSBs. The results indicate that the proposed operators are effective for testing FSBs.",2008,0, 4129,Optical Metrology System for Radar Phase Correction on Large Flexible Structure,"In aerospace applications there is an increasing interest in metrology systems. Metrology systems are used in applications such as wave front correction and formation flying, for measuring deployable structure deformation/oscillations, and as the coarse stage for interferometer missions. In this paper we describe a concept for a metrology system. The metrology system concept will be able to determine the Cartesian (x,y,z) coordinates of 100+ fiducials to an accuracy of 1 mm with an update rate of 10 Hz. Considerable deployment uncertainty can be accepted. The system operates by laser-illuminated fiducials fed through optical fibers. One fiducial is illuminated at a time. A camera reads the transverse position of the fiducial, and the distance to the fiducial is determined by modulating the laser light and measuring a phase difference. The inertial orientation of the structure is measured by imaging the stars.
A metrology system as described is essential to a radar antenna on a large flexible structure.",2008,0, 4130,Chaotic analysis of partial discharge (CAPD) as a novel approach to investigate insulation degradation caused by the various defects,"In connection with the monitoring of insulation degradation of large power apparatus in order to predict their unexpected service failures, a statistical treatment, such as phase resolved partial discharge analysis (PRPDA), has been established for the on-line monitoring system during the past decades. However, this method has shown some shortcomings in distinguishing the nature of the PD source in power apparatus. In this regard, a novel approach based on chaotic analysis (CAPD) is proposed, describing the fundamental ideas, outcomes and viewpoints that differ from the conventional PRPDA. As a model for the possible defects causing sudden failures in service, several types of specimens were prepared. Partial discharge signals originating from those samples were measured and analyzed by means of PRPDA and CAPD, respectively. Throughout this paper, it appears that the correlation between consecutive PD pulses, depending on the nature of the PD, can be clarified by CAPD. Therefore, the nature of the PD source can be distinguished more clearly when PRPDA is combined with CAPD.",2001,0, 4131,How Well Do Test Case Prioritization Techniques Support Statistical Fault Localization,"In continuous integration, a tight integration of test case prioritization techniques and fault-localization techniques may both expose failures faster and locate faults more effectively. Statistical fault-localization techniques use the execution information collected during testing to locate faults. Executing a small fraction of a prioritized test suite reduces the cost of testing, and yet the subsequent fault localization may suffer. This paper presents the first empirical study to examine the impact of test case prioritization on the effectiveness of fault localization. Among many interesting empirical results, we find that coverage-based and random techniques can be more effective than distribution-based techniques in supporting statistical fault localization.",2009,0, 4132,Reliability evaluation based on fuzzy fault tree,"In conventional fault tree analysis (FTA), some complex and uncertain events such as human errors cannot be handled effectively. Fuzzy fault tree analysis (fuzzy FTA), integrating fuzzy set evaluation and probabilistic estimation, is proposed to evaluate vague events. The reliability of the water supply subsystem in fire protection systems is analyzed using the proposed approach, and the results prove the validity of the fuzzy FTA.",2010,0, 4133,A design tool for large scale fault-tolerant software systems,"In order to assist software designers in the application of fault-tolerance techniques to large scale software systems, a computer-aided software design tool has been proposed and implemented that assesses the criticality of the software modules contained in the system. This information assists designers in identifying weaknesses in large systems that can lead to system failures. Through analysis and modeling techniques based on graph theory, modules are assessed and rated as to the criticality of their position in the software system. Graphical representation at two levels facilitates the use of cut set analysis, which is our main focus.
While the task of finding all cut sets in any graph is NP-complete, the tool intelligently applies cut set analysis by limiting the problem to provide only the information needed for meaningful analysis. In this paper, we examine the methodology and algorithms used in the implementation of this tool and consider future refinements. Although further testing is needed to assess performance on increasingly complex systems, preliminary results look promising. Given the growing demand for reliable software and the complexities involved in the design of these systems, further research in this area is indicated.",2004,0, 4134,An Empirical Comparison of Fault-Prone Module Detection Approaches: Complexity Metrics and Text Feature Metrics,"In order to assure the quality of software products, early detection of fault-prone products is necessary. Fault-prone module detection is one of the major and traditional areas of software engineering. However, comparative studies using a fair environment have rarely been conducted so far because there is little data publicly available. This paper conducts a comparative study of fault-prone module detection approaches.",2010,0, 4135,Corrective maintenance task ascertain method research based on FMECA,"In order to avoid repeating FMECA (Failure Modes, Effects and Criticality Analysis) in supportability analysis tasks, FMECA outcome data from reliability analysis is usually reused in subsequent tasks. But engineers usually encounter difficulties, including that indenture level definitions differ between reliability and supportability analysis and that FMECA outcome data may not be directly input to MTA (Maintenance Task Analysis) for corrective maintenance task analysis. To solve the above difficulties, a corrective maintenance task ascertainment method is presented that compares the differences between FMECA indenture levels in reliability and supportability analysis, on the precondition of inheriting FMECA outcome data from reliability analysis. A corrective maintenance task ascertainment procedure that is easy to operate in engineering practice was constructed, and the formats of the related analysis worksheets are presented. Good results were achieved in applications at an aircraft development institute. The proposed methodology is demonstrated to bridge the gap between FMECA and MTA in supportability analysis, so that FMECA and supportability analysis data can be seamlessly integrated.",2009,0, 4136,Constrained free form deformation based algorithm for geometric distortion correction of echo planar diffusion tensor images,"In order to differentiate between normal and abnormal variations in brain diffusion tensor images, it is necessary to develop medical atlases. Atlas creation requires removal of spatial distortions in individual subject diffusion weighted images. In this paper we suggest a new approach using non-linear warping based on optic flow to map both baseline and diffusion weighted echo planar images to the anatomically correct T2 weighted spin echo image. The method is readily implemented and does not require a pre-processing step of rigid alignment. A global histogram matching precedes the baseline EP image correction. A Markov random field based classification algorithm was implemented to cluster T2 weighted images into four different tissue type classes.
This information was then used to synthesize diffusion based image models used in the warping algorithm to correct the geometric distortions in the diffusion weighted EP images.",2004,0, 4137,Study on Fault Tree Analysis of Fuel Cell Stack Malfunction,"In order to enhance the reliability and safety of Fuel Cell Engines (FCE), by combining the composition of the FCE developed by our group with the electrochemical reaction mechanism of the fuel cell, the fault symptoms of fuel cell stack malfunction were defined and analyzed from four aspects: hardware faults, software faults, and environmental and man-made factors. Then its fault tree model was established, and all the common fault causes were determined qualitatively by the Fussell algorithm and classified into 19 minimal cut sets. Finally, the occurrence probability of the top event and the probability importance and critical importance of each basic event were quantitatively calculated. Based on the study and analysis above, several effective rectification measures applicable in practical work were put forward, which can provide helpful guidance for the control, management and maintenance of FCEs in the future.",2010,0, 4138,Test Generation and Diagnostic Test Generation for Open Faults with Considering Adjacent Lines,"In order to ensure high quality of DSM circuits, testing for open defects in the circuits is necessary. However, the modeling and techniques for test generation for open faults have not been established yet. In this paper, we propose a method for generating tests and diagnostic tests based on a new open fault model. Firstly, we present a new open fault model that considers adjacent lines [9]. Under the open fault model, we reveal more about the conditions to excite the open fault. Next we propose a method for generating tests for open faults by using a stuck-at fault test with don't cares. We also propose a method for generating a diagnostic test that can distinguish a given pair of open faults. Finally, experimental results show that (1) the proposed method is able to achieve 100% fault coverage for almost all benchmark circuits and (2) the proposed method is able to reduce the number of indistinguishable open fault pairs.",2007,0, 4139,Pattern-Based Modeling and Analysis of Failsafe Fault-Tolerance in UML,"In order to facilitate incremental modeling and analysis of fault-tolerant embedded systems, we introduce an object analysis pattern, called the detector pattern, that provides a reusable strategy for capturing the requirements of failsafe fault-tolerance in an existing conceptual model, where a failsafe system satisfies its safety requirements even when faults occur. We also present a method that (i) uses the detector pattern to help create a behavioral model of a failsafe fault-tolerant system in UML, (ii) generates and model checks formal models of UML state diagrams of the fault-tolerant system, and (iii) visualizes the model checking results in terms of the UML diagrams to facilitate model refinement. We demonstrate our analysis method in the context of an industrial automotive application.",2007,0, 4140,Quantitative correlation of the metastable defect in Cz-silicon with different impurities,"In order to identify the components responsible for the creation of the metastable defect in boron-doped Cz-Si, the impact of different impurities on the defect concentration has been examined carefully on a wide range of different Cz-materials by means of lifetime measurements.
In good agreement with previous studies, a linear dependence on the boron concentration has been found. The impact of carbon can be neglected. Concerning the correlation with the interstitial oxygen concentration, a correlation exponent between 1.5 and 1.9 has been found. This exponent is shifted to its lower bound after an optimized high-temperature pretreatment, whose impact on the quantitative correlations is investigated in detail. The strong scatter in the oxygen correlation points towards an indirect impact of oxygen on the defect center. Since the vacancy concentration is known to strongly influence oxygen behavior, its impact on the metastable defect concentration is investigated.",2003,0, 4141,A novel approach to architecture of radar fault diagnosis system based on mobile agents,"In order to improve radar fault diagnosis systems, a new fault diagnosis system architecture based on mobile agents is proposed. The architecture is based on an embedded network built into the radar system. It utilizes various mobile fault diagnostic agents (MFDAs) in the embedded network to detect faults in the radar's distributed subsystems. In the architecture, all MFDAs can migrate within the embedded network and can be centralized on a personal computer so as to be conveniently updated and retrained for different batches of radar systems. In this paper, three kinds of start-up modes of fault diagnosis are illustrated, two kinds of multi-agent cooperation diagnostic frameworks are introduced, and a structure for MFDAs is described.",2010,0, 4142,Efficient fault-prone software modules selection based on complexity metrics,"In order to improve software reliability early, this paper proposes an efficient algorithm to select fault-prone software modules. Based on software modules' complexity metrics, the algorithm uses a modified cascade-correlation algorithm as a neural network classifier to select fault-prone software modules. Finally, by analyzing the algorithm's application in the project MAP, the paper shows the advantage of the algorithm.",2009,0, 4143,The Application of Nerve Net Algorithm to Reduce Vehicle Weigh in Motion System Error,"In order to improve the precision of the vehicle weigh-in-motion system, the paper applies a neural network algorithm to error analysis. The article sets up a neural network model by determining the neural network input and output variables. Then, the function is obtained by network training in MATLAB. Finally, experimental verification shows that using the neural network algorithm to improve weigh-in-motion system accuracy is feasible.",2010,0, 4144,The cold rolling strip surface defect on-line inspection system based on machine vision,"In order to inspect the main surface defects of cold rolling strip while the production line is running normally and to control the manufacturing process more efficiently, a novel cold rolling strip surface defect on-line inspection system is designed. A modular design approach is adopted; the whole system is made up of a light source module, an image collection module, a defect image processing module and a server module. To ensure the real-time characteristics of the system, a high-speed linear CCD camera, a matching high-speed fiber image collection card and a high-speed FPGA image processing card designed by ourselves are adopted in hardware, and the RT-Linux real-time operating system is adopted in software.
To address the low contrast and high noise of defect images, a homomorphic filtering algorithm based on PDEs is proposed for image preprocessing; considering the existence of defect image transition regions, an image segmentation algorithm based on fuzzy-set and information entropy theory is adopted to accomplish defect extraction. Based on these, various defect features are extracted, a defect classifier based on a fuzzy support vector machine is designed, and classification experiments are completed, obtaining good classification results.",2010,0, 4145,Fault identification and prevention for PVC management in ATM networks,"In order to meet the need of network management for emerging large complex heterogeneous communication networks, a distributed proactive self-adjusting management (DPSAM) framework was developed. The framework facilitates the incorporation of artificial intelligence and distributed computing technologies in building advanced network management systems. PMS, a PVC (permanent virtual circuit) management system for ATM networks, is developed based on the DPSAM framework. PMS provides a scalable, end-to-end path management solution required for today's ATM network and service management. It aims to assist network operators in performing PVC operations with simplified procedures and automatic optimum route selection. It also provides effective decision-making support for PVC fault identification and prevention. In this paper, PVC fault identification and prevention, along with an overview of the DPSAM framework and PMS, are presented.",2000,0, 4146,What Makes a Good Bug Report?,"In software development, bug reports provide crucial information to developers. However, these reports widely differ in their quality. We conducted a survey among developers and users of APACHE, ECLIPSE, and MOZILLA to find out what makes a good bug report. The analysis of the 466 responses revealed an information mismatch between what developers need and what users supply. Most developers consider steps to reproduce, stack traces, and test cases as helpful, which are, at the same time, most difficult for users to provide. Such insight is helpful for designing new bug tracking tools that guide users in collecting and providing more helpful information. Our CUEZILLA prototype is such a tool and measures the quality of new bug reports; it also recommends which elements should be added to improve the quality. We trained CUEZILLA on a sample of 289 bug reports, rated by developers as part of the survey. The participants of our survey also provided 175 comments on hurdles in reporting and resolving bugs. Based on these comments, we discuss several recommendations for better bug tracking systems, which should focus on engaging bug reporters, better tool support, and improved handling of bug duplicates.",2010,0, 4147,Predicting defects using network analysis on dependency graphs,"In software development, resources for quality assurance are limited by time and by cost. In order to allocate resources effectively, managers need to rely on their experience backed by code complexity metrics. But often dependencies exist between various pieces of code of which managers may have little knowledge. These dependencies can be construed as a low-level graph of the entire system. In this paper, we propose to use network analysis on these dependency graphs. This allows managers to identify central program units that are more likely to face defects.
In our evaluation on Windows Server 2003, we found that the recall for models built from network measures is 10 percentage points higher than for models built from complexity metrics. In addition, network measures could identify 60% of the binaries that the Windows developers considered critical - twice as many as were identified by complexity metrics.",2008,0, 4148,Comprehensive evaluation of association measures for fault localization,"In the statistics and data mining communities, there have been many measures proposed to gauge the strength of association between two variables of interest, such as odds ratio, confidence, Yule-Y, Yule-Q, Kappa, and the Gini index. These association measures have been used in various domains, for example, to evaluate whether a particular medical practice is positively associated with a cure of a disease or whether a particular marketing strategy is positively associated with an increase in revenue. This paper models the problem of locating faults as the association between the execution or non-execution of particular program elements and failures. There have been special measures, termed suspiciousness measures, proposed for the task. Two state-of-the-art measures are Tarantula and Ochiai, which are different from many other statistical measures. To the best of our knowledge, there is no study that comprehensively investigates the effectiveness of various association measures in localizing faults. This paper fills in the gap by evaluating 20 well-known association measures and comparing their effectiveness in fault localization tasks with Tarantula and Ochiai. Evaluation on the Siemens programs shows that a number of association measures perform statistically comparably to Tarantula and Ochiai.",2010,0, 4149,Fast computation of maximum time interval error for telecommunications clock stability characterization,"In telecommunications standards, maximum time interval error (MTIE) is one of the main time domain quantities for characterizing clock stability. However, the direct computation of MTIE, defined in ITU-T Recommendation G.810, tends to be unmanageable given a large number of samples. This work proposes a fast computation approach for MTIE. The proposed approach is based on a recursive algorithm, and the computation exactly conforms to the ITU-T G.810 MTIE definition. Compared with the direct computation approach, the computational complexity of the proposed approach is reduced by a factor of N, the number of samples. This study thus demonstrates that real-time MTIE measurement or monitoring employing the proposed computation approach is feasible.",2005,0, 4150,Edge Weighted Spatio-Temporal Search for Error Concealment,"In temporal error concealment (EC), the sum of absolute difference (SAD) is commonly used to identify the best replacement macroblock. Even though the use of SAD ensures spatial continuity and produces visually good results, it is insufficient to ensure edge alignment. Other distortion criteria based solely on structural alignment may also perform poorly in the absence of strong edges. In this paper, we propose a spatio-temporal EC search algorithm using an edge weighted SAD distortion criterion. This distortion criterion ensures both edge alignment and spatial continuity. We assume the loss of motion information and use the zero motion vector as the starting search point. We show that the proposed algorithm outperforms the use of unweighted SAD in general.
Most importantly, the perceptual quality of EC is improved due to edge alignment while ensuring spatial continuity.",2007,0, 4151,Defect detection and identification in textile fabrics using Multi Resolution Combined Statistical and Spatial Frequency Method,"In the textile industry, reliable and accurate quality control and inspection has become an important element. Presently, this is still accomplished by human experience, which is more time-consuming and also prone to errors. Hence automated visual inspection systems become mandatory in textile industries. This paper presents a novel algorithm for fabric defect detection that makes use of the multi-resolution combined statistical and spatial frequency method (MRCSF). Defect detection consists of two phases: the training phase and the testing phase. In the training phase, the reference fabric images are cropped into non-overlapping sub-windows. By applying MRCSF the features of the textile fabrics are extracted and stored in the database. During the testing phase the same procedure is applied to the test fabric and the features are compared with the database information. Based on the comparison results, each sub-window is categorized as defective or non-defective. The classification rate obtained by the process of simulation using MATLAB was found to be 99%.",2010,0, 4152,Correction,"In the above title (ibid, vol. 44, issue 3, pp. 114-123, Jun 02), corrections were made to Equation 38. There were various typographical errors.",2007,0, 4153,"Correction [to ""Importance of gender homophily in the computer science classroom"" (Summer 07, 43-47)]","In the above titled paper (ibid., vol. 26, no. 2, pp. 43-47, Summer 07), typographical errors appeared in several places. The corrected text is presented here.",2007,0, 4154,"Corrections to On the Suitability of a High-k Gate Dielectric in Nanoscale FinFET CMOS Technology","In the above titled paper (ibid., vol. 55, no. 7, pp. 1714-1719), the propagation delays in Table III are incorrect, being too long by a factor of two. Furthermore, there was a typo in the table title. The corrected table and title are presented here.",2009,0, 4155,A runtime approach for software fault analysis based on interpolation,"In an application system, obtaining information about the system at runtime and analyzing it is important for system adjustment. Many runtime metrics can be collected from software systems, and some statistical relationships exist among these metrics. Extracting the information of these metrics from the monitoring data and then analyzing the relationships between these metrics is an effective way to detect failures and diagnose faults. This paper proposes a fault analysis approach for the system at runtime which gets the information of the system by monitoring. We demonstrate this approach in a case study which shows that our approach is effective and is beneficial for finding the relationship between the fault and the component.",2010,0, 4156,A Software Based Approach for Providing Network Fault Tolerance in Clusters with uDAPL interface: MPI Level Design and Performance Evaluation,"In the arena of cluster computing, MPI has emerged as the de facto standard for writing parallel applications. At the same time, the introduction of high-speed RDMA-enabled interconnects like InfiniBand, Myrinet, Quadrics, and RDMA-enabled Ethernet has escalated the trends in cluster computing.
Network APIs like uDAPL (user direct access provider library) are being proposed to provide a network-independent interface to different RDMA-enabled interconnects. Clusters with combinations of these interconnects are being deployed to leverage their unique features and to provide network failover in the wake of transmission errors. In this paper, we design a network fault tolerant MPI using the uDAPL interface, making this design portable for existing and upcoming interconnects. Our design provides failover to available paths, asynchronous recovery of previously failed paths, and recovery from network partitions without application restart. In addition, the design is able to handle network heterogeneity, making it suitable for current state-of-the-art clusters. We implement our design and evaluate it with micro-benchmarks and applications. Our performance evaluation shows that the proposed design provides significant performance benefits to both homogeneous and heterogeneous clusters. Using a heterogeneous combination of IBA and Ammasso-GigE, we are able to improve the performance by 10-15% for different NAS parallel benchmarks on an 8x1 configuration. For simple micro-benchmarks on a homogeneous configuration, we are able to achieve an improvement of 15-20% in throughput. In addition, experiments with simple MPI micro-benchmarks and NAS applications reveal that the network fault tolerance modules incur negligible overhead and provide optimal performance in the wake of network partitions",2006,0, 4157,Low-Complexity Orthogonal Spectral Signal Construction for Generalized OFDMA Uplink With Frequency Synchronization Errors,"In orthogonal frequency-division multiplexing, the total spectral resource is partitioned into multiple orthogonal subcarriers. These subcarriers are assigned to different users for simultaneous transmission in orthogonal frequency-division multiple access (OFDMA). In an unsynchronized OFDMA uplink, each user has a different carrier frequency offset (CFO) relative to the common uplink receiver, which results in the loss of orthogonality among subcarriers and thereby multiple access interference. Hence, OFDMA is very sensitive to frequency synchronization errors. In this paper, we construct the received signals in the frequency domain that would have been received if all users were frequency synchronized. A generalized OFDMA framework for arbitrary subcarrier assignments is proposed. The interference in the generalized OFDMA uplink due to frequency synchronization errors is characterized in a multiuser signal model. Least squares and minimum mean square error criteria are proposed to construct the orthogonal spectral signals from one OFDMA block contaminated with interference that was caused by the CFOs of multiple users. For OFDMA with a large number of subcarriers, a low-complexity implementation of the proposed algorithms is developed based on a banded matrix approximation. Numerical results illustrate that the proposed algorithms improve the system performance significantly and are computationally affordable using the banded system implementation",2007,0, 4158,Error Reduction Based on Error Categorization in Arabic Handwritten Numeral Recognition,"In practical applications, errors should not be treated equally, but conditionally. In this paper, errors are categorized based on different costs in misclassification. Accordingly, the characteristics of the error categorization and the corresponding strategies for correcting them are proposed.
Verification based on Arabic Handwritten Numeral Recognition is considered as one application that utilizes these definitions and strategies. As a result, the recognition results improved from 98.47% to 99.05%, and errors were significantly reduced by over 35% compared to previous studies. When a rejection measurement was applied, and the rejection threshold was adjusted to maintain the same error rate, both the recognition rate and reliability increased from 96.98% to 97.89% and from 99.08% to 99.28%, respectively.",2010,0, 4159,Are found defects an indicator of software correctness? An investigation in a controlled case study,"In quality assurance programs, we want indicators of software quality, especially software correctness. The number of defects found during inspection and testing is often used as the basis for indicators of software correctness. However, there is a paradox in this approach, since it is the remaining defects that impact negatively on software correctness, not the found ones. In order to investigate the validity of using found defects or other product or process metrics as indicators of software correctness, a controlled case study is launched. 57 sets of 10 different programs from the PSP course are assessed using acceptance test suites for each program. In the analysis, the number of defects found during the acceptance test is compared to the number of defects found during development, code size, share of development time spent on testing, etc. It is concluded from a correlation analysis that 1) fewer defects remain in larger programs, 2) more defects remain when a larger share of development effort is spent on testing, and 3) no correlation exists between found defects and correctness. We interpret these observations as 1) the smaller programs do not fulfill the expected requirements, 2) a large share of effort spent on testing indicates a ""hacker"" approach to software development, and 3) more research is needed to elaborate this issue.",2004,0, 4160,A new fault-tolerant technique for improving schedulability in multiprocessor real-time systems,"In real-time systems, tasks have deadlines to be met despite the presence of faults. The Primary-Backup (PB) scheme is one of the most common schemes employed for fault-tolerant scheduling of real-time tasks, wherein each task has two versions and the versions are scheduled on two different processors with time exclusion. There have been techniques proposed for improving the schedulability of PB-based scheduling. One of the more popular ones is Backup-Backup (BB) overloading, wherein two or more backups can share/overlap in time on a processor. In this paper we propose a new schedulability enhancing technique, called primary-backup (PB) overloading, in which the primary of a task can share/overlap in time with the backup of another task on a processor. The intuition is that, for both primary and backup of a task, PB-overloading can assign an earlier start time than that of BB-overloading, thereby increasing the schedulability. We conduct schedulability and reliability analysis of PB- and BB-overloading techniques through simulation and analytical studies. Our studies show that PB-overloading offers better schedulability (25% increase in the guarantee ratio) than that of BB-overloading, and offers reliability comparable to that of BB-overloading.
The proposed PB-overloading is a general technique that can be employed in any static or dynamic fault-tolerant scheduling algorithm",2001,0, 4161,Performance analysis of forward error correcting codes in IPTV,"In recent years, forward error correction (FEC) schemes for the binary erasure channel have been researched for many applications including DVB-H and IPTV systems. In most cases the packet-level FEC strategies are implemented by either Reed-Solomon (RS) codes at the link layer or Raptor codes at the application layer. Recently an enhanced decoding method for RS codes was presented. The enhanced-decoding scheme is a combination of erasure and error decoding. In this paper we compare the performance of the enhanced-decoding RS code at the link layer and the Raptor code at the application layer in burst-mode transmission, to give guidance on FEC schemes for IPTV applications. It is noted that the efficient Raptor decoding algorithm makes it possible to broadcast multimedia data more reliably and faster.",2008,0,5961 4162,Research and application of the fault information integration and intelligent analysis in distributed grid,"In recent years, building the smart grid has become a new focus, and how to fully collect and make effective use of the information from intelligent secondary devices has received broad attention. This paper presents a grid fault information intelligent analysis and processing system. It is a platform for intelligent fault information processing, used in an integrated control center system at the province and district level. It works in two parts, substation and control center. In the substation, a slave system device gathers the information of the relays and delivers it to the master system in the control center through Ethernet, and a recorder proxy device gathers the information of the recorders and delivers it to the master system through another Ethernet channel. The software in the control center has a layered and distributed structure, and imports the information of relays and recorders through two isolated servers in the data layer. The application layer of the software can gather the information into an integrated report of a grid disturbance or fault through special processing, including the use of the fault index, fault time, etc. This method effectively reduces the hardware resource requirements of the slave system, and can decrease the chance of Ethernet blocking. It can support the decisions of the dispatcher and quickly handle grid faults. This system has been used in the control center of Taizhou district in Zhejiang province, and remarkably improves the level of management. The system is planned to integrate information from other systems, such as SCADA and EMS, and will provide a whole view of the grid information to give stronger support for building a smart grid.",2010,0, 4163,Correction of image coordinate using landmark for setting error,"In recent years, owing to the diversification of values, the production method has changed from mass production to diverse-types-and-small-quantity production on the production site. Therefore, a production system that can respond to changes in the environment is required. In a production system that performs handling tasks, we use a hand-eye system with a camera and a manipulator and perform hand-eye calibration from the two-dimensional images obtained by the camera to the three-dimensional manipulator coordinates of the hand-eye system.
However, when the camera or manipulator position in such a system changes, a large error arises relative to the position at which the calibration was performed, even when the change is minute. Therefore, with this hand-eye system, changing the production system is difficult, or re-calibration is needed. In the re-calibration, the projection matrix must be calculated so that the image coordinates obtained with the camera and the three-dimensional manipulator coordinates correspond again. However, calculating the projection matrix requires great care. We propose a technique that simply calculates the three-dimensional manipulator coordinates from image coordinates, without calculating the projection matrix again, by correcting the image coordinates. Our experimental results show that the image coordinates can be corrected without calculating the projection matrix again by using a landmark. In addition, a three-dimensional position was corrected simply, and we were able to handle the object.",2009,0, 4164,An automated software fault injection tool for robustness assessment of java COTs,"In line with market demands and the need for technological innovations, designing and implementing software and hardware components for computing systems is growing in complexity. In order to cope with such complexity whilst meeting market needs, engineers often rely on design integration with commercial-off-the-shelf components (COTs). In the case where lives and fortunes are at stake, there is a need to ensure the dependability of COTs in terms of their robustness before they can be adopted in such an environment. However, it is not often possible to thoroughly test COTs for robustness because their designs as well as source code are usually unavailable. In order to address some of the above issues, we have developed an automated software fault injection tool, called SFIT, based on the use of computational reflection and Java technology. This paper describes our experiences with SFIT performing robustness testing of a Java COTs component, called Jada.",2006,0, 4165,Automated severity assessment of software defect reports,"In mission critical systems, such as those developed by NASA, it is very important that the test engineers properly recognize the severity of each issue they identify during testing. Proper severity assessment is essential for appropriate resource allocation and planning for fixing activities and additional testing. Severity assessment is strongly influenced by the experience of the test engineers and by the time they spend on each issue. The paper presents a new and automated method named SEVERIS (severity issue assessment), which assists the test engineer in assigning severity levels to defect reports. SEVERIS is based on standard text mining and machine learning techniques applied to existing sets of defect reports. A case study on using SEVERIS with data from NASA's Project and Issue Tracking System (PITS) is presented in the paper. The case study results indicate that SEVERIS is a good predictor for issue severity levels, while it is easy to use and efficient.",2008,0, 4166,"A Model Driven Architecture approach to fault tolerance in Service Oriented Architectures, a performance study","In modern service oriented architectures (SoA) identifying the occurrences of failure is a crucial task, which can be carried out by the creation of diagnosers to monitor the behavior of the system.
Model driven architecture (MDA) can be used to automatically create diagnosers and to integrate them into the system to identify if a failure has occurred. There are different methods of incorporating a diagnoser into a group of interacting services. One option is to modify the BPEL file representing the services to incorporate the diagnoser. Another option is to implement the diagnoser as a separate service which interacts with the existing services. Moreover, the interaction between the diagnoser and the services can be either orchestration or choreography. As a result, there are four options for the implementation of the diagnoser into the SoA via MDA. This paper reports on a plugin tool developed for Oracle JDeveloper which applies MDA to create these four possible implementations and compares their performance with the help of a case study.",2008,0, 4167,Delay-fault diagnosis using timing information,"In modern technologies, process variations can be quite substantial, often causing design timing failures. It is essential that those errors be correctly and quickly diagnosed. Unfortunately, the resolution of the existing delay-fault diagnostic methodologies is still unsatisfactory. In this paper, the feasibility of using the circuit timing information to guide the delay-fault diagnosis is investigated. A novel and efficient diagnostic approach based on delay window propagation (DWP) is proposed to achieve significantly better diagnostic results than those of an existing commercial delay-fault diagnostic tool. Besides locating the source of the timing errors, for each identified candidate the proposed method determines the most probable delay defect size. The experimental results indicate that the new method diagnoses timing faults with very good resolution.",2005,0, 4168,Robust Partial Volume Segmentation with Bias Field Correction in Brain MRI,"In MR imaging, image noise, bias field, and the partial volume effect are adverse phenomena that increase inter-tissue overlapping and hamper quantitative analysis. This study provides a powerful fully automated classification method, which combines bias field correction and PV segmentation. The method has been validated on simulated and real MR images for which gold standard segmentation is available. The experimental results show that the proposed method is more accurate and robust than currently available models",2006,0, 4169,Color Correction Preprocessing for Multiview Video Coding,"In multiview video, a number of cameras capture the same scene from different viewpoints. There can be significant variations in the color of views captured with different cameras, which negatively affects performance when the videos are compressed with inter-view prediction. In this letter, a method is proposed for correcting the color of multiview video sets as a preprocessing step to compression. Unlike previous work, where one of the captured views is used as the color reference, we correct all views to match the average color of the set of views. Block-based disparity estimation is used to find matching points between all views in the video set, and the average color is calculated for these matching points. A least-squares regression is performed for each view to find a function that will make the view most closely match the average color.
Experimental results show that when multiview video is compressed with the joint multiview video model, the proposed method increases compression efficiency by up to 1.0 dB in luma peak signal-to-noise ratio (PSNR) compared to compressing the original uncorrected video.",2009,0, 4170,A colour correction preprocessing method for multiview video coding,"In multiview video, a number of cameras capture the same scene from different viewpoints. There can be significant variations in the colour of views captured with different cameras, which negatively affects performance when the videos are compressed with inter-view prediction. In this paper, a method is proposed for correcting the colour of multiview video sets as a preprocessing step to compression. The corrected YUV values of a pixel are expressed as a weighted sum of the original YUV values. Disparity estimation is used to find matching points between a reference view and the view being corrected. A least squares regression is performed on these sets of matching points to find the optimal weight parameters that will make the current view most closely match the reference. Experimental results show that the proposed method produces colours that closely match the reference view. Furthermore, when multiview video is compressed with H.264 using inter-view prediction, the proposed method increases compression efficiency by up to 1.0 dB compared to compressing the original uncorrected video.",2008,0, 4171,A novel ray-space based color correction algorithm for multi-view video,"In multi-view video, color inconsistency among different views always exists because of imperfect camera calibration, CCD noise, etc. Since color inconsistency greatly reduces the coding efficiency and rendering quality of multi-view video, a novel ray-space based color correction algorithm is proposed in this paper. Firstly, for each epipolar plane image (EPI) in the ray-space domain, feature points are extracted to form a corresponding feature EPI (FEPI). Secondly, the Radon transform is applied to each FEPI to detect corresponding points from different views and the average color is calculated from the detected corresponding points. Finally, for each viewpoint image, the optimal color correction matrix is calculated by minimizing the error energy between the color of the current view and the average color based on the least-square-error criterion. Experimental results show that the proposed algorithm greatly improves the color consistency among different views. Moreover, the coding efficiency of the corrected multi-view images is greatly improved compared to that of the original ones and the ones corrected by the histogram matching method.",2009,0, 4172,A Tool for Automatic Defect Detection in Models Used in Model-Driven Engineering,"In the Model-Driven Engineering (MDE) field, the quality assurance of the involved models is fundamental for performing correct model transformations and generating final software applications. To evaluate the quality of models, defect detection is usually performed by means of reading techniques that are manually applied. Thus, new approaches to automate the defect detection in models are needed. To fulfill this need, this paper presents a tool that implements a novel approach for automatic defect detection, which is based on a model-based functional size measurement procedure. This tool detects defects related to the correctness and the consistency of the models.
Thus, our contribution lies in the new approach presented and its automation for the detection of defects in MDE environments.",2010,0, 4173,Distributed Fault Diagnosis Using Dependency Modeling without Revealing Subsystem Details,"In the past decade, researchers have studied how to model complex systems for diagnostics using multi-signal digraphs, and many algorithms have been proposed to perform inference on this graphical model. The multi-signal dependency model can be applied under a single-agent paradigm or a cooperative multi-agent paradigm. Under the multi-agent paradigm, we have developed a distributed diagnosis algorithm which considered perfect tests and assembled dependency graphs at a central location (by knowing the model of each agent). However, each subsystem dependency model may be constructed by an independent modeler who will be unlikely to reveal any proprietary information through the dependency model. So the preferred mode of model development will result in independent subsystem models that need to interact during operation in a distributed framework while preserving their integrity and proprietary nature. To meet these requirements, it is desirable not to force each agent to reveal its dependency graph structure. In addition, due to improper setup, operator error, electromagnetic interference, or aliasing inherent in the signature analysis, the nature of tests may be unreliable (imperfect). In this paper, we will extend our distributed diagnosis algorithms in three areas: 1) dependency modeling without revealing subsystem dependencies; 2) handling unreliable tests; 3) distributed multiple fault diagnosis.",2008,0, 4174,Integrating Fault Recovery and Quality of Security in Real-Time Systems,"In the past five years, mandatory security requirements and fault tolerance have become critical criteria for most real-time systems. Although many conventional fault-tolerant or security approaches were investigated and applied to real-time systems, most existing schemes only addressed either security demands while ignoring the fault-tolerance requirements, or vice versa. To bridge this technology gap in real-time systems, in this paper we propose a way of integrating fault recovery and confidentiality services. The novel integration of security and fault recovery makes it possible to implement next-generation real-time systems with high reliability and quality of security. Experimental results from real-world applications show that our approach can significantly improve security over the conventional approaches by up to 661.56% while providing an efficient means of fault tolerance.",2007,0, 4175,Constraint net based error recovery for high-frequency information of remote sensing image in JPEG2000 transmission,"In the transmission of JPEG2000-coded remote sensing images (RSI), errors occurring in the codestream may result in the loss of the high-frequency information of the RSI. A new method based on a constraint net (CN) is proposed for error recovery of the high-frequency information. The CN of a wavelet coefficient block represents the relationships between each coefficient and its neighborhoods. The wavelet coefficient block is proved to be determined by its CN and the surrounding neighboring coefficients. By using the inter-band correlation and the preservation property of edge features and texture structures in high-resolution and low-resolution subbands, the CN of the damaged code-block that causes the loss of high-frequency information is computed.
Together with the surrounding neighboring coefficients, the CN is used to recover the damaged code-block. Experimental results show the high-frequency information is well recovered and the quality of the reconstructed image is significantly improved",2006,0, 4176,Frame Based Error Concealment in H.264/AVC by Refined Motion Prediction,"In the transmission of low bit-rate video bit-streams, a packet loss usually results in a whole frame loss, and frame based error concealment (FBEC) becomes a necessary technique for video decoders in many real-time applications. In this paper, we propose an FBEC method to improve the visual quality in H.264/AVC decoders. The lost frames are reconstructed by predicting motion vectors from neighboring macro-blocks of the previous frames. With the proposed algorithm, the decoding performance measured in PSNR is around 1 dB higher than that of JM10.0. The subjective quality also improves significantly. This method has little overhead in extra computing components, and may be easily implemented in hardware",2006,0, 4177,Error rates of M-ary signals with multichannel reception in Nakagami-m fading channels,"In this letter, we present closed-form expressions for the exact average symbol-error rate (SER) of M-ary modulations with multichannel reception over Nakagami-m fading channels. The derived expressions extend already available results for the nondiversity case to maximal-ratio combining (MRC) and postdetection equal-gain combining (EGC) diversity systems. The average SERs are given in terms of Lauricella's multivariate hypergeometric function FD(n). This function exhibits a finite integral representation that can be used for fast and accurate numerical computation of the derived expressions",2006,0, 4178,On the error probability of binary and M-ary signals in Nakagami-m fading channels,"In this letter, we present new closed-form formulas for the exact average symbol-error rate (SER) of binary and M-ary signals over Nakagami-m fading channels with arbitrary fading index m. Using the well-known moment generating function-based analysis approach, we express the average SER in terms of higher transcendental functions such as the Gauss hypergeometric function, the Appell hypergeometric function, or the Lauricella function. The results are generally applicable to arbitrary real-valued m. Furthermore, with the aid of reduction formulas of hypergeometric functions, we show previously published results for Rayleigh fading (m=1) as special cases of our expressions.",2004,0, 4179,Fault detection in a belt-drive system using a proportional reduced order observer,In this paper a fault detection method is proposed to detect the belt breakdown in a belt-drive system where it is assumed that a DC motor drives an inertial load through a belt. The proposed approach is based on a proportional reduced order observer designed using differential algebraic techniques. Experimental results are given to evaluate the proposed approach.,2004,0, 4180,Fault Tolerant and Adaptive GPS Attitude Determination System,"In this paper a fault tolerant platform for GPS attitude determination is proposed. The algorithm encompasses speed, adaptability and performance as its key objectives and also deals with single hard errors (SHE) from the fault tolerance perspective. The technique is based on the ambiguity function method (AFM) but overcomes restrictions and computational overheads incurred by existing techniques such as AFM.
The adaptation of the GPS architecture is done by using a fine-grained (cellular) parallel genetic algorithm to compute the attitude parameters more efficiently in terms of speed and performance. The algorithm also has the ability to efficiently search the complex search space imposed by the problem, in addition to being immune to cycle slips compared to other conventional methods.",2009,0, 4181,Categorization of minimum error forecasting zones using a geostatistic wind model,"In this paper a geostatistic wind direction model is applied to trace a wind speed map, based on data from official measurement weather stations distributed within the region of Andalucia, Spain. Each station's performance is assessed by comparing real measurements to those resulting from the linear interpolation of the rest. Once an error is associated with the station, the error is drawn on a map, in which minimum error zones can be delimited. Frequency and wind speed in each direction are the magnitudes of interest to get a first categorization of the wind resources associated with the region. The interest of the method lies in the possibility of forecasting everywhere within the region with an error inside the tolerable margins.",2009,0, 4182,Model-based fault detection in induction motors,"In this paper a model-based fault detection method for induction motors is presented. A new filtering technique based on Unscented Kalman filters and Extended Kalman filters is utilized as a state estimation tool for broken-bar detection in induction motors. Using the merits of these recent nonlinear estimation tools, UKF and EKF, the rotor resistance of an induction motor is estimated using only the sensed stator currents and voltages. In order to compare the estimation performances of the EKF and UKF, both observers are designed for the same motor model and run with the same covariance matrices under the same conditions. The results show the superiority of the UKF over the EKF in highly nonlinear systems, as it provides better estimates of the rotor resistance, which is most critical for rotor fault detection.",2010,0, 4183,Autonomous fault recovery technology for continuous service in Distributed VoD system,"In the video on demand (VoD) service, users request heterogeneous video quality based on their preferences and usage environment, which are dynamically changing. On the other hand, service providers need to provide a variety of services with minimum total storage volume in the system because the volume of the video data is huge. In addition, considering the characteristics of the application, services must be distributed without stopping playback or deteriorating video quality. However, conventional VoD systems built on redundant content servers and centralized management cannot satisfy these requirements. The Autonomous VoD system (AVoDS) is proposed to meet these requirements. This system is based on the faded information field (FIF) architecture, in which each node collaborates with the other nodes for service provision and utilization, supported by mobile agents. In this system, layered streaming video data are employed and each layer is distributed on a different service layer. Therefore the system can provide service with adaptability and minimum total storage volume. In this paper, the implementation of a prototype of AVoDS is introduced and an autonomous fault recovery technology is proposed for continuous service provision.
The fault of a node is autonomously detected by the connected nodes, and the fault recovery processes for each user are widely distributed to other nodes in the system. The effectiveness of the proposed technology is proved through simulation",2007,0, 4184,Designing equally fault-tolerant configurations for kinematically redundant manipulators,"In this article, the authors examine the problem of designing nominal manipulator Jacobians that are optimally fault-tolerant to multiple joint failures. In this work, optimality is defined in terms of the worst case relative manipulability index. Building on previous work, it is shown that for a robot manipulator working in a three-dimensional workspace to be equally fault-tolerant to any two simultaneous joint failures, the manipulator must have precisely six degrees of freedom. A corresponding family of Jacobians with this property is identified. It is also shown that the two-dimensional workspace problem has no such solution.",2009,0, 4185,Video error concealment based on data hiding in the 3D wavelet domain,In this contribution a novel method for video error control and concealment is proposed. The system is based on the use of data hiding techniques for transmitting the extra information needed to recover the data lost during transmission. The embedded data is a subsampled binary version of each key frame of the shot. The embedding is performed by using the Quantization Index Modulation (QIM) scheme in the 3D wavelet transform domain. The experimental results show the effectiveness of the proposed approach.,2010,0, 4186,Exact error probability analysis of rectangular QAM for single- and multichannel reception in nakagami-m fading channels,"In this contribution, we derive exact closed-form expressions for the average symbol error probability (SEP) of arbitrary rectangular quadrature amplitude modulation (QAM) for single- and multichannel diversity reception over independent but not necessarily identically distributed Nakagami-m fading channels. The diversity branches may hence exhibit identical or distinct power levels and their associated Nakagami indexes need not be the same. Our work extends previous results pertaining to nondiversity reception of M-ary rectangular QAM over Rayleigh fading channels and multichannel reception of M-ary square QAM over Nakagami-m fading channels. For a given number L of diversity branches and a corresponding set of arbitrary real-valued Nakagami indexes not less than 1/2, our SEP results are expressed in terms of Gauss's hypergeometric function 2F1 and Lauricella's multivariate hypergeometric function FD(L) of L variables, both of which can be efficiently evaluated using standard numerical software.",2009,0, 4187,Study of the Test Flow Optimization Method in Radar Fault Isolation,"In order to optimize the test flow after the default flow has been modified by hand, a new software framework for radar fault isolation was illustrated. This framework separated all mapping algorithms from the test flow graph so as to modify the flow and to insert mapping algorithms dynamically in the testing process. Based on this framework, an optimization method for the test flow was proposed and studied. By defining an objective function, we could evaluate all candidate test flows so as to obtain an optimized flow.
An example explained how to search for the flow among the candidate flows.",2009,0, 4188,Ajax-based information publishing system for traveling wave fault location,"In order to issue high-precision fault location results in real time for fault treatment, a Web-based fault information publishing system for power grid traveling wave fault location is designed in this paper. The system adopts a three-layer framework based on the Browser/Server (B/S) mode, and uses Microsoft IIS 5.1 as the Web server. The Ajax (Asynchronous JavaScript and XML) technique is applied to release the location results in real time, which can create more dynamic and highly available web user interfaces (close to a local desktop application). In addition, a detailed study is carried out on data security. The software has been applied in a real electric power network; practical operation results show that the software can release fault information dynamically and share fault data safely.",2009,0, 4189,Application of PNN to Fault Diagnosis of IC Engine,"In order to simplify the data stream of automobile diagnostic instruments, a fault diagnosis method for internal combustion (IC) engines based on a probabilistic neural network (PNN) is presented. First a PNN model was established, and then, based on samples from a Jetta ATK engine, the model was trained and simulated with a number of sample sets of symptoms and troubles. At the same time, a comparison was made between the PNN and a backpropagation (BP) network. The simulation results demonstrated that the PNN model is more feasible and successful than the BP network model and could simplify the data stream of diagnostic instruments.",2009,0, 4190,Research and Assessment of the Reliability of a Fault Tolerant Model Using AADL,"In order to solve the problem of assessing the reliability of fault-tolerant systems, the work in this paper is devoted to analyzing a subsystem of an ATC (air traffic control) system and using AADL (architecture analysis and design language) to build its model. After describing the various software and hardware error states as well as error propagation from hardware to software, the work builds the AADL error model and converts it to a GSPN (generalized stochastic Petri net). Using current Petri net technology to assess the reliability of the fault-tolerant system, with ATC as the background, this paper obtains good experimental results.",2008,0, 4191,Fault-tolerant communication over Micronmesh NOC with Micron Message-Passing protocol,"In the future, multi-processor system-on-chip (MPSoC) platforms will become more vulnerable to transient and intermittent faults due to physical-level problems of VLSI technologies. This sets new requirements on the fault tolerance of the messaging layer software which applications use for communication, because the faults make the operation of the Network-on-Chip (NoC) hardware of the MPSoCs less reliable. This paper presents the Micron Message-Passing (MMP) Protocol, which is a light-weight protocol designed for improving the fault tolerance of the messaging layer of MPSoCs where the Micronmesh NoC is used. Its fault tolerance is implemented by watchdog timers and cyclic redundancy checks (CRC), which are usable for detecting packet losses, communication deadlocks, and bit errors. These three functionalities are necessary, because without them the software executed on the MPSoCs is not able to detect the faults and recover from them.
This paper also presents how the MMP Protocol can be used for implementing applications which are able to recover from communication faults.",2009,0, 4192,An intelligent and efficient fault location and diagnosis scheme for radial distribution systems,"In this paper, an effective fault location algorithm and an intelligent fault diagnosis scheme are proposed. The proposed scheme first identifies fault locations using an iterative estimation of load and fault current at each line section. Then the actual location is identified by applying current pattern matching rules. If necessary, a comparison of the interrupted load with the actual load follows and generates the final diagnosis decision. The effect of load uncertainty and fault resistance has been carefully investigated through simulation results that turn out to be very satisfactory.",2004,0, 4193,An Error Concealment Scheme for Entire Frame Losses for H.264/AVC,"In this paper, an error concealment scheme is proposed to conceal an entirely lost frame in a compressed video bitstream due to errors introduced during transmission. The proposed scheme targets low bit rate video transmission applications using H.264/AVC. The motion field of the lost frame is first reconstructed by copying the co-located motion vectors and reference indices from the last decoded reference frame. After the motion field estimation of the missing frame, motion compensation is performed to reconstruct the frame. This technique reuses existing modules of the video decoder and it does not incur extra complexity compared to decoding a normal frame. It has also been adopted as a non-normative decoder option to the JM reference software at the JVT meeting in Poznan, Poland in July 2005 [1] and has been incorporated into the SA4 video ad hoc group's toolkit at the 3GPP meeting in Paris [2] in September 2005. Simulation results will show its improved performance over other simple error concealment schemes such as ""frame copy,"" both subjectively and objectively, without significant complexity overhead.",2006,0, 4194,Error resilient macroblock rate control for H.264/AVC video coding,"In this paper, an error resilient rate control scheme for the H.264/AVC standard is proposed. This scheme differs from traditional rate control schemes in that macroblock mode decisions are not made only to minimize their rate-distortion cost, but also take into account that the bitstream will have to be transmitted through an error-prone network. Since channel errors will probably occur, error propagation due to predictive coding should be mitigated by adequate Intra coding refreshes. The proposed scheme works by comparing the rate-distortion cost of coding a macroblock in Intra and Inter modes: if the cost of Intra coding is only slightly larger than the cost of Inter coding, the coding mode is changed to Intra, thus reducing error propagation. Additionally, a cyclic Intra refresh is also applied to guarantee that all macroblocks are eventually refreshed.
The proposed scheme outperforms the H.264/AVC reference software, for typical test sequences, for error-free transmission and several packet loss rates.",2008,0, 4195,Fuzzy Expert System for Defect Classification for Non-Destructive Evaluation of Petroleum Pipes,"In this paper, an expert system is outlined to classify the defects in metallic petroleum pipelines using acoustic techniques with non-destructive evaluation (NDE) protocols. The proposed system maps the quantitative defect data through a novel perception-based kernel, which has its roots in multidimensional fuzzy set theory, mapping the relative weights given to various features, mathematical or statistical, onto the decision surface to deduce the type of the defect. The system has a centralized database which holds the defect information in the form of known and calculated features. The known features and their quantitative representations are used to initialize the database. Then experiments are conducted on known defects and the collected experimental data is modeled into autoregressive process models using a state-of-the-art l-infinity deconvolution algorithm. With each feature set, a classifier tag is associated that assigns a class number to that defect. The classifier tag is then used to classify any new data using the fuzzy classifier.",2007,0, 4196,A No DC-Gain Error Small-Signal Model for the Zero-Voltage-Switching Phase-Shift-Modulated Full-Bridge DC-DC Converter,"In this paper, an improved small-signal model for the zero-voltage-switching phase-shift-modulated full-bridge converter, which eliminates the DC-gain errors of its line-to-output and control-to-output model transfer functions, is presented. A DC-gain and a frequency response comparison among small-signal models are evaluated. The model presented here proves to be accurate for different design parameters, while being in a simple s-domain polynomial-ratio form. Experimental frequency responses show similar results to the transfer function responses of the improved model presented here",2006,0, 4197,Automated Deployment of Distributed Software Components with Fault Tolerance Guarantees,"In this paper, an MILP-based methodology is presented that allows the deployment of a set of software components over a set of computing resources to be optimized with respect to fault tolerance and response times. The MILP model takes into account the reliability and performance parameters of hardware nodes and links, and optimizes a (configurable) trade-off between reliability and performance by replicating software components where necessary and finding an optimal deployment for them. The complete system can be modeled using UML component diagrams and activity diagrams, and an algorithm is presented to transform the UML model to the MILP model. The resulting deployment can then be fed back into the UML model. The applicability of the approach is demonstrated through a case study.",2008,0, 4198,Spatial shape error concealment for object-based image and video coding,"In this paper, an original spatial shape error-concealment technique, to be used in the context of object-based image and video coding schemes, is proposed. In this technique, it is assumed that the shape of the corrupted object at hand is in the form of a binary alpha plane, in which some of the shape data is missing due to channel errors. From this alpha plane, a contour corresponding to the border of the object can be extracted.
However, due to errors, some parts of the contour will be missing and, therefore, the contour will be broken. The proposed technique relies on the interpolation of the missing contours with Bezier curves, which is done based on the available surrounding contours. After all the missing parts of the contour have been interpolated, the concealed alpha plane can be easily reconstructed from the fully recovered contour and used instead of the erroneous one, improving the final subjective impact.",2004,0, 4199,An Unequal Error Protection Framework for DVB-H and Its Application to Video Streaming,"In this paper, an unequal error protection (UEP) scheme is proposed for DVB-H based on its link layer forward error correction. The scheme preserves the existing DVB-H protocol stack and maintains its original time diversity. The UEP scheme is further applied to the DVB-H video streaming service. H.264 based temporal scalable coding is employed to facilitate UEP application and provides a set of UEP configurations to accommodate different channel conditions. Experimental results show graceful video quality degradation provided by the proposed framework.",2008,0, 4200,Using run-time reconfiguration for fault injection in hardware prototypes,"In this paper, approaches using run-time reconfiguration (RTR) for fault injection in programmable systems are introduced. In FPGA-based systems an important characteristic is the time to reconfigure the hardware. With novel FPGA families (e.g. Virtex, AT6000) it is possible to reconfigure the hardware partially at run-time. Important time savings can be achieved when taking advantage of this characteristic for fault injection, as only a small part of the device must be reconfigured",2000,0,661 4201,Using run-time reconfiguration for fault injection applications,"In this paper, approaches using run-time reconfiguration for fault injection in programmable systems are introduced. In FPGA-based systems an important characteristic is the time to reconfigure the hardware, including re-synthesis, place and route and finally bitstream downloading. Modifications can be carried out at low level, directly in the bitstream, so that re-synthesizing the description can be avoided to inject new faults. Moreover, with some FPGA families (e.g. Virtex or AT6000), it is possible to reconfigure the hardware partially at run-time. Important time savings can be achieved when taking advantage of these features. These characteristics fit fault injection well, where the injection necessitates the reconfiguration of only a few resources of the device with a few modifications. Time gains can vary depending on the number and kind of faults to be injected and the device used for the experiments. The experiments show that this approach can be several orders of magnitude faster than the implementation using Compile-Time Reconfiguration",2001,0, 4202,Fault diagnosis for substation automation based on Petri nets and coding theory,"In this paper, a coding-based methodology for fault monitoring of discrete event systems is applied to the electric power system. Using Petri nets and coding theory to perform the fault diagnosis is further studied. One feeder of the substation is modeled by a Petri net using real-time discrete events, and redundant places are introduced to form structural redundancy and facilitate fault diagnosis. Based on the previous work, all possible failures of the feeder are analyzed and a new method to model these failures is given.
Then a parity check from coding theory is used to form an encoded Petri net; therefore, faults can be detected and identified through the error syndrome. A method to construct the key matrix, the generator matrix, is presented. The simulation is simple, fast and shows very high accuracy when combined with error-correction theory. The simulation results show that the scheme has good performance in real-time substation fault diagnosis.",2004,0, 4203,Error Rate Performance of Multilevel Signals with Coherent Detection,"In this paper, coherent detection for multilevel correlated signaling sets in additive white Gaussian noise is addressed. From the decision rule, a general analytical expression for the symbol error probability (SEP) is derived, which is in the form of a single integral. Our new result is not only in agreement with a known equivalent expression, but it also has a much simpler analytical form. The structure of the correlation matrix of the signaling set under consideration is quite generic, including various known signaling sets as special cases. Hence the derived SEP expression is useful for analyzing the performance of various coherent modulation schemes such as multilevel phase-shift and frequency-shift keying. Specific correlation structures which minimize the SEP are also studied. Based on eigendecomposition or LU decomposition, generic methods for constructing a correlated signaling set for any correlation matrix under consideration are also provided.",2008,0, 4204,Irregular Puncturing for Convolutional Codes and the Application to Unequal Error Protection,"In this paper, convolutional codes are studied for puncturing with irregular puncturing periods. Irregular puncturing can generate punctured codes with more available rates and better bit-error-rate performance compared with the conventional scheme with a single puncturing period. For the application to unequal error protection, a new multiplexing scheme is also proposed for rate-compatible punctured convolutional (RCPC) codes which can guarantee smooth transition between rates without extra overheads. Finally, families of good RCPC codes with irregular puncturing tables are given by a computer search",2006,0, 4205,Online fault diagnosis of hybrid electric vehicles based on embedded system,"In this paper, an embedded system is used for the online fault diagnosis of hybrid electric vehicles (HEV). This system takes a 32-bit embedded processor as its hardware platform, customizes a WinCE6.0 operating system and uses Embedded Visual C++ (EVC) as the tool to design the embedded application. Through this online diagnosis device, the failure phenomena, failure causes and failure rules of hybrid electric vehicles were put forward. CAN communication and the SAE J1939 protocol were applied in this system. To meet the need for real-time fault diagnosis, a new layered fault-diagnosis method and a reasoning algorithm based on event triggering were presented. Meanwhile, the degree of confidence was computed by uncertainty reasoning rules. Testing results show that the online fault diagnosis equipment is very accurate and effective.",2010,0, 4206,Identification error analysis of block orthogonal modulations for cognitive radio,"In this paper, error probability analysis is presented for block orthogonal modulation identification using General Orthogonal Modulation (GOM). In an N-dimensional space, GOM is defined by a transfer matrix obtained by N-dimensional rotations.
The modulation parameters are rotation planes and rotation angles.",2010,0, 4207,Error resilient MQ coder and MAP JPEG 2000 decoding,"In this paper a novel error resilient MQ coder for reliable JPEG 2000 image delivery is designed. The proposed coder uses a forbidden symbol in order to force a given amount of redundancy in the codestream. At the decoder side, the presence of the forbidden symbol allows for powerful error correction. Moreover the added redundancy can be easily controlled and the proposed coder is kept backward compatible with MQ. In this work excellent improvements in the case of image transmission across both BSC and AWGN channels are obtained by means of a maximum a posteriori estimation technique.",2004,0, 4208,Flow Control Using a Combination of Robust and NeuroFuzzy Controllers in Feedback Error Learning Framework,"In this paper a novel hybrid strategy is employed in order to improve the controller performance. The main idea is the combination of classical and intelligent controllers. Feedback error learning (FEL), as a two degrees of freedom (2DOF) control scheme, has been introduced based on this idea. This paper takes a step ahead of traditional FEL schemes, which combine a PID controller with an intelligent inverse-based controller. We introduce a robust FEL scheme in which the robust controller replaces the conventional PID controller. The robust controller is designed based on the H-infinity approach and the intelligent controller has an ANFIS structure. This novel algorithm is implemented in a flow plant to track the desired value of flow and reject unwanted disturbances in the practical system. The results demonstrate the practical power of the novel method and are compared with other control schemes.",2006,0, 4209,A Novel Alarm Processing and Fault Diagnosis Expert System Based on BNF Rules,"In this paper a novel power system alarm processing and fault diagnosis expert system (AFDES) is presented. For the application of an expert system (ES), there is a bottleneck problem in obtaining a mature rule-base. Backus-Naur Form (BNF) is used to design an expert rule frame with which operators can write and add rules to the rule-base in the ES using their own defined language. In this way a mature rule-base can be set up gradually in the ES of a power system. After a brief introduction to using BNF to construct the rule-base for fault diagnosis, the system configuration, alarm processing and fault diagnosis are described. Tactics for alarm message pretreatment are also introduced to improve the efficiency of the reasoning engine, including alarm message filtering and classification according to priority and synthesis level. Finally, a case is studied and analyzed to show how to forecast, as a precaution, the key subsequent fault which may happen right after the former trip, according to the alarm messages that have occurred",2005,0, 4210,Modeling of Non-Salient PM Synchronous Machines under Stator Winding Inter-turn Fault Condition: Dynamic Model - FEM Model,"In this paper a simple dynamic model for a rotor surface mounted PMSM with an inter-turn winding fault is derived using reference frame theory. Finite element analysis is used for parameter determination and for studying the influence of the pole number. The proposed model is validated by time-stepping FEM analysis.
The dynamic model results exhibit the same trend as predicted by FEM analysis for different fault insulation resistances.",2007,0, 4211,Safe and fault-tolerant control with logical circuits,In this paper a specific redundancy method for improving the safety and reliability of safety-critical control systems using logical circuits is suggested. The method is proven mathematically and two configurations for practical implementation are shown.,2003,0, 4212,Planar Microwave Bandpass Filters with Defected Ground Resonators,"In this paper a study of some planar microwave bandpass filters composed of defected ground resonators is presented. Different kinds of couplings between two defected ground resonators are investigated: electric coupling, magnetic coupling and two different mixed couplings. The values of the coupling coefficients are extracted from the simulation results, obtained by using full-wave EM-field simulation software. On the basis of this study, some second-order coupled planar microwave bandpass Chebyshev filters are designed. The simulated performances of these novel filter structures are very close to the filter requirements, thus validating the design method and its results.",2006,0, 4213,"Imprecise Computation Model, synchronous periodic real-time task sets and total weighted error","In this paper an analysis of the Imprecise Computation Model with respect to synchronous periodic, real-time task sets is given. In the analysis, earliest deadline first (EDF) and rate monotonic (RM) scheduling algorithms for mandatory subtask sets are assumed. Two different approaches are considered. In the first approach the mandatory subtask set is modified in such a way that mandatory execution times are extended. In the second approach the mandatory and the optional subtask sets are separately scheduled. The solution which minimizes the total weighted error is given for both cases. A single preemptive processor system is assumed.",2010,0, 4214,How to characterize the problem of SEU in processors & representative errors observed on flight,"In this paper, representative examples of anomalies observed in systems operating on-board satellites as a consequence of the effects of radiation on integrated circuits are first summarized, showing that single event upsets (SEU) are a major concern. An approach to predict the sensitivity to SEUs of a software application running on a processor-based architecture is then proposed. It is based on fault injection experiments allowing estimation of the average rate of program dysfunctions per upset. This error rate, if combined with static cross-section figures obtained from radiation ground testing, provides an estimation of the target program error rate. The efficiency of this two-step approach was demonstrated by results obtained when applying it to various processors.",2005,0, 4215,Bit Error Model of Wireless Channel Based on Chaos Theory,"In this paper chaos theories are applied to analyze the wireless channel, and the chaotic characteristic of error bits in wireless channels is found. A novel and simple wireless discrete channel model, the simple chaos (SC) model based on chaotic theories, is introduced and analyzed in detail.
Through theoretical analysis and numerical simulation, the corresponding relationship between the model's parameters and different communication channels is analyzed, proving the model's adaptability to the burst characteristics of wireless channels.",2008,0, 4216,Coincidence Summing Corrections in Gamma-Ray Spectrometry Using GEANT4 Code,"In this paper coincidence summing corrections in gamma-ray spectrometry have been analyzed in depth. The experimental setup included an n-type germanium detector and two efficient measurement geometries (Marinelli beaker and air filter) calibrated with multi-gamma radionuclides in the energy range of 40-1500 keV. Monte Carlo simulations were carried out using the GEANT4 code in order to develop this work. Firstly, an optimization of the detector dimensions was performed to obtain the full-energy peak efficiency curve. Next, the simulation of the decay schemes for the involved multi-gamma radionuclides was appended to the GEANT4 code, including beta, alpha and EC decay, together with the subsequent relaxation process consisting of a cascade of fluorescent X-rays and Auger electrons. In order to test the simulations, a comparison between an experimental and a calculated spectrum of a 133Ba point source was carried out. Finally, coincidence summing effects for gamma-ray emission and X-ray, Auger electron, beta and EC electron were studied in detail.",2009,0, 4217,Cross-layer issues and forward error correction for wireless video broadcast,"In this paper different means for adding reliability in different protocol layers are discussed, in order to reliably broadcast multimedia data within existing wireless networks. In particular, we investigate the performance of additional forward error correction in the physical layer, the radio link control layer, and the RTP layer for multimedia broadcast and multicast services over GERAN systems. Advantages and drawbacks when applying means for reliability in different layers are shown. We introduce a simple receiver modification, referred to as the permeable-layer receiver (PLR), which exploits traditionally useless information at the receiver, while the transmitter is kept unchanged. Significant performance gains are reported. Finally, the application to H.264/AVC based wireless video broadcast is discussed and the performance of different system designs for video transmission is shown.",2005,0, 4218,Detecting Single and Multiple Faults Using Intelligent DSP and Agents,"In this paper intelligent agents and DSP techniques are integrated to detect single and multiple faults in electrical circuits. Agents are used to model the AC electrical circuit. A DSP engine is embedded into the agents to analyse the signals, i.e. the energy transfer between the physical components. An AC to DC rectifier circuit is chosen as a test-bed for the proposed solutions.",2006,0, 4219,Fault Localization in Underground Distribution Network Using Modeling and Simulation,"In this paper, on-line fault localization in radial underground distribution networks is considered. Fault localization can be done on the basis of transient recordings of currents and voltages in HV/MV substations during a fault. The estimated fault impedance is compared with the results of short circuit calculations for the considered distribution feeder.
This approach has been examined by fault simulations on models of real distribution feeders in ED Tuzla, assuming isolated and grounded networks.",2005,0, 4220,Application of rough set theory in network fault diagnosis,"In this paper rough set theory is researched and applied to computer network fault diagnosis. Original MIB (management information base) data from the network which reflect network faults are collected first, and a reduction algorithm based on attribute significance and attribute frequency is applied to the MIB data, removing inconsistent or erroneous MIB data. Based on the attribute core and a user preference attribute set, the algorithm makes use not only of the advantages of these two algorithms, but also of the universality of the core, user background knowledge, and domain experience. At the same time, the minimal support degree and minimal belief degree are introduced into rough set theory to discover decision rules.",2005,0, 4221,Enhancement of Bug Tracking Tools; the Debugger,"In this paper test documentation and effort estimations have been investigated as well as Bug Tracking Tools. Four different existing Bug Tracking Tools have been compared with each other along with their features and drawbacks. Then a new one, the Debugger, has been proposed. Testing is one of the most important tasks in developing any software. According to the experts, time and money play the crucial role in the testing process; if someone wants to reduce the cost of developing a new project, he should take care of testing. In order to do this, documenting the test results could help the developer to classify them into different categories, such as different process models and different types of errors in each development life cycle phase; then, by having these classified results, it would be easy to estimate the future test cases in order to reduce the cost of the testing phase and eventually the development cost in similar upcoming projects.",2010,0, 4222,Detection and correction of limit cycle oscillations in second-order recursive digital filter,"In this paper the effects of limit cycle oscillations in a recursive second-order digital filter are studied and the remedies for curing the problems of limit cycle oscillations are described. Limit cycle deletion using a state space representation for a second-order system, implemented with a finite word-length register, is depicted. A necessary and sufficient condition to prevent limit cycle oscillations has also been described.",2005,0, 4223,Error detection in addition chain based ECC Point Multiplication,"In this paper the problem of error detection in elliptic curve point multiplication is faced. Elliptic curve point multiplication is often used to design cryptographic algorithms that use fewer bits than other methods with the same security level. One of the modes used to break the security of a cryptosystem is the injection of a fault in the hardware realizing the cryptographic algorithm. Therefore, to avoid this kind of attack, it is very important to develop cryptosystems that are able to detect errors induced by a fault.
The paper takes into account the algorithm for elliptic curve point multiplication based on a sequence of additions called an “addition chain” and shows how suitable modifications of the algorithms used for computing the point multiplication add the error detection property to the algorithm.",2009,0, 4224,Wood Defect Detection using Grayscale Images and an Optimized Feature Set,"In this paper we address the issue of detecting defects in wood using features extracted from grayscale images. The feature set proposed here is based on the concept of texture and it is computed from the co-occurrence matrices. The features provide measures of properties such as smoothness, coarseness, and regularity. Comparative experiments using a color image based feature set extracted from percentile histograms are carried out to demonstrate the efficiency of the proposed feature set. Two different learning paradigms, neural networks and support vector machines, and a feature selection algorithm based on multi-objective genetic algorithms were considered in our experiments. The experimental results show that after feature selection, the grayscale image based feature set achieves very competitive performance for the problem of wood defect detection relative to the color image based features.",2006,0, 4225,On the selection of unidirectional error detecting codes for self-checking circuits: area overhead and performance optimization,"In this paper we address the issue of optimizing the area overhead and performance of self-checking circuits using all unidirectional error detecting codes (AUEDCs), with no impact on the system's reliability. In particular, we propose an error detecting code selection approach that, starting from the consideration of the functional circuit topology, allows us to identify whether or not all output bits can be simultaneously erroneous, thus actually mandating the adoption of an AUEDC. We show that, differently from common expectations, this may frequently not be the case (for approximately 50% of the considered benchmarks) for all possible internal node stuck-ats, transistor stuck-ons, transistor stuck-opens and resistive bridgings. We then propose a tool that, starting from the (combinational or sequential) circuit high level description, allows us to identify whether or not this is the case and, in particular, which is the maximal number (t) of possibly simultaneously erroneous output bits. Based on this information, a lower redundancy error detecting code (e.g., a t-UEDC) is adopted, rather than an AUEDC, thus generally allowing reduced area overhead and impact on the system's performance. Such a code is automatically implemented by our developed tool, whose effectiveness has been verified for benchmark circuits and for an FPGA implemented prototype.",2005,0, 4226,A two-stage classifier for defect classification in optical media inspection,"In this paper we address the problem of inspecting optical media like compact disks and digital versatile disks. Here, defective disks have to be identified during production. For optimizing the production process, and in order to be able to decide how critical a certain defect is, the defects found have to be classified. As this has to be done online, the classification algorithm has to work very fast. With regard to speed, the well-known minimum distance classifier is usually a good choice. However, when training data are not well clustered in the feature space this classifier becomes rather unreliable.
To trade off speed and reliability we propose a two-stage algorithm. It combines the fast minimum distance classification with a reliable fuzzy k-nearest neighbor classifier. The resulting two-stage classifier is considerably faster than the fuzzy k-nearest neighbor classifier. Its classification rates are in the range of the fuzzy k-nearest neighbor classifier and far better than those of the minimum distance classifier. To evaluate the results, we compare them to the results obtained using various standard classifiers.",2002,0, 4227,Error sensitivity data structures and retransmission strategies for robust JPEG 2000 wireless imaging,"In this paper we address the problem of JPEG 2000 imaging in a wireless environment. We first define a flexible and efficient data structure for the description of the error sensitivity of different parts of a JPEG 2000 codestream or file format; the data structure is designed in such a way that it can be seamlessly integrated as the payload of a JPEG 2000 marker segment or file format box. Moreover, we investigate ARQ policies for robust packet-based JPEG 2000 image transmission over 3G mobile communication systems, and highlight how the proposed data structure can be exploited to improve the end-to-end performance.",2003,0, 4228,Sensor Localization Error Decomposition: Theory and Applications,"In this paper we consider performance characterizations of self-localization algorithms for sensor networks. The location parameters have a natural decomposition into relative configuration and centroid transformation components based on the influence of measurements and prior information in the problem. A linear representation of the transformation parameter space, which includes rotations and translations, is used for decomposition of general localization error covariance matrices. The proposed decomposition may be applied to any estimator, the posterior Cramér-Rao bound (CRB) in a Bayesian setting, or a traditional CRB. Along with the CRB itself, the relative-transformation decomposition provides insight into how external inputs affect absolute localization performance. This partitioning of error is also useful to higher level applications in a sensor network that utilize results of the localization service and must account for its uncertainty. Examples are presented and an application demonstrates the utility of relative error decomposition to the problem of angle-of-arrival estimation with sensor location uncertainty.",2007,0, 4229,Evaluating Coverage of Error Detection Logic for Soft Errors using Formal Methods,"In this paper we describe a methodology to measure exactly the quality of fault-tolerant designs by combining fault-injection in high level design (HLD) descriptions with a formal verification approach. We utilize BDD based symbolic simulation to determine the coverage of online error-detection and -correction logic. We describe an easily portable approach, which can be applied to a wide variety of multi-GHz industrial designs.",2006,0, 4230,Using agent wills to provide fault-tolerance in distributed shared memory systems,"In this paper we describe how we use mobile objects to provide distributed programs coordinating through a persistent distributed shared memory (DSM) with tolerance to sudden agent failure, and use the increasingly popular Linda-like tuple space languages as an example for implementation of the concept.
In programs coordinating and communicating through a DSM, a data structure is shared between multiple agents, and the agents update the shared structure directly. However, if an agent should suddenly fail, it is often hard for the agents to make the data structures consistent with the new application state. For example, consider a data structure that contains a list of active agents. In such a case, transactions can be used when adding and removing agent names from the list, ensuring that the data structure is consistent and does not become corrupted should an agent fail. However, if failure of the agent occurs after the name has been added, how does the application ensure the list is correct? We argue that using mobile objects we can provide wills for the agents to effectively enable them to ensure the shared data structure is application consistent, even once they have failed. We show how we have integrated the use of agent wills into a Linda system and show that we have not increased the complexity of program writing. The integration is simple and general, does not alter the underlying semantics of the operations performed in the will, and the use of mobility is transparent to the programmer.",2000,0, 4231,Dynamic observers for fault diagnosis of timed systems,"In this paper we extend the work on dynamic observers for fault diagnosis [1], [2], [3] to timed automata. We study sensor minimization problems with static observers and then address the problem of computing the most permissive dynamic observer for a system given by a timed automaton.",2010,0, 4232,Algorithms of real-time correction of the fuel map and the ignition map of a race combustion engine with spark ignition,"In this paper we have presented the design and the functioning of a programmable controller for combustion engines with spark ignition, designed for the tuning of sports cars. We have presented the method of connecting the computer with the target system, a draft of the algorithm of the engine's operation and the structure of the data used. The second part has been dedicated to the tested methods of adaptive correction of the fuel map and ignition map contents. We have presented two of the designed and used algorithms, as well as the principles and effects of their work. We have also compared their performance and utility during tuning and use of a race car engine.",2010,0, 4233,Non-uniformity correction using cosine functions basis and total variation constraint,"In this paper we introduce a new non-uniformity correction technique for receiver-dependent intensity fluctuations in NMR images. Our method was designed for lower limb images, particularly those acquired in the context of muscular dystrophy studies. The new approach was motivated by the fact that in pathological cases we cannot make assumptions about the characteristics of the various tissues in the image, which is a prerequisite and a main limitation for the numerous techniques proposed in the literature. In this work we considered a parametric model for the non-uniformity field based on a combination of cosine functions. The estimation of the parameters was done by minimizing a cost function that reduces the variance in the subcutaneous fat as well as the total variation of the non-uniformity function.
Experimental results were promising and showed the efficiency of the proposed approach.",2010,0, 4234,High-speed serial communication with error correction using 0.25 μm CMOS technology,"In this paper we propose a novel design for an autonomous high-speed serial off- and on-chip communication system which incorporates impedance tuning, error correction with a packet transfer and a parallel asynchronous interface. The constructed transmitter-receiver pair has a throughput of 5 Gbit/s. With error correction and packet transfer overhead accounted for, this construct has bandwidth of 500 ++ language. An illustrative example of the (255,239) RS code using this program shows that the speed of the decoding process is approximately three times faster than that of the inverse-free Berlekamp-Massey algorithm.",2003,0, 4252,A highly efficient error concealment scheme based on auto-regressive model for video coding,"In this paper, a highly efficient temporal error concealment scheme based on an auto-regressive (AR) model is proposed for video coding. The proposed AR based error concealment scheme includes a forward AR model for P slices, and a bi-directional AR model for B slices. First, we utilize the block matching algorithm (BMA) to select the best motions for lost blocks from the motions of available neighboring blocks. Then, the proposed AR model coefficients are computed according to the spatial neighboring pixels and their temporal-correlated pixels indicated by the selected best motions. Finally, applying the AR model, each pixel of the lost block is interpolated as a weighted summation of pixels in the reference frame along the selected best motions. Simulation results show that the performance of the proposed scheme is superior to conventional temporal error concealment methods.",2009,0, 4253,Plane to plane parallelism error evaluation based on new generation Geometrical Product Specification,"In this paper, a mathematical evaluation model of plane to plane parallelism satisfying the minimum zone conditions is developed. According to the characteristics of parallelism error evaluation, a new approach based on particle swarm optimization is proposed. Based on the uncertainty theory of the new generation Geometrical Product Specification (GPS), the result uncertainty computation model is built in order to make the result more complete and conformant to the specification. An example is presented to prove that the method of this paper can assess the parallelism error accurately and effectively. The computing efficiency and accuracy can be improved greatly by using the Particle Swarm Optimization (PSO) algorithm.",2010,0, 4254,Estimating the fault location for the transmission line from one end using MATLAB 6.5-Simulink Toolbox,"In this paper, a method for estimating the fault location on transmission lines using the positive and zero sequences of fault voltage signals is presented. The transmission line is considered first as a series impedance, and then both series and shunt impedance effects are taken into account. The effect of the value of the fault resistance has been investigated. Voltage signals are obtained using several simulation studies developed through the use of the MATLAB 6.5-Simulink Toolbox.",2006,0, 4255,A new error concealment algorithm for H.264 video transmission,"In this paper, a new error concealment algorithm for the new coding standard H.264 is presented.
The algorithm consists of a block size determination step to determine the size type of the lost block and a motion vector recovery step to find the lost motion vector from multiple reference frames. The main features of this algorithm are as follows. In the block size determination step, we propose a criterion to determine the size type of the lost block from the current frame. In the motion vector recovery step, the optimal motion vector for the lost block is chosen from multiple previous reference frames with the minimum value of the side match distortion. The proposed algorithm not only can determine the most correct mode for the lost block, but also can save much more computation time for motion vector recovery. Experimental results show that the proposed algorithm achieves a 0.47 dB improvement over the conventional VM method.",2004,0, 4256,Fuzzy Logic Based Error Concealment Algorithm for H.264 Video Transmission,"In this paper, a new error concealment algorithm is proposed for the coding standard H.264. The algorithm consists of a block size determination step to determine the size type of the lost block and a motion vector recovery step to find the lost motion vector. The former step uses a fuzzy logic method to select the size type of the lost block from the current frame. In the latter step, the optimal motion vector for the lost block is chosen from multiple previous reference frames with the minimum value of the side match distortion. The proposed algorithm determines the most correct mode for the lost block, and saves much more computation time for motion vector recovery. Experimental results show that the proposed algorithm achieves a 0.5-3 dB improvement over the conventional VM method.",2005,0, 4257,Induction motor fault diagnosis using voltage spectrum of an auxiliary winding,"In this paper, a new method for induction motor fault diagnosis is presented. It is based on the so-called voltage spectrum of an auxiliary small winding inserted between two of the stator phases. An expression for the inserted winding voltage is presented. After that, a discrete Fourier transform analyzer is required for converting the voltage signal from the time domain to the frequency domain. Simulation results carried out for defective and non-defective motors show the effectiveness of the proposed method.",2007,0, 4258,"On soft-decoding of the (24, 12, 8) extended Golay code up to six errors","In this paper, a new soft-decision decoder of the (24, 12, 8) binary extended Golay code up to six errors is proposed. First, by using the error pattern obtained from the hard decoder, the method of determining possible error patterns is developed. The emblematic probability value of each error pattern is then defined as the product of the individual bit-error probabilities corresponding to the locations of the possible error patterns. The most likely one among these error patterns is obtained by choosing the maximum of the emblematic probability values of the possible error patterns. Finally, simulation results in additive white Gaussian noise (AWGN) show that this decoder reduces the decoding complexity, although it incurs a slight loss of coding gain compared with the modified Chase II algorithm proposed by Hackett.",2009,0, 4259,Adaptively Switching Between Directional Interpolation and Region Matching for Spatial Error Concealment Based on DCT Coefficients,"In this paper, a novel spatial error concealment algorithm, which adaptively switches between directional interpolation and region matching, is proposed.
Different from previous spatial error concealment methods, which utilize only the smoothness property, the algorithm exploits both the smoothness property and texture information to recover the lost blocks. Based on the DCT coefficients in the available neighboring MBs, the algorithm automatically analyzes whether the MB is ""smooth-like"" or ""texture-like"" and adaptively selects directional interpolation or region matching to recover the lost MB. The proposed algorithm has been evaluated on the H.264 reference software JM 9.0. The experimental results demonstrate that the proposed method can achieve better PSNR performance and visual quality, compared with the weighted pixel average (WPA) which is adopted in H.264, directional interpolation only and region matching only.",2006,0, 4260,Fault detection for OSPF based E-NNI routing with probabilistic testing algorithm,"In this paper, a probabilistic testing algorithm is proposed to increase the fault coverage for OSPF based E-NNI routing protocol testing. It automatically constructs random network topologies and checks database information consistency with the real optical network topology and resources for each generated topology. Theoretical analysis indicates that our algorithm can efficiently increase the fault coverage. This algorithm has been implemented in a software test tool called the E-NNI Routing Testing System (ERTS). Experimental results based on ERTS are also reported.",2008,0, 4261,A Reducing Transmission-Line Fault Current Method,"In this paper, a method for reducing transmission-line fault current with capacitor compensators is proposed to limit the transmission-line fault current in power systems. In the normal mode of operation, the shunt capacitor banks act as reactive power compensators that deliver reactive power to increase the power factor, and are used on medium-length and long transmission lines to increase line loadability and to maintain voltages near rated values. Their important effect is to reduce line-voltage drops and to increase the power factor and the steady-state stability limit. When fault states occur, another effect of the capacitors is to reduce the transmission-line fault current peak value. Simulations performed in the MATLAB/Simulink environment indicate that the proposed capacitor compensators perform well in limiting the fault currents of transmission lines and line-voltage drops.",2010,0, 4262,An improved strategy to detect the switching device fault in NPC inverter system,"In this paper, a simple fault detection scheme is proposed for improving the reliability under an open-switch fault of a three-level neutral point clamped (NPC) inverter. The fault of the switching device is detected by checking the change of a pole voltage in the NPC inverter. This method has the advantages of fast detection and simple realization of fault detection, compared with existing methods. Reconfiguration is also performed by a two-phase control method used to supply balanced three-phase power to the load continuously. This proposed method minimizes the bad influence on the load caused by fault occurrence. This method can also be embedded into an existing NPC inverter system as a subroutine without excessive computational effort.
The proposed scheme has been verified by simulation and experimental results.",2007,0, 4263,Fault Tolerant Structure for SRAM-Based FPGA via Partial Dynamic Reconfiguration,"In this paper, activities which aim at developing a methodology for fault tolerant system design on SRAM-based FPGA platforms with different types of diagnostic approaches are presented. Basic principles of partial dynamic reconfiguration are described together with their impact on the fault tolerance of the digital design in the FPGA. A generic controller for driving the dynamic reconfiguration process of a faulty unit is demonstrated and analyzed. Parameters of the generic partial reconfiguration controller are experimentally verified. The developed controller is compared with other approaches based on microcontrollers inside the FPGA. A structure which can be used in fault tolerant system design on SRAM-based FPGAs using the partial reconfiguration controller is then described. The presented structure is proven fully functional on the ML506 development board for different types of RTL components.",2010,0, 4264,Fault current limiter application to improve the dynamic performance of dispersed generation systems under voltage sag,"In this paper, an application of FACTS devices to dispersed generation systems is proposed. In the proposed systems, the fault current limiter (FCL) is applied to disconnect the critical loads and the dispersed generator from the utility power system by detecting the voltage magnitude when a power system fault occurs. The influence of the voltage sag magnitude, duration and other parameters is investigated by simulations.",2001,0, 4265,Harmonic resistance emulator technique for three-phase unity power factor correction,"In this paper, a new technique for three-phase power factor correction, using the typical three-phase line-side active front-end converter, is proposed. The proposed technique is capable of simplifying three-phase power factor correction algorithms to a great extent. As a consequence, the sampling time is reduced considerably and the switching frequency of the converter can be pushed further. The proposed scheme is suitable for sine-triangle PWM (pulse width modulation) implementation but it completely eliminates the need for frame synchronization. It also avoids the forward and backward d-q reference-frame transformations. Moreover, presetting of the two orthogonal references is also not required. Simulation results are presented.",2005,0, 4266,Nonlinear H∞ control design of structural systems: An experimental study under faults,"In this paper, a nonlinear H∞ control is developed for active mass damper systems subject to external perturbation. This robust controller is composed of the sum of a linear term plus a chattering component. The linear term is designed using linear matrix inequality (LMI) theory. Then, the chattering term is added to improve controller performance. Lyapunov theory is invoked to validate our control design. Experiments, in which a flexible two-level building with an active mass damper and external perturbation is employed, show that this chattering term improves controller performance. However, when a fault occurs, this chattering term is complaining.",2010,0, 4267,A fast method for fault section identification in series compensated transmission lines,"In this paper, a novel and fast method for fault section identification in series compensated transmission lines based on the high frequency traveling wave is proposed.
The method uses the relation of magnitude and polarity between wavefronts of high frequency travelling waves induced by the fault. For accurate and fast extraction of the polarity and magnitude of the travelling wave, the wavelet transform and modulus maxima are used. Validation of this method is carried out by PSCAD/EMTP and MATLAB simulations for typical 400 kV power system faults. Simulation results reveal the high performance of the method.",2010,0, 4268,Miniaturized Microstrip Lowpass Filter With Wide Stopband Using Suspended Layers and Defected Ground Structure (DGS),"In this paper, a novel compact wideband-rejection LPF using a defected ground structure (DGS) is presented. The proposed LPF consists of hi-lo etched slots in the metallic ground plane as the defected ground structure (DGS) and of a microstrip hi-lo structure, corresponding to capacitance and inductance, on the top layer. The effect of the DGS slot on the characteristics of the investigated filter is examined. In this work we have proposed a simple method to realize a compact DGS LPF with good characteristics. In order to prove the efficiency of the method, a comparison is made between the new DGS LPF and conventional filters, which shows that the proposed filter with etched cells obtains better performance by suppressing ripples and providing a very large stopband. Measured results are found to be in good agreement with the simulation results.",2008,0, 4269,Error concealment for stereoscopic images using mode selection,"In this paper, a novel error concealment (EC) method for compressed stereoscopic image pairs is presented, which contains a new binocular EC mode and an improved monocular EC mode. The proposed algorithm selects the appropriate EC mode to conceal the error block (EB) in the stereoscopic image according to the local characteristics of the EB. The experimental results demonstrate that the proposed scheme has good subjective and objective EC performance for stereoscopic images as compared to the monocular mode.",2010,0, 4270,Radiometric and geometric correction of RADARSAT-1 images acquired in alpine regions for mapping the snow water equivalent (SWE),"In this paper, an application of two radiometric slope correction methods to standard RADARSAT images in a mountainous environment like the Alps is introduced. Because of the highly varying topography, such corrections are needed to reduce the distortions in the backscattering coefficients when trying to monitor the snow characteristics from SAR data in alpine regions. This paper discusses the results obtained by the two different methods over dry and wet snow cover; both algorithms significantly reduced the effect of the local slope facing the radar, but may not compensate enough for steep slopes over 30°.",2003,0, 4271,Error performance of joint transmit and receive antenna selection in two hop amplify-and-forward relay system over Nakagami-m fading channels,"In this paper, the performance of joint transmit and receive antenna selection in a two-hop amplify-and-forward relay system is analyzed over frequency non-selective and slow Nakagami-m fading channels. In the system, all nodes are equipped with multiple antennas; Transmit Antenna Selection (TAS) is employed at the source and relay for transmission, and Selection Combining (SC) is employed at the relay and destination for reception. The source and destination communicate with the help of a single relay, and the source-destination link is not available.
In this paper, we derive the closed-form Cumulative Distribution Function (CDF) and Moment Generating Function (MGF) of the received Signal-to-Noise Ratio (SNR). We also derive Symbol Error Probability (SEP) expressions for the considered system. Analytical results are validated by the simulations.",2010,0, 4272,Error performance of Transmit Antenna Selection / Maximal Ratio Combining in two hop Amplify-and-Forward relay system over Rayleigh fading channel,"In this paper, the performance of Transmit Antenna Selection / Maximal Ratio Combining (TAS / MRC) in a two-hop Amplify-and-Forward (AF) relay system is analyzed over a frequency non-selective and slow Rayleigh fading channel. In the system, all nodes are equipped with multiple antennas; TAS is employed at the source and relay for transmission, and MRC is employed at the relay and destination for reception. The source and destination communicate with the help of a single relay, and the source-destination link is not available. In this paper, we derive the closed-form Cumulative Distribution Function (CDF) and Moment Generating Function (MGF) of the received Signal-to-Noise Ratio (SNR). We also derive Symbol Error Probability (SEP) expressions for the considered system. Analytical results are validated by the simulations.",2010,0, 4273,A Real-Time Measurement Method of Temperature Fields and Thermal Errors in Machine Tools,"In this paper, the authors establish a system to carry out real-time measurement of temperature fields and thermal errors in machine tools. A mean filter is introduced in signal processing to get more accurate experimental data. An experiment is performed on a vertical machining center, and the experimental data are analyzed in figures. In thermal error data processing, the thermal deformation of the aluminum plate for fixing the laser displacement sensors is considered to improve the accuracy.",2010,0, 4274,"Modelling DGM(1,1) under the Criterion of the Minimization of Mean Absolute Percentage Error","In this paper, we present a linear programming method in order to estimate the parameters of the DGM(1,1) model under the criterion of the minimization of the mean absolute percentage error (MAPE) (which some authors call the average relative error). A published article is chosen for practical tests of this method; the results show that this method can obviously improve the simulation accuracy.",2009,0, 4275,Fault Tolerant SoC Architecture Design for JPEG2000 using Partial Reconfigurability,"In this paper, we present the design of a new fault-tolerating architecture for the image compression standard JPEG2000. The proposed fault tolerant design is based on adding a new reconfigurable core to the rest of the cores of the SoC. When a fault happens, it is tolerated using this reconfigurable core. The paper explains the hardware architecture allowing this inter-core communication toward fault tolerance. The target is to achieve good reliability with this fault tolerance strategy and at the same time achieve the required speed allowing JPEG2000 to deal with video rather than still image compression. The high speed is implemented using an optimized data organization and memory arrangement for the computation-consuming blocks of JPEG2000. The operating speed of the proposed architecture is 125 MHz for an ALTERA FPGA implementation.
The proposed architecture has increased the speed by a factor of 1.5 when compared to architectures with similar memory requirements, and decreased the memory requirement by a factor of 1.2 when compared to architectures with similar speed requirements. Additionally, the proposed architecture achieves 91.45% fault coverage and requires only 21% hardware overhead. The architecture has an optimum latency of 78.8 seconds corresponding to an optimum test sequence of n=985. The VHDL implementation of the six blocks of JPEG2000, corresponding to the full chain, has been developed and successfully validated on various types of ALTERA FPGAs.",2007,0, 4276,A self-tuning DVS processor using delay-error detection and correction,"In this paper, we present the implementation and silicon measurement results of a 64-bit processor fabricated in 0.18 μm technology. The processor employs a delay-error detection and correction scheme called Razor to eliminate voltage safety margins and scale voltage 120 mV below the first failure point. It achieves 44% energy savings over the worst case operating conditions for a 0.1% targeted error rate at a fixed frequency of 120 MHz.",2005,0,7835 4277,A Self-Tuning Dynamic Voltage Scaled Processor Using Delay-Error Detection and Correction,"In this paper, we present the implementation and silicon measurement results of a 64-bit processor fabricated in 0.18 μm technology. The processor employs a delay-error detection and correction scheme called Razor to eliminate voltage safety margins and scale voltage 120 mV below the first failure point. It achieves 44% energy savings over the worst case operating conditions for a 0.1% targeted error rate at a fixed frequency of 120 MHz.",2006,0, 4278,Improving Kernel Density Classifier Using Corrective Bandwidth Learning with Smooth Error Loss Function,"In this paper, we propose a corrective bandwidth learning algorithm for Kernel Density Estimation (KDE)-based classifiers. The objective of the corrective bandwidth learning algorithm is to minimize the expected error-rate. It utilizes a gradient descent technique to obtain the appropriate bandwidths. The proposed classifier is called the ""Empirical Mixture Model"" (EMM) classifier. Experiments were conducted on a set of multivariate multi-class classification problems with various data sizes. The proposed classifier has an error-rate closer to the true model compared to conventional KDE-based classifiers for both small and large data sizes. Additional experiments on standard machine learning datasets showed that the proposed bandwidth learning algorithm performed very well in general.",2008,0, 4279,Data reduction and clustering techniques for fault detection and diagnosis in automotives,"In this paper, we propose a data-driven method to detect anomalies in operating Parameter Identifiers (PIDs) and, in the absence of any anomaly, classify faults in automotive systems by analyzing PIDs collected from the freeze frame data. We first categorize the operating parameter data using automotive domain knowledge. The dataset thus obtained is then analyzed using Principal Component Analysis (PCA) and Independent Component Analysis (ICA) for finding coherence among the PIDs. Then we use clustering algorithms based on both linear distance and information theoretic measures to assign coherent PIDs to the same class or cluster. A comparative analysis of the behavior of PIDs belonging to the same cluster can now be made for detecting anomalies in PIDs.
Since a system fault is characterized by the values of all PIDs across all the clusters, we use the joint probability distribution of the independent components of all PIDs to characterize the fault and find the divergence between the joint distributions of training and test data to classify faults. The proposed method can analyze available parameter data, categorize PIDs into informative or non-informative categories, and detect fault conditions from the clusters. We demonstrate the algorithm by way of an application to operating parameter data collected during faults in catalytic converters of vehicles.",2010,0, 4280,On the identification of robot parameters by the classic calibration algorithms and error absorbing trees,"In this paper, we propose a feasible method to construct a virtual manipulator in a 3D graphics environment, which is equivalent to a real manipulator including its dynamics. For this purpose, we first calibrate the parameters of the robot dynamics by the two classic algorithms. Unfortunately, both classic methods are not practically stable because each motion pattern includes noise and error. To overcome this drawback, we indirectly absorb the position differences between the experimental and calculated manipulators by three types of learning trees. Moreover, many neighboring dynamic motions are directly memorized by the same learning trees. As a result, when the real and virtual robots are independently supervised by PD control, the angular errors of their rotational joints amount to zero. In addition, even though both robots are independently supervised as slave arms from a master arm by the same sequence of forces in a bilateral control based on PD control, the motion sequences of the real and virtual slave arms are equal to each other. As a result, a virtual manipulator can be used in a 3D graphics animation as a true replacement for a real manipulator.",2002,0, 4281,Joint temporal error control for H.264,"In this paper, we propose a joint temporal error control (JTEC) method for H.264 which combines RDO-based macroblock (MB) classification at the encoder and adaptive partition size error concealment at the decoder. The encoder classifies the MBs by evaluating the sensitivity of the MBs as the RD cost between the concealment error and the bits needed for the additional motion information. Additional motion information, such as the original motion vector or a motion vector index, can be transmitted for the error sensitive MBs. The decoder utilizes the additional motion information for the sensitive blocks. Non-sensitive blocks are concealed by the adaptive partition size (APS) method. APS selects the best partition mode for error concealment by minimizing the weighted double-side external boundary matching error (WDS-EBME), which jointly measures the inter-MB boundary discontinuity, inter-partition boundary discontinuity and intra-partition block artifacts in the recovered MB. A progressive concealment method is developed for the 4×4 partition mode.",2009,0, 4282,A key-frame-based error resilient coding scheme for video transmission over differentiated services networks,"In this paper, we propose a key-frame-based error resilient coding scheme for video transmission over differentiated services networks. Key-frames are fixed in advance. Each frame, if inter-coded, can only choose the latest coded and reconstructed key frame as its reference picture. After coding and packetisation, compressed video packets are transmitted with differentiated service classes.
More specifically, we assign the key-frame packets to the assured class, and assign all the other packets to the regular best-effort class. Although the assured class has a much lower packet loss rate than the best-effort class, it cannot guarantee no packet loss. Retransmission is used at the source-layer level in the case of losing a key-frame packet. By this means, we can stop transmission errors from propagating along the prediction loop. Our scheme can not only be used with on-line coded video transmission, but can also be used with precompressed video transmission. Simulation results verify the validity of our scheme.",2007,0, 4283,Low-error carry-free fixed-width multipliers and their application to DCT/IDCT,"In this paper, we propose a low-error fixed-width redundant multiplier design. The design is based on the statistical analysis of the value of the truncated partial products in binary signed-digit representation with modified Booth encoding. The overall truncation error is significantly reduced with negligible hardware overhead. Simulation on DCT/IDCT of images with 256 gray levels shows our proposed multiplication design has higher PSNR/SNR.",2004,0, 4284,A mixed-integer formulation for fault detection and diagnosis: modelling and an illustrative example,"In this paper, we propose a mixed integer optimization for the diagnosis of fault events which change the structure of the system model. The proposed approach aims at selecting the right model of the system among a bank of models when some faults occur. This approach suits the case where both abrupt faults (intermittent or permanent, such as saturation or sudden shutdown) and incipient faults (continuous) are considered. The optimization helps find the best combination of faults that have occurred. In so doing, we go further than analyzing the incidence matrix of residuals. Also, this approach allows the introduction of fault occurrence logics such as the ones encountered when establishing fault trees.",2002,0, 4285,Exploration of Autonomous Error-Tolerant (AET) Cellular Networks in System-on-a-Package (SoP) for Future Nanoscale Electronic Systems,"In this paper, we propose a nanocore/CMOS hybrid system-on-package (SoP) architecture that is suitable for any emerging nanotechnology. It combines the high density of nanoscale devices with some excellent properties of current CMOS technology, including high voltage gain and interconnection bandwidth and speed. The local computing cell is autonomous and error-tolerant (AET cell), interconnected with its nearest neighbors through a crossbar interconnect arrangement. Some key issues are studied in more detail: a possible communication network adopting a time-division-multiplexing scheme; the power distribution network; and the sensor and control network.",2006,0, 4286,Exact closed-form expression for average symbol error rate of MIMO-MRC systems,"In this paper, we derive closed-form expressions for the exact average symbol error rate (SER) of multiple-input multiple-output (MIMO)-maximum ratio combining (MRC) systems with M-ary modulations. We obtain tractable compact closed-form expressions for the average SER in terms of the Gauss and Appell hypergeometric functions, which are provided as library functions in common mathematical software such as MAPLE and MATHEMATICA. The analysis is validated by comparing with Monte-Carlo simulations, and we further show that our general SER expressions reduce to the previously known results for binary signals (M = 2) as a special case.
Applying the series representation of the Gauss and Appell hypergeometric functions, we derive a very tight approximation for the average SER.",2008,0, 4287,Intelligent signal segment fault detection using fuzzy logic,"In this paper, we describe a fuzzy logic system used in a signal diagnostic agent (SDA) for signal segment fault diagnosis. An SDA is trained to detect the fault of a signal. The SDA provides two levels of decisions, the signal segment level and the signal level, using fuzzy logic. At the signal segment level, we developed a fuzzy learning algorithm that learns from good vehicle signals only. The fuzzy learning algorithm was implemented in the framework of an SDA, and the experiments using engine electronic control unit signals are presented and discussed in the paper.",2002,0, 4288,Low overhead soft error detection and correction scheme for reconfigurable pipelined data paths,"In this paper, we describe a novel scheme for radiation hardening of high performance pipelined architectures and data paths. The proposed technique uses a local ground bus, decoupled from the global ground using an additional pull-down device, to detect a transient error. Combining the detector output with duplicated pipeline registers enables an instruction execution through the data path to be repeated as soon as the error is detected. The detector outputs from various stages in a pipelined data path are manipulated to maintain correctness of data in the event of a transient error detection and corresponding instruction roll back. The proposed technique is extremely effective for errors of different pulse widths and comes without the extra cost of error checking codes, watchdog processors and logic core duplication as used by other techniques in the literature. Our scheme provides 100% radiation hardening over all process corners with only 9.7% and 21.73% area and power overhead, respectively, with the delay overhead being masked out by the pipeline stages used in modern high performance data path architectures.",2010,0, 4289,Robust monitoring of link delays and faults in IP networks,"In this paper, we develop failure-resilient techniques for monitoring link delays and faults in a service provider or enterprise IP network. Our two-phased approach attempts to minimize both the monitoring infrastructure costs as well as the additional traffic due to probe messages. In the first phase of our approach, we compute the locations of a minimal set of monitoring stations such that all network links are covered, even in the presence of several link failures. Subsequently, in the second phase, we compute a minimal set of probe messages that are transmitted by the stations to measure link delays and isolate network faults. We show that both the station selection problem as well as the probe assignment problem are NP-hard. We then propose greedy approximation algorithms that achieve a logarithmic approximation factor for the station selection problem and a constant factor for the probe assignment problem. These approximation ratios are provably very close to the best possible bounds for any algorithm.",2003,0, 4290,Observer-based Fault Detection for Piecewise Linear Systems: Discrete-time Cases,"In this paper, we discuss fault detection with unknown inputs for a class of discrete-time piecewise linear systems. Piecewise linear systems are mostly partitioned based on their state variables.
Due to system noise and estimation errors, the transitions of the actual state and its estimate may not be synchronized, nor may the system modes. Motivated by the recent works [31], [29], [32], [30], [27], we consider the fault detection problem using a non-synchronized observer and present several less conservative design approaches.",2007,0, 4291,Availability requirement for fault management server,"In this paper, we examine the availability requirement for the fault management server in high-availability communication systems. According to our study, we find that the availability of the fault management server does not need to be 99.999% in order to guarantee a 99.999% system availability as long as the fail-safe ratio (the probability that the failure of the fault management server will not bring the system down) and the fault coverage ratio (the probability that the failure in the system can be detected and recovered by the fault management server) are sufficiently high. Tradeoffs can be made among the availability of the fault management server, the fail-safe ratio and the fault coverage ratio to optimize system availability. A cost-effective design for the fault management server is proposed in this paper.",2001,0, 4292,Comparison of Outlier Detection Methods in Fault-proneness Models,"In this paper, we experimentally evaluated the effect of outlier detection methods to improve the prediction performance of fault-proneness models. Detected outliers were removed from a fit dataset before building a model. In the experiment, we compared three outlier detection methods (Mahalanobis outlier analysis (MOA), local outlier factor method (LOFM) and rule based modeling (RBM)), each applied to three well-known fault-proneness models (linear discriminant analysis (LDA), logistic regression analysis (LRA) and classification tree (CT)). As a result, MOA and RBM improved F1-values of all models (0.04 at minimum, 0.17 at maximum and 0.10 at mean) while improvements by LOFM were relatively small (-0.01 at minimum, 0.04 at maximum and 0.01 at mean).",2007,0, 4293,Fault table generation using Graphics Processing Units,"In this paper, we explore the implementation of fault table generation on a Graphics Processing Unit (GPU). A fault table is essential for fault diagnosis and fault detection in VLSI testing and debug. Generating a fault table requires extensive fault simulation, with no fault dropping, and is extremely expensive from a computational standpoint. Fault simulation is inherently parallelizable, and the large number of threads that a GPU can operate on in parallel can be employed to accelerate fault simulation, and thereby accelerate fault table generation. Our approach, called GFTABLE, employs a pattern parallel approach which utilizes both bit-parallelism and thread-level parallelism. Our implementation is a significantly modified version of FSIM, which is a pattern parallel fault simulation approach for single core processors. Like FSIM, GFTABLE utilizes critical path tracing and the dominator concept to reduce runtime. Further modifications to FSIM allow us to maximally harness the GPU's huge memory bandwidth and high computational power. Our approach does not store the circuit (or any part of the circuit) on the GPU. Efficient parallel reduction operations are implemented in our implementation of GFTABLE. We compare our performance to FSIM*, which is FSIM modified to generate a fault table on a single core processor.
Our experiments indicate that GFTABLE, implemented on a single NVIDIA GeForce GTX 280 GPU card, can generate a fault table for 0.5 million test patterns on average 7.85x faster when compared with FSIM*. With the NVIDIA Tesla server, our approach would be potentially 34.82x faster.",2009,0, 4294,Effect of fault dependency and debugging time lag on software error models,"In this paper, we first show how several existing SRGMs based on NHPP models can be comprehensively derived by applying the time-dependent delay function. Moreover, most conventional SRGMs assume that detected errors are immediately corrected, but this assumption may not be realistic in practice. Therefore, we incorporate the ideas of failure dependency and the time-dependent delay function into software reliability growth modeling. New SRGMs are proposed and numerical illustrations based on a real data set are presented. Evaluation results show that the proposed framework incorporating both failure dependency and the time-dependent delay function for SRGMs has a fairly accurate prediction capability.",2004,0, 4295,A new approach for mitigating carrier phase multipath errors in multi-GNSS real-time kinematic (RTK) receivers,"In this paper, we introduce a new approach for RTK positioning using triple-frequency combinations of GNSS measurements in the presence of carrier phase multipath. The proposed method is based on a modification of the LAMBDA method, where a priori information on multipath errors is exploited as a constraint in the optimization and ambiguity search process to mitigate the effect of multipath. Triple-frequency combinations of measurements are used to formulate a new carrier phase multipath index, which is then incorporated as an additional constraint in the LAMBDA method cost function for multi-frequency ambiguity resolution. Simulations and real experiments show the effectiveness of the developed scheme.",2010,0, 4296,Watermarking Algorithm for Print-Scan Based on HVS and Multiscale Error Diffusion,"In this paper, we introduce a watermarking algorithm for halftone images. The algorithm is based on HVS and multiscale error diffusion. We embed a special binary image in the halftone image and study it after printing and scanning. Using the proposed methods, several hundred information bits can be embedded into images with perfect recovery against the print-scan operation. Different printing resolutions, scanning resolutions and printing paper types are tested in experiments. The experiments show that the watermarking algorithm is robust to the print-scan process.",2008,0, 4297,Suitable graphical user interface selection based on human errors using analytic hierarchy process,"In this paper, we propose a new model for the design method of graphical user interfaces for audio-visual remote controllers based on analytic hierarchy processes. The goal of this model is to reduce human error by modifying the graphical user interface of a wireless remote controller to obtain the most suitable interface for every user. This paper proposes a new model with seven evaluation criteria: inherent ability, lack of skill, lack of knowledge, slip, lapse, mistake and violation. As alternatives, we decided on four design strategies for the user interface: vision assistance, cognition assistance, operation assistance, and memorizing. The proposed method is evaluated by a prototype assuming a real-time OS on an embedded microprocessor.
Furthermore, we confirmed the effectiveness of this proposal.",2010,0, 4298,Sizing of 3-D Arbitrary Defects Using Magnetic Flux Leakage Measurements,"In this paper, we propose a new procedure to estimate the shape of the opening and the depth profile of an arbitrary three-dimensional (3-D) defect from magnetic flux leakage (MFL) measurements. We first use the Canny edge detection algorithm to estimate the shape of the defect opening. Then we use an inversion procedure based on the space mapping (SM) methodology in order to approximate the defect depth profile efficiently. To demonstrate the accuracy of the proposed inversion technique, we reconstruct defects of arbitrary shapes from simulated MFL signals. The procedure is then tested with experimental data of two metal-loss defects. In both cases, the proposed approach shows good agreement between the actual and estimated defect parameters.",2010,0, 4299,Spatial interpolation algorithm for error concealment,"In this paper, we propose a new spatial interpolation algorithm for intra-frame error concealment. The method aims at interpolating areas in the image that have been affected by packet loss. We have proposed an edge detection technique to aid the bilinear interpolation. The edge-detection scheme is based on designing a robust Hough transform-based technique that is capable of systematically connecting edges irrespective of the number of edge points surrounding missing areas. The connected edges are used to divide the missing areas into different regions for interpolation along the directions of each detected line. Simulation results demonstrate that the proposed algorithm can recover the missing areas with a greater accuracy, when compared with the bilinear interpolation technique.",2008,0, 4300,Content-Adaptive Macroblock Partitioning scheme for error concealment of H.264/AVC coded video,"In this paper, we propose a new temporal error concealment method for H.264/AVC coded video using the existing information of neighboring macroblock modes from the coded bit-stream. The lost macroblock is partitioned into eight possible types, and each partition is concealed with an estimated motion vector, giving a smoother concealment. An overlapped block motion compensation technique is further used to avoid spatial discontinuities in the concealed regions. This helps to minimize the structural degradations and the blocking artifacts. Experimental results show that the proposed scheme achieves higher video quality values than the existing schemes.",2009,0, 4301,Forward Error Correction-Based 2-D Layered Multiple Description Coding for Error-Resilient H.264 SVC Video Transmission,"In this paper, we propose a novel 2-D layered multiple description coding (2DL-MDC) scheme for error-resilient video transmission over unreliable networks. The proposed 2DL-MDC scheme allocates multiple description sub-bitstreams of a 2-D scalable bitstream to two network paths with unequal loss rates. We formulate the 2-D scalable rate-distortion problem and derive the expected distortion for the proposed scheme. To minimize the end-to-end distortion given the total rate budget and packet loss probabilities, we need to optimally allocate source and channel rates for each hierarchical sublayer of the scalable bitstream. The conventional Lagrangian multiplier method can be utilized to solve this problem but with overwhelming computational complexity. Therefore, we consider the use of the genetic algorithm to solve the rate-distortion optimization problem.
The simulation results verify that the proposed method is able to achieve a significant performance gain compared with the conventional equal rate allocation method.",2009,0, 4302,TERCOS: A Novel Technique for Exploiting Redundancies in Fault-Tolerant and Real-Time Distributed Systems,"In this paper, we propose a novel fault-tolerant technique, which is seamlessly integrated with a fixed-priority-based scheduling algorithm to exploit redundancies to enhance schedulability in fault-tolerant and real-time distributed systems. Our fault-tolerant technique makes use of the primary-backup scheme to tolerate permanent hardware failures. Most importantly, the proposed technique (referred to as Tercos) terminates the execution of active backup copies when the corresponding primary copies are successfully completed; therefore, Tercos can reduce scheduling lengths in the fault-free scenario to enhance schedulability by virtue of executing portions of active backup copies in passive forms. Experimental results show that, compared with an existing algorithm in the literature, Tercos can significantly improve schedulability by up to 17.0% (with an average of 9.7%).",2007,0, 4303,A Parallel Algorithm for H.264/AVC Deblocking Filter Based on Limited Error Propagation Effect,"In this paper, we propose a parallel algorithm for the H.264/AVC deblocking filter which is scalable to the number of processors. Unlike the conventional approach, which is limited by the independent data units, the designed algorithm allows issuing dependent data units concurrently to decrease the penalty from synchronization of data units. On general-purpose dual-core processors, experimental results show that our method speeds up 1.72 and 1.39 times as compared with the optimized sequential method and the well-known wavefront parallelizing method, respectively.",2007,0, 4304,Fixing Design Errors With Counterexamples and Resynthesis,"In this paper, we propose a resynthesis framework, called COunterexample-guided REsynthesis (CoRe), that automatically corrects errors in digital designs. The framework is based on a simulation-based abstraction technique and performs an error correction through two innovative circuit resynthesis solutions: distinguishing-power search and goal-directed search, which modify the functionality of circuits' internal nodes to match the correct behavior. In addition, we propose a compact encoding of resynthesis information, called the Pairs of Bits to be Distinguished, which is a key enabler for our resynthesis techniques. Compared with previous solutions, CoRe is more powerful for the following reasons: (1) It can fix a broader range of error types because it is not bounded by specific error models; (2) it derives the correct functionality from simulation vectors without requiring golden netlists; and (3) it can be applied with a broad range of verification flows, including formal and simulation-based flows.",2008,0, 4305,Self Fault-Managed and Highly Available P2P Architecture for Next Generation Network Management Systems,"In this paper, we propose a self fault-managed and self-reconfigurable peer-to-peer architecture for increasing the availability of the network management system (NMS). We intend to achieve such a goal through increasing the fault tolerance of the NMS by applying redundancy. However, the proposed architecture does not impose any hardware redundancy, but software redundancy.
This is mainly because we use some peers in several roles and thus add some software redundancy, which is easily tolerable by the advanced processors of the NMS's peers. We conduct an extensive simulation study to examine the performance of the proposed architecture in the presence of node failures. We also investigate the effect of failures in different nodes and the sensitivity of the architecture to those failures. The results show that the proposed architecture offers higher availability in comparison to a non-fault-tolerant Peer-to-Peer NMS.",2009,0, 4306,An Antecedence Graph Approach for Fault Tolerance in a Multi-Agent System,"In this paper, we propose a strategy to implement fault-tolerance in a multi-agent system. We have based our strategy on the concept of antecedence graphs, used in causal logging and as used by the Manetho protocol for distributed systems. Each agent in the multi-agent system keeps an antecedence graph of all the collaborating agents in the system. If one or more agents fail for any reason, the other agents can reconstruct the same agent state in a partial or comprehensive manner by using their own antecedence graphs. The recovering agents then regenerate their antecedence graphs and message logs and replay the messages to achieve a globally consistent state, after which normal operation continues. We believe that introducing fault tolerance in a multi-agent system through antecedence graphs is novel and provides a low-overhead and effective solution for fault-tolerance in a multi-agent system.",2006,0, 4307,A fault-tolerant real-time supervisory scheme for an interconnected four-tank system,"In this paper, the implementation of a Command Governor (CG) strategy on a real-time computing system is described for the supervision of a laboratory four-tank test-bed. In particular, the real-time architecture has been developed on the RTAI/Linux operating system kernel and the CG module has been implemented in C++ on a general purpose off-the-shelf computing unit. An accurate model of the four-tank process has been derived from both physical and experimental data, and the applicability of the proposed method has been proved by means of real-time tests, which testified to the CG strategy's ability to enforce the prescribed operative constraints even under unexpected adverse conditions, e.g. water pump failures.",2010,0, 4308,Study on Rotor Position Detection Error in Sensorless BLDC Motor Drives,"In this paper, the main source of rotor position detection error in sensorless BLDC motor drives, namely the use of band-pass filters to obtain the zero-crossings of the back-EMF, is presented. The effects on commutation that result from the rotor position detection error are investigated, and the simulation results are given. A correction method is suggested, and the experimental results are provided",2006,0, 4309,Symbol error rates of MPSK and MDPSK for optimum combining with multiple interferers in a Rayleigh fading channel,"In this paper, the performance of M-ary signals for optimum combining with multiple co-channel interferers is analyzed in a flat Rayleigh fading channel. The closed-form expressions of the average symbol error rates of M-ary phase shift keying and M-ary differential phase shift keying are derived by using the moment generating function-based approach and the orthogonal approximation, which is employed to evaluate the eigenvalues of the covariance matrix of the received interference-plus-noise.
This analysis is a convenient way of evaluating the performance of optimum combining and comparing it with that of maximal-ratio combining. In the cases of single and multiple interferer(s), the accuracy of the approximation is assessed through computer simulations.",2002,0, 4310,Characterization of non-destructive inspection data from composite structure defects,"In this paper, the processing and characterization of aircraft defect signal data have been explored and studied. The signal data was collected using the nondestructive ultrasonic inspection technique known as the Pitch/Catch swept method. Defects of different sizes and depths were studied, and signal processing techniques were used to identify features that provide the most distinction among the defects. The discussion in this paper is limited to the parts of the classification process that pertain to data collection, pre-processing, and feature extraction.",2009,0, 4311,Effect of servo control frequency on contour errors in a bi-axial CNC machine,"In this paper, the relation between servo control frequency and contour error is studied in a bi-axial CNC machine. The objective is to determine the effect of servo control frequency on the contouring accuracy, thereby allowing a proper selection of this frequency depending upon the accuracy required. A mathematical analysis is first carried out using a commonly used dynamic model for a servomotor-driven position feedback control system. This was followed by experimental studies in a mini-CNC machine using both basic linear and circular contours. The results show that the servo control frequency will have an effect on contouring accuracy when very high precision, on the order of micrometers, is required.",2010,0, 4312,Study on the Features of Loudspeaker Sound Faults,"In this paper, the short-time Fourier transformation (STFT) is adopted to transform the loudspeaker sound signal. By STFT, the one-dimensional loudspeaker response signal is converted into a two-dimensional time-frequency representation. Then, this representation is decomposed into a number of areas according to its harmonic distribution. The peak and mean values of every area are computed. Through observation and calculation, the features of loudspeaker defects are found. According to the experiment, this method is very effective and universal for different types of loudspeakers.",2009,0, 4313,The fault injection system applied to laboratory test validation of FDR and FIR,"In this paper, the significance and practicability of a fault injection system applied to laboratory test validation of fault detection rate (FDR) and fault isolation rate (FIR) are illustrated. Then, the system's hardware design, software design and realization are introduced. Finally, the system is tested on a missile subsystem, and the test flow, results and analysis are presented.",2009,0, 4314,A novel reflectometer for cable fault analysis with pulse reflection method,"In this paper, locating the fault on a sample cable using the pulse reflection method is studied, and the effects of fault parameters on the measurement results are examined. A pulse generator with an amplitude of 5 V and a pulse width between 40 ns and 2000 ns has been designed as a novel pulse reflection meter.
The reflection images seen on the oscilloscope screen have been transferred via serial communication to the ANALYSIS software, developed for pulse reflection measurement, and examined there.",2009,0, 4315,Effect of rotor position error on commutation in sensorless BLDC motor drives,"In this paper, two kinds of commutation modes of brushless DC (BLDC) motor drives, delaying commutation and leading commutation, are discussed in detail. The current of the unexcited phase is calculated under an ideal operating condition, and the condition under which circulating current occurs is analyzed. The result with compensated commutation is provided. The theoretical analysis is confirmed by the experimental results.",2005,0, 4316,Automatic detection and correction of red-eye effect,"In this paper, we address the problem of automatic red-eye detection and correction. A novel four-step approach is presented, where the first step consists of face detection in the input image. In the second step, the face image is converted to a gray-scale image in a way that facilitates the detection of the iris pair and its corresponding radius values. In the third and fourth steps, the iris pair is located, and the central points of the irises and their corresponding radius values are utilized to desaturate the redness inside the iris regions. The desaturation scheme is adaptive to the severity of redness. Images with different severity levels and sizes of redness are used to test the robustness of the proposed scheme. We also compare our correction scheme with two existing automatic methods.",2009,0, 4317,Low-complexity frame importance modelling and resource allocation scheme for error-resilient H.264 video streaming,"In this paper, we address the problem of redundancy allocation for protecting against packet loss for better quality of service (QoS) in real-time H.264 video streaming. A novel error-resilient approach is proposed for the transmission of pre-encoded H.264 video streams over bandwidth-constrained networks. A novel frame importance model is derived for estimating a relative importance index for different H.264 video frames. Combined with the characteristics of the network, the optimal resource allocation strategy for different video frames can be determined for achieving improved error resilience. The model uses a frame error propagation index (FEPI) to characterize video quality degradation caused by error propagation in different frames in a GOP when suffering from packet loss. This model can be calculated in the DCT domain with the parameters extracted directly from the bitstream. Therefore, the complexity of the proposed scheme is very low, making it well suited for real-time video transmission. Simulation results show that the proposed scheme can improve the receiver-side reconstructed video quality remarkably under different channel loss patterns.",2008,0, 4318,"Analysis of transmit diversity schemes: impact of fade distribution, spatial correlation and channel estimation errors","In this paper, we analyze the average symbol error rate performance of arbitrary two-dimensional signal constellations in conjunction with both open-loop and closed-loop transmit diversity schemes in a generalized fading environment (including Rayleigh, Nakagami-m, Rician and mixed multipath fading environments). The mathematical framework can treat the case of non-identical fading parameters and dissimilar mean signal strengths across the diversity paths.
We also present an analysis of the impact of spatial correlation on the performance of various transmit diversity schemes over Nakagami multipath fading channels. The impact of imperfect channel estimates (Gaussian errors) on various transmit diversity schemes in a Rayleigh fading environment is also studied via analytical as well as simulation techniques.",2003,0, 4319,H.264 Error Resilience Coding Based on Multihypothesis Motion Compensated Prediction,"In this paper, we propose efficient schemes for enhancing the error robustness of the multi-hypothesis motion-compensated predictive (MHMCP) coder without sacrificing the coding efficiency significantly. The proposed schemes utilize the concepts of reference picture interleaving and data partitioning to make the MHMCP-coded video more resilient to channel errors, especially burst channel errors. In addition, we propose a scheme that integrates adaptive intra-refresh into the proposed MHMCP coder to further improve the error recovery speed. Extensive simulation results show that the proposed methods can effectively and quickly mitigate the error propagation, and the penalty on coding efficiency for clean channels due to the inserted error resilience features is rather minor",2005,0, 4320,Error analysis in Croatian morphosyntactic tagging,"In this paper, we provide detailed insight into the properties of errors generated by a stochastic morphosyntactic tagger assigning MULTEXT-East morphosyntactic descriptions to Croatian texts. Tagging the Croatia Weekly newspaper corpus with the CroTag tagger in stochastic mode revealed that approximately 85 percent of all tagging errors occur on nouns, adjectives, pronouns and verbs. Moreover, approximately 50 percent of these are shown to be incorrect assignments of case values. We provide various other distributional properties of errors in assigning morphosyntactic descriptions for these and other parts of speech. On the basis of these properties, we propose rule-based and stochastic strategies which could be integrated in the tagging module, creating a hybrid procedure in order to raise overall tagging accuracy for Croatian.",2009,0, 4321,Simulation of the defect removal process with queuing theory,"In this paper, we simulate the defect removal process using finite independent queues with different capacities and loadings, which represent the limited number of developers and their ability differences. The re-assignment strategy used in defect removal represents the cooperation between relevant developers. Experimental results based on real data show that the simulated approach can provide very useful and important information which can help project managers to estimate the duration of the whole defect removal process, the utilization of each developer and the defects remaining at a specific time.",2009,0, 4322,Performance analysis and improvements for a simulation-based fault injection platform,"In this paper, we study and present two techniques to improve the performance of a simulation-based fault injection platform that inserts bit flips in order to model soft errors on digital circuits. The platform is based on the ESA Data Systems Division's SEE simulation tool. In contrast with methods based on emulation, the proposed approach reduces the complexity and costs, supplying a test environment with the same reliability as emulation systems. Only one disadvantage appears when comparing both methodologies: the lower performance of the simulation in cases where the fault injection campaigns are very large.
Two proposals have been developed in order to address this drawback: the first one is based on software (through checkpoints) and the second one uses parallel computation.",2008,0, 4323,Error reduction in non-electric measurement by interpolation combined with loop transformation method,"In this paper, we used a second-order interpolation method over three successive data points in a narrow area, combined with a loop transformation method, and used samples to build an algorithm for processing measurement data to reduce the non-linear errors and transformation errors of non-electric measuring devices. Simulation and experimental results show the ability to reduce the errors of measurement devices.",2010,0, 4324,Fault-tolerant execution of mobile agents,"In this paper, we will address the list of problems that have to be solved in mobile agent systems and we will present a set of fault-tolerance techniques that can increase the robustness of agent-based applications without introducing a high performance overhead. The framework includes a set of schemes for failure detection, checkpointing and restart, software rejuvenation, a resource-aware atomic migration protocol, a reconfigurable itinerary, a protocol that prevents agents from getting caught in node failures and a simple scheme to deal with network partitions. At the end, we will present some performance results that show the effectiveness of these fault-tolerance techniques",2000,0, 4325,A Double-Level Error Resilient Scheme of Joint Source and Channel Coding,"In this paper, we will consider the joint source and channel coding problem and propose a double-level error resilient scheme of joint source and channel coding. Compared to other joint source and channel coding schemes, we insert a coordination structure named error resilient entropy coding between source coding and channel coding to achieve additional error resilience capability. Using arithmetic coding based on a forbidden symbol and a standard LDPC code, we prove that, in coding redundancy and computational complexity, our scheme outperforms both the separate source and channel coding scheme and the joint source and channel coding scheme with synchronization words.",2009,0, 4326,Levenberg-Marquardt neural network for gear fault diagnosis,"In this study we apply the Levenberg-Marquardt neural network model to the problem of gear fault diagnosis. By using second derivative information, the network convergence speed is improved and the generalization performance is enhanced. Taking a gearbox fault signal acquisition experimental system as an example, Matlab software and its neural network toolbox are used to model and simulate. The simulation result shows that the Levenberg-Marquardt neural network performs well for common gear fault diagnosis and can identify various types of faults stably and accurately. Furthermore, compared with a conventional BP neural network, the Levenberg-Marquardt neural network requires fewer training epochs and achieves higher diagnosis accuracy.",2010,0, 4327,Coping with abstraction in object orientation with a special focus on application errors,"In this study we present and discuss various solution strategies used by students concerning error-handling. Our data is based on accumulated instruction experience gained during several years of an advanced OOP course in Java.
We analyze the provided solutions according to a set of categories based on constructive principles concerning software programming and on a classification of abstraction levels concerning error handling. The obtained results reveal that the majority of students have difficulties in utilizing the advanced error-handling mechanism offered by modern programming languages (i.e., the exception mechanism). The students also have difficulties in exhibiting a high level of abstraction concerning the proper design of exception hierarchies.",2010,0, 4328,Multiple hypotheses and their credibility in on-line fault diagnosis,"In this study, a new method that handles multiple hypotheses is presented for fault diagnosis using sequence-of-event recorders (SERs). To quantify the certainty of hypotheses, a method to calculate their credibility is provided. The proposed techniques are integrated in a generalized alarm analysis module (GAAM) and have been tested with numerous scenarios from the Italian power system",2001,0, 4329,Adaptation of neural network and application of digital ultrasonic image processing for the pattern recognition of defects in semiconductors,"In this study, the classification of artificial defects in semiconductor devices is performed by using pattern recognition technology. For this purpose, a pattern recognition algorithm including user-made software was developed, and the total procedure including image processing and a self-organizing map was treated by a backpropagation neural network, where image processing was composed of ultrasonic image acquisition, equalization filtering, binary processing and edge detection. Image processing and the self-organizing map were compared as preprocessing methods for the reduction of dimensionality of input data into multi-layer perceptron or backpropagation neural networks. Also, the pattern recognition technique has been applied to classify two kinds of semiconductor defects: cracks and delamination. According to these results, it was found that the self-organizing map provided recognition rates of 83.4% and 75.7% for delamination and cracks, respectively, while BP provided 100% recognition rates",2001,0, 4330,A Fault Tolerant Topology Control Algorithm for Large-Scale Sensor Networks,"In this paper, we present a Distributed Geography-based Fault Tolerant topology control algorithm (DGFT) for static large-scale wireless sensor networks. We introduce the scale-free characteristic of complex networks into the topology of large-scale wireless sensor networks to obtain robustness and time efficiency. DGFT enables wireless nodes to define the topology by neighbor relationships, based on certain initialized weights whose distribution follows a negative power law. We prove that the topology constructed under DGFT is strongly connected and bidirectional. Simulation studies show that the resulting topology has good network performance in terms of transmission delay and robustness.",2007,0, 4331,Automatic Synthesis of Fault Detection Modules for Mobile Robots,"In this paper, we present a new approach for automatic synthesis of fault detection modules for autonomous mobile robots. The method relies on the fact that hardware faults typically change the flow of sensory perceptions received by the robot and the subsequent behavior of the control program. We collect data from three experiments with real robots.
In each experiment, we record all sensory inputs from the robots while they are operating normally and after software-simulated faults have been injected. We use back-propagation neural networks to synthesize task-dependent fault detection modules. The performance of the modules is evaluated in terms of false positives and latency.",2007,0, 4332,Error-Rate Analysis of FHSS Networks Using Exact Envelope Characteristic Functions of Sums of Stochastic Signals,"In this paper, we present a novel approach to analytically study and evaluate the error probability of frequency-hopping spread-spectrum (FHSS) systems with intentional or nonintentional interference over the additive white Gaussian noise and Rayleigh-fading channels. The new approach is based on the derivation of new formulas for the exact envelope characteristic function (EECF) of the general sum of n stochastic sinusoidal signals, with each of the n signals having a different random amplitude and phase angle. The envelope probability density function (pdf) is obtained from the characteristic function (CF), which, in the important cases of interest, is shown to also give simpler formulas in terms of the Fourier transform (FT) of the Bessel functions. Previously, the Ricean envelope density had only been verified for the very special case where n = 1, and the phase is uniform and independent of amplitude. Here, a new formula for the exact density of the envelope of noisy stochastic sinusoids (EDENSS) is presented, which leads to the generalization of the Ricean envelope-density (GRED) formula under the most general conditions, namely, n ≥ 1 signals, dependent amplitudes, and phases having an arbitrary joint pdf. The EDENSS and GRED formulas are applied to compute the pdfs needed in noncoherent detection under noise. The derived formulas also lead to the exact formulas for the error probability of FHSS networks using M-ary amplitude shift keying (MASK) without setting limits on the number of interferers or the symbol alphabet. The power of our EECF and FT methods is further demonstrated by their ability to give an alternative derivation of the exact general envelope-density (EGED) formula, which has previously been reported by Maghsoodi. The comparative numerical results also support the analytical findings.",2008,0, 4333,Design and Analysis of a Sensing Error-Aware MAC Protocol for Cognitive Radio Networks,"In this paper, we present a spectrum sensing error-aware MAC protocol for a cognitive radio (CR) network collocated with multiple primary networks. We explicitly consider sensing errors in the CR MAC design, since such errors are inevitable for practical spectrum sensors. Two spectrum sensing policies are presented with which secondary users collaboratively sense the licensed channels. The sensing policies are then incorporated into p-Persistent CSMA to coordinate dynamic spectrum access for CR network users. We present an analysis of the interference and throughput performance of the proposed CR MAC, and find the analysis highly accurate in our simulation studies. The proposed sensing error-aware CR MAC protocol outperforms two existing approaches with considerable margins in our simulations, which justifies the importance of considering spectrum sensing errors in CR MAC design.",2009,0, 4334,Testing of LUT delay aliasing faults in SRAM-based FPGAs using half-frequencies,"In this paper, we present a technique for testing the delay aliasing faults associated with LUTs in SRAM-based FPGAs.
We compare the outputs of two identical LUTs when one is operated at half the frequency of the other. Built-In Self-Test (BIST) circuitry consisting of a Test Pattern Generator, a Comparator, and the Circuit Under Test (CUT) is mapped on the FPGA. The application of input sequence vectors at half frequencies to the LUTs enables the detection of delay and aliasing faults which may go undetected by other techniques. The technique is verified using VHDL-based simulations. The results are also experimentally verified using a Virtex II FPGA board.",2007,0, 4335,Access Control System: A Cost Effective Protection Scheme for Fiber Fault Identification,"In this paper, we present the access control system (ACS), a cost-effective protection scheme for fiber fault identification in fiber-to-the-home (FTTH) passive optical networks (PONs). To achieve better protection, the ACS can be accessed by a remote host via the Internet or a LAN; with the specifications available, a complete system can be built in 24 man-hours. Our solution is shown to be effective in improving live-fiber services.",2009,0, 4336,Software fault injection for survivability,"In this paper, we present an approach and experimental results from using software fault injection to assess information survivability. We define information survivability to mean the ability of an information system to continue to operate in the presence of faults, anomalous system behavior, or malicious attack. In the past, finding and removing software flaws has traditionally been the realm of software testing. Software testing has largely concerned itself with ensuring that software behaves correctly, an intractable problem for any non-trivial piece of software. In this paper, we present off-nominal testing techniques, which are not concerned with the correctness of the software, but with the survivability of the software in the face of anomalous events and malicious attack. Where software testing is focused on ensuring that the software computes the specified function correctly, we are concerned that the software continues to operate in the presence of faults, unusual system events or malicious attacks",2000,0, 4337,"Improving fault-tolerance in intelligent video surveillance by monitoring, diagnosis and dynamic reconfiguration","In this paper, we present an approach for improving fault-tolerance and service availability in intelligent video surveillance (IVS) systems. A typical IVS system consists of various intelligent video sensors that combine image sensing with video analysis and network streaming. System monitoring and fault diagnosis followed by appropriate dynamic system reconfiguration mitigate the effects of faults and therefore enhance the system's fault-tolerance. The applied monitoring and diagnosis unit (MDU) allows the detection of both node- and system-level faults. Lacking redundant hardware, such reconfigurations are established by graceful degradation of the overall application. An optimizer module that performs multi-criterion optimization is used to compute a new degraded system configuration by trading off quality of service (QoS), energy consumption, and service availability. We demonstrate the functionality of our approach by an illustrative example.",2005,0, 4338,Enhancing Fault-Tolerance in a Distributed Mutual Exclusion Algorithm,"In this paper, we present an efficient fault-tolerant token-based algorithm for achieving mutual exclusion (ME) in distributed systems.
Nishio et al.'s fault-tolerant mutual exclusion algorithm requires feedback from every other site to recover from token loss. This results in a considerable amount of waiting time and false token loss detection. Though Manivannan et al.'s algorithm solves the problems in Nishio et al.'s, their algorithm cannot work if a failed site is not repaired within a finite time. This paper proposes an approach to remove the drawback of Manivannan et al.'s method for achieving fault-tolerance. Our algorithm gives better performance in terms of message complexity (MC), synchronization delay (SD), response time (RT) and degree of fault-tolerance in comparison with Manivannan et al.'s algorithm.",2006,0, 4339,Fault tolerant control method for steer-by-wire systems,"In this study, we describe a model-based fault-tolerant control method for steer-by-wire systems. First, general redundancy schemes and current work on steer-by-wire fault-tolerant control systems are examined. Then the paper focuses on the fault-tolerant control methods for the key sensors and DC motors of steer-by-wire systems. With the aim of providing a steady and reliable fault-tolerant control algorithm, we synthesize an Adaptive Fading Kalman Filter and fault eigenvectors as a framework to establish the fault-tolerant control system. The effectiveness of this research is demonstrated by simulation. Finally, key innovations of the fault-tolerant control method are discussed.",2009,0, 4340,Efficient Fault-Tolerant Backbone Construction in Tmote Sky Sensor Networks,"In this study, we have investigated the effectiveness of building a 'Fault-Tolerant Backbone' for data dissemination in Tmote Sky sensor networks. Tmote Sky sensors provide programmable and adjustable output power for data transmission. Users can set an adequate transmission power for each sensor. Based on our measurements of Tmote Sky, there is a steadily-transmitted distance for every power level. For a certain power level, the successfully-transmitted ratio was approximately 100 percent when the distance between sender and receiver was less than the steadily-transmitted distance. In accordance with this characteristic of Tmote Sky, the idea of a fault-tolerant backbone has been developed for constructing a fault-tolerant and stable system for Tmote Sky. The fault-tolerant backbone protocol builds up a connected backbone, in which nodes are endowed with a sleep/awake schedule. Practical experimental results reveal that fast fault recovery and a high successfully-transmitted ratio can be achieved in a realistic system. The following goals in the implementation have been reached: self-configurable fault-tolerant groups, automatic backbone construction, automatic failure recovery, and route repair.",2009,0, 4341,The Learning with Errors Problem (Invited Survey),"In this survey we describe the Learning with Errors (LWE) problem, discuss its properties, its hardness, and its cryptographic applications.",2010,0, 4342,Optical satellite images for co-seismic horizontal offset estimation and fault trace mapping using the Phase-corr technique,"In this work, a new robust unwrapping-free phase correlation method is presented for retrieving the coseismic displacement field and mapping the surface rupture fault trace using optical data. The Phase-corr method does not need phase unwrapping and has been proved to be robust under a wide variety of circumstances.
The method has been applied to two different test cases, the Izmit (Turkey) and Kashmir (Pakistan) earthquakes, which occurred on August 17, 1999, and October 8, 2005, respectively. We measured the near-field deformations exploiting two geometrically corrected IRS images with similar look angles in the case of the Izmit earthquake, while the Kashmir earthquake coseismic displacement has been retrieved from ASTER data. The results show that the Phase-corr method can be used for deriving the coseismic slip offsets due to a large earthquake (and for mapping its fault trace) using optical data from different sensors.",2010,0, 4343,Detection of Stator Winding Faults in Induction Machines Using an Internal Flux Sensor,"In this work, the implementation of a special flux coil sensor inside three-phase induction motors, which are used as experimental platforms, is presented. This sensor is sensitive to the electromagnetic field and is used for the detection and diagnosis of electrical faults. The relationship between the main electrical faults (inter-turn short circuits and unbalanced voltage supplies) and the magnetic flux signals was established in order to identify the characteristic frequencies of those faults. The experimental results showed the efficiency of the flux coil sensor developed and of the strategies for detection, diagnosis and monitoring tasks. The results were undoubtedly impressive, and the system developed can be adapted and used in real predictive maintenance programs in industry.",2007,0, 4344,Probabilistic fault prediction of incipient faults,"In this work, a probabilistic fault prediction approach is presented for the prediction of incipient faults under uncertainty. The approach has two stages. In the first stage, normal data is analyzed by principal component analysis (PCA) to get the control limits of the T2 and SPE statistics. In the second stage, fault data is first analyzed by PCA so as to derive the T2 and SPE statistics. Then, samples of these two statistics obeying a certain prediction distribution are obtained using a Bayesian AR model on the basis of the WinBUGS software. At last, one-step prediction fault probabilities are estimated by the kernel density estimation method according to the statistics' corresponding control limits. The prediction performance of this approach is illustrated using data from the simulator of the Tennessee Eastman process.",2010,0, 4345,Comparison between 65nm bulk and PD-SOI MOSFETs: Si/BOX interface effect on point defects and doping profiles,"In this work, the influence of the Silicon/Buried Oxide (Si/BOX) interface on the electrical characteristics of Silicon-On-Insulator (SOI) MOSFETs is investigated by means of numerical simulations. Considering state-of-the-art dopant diffusion models and the effect of the Si/BOX interface as a point defect sink, process simulations were performed to investigate the two-dimensional diffusion behaviour of the dopant impurities. The impact of the Si/BOX interface on the shape of the different active zone profiles was investigated by analyzing the standard electrical characteristics of CMOS devices. Finally, a new electrical characterization methodology is detailed to better analyze dopant lateral diffusion profiles.",2009,0, 4346,A New Statistical Model for the Behavior of Ranging Errors in TOA-Based Indoor Localization,"In time of arrival (TOA)-based indoor geolocation systems, ranging error is a function of the bandwidth of the system and the availability of the direct path between the transmitter and the receiver.
Under detected direct path (DDP) conditions and with ultra-wideband (UWB) transmission, precise range estimates are feasible, while under undetected direct path (UDP) conditions large ranging errors occur which cannot be cured by increasing the transmission power or bandwidth. UDP conditions are caused by large metallic objects between the transmitter and the receiver, or by an increase in the distance between the transmitter and the receiver such that the direct path fades away but the receiver still receives signals from other paths. For a given location of the transmitter with respect to the large metallic objects, the probability of occurrence of UDP conditions changes. This paper provides an analytical method for calculating the overall statistics of the ranging error for different locations of the transmitter in a typical indoor environment. The results can be used for the analysis of the performance of precise RF localization techniques for sensor networks. Based on this model, we show that the IEEE P802.15.3 recommended model is not adequate to represent the behavior of ranging errors in typical indoor environments.",2007,0, 4347,Using defect analysis feedback for improving quality and productivity in iterative software development,"In today's business where speed is of the essence, an iterative development approach that allows the functionality to be delivered in parts has become a necessity and an effective way to manage risks. Iterative development allows feedback from an iteration to influence decisions in future iterations, thereby making software development more responsive to changing user and business needs. In this paper we discuss the role of defect analysis as a feedback mechanism to improve the quality and productivity in an iteratively developed software project. We discuss how analysis of defects found in one iteration can provide feedback for defect prevention in later iterations, leading to quality and productivity improvement. We give an example of its use and benefits on a commercial project",2005,0, 4348,Analysis on the perceptual impact of bit errors in practical video streaming applications,"In video streaming applications, there are two main sources of distortion in video quality: source distortion caused by video compression and channel distortion derived from failures in transmission, including packet losses and bit errors. The impact of bit errors on the perceived video quality depends on several different factors, such as the error resilience of the data format itself, the error concealment mechanism used and the characteristics of the bit error pattern. In this paper, we have studied these different influencing factors from different perspectives, ranging from telecommunications to video signal processing and subjective quality assessment. We show that our results are useful for the assessment and development of schemes to optimize bit error resilience in video streaming and broadcasting applications.",2009,0, 4349,"REDFLAG a Run-timE, Distributed, Flexible, Lightweight, And Generic fault detection service for data-driven wireless sensor applications","Increased interest in Wireless Sensor Networks (WSNs) by scientists and engineers is forcing WSN research to focus on application requirements. Data is available as never before in many fields of study; practitioners are now burdened with the challenge of doing data-rich research rather than being data-starved.
In-situ sensors can be prone to errors, links between nodes are often unreliable, and nodes may become unresponsive in harsh environments, leaving researchers the onerous task of deciphering often anomalous data. Presented here is the REDFLAG fault detection service for WSN applications, a Run-timE, Distributed, Flexible detector of faults that is also Lightweight And Generic. REDFLAG addresses the two most worrisome issues in data-driven wireless sensor applications: abnormal data and missing data. REDFLAG exposes faults as they occur by using distributed algorithms in order to conserve energy. Simulation results show that REDFLAG is lightweight both in terms of footprint and required power resources while ensuring satisfactory detection and diagnosis accuracy. Because REDFLAG is unrestrictive, it is generically available to a myriad of applications and scenarios.",2009,0, 4350,Evaluation of a new low-cost software-level fault tolerance technique to cope with soft errors,"Increasing soft error rates make the protection of combinational logic against transient faults in future technologies a major issue for the fault tolerance community. Since not every transient fault leads to an error at the application level, software-level fault tolerance has been proposed by several authors as a better approach. In this paper, a new software-level technique to detect and correct errors due to transient faults is proposed and compared to a classic one, and the costs of detection and correction for both approaches are compared and discussed.",2010,0, 4351,Simulations within Information Fusion - The Need for a High Fault Tolerance Degree,"Information fusion infrastructures have a need for real-time simulations. Many information fusion applications inherently execute real-world actions as a result of combining simulations and real hardware or end-users. We articulate the need for a high degree of fault tolerance for systems dealing with real-world actions. In this paper we elaborate on why traditional simulation infrastructures are not sufficient and provide a solution to this problem. Our approach includes the use of a whiteboard architecture that serves as a communication layer as well as a storage system. The whiteboard architecture is implemented on a distributed active real-time database prototype called DeeDS NG. The paper describes the different degrees of fault tolerance available, ranging from fault masking down to best-effort fault tolerance, depending on the particular needs of a system. For the future we are planning a series of simulation experiments to validate our claims concerning the usefulness of the whiteboard architecture described in the paper.",2010,0, 4352,Research on information engineering surveillance risk evaluation based on fault tree analysis,"Information security risk analysis is now a hot issue in the information security management field. The fault tree analysis method, proposed in the 1960s, has obtained widespread application in the security and fail-safe analyses of many large-scale complicated systems. It is recognized as an effective method for complex system reliability and security analysis. The basic principle, qualitative analysis and quantitative analysis of the fault tree analysis method are introduced. This article then briefly introduces the information security risk analysis method and elaborates in detail on the modeling approach and analysis principles of fault-tree-based risk analysis.
Finally, an example is introduced to come to a conclusion on whether the project is feasible.",2010,0, 4353,Influence of team size and defect detection technique on inspection effectiveness,"Inspection team size and the set of defect detection techniques used by the team are major characteristics of the inspection design, which influences inspection effectiveness, benefit and cost. The authors focus on the inspection performance of a nominal, that is, non-communicating, team, similar to the situation of an inspection team after independent individual preparation. We propose a statistical model based on empirical data to calculate the expected values for the inspection effectiveness and effort of synthetic nominal teams. Further, we introduce an economic model to compute the inspection benefits, net gain, and return on investment. With these models we determine (a) the best mix of reading techniques (RTs) to maximize the average inspection performance for a given team size, (b) the optimal team size and RT mix for a given inspection time budget, and (c) the benefit of an additional inspector for a given team size. Main results of the investigation with data from a controlled experiment are: (a) benefits of an additional inspector for a given RT diminished quickly with growing team size; thus, above a given team size a mix of different RTs is more effective and has a higher net gain than using only one RT; (b) the cost-benefit model limits team size, since the diminishing gain of an additional inspector at some point is more than offset by his additional cost",2001,0, 4354,The modeling and analysis of geometric error for a machining center by the homogeneous coordinate transformation method,"In this paper, the overall error transformation matrix from the tip of the cutting tool to the workbench is set up for the TH6350 machining center. Based on this, a numerical model of the universal geometric error of the machining center is built by applying the homogeneous coordinate transformation principle and the rigid-body hypothesis, the geometric error of the TH6350 machining center is calculated, and the numerical model of geometric error is verified and analyzed. The analysis result plays a very important role in improving the design, error compensation and practical machining of the TH6350 machining center.",2006,0, 4355,"The Lth and (L+1)th order asymptotic symbol error rate based SNR expressions as a function of modulation scheme, constellation size and diversity order in a Rayleigh fading channel","In this paper, the expression for the symbol error rate (SER) or BER of linear modulation in Rayleigh fading as a function of signal-to-noise ratio (SNR), modulation scheme and diversity order is inverted, thereby giving SNR as a function of SER. The key is to represent the PDF (or MGF) by a Maclaurin series and discard all but the first one or two terms. For large SNRs, the first term in the series is accurate, and simple closed-form SNR expressions for an arbitrary diversity order are derived in terms of the asymptotic SER (ASER) in SNR. Here the ASER contains only the term of the inverse SNR raised to the system diversity order L, and is therefore called the Lth-order ASER. When the SNR is not large enough, the first two terms in the series, the terms containing the Lth and (L+1)th orders, are used and proven to be accurate. Based on the derived ASERs, we then present closed-form SNR expressions for smaller SNRs. In this paper, closed-form expressions for BPSK, MQAM and MPSK signals are presented, respectively.
Using the derived closed-form expressions, one can easily design a link budget for a given BER requirement without going through tedious and time-consuming simulations (for example, in MATLAB). As an application, several examples are presented to illustrate the ease of use of the presented SNR closed-form expressions in a wireless SISO and MIMO link budget design. Tables are given to show the SNR expressions for their SER ranges with different modulation schemes/constellation sizes, for diversity orders varying from one to four.",2009,0, 4356,Summarizing software artifacts: a case study of bug reports,"Many software artifacts are created, maintained and evolved as part of a software development project. As software developers work on a project, they interact with existing project artifacts, performing such activities as reading previously filed bug reports in search of duplicate reports. These activities often require a developer to peruse a substantial amount of text. In this paper, we investigate whether it is possible to summarize software artifacts automatically and effectively so that developers could consult smaller summaries instead of entire artifacts. To provide focus to our investigation, we consider the generation of summaries for bug reports. We found that existing conversation-based generators can produce better results than random generators and that a generator trained specifically on bug reports can perform statistically better than existing conversation-based generators. We demonstrate that humans also find these generated summaries reasonable, indicating that summaries might be used effectively for many tasks.",2010,0, 4357,BFT-WS: A Byzantine Fault Tolerance Framework for Web Services,"Many Web services are expected to run with a high degree of security and dependability. To achieve this goal, it is essential to use a Web-services-compatible framework that tolerates not only crash faults, but Byzantine faults as well, due to the untrusted communication environment in which the Web services operate. In this paper, we describe the design and implementation of such a framework, called BFT-WS. BFT-WS is designed to operate on top of the standard SOAP messaging framework for maximum interoperability. It is implemented as a pluggable module within the Axis2 architecture; as such, it requires minimal changes to the Web applications. The core fault tolerance mechanisms used in BFT-WS are based on the well-known Castro and Liskov BFT algorithm for optimal efficiency. Our performance measurements confirm that BFT-WS incurs only moderate runtime overhead considering the complexity of the mechanisms.",2007,0, 4358,Comparison of impedance and travelling wave fault location using real faults,"Market deregulation has changed the way transmission utilities manage their lines, as they are not paid by the amount of transmitted power, but by the availability of their lines. The reason for that is the concept of the free energy market (generation prices), which needs the transmission system to be always available for large energy block transfers from the cheapest generator in the system. In this new model, the transmission utilities are penalized for the time a line is out of service after a permanent fault. The time needed for restoring the line to the system is mostly lost in locating the real point of the fault.
This paper shows how this time can be significantly reduced by using travelling waves.",2010,0,
4359,Software-based fault-tolerant routing algorithm in multidimensional networks,"Massively parallel computing systems are being built with hundreds or thousands of components such as nodes, links, memories, and connectors. The failure of a component in such systems will not only reduce the computational power but also alter the network's topology. The software-based fault-tolerant routing algorithm is a popular routing scheme for achieving fault-tolerance capability in networks. This algorithm was initially proposed only for two-dimensional networks (Suh et al., 2000). Since higher dimensional networks have been widely employed in many contemporary massively parallel systems, this paper proposes an approach to extend this routing scheme to these indispensable higher dimensional networks. Deadlock and livelock freedom and the performance of the presented algorithm have been investigated for networks with different dimensionality and various fault regions. Furthermore, performance results have been presented through simulation experiments.",2006,0,
4360,Error whitening criterion for adaptive filtering: theory and algorithms,"Mean squared error (MSE) has been the dominant criterion in adaptive filter theory. A major drawback of the MSE criterion in linear filter adaptation is the parameter bias in the Wiener solution when the input data are contaminated with noise. We propose and analyze a new augmented MSE criterion called the Error Whitening Criterion (EWC). EWC is able to eliminate this bias when the noise is white. We will determine the analytical solution of the EWC, discuss some interesting properties, and develop stochastic gradient and other fast algorithms to calculate the EWC solution in an online fashion. The stochastic algorithms are locally computable and have structures and complexities similar to their MSE-based counterparts (LMS and NLMS). Convergence of the stochastic gradient algorithm is established with mild assumptions, and upper bounds on the step sizes are deduced for guaranteed convergence. We will briefly discuss an RLS-like Recursive Error Whitening (REW) algorithm and a minor components analysis (MCA) based EWC-total least squares (TLS) algorithm, and further draw parallels between the REW algorithm and the Instrumental Variables (IV) method for system identification. Finally, we will demonstrate the noise-rejection capability of the EWC by comparing its performance with the MSE criterion and TLS.",2005,0,
4361,Measurement Error Analysis of Autonomous Decentralized Load Tracking System,"Measurement error analysis is presented for an autonomous decentralized load tracking system. We propose a load tracking mechanism based on incomplete management of load information from subsystems. The system employs an autonomous decentralized architecture, which enables online expansion and fault tolerance. Each subsystem asynchronously broadcasts load information to the data field shared by the subsystems. Load information in the data field is measured for a limited period of time to make the tracking ability efficient. The number of measurements and the estimation of total load from the measurements are stochastic variables.
This paper shows the statistical model of measurement noise disturbing the tracking mechanism.",2006,0,
4362,An empirical approach for software fault prediction,"Measuring software quality in terms of fault proneness of data can help tomorrow's programmers to predict the fault prone areas in projects before development. Knowing the faulty areas early, from previously developed projects, can be used to allocate experienced professionals for the development of fault prone modules. Experienced persons can emphasize the faulty areas and can get the solutions in minimum time and budget, which in turn increases software quality and customer satisfaction. We have used the Fuzzy C-Means clustering technique for the prediction of faulty/non-faulty modules in the project. The datasets used for training and testing, available from the NASA projects CM1, PC1 and JM1, include requirement and code metrics, which are then combined to get a combination metric model. These three models are then compared with each other, and the results show that the combination metric model is the best prediction model among the three. Also, this approach is compared with others in the literature and is proved to be more accurate. This approach has been implemented in MATLAB 7.9.",2010,0,
4363,Fault diagnosis and fault tolerance of drive systems - Status and research,"Mechatronic components allow the design of systems with many new functionalities and a turn away from classical design paradigms which had been required for purely mechanical designs. However, the increase in complexity and the move from purely mechanical to electro-mechanical designs go along with an increase in the possibility of unpredictable faults. Therefore, the design paradigm also has to advance from fault detection and diagnosis to fault tolerance by redundancy, at least for safety-related applications. The idea behind this paradigm change is that faults cannot entirely be avoided in complex systems. Rather, the occurrence of faults is accepted and measures are taken to limit the implications of the fault and maintain operability despite the impairments caused by the fault. The paper at hand will first introduce the overall setup of a component or system with integrated fault management. Fault detection and diagnosis are compactly sketched before fault tolerance principles are introduced. The paper looks at both hardware tolerance and analytical tolerance. As the reaction to the fault can cause a functional degradation even in the presence of redundancy, the different degradation steps are also introduced. Then, different designs of fault tolerant electric and hydraulic actuators as well as sensors are presented. An essential requirement for the inclusion of the designs into this paper was at least an experimental realization of the concept. A total of 11 different designs is presented. Finally, the different designs are compared and future developments are briefly discussed.",2009,0,
4364,Peer-to-Peer Error Recovery for Hybrid Satellite-Terrestrial Networks,"Media companies (and other organizations with large amounts of digital content) require prompt broadcast of extremely large files from a single source to a collection of geographically dispersed destinations. Due to the high cost of terrestrial networks of sufficient bandwidth, satellite networks are commonly used for such transfers. However, current satellite transfers rely on expensive error correction via forward error correction and whole-file retransmission.
This paper presents a new, hybrid solution combining the advantages of satellite and terrestrial networks to provide cost-effective reliable file transfer. Specifically, we propose a new peer-to-peer scheme exploiting fast terrestrial networks and multiple receivers to recover from high loss rates (5% or more) in near real-time (latency < 400 ms). This solution is efficient, robust under variable packet loss and connectivity, user tunable, scales well, and doubles bandwidth compared to existing approaches. The system has been validated via extensive simulations using a terrestrial network based on the AT&T common backbone core network.",2006,0,
4365,Analysis and Prevention of Dispensing Errors by Using Data Mining Techniques,"Medical treatment techniques have been improved continuously in the past years. However, better approaches are still needed to solve medical treatment problems. One important topic in this field is the analysis and prevention of medication errors. In this paper, we focus on the problem of dispensing errors, an important class of medication errors, and we propose a prevention model using three approaches. The proposed dispensing error mining framework consists of two phases, namely the modeling and prediction phases. Firstly, a statistical approach (logistic regression) and data mining approaches (C4.5 and SVM) are used to analyze the dispensing error problem and to build classification models. Three kinds of factors, namely a drug-names factor, a drug-properties factor and an environmental factor, with a total of thirteen attributes, are used in the modeling phase. In the prediction phase, new drugs can thus be analyzed by the model for the probability of a dispensing error, so as to prevent dispensing errors. Finally, experimental results on a real dataset show that the proposed approach is effective and that the considered factors can actually increase the accuracy of the model.",2007,0,
4366,Zoltar: A Toolset for Automatic Fault Localization,"Locating software components which are responsible for observed failures is the most expensive, error-prone phase in the software development life cycle. Automated diagnosis of software faults can improve the efficiency of the debugging process, and is therefore an important process for the development of dependable software. In this paper we present a toolset for automatic fault localization, dubbed Zoltar, which hosts a range of spectrum-based fault localization techniques featuring BARINEL, our latest algorithm. The toolset provides the infrastructure to automatically instrument the source code of software programs to produce runtime data, which is subsequently analyzed to return a ranked list of diagnosis candidates. Aimed at total automation (e.g., for runtime fault diagnosis), Zoltar has the capability of instrumenting the program under analysis with fault screeners as a run-time replacement for design-time test oracles.",2009,0,
4367,The application of multi-function interface MVB NIC in distributed locomotive fault detecting and recording system,"The locomotive condition monitoring and fault diagnosis system is an important component of a modern locomotive; it needs a reliable, high-speed communication network to ensure the system's reliable operation in the complex locomotive environment. The Controller Area Network (CAN) used in the existing distributed locomotive fault detecting and recording system is not suitable as a vehicle bus, so this paper brings forward a scheme using the Multifunction Vehicle Bus (MVB).
Firstly, the alteration of the system structure and the key design concepts of the operating principle are described in detail; next, the multi-function interface MVB NIC is designed using SOPC (system on a programmable chip) technology, and its hardware and software realization is given; ultimately, network tests are carried out in the lab, verifying the correctness and feasibility of the design. The improved network has a longer transmission distance, higher rates, better reliability and better real-time performance.",2010,0,
4368,Error-resilient LZW data compression,"Lossless data compression systems are typically regarded as very brittle to transmission errors. This limits their applicability to domains like noisy tetherless channels or file systems that can possibly get corrupted. Here we show how a popular lossless data compression scheme used in file formats GIF, PDF, and TIFF, among others, can be made error-resilient in such a way that the compression performance is minimally affected. The new scheme is designed to be backward-compatible, that is, a file compressed with our error-resilient algorithm can still be decompressed by the original decoder. In this preliminary report, we present our scheme, collect some experimental data supporting our claims, and provide some theoretical justifications.",2006,0,
4369,Fusion of the MR image to SPECT with possible correction for partial volume effects,"Low spatial resolution and the related partial volume effects limit the diagnostic potential of brain single photon emission computed tomography (SPECT) imaging. As a possible remedy for this problem we propose a technique for the fusion of SPECT and MR images, which requires for a given patient the SPECT data and the T1-weighted MR image. Basically, after the reconstruction and coregistration steps, the high-frequency part of the MR, which would be unrecoverable by the given SPECT acquisition system and reconstruction algorithm, is extracted and added to the SPECT image. The tuning of the weight of the MR on the resulting fused image can be performed very quickly, any iterative reconstruction algorithm can be used and, in the case that the SPECT projections are not available, the proposed technique can also be applied directly to the SPECT image, provided that the performance of the scanner is known. The procedure has the potential of increasing the diagnostic value of a SPECT image. Even in the locations of SPECT-MR mismatch it does not significantly affect quantitation over regions of interest (ROIs) whose dimensions are decidedly larger than the SPECT resolution distance. On the other hand, appreciable corrections for partial volume effects are expected in the locations where the contrast in the structural MR matches the corresponding contrast in functional activity.",2006,0,
4370,Low-complexity video error concealment for mobile applications using OBMA,"Low-complexity error concealment techniques for mobile applications are studied extensively in this paper. The boundary matching algorithm (BMA) is an attractive choice for mobile video concealment due to its low complexity. Here, we examine a variant of BMA called the outer boundary matching algorithm (OBMA). Although BMA and OBMA are similar in their design principle, it is empirically observed that OBMA outperforms BMA by a significant margin (typically, 0.5 dB or higher) while maintaining the same level of complexity.
In this work, we attempt to explain the superior performance of OBMA, and conclude that OBMA provides an excellent tradeoff between the complexity and the quality of concealed video for a wide range of test video sequences and error conditions.",2008,0,
4371,Predicting Defects and Changes with Import Relations,Lowering the number of defects and estimating the development time of a software project are two important goals of software engineering. To predict the number of defects and changes we train models with import relations. This enables us to decrease the number of defects by more efficient testing and to assess the effort needed with respect to the number of changes.,2007,0,
4372,Correction of MR k-space data corrupted by spike noise,"Magnetic resonance images are reconstructed from digitized raw data, which are collected in the spatial-frequency domain (also called k-space). Occasionally, single or multiple data points in the k-space data are corrupted by spike noise, causing striation artifacts in images. Thresholding methods for detecting corrupted data points can fail because of small alterations, especially for data points in the low spatial frequency area where the k-space variation is large. Restoration of corrupted data points using interpolations of neighboring pixels can give incorrect results. The authors propose a Fourier transform method for detecting and restoring corrupted data points using a window filter derived from the striation-artifact structure in an image or an intermediate domain. The method provides an analytical solution for the alteration at each corrupted data point. It can effectively restore corrupted k-space data, removing striation artifacts in images, provided that the following 3 conditions are satisfied. First, a region of known signal distribution (for example, air background) is visible in either the image or the intermediate domain so that it can be selected using a window filter. Second, multiple spikes are separated by the full-width at half-maximum of the point spread function for the window filter. Third, the magnitude of a spike is larger than the minimum detectable value determined by the window filter and the standard deviation of k-space random noise.",2000,0,
4373,In vivo velocity and flow errors quantification by phase-contrast magnetic resonance imaging,"Magnetic resonance imaging is a very efficient tool for assessing velocity and flow in the cardiovascular system under normal and pathological conditions. However, this technique still has some limitations that produce different types of errors. In this study, velocities and flow were measured in vivo using the phase-contrast method to determine the optimal number of phases allowing the minimization of the errors. The effect of velocity encoding upsampling was also investigated. The results showed that a number of phases between 16 and 24 is a good compromise to accurately estimate both ejection and regurgitant flows. Furthermore, a time shift effect caused by velocity encoding upsampling was found and a corrective linear model was proposed. These considerations may reduce flow and velocity measurement errors in normal and pathological conditions.",2008,0,
4374,Automatic Generation of Detection Algorithms for Design Defects,"Maintenance is recognised as the most difficult and expensive activity of the software development process. Numerous techniques and processes have been proposed to ease the maintenance of software.
In particular, several authors published design defects formalising ""bad"" solutions to recurring design problems (e.g., anti-patterns, code smells). We propose a language and a framework to express design defects synthetically and to generate detection algorithms automatically. We show that this language is sufficient to describe some design defects and to generate detection algorithms, which have a good precision. We validate the generated algorithms on several programs.",2006,0,
4375,Towards real-time hardware gamma correction for dynamic contrast enhancement,"Making the transition between digital video imagery acquired by a focal plane array and imagery useful to a human operator is not a simple process. The focal plane array ""sees"" the world in a fundamentally different way than the human eye. Gamma correction has historically been used to help bridge the gap. The gamma correction process is a non-linear mapping of intensity from input to output where the parameter gamma can be adjusted to improve the imagery's visual appeal. In analog video systems, gamma correction is performed with analog circuitry and is adjusted manually. With a digital video stream, gamma correction can be provided using mathematical operations in a digital circuit. In addition to manual control, gamma correction can also be automatically adjusted to compensate for changes in the scene. We are interested in applying automatic gamma correction in systems such as night vision goggles where both low latency and power efficiency are important design parameters. We present our results in developing an automatic gamma correction algorithm to meet these requirements. The algorithm is comprised of two parts: determination of the desired value for gamma and the application of the correction. The calculation of the gamma value update is performed based upon statistical metrics of the imagery's intensity. HDL code implementing the measurement of the statistical metrics has been developed and tested in hardware. Both the computation of a gamma update and the application of the gamma correction were simplified to basic arithmetic operations and two specialized functions, logarithm and exponentiation of a constant base by a variable exponent. We present approximation methods for both specialized functions, simplifying their implementation into basic arithmetic operations. The hardware implementations of the approximations allow the above requirements to be met. We evaluate the accuracy of the approximations as compared to full-resolution double-precision floating-point mathematical operations. We present the final results for visual judgment to evaluate the impact of the approximations.",2009,0,
4376,A research on fault diagnosis of grounding grids,"Making use of electric network theory and matrix theory, the fault diagnosis equations are set up in the form of a sensitivity matrix. Then the multiple-excitation method is applied to increase the number of independent equations and effectively resolve the problem of ill-conditioned equations. By introducing the micro-disposal method and the iterative algorithm, we can use the least squares approach to solve the nonlinear equations progressively by a series of linear equations.
The results of the emulation calculation and simulation experiment show that the proposed method is simple and feasible, so it can be applied to fault diagnosis of grounding grids.",2009,0,
4377,Topology discovery for network fault management using mobile agents in ad-hoc networks,"Managing today's complex and increasingly heterogeneous networks requires in-depth knowledge and extensive training as well as the collection of very large amounts of data. Fault management is one of the functional areas of network management that entails detection, identification and correction of anomalies that disrupt the services of a network. The task of fault management is even harder in ad-hoc networks where the topology of the network changes frequently. It is very inefficient if not impossible to discover the ad-hoc network topology using traditional practices of network discovery. We propose a mobile multi-agent system for topology discovery that will allow fault management functions in ad-hoc networks. A comparison to current mobile agent based topology discovery systems is also presented.",2005,0,
4378,Analysis of Timing Error Aperture Jitter on the Performance of Sigma Delta ADC for Software Radio Mobile Receivers,"Jitter is the limiting effect for high-speed analog-to-digital converters with high resolution and wide digitization bandwidth, which are required in receivers in order to support high data rates. The rapid development of digital wireless systems has led to a need for high-resolution and high-speed analog-to-digital converters. The proper selection of data converters, both analog-to-digital converters and digital-to-analog converters (DACs), is one of the most challenging steps in designing a software radio. The performance of a data converter is dependent upon the accuracy and stability of the clock supplied to the circuits. When data converters employ a high sampling rate, clocking issues become magnified and significant distortion can result. This paper describes the effect of aperture jitter on the performance of sigma delta ADCs and presents an analytical evaluation of the performance and of the mean error power spectrum due to aperture jitter. Applications have favored the use of oversampling delta-sigma ADCs (analog-to-digital converters) due to their better speed-accuracy tradeoff. The delta-sigma modulator is one of the key building blocks, which can be implemented using DT (discrete-time) and CT (continuous-time) techniques. Compared to their DT counterparts, CT delta-sigma modulators have recently attracted more and more attention due to their advantages in terms of high speed, low power, low noise and intrinsic anti-aliasing capability. In this paper, we concentrate on the discrete implementation. Section 2 presents the aperture jitter effect in SDM in terms of SNR. In the last few years different authors derived formulas to quantify the SNR-limiting effect of jitter in ADCs. While Walden used a worst case approach, Kobayashi presented an exact formula which allows calculating the SNR in the presence of aperture jitter.",2009,0,
4379,JPEG Error Analysis and Its Applications to Digital Image Forensics,"JPEG is one of the most extensively used image formats. Understanding the inherent characteristics of JPEG may play a useful role in digital image forensics. In this paper, we introduce JPEG error analysis to the study of image forensics. The main errors of JPEG include quantization, rounding, and truncation errors.
Through theoretically analyzing the effects of these errors on single and double JPEG compression, we have developed three novel schemes for image forensics, including identifying whether a bitmap image has previously been JPEG compressed, estimating the quantization steps of a JPEG image, and detecting the quantization table of a JPEG image. Extensive experimental results show that our new methods significantly outperform existing techniques, especially for images of small sizes. We also show that the new method can reliably detect JPEG image blocks which are as small as 8×8 pixels and compressed with quality factors as high as 98. This performance is important for analyzing and locating small tampered regions within a composite image.",2010,0,
4380,Adaptive Fuzzy Prediction of Low-Cost Inertial-Based Positioning Errors,"The Kalman filter (KF) is the most commonly used estimation technique for integrating signals from short-term high performance systems, like inertial navigation systems (INSs), with reference systems exhibiting long-term stability, like the global positioning system (GPS). However, KF only works well under appropriately predefined linear dynamic error models and input data that fit this model. The latter condition is rather difficult to fulfil with a low-cost inertial measurement unit (IMU) utilizing microelectromechanical system (MEMS) sensors, due to the significance of their long- and short-term errors that are mixed with the motion dynamics. As a result, if the reference GPS signals are absent or the Kalman filter is working for a long time in prediction mode, the corresponding state estimate will quickly drift with time, causing a dramatic degradation in the overall accuracy of the integrated system. An auxiliary fuzzy-based model for predicting the KF positioning error states during GPS signal outages is presented in this paper. The initial parameters of this model are developed through offline fuzzy orthogonal-least-squares (OLS) training, while the adaptive neuro-fuzzy inference system (ANFIS) is implemented for online adaptation of these initial parameters. Performance of the proposed model has been experimentally verified using low-cost inertial data collected in a land vehicle navigation test and by simulating a number of GPS signal outages. The test results indicate that the proposed fuzzy-based model can efficiently provide corrections to the standalone IMU predicted navigation states, particularly position.",2007,0,
4381,Topographic Correction for ALOS PALSAR Interferometry,"L-band synthetic aperture radar (SAR) interferometry is very successful for mapping ground deformation in densely vegetated regions. However, due to its larger wavelength, the capacity to detect slow deformation over a short period of time is limited. Stacking and small baseline subset (SBAS) techniques are routinely used to produce time series of deformation and average deformation rates by reducing the contribution of topographic and atmospheric noise. For large sets of images that are presently available from the C-band European Remote Sensing Satellites (ERS-1/2) and Environmental Satellite (ENVISAT), the standard stacking and SBAS algorithms are accurate. However, the same algorithms are often inaccurate when used for processing of interferograms from the L-band Advanced Land Observing Satellite Phased Array type L-band SAR (ALOS PALSAR).
This happens because only a limited number of interferograms is acquired and also because of large spatial baselines often correlated with the time of acquisition. In this paper, two techniques are suggested that can be used for removing the residual topographic component from stacking and SBAS results, thereby increasing their accuracy.",2010,0,
4382,Lightweight fault-localization using multiple coverage types,"Lightweight fault-localization techniques use program coverage to isolate the parts of the code that are most suspicious of being faulty. In this paper, we present the results of a study of three types of program coverage (statements, branches, and data dependencies) to compare their effectiveness in localizing faults. The study shows that no single coverage type performs best for all faults: different kinds of faults are best localized by different coverage types. Based on these results, we present a new coverage-based approach to fault localization that leverages the unique qualities of each coverage type by combining them. Because data dependencies are noticeably more expensive to monitor than branches, we also investigate the effects of replacing data-dependence coverage with an approximation inferred from branch coverage. Our empirical results show that (1) the cost of fault localization using combinations of coverage is less than using any individual coverage type and closer to the best case (without knowing in advance which kinds of faults are present), and (2) using inferred data-dependence coverage retains most of the benefits of combinations.",2009,0,
4383,Linear-implicit time integration schemes for error-controlled transient nonlinear magnetic field simulations,Linear-implicit time-step methods of the Rosenbrock type are proposed for the time integration of the nonlinear differential-algebraic systems of equations that arise in transient magnetic field simulations. These methods avoid the iterative solution of the nonlinear systems due to their built-in Newton procedures. Embedded lower order schemes allow an error-controlled adaptive time stepping to take into account the dynamics of the underlying process. Numerical tests show the applicability of these methods for error-controlled adaptive time integration of nonlinear magnetodynamic problems and the comparison to established singly diagonal implicit Runge-Kutta methods shows the benefits and possible problems of these specialized Runge-Kutta methods.,2003,0,
4384,Estimating the Odometry Error of a Mobile Robot by Neural Networks,"Localization is the accurate estimation of a robot's current position and is critical for map building. Odometry modeling is one of the main approaches to solving the localization problem, the other being a sensor-based correspondence solver. Currently few robot positioning systems support calibration of odometry errors in both feature-rich indoor and landmark-poor outdoor environments. To achieve good performance in various environments, the mobile robot has to be able to learn to localize in unknown environments, and reuse previously computed environment-specific localization models. This paper presents a method combining the standard Back-Propagation technique and a feed-forward neural network model for odometry calibration for both synchronous and differential drive mobile robots. This novel method is compared with a generic localization module and an optimization-based approach, and found to minimize odometry error because of its nonlinear input-output mapping ability.
Experimental results demonstrate that the neural network approach incorporating Bayesian Regularization provides improved performance and relaxes constraints in the UMBmark method.",2009,0,
4385,A human study of fault localization accuracy,"Localizing and repairing defects are critical software engineering activities. Not all programs and not all bugs are equally easy to debug, however. We present formal models, backed by a human study involving 65 participants (from both academia and industry) and 1830 total judgments, relating various software- and defect-related features to human accuracy at locating errors. Our study involves example code from Java textbooks, helping us to control for both readability and complexity. We find that certain types of defects are much harder for humans to locate accurately. For example, humans are over five times more accurate at locating extra statements than missing statements, based on experimental observation. We also find that, independent of the type of defect involved, certain code contexts are harder to debug than others. For example, humans are over three times more accurate at finding defects in code that provides an array abstraction than in code that provides a tree abstraction. We identify and analyze code features that are predictive of human fault localization accuracy. Finally, we present a formal model of debugging accuracy based on those source code features that have a statistically significant correlation with human performance.",2010,0,
4386,Assuring fault classification agreement - an empirical evaluation,"Inter-rater agreement is a well-known challenge and is a key issue when discussing fault classification. Fault classification is, by nature, a subjective task since it highly depends on the people performing the classification. Measures are required to prevent the subjective nature of fault classification from propagating through the fault classification process and onto subsequent activities using the classified faults, for example process improvement. One approach to prevent the subjective nature of fault classification is to use multiple raters and measure inter-rater agreement. We evaluate the possibility of having an independent group of people classifying faults. The objective is to evaluate whether such a group could be used in a process improvement initiative. An empirical study is conducted with eight persons classifying 30 faults independently. The study concludes that the provided material was unsatisfactory to obtain inter-rater agreement.",2004,0,
4387,Intra-distance Derived Weighted distortion for error resilience,"Intra coding is one of the most effective ways of reducing the impact of error propagation caused by predictive coding. However, intra coding requires a higher bitrate when compared to inter coding. In order to use inter coding and reduce error propagation, it is important that inter macroblocks predict from ""safe"" areas that have a decreased chance of spreading errors. To this end we propose a low-complexity method of biasing the prediction mechanism towards recently intra-updated macroblocks. We devise a method of adjusting the distortion used in rate-distortion optimization to take into account the temporal distance of the last intra macroblock.
Our simulations show that our intra-distance derived weighting (IDW) method improves video coding performance in a lossy environment by up to 1.4 dB for a modest increase in bitrate.",2009,0,
4388,Slow advances in fault-tolerant real-time distributed computing,"Is fault-tolerant (FT) real-time computing a specialized branch of FT computing? The key issue in real-time (RT) computing is to economically produce systems that yield temporal behavior which is relatively easily analyzable and acceptable in given application environments. Fault-tolerant (FT) RT computing has been treated by the predominant segment of the FT computing research community as a highly specialized branch of FT computing. This author believes that the situation should be changed. It seems safe to say that FT techniques for which useful characterizations of temporal behavior have not been or cannot be developed are at best immature, if not entirely useless. This means that FT RT computing is at the core of FT computing.",2004,0,
4389,Fuzzy Reliability Analysis of an iSCSI-Based Fault Tolerant Storage System Organization,"iSCSI is a newly emerging protocol with the goal of implementing the storage area network (SAN) technology over TCP/IP, which brings economy and convenience but also raises performance and reliability issues. This paper presents an implementation of an iSCSI-based fault-tolerant storage system and then analyzes the reliability of this system by a combination of fuzzy logic and Markov modeling. This reliability method is a technique for analyzing fault-tolerant designs under considerable uncertainty, such as is seen in compilations of component failure rates; the presented model provides an estimation of the lower and upper reliability boundaries of the iSCSI-based fault-tolerant storage system with a single run of the model.",2007,0,
4390,Fault Detection Using the k-Nearest Neighbor Rule for Semiconductor Manufacturing Processes,"It has been recognized that effective fault detection techniques can help semiconductor manufacturers reduce scrap, increase equipment uptime, and reduce the usage of test wafers. Traditional univariate statistical process control charts have long been used for fault detection. Recently, multivariate statistical fault detection methods such as principal component analysis (PCA)-based methods have drawn increasing interest in the semiconductor manufacturing industry. However, the unique characteristics of the semiconductor processes, such as nonlinearity in most batch processes, multimodal batch trajectories due to product mix, and process steps with variable durations, have posed some difficulties to the PCA-based methods. To explicitly account for these unique characteristics, a fault detection method using the k-nearest neighbor rule (FD-kNN) is developed in this paper. Because in fault detection faults are usually not identified and characterized beforehand, in this paper the traditional kNN algorithm is adapted such that only normal operation data is needed. Because the developed method makes use of the kNN rule, which is a nonlinear classifier, it naturally handles possible nonlinearity in the data. Also, because the FD-kNN method makes decisions based on small local neighborhoods of similar batches, it is well suited for multimodal cases. Another feature of the proposed FD-kNN method, which is essential for online fault detection, is that the data preprocessing is performed automatically without human intervention.
These capabilities of the developed FD-kNN method are demonstrated by simulated illustrative examples as well as an industrial example.",2007,0,
4391,"Simulated faults on directional, ground, overcurrent relays with emphasis on the operational impact on mutually coupled, intact lines","It is common practice to build two unrelated circuits on the same physical pole structures. When a ground fault occurs in one line of the mutually coupled pair, it can induce sufficient current in the intact line to cause a false trip. This is an effect of the zero sequence current. From an active transmission system, three pairs of 120 kV transmission lines were selected for investigation. Both double and single line-to-ground faults were simulated with and without mutual coupling. The resulting ground currents reported at the directional, ground, overcurrent relays that protect these lines were compared and the time-over-current settings of these relays were examined. The maximum current on the intact line does not occur at the common bus. It occurs at a point where the mutually coupled lines separate. This phenomenon must be considered in the design of the protection scheme.",2000,0,
4392,Application of FFT on a vehicle Transient Fault Recorder,"The implementation of the Fast Fourier Transform (FFT) in the floating-point DSP TMS320VC33 is described. The results indicate that assembly language is more suitable than C language for implementing complicated algorithms and validate the correctness of the FFT algorithms; they also show that implementing the FFT with the DSP's own antitone indirect addressing is very convenient, and the real-time performance is very good. It satisfies the requirements of spectrum analysis in a vehicle Transient Fault Recorder in the fields of harmonics analysis and vibration analysis.",2008,0,
4393,Straight-Edge Extraction in Distorted Images Using Gradient Correction,"Many camera lenses, particularly low-cost or wide-angle lenses, can cause significant image distortion. This means that features extracted naively from such images will be incorrect. A traditional approach to dealing with this problem is to digitally rectify the image to correct the distortion, and then to apply computer vision processing to the corrected image. However, this is relatively expensive computationally, and can introduce additional interpolation errors. We propose instead to apply processing directly to the distorted image from the camera, modifying whatever algorithm is used to correct for the distortion during processing, without a separate rectification pass. In this paper we demonstrate the effectiveness of this approach using the particular classic problem of gradient-based extraction of straight edges. We propose a modification of the Burns line extractor that works on a distorted image by correcting the gradients on the fly using the chain rule, and correcting the pixel positions during the line-fitting stage. Experimental results on both real and synthetic images under varying distortion and noise show that our gradient-correction technique can obtain approximately a 50% reduction in computation time for straight-edge extraction, with a modest improvement in accuracy under most conditions.",2009,0,
4394,Cleanroom: Edit-Time Error Detection with the Uniqueness Heuristic,"Many dynamic programming language features, such as implicit declaration, reflection, and code generation, make it difficult to verify the existence of identifiers through standard program analysis.
We present an alternative verification, which, rather than analyzing the semantics of code, highlights any name or pair of names that appear only once across a program's source files. This uniqueness heuristic is implemented for HTML, CSS, and JavaScript, in an interactive editor called Cleanroom, which highlights lone identifiers after each keystroke. Through an online experiment, we show that Cleanroom detects real errors, that it helps developers find these errors more quickly than developers can find them on their own, and that this helps developers avoid costly debugging effort by reducing how many times a program is executed with potential errors. The simplicity and power of Cleanroom's heuristic may generalize well to other dynamic languages with little support for edit-time name verification.",2010,0,
4395,Data Fusion based on RBF Neural Network for Error Compensation in Resistance Strain Gauge Force Transducers,"Many factors, such as environmental temperature and material elasticity, can affect the output of resistance strain gauge force transducers used in vehicle traction force measurements. A data fusion method based on a radial basis function (RBF) neural network is proposed to reduce the negative effects and compensate the measurement error. A multiquadrics kernel is utilized as the kernel function for the RBF neural networks. It fuses the environmental temperature into the force measurement while realizing an accurate compensation of errors. Tests have been carried out with temperature ranging from -10 °C to 60 °C, and the results show that the maximum error with a load of 80000 N is below 0.5% after compensation, while it is greater than 6% before compensation.",2007,0,
4396,Error Analysis of Geometric Ellipse Detection Methods Due to Quantization,"Many geometric methods have been used extensively for detection of ellipses in images. Though the geometric methods have a rigorous mathematical framework, the effect of quantization appears in various forms and introduces errors in the implementation of such models. This unexplored aspect of geometric methods is studied in this paper. We identify the various sources that can affect the accuracy of the geometric methods. Our results show that the choice of points used in geometric methods is a very crucial factor in the accuracy. If the curvature covered by the chosen points is low, then the error may be significantly high. We also show that if numerically computed tangents are used in the geometric methods, the accuracy of the methods is sensitive to the error in the computation of the tangents. Our analysis is used to propose a probability density function for the relative error of the geometric methods. Such a distribution can be an important tool for determining practical parameters like the size of bins or clusters in the Hough transform. It can also be used to compare various methods and choose a more suitable method.",2010,0,
4397,Wavelet analysis based protection for high impedance ground fault in supply systems,"Many high impedance ground faults (HIGF) that happen in a low voltage (LV) system often cause loss of customer supply, fire and human safety hazards. Traditional ground fault protection is provided by the residual current circuit breaker (RCCB). The RCCB often causes nuisance tripping, and it is difficult to detect HIGF. A wavelet analysis based HIGF protection is developed in this paper. The wavelet transform is applied to filter out some frequency bands of harmonics from the residual current and the line current.
The root mean square (RMS) values of the harmonics are calculated using their wavelet coefficients directly. An HIGF is distinguished from disturbances by the RMS difference between the residual current and the line current. The digital protection scheme is designed. EMTP simulation results show that the new protection is able to detect HIGF and prevent electric shock with high sensitivity and robustness.",2002,0,
4398,Improving bug tracking systems,"It is important that information provided in bug reports is relevant and complete in order to help resolve bugs quickly. However, often such information trickles to developers after several iterations of communication between developers and reporters. Poorly designed bug tracking systems are partly to blame for this exchange of information being stretched over time. Our paper addresses the concerns of bug tracking systems by proposing four broad directions for enhancements. As a proof-of-concept, we also demonstrate a prototype interactive bug tracking system that gathers relevant information from the user and identifies files that need to be fixed to resolve the bug.",2009,0,
4399,Finding All Small Error-Prone Substructures in LDPC Codes,"It is proven in this work that it is NP-complete to exhaustively enumerate small error-prone substructures in arbitrary, finite-length low-density parity-check (LDPC) codes. Two error-prone patterns of interest include stopping sets for binary erasure channels (BECs) and trapping sets for general memoryless symmetric channels. Despite the provable hardness of the problem, this work provides an exhaustive enumeration algorithm that is computationally affordable when applied to codes of practical short lengths n ≈ 500. By exploiting the sparse connectivity of LDPC codes, the stopping sets of size ≤ 13 and the trapping sets of size ≤ 11 can be exhaustively enumerated. The central theorem behind the proposed algorithm is a new provably tight upper bound on the error rates of iterative decoding over BECs. Based on a tree-pruning technique, this upper bound can be iteratively sharpened until its asymptotic order equals that of the error floor. This feature distinguishes the proposed algorithm from existing non-exhaustive ones that correspond to finding lower bounds of the error floor. The upper bound also provides a worst case performance guarantee that is crucial to optimizing LDPC codes when the target error rate is beyond the reach of Monte Carlo simulation. Numerical experiments on both randomly and algebraically constructed LDPC codes demonstrate the efficiency of the search algorithm and its significant value for finite-length code optimization.",2009,0,
4400,The effect of error in position co-ordinates of the receiving antenna on the single-satellite-mode GPS timing,"It is well established that the position co-ordinates of the receiving antenna should be determined precisely in advance for obtaining the time of the GPS receiver through a single-GPS-satellite technique. So it is desirable to know the extent of accuracy of the position co-ordinates required for a particular timing accuracy. In this paper, an analytical expression relating the position error and the corresponding error in the time of the GPS receiver has been derived. The time error of the GPS receiver caused by the error in position co-ordinates largely depends on the position of the satellite, indicated by the respective elevation and azimuth of the satellite.
To validate the derived formulation, it is important to configure the experimental plan very judiciously. A special experiment has accordingly been conducted at the National Physical Laboratory, New Delhi, India (NPLI). The observed data have been found to tally well with the derived relation.",2009,0,
4401,Outlier correction in image sequences for the affine camera,"It is widely known that, for the affine camera model, both shape and motion can be factorized directly from the so-called image measurement matrix constructed from image point coordinates. The ability to extract both shape and motion from this matrix by a single SVD operation makes this shape-from-motion approach attractive; however, it cannot deal with missing feature points and, in the presence of outliers, a direct SVD of the matrix would yield highly unreliable shape and motion components. Here, we present an outlier correction scheme that iteratively updates the elements of the image measurement matrix. The magnitude and sign of the update to each element is dependent upon the residual robustly estimated in each iteration. The result is that outliers are corrected and retained, giving improved reconstruction and smaller reprojection errors. Our iterative outlier correction scheme has been applied to both synthesized and real video sequences. The results obtained are remarkably good.",2003,0,
4402,Metrics selection for fault-proneness prediction of software modules,"It would be valuable to use metrics to identify the fault-proneness of software modules. It is important to select the most appropriate metric subset for fault-proneness prediction. We propose a metrics selection approach, which first utilizes correlation analysis to eliminate the highly correlated metrics and then ranks the remaining metrics based on gray relational analysis. Three classifiers, namely a logistic regression model, NaiveBayes, and J48, were utilized to empirically investigate the usefulness of the selected metrics. Our results, based on a public domain NASA data set, indicate that 1) the proposed method for metrics selection is effective, and 2) using 3-4 metrics achieves balanced performance for fault-proneness prediction of software modules.",2010,0,
4403,Post-Silicon Bug Localization in Processors Using Instruction Footprint Recording and Analysis (IFRA),"Instruction Footprint Recording and Analysis (IFRA) overcomes challenges associated with an expensive step in post-silicon validation of processors: pinpointing the bug location and the instruction sequence that exposes the bug from a system failure. On-chip recorders collect instruction footprints (information about flows of instructions and what the instructions did as they passed through various design blocks) during the normal operation of the processor in a post-silicon system validation setup. Upon system failure, the recorded information is scanned out and analyzed offline for bug localization. Special self-consistency-based program analysis techniques, together with the test program binary of the application executed during post-silicon validation, are used for this purpose. Major benefits of using IFRA over traditional techniques for post-silicon bug localization are as follows: 1) it does not require full system-level reproduction of bugs, and 2) it does not require full system-level simulation.
Simulation results on a complex superscalar processor demonstrate that IFRA is effective in accurately localizing electrical bugs with very little impact on overall chip area.",2009,0,
4404,Injecting Inconsistent Values Caused by Interaction Faults for Experimental Dependability Evaluation,"Interaction faults caused by a flawed external system designed by a third party are a major issue faced by interconnected systems. Fault injection is a valuable tool for evaluating the dependability of such scenarios. Several types of errors caused by interaction faults may be injected by existing approaches, even though previous work focused on other types of faults, such as hardware and software faults. This is not the case for inconsistent values: data that is correctly received and syntactically correct, but inconsistent with what it should represent. In this paper, we propose a novel methodology to inject inconsistent values caused by interaction faults, including hand-defined, random and semantically significant values. We also describe a simulation tool which implements the proposed mechanism to aid dependability evaluation in a system that uses the Universal Plug and Play standard to communicate.",2008,0,
4405,Fault injection testing for distributed object systems,Interface-based fault injection testing (IFIT) is proposed as a technique to assess the fault tolerance of distributed object systems. IFIT uses the description of an object's interface to generate application-dependent faults. A set of application-independent faults is also proposed. IFIT reveals inadequacies of the fault recovery mechanisms present in the application. The application of IFIT to different distributed object systems is described.,2001,0,
4406,Modeling the Propagation of Intermittent Hardware Faults in Programs,"Intermittent hardware faults are bursts of errors that last from a few CPU cycles to a few seconds. Recent studies have shown that intermittent fault rates are increasing due to technology scaling and are likely to be a significant concern in future systems. We study the impact of intermittent hardware faults in programs. A simulation-based fault-injection campaign shows that the majority of the intermittent faults lead to program crashes. We build a crash model and a program model that represents the data dependencies in a fault-free execution of the program. We then use this model to glean information about when the program crashes and the extent of fault propagation. Empirical validation of our model using fault-injection experiments shows that it predicts almost all actual crash-causing intermittent faults, and in 93% of the considered faults the prediction is accurate within 100 instructions. Further, the model is found to be more than two orders of magnitude faster than equivalent fault-injection experiments performed with a microprocessor simulator.",2010,0,
4407,A State Machine for Detecting C/C++ Memory Faults,"Memory faults are major forms of software bugs that severely threaten system availability and security in C/C++ programs. Many tools and techniques are available to check memory faults, but few provide systematic full-scale research and quantitative analysis. Furthermore, most of them produce a high noise ratio of warning messages that require many human hours to review and to eliminate false-positive alarms. Thus, they cannot locate the root causes of memory faults precisely. This paper provides an innovative state machine to check memory faults, which has three main contributions.
Firstly, five concise formulas describing memory faults are given to make the mechanism of the state machine simple and flexible. Secondly, the state machine has the ability to locate the root causes of the memory faults. Finally, a case study applied to embedded software written in 50 thousand lines of C code shows that it can provide useful data to evaluate the reliability and quality of software.",2005,0,
4408,Correction for head movements in positron emission tomography using an optical motion-tracking system,"Methods capable of correcting for head motion in all six degrees of freedom have been proposed for positron emission tomography (PET) brain imaging but not yet demonstrated in human studies. These methods rely on the accurate measurement of head motion in relation to the reconstruction coordinate frame. We present methodology for the direct calibration of an optical motion-tracking system to the reconstruction coordinate frame using paired coordinate measurements obtained simultaneously from a PET scanner and tracking system. We also describe the implementation of motion correction, based on the multiple acquisition frame method originally described by Picard and Thompson (1997), using data provided by the motion tracking system. Effective compensation for multiple six-degree-of-freedom movements is demonstrated in dynamic PET scans of the Hoffman brain phantom and a normal volunteer. We conclude that reduced distortion and improved quantitative accuracy can be achieved with this method in PET brain studies degraded by head movements.",2002,0,6182
4409,Ortho-rectification and terrain correction of polarimetric SAR data applied in the ALOS/Palsar context,Methods for terrain correction of polarimetric SAR data were studied and developed. Ortho-rectification resampling and amplitude correction utilized Stokes matrix data. The Stokes matrix of thermal noise was subtracted before amplitude normalization. Application of an azimuth-slope correction algorithm resulted in a slightly narrower distribution of orientation angles compared to the input data.,2007,0,
4410,Toward fault-tolerant and reconfigurable digital microfluidic biochips,"Microfluidics-based biochips are revolutionizing high-throughput sequencing, parallel immunoassays, blood chemistry for clinical diagnostics, and drug discovery. These devices enable the precise control of nanoliter volumes of biochemical samples and reagents. They combine electronics with biology, and they integrate various bioassay operations, such as sample preparation, analysis, separation, and detection. This survey paper provides an overview of droplet-based digital microfluidic biochips. It describes emerging techniques for designing fault-tolerant and reconfigurable digital microfluidic biochips. Recent advances in fault modeling, testing, diagnosis and reconfiguration techniques are presented. These quality-driven techniques ensure that biochips can be used reliably during liquid-based biochemical assays.",2010,0,
4411,Flexible fault tolerance in configurable middleware for embedded systems,"MicroQoSCORBA (MQC) is a middleware platform that focuses on embedded applications by providing a very fine level of configurability of its internal orthogonal components. Using this configurability, a developer can generate a customized middleware instantiation that is tailored to both the requirements and constraints of a specific embedded application and the embedded hardware.
One of the key components provided by MQC is a set of fault-tolerant mechanisms, which allow it to support applications that require a higher level of reliability. This document provides a detailed description of the algorithms and protocols selected for these mechanisms, along with a discussion of their implementation and incorporation into the MQC platform.",2003,0, 4412,Correction of motion artifact in cardiac optical mapping using image registration technique,"Myocardial contraction causes motion artifact in cardiac optical recording. Mechanical and chemical methods have been used, both with significant limitations, to reduce motion artifact in optical mapping. We propose an image registration approach using mutual information between image frames to solve this problem. The algorithm was tested with optical mapping data from isolated, perfused rabbit hearts. Both affine and nonrigid registration methods reduced motion artifact as measured by a reduction in excessive positive and negative deflection in the optical potential traces after the registration process. Such an approach could be further developed for real-time, in vivo electrophysiological measurement.",2002,0, 4413,On Space Exploration And Human Error - A Paper on Reliability and Safety,"NASA space exploration should largely address a problem class in reliability and risk management stemming primarily from human error, system risk and multi-objective trade-off analysis, by conducting research into system complexity, risk characterization and modeling, and system reasoning. In general, in every mission we can distinguish risk in three possible ways: a) known-known, b) known-unknown, and c) unknown-unknown. It is probable, if not almost certain, that space exploration will partially experience known or unknown risks similar to those embedded in the Apollo missions, Shuttle or Station, unless something alters how NASA will perceive and manage safety and reliability.",2005,0, 4414,Optimal Design of Nearfield Wideband Beamformers Robust Against Errors in Microphone Array Characteristics,"Nearfield wideband beamformers for microphone arrays have wide applications, such as hands-free telephony, hearing aids, and speech input devices to computers. The existing design approaches for nearfield wideband beamformers are highly sensitive to errors in microphone array characteristics, i.e., microphone gain, phase, and position errors, as well as sound speed errors. In this paper, a robust design approach for nearfield wideband beamformers for microphone arrays is proposed. The robust nearfield wideband beamformers are designed based on the minimax criterion with worst-case performance optimization. The design problems can be formulated as second-order cone programming and be solved efficiently using the well-established polynomial-time interior-point methods. Several interesting properties of the robust nearfield wideband beamformers are derived. Numerical examples are given to demonstrate the efficacy of the proposed beamformers in the presence of errors in microphone array characteristics.",2007,0, 4415,On error analysis and distributed phase steering for wireless network coding over fading channels,"Network coding notions promise significant gains in wireless networks' throughput and quality of service. Future systems employing such paradigms are known to be also highly scalable and resilient to node failure and churn rates.
We propose a simple framework where a single relay listens to two nodes transmitting simultaneously over the same band in the presence of Nakagami-m fading. For this multiple-access channel (MAC), we derive in closed form the exact bit error rate of antipodal signaling with maximum-likelihood detection. As the MAC is the error bottleneck of the overall system, this provides a good performance measure of the aggregate architecture. Using the new error expressions derived, we then propose a simple closed-loop cooperation strategy where, via ternary feedback from the relay node, significant gains in signal-to-noise ratio at the relay can be achieved. Our novel error analysis method is applicable to a number of other systems such as the vertical Bell Labs space-time (V-BLAST) scheme and synchronous multi-user systems.",2009,0, 4416,Fault-tolerance abilities implementation with spare cells in bio-inspired hardware systems,"Network communication algorithms development is presented in the paper, with the purpose of implementing bio-inspired hardware systems which exhibit the abilities of living organisms, such as evolution capabilities, self-healing and fault tolerance. In the first steps of these research efforts an embryonic system with a bi-dimensional FPGA-based artificial cell network is designed and tested through careful computer-aided simulations. Two specially developed algorithms were implemented in the network communication strategy, in order to avoid physical faults and errors in the laboratory-tested VLSI hardware architecture. The basic challenge of all these experiments is to develop embryonic systems with fault-tolerant and self-healing properties, as main hardware structures in large-scale, high-security process control and industrial applications.",2009,0, 4417,A practical implementation of the fault-tolerant Daisy-chain clock synchronization algorithm on CAN,"Networked processing units are becoming widely used in the automotive embedded system domain, aiming not only to reduce vehicle weight and cost but also to assist the driver in coping with critical situations. Because these embedded networked systems are directly involved with human safety, there is a high demand on dependability requirements, which can only be guaranteed if active redundancy is employed. Considering that the processing units are usually connected by a shared serial medium, the underlying communication platform is the most important building block. It must provide low-level support for deterministic data transmission as well as a global time base to coordinate the actions of replicated units. Within this context, this paper presents the development of the fault-tolerant Daisy-chain clock synchronization algorithm over the CAN protocol, resulting in a highly optimized communication architecture for safety-critical applications. Implementation issues and some obtained practical results are also discussed in the paper",2006,0, 4418,Analog circuit fault diagnosis using bagging ensemble method with cross-validation,"Neural Network (NN) ensemble approaches have been an appealing topic in the field of analog circuit fault diagnosis lately. In this paper, a new method for fault diagnosis of analog circuits with tolerance, based on the NN ensemble method with cross-validation, is proposed. Firstly, bias-variance decomposition provides theoretical guidance on how to choose the component networks when composing the ensemble.
Secondly, the output voltage signal of the Circuit Under Test (CUT) is obtained after the stimulus is imposed on the CUT. After getting the corresponding fault feature sets, the Bagging algorithm is employed to produce the different training sets in order to train the different component networks, and the cross-validation technique is employed to further improve fault diagnosis accuracy. Finally, the outputs of the component ensemble members are combined to isolate the CUT faults. Simulation results show the superior performance of the proposed approach. This system is able to effectively improve the generalization ability of the analog circuit fault classifier and increase the fault diagnosis accuracy.",2009,0, 4419,Supervised Neural Network Modeling: An Empirical Investigation Into Learning From Imbalanced Data With Labeling Errors,"Neural network algorithms such as multilayer perceptrons (MLPs) and radial basis function networks (RBFNets) have been used to construct learners which exhibit strong predictive performance. Two data-related issues that can have a detrimental impact on supervised learning initiatives are class imbalance and labeling errors (or class noise). Imbalanced data can make it more difficult for the neural network learning algorithms to distinguish between examples of the various classes, and class noise can lead to the formulation of incorrect hypotheses. Both class imbalance and labeling errors are pervasive problems encountered in a wide variety of application domains. Many studies have been performed to investigate these problems in isolation, but few have focused on their combined effects. This study presents a comprehensive empirical investigation using neural network algorithms to learn from imbalanced data with labeling errors. In particular, the first component of our study investigates the impact of class noise and class imbalance on two common neural network learning algorithms, while the second component considers the ability of data sampling (which is commonly used to address the issue of class imbalance) to improve their performances. Our results, for which over two million models were trained and evaluated, show that conclusions drawn using the more commonly studied C4.5 classifier may not apply when using neural networks.",2010,0, 4420,Error Correction Capability in Chaotic Neural Networks,"Neural networks are able to learn more patterns with incremental learning than with correlative learning. Incremental learning is a method to compose an associative memory using a chaotic neural network. In former work, it was found that the capacity of the network increases along with its size up to some threshold value, and that it decreases beyond that size. The threshold value and the capacity varied with the learning parameter. In this paper, the capacity of the networks was investigated by changing the learning parameter. Through computer simulations, it turned out that the capacity increases in proportion to the network size. Then, the error correction capability is estimated with the number of learned patterns varied up to the maximum capacity.",2009,0, 4421,Motorbike engine faults diagnosing system using neural network,"Monitoring systems for the motorbike industry require a high degree of performance and efficiency. In recent years, automatic identification and diagnosis of motorbike engine faults has become a very complex and critical task. The noise produced by a motorbike engine is an important information source for fault diagnosis.
Artificial Neural Networks find applications in many industries, including condition monitoring and fault diagnosis. In this paper a simple feature extraction algorithm that extracts the features from the engine noise signal using the discrete wavelet transform is presented. The engine noise signals are decomposed into 8 levels using the Daubechies "db4" wavelet family. The coefficient energies of the approximation and detail versions at the eight levels are computed and used as features. Three simple neural network models are developed and trained by the conventional backpropagation algorithm for identifying the motorbike engine faults, and the average classification rates are around 85%.",2008,0, 4422,AppWatch: detecting kernel bug for protecting consumer electronics applications,"Most consumer electronics products are equipped with diverse devices since they try to provide more services following the convergence trends. Device drivers for those devices are known to cause system failures. Most previous approaches to enhance reliability have been concerned with the kernel, not with applications. In consumer electronics, however, a main application plays the core role of the product. This paper proposes a new mechanism called AppWatch to keep a consumer electronics application reliable against misbehavior of device drivers. AppWatch exploits the page management mechanism of the operating system to protect the address space of the application. Since AppWatch can be implemented at a low engineering cost, it is applicable to most systems only if they have a virtual memory system. AppWatch also provides selective protection of applications so that other unprotected applications are isolated from performance loss, if any. We have tested AppWatch in a consumer electronics environment. The result shows that AppWatch effectively protects application programs at a reasonable performance overhead in most workloads, whereas data-intensive workloads show high overhead. AppWatch also protects applications with little performance interference to other unprotected applications.",2010,0, 4423,H∞ Dynamic observer design with application in fault diagnosis,"Most observer-based methods applied in fault detection and diagnosis (FDD) schemes use the classical two-degrees-of-freedom observer structure, in which a constant matrix is used to stabilize the error dynamics while a post filter helps to achieve some desired properties for the residual signal. In this paper, we consider the use of a more general framework, the dynamic observer structure, in which an observer gain is seen as a filter designed so that the error dynamics has some desirable frequency domain characteristics. This structure offers extra degrees of freedom, and we show how it can be used for the sensor fault diagnosis problem, achieving detection and estimation at the same time. The use of weightings to transform this problem into a standard H∞ problem is also demonstrated.",2005,0, 4424,Similarity-Guided Streamline Placement with Error Evaluation,"Most streamline generation algorithms either provide a particular density of streamlines across the domain or explicitly detect features, such as critical points, and follow customized rules to emphasize those features. However, the former generally includes many redundant streamlines, and the latter requires Boolean decisions on which points are features (and may thus suffer from robustness problems for real-world data).
We take a new approach to adaptive streamline placement for steady vector fields in 2D and 3D. We define a metric for local similarity among streamlines and use this metric to grow streamlines from a dense set of candidate seed points. The metric considers not only Euclidean distance, but also a simple statistical measure of shape and directional similarity. Without explicit feature detection, our method produces streamlines that naturally accentuate regions of geometric interest. In conjunction with this method, we also propose a quantitative error metric for evaluating a streamline representation based on how well it preserves the information from the original vector field. This error metric reconstructs a vector field from points on the streamline representation and computes a difference of the reconstruction from the original vector field.",2007,0, 4425,"k-Space and Image-Space Combination for Motion-Induced Phase-Error Correction in Self-Navigated Multicoil Multishot DWI","Motion during diffusion encodings leads to different phase errors in different shots of multishot diffusion-weighted acquisitions. Phase error incoherence among shots results in undesired signal cancellation when data from all shots are combined. Motion-induced phase error correction for multishot diffusion-weighted imaging (DWI) has been studied extensively and there exist multiple phase error correction algorithms. A commonly used correction method is direct phase subtraction (DPS). DPS, however, can suffer from incomplete phase error correction due to the aliasing of the phase errors in the high spatial resolution phases. Furthermore, improper sampling density compensation is also a possible issue of DPS. Recently, motion-induced phase error correction was incorporated in the conjugate gradient (CG) image reconstruction procedure to get a nonlinear phase correction method that is also applicable to parallel DWI. Although the CG method overcomes the issues of DPS, its computational requirement is high. Further, CG is restricted to sensitivity encoding (SENSE) for parallel reconstruction. In this paper, a new time-efficient and flexible k-space and image-space combination (KICT) algorithm for rigid body motion-induced phase error correction is introduced. KICT estimates the motion-induced phase errors in image space using the self-navigated capability of the variable density spiral trajectory. The correction is then performed in k-space. The algorithm is shown to overcome the problem of aliased phase errors. Further, the algorithm preserves the phase of the imaging object and receiver coils in the corrected k-space data, which is important for parallel imaging applications. After phase error correction, any parallel reconstruction method can be used. The KICT algorithm is tested with both simulated and in vivo data with both multishot single-coil and multishot multicoil acquisitions. We show that KICT correction results in diffusion-weighted images with higher signal-to-noise ratio (SNR) and fractional anisotropy (FA) maps with better resolved fiber tracts as compared to DPS. In peripheral-gated acquisitions, KICT is comparable to the CG method.",2009,0, 4426,Sinogram-based motion correction of PET images using optical motion tracking system and list-mode data acquisition,"Motion of the head during brain positron emission tomography (PET) acquisitions has been identified as a source of artifact in the reconstructed image.
A number of techniques have been proposed to correct for this motion artifact, but they are unable to correct for motion during an acquisition. The aim of this study was to develop a sinogram-based motion correction (SBMC) technique to directly correct the head motion during a PET scan using a motion tracking system and list-mode data acquisition. This method uses a rebinning procedure whereby the lines of response (LOR) are geometrically transformed according to the current values of six-dimensional motion data. A Michelogram was recomposed using the rebinned LOR, and the motion-corrected sinogram was generated. In the motion-corrected image, the blurring artifact due to the motion was reduced by the SBMC technique. This technique was applied to actual PET data acquired in list mode, and demonstrated the potential for real-time motion correction of head movements during a PET acquisition.",2004,0,1 4427,Error concealment for motion-compensated interpolation,"Motion-compensated interpolation is usually employed at the receiver end in order to improve the quality of the video when a low-bit-rate video is encoded in conjunction with frame dropping. The authors propose a scheme that can exploit the block-based motion vector field available at the decoder to avoid complex motion estimation. The scheme is based on an iterative refinement technique that employs the finite-element method to efficiently conceal the interpolation errors caused by unfilled holes or overlapped pixels in the predicted frames. As a consequence, no pixel classification is needed in the proposed scheme, thus substantially reducing the computational complexity. The scheme is capable of concealing the errors in the homogeneous regions as well as in regions containing sharp edges. The proposed scheme is simulated with the original frames of a number of test sequences, as well as implemented with the H.264/AVC decoded frames. The results from these extensive simulations show that the proposed scheme results in reconstructed frames having a better visual quality and a lower computational complexity than the existing schemes.",2010,0, 4428,"Intelligent based modelling, control and fault detection of chemical process","Neutralizing the pH value of sugar cane juice is an important step in the clarifying process of sugar cane juice, and an important factor influencing the output and quality of white sugar. On the one hand, controlling the neutralized pH value within the required range is of vital significance for acquiring high-quality purified juice, reducing energy consumption and raising sucrose recovery. On the other hand, neutralization is a complicated physical-chemical process with the characteristics of strong nonlinearity, time-varying behavior, large time delay, and multiple inputs. Therefore, there has not yet been a very good solution for controlling the neutralized pH value. In this paper, a neural network model for the clarifying process of sugar juice is established based on 1200 groups of real-time sample data gathered in a sugar factory.",2010,0, 4429,New error probability expressions for optimum combining with MPSK modulation,New expressions are derived for the exact symbol error probability and bit error probability for optimum combining with M-ary phase shift keying. The expressions are for any number of equal-power co-channel interferers and reception branches.
It is assumed that the aggregate interference and noise is Gaussian and that both the desired signal and interference are subject to flat Rayleigh fading. The new expressions have low computational complexity as they contain only a single integral form with finite limits and a finite integrand.,2002,0, 4430,A class of m-ary asymmetric symbol error correcting codes constructed by graph coloring,"Non-binary m-ary symbols such as alphabetic and numeric characters are used in man-machine interfaces, e.g., keyboards and optical character readers (OCR). Since errors in m-ary data are generally asymmetric, m-ary asymmetric symbol error control codes are applicable for man-machine interfaces. Asymmetric symbol error locating codes for character recognition have been proposed by Saowapa, Kaneko and Fujiwara (see Trans. of IEICE A, vol.J84-A, no.1, p.73-83, 2001). This paper proposes a new type of m-ary asymmetric symbol error correcting codes constructed by colorings of an error directionality graph which expresses a set of asymmetric symbol errors. The proposed codes can be applied to m-ary data with fixed character strings, such as postal codes, bank account numbers, and article numbers",2001,0, 4431,An Interval Intelligent-based Approach for Fault Detection and Modelling,"Not considered in the analytical model of the plant, uncertainties always dramatically decrease the performance of the fault detection task in practice. To cope better with this prevalent problem, in this paper we develop a methodology using Modal Interval Analysis which takes those uncertainties in the plant model into account. A fault detection method is developed based on this model which is quite robust to uncertainty and results in no false alarms. As soon as a fault is detected, an ANFIS model is trained online to capture the major behavior of the occurring fault, which can be used for fault accommodation. The simulation results clearly demonstrate the capability of the proposed method for accomplishing both tasks appropriately.",2007,0, 4432,Notice of Retraction
Research on measuring point configuration based on particle swarm optimization technology in fault diagnosis,"Notice of Retraction

After careful and considered review of the content of this paper by a duly constituted expert committee, this paper has been found to be in violation of IEEE's Publication Principles.

We hereby retract the content of this paper. Reasonable effort should be made to remove all past references to this paper.

The presenting author of this paper has the option to appeal this decision by contacting TPII@ieee.org.

For a complex dynamic system such as a vehicle gearbox, a transmission characteristic model with multiple driving sources and multiple responses is established. Considering the actual situation that the internal vibration signals of the various parts reach the box surface through different transmission paths and are superimposed on one another, an essential method is proposed that solves the problem of optimal sensor and measuring point configuration for failure detection using the particle swarm optimization algorithm. By analyzing and optimizing the transmission characteristic parameters of the measuring points through tests at the various measuring points, the basic principles for optimal measuring point arrangement are given, and it is shown that sensor and measuring point configuration based on particle swarm optimization techniques is feasible.",2010,0, 4433,Notice of Retraction
Design of a continuous error correction pipeline,"Notice of Retraction

After careful and considered review of the content of this paper by a duly constituted expert committee, this paper has been found to be in violation of IEEE's Publication Principles.

We hereby retract the content of this paper. Reasonable effort should be made to remove all past references to this paper.

The presenting author of this paper has the option to appeal this decision by contacting TPII@ieee.org.

In order to improve the performance of the electronic system, pipelines are usually used in satellite microprocessors. Single Event Upset (SEU) is a main cause of abnormal operation of computer systems in spacecraft. An SEU is prone to induce errors in the pipeline and cause failure of the whole system. A continuous error correction pipeline scheme is presented in this paper. It can continuously perform fault detection and automatic correction of data errors in the register file in real time.",2010,0, 4434,Notice of Retraction
Single Terminal Fault Location Based on Improved Fault Recorder,"Notice of Retraction

After careful and considered review of the content of this paper by a duly constituted expert committee, this paper has been found to be in violation of IEEE's Publication Principles.

We hereby retract the content of this paper. Reasonable effort should be made to remove all past references to this paper.

The presenting author of this paper has the option to appeal this decision by contacting TPII@ieee.org.

In order to achieve single-terminal fault location with a fault recorder, and considering the current status of distributed fault recorder systems as well as the structure and performance of the device, we improved the fault recorder hardware architecture to consist of four data acquisition modules controlled by a CPLD and a data processing and analysis module controlled by a master DSP, enhancing the system's sampling and processing capability by relying on the powerful data analysis and processing capabilities of a high-performance DSP. We then put forward a fault location algorithm based on differential equations that is executed by the DSP itself. A large number of simulation results show that the algorithm can satisfy practical engineering requirements; this research therefore demonstrates that using a power system fault recorder for single-terminal fault location is feasible from both the hardware and software aspects.",2010,0, 4435,Notice of Retraction
Multi-source deviation analysis and rapid correction of locating points,"Notice of Retraction

After careful and considered review of the content of this paper by a duly constituted expert committee, this paper has been found to be in violation of IEEE's Publication Principles.

We hereby retract the content of this paper. Reasonable effort should be made to remove all past references to this paper.

The presenting author of this paper has the option to appeal this decision by contacting TPII@ieee.org.

The application of the Automatic Drilling/Riveting (ADR) System realizes automation of the wing panel riveting assembly. However, a major obstacle to the ADR of wing assembly in the past has been an inability to rapidly reduce the locating deviation that usually exists between the upper riveting head and the actual locating points, which largely influences actual production. To increase locating accuracy and processing efficiency, a new method for error analysis and rapid correction is proposed. Firstly, by analyzing the process flow of ADR, the sources and causes of errors were obtained. Secondly, locating points were defined in the 3D model and then offset to compensate for the deviation between the locating points in the 3D numerical model and the actual locating points on the located and clamped wing panels; the correction data come from on-site measurement. Thirdly, the CATIA secondary-development toolkit CAA (Component Application Architecture) was applied to develop a rapid error correction module for off-line programming. The locating points were obtained using Agent technology, and batch definition was implemented on the visual interactive definition platform. The locating points were corrected through the AUTOMATION interface, the offset based on the template document was completed, and the final locating points were determined by linear programming. Finally, implementation on wing panel riveting assembly was introduced, and the results demonstrated the functionality of the method.",2010,0, 4436,Obtaining reference ICC profile by reversed color matching method and tracing correction,"Obtaining the proofer's ICC profile and the reference ICC profile (or press ICC profile) is vital in establishing a digital proofing environment. Generally, the press ICC profile is obtained by measuring a press sample including standard color patches. In this paper, a reversed method is put forward: instead of printing a standard chart, the real design pages and a few typical images are assembled and output both to CTP plates and to a digital proof simulating a similar press ICC profile; the press ICC profile is finally obtained through several visual corrections. In order to trace changes in the printing process, printing quality and ICC characterization control strips are designed so that the reference ICC profile can be renewed continuously. Based on the new method of obtaining the press ICC profile, the digital proof has achieved a perfect color matching effect.",2010,0, 4437,Automatic closed eye correction,"On a large group picture, having all people open their eyes can turn out to be a difficult task for photographers. Therefore, in this paper, we describe an original method to automatically correct closed eyes on everyday pictures. For this aim, we explore the combination possibilities of (1) the active shape model (ASM) to detect facial features, such as eyes, nose and head shape, and (2) Poisson editing to clone open eyes seamlessly. To improve the performance of seamless cloning, we suggest a pre-processing method that adjusts skin luminosity between two pictures. A nearest neighbor-based search to find the best suited pair of eyes among a set of donor candidates is also presented.
We applied the proposed algorithm to several pictures and obtained very natural results, which demonstrates the validity of our approach.",2009,0, 4438,Coseismic Horizontal Offsets and Fault-Trace Mapping Using Phase Correlation of IRS Satellite Images: The 1999 Izmit (Turkey) Earthquake,"On August 17, 1999, a strong earthquake (Mw 7.4) occurred along the western sector of the North Anatolian Fault system in Turkey. The epicenter was located near the city of Izmit, 50 km east of Istanbul. Previous works determined the coseismic surface displacements by satellite synthetic aperture radar (SAR) interferometry (InSAR) and satellite optical-image correlation. In 1999, the highest spatial resolution orbiting camera was the panchromatic sensor (PAN), a 5.8-m pixel sensor (SPOT 2 was a 10-m pixel sensor) onboard the Indian Remote Sensing (IRS) satellite. We propose to apply a new phase-correlation method to PAN images to study the coseismic rupture due to the Izmit earthquake. The phase-correlation method does not need phase unwrapping and was proved to be robust under a wide variety of circumstances. Image correlometry deals with the quantification of the subpixel offsets over the whole image, allowing displacement measurement with an accuracy that is proportional to the pixel size. We measured the near-field deformations exploiting two geometrically corrected IRS images with similar look angles. A quality check of the derived offset map was performed by comparison with GPS benchmarks and SPOT offsets. The results show that IRS PAN images can be correlated to derive coseismic slip offsets due to a large earthquake (and to map its fault trace).",2010,0, 4439,A novel method for DC system grounding fault monitoring on-line and its realization,"On the basis of a comparison and analysis of present grounding fault monitoring methods, such as the AC injection method and the DC leakage method, this paper points out their shortcomings in practical applications. A novel method, named the different-frequency-signals method, for detecting grounding faults in DC systems is advanced, which can overcome the bad influence of the distributed capacitance between ground and the branches. Finally a new kind of detector based on the proposed method is introduced. The detector, with a C8051F041 as its core and adopting the different-frequency-signals method, realizes accurate on-line grounding fault monitoring. The principles, hardware and software design are introduced in detail. The experimental results and practical operation show that the detector has the advantages of high precision, good anti-interference capability, a high degree of automation, low cost, etc.",2008,0, 4440,Fault Diagnosis System Based on Virtual Instrument for Ship Electromechanical System,"On the basis of the combination of virtual instrument and database technology, a fault diagnosis system for ship electromechanical systems is developed. The whole structure, principle and key techniques are described in detail. Visual software is designed with the powerful tool Visual C++. By using ODBC techniques, the problem of sharing data between Visual C++ and the database is solved. An Agilent data acquisition/switch unit is used to acquire data.
The application indicates that the system is practical and applicable.",2010,0, 4441,Error control schemes for on-chip communication links: the energy-reliability tradeoff,"On-chip interconnection networks for future systems on chip (SoC) will have to deal with the increasing sensitivity of global wires to noise sources such as crosstalk or power supply noise. Hence, transient delay and logic faults are likely to reduce the reliability of across-chip communication. Given the reduced power budgets for SoCs, in this paper, we develop solutions for combined energy minimization and communication reliability control. Redundant bus coding is proved to be an effective technique for trading off energy against reliability, so that the most efficient scheme can be selected to meet predefined reliability requirements in a low signal-to-noise ratio regime. We model on-chip interconnects as noisy channels and evaluate the impact of two error recovery schemes on energy efficiency: correction at the receiver stage versus retransmission of corrupted data. The analysis is performed in a realistic SoC setting, and holds both for shared communication resources and for peer-to-peer links in a network of interconnects. We provide SoC designers with guidelines for the selection of energy-efficient error-control schemes for communication architectures.",2005,0, 4442,Using WS-BPEL to Implement Software Fault Tolerance for Web Services,"One area of the Web services architecture yet to be standardised is that of fault tolerance for services. At the same time, WS-BPEL is moving from a de facto standard to an OASIS ratified standard for combining services into processes. This paper investigates the feasibility of using WS-BPEL as an implementation technique for fault tolerant Web services. The mapping of various fault tolerance patterns to WS-BPEL is presented. A prototype tool for combining service interfaces into a single facade and configuring fault tolerant mechanisms on a per-operation basis is also discussed. It is found that most fault tolerance patterns readily map onto WS-BPEL concepts, particularly when using the upcoming 2.0 version of the language. Evaluating and minimising the performance overheads involved in process execution is identified as a key future direction, as is working on the functionality and usability of the configuration tool",2006,0, 4443,Streaming real-time audio and video data with transformation-based error concealment and reconstruction,"One fundamental problem with streaming audio and video data over unreliable IP networks is that packets may be dropped or arrive too late for playback. Traditional error control schemes are not attractive because they either add redundant information that may worsen network traffic, or rely solely on the inadequate capability of the decoder to do error concealment. The authors propose a simple yet efficient transformation-based algorithm to conceal network losses in streaming real-time audio and video data over the Internet. On the receiver side, we adopt a simple reconstruction algorithm based on interpolation, as sophisticated concealment techniques cannot be employed in software-based real-time playback. On the sender side, we design a linear transformation with the objective of minimizing the mean squared error, assuming that some of the descriptions are lost and that the missing information is reconstructed by simple averaging at the destination.
In the case of video streaming, we further integrate the transformations into the discrete cosine transform (DCT) to produce an optimized reconstruction-based DCT. Experimental results show that our proposed algorithm performs well in real Internet tests",2000,0, 4444,Coordinated checkpoint versus message log for fault tolerant MPI,"MPI is one of the most widely adopted programming models for large clusters and grid deployments. However, these systems often suffer from network or node failures. This raises the issue of selecting a fault tolerance approach for MPI. Automatic and transparent approaches are based on either coordinated checkpointing or message logging associated with uncoordinated checkpointing. There are many protocols, implementations and optimizations for these approaches but few results comparing them. Coordinated checkpointing has the advantage of a very low overhead on fault-free executions. In contrast, a message logging protocol systematically adds a significant message transfer penalty. The drawbacks of coordinated checkpointing come from its synchronization cost at checkpoint and restart times. In this paper we implement, evaluate and compare the two kinds of protocols with a special emphasis on their respective performance according to fault frequency. The main conclusion (under our experimental conditions) is that message logging becomes relevant for a large-scale cluster from one fault every hour for applications with large datasets.",2003,0, 4445,WTMR--A New Fault Tolerance Technique for Wireless and Mobile Computing Systems,"Much research has been done to evolve cost-effective techniques for fault tolerance of applications running on static distributed systems. With the advent of technology, wireless systems emerged by the end of the last decade, and received considerable attention as useful environments for mobile distributed systems. The requirements for designing fault tolerant techniques for mobile computing systems have also come to the fore. The objective of this paper is to study and analyse the effectiveness of (i) the triple modular redundancy (TMR) technique and (ii) the checkpoint and recovery technique that may be used by wireless/mobile systems for tolerating faults. A wireless TMR (WTMR) checkpointing technique is proposed in the paper that uses checkpointing to add fault tolerance to a TMR node in a wireless system",2007,0, 4446,Zero-Steady-State-Error Input-Current Controller for Regenerative Multilevel Converters Based on Single-Phase Cells,"Multicell converters are one of the alternative topologies for medium-voltage industrial drives. For an application requiring regenerative capability, each power cell must be constructed with a three- or single-phase pulsewidth-modulation (PWM) rectifier as front end. The choice of single-phase PWM rectifiers for the input of the cells results in a reduced number of power switches and a simpler input transformer than the three-phase equivalent. However, its control is not as straightforward. This paper proposes the use of higher order resonant controllers in the classical control structure of the single-phase PWM rectifier. This ensures zero steady-state tracking error of the reference current at fundamental frequency. A detailed description of the design criteria for the position of the zeros and poles of the controller is given.
Experimental results showing the good performance of the single-phase input cells and the proposed control are included",2007,0, 4447,Generation of compact test sets with high defect coverage,"Multi-detect (N-detect) testing suffers from the drawback that its test length grows linearly with N. We present a new method to generate compact test sets that provide high defect coverage. The proposed technique makes judicious use of a new pattern-quality metric based on the concept of output deviations. We select the most effective patterns from a large N-detect pattern repository, and guarantee a small test set as well as complete stuck-at coverage. Simulation results for benchmark circuits show that with a compact, 1-detect stuck-at test set, the proposed method provides considerably higher transition-fault coverage and coverage ramp-up compared to another recently-published method. Moreover, in all cases, the proposed method either outperforms or is as effective as the competing approach in terms of bridging-fault coverage and the surrogate BCE+ metric. In many cases, higher transition-fault coverage is obtained than with much larger N-detect test sets for several values of N. Finally, our results provide the insight that, instead of using N-detect testing with as large an N as possible, it is more efficient to combine the output deviations metric with multi-detect testing to get high-quality, compact test sets.",2009,0, 4448,Constrained error propagation for efficient image transmission over noisy channels,"Multimedia applications increasingly require efficient transmission of still and moving images over wireless channels. However, wireless channels originally designed for voice transmission impose severe bandwidth constraints on image data. In addition, data transmitted over wireless channels are subject to degradation due to such phenomena as burst noise and fading that cause errors. The JPEG standard has been developed for image data compression necessary for efficient transmission. The emerging JPEG-2000 standard addresses the transmission error problem by including provisions for error-resilience tools. These include resynchronization, data recovery and error concealment. In this paper, we survey current error resilience techniques for image data, and focus on the synchronization aspect of error resilience. In particular, we propose an application of self-synchronizing variable length codes (VLC) to achieve the dual goal of efficient source coding and constrained error propagation to localize the effects of data corruption. This makes subsequent data recovery and error concealment more effective. Performance analysis of the proposed technique is also presented",2002,0, 4449,Multimedia processor-based implementation of an error-diffusion halftoning algorithm exploiting subword parallelism,"Multimedia processor-based implementations of digital image processing algorithms have become important since several multimedia processors are now available and can replace special-purpose hardware-based systems because of their flexibility. Multimedia processors increase throughput by processing multiple pixels simultaneously using a subword-parallel arithmetic and logic unit architecture. The error-diffusion halftoning algorithm employs feedback of quantized output signals to faithfully convert a multi-level image to a binary image or to one with fewer levels of quantization. This makes it difficult to achieve speedup by utilizing the multimedia extension.
In this study, the error-diffusion halftoning algorithm is implemented for a multimedia processor using three methods: single-pixel, single-line, and multiple-line processing. The single-pixel approach is the closest to conventional implementations, but the multimedia extension is used only in the filter kernel. The single-line approach computes multiple pixels in one scan-line simultaneously, but requires a complex algorithm transformation to remove dependencies between pixels. The multiple-line method exploits parallelism by employing a skewed data structure and processing multiple pixels in different scan-lines. The Pentium MMX instruction set is used for quantitative performance evaluation including run-time overheads and misaligned memory accesses. A speedup of more than ten times is achieved compared to the software (integer C) implementation on a conventional processor for the structurally sequential error-diffusion halftoning algorithm",2001,0, 4450,A Fault Tolerant Comparison Internet Shopping System: BestDeal by Using Mobile Agent,"Mobile agents have been advocated to support electronic commerce over the Internet. While being a promising paradigm, many intricate problems such as security and fault tolerance need to be solved to make this vision reality. In this paper we have proposed a fault-tolerant comparison Internet shopping system, BestDeal. We assume that both the mobile agent and the host responsible for executing the mobile agent are trustworthy and that the mobile agent does not get tampered with, kidnapped or robbed on its way. The Hierarchical Fault Tolerance Protocol (HFTP) has been used to make this application fault tolerant, i.e., the user who launches the mobile agent receives it back with the correct result within the time limit in spite of hardware and software faults such as link failure, host failure, or a crash of the mobile agent or mobile agent system. The proposed protocol has been modeled using CPN tools and analyzed using simulations and data gathering tools.",2009,0, 4451,Generic Fault-Tolerance Mechanisms Using the Concept of Logical Execution Time,"Model-based development has become state of the art in software engineering. Unfortunately, the code generators used often focus on the pure application functionality. Features like automatic generation of fault-tolerance mechanisms are not covered. One main reason is the inadequacy of the models used. An adequate model must have, among other things, explicit execution semantics and must be suited to support replica determinism and automatic state synchronization. These requirements are fulfilled when using the concept of logical execution time, a time-triggered approach. This approach hides implementation details like the physical execution from the user, in contrast to other time-triggered paradigms. Within this paper, we present a solution to exploit this concept to realize major fault-tolerance mechanisms in a generic way.",2007,0, 4452,Localizing Program Errors via Slicing and Reasoning,"Model-based program debugging exploits discrepancies between the program behavior anticipated by a programmer and the program's actual behavior when executed on a set of inputs. From symptoms exhibited by a failing trace, potential culprits in the program can be localized. However, since the cause of the error is nested deeper in the code than the error itself, localizing and correcting errors is time-consuming, hard work. The error trace produced by a model checker may contain more information than it appears to.
Thus, counterexamples can be sufficient and indicative of the cause of the property violation. We present an assumption-based approach to localize the cause of a property violation using reasoning with constraints. To reduce the time consumed in error localization, we first use dynamic program slicing to localize several statements that may account for the violation of the property. An assumption is then made to point out which statement(s) is (are) faulty. Some constraints are introduced from the properties which are model checked for the program. A calculus of reasoning with these constraints is processed under the assumption along a counterexample. If the result is consistent, the assumption is true (we can localize errors in those statements which the assumption supposes to be faulty); otherwise, the assumption is wrong and another assumption should be made. Some examples support the applicability and effectiveness of our approach.",2008,0, 4453,Cause-effect modeling and simulation of power distribution fault events,"Modeling and simulation are important to study power distribution faults due to the limited actual data and high cost of experimentation. Although a number of software packages are available to simulate the electric signals, approaches for simulating fault events in different environments are not well developed yet. In this paper, we propose a framework for modeling and simulating fault events in power distribution systems based on environmental factors and cause-effect relations among them. The spatial and temporal aspects of significant environmental factors leading to various faults are modeled as raster maps and probability distributions, respectively. The cause-effect relations are expressed as fuzzy rules and a hierarchical fuzzy inference system is built to infer the probability of faults given the simulated environments. This work will be helpful in fault diagnosis for different local systems and provide a configurable data source to other researchers and engineers in similar areas as well. A sample fault simulator we have developed is used to illustrate the approaches.",2010,0, 4454,A new approach of gross errors detection for soft sensing data based on cluster analysis,"Modeling data plays a very important role in the process of establishing an accurate soft sensing model. Gross error detection for modeling data can ensure the good quality of modeling data, and thus ensure the good performance of the soft sensor model. In this paper, a new gross error detection method based on cluster analysis is proposed. Unlike the traditional methods, the new method does not rely on the mechanism model, and it is better suited to the characteristics of soft sensing. A new clustering algorithm is presented to detect the gross errors of modeling data based on the special characteristics of soft sensing. The new clustering algorithm detects the gross errors by analyzing the Euclidean distance between the data points and the center of the data set. The experiments demonstrate that the new detection approach based on the new clustering method can detect gross errors effectively.",2010,0, 4455,State space model based dimensional errors analysis for rigid part in multistation manufacturing processes,"Modeling of variation propagation in multistation machining processes is one of the most important research fields. In this paper, a mathematical model to depict the part dimensional variation of the complex multistation manufacturing process is formulated.
A linear state space dimensional propagation equation is established through kinematic analysis of the relationships among locating parameter variation and locating datum variation, so that the dimensional error accumulation and transformation within the multistation process are quantitatively described. A systematic procedure to build the model is presented, which enhances the way variation sources are determined in complex machining systems. Finally, an industrial case of a multistation machined part in one manufacturing shop is given to verify the validity and practicability of the method. The analytical model is essential to quality control and improvement for multistation systems in machining quality forecasting and design optimization.",2010,0, 4456,Radtest - Testing Board for the Software Implemented Hardware Fault Tolerance Research,"Modern experiments in particle physics are based on advanced and sophisticated electronic systems which have to operate under radiation impact. The problem of designing a hardened system becomes very important, especially in places such as accelerators and synchrotrons where the results of the experiments depend on control systems based on digital devices, e.g., microcontrollers. This paper highlights new solutions of the reliability problem known as software implemented hardware fault tolerance. That is a strict software approach and could be used with unhardened, commercial off-the-shelf (COTS) components.",2007,0, 4457,Multidimensional Layered Forward Error Correction Using Rateless Codes,"Modern layered or scalable video coding technologies generate a video bit stream with various inter-layer dependencies due to references between the layers. This work proposes a method for extending forward error correction (FEC) codes following dependency structures within the media. The proposed layer-aware FEC (L-FEC) generates repair symbols so that protection of less important dependency layers can be used with protection of more important layers for combined error correction. The L-FEC approach is applied, as an example, to rateless LT and Raptor codes. Gains for more important layers can be achieved without increasing the total FEC code rate. The performance gain of the L-FEC is shown by simulation results with receiver-driven layered multicast transmission using scalable video coding (SVC) with a Raptor-based L-FEC.",2008,0, 4458,Notice of Retraction
Application of Network Technique in Transformer Fault Diagnosis,"Notice of Retraction

After careful and considered review of the content of this paper by a duly constituted expert committee, this paper has been found to be in violation of IEEE's Publication Principles.

We hereby retract the content of this paper. Reasonable effort should be made to remove all past references to this paper.

The presenting author of this paper has the option to appeal this decision by contacting TPII@ieee.org.

This paper introduces the computer network techniques of Local Area Networks (LAN) and Wide Area Networks (WAN), and describes the fault diagnosis techniques of ES & ANN. Computer network remote diagnosis and monitoring are introduced into transformer fault diagnosis. The feasibility and validity of the network system for transformer fault diagnosis are explained by some examples.",2010,0, 4459,Novel algorithms for earth fault indication based on monitoring of shunt resistance of MV feeder as a part of relay protection,Novel methods for very high-resistance earth fault identification and location in isolated or high impedance earthed distribution systems have been developed. The novel indication algorithms are able to detect and locate faults up to some hundreds of kilo-ohms. These algorithms were implemented in a microprocessor-based feeder terminal. The indication methods have proven to be very appropriate for implementation, and the preliminary results from the field installation and field experiments are promising,2001,0, 4460,An architecture for a fault tolerant highly reconfigurable shop floor,"Nowadays Internet-enabled technologies have opened wide the door for penetrating new markets and profiting from specific business scenarios. In the manufacturing domain these opportunities are of a volatile nature and, rather than requiring large-scale production of a single item, demand mid/low-volume production with variants of a certain good. The impact on shop floor activities is considerable and calls for highly reconfigurable set-ups. In this paper the authors propose an agent-based architecture, which follows the reconfigurable manufacturing systems (RMS) framework, validated by an implementation, in the discrete manufacturing domain, that supports seamless runtime reconfiguration while being tolerant to faults in running processes.",2008,0, 4461,"Colour space transformation, colour correction and exact colour reproduction with CNN technology","Nowadays many problems requiring huge computing power have arisen. Although the performance of digital processors doubles every year, there are certain tasks where the computation cannot be carried out within a reasonable time interval. Such hard problems include the analysis of large dynamical systems and real-time exact colour reproduction. The exact colour visualization of motion pictures is necessary in industrial, medical and scientific research areas. Thus, for example, exact colour reproduction is required for remote medical diagnosis or remote operation. The doctor has to see the same image that appears in reality. Device-dependent colour appearance may cause faulty decisions. Nowadays these problems cannot be solved perfectly because many steps of the transformation are not completely known and the huge number of computations cannot be done in real time even by the fastest PC. In this article we describe some methods to produce exact colours in a remote medical diagnostic system.",2002,0, 4462,Fault Detection and Diagnosis in a Set Inverter-Induction Machine Through Multidimensional Membership Function and Pattern Recognition,"Nowadays, electrical drives generally associate an inverter and an induction machine. Thus, these two elements must be taken into account in order to provide a relevant diagnosis of these electrical systems. In this context, the paper presents a diagnosis method based on a multidimensional function and pattern recognition (PR)."
Traditional formalism of the PR method has been extended with some improvements such as the automatic choice of the feature space dimension or a ""nonexclusive"" decision rule based on the k-nearest neighbors. Thus, we introduce a new membership function, which takes into account the number of nearest neighbors as well as the distance from these neighbors to the sample to be classified. This approach is illustrated on a 5.5 kW inverter-fed asynchronous motor, in order to detect supply and motor faults. In this application, diagnostic features are only extracted from electrical measurements. Experimental results prove the efficiency of our diagnosis method.",2009,0, 4463,The Intelligent Fault Diagnosis for Composite Systems Based on Machine Learning,"Nowadays, electronic devices are getting more complex, which makes it more difficult to use a single reasoning technique to meet the demands of fault diagnosis. Integrating two or more reasoning techniques becomes a trend in developing intelligent diagnosis. In this paper we discuss the intelligent diagnosis problems and propose a diagnosis architecture for composite systems, which combines rule-based diagnosis and model-based diagnosis. These two diagnosis programs not only work efficiently with machine learning in different stages of the fault diagnosis process, but also efficiently improve the process by making the best use of their individual advantages",2006,0, 4464,Analysis and Utilizing of the Error Models in Network Education with NS3,"NS3 (Network Simulator version 3) can construct a network environment to help teachers in computer network teaching. It displays more benefits than the traditional ways. Because it is open source software, NS3 can be used freely and makes network simulation easy; furthermore, the result of the network simulation can be analyzed with different software tools. In this paper we focus on the error model of WLAN in NS3 which can be used in the teaching of a network course, introducing and analyzing the related theories about the error models used in the NS3 physical layer, so as to help researchers understand and utilize the error models in NS3 smoothly. In addition, several error model simulation experiments were carried out.",2010,0, 4465,Utility of popular software defect models,Numerical models can be used to track and predict the number of defects in developmental and operational software. This paper introduces techniques to critically assess the effectiveness of software defect reduction efforts as the data is being gathered. This can be achieved by observing the fundamental shape of the cumulative defect discovery curves and by judging how quickly the various defect models converge to common predictions of long term software performance,2002,0, 4466,Autonomous fault tolerant multi-robot cooperation using artificial immune system,"Multi-robot cooperation is an active research field where researchers have proposed different cooperation strategies. However, most of them have not considered fault tolerance, which will be critical when one or more robots fail during cooperation. In this paper we propose a self regulated fault tolerant multi-robot cooperation system based on the principles of artificial immune systems. The proposed approach relies on broadcasting of the capability and cost function of a robot that is required for cooperation while also taking care of partial and full failure of a robot during the communication for cooperation.
In our system, a robot is regarded as an antibody and its environment as an antigen. The communication and cooperation strategies are inspired by Jerne's idiotypic network theory and the structure of the antibody. The developed methodology is verified by simulation.",2008,0, 4467,Fault tolerance through re-execution in multiscalar architecture,"Multi-threading and multiscaling are two fundamental microarchitecture approaches that are expected to stay on the existing performance gain curve. Both of these approaches assume that integrated circuits with over a billion transistors will become available in the near future. Such large integrated circuits imply reduced design tolerances and hence increased failure probability. Conventional hardware redundancy techniques for desired reliability in computation may severely limit the performance of such high performance processors. Hence we need to study novel methods to exploit the inherent redundancy of the microarchitectures, without unduly affecting the performance, to provide correct program execution and/or detect failures (permanent or transient) that can occur in the hardware. This paper proposes a time redundancy technique suitable for multiscalar architectures. In the multiscalar architecture, there are usually several processing units to exploit the instruction level parallelism that exists in a given program. The technique in this paper uses a majority of the processing units for executing the program as in the traditional multiscalar paradigm while using the remainder of the processing units for re-executing the committed instructions. By comparing the results from the two program executions, errors caused by permanent or transient faults in the processing units can be detected. Simulation results presented in this paper demonstrate that this can be achieved with about 5-15% performance degradation",2000,0, 4468,Using a Fault Hierarchy to Improve the Efficiency of DNF Logic Mutation Testing,"Mutation testing is a technique for generating high quality test data. However, logic mutation testing is currently inefficient for three reasons. One, the same mutant is generated more than once. Two, mutants are generated that are guaranteed to be killed by a test that kills some other generated mutant. Three, mutants that when killed are guaranteed to kill many other mutants are not generated as valuable mutation operators are missing. This paper improves logic mutation testing by 1) extending a logic fault hierarchy to include existing logic mutation operators, 2) introducing new logic mutation operators based on existing faults in the hierarchy, 3) introducing new logic mutation operators having no corresponding faults in the hierarchy and extending the hierarchy to include them, and 4) addressing the precise effects of equivalent mutants on the fault hierarchy. An empirical study using minimal DNF predicates in avionics software showed that a new logic mutation testing approach generates fewer mutants, detects more faults, and outperforms an existing logic criterion.",2009,0, 4469,Bandwidth Mismatch Correction for a Two-Channel Time-Interleaved A/D Converter,"Mismatches between sample-and-hold (S/H) circuits in a time-interleaved analog-to-digital data converter (ADC) cause undesirable distortions in the output spectrum. To reduce these undesired spectral components, we introduce a hybrid filter-bank model of a two channel time-interleaved ADC.
The model allows the development of a digital domain correction technique that removes the first-order effects of S/H mismatches. A single FIR correction filter is required, and simulations demonstrate the effectiveness of the proposed correction method",2007,0, 4470,Examining the Relationships between Performance Requirements and Not a Problem Defect Reports,"Missing or imprecise requirements can lead stakeholders to make incorrect assumptions. A ""not a problem"" defect report (NaP) describes a software behavior that a stakeholder regards as a problem while the developer believes this behavior is acceptable and chooses not to take any action. As a result, a NaP wastes the time of the development team because resources are spent analyzing the problem but the quality of the software is not improved. Performance requirements specification and analysis are instance-based. System performance can change based upon the execution environment or usage patterns. To understand how the availability and precision of performance requirements can affect NaP occurrence rate, we conducted a case study on an embedded control module. We applied the performance refinement and evolution model to examine the relationship between each factor in the performance requirements and the corresponding NaP occurrence rate. Our findings show that precise specification of subjects or workloads lowers the occurrence rate of NaPs. Precise specification of measures or environments does not lower the occurrence rate of NaPs. Finally, the availability of performance requirements does not affect NaP occurrence rate in this case study.",2008,0, 4471,Application of support vector machines for fault diagnosis in power transmission system,"Post-fault studies of recent major power failures around the world reveal that mal-operation and/or improper co-ordination of the protection system were responsible to some extent. When a major power disturbance occurs, protection and control action are required to stop the power system degradation, restore the system to a normal state and minimise the impact of the disturbance. However, this has indicated the need for improving protection co-ordination by additional post-fault and corrective studies using intelligent/knowledge-based systems. A process to obtain a knowledge-base using support vector machines (SVMs) is presented for ready post-fault diagnosis purpose. SVMs are used as an intelligence tool to identify the faulted line that is emanating from the substation and to find the distance from the substation. Also, SVMs are compared with radial basis function neural networks in datasets corresponding to different faults on the transmission system. Classification and regression accuracies are reported for both strategies. The approach is particularly important for post-fault diagnosis of any mal-operation of relays following a disturbance in the neighbouring line connected to the same substation. This may help to improve the fault monitoring/diagnosis process, thus assuring secure operation of the power systems. To validate the proposed approach, results on the IEEE 39-Bus New England system are presented for illustration purpose.",2008,0, 4472,BLoG: Post-Silicon bug localization in processors using bug localization graphs,"Post-silicon bug localization - the process of identifying the location of a detected hardware bug and the cycle(s) during which the bug produces error(s) - is a major bottleneck for complex integrated circuits.
Instruction Footprint Recording and Analysis (IFRA) is a promising post-silicon bug localization technique for complex processor cores. However, applying IFRA to new processor microarchitectures can be challenging due to the manual effort required to implement special microarchitecture-dependent analysis techniques for bug localization. This paper presents the Bug Localization Graph (BLoG) framework that enables application of IFRA to new processor microarchitectures with reduced manual effort. Results obtained from an industrial microarchitectural simulator modeling a state-of-the-art complex commercial microarchitecture (Intel Nehalem, the foundation for the Intel Core i7 and Core i5 processor families) demonstrate that BLoG-assisted IFRA enables effective and efficient post-silicon bug localization for complex processors with high bug localization accuracy at low cost.",2010,0, 4473,Hybrid Fault Diagnosis Scheme Implementation for Power Distribution Systems Automation,"Power distribution automation and control are important tools in the current restructured electricity markets. Unfortunately, due to their stochastic nature, distribution system faults are hardly avoidable. This paper proposes a novel fault diagnosis scheme for power distribution systems, composed by three different processes: fault detection and classification, fault location, and fault section determination. The fault detection and classification technique is wavelet based. The fault-location technique is impedance based and uses local voltage and current fundamental phasors. The fault section determination method is artificial neural network based and uses the local current and voltage signals to estimate the faulted section. The proposed hybrid scheme was validated through Alternate Transient Program/Electromagnetic Transients Program simulations and was implemented as embedded software. It is currently used as a fault diagnosis tool in a Southern Brazilian power distribution company.",2008,0, 4474,Comparisons of logistic regression and artificial neural network on power distribution systems fault cause identification,"Power distribution systems play an important role in modern society. Proper outage root cause identification is often essential for effective restorations when outages occur. This paper reports on the investigation and results of two classification methods: logistic regression and neural network, applied in a power distribution fault cause classifier. Logistic regression is seldom used in power distribution fault diagnosis, while neural network has been extensively used in power system reliability research. Evaluation criteria of the goodness of the classifier include: correct classification rate, true positive rate, true negative rate, and geometric mean. Two major distribution faults, tree and animal contact, are used to illustrate the characteristics and effectiveness of the investigated techniques.",2005,0, 4475,Power distribution systems fault cause identification using logistic regression and artificial neural network,"Power distribution systems play an important role in modern society. When outages occur, fast and proper restorations are crucial to improve system reliability. Proper outage root cause identification is often essential for effective restorations. This paper reports on the investigation of two classification methods: logistic regression and neural network, applied in a power distribution fault cause classifier (PDFCC) and a comparison of their results.
Logistic regression is seldom used in power distribution fault diagnosis, while neural network has been extensively used in power system reliability research. Evaluation criteria of the goodness of PDFCC include: correct classification rate, true positive rate, true negative rate, and geometric mean. This paper also discusses the practical application issues including data insufficiency, imbalanced data constitution, and threshold setting that are often faced in power distribution fault diagnosis. Two major distribution faults, tree and animal contact, are used to illustrate the characteristics and effectiveness of the investigated techniques",2005,0, 4476,Development of an expert system to fault diagnosis of three phase induction motor drive system,"Power electronic systems are considered as one of the most important components in many applications, such as nuclear reactors, aerospace, military applications and life saving machines. In such applications the system should be highly reliable. A knowledge-based expert system was developed to diagnose faults in a three phase induction motor system. A software tool called KAPPA PC 2.1 was used to develop the expert system. The system is modeled in MATLAB SIMULINK and the simulation results at normal and fault conditions were rewritten as a set of if-then rules by which the expert system can discriminate the fault.",2008,0, 4477,A very fast unblocking scheme for distance protection to detect symmetrical faults during power swings,"Power swing blocking function in distance relays is necessary to distinguish between a power swing and a fault. However the distance relay should be fast and reliably unblocked if any fault occurs during a power swing. Although unblocking the relay under asymmetrical fault conditions is straightforward based on detecting the zero- or negative-sequence component of current, symmetrical fault detection during a power swing presents a challenge since there is no unbalance. This paper presents a very fast detection method used to detect symmetrical faults occurring during a power swing. Based on a 50 Hz component appearing in the three-phase active power after symmetrical fault inception and using the Fast Fourier Transform (FFT), the proposed detection method can reliably and quickly detect symmetrical faults occurring during a power swing in one power cycle, i.e. 0.02 second. This detection method is easy to set and immune to the fault inception time and fault location. Power swing and fault conditions are simulated by using the software PSCAD/EMTDC. FFT is performed by using the On-Line Frequency Scanner block included in the software.",2010,0, 4478,An Intelligent Approach Using Support Vector Machines for Monitoring and Identification of Faults on Transmission Systems,"Power system disturbances are often caused by faults on transmission lines. When faults occur in a power system, the protective relays detect the fault and initiate tripping of appropriate circuit breakers, which isolate the affected part from the rest of the power system. Generally extra high voltage (EHV) transmission substations in power systems are connected with multiple transmission lines to neighboring substations. In some cases mal-operation of relays can happen under varying operating conditions, because of inappropriate coordination of relay settings. Due to these actions the power system margins for contingencies are decreasing. Hence, power system protective relaying reliability becomes increasingly important.
In this paper an approach is presented using the support vector machine (SVM) as an intelligent tool for identifying the faulted line that is emanating from a substation and finding the distance from the substation. Results on a 24-bus equivalent EHV system, part of the Indian southern grid, are presented for illustration purpose. This approach is particularly important to avoid mal-operation of relays following a disturbance in the neighboring line connected to the same substation and assuring secure operation of the power systems",2006,0, 4479,Field experience of transformer untanking to identify electrical faults and comparison with Dissolved Gas Analysis,"A power transformer consists of components which are under consistent thermal and electrical stresses. The major component which degrades under these stresses is the paper insulation of the power transformer. The life of the transformer is determined by the condition of the paper insulation. The degradation of the paper insulation will be accelerated with the presence of an electrical fault. Electrical faults in a power transformer can be categorized into two types, which are Partial Discharge (PD) and Arcing. A PD will eventually develop into arcing. Any electrical fault in the transformer can be detected by using the Dissolved Gas Analysis technique. The DGA can be used to differentiate between the types of faults in the transformer. However, DGA alone is not conclusive in determining the electrical fault in the transformer. As a complement, the acoustic partial discharge technique was used to detect the electrical fault in the transformer. In this paper, the detection of electrical faults in two units of 33/11 kV, 15 MVA transformers was done by using Dissolved Gas Analysis (DGA). Then, the acoustic partial discharge test was carried out to detect the activity and locate the source of the electrical fault. During the acoustic partial discharge testing, some electrical discharge signal was picked up from the On-Load Tap Changer (OLTC) tank. Then, the transformers were untanked for physical inspection. Based on the inspection done on the two transformers, the DGA analysis methods were unable to detect the OLTC oil contamination in the main tank oil and it is very dependent on the transformer conservator tank design. The acoustic partial discharge technique proves to be a useful tool in detecting electrical discharges in the power transformer.",2009,0, 4480,A new approach for real-time multiple open-circuit fault diagnosis in voltage source inverters,"Practically all the diagnostic methods for open-circuit faults in voltage source inverters (VSI) developed during the last decades are focused on the occurrence of single faults and do not have the capability to handle and identify multiple failures. This paper presents a new method for real-time diagnostics of multiple open-circuit faults in voltage source inverters feeding ac machines. In contrast with the majority of the methods found in the literature, which are based on the motor phase currents average values, the average absolute values are used here as principal quantities in order to formulate the diagnostic variables. These prove to be more robust against the issue of false alarms, also carrying information about multiple open-circuit failures.
Furthermore, by the combination of these variables with the machine phase currents average values, it is possible to obtain characteristic signatures, which allow for the detection and identification of single and multiple open-circuit faults.",2010,0, 4481,Geometric correction and validation of Hyperion and ALI data for EVEOSD,"Precise geometric correction of EO-1's Hyperion data is essential to link ground spectral data and satellite hyperspectral data. Two scenes have been selected from sites of the EVEOSD (Evaluation and Validation of EO-1 for Sustainable Development of Forests) project. One site is the Greater Victoria Watershed District (GVWD) located on south Vancouver Island, BC and the other is Hoquiam located in southwestern Washington State. Ground Control Point (GCP) collection has been performed using a feature fitting method in which high accuracy, orthorectified photo-derived polygons of features are used for tie-down. For example, lakes are adjusted to match the same feature obvious in the hyperspectral imagery. This technique allows for easier estimation of a GCP's precise fit to the imagery. A third (11) of the GCPs were identified as check points to validate the geometric models. GCPs were collected independently from both the VNIR and SWIR arrays of the Hyperion sensor to determine the adjustment factor required to remove the displacement and skew between these arrays. The adjustment can then be applied to GCPs collected from one array to make a compatible geometric correction model for both arrays. The polynomial and rational function correction methods have been applied to both scenes with various orders applied to each function. The effect of terrain distortion removal is evaluated using the rational function method. Hyperion data can be geocorrected with surprising accuracy. For example, we obtained 10 m RMS on check points with the rational function. With a second order polynomial we achieved 13 m RMS without terrain correction. The accuracy of this latter result is due to the small swath width of the sensor. Applying terrain correction does improve the accuracy of geometric correction in areas with high relief. A similar procedure was applied to EO-1's ALI sensor and this paper compares the results for Hyperion and ALI geometric fidelity.",2002,0, 4482,Delay fault testing of IP-based designs via symbolic path modeling,"Predesigned blocks called intellectual property (IP) cores are increasingly used for complex system-on-a-chip (SoC) designs. The implementation details of IP cores are often unknown or unavailable, so delay testing of such designs is difficult. We propose a method that can test paths traversing both IP cores and user-defined blocks, an increasingly important but little-studied problem. It models representative paths in IP circuits using an efficient form of binary decision diagram (BDD) and generates test vectors from the BDD model. We also present a partitioning technique, which reduces the BDD size by orders of magnitude and makes the proposed method practical for large designs. Experimental results are presented that show that it robustly tests selected paths without using extra logic and, at the same time, protects the intellectual contents of IP cores.",2001,0, 4483,Clustering and Metrics Thresholds Based Software Fault Prediction of Unlabeled Program Modules,"Predicting the fault-proneness of program modules when the fault labels for modules are unavailable is a practical problem frequently encountered in the software industry.
Because fault data belonging to the previous software version is not available, supervised learning approaches cannot be applied, leading to the need for new methods, tools, or techniques. In this study, we propose a clustering and metrics thresholds based software fault prediction approach for this challenging problem and explore it on three datasets, collected from a Turkish white-goods manufacturer developing embedded controller software. Experiments reveal that unsupervised software fault prediction can be automated and reasonable results can be produced with techniques based on metrics thresholds and clustering. The results of this study demonstrate the effectiveness of metrics thresholds and show that the standalone application of metrics thresholds (one-stage) is currently easier than the clustering and metrics thresholds based (two-stage) approach because the selection of cluster number is performed heuristically in this clustering based method.",2009,0, 4484,"Predictive Learning, Prediction Errors, and Attention: Evidence from Event-related Potentials and Eye Tracking","Prediction error (surprise) affects the rate of learning: We learn more rapidly about cues for which we initially make incorrect predictions than cues for which our initial predictions are correct. The current studies employ electrophysiological measures to reveal early attentional differentiation of events that differ in their previous involvement in errors of predictive judgment. Error-related events attract more attention, as evidenced by features of event-related scalp potentials previously implicated in selective visual attention (selection negativity, augmented anterior N1). The earliest differences detected occurred around 120 msec after stimulus onset, and distributed source localization (LORETA) indicated that the inferior temporal regions were one source of the earliest differences. In addition, stimuli associated with the production of prediction errors show higher dwell times in an eye-tracking procedure. Our data support the view that early attentional processes play a role in human associative learning.",2007,0, 4485,Modeling the Effect of Size on Defect Proneness for Open-Source Software,"Quality is becoming increasingly important with the continuous adoption of open-source software. Previous research has found that there is generally a positive relationship between module size and defect proneness. Therefore, in open-source software development, it is important to monitor module size and understand its impact on defect proneness. However, traditional approaches to quality modeling, which measure specific system snapshots and obtain future defect counts, are not well suited because open-source modules usually evolve and their size changes over time. In this study, we used Cox proportional hazards modeling with recurrent events to study the effect of class size on defect-proneness in the Mozilla product. We found that the effect of size was significant, and we quantified this effect on defect proneness.",2007,0,5373 4486,Congestion and error control for layered scalable video multicast over WiMAX,"Quality of service (QoS) of wireless multimedia can be seriously degraded due to the more dynamically changing end-to-end available bandwidth caused by wireless fading/shadowing and link adaptation. Moreover, the increased occurrence of wireless radio transmission errors also results in a higher bursty rate of packet loss when compared with wired IP networks.
An end-system driven solution featuring embedded probing for layered multicast of scalable video is thus proposed in this paper. By taking advantage of the QoS features offered by one of the four proposed WiMAX service flow arrangements, this system aims at more flexible layer construction and subscription while remaining reliable in diverse channel conditions and fitting users' demand. The system optimality comes from the best tradeoff of the number of video layers subscribed with the number of additional FEC packets inserted to simultaneously satisfy the estimated available bandwidth and the estimated wireless channel error condition",2007,0, 4487,Single Electron Fault in QCA Binary Wire,Quantum cellular automata (QCA) represents an emerging technology at the nanotechnology level. There are various faults which may occur in QCA cells. One of these faults is the single electron fault (SEF) that can happen during manufacturing or operation of QCA circuits. A detailed simulation based logic level modeling of the single electron fault for the QCA binary wire is presented in this paper.,2009,0, 4488,A fault-tolerant protocol for railway control systems,"Railway control systems are largely based on data communication and network technologies. With the adoption of Ethernet-IP as the main technology for building end-to-end real-time networks on railway control systems, the requirement to deliver high availability, quality and secure services over Ethernet has become strategic. Critical real-time traffic is generally penalized and the maximum restoration time of 50 msec is sometimes exceeded because of real-time application hangs, so passengers' safety could be compromised. It occurs on more than twenty percent of critical fail tests performed. Our main goal is to minimize restoration time from the application point of view. This article describes a protocol to improve critical real-time railway control systems. The algorithm designed gives us fast recoveries when railway computers fail. The protocol permits managing the railway control system from every computer in the network, mixing unicast and multicast messages. Simulations obtained for a real railway line are shown. We have reached excellent results limiting critical failure recoveries to less than 50 msec",2006,0, 4489,"Corrections to scatterometer wind vectors from the effects of rain, using high resolution NEXRAD radar collocations","Rain in the atmosphere and impacts on the ocean surface lead to erroneous observations of the Ku-band normalized radar cross section (NRCS) for the ocean surface, which is collected by orbiting Scatterometers. Rain can cause large errors in satellite-based estimates of the sea surface wind speed and direction derived from the affected data, depending on the surface wind speed and the rainrate. Rain within the radar beam results in attenuation and additive volume backscatter to the satellite. In order to correct each NRCS using a physically based electromagnetic model, the 3-D rain reflectivity must be measured throughout the satellite's Ku-band beam with high resolution, and be nearly simultaneous with the satellite. Using satellite observations within the range of the NWS NEXRAD radars, these S-band data provide 2 km horizontal resolution and comparable vertical resolution, within 4 minutes of the satellite overpass. The correction technique also includes removal of the augmented surface roughness due to rain impacts.
The surface wind vectors are then recalculated using the corrected NRCS, with the same wind-retrieval algorithm as that used to produce the SeaWinds data product. Case studies will be presented that show the improved wind vector estimates over a significant area in several coastal regions (the U.S. East Coast and the Gulf of Mexico), when comparing the corrected winds with the NCEP winds and buoy measurements.",2005,0, 4490,"On extractors, error-correction and hiding all partial information","Randomness extractors (Nisan and Zuckerman, 1996) allow one to obtain nearly perfect randomness from highly imperfect sources of randomness, which are only known to contain ""scattered"" entropy. Not surprisingly, such extractors have found numerous applications in many areas of computer science including cryptography. Aside from extracting randomness, a less known usage of extractors comes from the fact that they hide all deterministic functions of their (high-entropy) input (Dodis and Smith, 2005): in other words, extractors provide a certain level of privacy for the imperfect source that they use. In the latter kind of applications, one typically needs extra properties of extractors, such as invertibility, collision-resistance or error-correction. In this abstract we survey some such usages of extractors, concentrating on several recent results by the author (Dodis et al., 2004 and Dodis and Smith, 2005). The primitives we survey include several flavors of randomness extractors, entropically secure encryption and perfect one-way hash functions. The main technical tools include several variants of the leftover hash lemma, error correcting codes, and the connection between randomness extraction and hiding all partial information. Due to space constraints, many important references and results are not mentioned here; the interested reader can find those in the works of Dodis et al. (2004) and Dodis and Smith (2005)",2005,0, 4491,Photoelastic stress analysis error quantification in vasculature models for robot feedback control,"Real-time and accurate stress calculation in walls of vasculature is desired to provide catheter insertion robots with feedback control without changing the catheter stiffness and lumen. This feedback source also has applications in endovascular surgery simulation for human skills and medical tools evaluation. For that purpose we consider the photoelastic effect, as birefringence produced by light retardation relates to the stress inside the photoelastic materials. In this research a polariscope was designed for urethane elastomer vasculature models, the photoelastic coefficient of urethane elastomer was measured, and the camera system was calibrated to quantify and reduce the error of the measurement system. An average error of 3.6% was found for the pressure range of 70-189 mmHg inside the model of urethane elastomer; this enables accurate calculation of stress in vasculature models during Human Blood Pressure Simulation (HBPS). That way we will be able to compare, in a closed loop, the stress produced by HBPS and by the catheter motion when manipulated by a robot.",2010,0, 4492,Real time control design for mobile robot fault tolerant control. Introducing the ARTEMIC powered mobile robot,"Real-time applications should deliver synchronized data-sets in a timely manner, minimize latency and jitter in their response and meet their performance specifications in the presence of disturbances and faults.
The fault tolerant behavior in mobile robots refers to the possibility to autonomously detect and identify faults as well as the capability to continue operating after a fault occurred. This paper introduces a real-time distributed control application with fault tolerance capabilities for differential wheeled mobile robots, named ARTEMIC. The paper focuses on design details and performance analysis at the system operation level. Some stress tests are executed during normal operation to validate the proposal. Specific design, development and implementation details will be provided in this paper.",2010,0, 4493,A forward error recovery technique for real-time MPEG-2 video transport and its performance over wireless IEEE 802.11 LAN,"Real-time MPEG-2 video transport applications do not usually have the luxury of a reverse channel for recovering from any errors that might occur during communication. Degradation in quality of decoded video frames is immediately apparent in the presence of errors in headers. In this paper, we focus on protecting header information by replicating it in any free space that might be available in the defined MPEG-2 transport stream packets. We also present our implementation experience over wired ATM as well as wireless IEEE 802.11 LAN by incorporating this forward error recovery approach with a real-time MPEG-2 encoder. In our experiments, it is found that the free space available is generally more than adequate for replicating essential header information",2000,0, 4494,A nonpreemptive real-time scheduler with recovery from transient faults and its implementation,"Real-time systems (RTS) are those whose correctness depends on satisfying the required functional as well as the required temporal properties. Due to the criticality of such systems, recovery from faults is an essential part of a RTS. In many systems, such as those supporting space applications, single event upsets (SEUs) are the prevalent type of faults; SEUs are transient faults and affect a single task at a time. We present a scheme to guarantee that the execution of real-time tasks can tolerate SEUs and intermittent faults assuming any queue-based scheduling technique. Three algorithms are presented to solve the problem of adding fault tolerance to a queue of real-time tasks by reserving sufficient slack in a schedule so that recovery can be carried out before the task deadline without compromising guarantees given to other tasks. The first algorithm is a dynamic programming optimal solution, the second is a linear-time heuristic for scheduling dynamic tasks, and the third algorithm comprises extensions to address queues with gaps between tasks (gaps are caused by precedence, resource, or timing constraints). We show through simulations that the heuristics closely approximate the optimal algorithm. Finally, we describe the implementation of the modified admission control algorithm, non-preemptive scheduler, and recovery mechanism in the FT-RT-Mach operating system.",2003,0, 4495,Fault Tolerance-Genetic Algorithm for Grid Task Scheduling using Check Point,"One motivation of grid computing is to aggregate the power of widely distributed resources, and provide non-trivial services to users. To achieve this goal, an efficient grid fault tolerance system is an essential part of the grid. Rather than covering the whole grid fault tolerance area, this survey provides a review of the subject mainly from the perspective of check point. In this review the challenges for fault tolerance are identified. 
In grid environments, execution failures can occur for various reasons such as network failure, overloaded resource conditions, or non-availability of required software components. Thus, fault-tolerant systems should be able to identify and handle failures and support reliable execution in the presence of concurrency and failures. In scheduling a large number of user jobs for parallel execution on an open-resource grid system, the jobs are subject to system failures or delays caused by infected hardware, software vulnerability, and distrusted security policy. In this paper we propose a task-level fault tolerance approach. Task-level techniques mask the effects of the execution failure of tasks. Four task level techniques are retry, alternate resource, check point and replication. The check point strategy achieves optimal load balance across different grid sites. These fault tolerance task level techniques can upgrade grid performance significantly at only a moderate increase in extra resources or scheduling delays in a risky grid computing environment.",2007,0, 4496,On Quantifying Fault Patterns of the Mesh Interconnect Networks,"One of the key issues in the design of multiprocessor systems-on-chip (MP-SoCs), multicomputers, and peer-to-peer networks is the development of an efficient communication network to provide high throughput and low latency and its ability to survive beyond the failure of individual components. Generally, the faulty components may be coalesced into fault regions, which are classified into convex and concave shapes. In this paper, we propose a mathematical solution for counting the number of common fault patterns in a 2-D mesh interconnect network including both convex (I-shape, II-shape, square-shape) and concave (L-shape, U-shape, T-shape, +-shape, H-shape) regions. The results presented in this paper, which have been validated through simulation experiments, can play a key role when studying, particularly, the performance analysis of fault-tolerant routing algorithms and the measure of a network's fault-tolerance expressed as the probability of a disconnection.",2007,0, 4497,FEDA: Fault-tolerant Energy-Efficient Data Aggregation in wireless sensor networks,"One of the key issues in wireless sensor networks is data collection from sensors. In this area, data aggregation is an important technique for reducing the energy consumption. Also, reliability and robustness of transferring data is one of the important challenges. The in-network data aggregation approach which is proposed in this paper, besides achieving ideal energy consumption by limiting a number of redundant and unnecessary responses from the sensor nodes, can increase the chance of receiving data packets at the destination and lead to more accurate results. By utilizing the J-Sim simulator, the proposed approach is compared and evaluated with some important approaches in this area. The simulation results show that by using the proposed approach, the loss amount of the data packets and the average energy consumption of the network will be considerably reduced.",2008,0, 4498,Assessing Diagnostic Techniques for Fault Tolerance in Software,"One of the main concerns in software safety critical applications is to ensure sufficient reliability if one cannot prove the absence of faults. Fault tolerance (FT) provides a plausible method for improving reliability claims in the presence of systematic failures in software. It is plausible that some software FT techniques offer more protection than others.
However, the extent of claims that can be made for different FT software architectures remains unclear. We investigate an approach to FT that integrates data diversity (DD) assertions and traditional assertions (TA). We also present the principles of a method to assess the effectiveness of the approach. The aim of this approach is to make it possible to evolve more powerful FT and thereby improve reliability. This is a step towards the aim of understanding the effectiveness of FT in safety-critical applications and thus making it easier to use FT in safety arguments",2007,0, 4499,AOA Assisted NLOS Error Mitigation for TOA-Based Indoor Positioning Systems,"One of the major challenges for TOA-based accurate indoor positioning systems is the blockage of the LOS path, or equivalently direct path (DP), due to obstructions. Since accurate ranging of these systems depends on the detection of the DP between the transmitter and receiver, significant errors will be introduced into the ranging measurements once the DP cannot be detected. This condition is hence called the undetected direct path (UDP) condition. Owing to the fact that indoor wireless channels exhibit rich multipath propagation, the multipath components other than the DP may be utilized in mitigating errors occurring in UDP areas. In this paper, we introduce a method based on both TOA and AOA information from other multipath components to substantially mitigate the ranging error. We present our results as a comparison with commonly used methods for TOA-based ranging and we show that our proposed technique outperforms these traditional methods.",2007,0, 4500,Fault-Tolerant Online Backup Service: Formal Modeling and Reasoning,"Online backup service software provides automated, offsite, secure online data backup and recovery for remote computers. How to satisfy functional requirements and guarantee the fault tolerance of online backup service software is a difficult but crucial problem faced by software designers. In this paper, we investigate incorporating fault tolerant techniques in the system design, and propose a fault-tolerant online backup service model (FOBSM) to guide the development of an online backup service system. The FOBSM comprises four components: backup client (BC), backup server (BS), storage server (SS), and online backup exception handler (OBEH). The first three components constitute three-party functional units, whereas OBEH serves as the centralized exception handling mechanism, which is devised to receive the external exceptions raised by the other entities, transform them into a global exception, and propagate it to the related entities to handle, so as to improve the fault tolerance of the software greatly. In order to provide precise and explicit idioms to system designers, we use the Object-Z language to specify the FOBSM. Following the Object-Z reasoning rules, we reason about the fault tolerant properties of FOBSM and demonstrate that it can improve the fault tolerance of the online backup service software effectively.",2009,0, 4501,On-line fault detection of sensor measurements,"On-line fault detection in sensor networks is of paramount importance due to the convergence of a variety of challenging technological, application, conceptual, and safety related factors. We introduce a taxonomy for classification of faults in sensor networks and the first on-line model-based testing technique.
The approach is generic in the sense that it can be applied to an arbitrary system of heterogeneous sensors with an arbitrary type of fault model, while it provides a flexible tradeoff between accuracy and latency. The key idea is to formulate on-line testing as a set of instances of a non-linear function minimization and consequently apply nonparametric statistical methods to identify the sensors that have the highest probability of being faulty. The optimization is conducted using the Powell nonlinear function minimization method. The effectiveness of the approach is evaluated in the presence of random noise using a system of light sensors.",2003,0, 4502,Fault area network for electrical power transformers-a novel tool for on-line monitoring of large power transformers,"On-line monitoring of transformers for the assessment of the health condition of costly capital equipment like an electrical power transformer is a topic of heated discussions at several international conferences. Many times, the basic discussion pertains to the transformer being on-line or the device monitoring the transformer being on-line. Both these considerations are valid; however, the availability of the on-line monitoring system to monitor the health of a transformer which is on-line is of greater importance to the operator as well as the utility. On-line monitoring systems configured as a fault area network can ease the exercise of monitoring several transformers located in a power substation. The paper describes the methodology developed for this system, which has been successfully used in real-time.",2002,0, 4503,Understanding Expressions of Unwanted Behaviors in Open Bug Reporting,"Open bug reporting allows end-users to express a vast array of unwanted software behaviors. However, users' expectations often clash with developers' implementation intents. We created a classification of seven common expectation violations cited by end-users in bug report descriptions and applied it to 1,000 bug reports from the Mozilla project. Our results show that users largely described bugs as violations of their own personal expectations, of specifications, or of the user community's expectations. We found a correlation between a reporter's expression of which expectation was being violated and whether or not the bug would eventually be fixed. Specifically, when bugs were expressed as violations of community expectations rather than personal expectations, they had a better chance of being fixed.",2010,0, 4504,Detect Related Bugs from Source Code Using Bug Information,"Open source projects often maintain open bug repositories during development and maintenance, and the reporters often point out directly or implicitly the reasons why bugs occur when they submit them. The comments about a bug are very valuable for developers to locate and fix the bug. Meanwhile, it is very common in large software for programmers to override or overload some methods according to the same logic. If one method causes a bug, it is obvious that other overridden or overloaded methods may cause related or similar bugs. In this paper, we propose and implement a tool, Rebug-Detector, which detects related bugs using bug information and code features.
Firstly, it extracts bug features from bug information in bug repositories; secondly, it locates bug methods from source code, and then extracts code features of bug methods; thirdly, it calculates similarities between each overridden or overloaded method and bug methods; lastly, it determines which method may cause potential related or similar bugs. We evaluate Rebug-Detector on an open source project: Apache Lucene-Java. Our tool detects a total of 61 related bugs, including 21 real bugs and 10 suspected bugs, and it costs us about 15.5 minutes. The results show that bug features and code features extracted by our tool are useful to find real bugs in existing projects.",2010,0, 4505,Non-linear image registration for correction of motion artifacts during optical imaging of human hearts,"Optical imaging of cardiac electrical activity can be used to elucidate patho-physiological mechanisms of cardiac arrhythmias. However, cardiac motion during optical imaging causes significant error in electrophysiological measurements such as action potential duration. In particular, cardiac tissue in fibrillation introduces highly non-linear imaging artifacts. We present a novel approach that uses non-linear image registration to correct for in-plane cardiac motion, particularly of non-linear origin found during cardiac arrhythmias. The algorithm is performed entirely post-acquisition and does not require a complicated optical setup. It is computationally fast, and available as open source. The algorithm was tested with images acquired from five excised dilated myopathic human hearts and the results show that the image registration method significantly reduces both non-linear and linear motion-related artifacts in both sinus rhythm and ventricular fibrillation. This algorithm corrects for non-linear imaging artifacts caused by cardiac motion that are impossible to correct using linear registration methods.",2008,0, 4506,A new approach for identifying fiber fault and detecting failure location,"The optical time domain reflectometer (OTDR) is used to measure fiber attenuation and loss as well as locate connection, splice, crack, bend, and breakdown along an optical fiber. This paper proposed and demonstrated a MATLAB-based graphical user interface (GUI) named centralized failure detection system (CFDS) that is able to measure the optical signal level, attenuation and loss and also locate the break point in a faulty fiber for multiple fibers at a time. CFDS is interfaced with the OTDR to accumulate every measurement result to be displayed on a single computer screen for further analysis. CFDS will identify and present the parameters of the optical fiber such as the fiber's status, either in working or non-working condition (breakdown or failure occurring in a faulty fiber), the magnitude of decrease as well as the location, failure location and other details as shown on the OTDR's screen.",2008,0, 4507,Performance of RSCC-QPSK-OFDM system with phase error and perfect compensation in Rayleigh fading channel,"Orthogonal frequency division multiplexing (OFDM) combines the advantage of high achievable rates and relatively easy implementation. In this paper, the combination of the recursive systematic convolutional code (RSCC) and the OFDM with pilot-aided data in a Rayleigh fading channel with compensation of the phase error and perfect compensation has been suggested. The system is called RSCC-QPSK-OFDM.
The simulation of RSCC-QPSK-OFDM with phase error and perfect compensation for both hard and soft decision of the convolutional decoder is given, and the effect of code rate and constraint length in the proposed system has been studied. BER performance of the rate 1/2, soft decision, RSCC-QPSK-OFDM system with phase error compensation has a gain factor of 9 dB at 2·10^-4 compared with the un-coded system. BER performance of the proposed system with phase error and amplitude compensation has a gain factor of 10 dB at 10^-4.",2009,0, 4508,Error Negativity Does Not Reflect Conflict: A Reappraisal of Conflict Monitoring and Anterior Cingulate Cortex Activity,"Our ability to detect and correct errors is essential for our adaptive behavior. The conflict-loop theory states that the anterior cingulate cortex (ACC) plays a key role in detecting the need to increase control through conflict monitoring. Such monitoring is assumed to manifest itself in an electroencephalographic (EEG) component, the error negativity (Ne or error-related negativity [ERN]). We have directly tested the hypothesis that the ACC monitors conflict through simulation and experimental studies. Both the simulated and EEG traces were sorted, on a trial-by-trial basis, as a function of the degree of conflict, measured as the temporal overlap between incorrect and correct response activations. The simulations clearly show that conflict increases as temporal overlap between response activation increases, whereas the experimental results demonstrate that the amplitude of the Ne decreases as temporal overlap increases, suggesting that the ACC does not monitor conflict. At a functional level, the results show that the duration of the Ne depends on the time needed to correct (partial) errors, revealing an on-line modulation of control on a very short time scale.",2008,0, 4509,The effect of testing on reliability of fault-tolerant software,"Previous models have investigated the impact upon diversity - and hence upon the reliability of fault-tolerant software built from 'diverse' versions - of the variation in 'difficulty' of demands over the demand space. These models are essentially static, taking a single snapshot view of the system. In this paper, we consider a generalisation in which the individual versions are allowed to evolve - and their reliability to grow - through debugging. In particular, we examine the trade-off that occurs in testing between, on the one hand, the increasing reliability of individual versions, and on the other hand the possible diminution of diversity.",2004,0, 4510,Effects of clipping on the error performance of OFDM in frequency selective fading channels,"Previous studies on the effect of the clipping noise on the error performance of orthogonal frequency-division multiplexing (OFDM) systems in frequency selective fading channels provide pessimistic results. They do not consider the effect of channel fading on the clipping noise. The clipping noise is added at the transmitter and hence fades with the signal. Here, the authors show that the ""bad"" subcarriers that dominate the error performance of the OFDM system are least affected by the clipping noise and, as a result, the degradation in the error performance of the OFDM system in fading channels is very small.",2004,0, 4511,Fault-Tolerant Scheduling of Independent Tasks in Computational Grid,"The primary-backup approach is a common approach used for fault tolerance wherein each task has a primary copy and a backup copy on two different processors.
The backup copy can overlap with other backup copies on the same processor, as long as their corresponding primary copies are scheduled on different processors. In this paper, we consider the problem of fault-tolerant scheduling of independent tasks using the primary-backup approach with backup overlapping in a computational grid. A fault-tolerant scheduling algorithm is developed which minimizes the replication cost for the backup copy by taking backup overlapping into account in the cost function. A centralized scheme and a distributed scheme are developed for the proposed algorithm and their performance is studied through simulation experiments",2006,0, 4512,Diagnose Multiple Stuck-at Scan Chain Faults,"Prior effect-cause based chain diagnosis algorithms suffer from accuracy and performance problems when multiple stuck-at faults exist on the same scan chain. In this paper, we propose new chain diagnosis algorithms based on dominant fault pair to enhance diagnosis accuracy and efficiency. Several heuristic techniques are proposed, which include (1) double candidate range calculation, (2) dynamic learning and (3) two-dimensional space linear search. The experimental results illustrate the effectiveness and efficiency of the proposed chain diagnosis algorithms.",2008,0, 4513,Proactive fault handling for system availability enhancement,"Proactive fault handling combines prevention and repair actions with failure prediction techniques. We extend the standard availability formula by five key measures: (1) precision and (2) recall assess failure prediction while failure handling is gauged by (3) prevention probability, (4) repair time improvement, and (5) risk of introducing additional failures. We give a short survey of actions that are suited to be combined with failure prediction and provide a procedure to estimate the five key measures. Altogether, this allows us to quantify the impact of proactive fault handling on system availability and may provide valuable input for system design.",2005,0, 4514,Teleportation-Based Motion Planner for Design Error Analysis,"Probabilistic path planning techniques have proven to be vital for finding and validating solutions for difficult industrial assembly tasks. Nevertheless, the failure of a path planner to find a solution to a task does not suggest how to correct the error. We suggest a methodology to identify possible bottlenecks and present an algorithm to analyze the extent to which the design must be modified in order for the task to complete successfully. We validate our algorithm on two industrial problems involving design errors, and explain how to interpret the results in order to improve the design.",2009,0, 4515,A new probing scheme for fault detection and identification,"Probing technology has been used as a fault detection and identification method in computer networks and successful applications have been reported. One of the most appealing features of probing-based schemes is that it is an active approach. A set of probes can be sent on a periodic basis. If a network failure is detected, the outcomes of these probes are further analyzed to determine the root cause of the problem. However, the availability of a large set of such probes may in fact place a huge burden on management systems in terms of extra management traffic and storage space. Hence, the need of minimizing such a probing set has become highly desirable.
In this work, we propose a preplanned probe selection scheme, in which a small set of probes is chosen such that it maintains the diagnostic power of the original set. The new approach is based on the constraint satisfaction problem paradigm and its powerful search techniques are exploited. The efficiency of the new algorithm has been demonstrated by the results reported.",2009,0, 4516,The study of fault tolerant system design using complete evolution hardware,"Process engineering, process design and simulation, process supervision, control and estimation, process fault detection and diagnosis rely on the effective processing of unpredictable and imprecise information. A majority of applications require the cooperation of two or more independently designed, separately located, but mutually affecting subsystems. In addition to good behavior of each of the subsystems, an effective coordination is very important to achieve the desired overall performance. Such a co-ordination can permit the use of commercially designed subsystems to perform more sophisticated tasks than at present and improve the operational reliability. However, such a co-ordination is very difficult to attain mainly due to the lack of precise system models and/or dynamic parameters. In such situations, the evolvable hardware (EHW) techniques, which can achieve the sophisticated level of information processing the brain is capable of, can excel. In this paper, a new multiple-sensor coordinator based sensor validation scheme combining the techniques of evolvable hardware and neural networks is presented. The idea of this work is to develop a system that is resistant or tolerant to sensor failures (fault tolerance) by utilizing multiple sensor inputs connected to a programmable VLSI chip. The proposed system can be viewed as a process modeling formalism and, given the appropriate network topology, is capable of characterizing nonlinear functional relationships. The structure of the resulting evolvable hardware based process model can be considered generic, in the sense that little prior process knowledge is required. The knowledge about the plant dynamics and mapping characteristics is implicitly stored within the network. The proposed system helps in extending the range of operation of the conventional control systems with respect to sensor validation at no extra (hardware) costs. The proposed design algorithms focus on using the characteristics that evolved systems present, such as adaptation, auto-regulation and learning. The proposed sensor was tested for its effectiveness by introducing different sensor failures such as: sensor fails as open circuit, sensor fails as short circuit, multiple sensor failure etc. on a real-time plant, and in each case the performance index was computed and found to be acceptable.",2005,0, 4517,FEM simulations in designing saturated iron core superconducting fault current limiters,"Proper design of the iron cores is crucial for a saturated iron core superconducting fault current limiter. An optimized design requires guaranteeing sufficient magnetic saturation on the iron cores holding the AC coils, ensuring the efficiency of current limiting, minimizing the use of materials, as well as satisfying the safety and stability obligations. One of the key tasks in the design is the series of electromagnetic calculations, including the complex three-dimensional electromagnetic field calculation.
It is extremely difficult to calculate the magnetic field distribution through purely analytical methods because of the nonlinearity of magnetization for a ferromagnetic material. We used the Finite Element Method (FEM) to solve the field distribution problem, and adopted Ansoft Maxwell for the computer simulation. Simulation results revealed that the iron cores with varying cross-sections of different windows were most effective for obtaining an ideal magnetizing force distribution. Optimized design of the iron cores could be based on this simulation method. Experiments on a 380 V/30 A prototype were carried out to verify the designs.",2009,0, 4518,A guide to digital fault recording event analysis,"Proper interpretation of fault and disturbance data is critical for the reliability and continuous operation of the power system. A correct interpretation gives you valuable insight into the conditions and performance of various power system protective equipment. Analyzing records is not an intuitive process and requires system protection knowledge and experience. Having an understanding of the fundamental guidelines for the event analysis process is imperative for new power engineers to properly evaluate faults. As senior power engineers retire, knowledge of how to decipher fault records could be lost with them. This paper addresses aspects of power system fault analysis and provides the new event analyst with a basic foundation of the requirements and steps to analyze and interpret fault disturbances.",2010,0, 4519,Automated Fault Tree Generation: Bridging Reliability with Text Mining,"Proper preventive maintenance of complex systems, such as those used for power generation and medical diagnosis, is dependent on the availability of their up-to-date reliability models. These models are constructed from historical maintenance and fault information of the equipment. Due to the complex nature of these machines, constructing these models involves significant manual effort, which limits the widespread use of reliability-centric maintenance schemes. In this paper, we describe a process for automating the construction of fault trees, a class of non-state space reliability models, by analyzing maintenance data available as free-form text. It uses a combination of linguistic analysis and domain knowledge to identify the nature of the failure from short plain text descriptions of equipment faults. This information is used to automatically enrich and evolve existing fault trees for better reliability estimation",2007,0, 4520,Hierarchical application aware error detection and recovery,"Proposed is a four-tiered approach to develop and integrate detection and recovery support at different levels of the system hierarchy. The proposed mechanisms exploit support provided by (i) embedded hardware, (ii) operating system, (iii) compiler, and (iv) application.",2004,0, 4521,Hierarchical error detection in a software implemented fault tolerance (SIFT) environment,"Proposes a hierarchical error detection framework for a software-implemented fault tolerance (SIFT) layer of a distributed system. A four-level error detection hierarchy is proposed in the context of Chameleon, a software environment for providing adaptive fault tolerance in an environment of commercial off-the-shelf (COTS) system components and software. The design and implementation of a software-based distributed signature monitoring scheme, which is central to the proposed four-level hierarchy, is described.
Both intra-level and inter-level optimizations that minimize the overhead of detection and are capable of adapting to runtime requirements are proposed. The paper presents results from a prototype implementation of two levels of the error detection hierarchy and results of a detailed simulation of the overall environment. The results indicate a substantial increase in availability due to the detection framework and help in understanding the tradeoffs between overhead and coverage for different combinations of techniques",2000,0, 4522,Testing approach within FPGA-based fault tolerant systems,"Proposes a test strategy for FPGAs to be applied within FPGA-based fault-tolerant systems. We propose to place some configurable logic blocks (CLBs) under test and to implement the rest of the CLBs with the normal user data. In the target fault-tolerant systems, there are two phases (the functional phase and the test phase). In the functional phase, the system achieves its normal functionality, while in the test phase, the FPGA is tested. In this phase, the configuration data of the CLBs under test are shifted on-chip in parallel to other CLBs for achieving the test in these CLBs. All the CLBs are tested in a single test phase. The shifting process control, test application and test observation are achieved by the logic managing the fault tolerance (from outside the chip). The system returns to its normal phase after all the CLBs have been scanned by the test. The application of this approach reduces the fault tolerance cost (hardware, software, time, etc.). The user is then able to periodically test the chip using only the data inside the chip and without destroying the original configuration data. No particular hardware is required for saving the test data on-board. Additionally, no particular software treatment is required for the test. The testing time is reduced enormously. Unfortunately, as a consequence of implementing two types of data on-chip, a 15% decrease in the chip functionality and a 2.5% delay overhead are noticed in the case of structures similar to a 20×20 Xilinx FPGA",2000,0, 4523,Exploiting FPGA-based techniques for fault injection campaigns on VLSI circuits,"Proposes an FPGA-based system to speed up fault injection campaigns for the evaluation of the fault-tolerant capabilities of VLSI circuits. An environment is described, relying on FPGA-based emulation of the circuit. Suitable techniques are described, allowing one to emulate the effects of faults and to observe faulty behavior. The proposed approach allows the combination of the speed of hardware-based techniques and the flexibility of simulation-based techniques. Experimental results are provided showing that significant speed-up figures with respect to state-of-the-art simulation-based techniques can be achieved",2001,0, 4524,WMD: Distortion correction of high power amplifiers using digital signal processing,Provides an abstract of the workshop presentation and a brief professional biography of the presenter. The complete presentation was not made available for publication as part of the conference proceedings.,2004,0, 4525,Scalable fault-tolerant network design for Ethernet-based wide area process control network systems,"Providing fault-tolerant Ethernet capability on large control network systems has become a very important issue.
In this paper, we present two efficient scalable fault-tolerant network architecture designs: the ""FTE protocol-independent multi-domain approach"" and the ""layer 2 switch/router-based multi-domain approach"", which can efficiently integrate the layer-2-based FTE protocol and the existing standard router fault-tolerant protocols such as Virtual Router Redundancy Protocol (VRRP), Hot Standby Router Protocol (HSRP), and Open Shortest Path First (OSPF). The network designs take into consideration minimizing the control overheads in supporting large network systems, meeting the detection and recovery time requirements of the application regardless of the network size, using COTS redundancy protocols without overall network performance degradation, and justifying the solution cost. The feasibility and performance of our designs are demonstrated through experiment and analysis.",2001,0, 4526,A particle swarm optimization approach for automatic diagnosis of PMSM stator fault,"Permanent magnet synchronous motors (PMSM) are frequently used in high-performance applications. Accurate diagnosis of small faults can significantly improve system availability and reliability. This paper proposes a new scheme for the automatic diagnosis of interturn short circuit faults in PMSM stator windings. Both the fault location and fault severity are identified using a particle swarm optimization (PSO) algorithm. The performance of the motor under the fault conditions is simulated through lumped-parameter models. Waveforms of the machine phase currents are monitored, based on which a fitness function is formulated and PSO is used to identify the fault location and fault size. The proposed method is simulated in the MATLAB environment. Simulation results provide preliminary verification of the diagnosis scheme",2006,0, 4527,Perspective Correction Method for Chinese Document Images,"Perspective distortion often appears in document images which were taken by a digital camera. This phenomenon will lead to recognition errors or failures. Therefore, a new correction algorithm is proposed in this paper for perspective distortion images of Chinese documents. The algorithm makes use of the Chinese document image's horizontal characteristics of text lines and Chinese characters' features of vertical strokes, to find distortion information and then rectify the perspective image. This method does not require information on the image's edge and paragraph's format. It has a good effect against incomplete perspective images and irregular paragraph formats. Experiments show that this method is fast, accurate and highly robust when used to correct perspective distortion in document images.",2008,0, 4528,Towards fault tolerance pervasive computing,"Pervasive computing exists in the user's environment; the technology is sustainable if it is invisible to the user and does not intrude on the user's consciousness. This requires that the functioning of the multitude of devices in the environment be oblivious to the user. Therefore, the system has to be resilient to various kinds of faults and should be able to function despite faults. In addition, pervasive computing provides a platform for context-aware computing that enables automatic configuration of a pervasive system based on the environment context.
The aim of this article is to highlight the various challenges and issues that confront fault-tolerant pervasive computing, discuss their implications, present some solutions to these problems, and describe how some of these solutions are implemented in our system.",2005,0, 4529,Evaluation of a Monte Carlo scatter correction in clinical 3D PET,"Phantom and patient data were used to compare the performance of a one-iteration Monte Carlo scatter correction (MC-SC-1i) for 3D PET, a vendor-supplied one-iteration single scatter model-based correction (SSS-1i) for 3D PET, unscatter-corrected 3D PET (No-SC), an SSS-1i followed by Monte Carlo scatter correction as a second iteration (MC-SSS) for 3D PET, and a convolution-subtraction scatter correction for 2D PET in terms of quantitative accuracy and lesion detectability. ROI analysis showed 2D PET images were more accurate than 3D, particularly for large phantoms, and MC-SSS corrected 3D PET images were more accurate than SSS-1i corrected 3D PET images for this data set. 2D and 3D PET images were reconstructed from 59 patient data sets. Bias of 3D PET images with respect to 2D images was determined using Corresponding Intensity Variance. 3D PET uncorrected images overestimated activity by 50% (smallest patients) to 150% (largest patients). The average absolute bias of SSS-1i corrected images (16%) was twice that of MC-SSS (8%) and more dependent on patient size. Lesion detection sensitivity in these patient images was evaluated using a Channelized Hotelling Observer. Scatter corrected 3D PET images performed 10% better than uncorrected 3D PET images for smaller patients. Slightly better lesion sensitivity was seen for large patients in images reconstructed using SSS-1i (CHO-SNR=2.23±0.29) compared to MC-SSS (2.08±0.27) and uncorrected images (2.02±0.23).",2003,0, 4530,Secure and Robust Error Correction for Physical Unclonable Functions,"Physical unclonable functions (PUFs) offer a promising mechanism that can be used in many security, protection, and digital rights management applications. One key issue is the stability of PUF responses that is often addressed by error correction codes. The authors propose a new syndrome coding scheme that limits the amount of leaked information by the PUF error-correcting codes.",2010,0, 4531,Optimization of Warpage Defect in Injection Moulding Process Using ABS Material,"Plastic injection moulding process produces various defects such as warpage, sink marks, weld lines and shrinkage. The purpose of the present paper is to analyze the warpage defect on Acrylonitrile Butadiene Styrene (ABS) for a selected part using FEA simulation. The approach was based on Taguchi's Method and Analysis of Variance (ANOVA) to optimize the processing parameters, namely packing pressure, mould temperature, melt temperature and packing time, for an effective process. It was found that the optimum parameters for ABS material are packing pressure at 375 MPa, mould temperature at 40°C, melt temperature at 200°C and packing time at 1 s. Melt temperature was found to be the most significant factor followed by packing time and mould temperature. Meanwhile, packing pressure was an insignificant factor contributing to the warpage in the present study.",2009,0, 4532,Analytical fault detection and diagnosis (FDD) for pneumatic systems in robotics and manufacturing automation,"Pneumatic systems are often found on manufacturing floors for automation and robotic systems.
Early and intelligent fault detection and diagnosis (FDD) of such systems can prevent failures of devices that cause shutdowns and loss of precious production time and profits. In this paper, we introduce analytical FDD for pneumatic systems. The diagnosis system presented in this paper focuses on the signal-based approach which employs multi-resolution wavelet decomposition of various sensor signals such as pressure, flow rate, etc., to determine leak configuration. A pattern recognition technique and analytical vectorized maps are developed to diagnose an unknown leakage based on the established FDD information using affine mapping. Experimental studies and analysis are presented to illustrate the FDD system.",2005,0, 4533,Automated defect to fault translation for ASIC standard cell libraries,"Popular generic fault models, which exhibit limited realism for different IC technologies, have been widely misused due to their simplicity and cost-effective implementation. This paper introduces a system for deriving accurate, technology specific fault models that are based on analog defect simulation. The technique is formally defined and a systematic approach is developed. It is supported by a new software tool that provides a push-button solution for the previously tedious task of obtaining accurate ASIC cell defect to fault mappings. Furthermore, upon completion of the cell defect analysis, the tool automatically generates VITAL compliant, defect-injectable, VHDL cell models",2001,0, 4534,Supporting server-level fault tolerance in concurrent-push-based parallel video servers,"Parallel video servers have been proposed for building large-scale video-on-demand (VoD) systems from multiple low-cost servers. However, when adding more servers to scale up the capacity, system-level reliability will decrease as failure of any one of the servers will cripple the entire system. To tackle this reliability problem, this paper proposes and analyzes architectures to support server-level fault tolerance in parallel video servers. Based on the concurrent push architecture proposed earlier, this paper tackles three problems pertaining to fault tolerance, namely redundancy management, redundant data transmission protocol, and real-time fault masking. First, redundant data based on erasure codes are introduced to video data stored in the servers, which are then delivered to the clients to support fault tolerance. Despite the success of distributed redundancy striping schemes such as RAID-5 in disk array implementations, we discover that similar schemes extended to the server context do not scale well. Instead, we propose a redundant server scheme that is both scalable and has a lower total server buffer requirement. Second, two protocols are proposed to manage the transmission of redundant data to the clients, namely forward erasure correction which always transmits redundant data, and on-demand correction which transmits redundant data only after a server failure is detected. Third, to enable ongoing video sessions to maintain nonstop video playback during failure, we propose using fault masking at the client to recompute lost video data in real-time.
In particular, we derive the amount of client buffer required so that nonstop, continuous video playback can be maintained despite server failures",2001,0, 4535,A fault tolerant MPI-IO implementation using the Expand parallel file system,"Parallelism in file systems is obtained by using several independent server nodes supporting one or more secondary storage devices. This approach increases the performance and scalability of the system, but a fault in a single node can stop the whole system. To avoid this problem, data must be stored using some kind of redundant technique, so any data stored in a faulty element can be recovered. Fault tolerance can be provided in I/O systems using replication or RAID-based schemes. However, most of the current systems apply the same technique for all files in the system. This paper describes the fault tolerance support provided by Expand, a parallel file system based on standard servers. Expand allows one to define different fault-tolerant mechanisms at the file level. The evaluation compares the performance of Expand with different configurations with PVFS using the FLASH-I/O benchmark.",2005,0, 4536,Theoretical and numerical study of MLEM and OSEM reconstruction algorithms for motion correction in emission tomography,"Patient body-motion and respiratory-motion impact the image quality of cardiac PET or SPECT perfusion images. Several algorithms exist in the literature to correct for motion within the iterative maximum-likelihood reconstruction framework. In this work, three algorithms are derived using Poisson statistics to correct for patient motion. The first one is a motion compensated MLEM algorithm (MC-MLEM). The next two algorithms, called MGEM-1 and MGEM-2 (short for Motion Gated EM, 1 and 2), use the motion states as subsets, in two different ways. Experiments were performed with NCAT phantoms with exactly known motion as the source and attenuation distributions. The SIMIND Monte Carlo simulation software was used to create SPECT projection images of the NCAT phantoms. The projection images were then modified to have Poisson noise levels equivalent to that of clinical acquisition. We investigated the application of these algorithms to correction of (1) a large body-motion of 2 cm in Superior-Inferior (SI) and Anterior-Posterior (AP) directions each and (2) respiratory motion of 2 cm in SI and 0.6 cm in AP. We obtained the bias with respect to the NCAT phantom activity for noiseless reconstructions as well as the bias-variance for noisy reconstructions. The MGEM-1 advanced along the bias-variance curve faster than the MC-MLEM with iterations. The MGEM-1 also lowered the noiseless bias (with respect to NCAT truth) faster with iterations, compared to the MC-MLEM algorithms, as expected with subset algorithms. For the body motion correction with two motion states, after the 9th iteration the bias was close to that of MC-MLEM at iteration 17, reducing the number of iterations by a factor of 1.89. For the respiratory motion correction with 9 motion states, based on the noiseless bias, the iteration reduction factor was approximately 7. For the MGEM-2, however, bias-plot or the bias-variance-plot saturates with iteration because of successive interpolation error.",2008,0,4537 4537,Theoretical and Numerical Study of MLEM and OSEM Reconstruction Algorithms for Motion Correction in Emission Tomography,"Patient body-motion and respiratory-motion impact the image quality of cardiac SPECT and PET perfusion images.
Several algorithms exist in the literature to correct for motion within the iterative maximum-likelihood reconstruction framework. In this work, three algorithms are derived starting with Poisson statistics to correct for patient motion. The first one is a motion compensated MLEM algorithm (MC-MLEM). The next two algorithms, called MGEM-1 and MGEM-2 (short for Motion Gated OSEM, 1 and 2), use the motion states as subsets, in two different ways. Experiments were performed with NCAT phantoms (with exactly known motion) as the source and attenuation distributions. Experiments were also performed on an anthropomorphic phantom and a patient study. The SIMIND Monte Carlo simulation software was used to create SPECT projection images of the NCAT phantoms. The projection images were then modified to have Poisson noise levels equivalent to that of clinical acquisition. We investigated the application of these algorithms to correction of (1) a large body-motion of 2 cm in Superior-Inferior (SI) and Anterior-Posterior (AP) directions each and (2) respiratory motion of 2 cm in SI and 0.6 cm in AP. We determined the bias with respect to the NCAT phantom activity for noiseless reconstructions as well as the bias-variance for noisy reconstructions. The MGEM-1 advanced along the bias-variance curve faster than the MC-MLEM with iterations. The MGEM-1 also lowered the noiseless bias (with respect to NCAT truth) faster with iterations, compared to the MC-MLEM algorithms, as expected with subset algorithms. For the body motion correction with two motion states, after the 9th iteration the bias was close to that of MC-MLEM at iteration 17, reducing the number of iterations by a factor of 1.89. For the respiratory motion correction with 9 motion states, based on the noiseless bias, the iteration reduction factor was approximately 7. For the MGEM-2, however, bias-plot or the bias-variance-plot saturated with iteration because of successive interpolation error. SPECT data was acquired simulating respiratory motion of 2 cm amplitude with an anthropomorphic phantom. A patient study with body motion occurring during a second rest period was also acquired. The motion correction was applied to these acquisitions with the anthropomorphic phantom and the patient study, showing marked improvements of image quality with the estimated motion correction.",2009,0, 4538,Scope error detection and handling concerning software estimation models,"Over the last 25+ years, the software community has been searching for the best models for estimating variables of interest (e.g., cost, defects, and fault proneness). However, little research has been done to improve the reliability of the estimates. Over the last decades, scope error and error analysis have been substantially ignored by the community. This work attempts to fill this gap in the research and enhance a common understanding within the community. Results provided in this study can eventually be used to support human judgment-based techniques and be an addition to the portfolio. The novelty of this work is that we provide a way of detecting and handling the scope error arising from estimation models. The answer to whether or not scope error will occur is a pre-condition to safe use of an estimation model. We also provide a handy procedure for dealing with outliers as to whether or not to include them in the training set for building a new version of the estimation model.
The majority of the work is empirically based, applying computational intelligence techniques to some COCOMO model variations with respect to a publicly available cost estimation data set in the PROMISE repository.",2009,0, 4539,Full contrast transfer function correction in 3D cryo-EM reconstruction,"Over the past years, electron cryo-microscopy (cryo-EM) has established itself as an important tool in studying the three-dimensional structure of biological molecules up to the resolution of 6-9 Å. However, as we pursue even higher resolution (i.e., 3-4 Å), the depth-of-field problem inherent in the contrast transfer function emerges as a limiting factor. This problem has been previously addressed in the research community (Jensen, G.J., 2000; DeRosier, D.J., 2000; Zhou, Z.H. and Chiu, W., 2003; Cohen, H.A. et al., 1984). We develop a full theoretical solution to this problem. We show that the projected image from the electron microscope corresponds to neither a slice, nor an Ewald sphere, in the Fourier space, but a pair of quadratic surfaces in that space. The general solutions to this problem for both single and double defocus exposures are developed. Simulations show the correctness of the theory.",2004,0, 4540,Fault-tolerant data delivery for multicast overlay networks,"Overlay networks represent an emerging technology for rapid deployment of novel network services and applications. However, since public overlay networks are built out of loosely coupled end-hosts, individual nodes are less trustworthy than Internet routers in carrying out the data forwarding function. Here we describe a set of mechanisms designed to detect and repair errors in the data stream. Utilizing the highly redundant connectivity in overlay networks, our design splits each data stream into multiple sub-streams which are delivered over disjoint paths. Each sub-stream carries additional information that enables receivers to detect damaged or lost packets. Furthermore, each node can verify the validity of data by periodically exchanging Bloom filters, the digests of recently received packets, with other nodes in the overlay. We have evaluated our design through both simulations and experiments over a network testbed. The results show that most nodes can effectively detect corrupted data streams even in the presence of multiple tampering nodes.",2004,0, 4541,Methods of quantum error correction,"Owing to the high sensitivity of quantum mechanical systems to even small perturbations, means of error protection are essential for any computation or communication process based on quantum mechanics. After a short introduction to quantum registers and operations as well as quantum channels, different approaches to the problem of protecting quantum information are presented",2000,0, 4542,Path diversity with forward error correction (PDF) system for packet switched networks,"Packet loss and end-to-end delay limit delay sensitive applications over the best effort packet switched networks such as the Internet. In our previous work, we have shown that substantial reduction in packet loss can be achieved by sending packets at appropriate sending rates to a receiver from multiple senders, using disjoint paths, and by protecting packets with forward error correction. In this paper, we propose a path diversity with forward error correction (PDF) system for delay sensitive applications over the Internet, in which disjoint paths from a sender to a receiver are created using a collection of relay nodes.
We propose a scalable, heuristic scheme for selecting a redundant path between a sender and a receiver, and show that substantial reduction in packet loss can be achieved by dividing packets between the default path and the redundant path. NS simulations are used to verify the effectiveness of the PDF system.",2003,0, 4543,Fault Tolerance in Multiprocessor Systems Via Application Cloning,"Record and replay (RR) is a software-based state replication solution designed to support recording and subsequent replay of the execution of unmodified applications running on multiprocessor systems for fault-tolerance. Multiple instances of the application are simultaneously executed in separate virtualized environments called containers. Containers facilitate state replication between the application instances by resolving the resource conflicts and providing a uniform view of the underlying operating system across all clones. The virtualization layer that creates the container abstraction actively monitors the primary instance of the application and synchronizes its state with that of the clones by transferring the necessary information to enforce identical state among them. In particular, we address the replication of relevant operating system state, such as network state to preserve network connections across failures, and the state that results from nondeterministic interleaved accesses to shared memory in SMP systems. We have implemented RR's state replication mechanisms in the Linux operating system by making novel use of existing features on the Intel and PowerPC architectures.",2007,0, 4544,Influence of current measurement errors on parallel-connected UPS inverters,"Redundancy is a common approach for ensuring reliability in applications using UPS inverters. However, the overall system's reliability and efficiency are highly dependent on the power measurement precision, which if not properly done can lead to system failure. Current transformers, which are widely used to measure the output current of UPS inverters, introduce a phase shift in the output signal that negatively influences the parallelism circuits, leading to a waste in the generated power and therefore reducing the system's efficiency and reliability. This paper analyses the influence of current measurement problems that may occur when two or more uninterruptible power supplies are connected together. Current measurement errors are experimentally obtained with two UPS inverters connected in parallel using the drooping method and the results, presented at the end, suggest that the power measurement needs to be carefully designed.",2009,0, 4545,Scalable mean voter for fault-tolerant mixed-signal circuits,"Redundancy techniques, such as N-tuple modular redundancy, have been widely used to improve the reliability of digital circuits. Unfortunately, nothing substantial has been done for analog and mixed signal systems. In this paper, we propose a redundancy based fault-tolerant methodology to design highly reliable analog and digital circuits and systems. The key contribution of our work is an innovative mean voter. This mean voter is a low power, small area, very high bandwidth and linearly scalable voting circuit. Unlike other conventional voters which work with odd N in an NMR, the mean voter works for both odd and even N for analog units and hence reduces the area and power further. For the proof of concept, we designed two fault tolerant analog circuits, i.e., a low-pass anti-aliasing analog filter and a 4-bit flash ADC.
We also present a fault-tolerance mechanism in a 4-bit binary adder and an FPGA cell for demonstrating its advantage in digital applications. Experimental results are reported to verify the concepts and measure the system's reliability when a single upset transient may occur.",2010,0, 4546,Unequal error protection for ROI coded images over fading channels,"Region of interest (ROI) coding is a feature supported by the Joint Photographic Experts Group 2000 (JPEG 2000) image compression standard and allows particular regions of interest within an image to be compressed at a higher quality than the rest of the image. In this paper, unequal error protection (UEP) is proposed for ROI coded JPEG 2000 images as a technique for providing increased resilience against the effects of transmission errors over a wireless communications channel. The hierarchical nature of an ROI coded JPEG 2000 code-stream lends itself to the use of UEP whereby the important bits of the code-stream are protected with a strong code while the less important bits are protected with a weaker code. Simulation results obtained using symbol-by-symbol maximum a posteriori probability (MAP) decoding demonstrate that the use of UEP offers significant gains in terms of the peak signal to noise ratio (PSNR) and the percentage of readable files. Moreover, the use of ROI-based UEP leads to reduced computational complexity at the receiver.",2005,0, 4547,The Effectiveness of Regression Testing Techniques in Reducing the Occurrence of Residual Defects,"Regression testing is a necessary maintenance activity that can ensure high quality of the modified software system, and a great deal of research on regression testing has been performed. Most of the studies performed to date, however, have evaluated regression testing techniques under a limited context, such as a short-term assessment, which does not fully account for system evolution or industrial circumstances. One important issue associated with a system lifetime view that we have overlooked in past years is the effects of residual defects - defects that persist undetected - across several releases of a system. Depending on an organization's business goals and the type of system being built, residual defects might affect the level of success of the software products. In this paper, we conducted an empirical study to investigate whether regression testing techniques are effective in reducing the occurrence and persistence of residual defects across a system's lifetime, in particular, considering test case prioritization techniques. Our results show that heuristics can be effective in reducing both the occurrence of residual defects and their age. Our results also indicate that residual defects and their age have a strong impact on the cost-benefits of test case prioritization techniques.",2010,0, 4548,Error Report Driven Post-Mortem Analysis,"Regulatory agencies, such as the US FDA, and other third party reviewers of software have the task of comprehending a piece of software in the event of its failure. Program slicing is the preferred technique to analyze software failures. However, program slicing might yield extremely large slices and also demand that a user have intimate knowledge of the software (by specifying a slicing criterion). In this paper we propose solutions to ameliorate both of these problems.
The main hypothesis of the paper is that error reports can be used to generate slicing criteria, thus allowing a third-party reviewer to use slicing without any knowledge of the system being analyzed. The first contribution of the paper is a study of how error reports can be formalized and how execution sequences (called error traces) that satisfy an error report can be identified, which can then be sliced. A primary feature of this work is that incomplete error reports can be dealt with. The second contribution of the paper is a scheme for using abstract interpretation in the generation of error traces and (potential) slicing criteria from error reports. These ""abstract"" error traces can be sliced, much like in dynamic/conditional slicing, with respect to the criteria generated, allowing for inputs to be used to reduce the size of slices. Furthermore, being abstract in nature, they are shorter than exact execution sequences and also allow for easier comprehension. We finally present a case study involving a medical device that illustrates how our approach can aid with program comprehension.",2007,0, 4549,Dynamic Reliability Block Diagrams VS Dynamic Fault Trees,"Reliability block diagrams (RBD) and fault trees (FT) are the most widely used formalisms in system reliability modeling. They implement two different approaches: in a reliability block diagram, the system is represented by components connected according to their function or reliability relationships, while fault trees show which combinations of component failures will result in a system failure. Although RBD and FT are commonly used, they are limited in their modeling capacity to systems that have no sequential relationships among their component failures. They do not provide any elements or capabilities to model reliability interactions among components or subsystems, or to represent system reliability configuration changing (dynamics), such as: load-sharing, standby redundancy, interferences, dependencies, common cause failures, and so on. To overcome this lack, Dugan et al. developed the dynamic FT (DFT). DFT extend static FT to enable modeling of time-dependent failures by introducing new dynamic gates and elements. Following this way, recently we have extended the RBD into the dynamic RBD notation. Many similarities link the DFT and the DRBD formalisms, but, at the same time, one of the aims of DRBD is to extend the DFT capabilities in dynamic behavior modeling. In the paper the comparison between DFT and DRBD is studied in depth, defining a mapping of DFT elements into the DRBD domain, and investigating if and when it is possible to invert the translations from DRBD to DFT. These mapping rules are applied to an example drawn from the literature to show their effectiveness",2007,0, 4550,FoN: Fault-on-Neighbor aware routing algorithm for Networks-on-Chip,"Reliability has become a key issue of Networks-on-Chip (NoC) as the CMOS technology scales down to the nanoscale domain. This paper proposes a Fault-on-Neighbor (FoN) aware deflection routing algorithm for NoC which makes routing decisions based on the link status of neighbor switches within 2 hops to avoid faulty links and switches. Simulation results demonstrate that in the presence of faults, the saturated throughput of the FoN switch is 13% higher on average than a cost-based deflection switch for an 8×8 mesh. The average hop counts can be up to 1.7 less than the cost-based switch.
The FoN switch is also synthesized using 65nm TSMC technology and it can work at 500MHz with small area overhead.",2010,0, 4551,A Method for Modeling and Analyzing Fault-Tolerant Service Composition,"Reliability is a key issue of the service-oriented architecture (SOA) that is widely employed in distributed systems such as e-commerce and e-government. Redundancy based technologies are usually employed for building reliable service composition on top of unreliable Web services. This paper proposes a strategy for modeling and analyzing fault tolerant service composition. The strategy consists of service selection mechanism, service synchronization mechanism and task exception mechanism. Petri nets are used to construct different components of service composition. Once the model is constructed, theories of Petri nets help prove the consistency of processing states and reliability of the strategy. The corresponding enforcement method for constructing fault-tolerant service composition is proposed. Experiments are conducted to demonstrate the applicability and effectiveness of the fault tolerant strategy.",2009,0, 4552,Doppler estimation and correction for shallow underwater acoustic communications,"Reliable mobile underwater acoustic communication systems must compensate for strong, time-varying Doppler effects. Many Doppler correction techniques rely on a single bulk correction to compensate first-order effects. In many cases, residual higher-order effects must be tracked and corrected using other methods. The contributions of this paper are evaluations of (1) signal-to-noise ratio (SNR) performance from three Doppler estimation and correction methods and (2) communication performance of Doppler correction with static vs. adaptive equalizers. The evaluations use our publicly available shallow water experimental dataset, which consists of 360 packet transmission samples (each 0.5s long) from a five-channel receiver array.",2010,0, 4553,Efficient partitioning of unequal error protected MPEG video streams for multiple channel transmission,"Reliable transmission of video over wireless networks must address the limited bandwidth and the possibility of loss. When the bandwidth is insufficient on a single channel, the video can be partitioned over multiple channels with possibly unequal characteristics at the expense of more complex channel coding (error correction). This paper addresses the problem of efficiently partitioning forward error protected, pre-encoded video data for transmission over multiple channels. The assumption of pre-encoding precludes adjustment of source rates to the channels, since it is assumed that channel characteristics are not known until immediately prior to the start of transmission. The proposed partitioning exploits the structure of MPEG video, and frames in each group-of-picture are reordered based on their decoding dependence. To be spectrally efficient, the frames of different types are unequally error protected taking different channel reliabilities into account. A pruned tree search algorithm is implemented to efficiently solve the problem. Simulation results are presented.",2002,0, 4554,"Fault tolerance in computing, compressing, and transmitting FFT data","Remote-sensing applications often calculate the discrete Fourier transform of sampled data and then compress and encode it for transmission to a destination. However, all these operations are executed on computing resources potentially affected by failures. 
Methods are presented for integrating various fault detection capabilities throughout the data flow path so that the momentary failure of any subsystem will not allow contaminated data to go undetected. New techniques for protecting complete source coding schemes are exemplified by examining a lossy compression system that truncates fast Fourier transform (FFT) coefficients to zero, then compresses the data further by using lossless arithmetic coding. Novel methods protect arithmetic coding computations by internal algorithm checks. The arithmetic encoding and decoding operations and the transmission path are further protected by inserting sparse parity symbols dictated by a high-rate convolutional symbol-based code. This powerful approach introduces limited redundancy at the beginning of the system but performs detection at later stages. While the parity symbols degrade efficiency slightly, the overall compression gain is significant because of the run-length coding. Well-known fault tolerance measures for FFT algorithms are extended to detect errors in the lossy truncation operations, maintaining end-to-end protection. Simulations verify that all single subsystem errors are detected and the overhead costs are reasonable",2001,0, 4555,Transparent fault-tolerant Java virtual machine,"Replication is one of the prominent approaches for obtaining fault tolerance. Implementing replication on commodity hardware and in a transparent fashion, i.e., without changing the programming model, has many challenges. Deciding at what level to implement the replication has ramifications on development costs and portability of the programs. Other difficulties lie in the coordination of the copies in the face of non-determinism. We report on an implementation of transparent fault tolerance at the virtual machine level of Java. We describe the design of the system and present performance results that in certain cases are equivalent to those of non-replicated executions. We also discuss design decisions stemming from implementing replication at the virtual machine level, and the special considerations necessary in order to support symmetric multi-processors (SMP).",2003,0, 4556,Evaluation of fault-tolerant distributed Web systems,"Replication of information among multiple servers is necessary to service requests for Web applications such as Internet banking. A dispatcher in distributed Web systems distributes client requests among Web application servers, and multiple dispatchers are also needed for fault-tolerant Web services. In this paper, we describe issues related to building fault-tolerant distributed Web systems. We evaluate the performance of fault-tolerant distributed Web systems based on replication. Our evaluation is conducted on LVS (Linux Virtual Server) and the Apache Web server using the request generator, LoadCube. We show some performance measurements for the systems.",2005,0, 4557,Reproducing non-deterministic bugs with lightweight recording in production environments,"Reproducing non-deterministic bugs is challenging. Recording program execution in production environments and reproducing bugs is an effective way to re-enable cyclic debugging. Unfortunately, most current record-replay approaches introduce large perturbations to the environment and/or execution flow, in addition to a performance penalty and high storage overhead, which make them impractical to deploy in production environments.
This paper presents Snitchaser - a fully user-space record-replay tool which can faithfully reproduce bugs by replaying system calls which are recorded with negligible perturbation and recording overhead. This is achieved by 1) a novel, lightweight system call interception mechanism without patching the binary instructions to reduce the perturbation to execution flow; 2) a system call latch to preserve signal semantics; 3) periodic checkpointing to reduce the storage overhead. Snitchaser focuses on bugs caused by asynchronous events on heavily loaded, high throughput servers. Experimental results show that Snitchaser is capable of reproducing non-deterministic bugs efficiently at nearly no performance penalty. We also present two case studies on dealing with existing bugs in Lighttpd - popular software used in many large-scale systems.",2010,0, 4558,Error resilience technique for multi-view coding using redundant disparity vectors,"Research on error resilience in multi-view coding is currently receiving considerable interest. While there is a multitude of literature concerning error recovery in 2D video, due to the statistical difference in motion compensation among temporal frames and disparity compensation among viewpoints, such methods are inadequate to cater to the requirements of multiview video transmission. This paper addresses the above issue by transmission of redundant disparity vectors for error recovery purposes. The proposed system, which is implemented using the Joint Scalable Video Model (JSVM) codec and tested using a simulated Internet Protocol (IP) packet network environment, can be used along with a suitable error concealment scheme to provide robust multi-view video transmission. The experimental results suggest that the proposed algorithm experiences a slight degradation of quality in error free environments due to the inclusion of redundant data. However, it improves the reconstructed picture quality significantly in error prone environments, specifically for Packet Loss Rates (PLRs) greater than 7%.",2010,0, 4559,Fault-tolerance scheme for an RNS MAC: performance and cost analysis,"Residue number systems (RNS) are especially useful in applications in which fault-tolerance is a requirement. Accordingly, this paper discusses a novel and elegant fault-tolerance scheme incorporated into a multiply-accumulate (MAC) unit based on RNS. The fault-tolerance is achieved by using inexpensive forward conversion procedures. The cost and performance are analyzed with respect to other designs, and the analysis indicates a superior cost:performance measure for our design",2001,0, 4560,Resistive superconducting fault current limiter simulation and design,"Resistive superconducting fault current limiters (SFCL) are characterized by very fast transition to the resistive state, automatic recovery after a current fault and simple structure. SFCL simulations include thermal and electrical nonlinear model analysis. In this paper, a few mathematical models for the effective analysis of thermo-electrical processes in a resistive superconducting fault current limiter are presented. Based on the developed models, an effective algorithm for the optimization model was established.",2008,0, 4561,Sensitivity Analysis for Fault-analysis and Tolerance in RF Front-end Circuitry,"RFIC reliability is fast becoming a major bottleneck in the yield and performance of modern IC systems, as process complexity and levels of integration continually increase.
Due to the high frequencies involved, testing these chips is both complicated and expensive. While the areas of automated testing and self-test have received significant attention over the past few years, no formal framework of fault-models or sensitivity-models exists in the RF domain. This paper describes a sensitivity analysis methodology as a first step towards such a framework. It is applied to a low noise amplifier, and a case-study application is discussed by using design and experimental results of an adaptive LNA designed in the IBM6RF 0.25 μm CMOS process",2007,0, 4562,Robust Fault Detection in a Mixed H2/H∞ Setting: The Discrete-Time Case,"The robust fault detection problem for discrete-time LTI systems is considered. Allowing stochastic white noises and bounded unknown deterministic disturbances to model system uncertainties, it is shown that this problem can be cast as a mixed-norm H2/H∞ residual generation problem. An example is presented to illustrate the application of the results.",2006,0, 4563,Error resilient scalability for video bit-stream over heterogeneous packet loss networks,"Robust transmission of compressed video bit-streams over heterogeneous packet loss networks is one of the key challenges in contemporary video communication systems. Recent research has focused on enhancing the error resilience of the bit-stream through the deployment of a transcoder between wired and wireless networks. However, the conventional error resilient algorithms (e.g., Intra refresh) through cascading decoding and encoding processes usually have a high degree of complexity. We introduce in this paper a completely new concept of error resilient scalability. We develop an extremely low complexity scheme based on redundant picture information. In this scheme, redundant picture information is generated at the encoder and can be applied at the media gateway to determine the redundant quantity of the bit-stream according to the packet loss rate of the access network. By transmitting the compressed bit-stream together with redundant picture information, we are able to achieve the desired error resilient scalability of the video bit-stream at the media gateway for heterogeneous networks. A joint rate source-channel-distortion model is adopted to optimize the generation of redundant picture information under various packet loss rates. Experimental results demonstrate the expected effectiveness of the proposed scheme in error resilience scalability.",2010,0, 4564,"Modeling, Detection, and Disambiguation of Sensor Faults for Aerospace Applications","Sensor faults continue to be a major hurdle for systems health management to reach its full potential. At the same time, few recorded instances of sensor faults exist. It is equally difficult to seed particular sensor faults. Therefore, research is underway to better understand the different fault modes seen in sensors and to model the faults. The fault models can then be used in simulated sensor fault scenarios to ensure that algorithms can distinguish between sensor faults and system faults. The paper illustrates the work with data collected from an electromechanical actuator in an aerospace setting, equipped with temperature, vibration, current, and position sensors. The most common sensor faults, such as bias, drift, scaling, and dropout, were simulated and injected into the experimental data, with the goal of making these simulations as realistic as feasible.
A neural network-based classifier was then created and tested on both experimental data and the more challenging randomized data sequences. Additional studies were also conducted to determine the sensitivity of detection and disambiguation efficacy with respect to the severity of fault conditions.",2009,0, 4565,Detection and repair of software errors in hierarchical sensor networks,"Sensor networks are being increasingly deployed for collecting critical data in various applications. Once deployed, a sensor network may experience faults at the individual node level or at an aggregate network level due to design errors in the protocol, implementation errors, or deployment conditions that are significantly different from the target environment. In many applications, the deployed system may fail to collect data in an accurate, complete, and timely manner due to such errors. If the network produces incorrect data, the resulting decisions on the data may be incorrect, and negatively impact the application. Hence, it is important to detect and diagnose these faults through run-time observation. Existing technologies face difficulty with wireless sensor networks due to the large scale of the networks, the resource constraints of bandwidth and energy on the sensing nodes, and the unreliability of the observation channels for recording the behavior. This paper presents a semi-automatic approach named H-SEND (hierarchical sensor network debugging) to observe the health of a sensor network and to remotely repair errors by reprogramming through the wireless network. In H-SEND, a programmer specifies correctness properties of the protocol (""invariants""). These invariants are associated with conditions (the ""observed variables"") of individual nodes or the network. The compiler automatically inserts checking code to ensure that the observed variables satisfy the invariants. The checking can be done locally or remotely, depending on the nature of the invariant. In the latter case, messages are generated automatically. If an error is detected at run-time, the logs of the observed variables are examined to analyze and correct the error. After errors are corrected, new programs or patches can be uploaded to the nodes through the wireless network. We construct a prototype to demonstrate the benefit of run-time detection and correction",2006,0, 4566,Stator winding turn-fault detection for closed-loop induction motor drives,"Sensorless diagnostics for line-connected machines is based on extracting fault signatures from the spectrum of the line currents. However, for closed-loop drives, the power supply is a regulated current source and hence, the motor voltages must also be monitored for fault information. In this paper, a previously proposed neural network scheme for turn fault detection in line-connected induction machines is extended to inverter-fed machines, with special emphasis on closed-loop drives. Experimental results are provided to illustrate that the method is impervious to machine and instrumentation nonidealities, and that it has lower data memory and computation requirements than existing schemes, which are based on data look-up tables.",2002,0,6357 4567,A Method to Enhancing Fault Tolerance in Run-Time,"Service Oriented Computing is being widely accepted as an effective paradigm for developing applications by using already developed services. Services can be developed by different developers and subscribed to by different service consumers.
Because of their loosely coupled nature, service-based applications are typically highly dynamic and unstable. This means services can be moved or deleted, and are subject to faults. However, it is hard for service consumers and service providers to address these faults using their own methods. In this paper, we define the roles of a service broker in interactions between service consumers and service providers, and propose a method to improve fault tolerance by using the fault-tolerant capabilities of a reliable manager. Based on this, services can be dynamically selected; therefore, we can achieve good service reliability.",2010,0, 4568,A provenance-aware weighted fault tolerance scheme for service-based applications,"Service-orientation has been proposed as a way of facilitating the development and integration of increasingly complex and heterogeneous system components. However, there are many new challenges to the dependability community in this new paradigm, such as how individual channels within fault-tolerant systems may invoke common services as part of their workflow, thus increasing the potential for common-mode failure. We propose a scheme that, for the first time, links the technique of provenance with that of multi-version fault tolerance. We implement a large test system and perform experiments with a single-version system, a traditional MVD system, and a provenance-aware MVD system, and compare their results. We show that for this experiment, our provenance-aware scheme results in a much more dependable system than either of the other systems tested, whilst imposing a negligible timing overhead.",2005,0, 4569,Research and implementation of fault-tolerant mechanism based on SIP call control,"The Session Initiation Protocol (SIP) has been widely adopted in VoIP systems owing to its simplicity, flexibility and high scalability, but the unreliability of SIP communications has affected the QoS of SIP sessions. In order to improve the reliability of SIP communications, a SIP fault-tolerance mechanism is introduced in this paper. It contains two parts: SUA (State Update Algorithm), which is used for producing and sharing session state messages in the server, and LRSA (Listen Retransmission Switch Algorithm), which is used for switching SIP requests to a particular server. The SIP fault-tolerance mechanism is also validated and analyzed by experiment. The experimental results show that the fault-tolerant mechanism based on SIP can improve the reliability of SIP communications.",2010,0, 4570,Analysis of a Sort of Unusual Mal-Operation of Transformer Differential Protection Due to Removal of External Fault,"Several cases of mal-operation of transformer differential protection with second-harmonic blocking after clearance of external faults are reported. These mal-operations all occurred in the nonrestraint region of the percentage restraint plane. The previous theory cannot be utilized directly to analyze this phenomenon. Therefore, a mathematical model for analyzing the transient course of external fault inception and removal, together with a CT model involving the magnetic hysteresis effect, is proposed in this paper. It is proved that the magnetic linkage of one CT core can be pushed into the region near the saturation point by the high fault current with an aperiodic component.
As soon as the external fault is removed, the magnetic linkage produced by the low-amplitude primary current is not strong enough to pull the operating point of the magnetic linkage back to the linear region. This phenomenon, termed CT local transient saturation, results in a large measurement angle error and a relatively smooth waveform. In this case, transformer differential protection using second-harmonic blocking inevitably mal-operates. This analysis is verified by simulation tests.",2008,0, 4571,SDG-based fault diagnosis and application based on reasoning method of granular computing,"The signed directed graph (SDG), as a qualitative model, is used in fault diagnosis because it can express causal relationships among variables of large-scale complex industrial systems. However, many candidate results are included in the diagnostic conclusions, leading to low resolution in SDG-based fault diagnosis. To solve this problem, granules are used in this paper to formally express the elements of the SDG model; a granular base containing knowledge that reflects the causal relations between faults and symptoms is then constructed. A granule-based searching and reasoning method is used to locate the fault source, which is obtained by searching the granular base and computing the greatest similarity, so the resolution can be improved. A 65 t/h steam boiler system is taken as an example in the paper, and the results show the method is feasible.",2010,0, 4572,GrC and SDG-Based Fault Diagnosis System and its Simulation Platform,"The Signed Directed Graph (SDG) model of a hot nitric acid cooling system is constructed by using the process experience method and the system process, and the fault diagnosis rules are derived by reverse reasoning. The three states of nodes in the SDG model are extended to seven states, which take node deviation into account. Granular Computing (GrC) is introduced into SDG-based fault diagnosis; the redundant attributes in the system are simplified by a heuristic reduction algorithm, ultimately producing the optimal decision table. The normal state of the system and equipment failure information is added to the decision table to ensure its integrity. In the Matlab IDE, a Graphical User Interface (GUI)-based fault diagnosis simulation platform implements the solution of the above-mentioned problem and proves the correctness and effectiveness of the proposed method.",2010,0, 4573,Application of Signed Directed Graph Based Fault Diagnosis of Atmospheric Distillation Unit,"Significant research has been done in the past 30 years to use signed directed graphs (SDG) for process fault diagnosis. However, due to non-unified SDG models for control loops and the highly complex and integrated nature of chemical processes, few SDG-based methods have been applied in real chemical processes. In this paper, SDG-based deep knowledge modeling and bidirectional inference algorithms are introduced. With these algorithms, an SDG-based fault diagnosis and decision support system is developed and applied to fault diagnosis for an atmospheric distillation unit of a large-scale refining plant in China.
The results prove that the SDG-based fault diagnosis and decision support system can not only meet the fundamental requirements of diagnosis (correctness, completeness and real-time performance), but also provide decision support for operators to decrease the possibility of unscheduled shutdowns or more serious accidents due to abnormal situations.",2010,0, 4574,A framework on software fault localization: Variable Stress Reaction,"Since automated fault localization can improve the efficiency of both the testing and debugging processes, it has become an indispensable part of developing highly secure and reliable software for computer networks. A novel software fault localization framework, Variable Stress Reaction (VSR), is proposed in this paper, which works well for data type overflow detection. The experimental results show that our approach has the potential to be effective in localizing software faults.",2010,0, 4575,Fault Diagnosis of Bearings in Rotating Machinery Based on Vibration Power Signal Autocorrelation,"Since faults in a great number of bearings commence from a single point defect, this category of faults has received a great deal of attention in the predictive diagnosis literature. Single point defects cause certain characteristic fault frequencies to appear in the machine vibration spectrum. In traditional methods, data extracted from the frequency spectrum have been used to identify the damaged bearing part. Because of the impulsive nature of fault strikes and the complex modulations present in the vibration signal, a simple spectrum analysis may result in erroneous conclusions. When a shaft rotates at constant speed, strikes due to a single point defect repeat at constant intervals. Each strike shows a high energy distribution around it. This paper considers the time intervals between successive impulses in auto-correlated vibration power signals. The most frequent interval between successive impulses determines the period of the defective part. This period is related to a fault frequency and therefore identifies the defective part. A comparison of results extracted from the traditional and the proposed methods shows the improved efficiency of the latter over the former",2006,0, 4576,The Application of On-Line Travelling Wave Techniques in the Location of Intermittent Faults on low Voltage Underground Cables,"Since introducing on-line fault location equipment, Scottish Power have significantly reduced both the time and costs previously associated with locating repetitive intermittent faults on their low voltage underground cable network. In addition, the equipment has proved to be valuable in locating transitory faults, which do not result in interruptions to supply, but which are frequently the cause of Power Quality complaints.",2008,0, 4577,A New Correction Magnet Package for the Fermilab Booster Synchrotron,"Since its initial operation over 30 years ago, most correction magnets in the Fermilab Booster Synchrotron have only been able to fully correct the orbit, tunes, coupling, and chromaticity at injection (400 MeV). We have designed a new correction package, including horizontal and vertical dipoles, normal and skew quadrupoles, and normal and skew sextupoles, to provide control up to the extraction energy (8 GeV). In addition to tracking the 15 Hz cycle of the main, combined function magnets, the quadrupoles and sextupoles must swing through their full range in 1 ms during transition crossing.
The magnet is made from 12 water-cooled racetrack coils and an iron core with 12 poles, dramatically reducing the effective magnet air gap and increasing the corrector efficiency. Magnetic field analyses of different combinations of multipoles are included.",2005,0, 4578,Data hiding for error concealment in H.264/AVC,"Recently, data hiding has been proposed to improve the performance of error concealment algorithms. In this paper, a new data hiding-based error concealment algorithm is proposed that increases video quality in H.264/AVC wireless video transmission and real-time applications. Data hiding is used to carry to the decoder the values of some inner pixels, which are used to reconstruct lost macroblocks in intra frames through a bi-linear interpolation process.",2004,0, 4579,Relation between fault tolerance and reconfiguration in cellular systems,"Recently, hardware researchers have begun to investigate alternative computational principles to the conventional ones. The main hallmarks of these principles are their inspiration in biology and their direct hardware implementation. Evolvable hardware, cellular computing and embryonic electronics are the most important examples. This paper describes different approaches to the configuration, reconfiguration and fault tolerance implementation of a two-dimensional cellular system. The simplicity of the cell, vast parallelism, and connection locality are considered as the design restrictions",2000,0, 4580,Fault Tolerant Methods for Intermitted Failures in Virtual Large Scale Disks,"Recently, the demand for low-cost large-scale storage has increased. We developed the VLSD (Virtual Large Scale Disks) toolkit for constructing virtual-disk-based distributed storages, which aggregate the free space of individual disks. However, the current implementation of VLSD can mask only stop failures and cannot mask other kinds of failures such as intermittent failures. In this paper, we introduce two classes to VLSD in order to increase its tolerance of intermittent faults. One is Retry Disk, which retries reads/writes on failure; the other is VotedRAID1, which masks failures by majority voting. In this paper, we describe these classes in detail and evaluate their fault tolerance.",2010,0, 4581,A Power Efficient Approach to Fault-Tolerant Register File Design,"Recently, the trade-off between power consumption and fault tolerance in embedded processors has been highlighted. This paper proposes an approach to reduce the dynamic power of conventional high-level fault-tolerant techniques used in the register file of processors, without affecting the effectiveness of the fault-tolerant techniques. The power reduction is based on reducing the dynamic power of the unaccessed parts of the register file. This approach is applied to three transient fault-tolerant techniques: single error correction (SEC) Hamming code, duplication with parity, and triple modular redundancy (TMR). As a case study, this approach is implemented on the register file of an OpenRISC 1200 processor. The experimental calculation of the power consumption shows that the proposed approach saves about 67%, 62%, and 58% power for TMR, duplication with parity, and SEC Hamming code, respectively.",2008,0, 4582,Algorithm level re-computing with shifted operands-a register transfer level concurrent error detection technique,"Re-computing with shifted operands (RESO) is a logic-level, time-redundancy-based concurrent error detection (CED) technique.
In RESO, logic-level operations (and, nand, etc.) are carried out twice: once on the basic input and once on the shifted input. Results from these two operations are compared to detect an error. Although using RESO operators in register transfer level (RTL) designs is straightforward, it entails time and area overhead. We developed an RTL CED technique called algorithm level re-computing with shifted operands (ARESO). ARESO does not use specialized RESO operators. Rather, it exploits RTL scheduling, pipelining, operator chaining, and multi-cycling to incorporate user-specified error detection latencies. ARESO supports hardware vs. performance vs. error detection latency trade-offs. ARESO has been validated on practical design examples using Synopsys Behavior Compiler",2000,0, 4583,An Efficient Fault Tolerance Scheme for Preventing Single Event Disruptions in Reconfigurable Architectures,"Reconfigurable architectures are becoming increasingly popular with space-related design engineers as they are inherently flexible to meet multiple requirements and offer significant performance and cost savings for critical applications. As the microelectronics industry has advanced, integrated circuit (IC) design and reconfigurable architectures (FPGAs, reconfigurable SoCs, etc.) have experienced dramatic increases in density and speed. These advancements have serious implications for reconfigurable architectures used in the space environment, where ICs are subject to total ionizing dose (TID) effects as well as single event effects. Due to their transient nature, single event upsets (SEUs) are the most difficult to avoid in space-borne reconfigurable architectures. We present a unique SEU fault tolerance technique based upon double redundancy with comparison to overcome the overheads associated with the conventional schemes",2006,0, 4584,Parallel algorithm of geometrical correction for MODIS data based on triangulation network,"Since MODIS data are characterized by huge volume and multiple spectral bands, correcting them with RS software takes too much time and requires frequent I/O operations. This paper proposes a parallel geometrical correction algorithm for MODIS data based on a triangulation network. The input images are divided into several pieces and each CPU processes a piece independently. The algorithm was implemented on a cluster system, and the results show that it greatly improves the efficiency of geometrical correction.",2005,0, 4585,Feature Extraction of Helicopter Fault Signal Based on HHT,"Since most helicopter fault signals are non-linear and non-stationary, methods based on the Fourier transform (FT) and wavelet analysis are no longer applicable. A feature extraction method based on the Hilbert-Huang transform (HHT), a new method for processing non-stationary and non-linear signals, is applied to analyze helicopter fault signals. The signal is first decomposed into intrinsic mode functions (IMFs) by the empirical mode decomposition (EMD) method. Then the Hilbert spectrum and Hilbert marginal spectrum are obtained from the instantaneous frequency and amplitude given by the Hilbert transform (HT).
The simulation results are analyzed and compared with the FT method, suggesting the effectiveness of the HHT-based method.",2009,0, 4586,A Neural-Network Approach for Defect Recognition in TFT-LCD Photolithography Process,"With the advent of highly refined and miniaturized technology, yield control in the photolithography process has played an important role in the manufacture of thin-film transistor-liquid crystal displays (TFT-LCDs). Through automated optical inspection (AOI), defect points from the panels are collected and the defect images are generated after the photolithography process. The defect images are usually identified by experienced engineers or operators. Evidently, human identification may produce misjudgments and cause time loss. This study therefore proposes a neural-network approach for defect recognition in the TFT-LCD photolithography process. Four neural-network methods were adopted for this purpose, namely backpropagation, radial basis function, learning vector quantization 1, and learning vector quantization 2. A comparison of the performance of these four types of neural networks is presented. The results show that the proposed approach can effectively recognize defect images in the photolithography process.",2009,0, 4587,Considering fault dependency and debugging time lag in reliability growth modeling during software testing,"Since the early 1970s, tremendous growth has been seen in research on software reliability growth modeling. In general, software reliability growth models (SRGMs) are applicable to the late stages of testing in software development and can provide useful information about how to improve the reliability of software products. Most existing SRGMs assume that faults are immediately detected and corrected. However, in practice, this assumption may not be realistic. In this paper we first give a review of fault detection and correction processes in SRGMs. We show how several existing SRGMs based on NHPP models can be comprehensively derived by applying a time-dependent delay function. Furthermore, we show how to incorporate both failure dependency and a time-dependent delay function into software reliability growth modeling. We present stochastic reliability models for software failure phenomena based on NHPPs. Some numerical examples based on real software failure data sets are presented. The results show that the proposed framework for incorporating both failure dependency and a time-dependent delay function into software reliability modeling has a useful interpretation in testing and correcting the software.",2004,0, 4588,Characterising Faults in Aspect-Oriented Programs: Towards Filling the Gap Between Theory and Practice,"Since the proposal of Aspect-Oriented Programming, several candidate fault taxonomies for aspect-oriented (AO) software have been proposed. Such taxonomies, however, generally rely on language features, hence still requiring practical evaluation based on realistic implementation scenarios. The current lack of available AO systems for evaluation, as well as of historical data, constitutes the two major obstacles for this kind of study. This paper quantifies, documents and classifies faults uncovered in several releases of three AO systems, all from different application domains. Our empirical analysis naturally led us to revisit and refine a previously defined fault taxonomy.
We identified particular fault types that stood out amongst the categories defined in the taxonomy. Besides this, we illustrate recurring faulty scenarios extracted from the analysed systems. We believe such scenarios should be considered in the establishment of testing strategies throughout the software development process.",2010,0, 4589,Modeling Defect Enhanced Detection at 1550 nm in Integrated Silicon Waveguide Photodetectors,"Recent attention has been attracted by photo-detectors integrated onto silicon-on-insulator (SOI) waveguides that exploit the enhanced sensitivity to subbandgap wavelengths resulting from absorption via point defects introduced by ion implantation. In this paper, we present the first model to describe the carrier generation process of such detectors, based upon modified Shockley-Read-Hall generation/recombination, and thus determine the influence of the device design on detection efficiency. We further describe how the model may be incorporated into commercial software, which then simulates the performance of previously reported devices by assuming a single midgap defect level (with properties commensurate with the singly negatively charged divacancy). We describe the ability of the model to highlight the major limitations to responsivity, and thus suggest improvements which diminish the impact of such limitations.",2009,0, 4590,A class of random multiple bits in a byte error correcting (Stb/EC) codes for semiconductor memory systems,"Recent high-density wide-I/O DRAM chips are highly vulnerable to multiple random bit errors. Therefore, correcting multiple random bit errors that corrupt a single DRAM chip becomes very important in certain applications, such as semiconductor memories used in computer and communication systems, mobile systems, aircraft and satellites. This is because, in these applications, the presence of strong electromagnetic waves in the environment or the bombardment of an energetic particle on a DRAM chip is likely to upset more than just one bit stored in that chip. Under this situation, codes capable of correcting random multiple bit errors that are confined to a single DRAM chip output are suitable for application in high-speed semiconductor memory systems. This paper proposes a class of codes called single t/b-error correcting (Stb/EC) codes which are capable of correcting random t-bit errors occurring within a single b-bit byte. For the case where the chip data output is 16 bits, i.e., b = 16, the S316/EC code proposed in this paper requires only 16 check bits; that is, only one chip is required for check bits at practical information lengths such as 64, 128 and 256 bits. Furthermore, this S316/EC code is capable of detecting more than 95% of all single 16-bit byte errors at an information length of 64 bits.",2002,0, 4591,Fingerprinting: bounding soft-error-detection latency and bandwidth,"Recent studies suggest that the soft-error rate in microprocessor logic is likely to become a serious reliability concern by 2010. Detecting soft errors in the processor's core logic presents a new challenge beyond what error detecting and correcting codes can handle. Commercial microprocessor systems that require an assurance of reliability employ an error-detection scheme based on dual modular redundancy (DMR) in some form, ranging from replicated pipelines within the same die to mirroring of complete processors.
To detect errors across a distributed DMR pair, we develop fingerprinting, a technique that summarizes a processor's execution history into a cryptographic signature, or ""fingerprint"". More specifically, a fingerprint is a hash value computed on the changes to a processor's architectural state resulting from a program's execution. The processors in a dual modular redundant pair periodically exchange and compare fingerprints to corroborate each other's correctness. Relative to other techniques, fingerprinting offers superior error coverage and significantly reduces the error-detection latency and bandwidth",2004,0, 4592,Automatic Fault Behavior Detection and Modeling by a State-Based Specification Method,"Safety assessment methods are typically based on the reliability of the individual components making up a system. However, a different notion of safety, as an emergent property of the system taken as a whole, is emerging. The current state-based modeling paradigm tends to misrepresent systemic behavior, thus hindering the adoption and development of systemic compositional fault detection techniques. We propose a state-based formalism, highly committed to the explicit representation of systemic behavior, by which it is possible to formally identify faulty behaviors once the regular behavior has been specified.",2010,0, 4593,Virtual Machine Based Hot-Spare Fault-Tolerant System,"Safety-critical systems have to work correctly and meet time constraints even in the presence of faults. Hardware fault-tolerance technology can provide a high level of reliability, but custom-designed hardware and operating systems are too costly to be accepted by most common applications. At present, some research on hot-spare fault-tolerant systems based on virtualization technology has been carried out; this approach can provide a highly available system based entirely on commodity hardware and open-source operating systems. However, controlling the overhead of main-backup synchronization, including space usage and time consumption, is the most important issue to consider. In this paper we present the framework of a hot-spare fault-tolerant system based on virtualization technology and analyse the procedure of main-backup synchronization. We also discuss how to choose a proper synchronization strategy and timing to ensure that recovery is exact. We conducted several experiments to evaluate the performance of this system, and the results support our viewpoint.",2009,0, 4594,GOP-based unequal error protection for scalable video over packet erasure channel,"Scalable video coding has been desired for many years to meet the requirements of heterogeneous networks and different end-users. It intends to encode the signal only once, but enables decoding from partial streams depending on the specific rate and resolution. The Joint Video Team has recently developed a new scalable video coding standard, known as SVC, which is the scalable extension of AVC. Full scalability is provided by SVC, including temporal scalability, spatial scalability and SNR scalability. In each scalability dimension, the compressed bitstream can be segmented into layers with different contributions to the overall video quality. This paper addresses the problem of unequal error protection for scalable video over a packet-erasure channel, considering both the channel condition and the video characteristics.
In addition, a local search algorithm is proposed to quickly allocate unequal amounts of channel rate to protect different layers, so as to obtain acceptable-quality video with smooth degradation under different transmission error conditions. The advantages of the proposed algorithms are demonstrated by theoretical analysis and experimental results.",2008,0, 4595,Exploiting Tuple Spaces to Provide Fault-Tolerant Scheduling on Computational Grids,"Scheduling tasks on large-scale computational grids is difficult due to the heterogeneous computational capabilities of the resources, node unavailability and unreliable network connectivity. This work proposes GRIDTS, a grid infrastructure in which the resources select the tasks they execute, instead of a scheduler finding resources for the tasks. This solution allows scheduling decisions to be made with up-to-date information about the resources. Moreover, GRIDTS provides fault-tolerant scheduling by combining a set of fault tolerance techniques to tolerate crash faults in components of the system. The core of the solution is a tuple space, which supports the communication but also provides support for the fault tolerance mechanisms",2007,0, 4596,ANN-based approach for fast fault diagnosis and alarm handling of power systems,"Several neural network techniques have been proposed to solve the problem of automatic fault diagnosis in power systems. In this paper, a new and accurate parallel neural network-based approach for fast fault diagnosis and alarm handling of power systems is developed which overcomes some of the limitations of existing techniques. Three different neural-network configurations are proposed and their performances are compared using the data of an actual high-voltage power network.",2000,0, 4597,Detection and correction of real-word spelling errors in Persian language,"Several statistical methods have already been proposed to detect and correct real-word errors in a context. However, to the best of our knowledge, none of them has been applied to the Persian language yet. In this paper, a statistical method based on the mutual information of Persian words to deal with context-sensitive spelling errors is presented. Experiments on test data containing only one real-word error per sentence show the accuracy of the correction method to be about 80.5% and 87% with respect to the precision and recall metrics, respectively.",2010,0, 4598,A Novel Double-Data-Rate AES Architecture Resistant against Fault Injection,"Several techniques have been proposed for protecting encryption blocks against faults. These techniques usually exploit some form of redundancy, e.g. by means of error detection codes. However, protection schemes that offer an acceptable error detection rate are in general expensive, while temporal redundancy heavily affects the throughput. In this paper, we propose a new design solution that exploits temporal redundancy through DDR techniques without adversely affecting the throughput at lower clock frequencies. We also show that the overall costs can be comparable to those of other recently proposed solutions.",2007,0, 4599,Reliability Considerations and Fault-Handling Strategies for Multi-MW Modular Drive Systems,"Shunt-interleaved electrical drive systems consisting of several parallel medium-voltage back-to-back converters enable power ratings of tens of MVA, low current distortion, and a very smooth air-gap torque.
To meet stringent reliability and availability goals despite the large parts count, the modularity of the drive system needs to be exploited, and a suitable fault-handling strategy that allows the exclusion and isolation of faulted threads is required. This avoids the shutdown of the complete system and enables the drive system to continue operation. If full power capability is also required in degraded-mode operation, redundancy at the thread level needs to be added. Experimental results confirm that thread exclusion allows the isolation of the majority of faults without affecting the mechanical load. As the drive system continues to run, faulted threads can be repaired and then added on-the-fly to the running system by thread inclusion. As a result, the downtime of such a modular drive system is expected to not exceed a few hours per year.",2010,0,6371 4600,Software-Implemented Fault Injection at Firmware Level,"Software-implemented fault injection is an established method to emulate hardware faults in computer systems. Existing approaches typically extend the operating system with special drivers or change the application under test. We propose a novel approach where fault injection capabilities are added to the computer firmware. This approach can work without any modification to the operating system and/or applications, and can support a larger variety of fault locations. We discuss four different strategies in X86/X64 and Itanium systems. Our analysis shows that such an approach can increase portability, the non-intrusiveness of the injector implementation, and the number of supported fault locations. Firmware-level fault injection paves the way for new research directions, such as virtual machine monitor fault injection or the investigation of certified operating systems.",2010,0, 4601,"Using product, process, and execution metrics to predict fault-prone software modules with classification trees","Software-quality classification models can make predictions to guide improvement efforts to those modules that need them the most. Based on software metrics, a model can predict which modules will be considered fault-prone, or not. We consider a module fault-prone if any faults were discovered by customers. Useful predictions are contingent on the availability of candidate predictors that are actually related to faults discovered by customers. With a diverse set of candidate predictors in hand, classification-tree modeling is a robust technique for building such software quality models. This paper presents an empirical case study of four releases of a very large telecommunications system. The case study used the regression-tree algorithm in the S-Plus package and then applied our general decision rule to classify modules. Results showed that in addition to product metrics, process metrics and execution metrics were significant predictors of faults discovered by customers",2000,0, 4602,"Relationships between Test Suites, Faults, and Fault Detection in GUI Testing","Software-testing researchers have long sought recipes for test suites that detect faults well. In the literature, empirical studies of testing techniques abound, yet the ideal technique for detecting the desired kinds of faults in a given situation often remains unclear. This work shows how understanding the context in which testing occurs, in terms of factors likely to influence fault detection, can make evaluations of testing techniques more readily applicable to new situations.
We present a methodology for discovering which factors do statistically affect fault detection, and we perform an experiment with a set of test-suite- and fault-related factors in the GUI testing of two fielded, open-source applications. Statement coverage and GUI-event coverage are found to be statistically related to the likelihood of detecting certain kinds of faults.",2008,0, 4603,A Fault-Tolerant Active Pixel Sensor to Correct In-Field Hot-Pixel Defects,"Solid-state image sensors develop in-field defects in all common environments. Experiments have demonstrated the growth of significant quantities of hot-pixel defects that degrade the dynamic range of an image sensor and potentially limit low-light imaging. Existing software-only techniques for suppressing hot pixels are inadequate because these defective pixels saturate at relatively low illumination levels. The redundant fault-tolerant active pixel sensor design is suggested to isolate point-like hot-pixel defects. Emulated hot pixels have been induced in hardware implementations of this pixel architecture, and measurements of pixel response indicate that it generates an accurate output signal throughout the sensor's entire dynamic range, even when standard pixels would otherwise be saturated by the hot defect. A correction algorithm repairs the final image by building a simple look-up table of the illumination response of a working pixel. For emulated hot pixels, the true illumination value can be recovered with an error of ±5% under typical conditions.",2007,0, 4604,Simulating and Detecting Radiation-Induced Errors for Onboard Machine Learning,"Spacecraft processors and memory are subjected to high radiation doses and therefore employ radiation-hardened components. However, these components are orders of magnitude more expensive than typical desktop components, and they lag years behind in terms of speed and size. We have integrated algorithm-based fault tolerance (ABFT) methods into onboard data analysis algorithms to detect radiation-induced errors, which ultimately may permit the use of spacecraft memory that need not be fully hardened, reducing cost and increasing capability at the same time. We have also developed a lightweight software radiation simulator, BITFLIPS, that permits evaluation of error detection strategies in a controlled fashion, including the specification of the radiation rate and selective exposure of individual data structures. Using BITFLIPS, we evaluated our error detection methods when using a support vector machine to analyze data collected by the Mars Odyssey spacecraft. We observed good performance from both an existing ABFT method for matrix multiplication and a novel ABFT method for exponentiation. These techniques bring us a step closer to ""rad-hard"" machine learning algorithms.",2009,0, 4605,Verifying formal specifications using fault tree analysis,"Specification before implementation has been suggested as a sensible approach to software evolution. The quality of this approach may be improved by using formal specification. However, to serve as a trustable foundation for implementation and to help reduce the cost of program testing, the formal specification must be ensured to be satisfiable, consistent, complete and accurate in recording the user requirements.
In this paper, we first define these four concepts and then introduce a technique for verifying formal specifications that combines fault-tree analysis with static analysis and testing techniques",2000,0, 4606,A methodology for detecting performance faults in microprocessors via performance monitoring hardware,"Speculative execution of instructions boosts performance in modern microprocessors. Control and data flow dependencies are overcome through speculation mechanisms, such as branch prediction or data value prediction. Because of their inherent self-correcting nature, the presence of defects in speculative execution units does not affect their functionality (and escapes traditional functional testing approaches) but imposes severe performance degradation. In this paper, we investigate the effects of performance faults in speculative execution units and propose a generic, software-based test methodology which utilizes available processor resources (hardware performance monitors and processor exceptions) to detect these faults in a systematic way. We demonstrate the methodology on a publicly available, fully pipelined RISC processor that has been enhanced with the most common speculative execution unit, the branch prediction unit. Two popular predictor schemes built around a Branch Target Buffer have been studied, and experimental results show significant improvements in both cases: fault coverage of the branch prediction units increased from 80% to 97%. Detailed experiments on the application of a functional self-testing methodology to a complete RISC processor incorporating both a full pipeline structure and a branch prediction unit have not been previously given in the literature.",2007,0, 4607,Thai OCR error correction using token passing algorithm,"Spell checking can be used to improve OCR results, but it is quite time-consuming for the Thai language. Since there is no explicit word boundary, the spell checker has to go through all possible ambiguous characters and word boundaries. This paper proposes a token passing algorithm, which is often used in speech recognition. For this problem, the output of the OCR consists of strings of at most the 5 most probable characters. Tokens are generated for each letter and are passed to the next 5 characters. Each time a token is passed, the dictionary is used to check for correct spelling. Tokens with wrong spellings are discarded. Word boundaries are inserted into the tokens to resolve boundary ambiguity. At the end of the passing process, all possible correctly spelled words are found as long as the correct characters are in the list of 5 characters. Many possible word sequences could be constructed from this process. The single best word sequence can be selected using bi-gram, trigram and/or lowest token probability. This method can also handle some insertion and deletion errors",2001,0, 4608,Error event analysis of EPR4 and ME2PR4 channels: captured head signals versus Lorentzian signal models,"Spin-stand error event analysis is conducted for EPR4 and modified E2PR4 (ME2PR4) channels. The error event distributions for 16/17 random code and 16/17 QMTR code at user bit densities (UBD) of 2.5, 3.0 and 3.5 were investigated. Captured spin-stand waveforms were processed through a software channel and the error events were analyzed. The distribution of error events for spin-stand data is compared to the conventional Lorentzian model for 100% AWGN noise colored by the equalizer and for the 100% jitter noise case.
Results show that neither Lorentzian model, by itself or as a mix, accurately predicts the spin-stand results; however, by considering the two extreme models, all the major error events are accounted for",2000,0, 4609,Comparison for the accuracy of defect fix effort estimation,"Software defects have become the dominant cause of customer outages, and the study of software errors has been necessitated by the emphasis on software reliability. Yet software defects are not well enough understood to provide a clear methodology for avoiding or recovering from them. Therefore, software defect fix effort plays a critical role in software quality assurance. In this paper, a comparison of defect fix effort estimation using the Self-Organizing Map (SOM) neural network is presented. To estimate the defect fix effort, the KC3 static defect attributes, one of the publicly available datasets from NASA products, are used in this paper. To implement the SOM, find the different clusters in it, and calculate the defect fix effort time, the SOM Toolbox in MATLAB 7.0.1 is used.",2010,0, 4610,Changes and bugs: Mining and predicting development activities,"Software development results in a huge amount of data: changes to source code are recorded in version archives, bugs are reported to issue tracking systems, and communications are archived in e-mails and newsgroups. We present techniques for mining version archives and bug databases to understand and support software development. First, we introduce the concept of co-addition of method calls, which we use to identify patterns that describe how methods should be called. We use dynamic analysis to validate these patterns and identify violations. The co-addition of method calls can also detect cross-cutting changes, which are an indicator for concerns that could have been realized as aspects in aspect-oriented programming. Second, we present techniques to build models that can successfully predict the most defect-prone parts of large-scale industrial software, in our experiments Windows Server 2003. This helps managers to allocate resources for quality assurance to those parts of a system that are expected to have the most defects. The proposed measures on dependency graphs outperformed traditional complexity metrics. In addition, we found empirical evidence for a domino effect, i.e., depending on defect-prone binaries increases the chances of having defects.",2009,0, 4611,When process data quality affects the number of bugs: Correlations in software engineering datasets,"Software engineering process information extracted from version control systems and bug tracking databases is widely used in empirical software engineering. In prior work, we showed that these data are plagued by quality deficiencies, which vary in their characteristics across projects. In addition, we showed that those deficiencies, in the form of bias, do impact the results of studies in empirical software engineering. While these findings affect software engineering researchers, the impact on practitioners has not yet been substantiated. In this paper we therefore explore (i) if the process data quality and characteristics have an influence on the bug fixing process and (ii) if the process quality as measured by the process data has an influence on the product (i.e., software) quality.
Specifically, we analyze six open source as well as two closed source projects and show that process data quality and characteristics have an impact on the bug fixing process: the high rate of empty commit messages in Eclipse, for example, correlates with the bug report quality. We also show that the product quality, measured by the number of bugs reported, is affected by process data quality measures. These findings have the potential to prompt practitioners to increase the quality of their software process and its associated data quality.",2010,0, 4612,Using abstraction to improve fault tolerance,"Software errors are a major cause of outages and they are increasingly exploited in malicious attacks. Byzantine fault tolerance allows replicated systems to mask some software errors but it is expensive to deploy. The paper describes a replication technique, BFTA, which uses abstraction to reduce the cost of Byzantine fault tolerance and to improve its ability to mask software errors. BFTA reduces cost because it enables reuse of off-the-shelf service implementations. It improves availability because each replica can be repaired periodically using an abstract view of the state stored by correct replicas, and because each replica can run distinct or non-deterministic service implementations, which reduces the probability of common-mode failures. We built an NFS service that allows each replica to run a different operating system. This example suggests that BFTA can be used in practice; the replicated file system required only a modest amount of new code, and preliminary performance results indicate that it performs comparably to the off-the-shelf implementations that it wraps.",2001,0, 4613,A Queueing-Theory-Based Fault Detection Mechanism for SOA-Based Applications,"SOA has become more and more popular, but fault tolerance is not yet supported in most SOA-based applications. Although fault tolerance is a grand challenge for enterprise computing, we can partially resolve this problem by focusing on some of its aspects. This paper focuses on fault detection and puts forward a queueing-theory-based fault detection mechanism to detect services that fail to satisfy performance requirements. This paper also gives a reference service model and a reference architecture of a fault-tolerance control center of the Enterprise Service Bus for SOA-based applications.",2007,0, 4614,SOCRATES on IP router fault detection,"SOCRATES is a software system for testing the correctness of implementations of IP routing protocols such as RIP, OSPF and BGP. It uses a probabilistic algorithm to automatically construct random network topologies. For each generated network topology, it checks the correctness of routing table calculation and IP packet forwarding behavior. For OSPF, it also checks the consistency between network topologies and the link-state databases of the router under test. For BGP, it further checks BGP update redistribution. Unlike commercial testing tools, which select their test cases in an ad hoc manner, SOCRATES chooses test cases with guaranteed fault coverage",2000,0, 4615,A Novel Optimum Data Duplication Approach for Soft Error Detection,"Soft errors are a growing concern for computer reliability. To mitigate the effects of soft errors, a variety of software-based fault tolerance methodologies have been proposed owing to their low cost. Data duplication techniques have the advantage of flexible and general implementation with a strong capacity for error detection.
However, the trade-off between reliability, performance and memory overhead should be carefully considered before employing data duplication techniques. In this paper, we first introduce an analytical model, named PRASE (Program Reliability Analysis with Soft Errors), which is able to assess the impact of soft errors on the reliability of a program. Furthermore, the analytical result of PRASE points out a data reliability weight factor, which measures the criticality of data to the overall reliability of the target program. Based on PRASE, we propose a novel data duplication approach, called ODD, which can provide the optimum error coverage under system performance constraints. To illustrate the effectiveness of our method, we perform several fault injection experiments and performance evaluations on a set of simple benchmark programs using the SimpleScalar tool set.",2008,0, 4616,Impact of soft error challenge on SoC design,"Soft errors are a major challenge to robust design. Conventionally, only designs with high-level requirements for reliability and availability required protection against soft errors. However, the scaling level reached with today's nanometer technologies is extending soft error protection requirements to SoC designs for a wide range of applications. This paper discusses the soft error challenge, its implications for SoC design practices and possible approaches to create a robust SoC design.",2005,0, 4617,Case Study: Soft Error Rate Analysis in Storage Systems,"Soft errors due to cosmic particles are a growing reliability threat for VLSI systems. In this paper we analyze the soft error vulnerability of FPGAs used in storage systems. Since the reliability requirements of these high-performance storage subsystems are very stringent, the reliability of the FPGA chips used in the design of such systems plays a critical role in the overall system reliability. We validate the projections produced by our analytical model by using field error rates obtained from actual field failure data of a large FPGA-based design used in the logical unit module board of a commercial storage system. This comparison confirms that the projections obtained from our analytical tool are accurate (there is an 81% overlap between the FIT rate range obtained with our analytical modeling framework and the field failure data studied)",2007,0, 4618,Improving soft-error tolerance of FPGA configuration bits,"Soft errors that change configuration bits of an SRAM-based FPGA modify the functionality of the design. The proliferation of FPGA devices in various critical applications makes it important to increase their immunity to soft errors. In this work, we propose the use of an asymmetric SRAM (ASRAM) structure that is optimized for soft error immunity and leakage when storing a preferred value. The key to our approach is the observation that the configuration bitstream is composed of 87% zeros across different designs. Consequently, the use of an ASRAM cell optimized for storing a zero (ASRAM-0) reduces the failure-in-time rate by 25% as compared to the original design. We also present an optimization that increases the number of zeros in the bitstream while preserving the functionality.",2004,0, 4619,An efficient defect estimation method for software defect curves,"Software defect curves describe the behavior of the estimate of the number of remaining software defects as software testing proceeds. They follow two possible patterns: single-trapezoidal-like curves or multiple-trapezoidal-like curves.
In this paper we present some necessary and/or sufficient conditions for software defect curves of the Goel-Okumoto NHPP model. These conditions can be used to predict the effect of the detection and removal of a software defect on the variations of the estimates of the number of remaining defects. A field software reliability dataset is used to justify the trapezoidal shape of software defect curves and our theoretical analyses. The results presented in this paper may provide useful feedback for assessing software testing progress, and have potential in the emerging area of software cybernetics that explores the interplay between software and control.",2003,0, 4620,Benchmarking Classification Models for Software Defect Prediction: A Proposed Framework and Novel Findings,"Software defect prediction strives to improve software quality and testing efficiency by constructing predictive classification models from code attributes to enable a timely identification of fault-prone modules. Several classification models have been evaluated for this task. However, due to inconsistent findings regarding the superiority of one classifier over another and the usefulness of metric-based classification in general, more research is needed to improve convergence across studies and further advance confidence in experimental results. We consider three potential sources for bias: comparing classifiers over one or a small number of proprietary data sets, relying on accuracy indicators that are conceptually inappropriate for software defect prediction and cross-study comparisons, and, finally, limited use of statistical testing procedures to secure empirical findings. To remedy these problems, a framework for comparative software defect prediction experiments is proposed and applied in a large-scale empirical comparison of 22 classifiers over 10 public domain data sets from the NASA Metrics Data repository. Overall, an appealing degree of predictive accuracy is observed, which supports the view that metric-based classification is useful. However, our results indicate that the importance of the particular classification algorithm may be less than previously assumed, since no significant performance differences could be detected among the top 17 classifiers.",2008,0, 4621,Modeling and Analysis of Software Fault Detection and Correction Process by Considering Time Dependency,"Software reliability modeling and estimation play a critical role in software development, particularly during the software testing stage. Although there are many research papers on this subject, few of them address the realistic time delays between the fault detection and fault correction processes. This paper investigates an approach to incorporate the time dependencies between the fault detection and fault correction processes, focusing on the parameter estimation of the combined model. Maximum likelihood estimates of combined models are derived from an explicit likelihood formula under various time delay assumptions. Various characteristics of the combined model, like the predictive capability, are also analyzed and compared with the traditional least squares estimation method. Furthermore, we study a direct, useful application of the proposed model and estimation method to the classical optimal release time problem faced by software decision makers.
The results illustrate the effect of time delay on the optimal release policy and the overall software development cost.",2007,0, 4622,Type Inference for Soft-Error Fault-Tolerance Prediction,"Software systems are becoming increasingly vulnerable to a new class of soft errors originating from voltage spikes produced by cosmic radiation. The standard technique for assessing the source-level impact of these soft errors, fault injection (essentially a black-box testing technique), provides limited high-level information. Since soft errors can occur anywhere, even control-structured white-box techniques offer little insight. We propose a type-based approach, founded on data-flow structure, to classify the usage patterns of registers and memory cells. To capture all soft errors, the type system is defined at the assembly level, close to the hardware, and allows inferring types in the untyped assembly representation. In a case study, we apply our type inference scheme to a prototype brake-by-wire controller, developed by Volvo Technology, and identify a high correlation between types and fault-injection results. The case study confirms that the inferred types are good predictors for soft-error impact.",2009,0, 4623,SBST for on-line detection of hard faults in multiprocessor applications under energy constraints,"Software-Based Self-Test (SBST) has emerged as an effective method for on-line testing of processors integrated in non-safety-critical systems. However, especially for multi-core processors, the notion of dependability encompasses not only high-quality on-line tests with minimum performance overhead but also methods for preventing the generation of excessive power and heat, which exacerbate silicon aging mechanisms and can cause long-term reliability problems. In this paper, we initially extend the capabilities of a multiprocessor simulator in order to evaluate the overhead in the execution of the useful application load in terms of both performance and energy consumption. We utilize the derived power evaluation framework to assess the overhead of SBST implemented as a test thread in a multiprocessor environment. A range of typical processor configurations is considered. The application load consists of some representative SPEC benchmarks, and various scenarios for the execution of the test thread are studied (sporadic or continuous execution). Finally, we apply in a multiprocessor context an energy optimization methodology that was originally proposed to increase battery life for battery-powered devices. The methodology significantly reduces the energy and performance overhead without affecting the test coverage of the SBST routines.",2010,0, 4624,A new low-cost non intrusive platform for injecting soft errors in SRAM-based FPGAs,"SRAM-based Field Programmable Gate Arrays (FPGAs) are becoming more and more popular in aerospace applications. Radiation effects have to be investigated in order to measure the degree of fault tolerance of the applications and to validate new mitigation techniques. Fault injection is one of the possible evaluation methods. Several platforms have been developed in past years to inject soft errors within FPGAs; however, their main drawbacks are intrusiveness and high cost. In this paper we propose a new, low-cost and non-intrusive fault injection platform to emulate soft errors within the configuration memory of SRAM-based FPGAs.
In particular, radiation effects can be evaluated without modifying either the circuit implemented in the FPGA or the application it executes, thus allowing study of the real system that will take part in the mission. Experimental results on several test circuits are reported and discussed, demonstrating the feasibility of the presented approach.",2008,0, 4625,"Combining Duplication, Partial Reconfiguration and Software for On-line Error Diagnosis and Recovery in SRAM-Based FPGAs","SRAM-based FPGAs are susceptible to Single-Event Upsets (SEUs) in radiation-exposed environments due to their configuration memory. We propose a new scheme for the diagnosis and recovery from upsets that combines i) duplication of the core to be protected, ii) partial reconfiguration to reconfigure the faulty part only, and iii) hardcore processor(s) for deciding when and which part will be reconfigured; executing the application in software instead of hardware during fault handling; and controlling the reconfiguration. A hardcore processor has a smaller cross section and is less susceptible than reconfigurable resources. Thus it can temporarily undertake the execution during upset conditions. Real experiments demonstrate that our approach is feasible and that an area reduction of more than 40% over the dominant Triple Modular Redundancy (TMR) solution can be achieved at the cost of a reduction in the processing rate of the input.",2010,0, 4626,Experiences with a CANoe-based fault injection framework for AUTOSAR,"Standardized software architectures, such as AUTomotive Open System ARchitecture (AUTOSAR), are being pursued within the automotive industry in order to reduce the cost of developing new vehicle features. Many of these features will need to be highly dependable. Fault injection plays an important role during the dependability analysis of such software. This work evaluates the feasibility of leveraging the CANoe simulation environment to develop software-based methods for injecting faults into AUTOSAR applications. We describe a proof-of-concept fault-injection framework with example fault-injection scenarios, as well as implementation issues faced and addressed, lessons learned, and the suitability of using CANoe as a fault-injection environment.",2010,0, 4627,Detection and identification of topological errors from real-time measurements reconciliation,"State estimation assumes that measurement errors are statistically small and that data redundancy is adequate in the number, type and topological distribution of measurements. Furthermore, network configuration and parameters are assumed known. Frequently, these hypotheses are not true. Topology errors result from errors in the status of the circuit breakers associated with a network branch such as a line, a transformer, a shunt capacitor, or a bus tie. In this paper, network configuration and parameter errors are addressed and an identification procedure is demonstrated.",2002,0, 4628,Adaptive bug isolation,"Statistical debugging uses lightweight instrumentation and statistical models to identify program behaviors that are strongly predictive of failure. However, most software is mostly correct; nearly all monitored behaviors are poor predictors of failure. We propose an adaptive monitoring strategy that mitigates the overhead associated with monitoring poor failure predictors. We begin by monitoring a small portion of the program, then automatically refine instrumentation over time to zero in on bugs.
We formulate this approach as a search on the control-dependence graph of the program. We present and evaluate various heuristics that can be used for this search. We also discuss the construction of a binary instrumentor for incorporating the feedback loop into post-deployment monitoring. Performance measurements show that adaptive bug isolation yields an average performance overhead of 1% for a class of large applications, as opposed to 87% for realistic sampling-based instrumentation and 300% for complete binary instrumentation.",2010,0, 4629,Effect of errors in the system matrix on iterative image reconstruction,"Statistically based iterative image reconstruction is widely used in emission tomography. One important component in iterative image reconstruction is the system matrix, which defines the mapping from the image space to the data space. Several groups have demonstrated that an accurate system matrix can improve image quality in both SPECT and PET. While iterative methods are amenable to arbitrary and complicated system models, the true system response is never known exactly. In practice, one also has to sacrifice the accuracy of the system model because of limited computing and imaging resources. This paper analyzes the effect of errors in the system matrix on iterative image reconstruction. We derived a theoretical expression for calculating artifacts in a reconstructed image that are caused by errors in the system matrix. Using this theoretical expression, we can address the question of how accurate the system matrix needs to be. Computer simulations were conducted to validate theoretical results.",2004,0, 4630,Stonehenge: a fault-tolerant real-time network-attached storage device,"Stonehenge is a real-time network-attached storage device (NASD) that guarantees real-time data delivery to network clients even across single-disk failures. Stonehenge supports both best-effort and real-time disk read/write services, which are accessed through an object-based interface. Data access requests sent to Stonehenge can be serviced in a server push or a client pull mode. Stonehenge's ability to guarantee real-time disk performance results from a cycle-based scan-order disk scheduling mechanism. However, Stonehenge's disk I/O cycle is either completely utilized or completely idle. This on-off disk scheduling model effectively reduces the power consumption of the disk subsystem, without increasing the buffer size requirement. Finally, Stonehenge exploits unused disk storage space and dynamically maintains additional redundancy beyond the RAID5-style parity. This extra redundancy, typically in the form of disk block replication, reduces the time to reconstruct the data on the failed disk. This paper describes the system architecture of Stonehenge and reports preliminary performance measurements collected from an initial Linux-based prototype implementation using Fast Ethernet and UltraSCSI disks.",2001,0, 4631,Validation of object oriented software design with fault tree analysis,"Software plays an increasing role in safety-critical systems. Increasing the quality and reliability of software has become the major objective of the software development industry. Researchers and industry practitioners look for innovative techniques and methodologies that could be used to increase their confidence in software reliability.
Fault tree analysis (FTA) is one method under study at the Software Assurance Technology Center (SATC) of NASA's Goddard Space Flight Center to determine its relevance to increasing the quality and the reliability of software. This paper briefly reviews some of the previous research in the area of software fault tree analysis (SFTA). Next we discuss a roadmap for the application of SFTA to software, with special emphasis on object-oriented design. This is followed by a brief discussion of the paradigm for transforming a software design artifact (i.e., a sequence diagram) to its corresponding software fault tree. Finally, we discuss challenges, advantages and disadvantages of SFTA.",2003,0, 4632,A Conceptual Framework to Integrate Fault Prediction Sub-Process for Software Product Lines,"Software product line engineering is a recent, growing paradigm for developing similar products using reusable core assets such as architecture and test cases. The general aim is to enhance quality and decrease development costs. Current software product line engineering frameworks apply only a few quality assurance activities, but today's single-system engineering has many more quality assurance activities that can be adapted to software product lines. In this study, a software fault prediction sub-process is integrated into the software product line engineering framework and the key activities are defined. This approach will improve quality and enhance the testing process for software product lines.",2008,0, 4633,A translation of State Machines to temporal fault trees,"State Machines (SMs) are increasingly being used to gain a better understanding of the failure behaviour of safety-critical systems. In dependability analysis, SMs are translated to other models, such as Generalized Stochastic Petri Nets (GSPNs) or combinatorial fault trees. The former does not enable qualitative analysis, whereas the latter allows it but can lead to inaccurate or erroneous results, because combinatorial fault trees do not capture the temporal semantics expressed by SMs. In this paper, we discuss the problem and propose a translation of SMs to temporal fault trees using Pandora, a recent technique for introducing temporal logic to fault trees, thus preserving the significance of the temporal sequencing of faults and allowing full qualitative analysis. Since dependability models inform the design of condition monitoring and failure prevention measures, improving the representation and analysis of dynamic effects in such models can have a positive impact on proactive failure avoidance.",2010,0, 4634,Statechart Features and Pre-Release Defects in Software Maintenance,Statecharts is a design notation for modeling reactive systems that is part of the Unified Modeling Language (UML) and is commonly used in the automotive and telecommunication software industries. In this paper we present a study of how the use of some statechart features correlates with the number of pre-release defects in the maintenance of large systems. We discuss possible causes for these correlations and provide some advice to both UML practitioners and designers of new visual design languages.,2007,0, 4635,Using Static Analysis to Find Bugs,"Static analysis examines code in the absence of input data and without running the code. It can detect potential security violations (SQL injection), runtime errors (dereferencing a null pointer) and logical inconsistencies (a conditional test that can't possibly be true).
Although a rich body of literature exists on algorithms and analytical frameworks used by such tools, reports describing experiences in industry are much harder to come by. The authors describe FindBugs, an open source static-analysis tool for Java, and experiences using it in production settings. FindBugs evaluates what kinds of defects can be effectively detected with relatively simple techniques and helps developers understand how to incorporate such tools into software development.",2008,0, 4636,Fault Tolerance of Multiprocessor-Structured Control System by Hardware and Software Reconfiguration,"Since the traditional redundancy for fault tolerance of a control system is complex in structure and expensive, a novel method for fault tolerance of a multiprocessor-structured control system by hardware and software reconfiguration is presented. Based on the fact that the control system is composed of several processors, this method performs fault detection by self-diagnosis implemented in each processor and validation of information exchanged between the processors, and tolerates faults through hardware and software reconfiguration carried out by a monitoring and configuring device. A security strategy and operation modes are presented. The principle of the monitoring and configuring device is discussed in detail. The method was validated on a DC motor control system with satisfactory results.",2007,0, 4637,An Extension of Differential Fault Analysis on SMS4,"SMS4 is a 128-bit block cipher released by China in 2006 as the symmetric-key encryption standard for Wireless Local Area Networks (WLAN). Based on the differential analysis principle, we propose an extension of the differential fault attack on the SMS4 cipher. Mathematical analysis shows that our attack can recover its secret key by introducing about 40 faulty ciphertexts. Our work expands the possible fault injection locations in SMS4.",2010,0, 4638,Research of remote fault diagnostic system based on Grid,"So far, numerous fault diagnosis methods have been proposed. However, given the complexity of modern equipment, the variability of faults, and trends in global manufacturing, it is difficult for any single independent organization to complete all the work of fault diagnosis. This article reviews the development of equipment fault diagnosis technology, in particular remote fault diagnosis systems based on Internet technology, and on this basis proposes a new fault diagnosis method, grid-based remote fault diagnosis, which integrates the resources of related organizations to work together. The architecture of this diagnostic system is proposed, and the workflow of the system is described.",2009,0, 4639,The ASDMCon Project: The Challenge of Detecting Defects on Construction Sites,"Techniques for three dimensional (3D) imaging and analysis of as-built conditions of buildings are gaining acceptance in the architecture, engineering, and construction (AEC) community. Early detection of defects on construction sites is one domain where these techniques have the potential to revolutionize an industry, since construction defects can consume a significant portion of a project's budget. The ASDMCon project is developing methods to aid site managers in detecting and managing construction defects using 3D imaging and other advanced sensor technologies.
This paper presents an overview of the project, its 4D visualization environment, and the 3D segmentation and recognition strategies that are being employed to automate defect detection.",2006,0, 4640,Transient fault characterization in dynamic noisy environments,"Technology trends are increasing the frequency of serious transient (soft) faults in digital systems. For example, ICs are becoming more susceptible to cosmic radiation, and are being embedded in applications with dynamic noisy environments. We propose a generic framework for representing such faults and characterizing them on-line. We formally define the impact of a transient fault in terms of three basic parameters: frequency, observability and severity. We distinguish fault modes in systems whose noise environment changes dynamically. Based on these ideas, the problem of designing on-line architectures for transient fault characterization is formulated and analyzed for several optimization goals. Finally, experiments are described that determine transient fault impact and the corresponding tests for various simulated fault modes of the ISCAS-89 benchmark circuits.",2005,0, 4641,Minimizing temperature drift errors of conditioning circuits using artificial neural networks,"Temperature drift errors are a problem that affects the accuracy of measurement systems. When small amplitude signals from transducers are considered and the environmental conditions of conditioning circuits span a large temperature range, temperature drift errors have a real impact on system accuracy. In this paper, a solution to overcome the problem of temperature drift errors of conditioning circuits is proposed. As an example, a thermocouple-based temperature measurement system is considered, and the stability of its conditioning circuit (AD595) is analyzed in two cases: with and without temperature drift error compensation. An Artificial Neural Network (ANN) is used for data optimization and a Virtual Instrument, using GPIB instrumentation, is used to collect experimental data. Final results show a significant improvement in the accuracy of the system when the proposed temperature drift error compensation technique is applied to compensate errors caused by temperature variations.",2000,0, 4642,Tensor reduction error analysis: Applications to video compression and classification,"Tensor based dimensionality reduction has recently been extensively studied for computer vision applications. To our knowledge, however, there exists no rigorous error analysis of these methods. Here we provide the first error analysis of these methods and provide error bound results similar to the Eckart-Young theorem, which plays a critical role in the development and application of singular value decomposition (SVD). Besides performance guarantees, these error bounds are useful for subspace size determination according to the required video/image reconstruction error. Furthermore, video surveillance/retrieval, 3D/4D medical image analysis, and other computer vision applications require particular reduction in spatio-temporal space, but not along the data index dimension. This motivates a D-1 tensor reduction. Standard methods such as higher-order SVD (HOSVD) compress data in all index dimensions and thus cannot perform the classification and pattern recognition tasks. We provide an algorithm and error bound analysis of the D-1 factorization for spatio-temporal data dimensionality reduction.
Experiments on video sequences demonstrate that our approach outperforms previous dimensionality reduction methods for spatio-temporal data.",2008,0, 4643,"FLAASH, a MODTRAN4-based atmospheric correction algorithm, its application and validation","Terrain categorization and target detection algorithms applied to Hyperspectral Imagery (HSI) typically operate on the measured reflectance (of Sun and sky illumination) by an object or scene. Since the reflectance is a non-dimensional ratio, the reflectance by an object is nominally not affected by variations in lighting conditions. Atmospheric Correction (also referred to as Atmospheric 'Compensation', 'Characterization', etc.) Algorithms (ACAs) are used in applications of remotely sensed HSI data to correct for the effects of atmospheric propagation on measurements acquired by air and space-borne systems. The Fast Line-of-sight Atmospheric Analysis of Spectral Hypercubes (FLAASH) algorithm is an ACA created for HSI applications in the visible through shortwave infrared (Vis-SWIR) spectral regime. FLAASH derives its 'physics-based' mathematics from MODTRAN4.",2002,0, 4644,Incorporating varying test costs and fault severities into test case prioritization,"Test case prioritization techniques schedule test cases for regression testing in an order that increases their ability to meet some performance goal. One performance goal, rate of fault detection, measures how quickly faults are detected within the testing process. In previous work (S. Elbaum et al., 2000; G. Rothermel et al., 1999), we provided a metric, APFD, for measuring rate of fault detection, and techniques for prioritizing test cases to improve APFD, and reported the results of experiments using those techniques. This metric and these techniques, however, applied only in cases in which test costs and fault severity are uniform. We present a new metric for assessing the rate of fault detection of prioritized test cases that incorporates varying test case and fault costs. We present the results of a case study illustrating the application of the metric. This study raises several practical questions that might arise in applying test case prioritization; we discuss how practitioners could go about answering these questions.",2001,0, 4645,Extended Finite State Machine Based Test Derivation Driven by User Defined Faults,"Test derivation based on user defined faults deals with the generation of tests that check if some parts of a given specification are correctly implemented in a corresponding implementation. In this paper, we present a method for test derivation based on user defined faults for extended FSM (EFSM) specifications. Given an EFSM specification ES and a set of selected transitions of ES, the method returns a test suite that is complete with respect to output and transfer faults at the selected transitions, i.e., it checks whether the selected transitions are correctly implemented in a corresponding EFSM implementation. Test derivation is based on a formally defined fault model (conformance relation and types of faults) and is guided by some conditions established for complete test derivation. To reduce test derivation effort, we propose to use appropriate slices of the specification EFSM. The slices, on the one hand, preserve the facilities of the original (before slicing) specification EFSM for traversing the selected transitions and for distinguishing their final states; on the other hand, these slices are much smaller than the given specification.
Application examples and a simple case study are provided.",2008,0, 4646,Fault Detection Probability Analysis for Coverage-Based Test Suite Reduction,"Test suite reduction seeks to reduce the number of test cases in a test suite while retaining a high percentage of the original suite's fault detection effectiveness. Most approaches to this problem are based on eliminating test cases that are redundant relative to some coverage criterion. The effectiveness of applying various coverage criteria in test suite reduction is traditionally based on empirical comparison of two metrics derived from the full and reduced test suites and information about a set of known faults: (1) percentage size reduction and (2) percentage fault detection reduction, neither of which quantitatively takes test coverage data into account. Consequently, no existing measure expresses the likelihood of various coverage criteria to force coverage-based reduction to retain test cases that expose specific faults. In this paper, we develop and empirically evaluate, using a number of different coverage criteria, a new metric based on the ""average expected probability of finding a fault"" in a reduced test suite. Our results indicate that the average probability of detecting each fault shows promise for identifying coverage criteria that work well for test suite reduction.",2007,0, 4647,SVM Theory and Its Application in Fault Diagnosis of HVDC System,"Support vector machine (SVM), which is based on statistical learning theory, is a universal machine learning method. A fault diagnosis approach for the nonlinear, highly controllable high voltage direct current (HVDC) system based on the SVM method is proposed, which takes full advantage of the effectiveness and superiority of SVM in dealing with small samples and solves many common problems in HVDC system fault diagnosis. A simulation model of an HVDC system is set up, and the performance of SVM models under different parameters, using polynomial and RBF kernel functions respectively, is compared. Results show the superiority of the SVM method, as well as the validity and feasibility of the proposed approach.",2007,0, 4648,Temporal error analysis for compact cross-loop direction-finding HF radar,"Surface currents and ocean swell are measured using an HF radar with a compact cross-loop antenna design. This unique direction-finding radar has been widely used for long-term monitoring and for coastal oceanography applications over the last ten years. Numerous studies have evaluated errors associated with these devices under a host of environmental and seasonal conditions, and reliable comparisons between HF radar surface current measurements and those from other technologies indicate a range of 0.08 to 0.15 m/s, while internal error estimates that are made using the spread of data values in a time series or spectrum from the radar systems generally lead to errors of around 0.2 m/s. These error estimates are typically based on manufacturer-recommended operating parameters that can include up to 25 specific parameters or assumptions made about the data. In particular, one of the dominant parameters in computing ocean surface currents from SeaSonde devices is the averaging time of both the power spectra and the computed radial components of the surface current vectors. This paper explores the relative effect of temporal averaging for two SeaSonde stations in Western Australia.
By analysing 31 days of radial data, we evaluate the statistical characteristics of noise in the spectra and determine the limits to the assumption that it represents a stationary ensemble of independent samples. The improvement in accuracy as we integrate over different time intervals from 10 to 180 minutes is determined by these characteristics. These results are applied to error analysis for the radial components of surface currents and compared with the errors in the gridded values of the radial components. Finally, the effects of calibrated and uncalibrated antenna patterns on the radial components are evaluated in the context of the temporal and spatial errors.",2010,0, 4649,Development of Defect Classification Algorithm for POSCO Rolling Strip Surface Inspection System,"Surface inspection system (SIS) is an integrated hardware-software system which automatically inspects the surface of the steel strip. It is equipped with several cameras and illumination over and under the steel strip roll and automatically detects and classifies defects on the surface. The performance of the inspection algorithm plays an important role not only in quality assurance of the rolled steel product, but also in improvement of the strip production process control. The current implementation of the POSCO SIS detects defects well; however, its classification performance is not satisfactory. In this paper, we introduce the POSCO SIS and suggest a new defect classification algorithm based on the support vector machine technique. The suggested classification algorithm shows good classification ability and generalization performance.",2006,0, 4650,A Novel Approach for Arcing Fault Detection for Medium-/Low-Voltage Switchgear,"Switchgear arcing faults have been a primary cause for concern for the manufacturing industry and safety personnel alike. The deregulation of the power industry being in full swing and the ever-growing competitiveness in the distribution sector call for the transition from preventive to predictive maintenance. Switchgears form an integral part of the distribution system in any power system setup. Keeping in mind switchgear arcing faults, the aforementioned transition applies, most of all, to the switchgear industry. Apart from the fact that they are the primary cause of serious injuries to electrical workers worldwide, switchgear arcing faults directly affect the quality and continuity of electric power to the consumers. A great amount of technological advancement has taken place in the development of arc-resistant/proof switchgears. However, most of these applications focus on minimizing the damage after the occurrence of the arcing fault. The problem associated with the compromise on the quality and continuity of electric power in such a scenario still awaits a technically as well as economically feasible solution. This paper describes the development of a novel approach for the detection of arcing faults in medium-/low-voltage switchgears. The basic concept involves the application of differential protection for the detection of any arcing within the switchgear. The new approach differs from the traditional differential concept in that it employs higher frequency harmonic components of the line current as the input for the differential scheme. Actual arc-generating test benches have been set up in the Power System Simulation Laboratory at the Energy Systems Research Center to represent both medium- and low-voltage levels.
Hall effect sensors in conjunction with data acquisition in LabVIEW are employed to record the line current data before, during, and after the arcing phenomenon. The methodology is first put to the test via a simulation approach for medium-voltage levels and then corroborated by actual hardware laboratory testing for low-voltage levels. The plots derived from the data gathering and simulation process clearly underline the efficiency of this approach in detecting switchgear arcing faults. Both magnitude and phase differential concepts seem to provide satisfactory results. Apart from the technical efficiency, the approach is financially feasible, considering the fact that differential protection is already comprehensively employed worldwide.",2009,0, 4651,Critical cases of a CNC drive system-fault diagnosis via a novel architecture,"The application of a novel fuzzy-neural architecture to diagnose faults in critical cases of a CNC X-axis drive system is described. The proposed architecture utilizes the concepts of fuzzy clustering, fuzzy decision making and RBF neural networks to create a suitable model-based fault detection and isolation (FDI) structure. In the present application, the authors emphasize the faults due only to the nonlinear components and the components that have a more significant effect on the overall accuracy of the drive system. In 100 tests on the system, i.e. the appropriate model, the diagnostic system identified fault location and fault size 100 per cent correctly.",2002,0, 4652,Metastable defect in Cz-Si: electrical properties and quantitative correlation with different impurities,"The application of advanced lifetime spectroscopy on standard boron-doped Czochralski silicon (Cz-Si) revealed that the metastable defect acts as an attractive Coulomb center (σn(T) = σn0T^-2) which is localized in the upper half of the band gap at EC-Et = 0.41 eV and has an electron/hole capture cross section ratio k = σn/σp = 9.3. While the exact electronic structure has now been determined for the first time, the microscopic structure has still not been clarified convincingly. In order to identify the major components of the Cz-defect, the impact of different impurities on the defect concentration has been examined carefully on a wide range of different Cz-materials. The linear correlation with boron has been confirmed. Concerning the correlation with the interstitial oxygen concentration, a correlation exponent between 1.5 and 1.9 has been found. This exponent is shifted to its lower bound after an optimized high-temperature pretreatment. The strong scatter in the oxygen correlation and its dependence on the thermal pretreatment point towards an indirect impact of oxygen on the defect center. Since the vacancy concentration is known to strongly influence oxygen behavior, its impact on the metastable defect concentration is investigated.",2003,0, 4653,Fault modelling and co-simulation in flowFET-based biological array systems,"The area of microelectronic fluidic arrays is rapidly developing into many different commercial products, one being biological arrays. As a consequence, efficient and reliable design and testing of these systems becomes of crucial importance.
This paper shows some of the latest results obtained with regard to the flowFET fluidic transfer device, its fault-free and fault modelling, and the co-simulation of a microelectronic fluidic array for life sciences.",2006,0, 4654,Influence of the nonlinearity of measurement transformers on effectiveness of module error correction,The article presents the program spectral line method of module error correction in linear input circuits. The method requires knowledge of the frequency characteristics of the module errors and the phase errors of the input circuit to be corrected. The method was used for frequency error correction of measurement transformers applied in electronic power transducers. Such input circuits are nonlinear and distort the measured signals. The article analyses the influence of the nonlinearity of the input circuits upon the effectiveness of module frequency error correction in measurements of the root-mean-square value.,2008,0, 4655,Error detection for adaptive computing architectures in spacecraft applications,"The Australian FedSat satellite will incorporate a payload to validate the use of adaptive computing architectures in spacecraft applications. The technology has many exciting benefits for deployment in spacecraft, but the space environment also presents unique challenges which must be addressed. An important consideration is that modern SRAM Field Programmable Gate Arrays (FPGAs), such as the Xilinx 4000 device used on FedSat, are vulnerable to a range of radiation-induced errors. A system is required to detect and mitigate these effects. General strategies have been described in the literature, but this work is believed to be the first deployment of a complete space-ready FPGA error control system. A primary aim of the system is to quantify the range of effects that occur, so emphasis is placed on classifying a wide range of errors. Different strategies have distinct capabilities, so the final system employs a blend of detection techniques.",2001,0, 4656,Recent experiences in the industrial exploitation of principal component based fault detection methods,"Summary form only given. There has been a plethora of research and many industrial studies involving the application of multivariate statistical techniques for the detection and isolation of faults on process plants. Despite all this work and the reported success of the studies, there remain very few online applications of the technology. To address this issue and to determine the capabilities and limitations of the multivariate statistical techniques, the Control Technology Centre Ltd. has undertaken a comprehensive study aimed specifically at the development of online multivariate solutions for process plants. This paper provides details of several of these investigations. These investigations highlight a number of factors which should be considered when conducting multivariate analysis. For example, there appears to be a need for consideration of cause and effect relationships, which is often ignored in other application studies. In addition, the integration of the multivariate statistical routines into a hierarchical system is demonstrated to provide the potential for robust and self-diagnosing control systems.",2002,0, 4657,Dynamic reconfiguration for management of radiation-induced faults in FPGAs,"Summary form only given.
We describe novel methods of exploiting the partial, dynamic reconfiguration capabilities of Xilinx Virtex 1000 FPGAs to manage transient faults due to radiation in space environments. The on-orbit fault detection scheme uses a radiation-hardened reconfiguration controller to continuously monitor the configuration bit streams of 9 Virtex FPGAs and to correct errors by partial, dynamic reconfiguration of the FPGAs while they continue to execute. To study single event upset (SEU) impact on our signal processing applications, we use a novel fault injection technique to corrupt configuration bits, thereby simulating SEU faults. By using dynamic reconfiguration, we can run the corrupted designs directly on the FPGA hardware, giving many orders of magnitude speed-up over purely software techniques. The fault injection method has been validated against proton beam testing, showing 97.6% agreement. Our work highlights the benefits of dynamic reconfiguration for space-based reconfigurable computing.",2004,0, 4658,The z990 first error data capture concept,"Superior availability is one of the outstanding features of modern zSeries machines, among the most highly rated of any existing computer platforms in this regard. Many features contribute to this characteristic, some in hardware, some in software. This paper describes the first error data capture (FEDC) concept in the zSeries 990. The concept is used for both zSeries integration efficiency and its ability to gain field data for problem determination in the user environment. FEDC is not a single function, but part of all internal software (firmware) in the z990. This paper explains the overall concept and implementation details of the various internal functional layers (subsystems).",2004,0, 4659,Super-resolution with integrated radial distortion correction,"Super-resolution (SR) is a technique where a high resolution (HR) image is obtained from a low resolution (LR) image sequence. In SR, the camera scans scenery to form a mosaic of overlapped image frames. To achieve SR, the relative movement across the low resolution set of images is estimated as a first step, and the spatial resolution is later increased through data fusion. Resolution increase is important not only for better image visualization, but also for recovering additional image detail. In the image acquisition process, camera lenses induce optical distortions. In the proposed algorithm, radial distortion is considered. From a priori camera motion information, a super-resolution algorithm is proposed. Unlike other algorithms, radial distortion elimination is incorporated. Radial distortion is notably present in low cost sensors. To validate the algorithm, a software system was developed and tested with real environments and resolution pattern measurements.",2005,0, 4660,A Bio-network Based Fault-Tolerant Architecture for Supervisory System,"Supervisory systems show increasing importance in many plants to ensure secure and stable production, and fault tolerance ability is a key problem. Concerned with the problems of low automation level and high operation cost, a layered Bio-Network inspired by biological systems is studied, and the Bio-Entity as its functional unit is analyzed. In the bottom layer, the communication infrastructure is built on Web Services to hide the differences among underlying heterogeneous systems. In the Bio-System layer, each Bio-Entity is delegated by an agent to gain its functions. Then, in the application layer, various services are encapsulated upon the Bio-Entity.
The application interface makes integration with other systems more convenient. Based on the Bio-Network, a novel framework of a supervisory system for typical waterworks is proposed. The fault-tolerant functions are analyzed in the context of the Bio-Network. Practical application shows that the Bio-Network-based system configuration is improved and the cost is reduced.",2008,0, 4661,Quality control protocol for frame-to-frame PET motion correction,"Subject motion during Positron Emission Tomography (PET) brain scans can reduce image quality, and may lead to incorrect biological outcome measures, especially during analysis of dynamic data sets. This is particularly relevant when imaging with state-of-the-art scanners such as the High Resolution Research Tomograph (HRRT, Siemens Medical Solutions). Motion correction via frame-to-frame image realignment is simpler to implement and requires fewer computing resources than methods that correct for motion during data reconstruction and has been shown to significantly improve the accuracy of dynamically-derived biological variables. However, an ongoing problem is a lack of objective criteria to validate the accuracy of frame-to-frame realignment. Visual inspection of realigned images is a common method of validation but requires a significant amount of operator time and results may vary from one operator to another. This work presents a quality control protocol that automatically flags inadequate realignments based on the comparison of motion transformation matrices obtained from two independent sources: the Polaris Vicra optical tracking device and the image based realignment algorithm AIR (Automated Image Registration). A metric was computed to determine the difference between the transformations from both methods. Realignments were accepted or flagged based on the value of the metric. Since the two methods rely on independent motion assessment tools, the chance of both algorithms giving consistently wrong estimates is low. Human test cases show that the quality control protocol is capable of correctly identifying both acceptable and incorrect realigned images, thus providing an objective quality control metric. Implementation of the protocol reduces the number of images requiring visual inspection by 72% and operator time required by 50%, decreasing both operator labour and operator-dependent biases.",2009,0, 4662,Estimating the Soft Error Vulnerability of Register Files via Interprocedural Data Flow Analysis,"Subsequent to the walls of performance and power consumption, the dependability of computing in the presence of soft errors has become a growing design concern. Since Register Files (RFs) are accessed very frequently and cannot be well protected, soft errors occurring in them are one of the top threats to program reliability. To assess the soft-error vulnerability of RFs, this paper presents a static estimation method via interprocedural data flow analysis. Adopting a previous method, the vulnerability of a register is first decomposed into intrinsic and conditional basic block vulnerabilities. Under the prerequisite of context sensitivity, we focus on the computation of the post-conditions of basic blocks, which can be viewed as the probability that the target register remains live in future use. Finally, the program reliability under the occurrence of soft errors in RFs can be calculated quantitatively. Experimental results from the MiBench benchmarks indicate that our method is more accurate, and is compatible with the AVF methods.
We also reveal that the reliability of a program has a connection with its structure, such as the RVF factors, which suggests adopting application-specific protection mechanisms for tolerating soft errors occurring in RFs.",2010,0, 4663,Improved Error Estimate for Extraordinary Catmull-Clark Subdivision Surface Patches,"Summary form only given. Based on an optimal estimate of the convergence rate of the second order norm, an improved error estimate for extraordinary Catmull-Clark subdivision surface (CCSS) patches is proposed. If the valence of the extraordinary vertex of an extraordinary CCSS patch is even, a tighter error bound and, consequently, a more precise subdivision depth for a given error tolerance, can be obtained. Furthermore, examples of adaptive subdivision illustrate the practicability of the error estimation approach.",2007,0, 4664,Accelerating bit error rate testing using a system level design tool,"System level design tools for creating DSP designs reduce the amount of time needed to create a DSP design, in part by eliminating the need for verification between system model and hardware implementation. The design is developed within a high-level modeling environment. This description is compiled into a hardware description language, and synthesized by traditional FPGA (field programmable gate array) tools. The use of system level tools can eliminate the need for extensive hardware knowledge. We demonstrate how such tools can be used to build a bit error rate (BER) tester, and how hardware co-simulation of the entire system provided a 10,000x speed-up over a pure software simulation.",2003,0, 4665,Track Down HW Function Faults Using Real SW Invariants,System level functional verification by running a real software stack on an FPGA prototype is essential for achieving a high quality design. However, it is hard to find the exact source of hardware function faults when runs of large closed-source system software fail. This paper proposes the idea of tracking down faults through real system software control flow invariants with current trace output hardware support. It captures a qualified control flow invariant trace in a reference execution and a test trace, and tracks down faults by comparing the offline invariant trace with the test trace. The approach can deal with both deterministic and nondeterministic execution. We implemented the proof of concept in the full system simulator Bochs. Our experimentation with the real closed-source MS Windows XP suggests that the approach is effective in tracking down hardware function faults.,2009,0, 4666,Efficient Reliability Assessment of Redundant Systems Subject to Imperfect Fault Coverage Using Binary Decision Diagrams,"Systems requiring very high levels of reliability, such as aircraft controls or spacecraft, often use redundancy to achieve their requirements. This paper provides highly efficient techniques for computing the reliability of redundant systems involving simple k-out-of-n arrangements, and those involving complex structures which may include imbedded k-out-of-n structures. Techniques for modeling systems subject to imperfect fault coverage must be appropriate to the redundancy management architecture utilized by the system.
Systems for which coverage can be associated with each of the redundant components, perhaps taking advantage of the component's built-in test capability, are modeled with what we term element level coverage (ELC), while systems which utilize majority voting for the selection from among redundant components are modeled with fault level coverage (FLC). In FLC, system coverage is a function of the fault sequence, i.e., coverage will be greater for the initial faults, which can utilize voting for redundant component selection, but will have a lower coverage value when the system must select from among the last two operational components. Occasionally, FLC systems can be adequately modeled using a simplified version of FLC in which it can be assumed that the initial fault coverage values are unity. This model is called one-on-one level coverage (OLC). The FLC algorithms provided in this paper are of particular importance for the correct modeling of systems which utilize voting to select from among their redundant elements. While combinatorial and recursive techniques for modeling ELC, FLC, and OLC have been previously reported, this paper presents new table-based algorithms and binary decision diagram-based algorithms for these models which have superior computational complexity. The algorithms presented here provide the ability to analyse large and complex systems very efficiently, in fact with a computational complexity comparable to the best available techniques for systems with perfect fault coverage.",2008,0, 4667,"Providing Real-Time Applications With Graceful Degradation of QoS and Fault Tolerance According to (m,k)-Firm Model","The (m,k)-firm model has recently drawn a lot of attention. It provides a flexible real-time system with graceful degradation of the quality of service (QoS), thus achieving fault tolerance in case of system overload. In this paper, we focus on the distance-based priority (DBP) algorithm as it presents the interesting feature of dynamically assigning the priorities according to the system's current state (QoS-aware scheduling). However, DBP cannot readily be used for systems requiring a deterministic (m,k)-firm guarantee since the schedulability analysis was not done in the original proposition. In this paper, a sufficient schedulability condition is given to deterministically guarantee a set of periodic or sporadic activities (jobs) sharing a common non-preemptive server. This condition is applied to two case studies showing its practical usefulness for both bandwidth dimensioning of a communication system providing graceful degradation of QoS and task scheduling in an in-vehicle embedded system allowing fault tolerance.",2006,0, 4668,Comparison of Euclidean distance based neural networks for analog Integrated Circuits fault recognition - LVQs & SOM,"The advent of integrated circuits (ICs) and the subsequent miniaturization of electronic circuitry has brought considerable difficulties in the identification of faults in integrated circuits during the testing phase of manufacturing and subsequent mass production. The artificial neural network (ANN) augurs well for handling such complex tasks in such systems, as it generalizes well without the need to explicitly define the relationship between variables. There has been a resurgence of interest among researchers in utilizing ANNs for recognizing faults in analog circuits.
This work aims at analyzing the role played by the various training parts of both Euclidean distance based ANNs, namely the self organizing feature maps (SOM) and various versions of the learning vector quantization neural network (LVQNN), i.e., LVQ1, LVQ2, LVQ2.1 and LVQ. Extensive studies have been conducted to ascertain the role played by the learning rate and other unique parameters, such as the role played by normalization as part of the preprocessing technique and the number of iterations for convergence. Moreover, the results have been compared with the generalized multilayer feedforward network with the back propagation algorithm. The best combination of network parameters was also determined. For this purpose an analog filter circuit with one fault-free condition and 10 single hard faults was simulated using SPICE simulation software. Experimental results demonstrate the high classification accuracy and adaptability of both Euclidean classifiers and their suitability for fault recognition in analog circuits.",2007,0, 4669,Applications of the Fault Decoupling Device to Improve the Operation of LV Distribution Networks,"The aim of this paper is to present the operating principle of a new resonant device, called the fault decoupling device (FDD), able to improve power quality in electrical distribution systems. In low-voltage networks, this device can be employed in order to mitigate voltage dips due to faults or large induction motor startup. Moreover, in the presence of distributed-generation (DG) units, the FDD allows one to obtain various benefits such as a reduction of the fault current in each node of the network and an increase in the voltage at the DG unit node. In order to show the performance of the FDD, analytical studies and computer simulations were carried out which took into account various working operation conditions. Finally, the prototype of the FDD as well as the preliminary experimental results are presented.",2008,0, 4670,High-Speed Error-Correction for Leading Zero/One Anticipator,"The algorithm and implementation of a leading zero anticipator (LZA) are vital for the performance of a high-speed floating-point adder in current state-of-the-art microprocessor design. However, in predicting the ""shift amount"" by a conventional LZA design, there may be a one-bit error in the result. This paper compares conventional LZA designs and presents a novel parallel error-detection algorithm for correcting the result of the LZA before it is sent off, to improve the performance of the LZA. The novel error-detection algorithm does not depend on any carry-in signals of the adder. Therefore, it allows the error correction to proceed in parallel with the mantissa addition and significantly increases the speed with which the LZA generates correct results.",2010,0, 4671,Application-level fault tolerance in the orbital thermal imaging spectrometer,"Systems that operate in extremely volatile environments, such as orbiting satellites, must be designed with a strong emphasis on fault tolerance. Rather than rely solely on the system hardware, it may be beneficial to entrust some of the fault handling to software at the application level, which can utilize semantic information and software communication channels to achieve fault tolerance with considerably less power and performance overhead.
We show the implementation and evaluation of such a software-level approach, application-level fault tolerance and detection (ALFTD), in the orbital thermal imaging spectrometer (OTIS).",2004,0, 4672,Automatic diagnosis and location of open-switch fault in brushless DC motor drives using wavelets and neuro-fuzzy systems,"The faulty performance of permanent-magnet (PM) brushless dc motor drives is studied under open-switch conditions. The wavelet transform is used to extract diagnostic indices from the current waveform of the motor dc link. An intelligent agent based on adaptive neuro-fuzzy inference systems (ANFIS) is developed to automate the fault identification and location process. ANFIS is trained offline using simulation results under various healthy and faulty conditions obtained from a lumped-parameter network model. ANFIS testing shows that the system could not only detect the open-switch fault, but also identify the faulty switch. Good agreement between simulation results and measured waveforms confirms the effectiveness of the proposed methodology.",2006,0, 4673,Geometric Correction for Cone-Beam CT Reconstruction and Artifacts Reduction,"The FDK algorithm is one of the most widely referenced and used algorithms for cone-beam CT reconstruction along a circular trajectory because of its simplicity of implementation and computational efficiency. However, images reconstructed by the FDK algorithm from real projection data may be blurred without electronic correction and geometric calibration, and are often plagued by deleterious ring artifacts and shading artifacts. In this paper, images reconstructed with and without detector correction are compared based on computer experiments with a real biological object. Furthermore, algorithms for shading artifact reduction and fast ring artifact reduction are also introduced. The experimental simulation shows that these algorithms are effective in reducing ring and shading artifacts without compromising the image resolution, and produce satisfactory results.",2008,0, 4674,An Artificial Immune System Approach for Fault Prediction in Object-Oriented Software,"The features of real-time dependable systems are availability, reliability, safety and security. In the near future, real-time systems will be able to adapt themselves according to specific requirements, and real-time dependability assessment techniques will be able to classify modules as faulty or fault-free. Software fault prediction models help us develop dependable software, and they are commonly applied prior to system testing. In this study, we examine Chidamber-Kemerer (CK) metrics and some method-level metrics for our model, which is based on the artificial immune recognition system (AIRS) algorithm. The dataset is part of the NASA Metrics Data Program, and the class-level metrics are from the PROMISE repository. Instead of validating individual metrics, our mission is to improve the prediction performance of our model. The experiments indicate that the combination of CK and the lines of code metrics provides the best prediction results for our fault prediction model. The consequence of this study suggests that class-level data should be used rather than method-level data to construct relatively better fault prediction models.
Furthermore, this model can constitute part of a real-time dependability assessment technique in the future.",2007,0, 4675,A relative technique for characterization of PCV error of large aperture antennas using GPS data,"The Federal Aviation Administration's (FAA) local area augmentation system (LAAS) is a code-based differential global positioning system (DGPS) to be used for guidance of aircraft during the approach and landing phase of flight. Code-based multipath error was a limiting factor in the LAAS meeting category III accuracy requirements for this phase of flight. Consequently, a large aperture antenna was proposed for use at each ground-based DGPS reference site to reduce the impact of code-based multipath error to less than 0.25 m. The large aperture antenna accomplishes this task at the expense of providing limited coverage in the vertical plane (0°-35°). An ancillary high-zenith antenna is then necessary to track satellites from 35° to 90° in elevation. Phase center variation (PCV) has been observed to be a significant source of GPS error when translating the accumulated carrier phase (ACP) and pseudorange (PR) data between the two aforementioned antennas to a common phase center. This paper presents a relatively simple method for reducing the error caused by the PCV of a large aperture antenna. This is done by comparing the GPS ACP data from a large aperture vertically polarized dipole array antenna with undesirably large PCV (more than 20 cm), with similar data from a right-hand circularly polarized high-zenith antenna with small PCV (relative to the physical center of the array). A triple-differencing technique of the ACP data between the two antennas across successive epochs is used to characterize the PCV of the large aperture antenna as a function of elevation angle. The PCV correction factor is applied to the ACP data from the large aperture antenna array. The ACP data from the two antennas are then combined in software to appear as one antenna with minimal offset from PCV. A field test that employed this technique is described and test results are provided. This technique confers an advantage over other methods for PCV determination, which require precision mounting and/or robotic motion capabilities, or the use of an antenna range.",2005,0, 4676,Differences Between Observation and Sampling Error in Sparse Signal Reconstruction,"The field of Compressed Sensing has shown that a relatively small number of random projections provide sufficient information to accurately reconstruct sparse signals. Inspired by applications in sensor networks in which each sensor is likely to observe a noisy version of a sparse signal and subsequently add sampling error through computation and communication, we investigate how the distortion differs depending on whether noise is introduced before sampling (observation error) or after sampling (sampling error). We analyze the optimal linear estimator (for known support) and an l1 constrained linear inverse (for unknown support). In both cases, observation noise is shown to be less detrimental than sampling noise and low sampling rates. We also provide sampling bounds for a non-stochastic l∞ bounded noise model.",2007,0, 4677,An error evaluation scheme based on rotation of magnetic field in adaptive finite element analysis,"The finite-element analysis is widely used in the design stage of electromagnetic apparatus. The analysis accuracy depends on the characteristics of the finite-element mesh, e.g., the number of nodes, the number of elements and the shape of elements.
Recently, adaptive finite-element analysis has become one of the most promising numerical analysis techniques. In the process of the adaptive finite-element method, error evaluation is one of the important schemes. In this paper, a new error evaluation scheme, which is suitable for electromagnetic problems, is proposed. The proposed error evaluation method is then applied to two-dimensional and three-dimensional magnetostatic field problems for its verification",2006,0, 4678,Forward error correction based on block turbo code with 3-bit soft decision for 10-Gb/s optical communication systems,"The first experimental demonstration of a forward error correction (FEC) for 10-Gb/s optical communication systems based on a block turbo code (BTC) is reported. Key algorithms, e.g., extrinsic information, log-likelihood ratio, and soft decision reliability, are optimized to improve the correction capability. The optimum thresholds for a 3-bit soft decider are investigated analytically. A theoretical prediction is verified by experiment using a novel 3-bit soft decision large scale integrated circuit (LSI) and a BTC encoder/decoder evaluation circuit incorporating a 10-Gb/s return-to-zero on-off keying optical transceiver. A net coding gain of 10.1 dB was achieved with only 24.6% redundancy for an input bit error rate of 1.98×10^-2. This is only 0.9 dB away from the Shannon limit for a code rate of 0.8 for a binary symmetric channel. Superior tolerance to error bursts given by the adoption of 64-depth interleaving is demonstrated. The ability of the proposed FEC system to achieve a receiver sensitivity of seven photons per information bit when combined with return-to-zero differential phase-shift keying modulation is demonstrated.",2004,0, 4679,Trends in Firewall Configuration Errors: Measuring the Holes in Swiss Cheese,"The first quantitative evaluation of the quality of corporate firewall configurations appeared in 2004, based on Check Point Firewall-1 rule sets. In general, that survey indicated that corporate firewalls often enforced poorly written rule sets. This article revisits the first survey. In addition to being larger, the current study includes configurations from two major vendors. It also introduces a firewall complexity measure. The study's findings validate the 2004 study's main observations: firewalls are (still) poorly configured, and a rule-set's complexity is (still) positively correlated with the number of detected configuration errors. However, unlike the 2004 study, the current study doesn't suggest that later software versions have fewer errors.",2010,0, 4680,An Unsupervised Diagnosis for Process Tool Fault Detection: The Flexible Golden Pattern,"The flexible golden pattern (FGP) algorithm uses a patented technology of empirical scoring to detect abnormal behavior for semiconductor processing equipment or a specific processing chamber during wafer production. This algorithm does not entirely rely on manual extraction of features from data acquired on each tool. It is able to automatically select good pattern indicators from raw (temporal) signal traces. It is able to diagnose unusual behavior regardless of specifics of a recipe, a chamber, or even a tool, if the algorithm is calibrated for such a purpose.
The algorithm does not need any complicated parameter settings; the diagnosis is established by comparison of the normal process behavior to the abnormal one.",2007,0, 4681,20th IEEE International Symposium on Defect and Fault Tolerance in VLSI Systems,The following topics are dealt with: yield analysis and modeling; scan design and test data compression; reconfiguration; error correcting codes and circuits; fault detection; fault tolerance; sensors; flash memories; error tolerance; delay fault test and timing consideration; QCA circuits; interconnect test; soft errors; test scheduling; software-based test; analog circuit design and testing.,2005,0, 4682,"The virtual analyst program: a small scale data-mining, error-analysis and reporting function","The forest inventory and analysis (FIA) program of the U.S.D.A. forest service conducts ongoing comprehensive inventories of the forest resources of the United States. Covering 11 states in the Upper Midwest, the North Central Region FIA (NCFIA) program has three tasks: (1) core reporting function, which produces the annual and 5-year inventory reports; (2) forest health measurements; and (3) scientific analysis of questions and themes that arise from our data. Annual reports provide updated views of the extent, composition, and change of a state's forests. These reports have a standard format divided into three broad categories of area, volume, and change. These annual reports also provide important early trend alerts and error checking functions. Data stored in the FIA database provide the ""raw material"" that analysts use to prepare the reports. The virtual analyst program at NCFIA is designed to serve these multiple needs. Incorporating our understanding of important trends and relationships, and ""red flags"" to be aware of, this program seeks to automate the more repetitive functions of producing reports while highlighting any anomalies that may require an analyst's investigation. This paper discusses the underlying program logic and design of the prototype. We work backwards from the Web-based application product (.Net Technology) to information-generating vehicles that connect to the forest inventory database. Finally, we discuss the opportunity to expand this report-writing function into a customized, user-defined data query and analysis function.",2005,0, 4683,"Assessment of Transmission-Line Common-Mode, Station-Originated, and Fault-Type Forced-Outage Rates","The frequency and duration of the various types of transmission-line outages have a significant impact on the operation and reliability of industrial and commercial power systems. Knowledge of the statistical characteristics of transmission-line outages (e.g., types of faults: three phase, line-to-ground, etc.) directly affects the protection and coordination practices at these facilities. Reliability modeling of industrial and commercial power systems is dependent upon transmission-line outage characteristics (e.g., terminal- and line-related sustained outages). Generalizations of transmission-line characteristics for all voltage classes can be incorrect and extremely problematic in many cases. This paper presents a summary of the results of a ten-year study of the Mid-Continent Area Power Pool transmission-line outage-data statistics for 230-, 345-, and 500-kV lines.
These data can be used for modeling the reliability of industrial and commercial facilities being serviced by transmission lines and reveal some of the common beliefs and misconceptions about transmission-line characteristics.",2010,0, 4684,The Fault Diagnosis System of Electric Locomotive Based on MAS,"The efficiency and the effect of fault diagnosis are increasingly important for electric locomotives. However, due to the complexity of the locomotive system configuration and the locomotive's inferior running conditions, it is very difficult to realize a fault diagnosis system on an electric locomotive. In this paper, a multi-objective fault diagnosis algorithm based on multi-agent technology is proposed. It considers the correlation of different fault diagnoses and takes advantage of the capabilities of the multiple agents in communication and cooperation to estimate and recognize particular faults. Furthermore, a fault diagnosis system architecture is designed based on a multi-agent system to support diagnosing faults online for train drivers. The practical application results show the efficiency of the proposed system.",2009,0, 4685,A universal fault diagnosis expert system based on fault tree,"The efficiency of diagnosis and the accuracy of the diagnosis result have been improved by combining the advantages of fault tree analysis and expert systems. The visual modeling of the fault tree model has been realized by introducing the technique of Visio component development, which not only makes the user interface more intuitive and friendly, but also corresponds closely to the fault tree model specification and makes checking the consistency and validity of the fault tree model more convenient. The modular and hierarchical approach, combined with object-oriented information description, has been applied to different diagnosis objects, which not only implements the knowledge base representation for the expert system, but also enhances the system's universal character. Finally, a C/S framework has also been built in this system using database techniques, which greatly improves the system's openness, maintainability and portability.",2009,0, 4686,ℛεℒ: a fault tolerance linguistic structure for distributed applications,"The embedding of fault tolerance provisions into the application layer of a programming language is a non-trivial task that has not found a satisfactory solution yet. Such a solution is very important, and the lack of a simple, coherent and effective structuring technique for fault tolerance has been termed by researchers in this field as the ""software bottleneck of system development"". The aim of this paper is to report on the current status of a novel fault tolerance linguistic structure for distributed applications characterized by soft real-time requirements. A compliant prototype architecture is also described. The key aspect of this structure is that it allows one to decompose the target fault-tolerant application into three distinct components respectively responsible for: (1) the functional service, (2) the management of the fault tolerance provisions, and (3) the adaptation to the current environmental conditions.
The paper also briefly mentions a few case studies and preliminary results obtained exercising the prototype",2002,0, 4687,Intelligent Selection of Fault Tolerance Techniques on the Grid,"The emergence of computational grids has led to an increased reliance on task schedulers that can guarantee the completion of tasks that are executed on unreliable systems. There are three common techniques for providing task-level fault tolerance on a grid: retrying, replicating, and checkpointing. While these techniques are varyingly successful at providing resilience to faults, each of them presents a tradeoff between performance and resource cost. As such, tasks having unique urgency requirements would ideally be placed using one of the techniques; for example, urgent tasks are likely to prefer the replication technique, which guarantees timely completion, whereas low priority tasks should not incur any extra resource cost in the name of fault tolerance. This paper introduces a placement and selection strategy which, by computing the utility of each fault tolerance technique in relation to a given task, finds the set of allocation options which optimizes the global utility. Heuristics which take into account the value offered by a user, the estimated resource cost, and the estimated response time of an option are presented. Simulation results show that the resulting allocations have improved fault tolerance, runtime, profit, and allow users to prioritize their tasks.",2007,0, 4688,A simple impulsiveness correction factor for control of electromagnetic interference in dynamic wireless applications,"The emerging software defined radio technologies will be an enabler for a new generation of dynamic wireless systems. It will also open up the possibility of allocating frequencies in a more dynamic way than today. From an intersystem-interference point of view, this can cause unforeseen problems to occur due to the increased complexity in such applications. In such applications, a measure indicating whether or not a frequency band is possible to use from an electromagnetic interference point of view must be found. A simple approach is to use the measured total average interference power within the receiver band. Since the interference impact on modern digital communication systems from an interference signal does not only depend on the power but also on the actual waveform of the interference signal, some kind of quality measure of the average-power approach would be convenient to use. In this paper, we introduce a simple quality measure of the average-power approach so that a rough adjustment for the interference-waveform properties can be done.",2006,0, 4689,The effects of quantity and location of video playback errors on the average end-users experience,"The end-user experience of a given platform or product with respect to video playback is an increasingly important aspect for engineers and product planners to understand. However, testing real world situations in an objective and repeatable fashion is complex. In an initial study, the video Gross Error Detector (GED), a tool that provides a quick and cost-efficient way to evaluate video playback, was mapped to end-users' perception of video smoothness. Video errors such as dropped, repeated or out of sequence frames were evaluated in varying quantities to understand the impact on the average end-user's perception of playback.
A second study, discussed here, was done to understand the differences in the quantity and location of these playback errors. These end-user perception studies provide useful data so the video GED can be used to monitor errors in video for quality control, benchmark video processing and algorithms, and can be rooted into the video processing system to optimize algorithms and limit settings.",2008,0, 4690,A self-resetting method for reducing error accumulation in INS-based tracking,"The error accumulation in INS systems is one of the major drawbacks which limits their applicability. Several efforts have been made to improve the performance of INS systems in tracking applications, with limited success in terms of efficiency, complexity and cost. This paper proposes a new method for modelling the actual INS error accumulation and the use of automatic self-resetting for reducing error accumulation within the INS system without the need for any external sensors or devices. The new method has been verified and tested for the case of a single-axis accelerometer. Furthermore, the method has been used to recover the trajectory of moving objects in outdoor environments when a full INS system is used. The testing results show that the new method considerably reduces the error accumulation in INS systems. Moreover, the new method increases the operation time of INS-based systems from a few seconds to a few minutes.",2010,0, 4691,Error analysis of ultra-wideband hybrid analogue-to-digital conversion,"The error analysis of ultra-wideband hybrid analogue-to-digital conversion (H-ADC), which combines time-interleaved ADC (TI-ADC) and multiple concurrent ADC (MC-ADC), is presented. To this end, an error model is introduced comprising quantisation, overflow saturation, DC-offset, gain error, time delay, and time jitter due to sample-and-hold (S&H) circuits and the clock generator. The impact of the system parameters on H-ADC performance is investigated by analytic evaluation of the error model assuming an arbitrary FDM input spectrum. The versatility of the H-ADC approach is demonstrated by a challenging future application of satellite on-board processing [1]: 1 GHz overall FDM bandwidth with 12 MHz channel spacing.",2004,0, 4692,Chaotic maps as parsimonious bit error models of wireless channels,"The error patterns of a wireless digital communication channel can be described by looking at consecutively correct or erroneous bits (runs and bursts) and at the distribution function of these run and burst lengths. A number of stochastic models exist that can be used to describe these distributions for wireless channels, e.g., the Gilbert-Elliot model. When attempting to apply these models to actually measured error sequences, they fail: measured data gives rise to two essentially different types of error patterns which cannot be described using simple error models like Gilbert-Elliot. These two types are distinguished by their run length distribution; one type in particular is characterized by a heavy-tailed run length distribution. This paper shows how the chaotic map model can be used to describe these error types and how to parameterize this model on the basis of measurement data. We show that the chaotic map model is a superior stochastic bit error model for such channels by comparing it with both simple and complex error models.
Chaotic maps achieve a modeling accuracy that is far superior to that of simple models and competitive with that of much more complex models, despite needing only six parameters. Furthermore, these parameters have a clear intuitive meaning and are amenable to direct manipulation. In addition, we show how the second type of channels can be well described by a semi-Markov model using a quantized lognormal state holding time distribution.",2003,0, 4693,Lessons learned in building a fault-tolerant CORBA system,"The Eternal system pioneered the interception approach to providing transparent fault tolerance for CORBA, which allows it to make a CORBA application reliable with little or no modification to the application or the ORB. The design and implementation of the Eternal system has influenced industrial practices by providing the basis for the specifications of the fault-tolerant CORBA standard that the Object Management Group adopted. We discuss our experience in developing the Eternal system, with particular emphasis on the challenges that we encountered and the lessons that we learned.",2002,0, 4694,JULIET: a distributed fault tolerant load balancer for .NET Web services,"The execution time of computationally-intensive applications such as protein folding and fractal generation can be reduced by implementing these applications as Web services that run in parallel. Additionally, some of these Web services may save state periodically to resume execution later on. However, currently, there is no solution to load balance this class of Web services, and to replicate the saved state for the purposes of resumption. This paper describes the architecture of JULIET, a system that load balances .NET Web services across a Windows cluster in a distributed fashion. The system is also fault tolerant since it supports failovers and replication of data generated by the Web services at the application level. The system is designed to be minimally-visible to the Web service and the client that consumes it.",2004,0, 4695,Kalman Predictive Redundancy System for Fault Tolerance of Safety-Critical Systems,"The dependence of intelligent vehicles on electronic devices is rapidly increasing the concern over fault tolerance due to safety issues. For example, an x-by-wire system, such as an electromechanical brake system in which rigid mechanical components are replaced with dynamically configurable electronic elements, should be fault-tolerant because a critical failure could arise without warning. Therefore, in order to guarantee the reliability of safety-critical systems, fault-tolerant functions have been studied in detail. This paper presents a Kalman predictive redundancy system with a fault-detection algorithm using the Kalman filter that can remove the effect of faults. This paper also describes the detailed implementation of such a system using an embedded microcontroller to demonstrate that the Kalman predictive redundancy system outperforms well-known average and median voters. The experimental results show that the Kalman predictive redundancy system can ensure the fault-tolerance of safety-critical systems such as x-by-wire systems.",2010,0, 4696,Design of a fault-tolerant satellite cluster link establishment protocol,"The design of a protocol for satellite cluster link establishment and management that accounts for link corruption, node failures, and node re-establishment is presented in this paper.
This protocol will need to manage the traffic flow between nodes in the satellite cluster, adjust routing tables due to node motion, allow for sub-networks in the cluster, and similar activities. This protocol development is in its initial stages. Preliminary results with eight nodes demonstrate its operation and potential problems that may arise when significant numbers of channel errors are present",2005,0, 4697,The simulation of EHF waveguide ferrite phase shifters with regard to manufacture and assembly errors,"The design of EHF aperture waveguide ferrite phase shifters for phased arrays is considered. The influences of manufacture and assembly errors, as well as of spread parameters of normalized materials, are analyzed. The computer simulation was based on a high-level mathematical model comprising both the homogeneous waveguide eigenwave evaluation and the scattering problem solution. The design of Ka and W-band Faraday-type ferrite phase shifters is summarized.",2003,0, 4698,Knowledge-based design automation of highly non-linear circuits using simulation correction,"The design of highly non-linear circuits is a challenging and time-consuming task both for designers and design-automation tools. This paper presents a method for automated design of such circuits. By combining equations and heuristics with simulation corrections, it achieves the accuracy of optimization-based sizing with the speed of knowledge-based sizing. The correction scheme is also used to reduce the number of independent variables. Sizing, speed and accuracy allow it to be used in the design and technology migration of digital libraries and full-custom cells, as well as dynamically during timing analysis to compensate for long critical paths. Applications are also appealing for highly non-linear analog functions. A prototype tool has been implemented in MATLAB.",2003,0, 4699,Design and Implementation of Testing Network for Power Line Fault Detection Based on nRF905,"The design scheme of an automatic power line fault detection system based on a wireless sensor network is proposed. The hardware architecture and software design based on nRF905 are also described in detail. The experimental results show that the system has the advantages of simple installation, high stability and accuracy.",2007,0, 4700,A Fault Detection Toolbox for MATLAB,The developed fault detection toolbox for MATLAB is described. The new toolbox provides a comprehensive set of high level m-functions to support the design of residual generation filters using reliable numerical algorithms developed by the author. The basic computational layer is formed by the descriptor systems toolbox which contains all necessary tools to solve the underlying numerical problems. The m-functions based user interfaces ensure user-friendliness in operating with the functions of this toolbox via an object oriented approach,2006,0, 4701,Fault Tree Reuse Across Multiple Reasoning Paradigms,"The development of a diagnostic model can be a very time-consuming and manually intensive process. One must first analyze the Test Program Set (TPS) to determine the fault tree and then integrate with that any additional knowledge that can be obtained from external data sources (such as test results, maintenance actions from the various maintenance levels, run-time failure information, etc.).
It has been determined that each of these models utilizes the information found in the basic TPS fault tree. As the fault tree represents hard-won engineering knowledge that is expensive to reproduce, it is desirable to share the fault tree representations across multiple reasoner models. This paper will lay out how each model type in the AI-ESTATE standard utilizes the fault tree to perform diagnostics and how, through the use of the XML representation of the AI-ESTATE fault tree model, that basic fault tree can be shared between reasoner models. It would also be desirable to find a way to gain that fault tree knowledge without having to manually reproduce it. As such, this paper will also describe how that information can at least be semi-automatically extracted from TPS design artifacts.",2006,0, 4702,Induction Motors Modeling and Fuzzy Logic Based Turn-To-Turn Fault Detection and Localization,"The development of a turn-to-turn fault diagnosis system for induction motor stator windings is presented. The system consists of a digital fault simulator based on state variables and an Adaptive Fuzzy Inference System (AFIS). The developed prototype was validated through comparisons between the digital simulator's results and experimental results in a three-phase, 5 Hp, 4-pole, 60 Hz motor. The fuzzy system uses as fault indicators the rotor slip, the negative-sequence current and the positive-sequence current angle. The proposed system is based on a sensorless data acquisition method (voltages and currents) and it is immune to variations of the load operating point. Another advantage is the faulty phase localization.",2007,0, 4703,Markov Chain Analysis of Thermally Induced Soft Errors in Subthreshold Nanoscale CMOS Circuits,"The development of future nanoscale CMOS circuits, characterized by lower supply voltages and smaller dimensions, raises the question of logic stability of such devices with respect to electrical noise. This paper presents a theoretical framework that can be used to investigate the thermal noise probability distributions for equilibrium and nonequilibrium logic states of CMOS flip-flops operated at subthreshold voltages. Representing the investigated system as a 2-D queue, a symbolic solution is proposed for the moments of the probability density function for large queues where Monte Carlo and eigenvector methods cannot be used. The theoretical results are used to calculate the mean time to failure of flip-flops built in a current 45-nm silicon-on-insulator technology modeled in the subthreshold regime including parasitics. As a predictive tool, the framework is used to investigate the reliability of flip-flops built in a future technology described in the International Technology Roadmap for Semiconductors. Monte Carlo simulations and explicit symbolic calculations are used to validate the theoretical model and its predictions.",2009,0, 4704,Recent Developments in Single-Phase Power Factor Correction,"The development of single-phase power factor correction (PFC) technologies was traditionally driven by the need for computers, telecommunication, lighting, and other electronic devices and systems to meet harmonic current limits defined by IEC 61000-3-2 and other regulatory standards. Recently, several new applications have emerged as additional drivers for the development of the technologies.
One such application is commercial transport airplanes, where single-phase PFC converters capable of meeting stringent airborne power quality requirements are required for in-flight entertainment (IFE), avionics, communication, and other single-phase loads. The proliferation of variable-speed motor drives in home appliances has also generated a new need for high-power (up to a few kilowatts), high-efficiency, and low-cost single-phase PFC converters. New PFC circuits, control methods, as well as EMI modeling and design techniques are being developed in response to these new requirements, which are reviewed in this paper. Specific subjects to be covered include airborne and home appliance applications, as well as EMI modeling and EMI filter design optimization.",2007,0, 4705,Nonlinear Systems Fault Diagnosis with Differential Elimination,"The differential elimination algorithm is used to eliminate the non-observed variables of nonlinear systems. By incorporating the algebraic observability and diagnosability concepts and using numerical differentiation algorithms, another approach to the fault diagnosis problem for certain classes of nonlinear systems is presented.",2009,0, 4706,Analysis of fault-tolerant five-phase IPM synchronous motor,"The choice of a multi-phase motor is a potentially fault-tolerant solution and gives rise to many advantages with respect to traditional three-phase motor drives. In this paper a high-torque-density five-phase IPM synchronous motor has been studied, and the motor performance has been evaluated in the case of healthy-mode and faulty-mode operation.",2008,0, 4707,A Simple Method to Measure the Image Complexity on a Fault Tolerant Cluster Computing,"Cluster computing is widely used for image processing in entertainment applications. Measuring the required number of nodes to be used in a cluster helps save node processing time, even if this occurs in a fault-tolerant scenario. The paper analyzes the image rendering process by ray tracing on a cluster using the freeware fault tolerant message-passing interface with the Povray software in a Linux-based operating system. This paper uses a simple method of image complexity measurement; it is shown that, depending on the image's complexity level, a minimum number of cluster nodes is chosen to render the image instead of using all the cluster nodes. This saves machine processing time, and meanwhile another image can be rendered in parallel with a previous one. The solution is applied in a fault-tolerant scenario, so that, independent of network faults, the cluster continues working.",2010,0, 4708,Automatic generation of fault-tolerant CORBA-services,"The Common Object Request Broker Architecture (CORBA) is the most successful representative of an object-based distributed computing architecture. Although CORBA simplifies the implementation of complex, distributed systems significantly, the support of techniques for reliable, fault-tolerant software, such as group communication protocols or replication, is very limited in state-of-the-art CORBA or even fault-tolerant CORBA. Any fault tolerance extension for CORBA components needs to trade off data abstraction and encapsulation against implementation-specific knowledge about a component's internal timing behavior, resource usage and interaction patterns. These non-functional aspects of a component are crucial for the predictable behavior of fault-tolerance mechanisms.
However, in contrast to CORBA's interface definition language (IDL), which describes a component's functional interface, there is no general means to describe a component's non-functional properties. We present a generic framework, which makes existing CORBA components fault tolerant. In adherence with a given, programmer-specified fault model, our framework uses design-time and configuration-time information for automatic distributed replicated instantiation of components. Furthermore, we propose usage of aspect-oriented programming (AOP) techniques to describe fault-tolerance as a non-functional component property. We describe the automatic generation of replicated CORBA services based on aspect information and demonstrate service configuration through a graphical user-interface",2000,0, 4709,Comparison between backpropagation and RPROP algorithms applied to fault classification in transmission lines,"The computed results from implemented artificial intelligence algorithms, used to identify and classify faults in transmission lines, are discussed in this paper. The proposed methodology uses sampled data of voltage and current waveforms obtained from analog channels of digital fault recorders (DFRs) installed in the field to monitor transmission lines. The performances of resilient propagation (RPROP) and backpropagation algorithms, implemented in batch mode, are addressed for single, double and three-phase fault types.",2004,0, 4710,Analysis of Effects of Micro-Metal on Ranging Error Using Finite Difference Time Domain Methods,"The concept of indoor localization has attracted considerable attention in the field of positioning. It has been reported that time-of-arrival based localization techniques perform better than received signal strength techniques or angle-of-arrival techniques. However, the accuracy of such systems is limited mainly due to unexpected large ranging errors observed in indoor environments, which are mainly caused by obstruction of the direct path by a metallic object. The analysis of the effects of micro-metallic objects on the accuracy of the range estimates is shown to be a very challenging problem. In this paper we discuss some full-wave EM methods to analyze the effects of micro-metallic objects on the accuracy of the range estimates. The results of electromagnetic simulation via FDTD are compared to the channel profiles obtained from a real-time frequency-domain measurement to analyze the accuracy of a 2D approximation used. The measurements were conducted with a bandwidth of 500 MHz centered around 1 GHz. The FDTD simulation was carried out using our MATLAB FDTD Suite.",2008,0, 4711,Dynamic Frequency Analysis on Industrial Electric Power Network under External Fault Condition,"The condition of a SINOPEC thermal power plant is briefly introduced first. After choosing its receiving operation mode, the mechanism of the dynamic frequency characteristic of the enterprise's internal power network is analyzed under external fault conditions, leading to the conclusion that the dynamic frequency characteristic depends on the active power shortage and the frequency adjustment effect modulus of the load. Finally, by selecting a reasonable failure mode and mathematical models in the PSASP7.0 software to simulate the system's dynamic frequency characteristic curve, the validity of the conclusion is verified. The above helps to reveal the weak links of the plant in the future and lays the foundation for finding ways to improve the reliability of the system.
At the same time, it also has some reference value for other enterprise grids.",2010,0, 4712,Dynamically Adapted Low-Energy Fault Tolerant Processors,"The constant advances in scaling have introduced several issues to the design of processing structures in new technologies. The closer one gets to nano-scale devices, the more necessary are methods to develop circuits that are able to tolerate high defect densities. At the same time, beyond area costs, there is pressure to maintain energy and power dissipation at acceptable levels, which practically forbids classical redundancy. This paper presents a dynamic solution to provide reliability and reduce the energy of a microprocessor using a dynamically adaptive reconfigurable fabric. The approach combines the binary translation mechanism with the sleep transistor technique to ensure graceful degradation for software applications, while at the same time reducing energy by shutting off the power supply of the unused and the defective resources of a reconfigurable fabric.",2009,0, 4713,A unified instruction set programmable architecture for multi-standard advanced forward error correction,"The continuously increasing number of communication standards to be supported in nomadic devices, combined with the fast ramping design cost in deep submicron technologies, calls for highly reusable and flexible programmable solutions. Software defined radio (SDR) aims at providing such solutions in radio baseband architectures. Great advances were recently made in handset-targeted SDR, covering most of the baseband processing with satisfactory performance and energy efficiency. However, as it typically presents an order of magnitude higher computation load, forward error correction (FEC) has been excluded from the scope of high throughput SDR solutions and left to dedicated hardware accelerators. The currently growing number of advanced FEC options, however, calls for flexibility there too. This paper presents the first application-specific instruction programmable architecture addressing in a unified way the emerging turbo and LDPC coding requirements of 3GPP-LTE, IEEE802.11n, IEEE802.16(e) and DVB-S2/T2. The proposal shows a throughput from 0.07 to 1.25 Mbps/MHz with efficiencies around 0.32 nJ/bit/iter in turbo mode and around 0.085 nJ/bit/iter in LDPC mode. The area is lower than the cumulated area of dedicated turbo and LDPC solutions.",2008,0, 4714,Online error detection through observation of traffic self-similarity,"The authors present a new and universally applicable approach to error detection in packet or cell communication networks. The error detection uses measured traffic load data. The advantage is that detection of an error in a low layer reduces the probability of an undetected error in higher layers, and makes time-costly error detection in higher layers unnecessary. Most systems use static traffic load thresholds for error detection. The authors present an approach which can achieve considerably higher sensitivity. Their basic idea is to exploit the property of self-similarity in network traffic. This analytical redundancy gives a reference behaviour of the network traffic load, which allows the detection of faulty behaviour in the real network traffic load. For the error detection, the validity of the given self-similar property is checked through a deviation indicator Q based on second-order properties of the time series' distributions.
This is a sufficient condition for normal (error-free) behaviour",2001,0, 4715,Monitoring known seismic faults using the permanent scatterers (PS) technique,"The authors show the results obtained using the PS technique to locate seismic faults in Southern California with high spatial resolution, and to monitor the evolution of the line of sight (LOS) displacement both in time and space",2000,0, 4716,Fault tolerance technology for autonomous decentralized database systems,"The Autonomous Decentralized Database System (ADDS) has been proposed against the background of e-business with respect to the dynamic and heterogeneous requirements of users. With the rapid development of information technology, different companies in the field of e-business are supposed to cooperate in order to cope with the continuously changing demands of services in a dynamic market. In a diversified environment of service provision and service access, the ADDS provides flexibility to integrate heterogeneous and autonomous systems while assuring timeliness and high availability. A loose-consistency management technology confers autonomy to each site for updating while maintaining the consistency of the whole system. Moreover, a background coordination technology, utilizing a mobile agent, has been devised to permit the sites to coordinate and cooperate with each other while conferring the online property. The use of a mobile agent, however, is critical and requires reliability with regard to mobile agent failures that may lead to bad response times, and hence the availability of the system may be lost. A fault tolerance technology is proposed in order that the system autonomously detects and recovers from faults of the mobile agent due to a failure in a transmission link, a site failure, or a bug in the software. The effectiveness of the proposition is shown by simulation.",2003,0, 4717,Fault Analysis of a MEMS Tuneable Bandpass Filter,"The availability of Micro-Electro-Mechanical Systems (MEMS) switches has enabled the design of a high Q-factor but low insertion loss tuneable bandpass filter. This paper investigates the potential faults that could occur during fabrication and long term operation of a tuneable bandpass filter using MEMS switches. The causes of the filter defects and the resulting filter response will be identified, simulated and correlated, with the final aim of being able to identify the defects by measuring the faulty responses in the future. The different defects are simulated using SONNET to obtain the response of the faulty filter. Parameters such as insertion loss and return loss of the tuneable filter vary for different faults. In the future study, the defects will be recreated and tested experimentally to corroborate simulation findings. Eventually, a relationship between defects and the filter response will be developed.",2007,0, 4718,"Bug hunting: the seven ways of the Security Samurai","The burgeoning bug population has enhanced public awareness about security. The author outlines common bug hunting methods and techniques for actually finding bugs. To systematically find bugs, individuals do need common sense (to know what to look for), dedication (to spend endless hours poking through software code), and a bit of luck (to find meaningful results). Also helpful are a touch of arrogance, a handful of tricks and tools, and considerable social skills for effective teamwork. In fact, the required qualities don't differ much from those a typical human being needs to live well in modern society.
The author defines bug hunting as a systematic process in which one or more individuals try to find security flaws in a predetermined set of ""technologies"", including software products, hardware devices, algorithms, formal protocols, and real-world networks and systems. Constraints on the practice might include time, resource availability, technical expertise, money, work experience, and so on",2002,0, 4719,Capacity and error probability analysis for orthogonal space-time block codes over fading channels,"The capacity and error probability of orthogonal space-time block codes (STBCs) are considered for pulse-amplitude modulation/phase shift keying/quadrature-amplitude modulation (PAM/PSK/QAM) in fading channels. The approach is based on an equivalent scalar additive white Gaussian noise channel with a channel gain proportional to the Frobenius norm of the matrix channel for the STBC. Using this effective channel, capacity and probability of error expressions are derived for PSK/PAM/QAM modulation with space-time block coding. Rayleigh-, Ricean-, and Nakagami-fading channels are considered. As an application, these results are extended to obtain the capacity and probability of error for a multiuser direct sequence code-division multiple-access system employing space-time block coding.",2005,0, 4720,Self-healing and fault-tolerance abilities development in embryonic systems implemented with FPGA-based hardware,"The cell-based structure, which makes up the majority of biological organisms, offers the ability to grow with fault-tolerance abilities and self-repair. By adapting these mechanisms and capabilities from nature, scientific approaches have helped researchers understand related phenomena and the associated principles needed to engineer complex novel digital systems and improve their capabilities. Founded on these observations, the paper is focused on computer-aided modeling, simulation and experimental research of embryonic systems' fault-tolerance and self-healing abilities, with the purpose of implementing VLSI hardware structures able to imitate the operation of cells or artificial organisms, with robustness properties similar to their biological equivalents in nature. The presented theoretical and simulation approaches were tested on a laboratory prototype embryonic system (embryonic machine), built with the major purpose of implementing the self-healing properties of living organisms.",2009,0, 4721,Zero Defects Needs Third Party Software to Succeed,The goal of zero defects cannot be achieved by semiconductor OEMs acting on their own. Semiconductor test needs to understand and embrace the role that third party vendors can play in overcoming such challenges,2006,0, 4722,Enhancing Fault Tolerance of Radial Basis Functions,"The challenge of future nanoelectronic applications, e.g. in quantum computing or in molecular computing, is to assure reliable computation in the face of a growing number of malfunctioning and failing computational units. Modeled on biology, artificial neural networks are intended to be one preferred architecture for these applications, because their architectures allow distributed information processing and, therefore, will result in tolerance to malfunctioning neurons and in robustness to noise. In this work, methods to enhance fault tolerance to permanently failing neurons of Radial Basis Function networks are investigated for function approximation applications.
Therefore, a relevance measure is introduced which can be used to enhance the fault tolerance or, on the contrary, to control the network complexity if it is used for pruning.",2006,0, 4723,Ship equipment fault grade assessment model based on back propagation neural network and genetic algorithm,"The factors that affect ship equipment fault grade assessment are analyzed first, and then the fault grade assessment model is built on the basis of a back propagation neural network. The genetic algorithm is used to determine the value of the initial weight vector of the neural network. Three methods, namely the gradient descent back propagation algorithm, the momentum gradient descent back propagation algorithm and the Levenberg-Marquardt back propagation algorithm, are used to train the neural network. Through extensive simulation calculations, the neural network simulation algorithm which is best suited to this particular assessment problem and has the highest precision is found. Next, methods that can improve the assessment precision are given. In the end, visualization forms of the neural network model, implemented with Matlab and VB software, are investigated to improve the usability of the methods.",2008,0, 4724,Analysis of transient performance for DFIG wind turbines under the open switch faults,"The fast development of grid-integrated wind power introduces new requirements for the operation and control of power networks. In order to maintain the reliability of a host power grid, it is preferred that the grid-connected wind turbine should restore its normal operation with minimized power losses in events of grid fault. This paper presents the results for the transient performance of a 2 MW doubly-fed induction generator (DFIG), a type of variable-speed wind turbine. The paper concentrates on the transient performance of the said generator technology under open-switch grid faults. The simulation was performed using MATLAB-Simulink software. The results obtained have shown that the control schemes employed for the DFIG wind turbines played an effective role in the restoration of normal operation for the wind turbine in response to grid faults. The results for both during and after the grid fault are discussed in this paper.",2010,0, 4725,Study on the feasibility of developing high voltage and large capacity permanent-magnet-biased fault current limiter,"The fault current limiter based on permanent-magnet-biased saturation, namely PMFCL, is a preferable solution to suppress the fault current in power grids, with unique superiority and advantages in both economic investment and technological feasibility. However, the bias capability of the permanent magnet is still a pending issue that hinders further development of the PMFCL towards high voltage (HV) and large capacity, and it has yet to be clarified. The state-of-the-art advancement of the PMFCL is first reviewed and its specific merits are fully accounted for. A concept of Saturation Depth Ratio is proposed and defined to indicate that improving the bias capability of the permanent magnet is in effect equivalent to increasing the saturation depth ratio of the iron core. With regard to a novel PMFCL topology, the mathematical relationship between the saturation depth ratio and the structural parameters is obtained through equivalent magnetic circuit analysis and then verified by experiments and FEM simulation, which provides an effective basis and guide for the design of HV and large-capacity PMFCLs.
Finally, a case study of a 10 kV PMFCL prototype is given. Results demonstrate that proper adjustment of the cross-sectional area and length of the permanent magnet can achieve a preferable bias capability of the permanent magnet, and thereby show the feasibility of realizing HV and large-capacity PMFCLs.",2009,0, 4726,Fault Diagnosis Approach Based on Fuzzy Probabilistic SDG Model and Bayesian Inference,"The fault diagnosis approach based on the signed directed graph (SDG) has good completeness and explanation facility, but it has some disadvantages due to the lack of quantitative information, so a semi-quantitative fault diagnosis approach based on a fuzzy probabilistic SDG model and Bayesian inference is proposed. The node variables are expressed as fuzzy variables carrying more information, the cause-effect relationships between the nodes are described by conditional probability tables (CPT), and the set of failure source candidates is found by Bayesian inference and a backtracking algorithm. Furthermore, the candidates in the set are ranked according to their fault possibility. The primary electrical power supply system of a certain satellite was modeled with the proposed approach; the diagnosis simulation results show that the diagnostic resolution can be improved significantly and that the approach is feasible for on-board diagnosis of spacecraft.",2009,0, 4727,Dependability Benchmarking Using Software Faults: How to Create Practical and Representative Faultloads,"The faultload is one of the most critical components of a dependability benchmark. It should embody a repeatable, portable, representative and generally accepted fault set. Concerning software faults, the definition of that kind of faultload is particularly difficult, as it requires a much more complex emulation method than the traditional stuck-at or bit-flip used for hardware faults. Although faultloads based on software faults have already been proposed, the choice of adequate fault injection targets (i.e., actual software components where the faults are injected) is still an open and crucial issue. Furthermore, knowing that the number of possible software faults that can be injected in a given system is potentially very large, the problem of defining a faultload made of a small number of representative faults is of utmost importance. This paper proposes a strategy to guide the fault injection target selection and reduce the number of faults required for the faultload and exemplifies the proposed approach with a real Web-server dependability benchmark and a large-scale integer vector sort application.",2009,0, 4728,Model of stator inter-turn short circuit fault in doubly-fed induction generators for wind turbine,"The doubly fed induction generator (DFIG) is an important component of wind turbine systems. It is necessary to identify incipient faults quickly. This paper proposes a complete simulation model, based on multi-circuit theory, of inter-turn short circuit faults in the stator windings of a DFIG in a wind turbine. A detailed analysis of the simulation results is presented, especially of the short circuit current. By analysis, the apparent 150 Hz and 450 Hz components and the current phase angle difference are taken as fault features, and the fault phase can also be detected by the phase angle difference. Both simulated results and experimental results of an inter-turn short circuit fault emulated by paralleling a resistance with phase A are presented. They verify the preceding analysis results.
Moreover, their agreement confirms that the model is good and that the simulation results of the inter-turn short circuit fault are correct.",2004,0, 4729,Effects of rotor bar and end-ring faults over the signals of a position estimation strategy for induction motors,"The effect of rotor faults, such as broken bars and end-rings, on the signals of a position estimation strategy for induction motor drives is analyzed using a multiple coupled circuit model. The objective of this analysis is to establish the possibility of using the estimation strategy signals for fault diagnosis in variable-speed electric drives. This strategy is based on the effect produced by inductance variation on the zero-sequence voltage, when exciting the motor with a predefined inverter switching pattern. Experimental results illustrate the feasibility of the proposal.",2005,0, 4730,Fixed point error analysis of CORDIC processor based on the variance propagation,"The effects of angle approximation and rounding in the CORDIC processor have been intensively studied for the determination of design parameters. However, the conventional analyses provide only the error bound, which results in a large discrepancy between the analysis and the actual implementation. Moreover, some signal processing architectures require the specification in terms of the mean squared error (MSE), as in the design specification of an FFT processor for OFDM. This paper proposes a fixed point MSE analysis based on variance propagation for a more accurate error expression of the CORDIC processor. It is shown that the proposed analysis can also be applied to the modified CORDIC algorithms. As an example of application, an FFT processor for OFDM using the CORDIC processor is presented. The results show a close match between the analysis and simulation.",2003,0, 4731,Effects of carrier phase error on EGC receivers in correlated Nakagami-m fading,"The effects of incoherent combining on dual-branch equal-gain combining (EGC) receivers in the presence of correlated, but not necessarily identical, Nakagami-m fading and additive white Gaussian noise are studied. Novel closed-form expressions for the moments of the output signal-to-noise ratio (SNR) are derived. Based on these expressions, the average output SNR and the amount of fading are obtained in closed form. Moreover, the outage and the average bit error probability for binary and quadrature phase-shift keying are also studied using the moments-based approach. Numerical and computer simulation results clearly depict the effect of the carrier phase error, correlation coefficient, and fading severity on the EGC performance. An interesting finding is that higher values of the correlation coefficient result in lower irreducible error floors.",2005,0, 4732,Spike Timing Precision and Neural Error Correction: Local Behavior,"The effects of spike timing precision and dynamical behavior on error correction in spiking neurons were investigated. Stationary discharges (phase locked, quasiperiodic, or chaotic) were induced in a simulated neuron by presenting pacemaker presynaptic spike trains across a model of a prototypical inhibitory synapse. Reduced timing precision was modeled by jittering presynaptic spike times. Aftereffects of errors (in this communication, missed presynaptic spikes) were determined by comparing postsynaptic spike times between simulations identical except for the presence or absence of errors. Results show that the effects of an error vary greatly depending on the ongoing dynamical behavior.
In the case of phase lockings, a high degree of presynaptic spike timing precision can provide significantly faster error recovery. For nonlocked behaviors, isolated missed spikes can have little or no discernible aftereffects (or even serve to paradoxically reduce uncertainty in postsynaptic spike timing), regardless of presynaptic imprecision. This suggests two possible categories of error correction: high-precision locking with rapid recovery and low-precision nonlocked behavior with error immunity.",2005,0, 4733,Software fault tree analysis for product lines,"The current development of high-integrity product lines threatens to outstrip existing tools for product-line verification. Software Fault Tree Analysis (SFTA) is a technique that has been used successfully to investigate contributing causes to potential hazards in safety-critical applications. This paper adapts SFTA to product lines of systems. The contribution is to define: (1) the technique to construct a product-line SFTA; and (2) the pruning technique required to reuse the SFTA for the analysis of a new system in the product line. The paper describes how product-line SFTA integrates with forward-analysis techniques such as Software Failure Modes, Effects, and Criticality Analysis (SFMECA), supports requirements evolution, and helps identify previously unforeseen constraints on the systems to be built. Applications to two small examples are used to illustrate the technique.",2004,0, 4734,Key Picture Error Concealment Using Residual Motion-Copy in Scalable Video Coding,"The current JSVM software only supports the Frame-Copy and Motion-Copy Error Concealment (EC) methods for the concealment of key picture loss in a Group of Pictures (GOP). In the proposed method, we exploit motion retrieval information to conceal key frames in scalable video coding. The auxiliary motion vectors of only the key picture are interleaved into the bit stream of the previous key picture. To control the bit-rate increase due to interleaving of supplementary information, a residual difference is calculated. This encoded residual difference is used to calculate the predictor so that concealment of each macroblock in a key picture can be performed. Consequently, the increase in bit-rate due to supplementary information is compensated by an increase in PSNR gain that correlates with GOP size. Moreover, we investigate the impact of key picture loss on distortion propagation in hierarchical prediction in SVC. The proposed method provides substantial drift-free improvement without significant complexity.",2009,0, 4735,"Error Minimising Pipeline for Hi-Fidelity, Scalable Geospatial Simulation","The geospatial category of simulations is used to show how origin-centric techniques can solve a number of accuracy-related problems common to 3D computer graphics applications. Previous work identified how poor understanding of floating-point related issues leads to performance, architectural and accuracy problems. This paper extends that work by including time error minimisation, lazy evaluation and progressively refined fidelity, and looks at performance trade-offs. The application of these techniques to a geospatial simulation pipeline is described in order to provide more concrete guidance on how to use them",2006,0, 4736,Integrated reflection coefficient correction with respect to surface inclination and axial distance,"The integrated reflection coefficient (IRC) has been shown to be an important US parameter for tissue characterization.
However, conventional estimation using a planar reference reflector results in considerable prediction errors for defocused and inclined conditions. In this study, the inclination and axial distance dependence of the IRC was investigated both theoretically and experimentally. Ultrasound backscatter from soft tissues was simulated by Field II. Measurements have been conducted in native cartilage samples and tissue-mimicking phantoms at 40 MHz. The dependencies of IRC on the two parameters were approximated by two-dimensional polynomial functions. These results were used as correction models. Valid ranges of transducer-sample distance (TSD) and sample inclination have been estimated and the influence of the correction model on the standard deviation was evaluated by application of the correction models to measurements of native cartilage.",2009,0, 4737,Fault-Tolerant Data Acquisition in Sensor Networks,"The integrity of data has tremendous effects on the performance of any data acquisition system. Noise and other disturbances can often degrade the information or data acquired from these systems. Devising a fault-tolerant mechanism in wireless sensor networks is very important due to the construction and deployment characteristics of these low-powered sensing devices. Moreover, due to the low computation and communication capabilities of the sensor nodes, the fault-tolerant mechanism should have a very low computation overhead. In this paper, we present a novel methodology to hierarchically design and implement a protocol for checking the integrity of data from a sensor node.",2007,0, 4738,On-board Fault Diagnosis of HEV Induction Motor Drive at Start-up and During Idle Mode,"The integrity of the electric motors in work and passenger vehicles can best be maintained by monitoring their condition frequently on-board the vehicle. In this paper, a signal processing based fault diagnosis scheme for on-board diagnosis of rotor asymmetry at start-up and idle mode is presented. Regular rotor asymmetry tests are done when the motor is running at a certain speed under load with a stationary current signal assumption. It is quite challenging to obtain these regular test conditions for a long enough time during daily vehicle operations. In addition, automobile vibrations cause nonuniform air-gap motor operation, which directly affects the inductances of the electric motor and results in a quite noisy current spectrum. Therefore, when examining the condition of an electric motor as part of a hybrid electric vehicle (HEV) powertrain, conventional rotor fault detection methods become impracticable. The proposed method overcomes the aforementioned problems simply by testing the rotor asymmetry at zero speed. This test can be achieved and repeated during start-up and idle modes. The proposed method can be implemented at essentially no cost using the readily available electric motor inverter sensors and microprocessing unit. Induction motor rotor asymmetry fault signatures are experimentally tested online employing the drive embedded master processor (TMS320F2812 DSP) to prove the effectiveness of the proposed method. It is experimentally shown that the proposed method detects the fault harmonics at start-up and standstill to determine the existence and the severity of faults in the HEV powertrain.",2007,0, 4739,An asymmetrical fault line selection based on I2 scalar product research in distribution system with DGs,"The introduction of DG in a distribution system has many negative impacts on conventional protection.
Typically, it deteriorates the selectivity and sensitivity of existing relays. It changes a simple radial distribution system with a single source into a complex multi-source system. To help the broadly used protection schemes and their coordination meet this great challenge, this paper proposes a novel fault line selection approach based on the negative sequence current (I2) scalar product (I2SP) to locate the fault line in a DG system. Negative sequence current has a powerful fault identification capability. I2SP can effectively determine the direction of the fault location without voltage measurements. It will help to develop a wide-area protection scheme in DG systems. It can easily eliminate relay mal-operation and simplify the relay scheme in distribution systems after the introduction of renewable energy sources. The simulation results prove the effectiveness and practicability of the method.",2008,0, 4740,Fault tolerance design in JPEG 2000 image compression system,"The JPEG 2000 image compression standard is designed for a broad range of data compression applications. The new standard is based on wavelet technology and layered coding in order to provide a feature-rich compressed image stream. The implementations of the JPEG 2000 codec are susceptible to computer-induced soft errors. One situation requiring fault tolerance is remote-sensing satellites, where high energy particles and radiation produce single event upsets corrupting the highly susceptible data compression operations. This paper develops fault tolerance error-detecting capabilities for the major subsystems that constitute the JPEG 2000 standard. The nature of the subsystem dictates the realistic fault model, where some parts have numerical error impacts whereas others are properly modeled using bit-level variables. The critical operations of subunits such as the discrete wavelet transform (DWT) and quantization are protected against numerical errors. Concurrent error detection techniques are applied to accommodate the data type and numerical operations in each processing unit. On the other hand, the embedded block coding with optimal truncation (EBCOT) system and the bitstream formation unit are protected against soft-error effects using binary decision variables and cyclic redundancy check (CRC) parity values, respectively. The techniques achieve excellent error-detecting capability at only a slight increase in complexity. The design strategies have been tested using Matlab programs and simulation results are presented.",2005,0, 4741,An automatic correction tool for inorganic chemical formulas,"The kernel of an automatic correction tool developed to practice exercises of nomenclature and formulation of inorganic chemistry at the technical/engineering degrees at Girona Polytechnic University is presented. Such a tool consists of two parts: nomenclature and formulation. This tool was developed inside a previous e-learning platform that has two modules specially designed for the generation and correction of technical exercises: the exercise generation module, used to automatically generate different versions of a base exercise, and the correction module, used to automatically correct the generated exercises by applying a solution code maintained in the base problem. We exploit the key of both modules, that is, the definition of the base exercise, which varies according to the parameters and the descriptors. On the other hand, the usability of the interface developed for writing inorganic chemical formulas was decisive for the success of the tool.
Our environment was tested during the last three academic years in the introductory chemistry course at our university.",2010,0, 4742,Parallelization of the nearest-neighbour search and the cross-validation error evaluation for the kernel weighted k-nn algorithm applied to large data sets in Matlab,The kernel weighted k-nearest neighbours (KWKNN) algorithm is an efficient kernel regression method that achieves competitive results with lower computational complexity than least-squares support vector machines and Gaussian processes. This paper presents the parallel implementation on a cluster platform of the sequential KWKNN implemented in Matlab. This implies both the parallelization of the k nearest-neighbour search and the evaluation of the cross-validation error on a large distributed data set. The results demonstrate the good performance of the implementation.,2009,0, 4743,Estimation of DC time constants in fault currents and their relation to Thevenin's impedance,"The knowledge of DC time constants in fault currents at system nodes is important to design the protection system and validate the calculation models for power system simulations. In this paper, we present methods to estimate Thevenin's impedance and DC time constants in fault currents with application to disturbance records. Additionally, a comparison between both parameters is shown. We demonstrate that measurements of the DC time constant of fault currents in disturbance records can be heavily distorted by the protective current transformer and additional instrument transformers (e.g. in the relay itself). This distortion depends on the transformer type (P, TPX, TPY, TPZ) and burden. However, by using current signal models and curve fitting methods it is still possible to estimate the required DC time constant and Thevenin's impedance. The proposed methods are tested with simulated signals and real measurements from different protective relays.",2010,0, 4744,Defect control methods for SIMOX SOI wafer manufacture and processing,"The layered structure of thin film silicon-on-insulator (SOI) wafers introduces new considerations for defect detection, particularly for optical metrology tools used to characterize and control SOI wafer processing. Multi-layer interference, as well as subsurface features of the material, can complicate the detection of surface defects. Non-particle defect types which scatter light, such as mounds, pits (including so-called HF defects), and slip lines, can be efficiently detected and classified with advanced operating modes of state-of-the-art optical metrology tools. Such capabilities facilitate improvements in the wafer manufacturing process, and result in improved defect detection capabilities and material quality. This work describes defect characterization of SIMOX-SOI wafers using the KLA-Tencor Surfscan 6420 and SP1TBI",2000,0, 4745,Circumventing/identifying faults effects,"The literature includes a variety of techniques to address the Byzantine Generals Problem or the Byzantine approach. While the main goal within the Byzantine framework is to circumvent and mask the effect of unreliable units (e.g., traitors), much research has been done on identifying (i.e., demasking) unreliable units. This is often called fault identification in system diagnosis. The paper focuses on the identification of unreliable units within the Byzantine framework.
Beyond its theoretical interest, an identification of faulty units contributes to accelerating the agreement process itself and drastically reducing the number of messages exchanged between units. The main features of this work are twofold. It does not impose additional assumptions or constraints on the agreement process. It limits the overhead for identifying unreliable units: the identification process is in O(n³)",2000,0, 4746,Detection of temporary faults during impulse tests using wavelets,The impulse test on power transformers simulates the conditions that exist in service when a transformer is subjected to an incoming high voltage surge due to lightning or other disturbances on the associated transmission line. There have been several studies on the application of wavelets to identify faults during impulse testing. Most of these studies have utilized simulated waveforms in which the fault is introduced by a suitable mathematical model. In this paper the authors have used a new technique to simulate such faults. Experimentally obtained waveforms are used to investigate the suitability of wavelets to detect faults during impulse testing and this paper presents preliminary results of this research work.,2003,0, 4747,Operational Two-Stage Stratified Topographic Correction of Spaceborne Multispectral Imagery Employing an Automatic Spectral-Rule-Based Decision-Tree Preliminary Classifier,"The increasing amount of remote sensing (RS) imagery acquired from multiple platforms and the recent announcements that scientists and decision makers around the world will soon have unrestricted access at no charge to large-scale spaceborne multispectral (MS) image databases make urgent the need to develop easy-to-use, effective, efficient, robust, and scalable satellite-based measurement systems. In these scientific and industrial contexts, it is well known that, to date, the operational performance of existing stratified non-Lambertian (anisotropic) topographic correction (SNLTOC) algorithms has been limited by the need for a priori knowledge of structural landscape characteristics, such as surface roughness, which is land cover class specific. In practice, to overcome the circular nature of the SNLTOC problem, a mutually exclusive and totally exhaustive land cover classification map of a spaceborne MS image is required before SNLTOC takes place. This system requirement is fulfilled by the original operational automatic two-stage SNLTOC approach presented in this paper which comprises, in cascade, 1) an automatic stratification first stage and 2) a second-stage ordinary SNLTOC method selected from the literature. The former combines 1) four subsymbolic digital-elevation-model-derived strata, namely, horizontal areas, self-shadows, and sunlit slopes either facing the sun or facing away from the sun, and 2) symbolic (semantic) strata generated from the input MS image by an operational fully automated spectral-rule-based decision-tree preliminary classifier recently presented in RS literature. In this paper, first, previous works related to the TOC subject are surveyed, and next, the novel operational two-stage SNLTOC system is presented.
Finally, the original two-stage SNLTOC system is validated in up to 19 experiments where the system's capability of reducing within-stratum spectral variance while preserving pixel-based spectral patterns (shapes) is assessed quantitatively.",2010,0, 4748,Empirical Validation of a Web Fault Taxonomy and its usage for Fault Seeding,"The increasing demand for reliable Web applications gives a central role to Web testing. Most of the existing works are focused on the definition of novel testing techniques, specifically tailored to the Web. However, no attempt has been carried out so far to understand the specific nature of Web faults. This is of fundamental importance to assess the effectiveness of the proposed Web testing techniques. In this paper, we describe the process followed in the construction of a Web fault taxonomy. After the initial, top-down construction, the taxonomy was subjected to four iterations of empirical validation aimed at refining it and at understanding its effectiveness in bug classification. The final taxonomy is publicly available for consultation and editing on a Wiki page. Testers can use it in the definition of test cases that target specific classes of Web faults. Researchers can use it to build fault seeding tools that inject artificial faults which resemble the real ones.",2007,0, 4749,Hardware-software covalidation: fault models and test generation,The increasing use of hardware-software systems in cost-critical and life-critical applications has led to heightened significance of design correctness of these systems. This paper presents a summary of research in hardware-software covalidation which involves the verification of design correctness using simulation-based techniques. This paper focuses on the test generation process for hardware-software systems as well as the fault models and fault coverage analysis techniques which support test generation,2001,0, 4750,Error Correction in 3D Coordinate Measurement,"The increasing use of micromechanical systems in industry and the permanent demand for higher measuring accuracy have led to ongoing development in the field of 3D coordinate measurement. This development includes new designs embodied in unconventional machine structures and also the application of software error compensation techniques. The paper overviews, in the case of a novel design, the error sources, their influence on the measuring accuracy and the suitable compensation algorithms",2006,0, 4751,Modeling and Verification for Timing Satisfaction of Fault-Tolerant Systems with Finiteness,"The increasing use of model-based tools enables further use of formal verification techniques in the context of distributed real-time systems. To avoid state explosion, it is necessary to construct verification models that focus on the aspects under consideration. In this paper, we discuss how we construct a verification model for timing analysis in distributed real-time systems. We (1) give observations concerning restrictions of timed automata to model these systems, (2) formulate mathematical representations on how to perform model-to-model transformation to derive verification models from system models, and (3) propose some theoretical criteria on how to reduce the model size.
The latter is particularly important, as for the verification of complex systems, an efficient model reflecting the properties of the system under consideration is as important as the verification algorithm itself. Finally, we present an extension of the model-based development tool FTOS, designed to develop fault-tolerant systems, to demonstrate our approach.",2009,0, 4752,The Impact of Link Error Modeling on the Quality of Streamed Video in Wireless Networks,The influence of channel error characteristics on higher layer protocols or methods which are considering or even exploiting the error statistics is significant especially in wireless networks where fading and interference effects result in error pattern correlation properties (error bursts). In this work we are analysing the impact of the channel properties directly on the quality of streamed video. We are focusing on the quality of transmitted H.264/AVC video streaming over UMTS DCH (Dedicated Channel) and compare the quality of the streamed video simulated over measured link error traces (the measurements performed in a live UMTS network) to simulations with a memoryless channel and to models with enhanced error characteristics. The results show that appropriate modeling of the link layer error characteristics is very important but it can also be concluded that the error correlation properties of the link- or the network-layer model do not have an impact on the quality of the video stream as long as the resulting IP packet error probability remains unchanged.,2006,0, 4753,Fault detection based on H∞ state observer for networked control systems,"The influence of random short time-delays on networked control systems (NCS) is converted into an unknown bounded uncertainty. Without changing the structure of the system, an H∞ state observer is designed for NCS with short time-delays. Based on the designed state observer, a robust fault detection approach is proposed for NCS. In addition, an optimization method for the selection of the detection threshold is introduced for a better tradeoff between robustness and sensitivity. Finally, some simulation results demonstrate that the presented state observer is robust and the fault detection for NCS is effective.",2009,0, 4754,Radiometric characterisation of a VNIR hyperspectral imaging system for accurate atmospheric correction,"The Institut Cartogràfic de Catalunya (ICC) regularly operates a Compact Airborne Spectral Imager (CASI) sensor. For this system an atmospheric correction algorithm was developed to simultaneously correct multiple overlapping images taken from different heights. First, the algorithm estimates the main atmospheric parameters with an inversion procedure using either radiometric ground measurements or image homologous areas plus a single ground measurement. Then, the code is applied to the images to obtain atmospherically corrected hyperspectral imagery. The algorithm was applied in the frame of the ICC-Banyoles 2005 experiment (Spain) using multi-height imagery and simultaneous field reflectance measurements. In the validation step, the standard deviations obtained with both inversion methods were similar. In order to improve these results, the smiling effect (spectral shift) for the sensor is characterized by locating O2 absorption bands in the NIR for each CASI look direction. Additionally, a more accurate spectral sensitivity for each band has been calculated. These improvements are applied to the EuroSDR-Banyoles 2008 experiment's (Spain) imagery.
These results show a substantial improvement in the atmospheric correction at the absorption regions when compared to field reflectance measurements. This behaviour advises the inclusion of these developments in the inversion system.",2010,0, 4755,Operational Fault Detection in cellular wireless base-stations,"The goal of this work is to improve availability of operational base-stations in a wireless mobile network through non-intrusive fault detection methods. Since revenue is generated only when actual customer calls are processed, we develop a scheme to minimize revenue loss by monitoring real-time mobile user call processing activity. The mobile user call load profile experienced by a base-station displays a highly non-stationary temporal behavior with time-of-day, day-of-the-week and time-of-year variations. In addition, the geographic location also impacts the traffic profile, making each base-station have its own unique traffic patterns. A hierarchical base-station fault monitoring and detection scheme has been implemented in an IS-95 CDMA Cellular network that can detect faults at base station level, sector level, carrier level, and channel level. A statistical hypothesis test framework, based on a combination of parametric, semi-parametric and non-parametric test statistics, is defined for determining faults. The fault or alarm thresholds are determined by learning expected deviations during a training phase. Additionally, fault thresholds have to adapt to spatial and temporal mobile traffic patterns that slowly change with seasonal traffic drifts over time and increasing penetration of mobile user density. Feedback mechanisms are provided for threshold adaptation and self-management, which includes automatic recovery actions and software reconfiguration. We call this method Operational Fault Detection (OFD). We describe the operation of a few select features from a large family of OFD features in Base Stations; summarize the algorithms, their performance and comment on future work.",2006,0, 4756,Best ANN structures for fault location in single- and double-circuit transmission lines,"The great development in computing power has allowed the implementation of artificial neural networks (ANNs) in the most diverse fields of technology. This paper shows how diverse ANN structures can be applied to the processes of fault classification and fault location in overhead two-terminal transmission lines, with single and double circuit. The existence of a large group of valid ANN structures guarantees the applicability of ANNs in the fault classification and location processes. The selection of the best ANN structures for each process has been carried out by means of a software tool called SARENEUR.",2005,0, 4757,Fault Tolerance Management for a Hierarchical GridRPC Middleware,"The GridRPC model is well suited for high performance computing on grids thanks to efficiently solving most of the issues raised by geographically and administratively split resources. Because of large scale, long range networks and heterogeneity, Grids are extremely prone to failures. GridRPC middleware usually manages failures by using 1) TCP or another link/network layer provided failure detector, 2) automatic checkpoints of sequential jobs and 3) a centralized stable agent to perform scheduling.
Most recent developments have provided some new mechanisms like the optimal Chandra & Toueg & Aguillera failure detector, most numerical libraries now providing their own optimized checkpoint routine and distributed scheduling GridRPC architectures. In this paper we aim at adapting to these novelties by providing the first implementation and evaluation in a grid system of the optimal fault detector, a novel and simple checkpoint API allowing management of both service-provided checkpoints and automatic checkpoints (even for parallel services) and a scheduling hierarchy recovery algorithm tolerating several simultaneous failures. All those mechanisms are implemented and evaluated on a real grid in the DIET middleware.",2008,0, 4758,Error concealment techniques for multi-view video,"The H.264/AVC multi-view extension provides for high compression ratios of multi-view sequences. The coding scheme used exploits spatial, temporal and inter-view dependencies for this scope. However, in the event of transmission errors, this leads to the propagation of the distorted macroblocks, degrading the quality of the video perceived by the user. In this paper we introduce error resilient coding and error concealment techniques in Multi-view Video Coding to reduce this effect. The results obtained demonstrate that better multiview video reconstruction is obtained when Intra-coded frames are spatially concealed while Inter-coded frames are concealed using motion compensation techniques, within the multi-view prediction structure. Furthermore, additional gain in quality can be achieved when Anchor frames are concealed using a combination of spatial and motion compensation techniques.",2010,0, 4759,Application of Empirical Bayes Estimation in Error Model Identification of Two Orthometric Accelerometers,"The high accuracy accelerometer can be calibrated in a multiposition tumbling experiment under the 1g gravitational field. The g² observation model of two orthometric accelerometers can eliminate the corner error. There is serious multicollinearity in this system because some model coefficients mix together. In view of the above problem, this article presents the algorithm of Empirical Bayes Estimation (EB) and applies this method to the model mentioned above. The results of the simulation and the experiment show that, compared with the conventional least squares method and the generalized diagonal ridge estimation, the Empirical Bayes Estimation can overcome the influence of the multicollinearity and can separate the two kinds of coefficients, namely the second-order terms and the cross-coupling terms.",2010,0, 4760,An improved error concealment algorithm for intra-frames in H.264/AVC,The highly error-prone nature of wireless environments and limited computational power of mobile devices necessitates the implementation of robust yet simple error concealment in H.264/AVC. We propose a new and effective error concealment algorithm for intra-coded frames that utilizes the temporal redundancy in a wireless video bitstream. The proposed concealment method supports both raster scan and FMO type slices.
Performance evaluations show that our approach achieves significant improvement over existing methods in both PSNR and subjective picture quality.,2005,0, 4761,Implementation results for a fault-tolerant multicellular architecture inspired by endocrine communication,"The hybrid redundancy structure found at the cellular level of higher animals provides complex organisms with the three key features of a reliability-engineered system: fault tolerance, detection and recovery. For this reason, both the operation and organisation of this redundancy scheme provide an attractive source of inspiration for an electronic fault tolerant system. The electronic architecture documented within this paper models the cooperative operation and consequent fault masking of the multiple cells that form biological organs. A communication system, inspired by endocrinology, is then used to network together these cells, coordinating their activity as organs, and controlling the operation of data processing tasks on a data stream. The bioNode hardware platform is used to implement and test the presented endocrinology inspired architecture. Results of the system's operation are provided to demonstrate the architecture's ability to maintain correct computation on a data stream whilst being subjected to multiple and varied hardware faults.",2005,0, 4762,Overcoming double-rounding errors under IEEE 754-2008 using software,"The IEEE 754-2008 floating-point standard requires rounding to all available formats (e.g., single and double precision) from any combination of operand formats. Under the original standard, operands and results had the same format, and current hardware is likely to provide only this much. When trying to fulfil the new requirement on such hardware, there are potential double rounding problems. The problems include both incorrect incrementation and incorrect truncation. We present a software solution to both problems that requires no additional hardware, and yet still has acceptable performance.",2010,0, 4763,Calculating Incident Energy Released With Varying Ground Fault Magnitudes on Solidly Grounded Systems,"The IEEE Standard 1584-2002 is the recognized standard regarding the calculation of incident energy output from an ac three-phase arcing fault. While that standard does not provide incident energy calculations for the much more common single-phase-to-ground fault, it explains that the expectation is that such a ground fault will either self-extinguish or escalate into a three-phase fault. If the fault escalates, the three-phase calculations can be used. The amount of time that the arc burns as a single-phase fault before escalation is not defined, but a reference document used by the 1584 standard defines this escalation time as one to two cycles. No attempt is made to calculate the incident energy released during this escalation period. While exact answers to the question of how much additional energy would be released are best answered through additional testing, this paper attempts to bracket likely high and low ranges of incident energy that could be released from an arcing single-phase-to-ground fault prior to escalation into a three-phase fault.
As this paper only provides a theoretical basis for the calculation of additional energy released from arcing ground faults prior to escalation, it is the opinion of the author that future standards should include testing of incident energy released from faults that begin as low-level arcing ground faults.",2010,0, 4764,Using immunology principles for fault detection,"The immune system is a cognitive system of complexity comparable to the brain and its computational algorithms suggest new solutions to engineering problems or new ways of looking at these problems. Using immunological principles, a two- (or three-) module algorithm is developed which is capable of launching a specific response to an anomalous situation for diagnostic purposes. Experimental results concerning fault detection in an induction motor are presented as an example illustrating how the immune-based system operates, discussing its capabilities, drawbacks, and future developments.",2003,0, 4765,Impact of correlation errors on the optimum Kalman filter gain identification in a single sensor environment,"The impact of errors in the evaluation of the innovation correlation functions, related to the suboptimal filter, on the identification of the optimum steady state Kalman filter gains is investigated. This issue arises in all real time applications, where the correlations must be calculated from experimental data. An identification algorithm proposed in the literature, with formal proof of convergence, is revisited and summarized. Based on this algorithm, equations describing this impact are developed. Simulation results are presented and discussed. As a contribution, experimental results of the identification algorithm, applied to estimate the states of a position servo system, are presented.",2004,0, 4766,On The Development Of Fault Injection Profiles,"The impact of hardware failures on software has attracted substantial attention in the study of dependable systems. Fault injection techniques have emerged as a major means to evaluate software behavior in the presence of hardware failures. However, due to the lack of knowledge of the fault distribution information, the fault location and time are randomly selected. One major drawback of this approach is that the injected faults do not represent the system's operational situation, thus software reliability cannot be credibly assessed. This paper aims at extending the use of fault injection to the reliability prediction of hardware faults. To do so, we have developed a set of analytical and simulation based methods capable of statistically reproducing the underlying physics and phenomena leading to hardware failures in a given system operational context. Such distributions are referred to as fault injection profiles, and are the basis to extend the fault injection technique with fault models that represent the actual conditions under which hardware faults occur",2007,0, 4767,Evaluating and Comparing the Impact of Software Faults on Web Servers,"The impact of software faults present in components on the larger system is currently a relevant and still open research topic. Web-based applications are simultaneously a relevant type of system for our society and are typically exposed to many software components on the server side. The impact of faults in these components on the web servers is an important aspect when evaluating the dependability properties of the entire web-serving system.
This paper proposes an experimental approach to evaluate and compare the impact of software faults present in web applications on typical web servers. This approach consists in emulating realistic software faults in server-side web applications and monitoring the behavior of the web server from both the server side (e.g., resource consumption) and the client side (e.g., response time, response correctness) perspective. We exemplify our methodology in case studies using three different servers and a realistic e-commerce web application and show that software faults existing in server side components can indeed affect the web server in a quantifiable manner, which allows us to use our methodology for comparative purposes towards benchmarking and selecting the most robust web server.",2010,0, 4768,Saboteur-Based Fault Injection for Quantum Circuits Fault Tolerance Assessment,"The importance of reliability and fault tolerance is paramount in quantum computation. This paper proposes a Fault Tolerance Algorithms and Methodologies (FTAM) assessment technique for quantum circuits, by adopting the saboteur-based Simulated Fault Injection methodology from classical computation. By drawing the inspiration from classical computation, the HDLs were employed for performing fault injection, due to their capacity of behavioral and structural circuit description, including hierarchical features. The cornerstone of this approach is the adaptation of the available quantum computation error models (with the quantum computing features and constraints) to the classical, HDL framework of the simulated fault injection techniques. The experimental simulated fault injection campaign results are consistent with the analytical assessments (from a qualitative point of view), but at the same time they provide a much more realistic description.",2007,0, 4769,Research and development On-line fault diagnosis device,"Motors are widely used in industry and play an important role; their status is directly related to production and security. A wireless intelligent device is presented for on-line diagnosis of complex motor faults, and wavelet packet decomposition and fuzzy principles are applied in this paper based on an analysis of motor faults. Experiments prove that wavelet packets can effectively de-noise the signals, with the fuzzy diagnosis principle taken as the basis. The device can also be applied to fault diagnosis of other complex systems and has a degree of portability.",2010,0, 4770,Correction of Systematic Errors Due to the Voltage Leads in an AC Josephson Voltage Standard,"The National Institute of Standards and Technology (NIST) has recently reported the first application of a quantum ac Josephson voltage standard for the calibration of thermal transfer standards in the 1- to 10-kHz frequency range. This paper describes preliminary work on extending its frequency calibration range up to 100 kHz by correcting the systematic errors due to voltage leads. A ground loop created by the dc blocks, which is a previously unaccounted source of high-frequency systematic error, has been identified, and its effects are partially mitigated.",2009,0, 4771,Intrinsic Error Sources of Neural Networks,"The nature of radial basis function (RBF) networks necessitates some types of errors which can never be removed by traditional training algorithms. This paper is an attempt to introduce the natural error sources of neural networks such as bias error, iteration-restricted error, and Gibbs' error.
Moreover, a new method is introduced, called post-training, to reduce these errors as far as desired",2006,0, 4772,Nondestructive defect detection in multilayer ceramic capacitors using an improved digital speckle correlation method with wavelet packet noise reduction processing,"The nondestructive detection of defects in multilayer ceramic capacitors (MLCs) in surface-mount printed circuit board assemblies has been demonstrated by using an improved digital speckle correlation method (DSCM). The internal cracks in MLCs that contribute to the thermal displacements on the MLC surface after dc electrical loading may be uniquely identified using this improved DSCM combined with a double-lens optical arrangement. However, it is found that Joule heating of the MLC sample takes time, and therefore the thermal displacements on the MLC surface are not obvious at the beginning of the dc electrical loading. In order to shorten the detection time and increase the resolution of the DSCM, a wavelet packet noise reduction process is introduced into the DSCM technique. This new algorithm is used to reduce the background noise in the signal so as to improve the accuracy of detection of defect locations and reduce the detection time. By introducing wavelet packet noise reduction processing, the DSCM is found to be more sensitive to and faster at detecting defects in MLC samples. Furthermore, the DSCM with the wavelet packet noise reduction process can cope with the problems of edge effects and rough or warped surfaces, which are the limitations of the scanning acoustic microscope (SAM)",2000,0, 4773,A Consistency Check Algorithm for Component-Based Refinements of Fault Trees,"The number of embedded systems in our daily lives that are distributed, hidden, and ubiquitous continues to increase. Many of them are safety-critical. To provide additional or better functionalities, they are becoming more and more complex, which makes it difficult to guarantee safety. It is undisputed that safety must be considered before the start of development, continue until decommissioning, and is particularly important during the design of the system and software architecture. An architecture must be able to avoid, detect, or mitigate all dangerous failures to a sufficient degree. For this purpose, the architectural design must be guided and verified by safety analyses. However, state-of-the-art component-oriented or model-based architectural design approaches use different levels of abstraction to handle complexity. So, safety analyses must also be applied on different levels of abstraction, and it must be checked and guaranteed that they are consistent with each other, which is not supported by standard safety analyses. In this paper, we present a consistency check for CFTs that automatically detects commonalities and inconsistencies between fault trees of different levels of abstraction. This facilitates the application of safety analyses in top-down architectural designs and reduces effort.",2010,0, 4774,An expert system for fault diagnosis in transformers during impulse tests,"The objective of this paper is to present an expert system for fault diagnosis in transformers during impulse tests. Transformers are impulse-tested by applying a specific set of test impulse sequences and the resulting voltage and current oscillograms are recorded. The presence of any fault in the transformer winding is detected using these recorded oscillograms.
These complex decision tasks are, in general, performed by experienced testing personnel, whose knowledge can be expressed as a set of production rules. The knowledge base of the expert system includes as many as possible of these production rules. To identify, characterize and locate a fault, an inference engine is developed to perform deductive reasoning based on the rules in the knowledge base and the statistical techniques. The basic idea of the expert system is to make a fault diagnosis in a user-friendly windowed environment. The modularity of the system enables easy expansion and modification of the database, the rule-base as well as the inference engine",2000,0, 4775,Studies on the optimizing model of fault diagnosis for power network,"The optimizing model of fault diagnosis for a power network is formulated as a 0-1 integer programming problem. The faulty equipment can be determined by means of refined mathematical manipulation. We make a deep study of the present optimizing models and their disadvantages, especially the problems of inconsistency and non-uniqueness of some diagnosis results, and find that the cause of the non-uniqueness lies in the fact that the model only takes the influence of each protection into account and ignores the conjunct influence of main and backup protections. On the basis of this point, taking a general view of the configuration characteristics of present secondary systems and the features of the diagnosis information, an improved optimizing model of fault diagnosis for power networks is proposed in this paper. The particularity of breaker malfunction protection is taken into account in the new model. The expected status functions of the equipment are redefined. The preconditioning of the raw data for fault diagnosis and the realization of the system are also explicated briefly. Theoretic analysis and instance verifications indicate that the new model is more reasonable and the diagnosis results are more exact.",2008,0, 4776,Compensating the overlay modeling errors in lithography process of wafer stepper,"The overlay modeling errors are commonly modeled as the sum of inter-field and intra-field errors in the lithography process of a wafer stepper. The inter-field errors characterize the global effect while the intra-field errors represent the local effect. To have better resolution and alignment accuracy, it is important to model the overlay errors and compensate them into tolerances. This paper proposes a weighted least squares (WLS) estimator for two general overlay error models such that more accurate linear term parameters of the overlay error can be obtained. First, the least squares (LS) estimator is applied to obtain the parameters of the linear and nonlinear terms. We intend to estimate the parameters of the linear term while taking the nonlinear term as our modeling residual errors. Next, we use the WLS estimator to obtain more accurate parameters of the linear term and thus reduce the modeling errors by choosing appropriate stepper control parameters. The WLS estimator is applied to a real data set collected from 537 wafers from a wafer fabrication facility.
The test results demonstrate that the estimated linear term parameters of the WLS estimator are much closer to the assumed ones than those of the LS estimator.",2010,0, 4777,Fault Tolerant Controller Design to Ensure Operational Safety in Satellite Formation Flying,"The paper addresses the problem of fault tolerant control design that has the same effect as implicit control system reconfiguration for satellite formation flying to increase operational safety, as it is important for successful missions. Actuator and sensor degradation can be detrimental for formation precision in terms of satellite relative positions and attitudes. In this paper, model reference adaptive systems (MRAS) and quaternion based adaptive attitude control (QAAC) are proposed as alternatives to fault determination and isolation. The adaptive systems approach is simpler as it avoids explicit modelling, decision making and control redesign. A redundancy-based solution is used to protect against sensor deficiencies. Simulations illustrate the efficiency of the adaptive systems implemented for the control of the position and attitude of a single craft",2006,0, 4778,Fault Diagnosis Method Based on PSO-optimized H-BP Neural Network,"The paper combines the advantages of the particle swarm optimization algorithm (PSO), the global optimizing ability of the Hopfield Neural Network, and the teacher supervising features of the BP neural network to construct a new equipment fault diagnosis method with higher diagnosis precision, compared with the traditional single BP neural network.",2009,0, 4779,Desktop supercomputing technology for shadow correction of color images,"The paper deals with information technology for correction of shadow artifacts on color digital images obtained by photographing of paintings with the purpose of their reproduction. Shadow artifacts are caused by differences of light intensity. The problem of shadow detection and subsequent color correction is solved. The architecture of a heterogeneous CPU/GPU system implementing the elaborated technology is considered, and examples of real image processing are given.",2010,0, 4780,Influence of the Transmission Channel Parameters on Error Rates and Picture Quality in DVB Baseband Transmission,"The paper deals with the component analysis of the DVB (digital video broadcasting) transmission model in baseband and its source, channel and link coding. The transmission channel model is based on the digital filter design and it can be designed with variable transmission parameters (e.g. cut-off frequency) and linear distortions with additive noise and reflected signal. Results of the achieved BER (bit error rate) and SER (symbol error rate) and corresponding PQE (picture quality evaluation) analysis are presented, including the evaluation of the influence of the normalized cut-off frequency of the channel on subjective picture quality",2006,0, 4781,BER performance of software radio multirate receivers with nonsynchronized IF-sampling and digital timing correction,"The paper deals with the design of an all digital multirate receiver with nonsynchronized IF-sampling and digital timing correction, that can be used in software defined radios. By performing timing correction prior to matched filtering, the complexity of the matched filter is reduced, but at the expense of additional aliasing.
Design parameters, yielding low degradations with respect to synchronized baseband sampling, are provided.",2003,0, 4782,An Error Resilient Coding Scheme for Video Transmission Based on Pixel Line Decimation,"The loss of packets is unavoidable when compressed video data is transmitted over error prone channels. In this study, an error resilient coding scheme based on pixel line decimation is proposed to enhance the performance of error concealment for both intra and inter frames. At the encoder, an input picture is decimated by pixel lines into two similar sub-pictures and then they are merged together before encoding. At the decoder, the high correlation of the two sub-pictures is employed to facilitate error concealment. For an intra frame, the lost pixel lines in a sub-picture can be spatially concealed by interpolation between spatially neighboring correct pixel lines in the other sub-picture within a short distance. For an inter frame, a corrupt macroblock can be temporally concealed by utilizing motion vectors of its co-located macroblock in the other sub-picture. Experimental results demonstrate that the proposed scheme can significantly improve both subjective and objective quality of the reconstructed picture.",2008,0, 4783,Performance evaluation of CoolMOS and SiC diode for single-phase power factor correction applications,"The low conduction loss and switching loss characteristics make CoolMOS and SiC diode attractive for single-phase CCM PFC converters. In this paper, based on the device level and converter level evaluation, the loss reduction capability of the CoolMOS and SiC diode is quantified. In addition, for the first time, a successfully operating 1 kW 400 kHz single-phase CCM PFC is demonstrated by using CoolMOS and SiC diode.",2003,0, 4784,Fault Tolerance Middleware for Cloud Computing,"The Low Latency Fault Tolerance (LLFT) middleware provides fault tolerance for distributed applications deployed within a cloud computing or data center environment, using the leader/follower replication approach. The LLFT middleware consists of a Low Latency Messaging Protocol, a Leader-Determined Membership Protocol, and a Virtual Determinizer Framework. The Messaging Protocol provides a reliable, totally ordered message delivery service by employing a direct group-to-group multicast where the ordering is determined by the primary replica in the group. The Membership Protocol provides a fast reconfiguration and recovery service when a replica becomes faulty and when a replica joins or leaves a group. The Virtual Determinizer Framework captures ordering information at the primary replica and enforces the same ordering at the backup replicas for major sources of non-determinism. The LLFT middleware maintains strong replica consistency, offers application transparency, and achieves low end-to-end latency.",2010,0, 4785,CAM05-2: A Distributed Policy Based Solution in a Fault Management Scenario,"The Madeira project, part of the Celtic Initiative, investigates the use of a fully distributed, policy-based network management framework that exploits the peer-to-peer paradigm with the aim of providing a successful solution to Next Generation Networks (NGN) challenges. This paper is focused on the distributed policy-based approach adopted in the project. Thanks to this approach, the management system is flexible and adaptable to different management applications of networks with time-varying topologies.
Currently available results coming from the execution of specific scenarios in the area of Fault Management reveal that the policy-based system architecture works properly in the highly distributed peer-to-peer environment where it is deployed.",2006,0, 4786,A new method for uninterrupted operation of wind turbines equipped with DFIGs during grid faults using FCL,"The main issue for wind turbines that are equipped with doubly fed induction generators (DFIGs) is grid faults, or low voltage ride-through capability. In this paper, a new solution for uninterrupted operation of a wind turbine driving a DFIG during grid fault conditions has been proposed. A fault current limiter (FCL) is placed in series with the rotor circuit. During a fault, the FCL inserts a large solenoid into the rotor circuit to inhibit the increase of the rotor current. When the fault is cleared, the FCL bypasses the solenoid. A static synchronous compensator (STATCOM) has been applied for supplying the required reactive power during fault and steady-state conditions. The capability and modeling accuracy of the proposed method are confirmed by simulating a sample power system in PSCAD/EMTDC software.",2010,0, 4787,Towards Dependable Business Processes with Fault-Tolerance Approach,"The management and automation of business processes have become essential tasks within IT organizations. Companies can deploy business process management systems to automate their business processes. A BPMS needs to ensure that those processes are as dependable as possible. Fault tolerance techniques provide mechanisms to decrease the risk of possible faults in systems. In this paper, a framework for developing business processes with fault tolerance capabilities is provided. The framework presents different solutions in the fault tolerance scope. The solutions have been developed using a practical example and some results have been obtained, compared and discussed.",2010,0, 4788,Automatic Static Fault Tree Analysis from System Models,"The manual development of system reliability models such as fault trees can be costly and error prone in practice. In this paper, we focus on the problems of some traditional dynamic fault trees and present our static solutions to represent dynamic relations such as functional and sequential dependencies. The implementation of a tool for the automatic synthesis of our static fault trees from SysML system models is introduced.",2010,0, 4789,A Two-Phase Log-Based Fault Recovery Mechanism in Master/Worker Based Computing Environment,"The master/worker pattern is widely used to construct cross-domain, large scale computing infrastructure. The applications supported by this kind of infrastructure usually feature long-running, speculative execution, etc. A fault recovery mechanism is significant for them, especially in the wide area network environment, which consists of error-prone components. Inter-node cooperation is urgently needed to make the recovery process more efficient. The traditional log-based rollback recovery mechanism, which features independent recovery, cannot fulfill the global cooperation requirement due to the waste of bandwidth and slow application data transfer caused by the exchange of a large amount of logs. In this paper, we propose a two-phase log-based recovery mechanism which has merits such as space saving and global optimization and can be used as a complement to the current log-based rollback recovery approach in some specific situations.
We have demonstrated the use of this mechanism in the Drug Discovery Grid environment, which is supported by China National Grid. Experimental results have proved the efficiency of this mechanism.",2009,0, 4790,Error analysis and compensation for inductosyn-based position measuring system,"The measurement errors incurred by the nonideal signals of an inductosyn and a digital resolver-to-digital converter (RDC) are analyzed, and the corresponding expressions of the measurement errors are determined. A compensation method by software is presented to reduce the influences of quadrature error, amplitude imbalance and signal offsets. Experimental results show that the measurement accuracy is improved significantly from 55 to 10 mechanical angle seconds. The correctness of the analysis and the effectiveness of the compensation are also verified.",2003,0, 4791,Reactance Change at Defect in Inconel Tube With Nickel Sleeving,"Metal-loss defects are partly produced in steam generator tubes by stress and heat, because steam generator tubes are used under high temperature, high pressure and radioactivity. Tube wall defects such as corrosion pits act as stress raisers, which ultimately leads to rupture. Nickel sleeving is used to prevent the progress of metal-loss defects, but it is difficult to detect them by the conventional eddy current method in steam generator tubes with nickel sleeving. So a new method is needed for detecting the metal-loss defects in nickel sleeving steam generator tubes. In the new method, the reactance change was calculated by the finite element method to detect the defects in the nickel sleeving tube. The reactance amplitude calculated for the defect present in the Inconel 600 tube with nickel sleeving agrees well with the results measured by the fabricated probe.",2009,0, 4792,Modeling of direction-dependent Processes using Wiener models and neural networks with nonlinear output error structure,"The modeling of direction-dependent dynamic processes using Wiener models and recurrent neural network models with nonlinear output error structure is considered. The results obtained are compared for several simulated first-order and second-order processes and using three different types of input signals: a pseudorandom binary signal, an inverse-repeat pseudorandom binary signal and a multisine (sum of harmonics) signal. Experimental results on a real system, namely an electronic nose system, are also presented to illustrate the applicability of the techniques discussed.",2004,0, 4793,The MODIS operational geolocation error analysis and reduction early results,The Moderate Resolution Imaging Spectroradiometer (MODIS) was launched December 1999 on the polar orbiting Terra satellite and since February 2000 has been acquiring earth observation data. The Terra satellite has onboard exterior orientation (position and attitude) measurement systems designed to enable unaided navigation of MODIS data to within approximately 150 m (1σ) at nadir. A global network of ground control points will be used to improve the navigation to approximately 50 m (1σ). This approach allows an operational characterization of the MODIS geolocation errors and enables individual MODIS observations to be geolocated to the sub-pixel accuracies required for terrestrial global change applications,2000,0, 4794,The use of PSA for fault detection and characterization in electrical apparatus,"The monitoring of the actual condition of high voltage apparatus has become more and more important in recent years.
One well-established tool to characterize the actual condition of electric equipment is the measurement and evaluation of partial discharge data. Immense effort has been put into sophisticated statistical software tools to extract meaningful analyses out of data sets, without taking into account the relevant correlations between consecutive discharge pulses. In contrast to these classical methods of partial discharge analysis, the application of Pulse Sequence Analysis allows a far better insight into the local defects. The detailed analysis of sequences of discharges in a voltage and a current transformer shows that the sequence of the partial discharge signals may change with time, because either different defects are active at different measuring times or a local defect may change with time as a consequence of the discharge activity. Hence, for the evaluation of the state of degradation or the classification of the type of defect, the analysis of short `homogeneous' sequences or sequence-correlated data is much more meaningful than just the evaluation of a set of independently accumulated discharge data. This is demonstrated by the evaluation of measurements performed on different commercial hv apparatus",2000,0, 4795,A novel approach to fault tolerant computing [in space systems],"The realization of fault tolerant computers requires a considerable effort, both for their development and validation. In addition, the redundancy required to achieve the fault tolerance increases power consumption, mass and volume of the computers. In order to mitigate these problems, a standardized Fault Management Element (FME) has been developed, in which the complete set of fault management functions necessary to realize fault tolerant computers are provided once and for all in standardized and fully validated form. The fault management technology of this FME is based on the Byzantine fault tolerant computer for the Russian Service Module of the International Space Station, and for the logistics vehicle ATV servicing the station. Using this FME, fault tolerant computers need not be developed in the usual sense, but are basically realized by a configuration process, which, in simplified terms, comprises an integration of an FME with each of the foreseen redundant application processor boards, and their cross-strapping via the preconceived high-speed data links of the FMEs. By this novel approach the considerable development and validation effort to realize fault tolerant computers is practically eliminated. Moreover, the FME, which is currently available as a printed circuit board, will be available as an ASIC in the near future, such that the above-mentioned power, mass and volume problems are also greatly reduced",2001,0, 4796,Identification of insulation defects in gas-insulated switchgear by chaotic analysis of partial discharge,"The recent increase in the number of failures during the installation and insulation of gas-insulated switchgear (GIS) has led to a need for a reliable risk assessment technique. Accordingly, in this study, a fundamental database of UHF partial discharge (PD) patterns corresponding to different types of defects is presented for risk assessment and for the observation and diagnosis of the state of insulation of GIS at field sites; the patterns have been obtained by modelling a number of defects that have been reported to be the most critical in GIS.
For the realisation of a system that can simultaneously detect and analyse UHF PD (it is internationally accepted that the use of such a system is the most effective method to diagnose GIS insulation), a wideband UHF sensor and amplifier are designed and fabricated. The system operation is then investigated. In addition, a chaotic analysis of partial discharge (CAPD) is proposed. This analysis can identify the type of defect by means of PD pattern classification without employing the phase information of the applied voltage signal. The proposed CAPD can replace the conventional phase-resolved partial discharge (PRPD) analysis and can be employed at field sites when the phase information is unavailable. In particular, the PD patterns of free-moving conducting particles, known to be very important in GIS, can be distinguished from those of other defects when using the CAPD method, which is not the case when the PRPD analysis is being performed.",2010,0, 4797,On constructing the minimum orthogonal convex polygon for the fault-tolerant routing in 2-D faulty meshes,"The rectangular faulty block model is the most commonly used fault model for designing fault-tolerant and deadlock-free routing algorithms in mesh-connected multicomputers. The convexity of a rectangle facilitates simple, efficient ways to route messages around fault regions using relatively few or no virtual channels to avoid deadlock. However, such a faulty block may include many nonfaulty nodes which are disabled, i.e., they are not involved in the routing process. Therefore, it is important to define a fault region that is convex and, at the same time, includes a minimum number of nonfaulty nodes. In this paper, we propose an optimal solution that can quickly construct a set of minimum faulty polygons, called orthogonal convex polygons, from a given set of faulty blocks in a 2-D mesh (or 2-D torus). The formation of orthogonal convex polygons is implemented using either a centralized or a distributed solution. Both solutions are based on the formation of faulty components, each of which consists of adjacent faulty nodes only, followed by the addition of a minimum number of nonfaulty nodes to make each component a convex polygon. Extensive simulation has been done to determine the number of nonfaulty nodes included in the polygon, and the result obtained is compared with the best existing known result. Results show that the proposed approach can not only find a set of minimum faulty polygons, but also does so quickly in terms of the number of rounds in the distributed solution.",2005,0, 4798,Distributed fault tolerance in optimal interpolative nets,"The recursive training algorithm for the optimal interpolative (OI) classification network is extended to include distributed fault tolerance. The conventional OI Net learning algorithm leads to network weights that are nonoptimally distributed (in the sense of fault tolerance). Fault tolerance is becoming an increasingly important factor in hardware implementations of neural networks. But fault tolerance is often taken for granted in neural networks rather than being explicitly accounted for in the architecture or learning algorithm. In addition, when fault tolerance is considered, it is often accounted for using an unrealistic fault model (e.g., neurons that are stuck on or off rather than small weight perturbations). Realistic fault tolerance can be achieved through a smooth distribution of weights, resulting in low weight salience and distributed computation.
Results of trained OI Nets on the Iris classification problem show that fault tolerance can be increased with the algorithm presented in this paper",2001,0, 4799,Diffusion properties of point defects in barium strontium titanate thin films,"The relationship between the diffusion behavior of hydrogen and the electrical properties of (Ba, Sr)TiO3 (BST) thin-film capacitors was investigated using thermal desorption spectroscopy and secondary ion mass spectroscopy analyses. It has been clearly shown that the frequency dependence of the complex impedance profile of the BST thin-film capacitors can be successfully represented by two parallel resistor-capacitor (RC) electrical equivalent networks in series, correlated with the distribution of the hydrogen, namely, the Pt/BST interface region with the influence of hydrogen and the BST bulk region without the influence of hydrogen. However, the I-V properties of the BST thin-film capacitors are determined almost entirely by the hydrogen atoms existing at the Pt/BST interface.",2007,0, 4800,"Visual Tools for Analysing Evolution, Emergence, and Error in Data Streams","The relatively new field of stream mining has necessitated the development of robust drift-aware algorithms that provide accurate, real-time data handling capabilities. Tools are needed to assess and diagnose important trends and investigate drift evolution parameters. In this paper, we present two novel visualisation techniques, Pixie and Luna graphs, which incorporate salient group statistics coupled with intuitive visual representations of multidimensional groupings over time. Through the novel representations presented here, spatial interactions between temporal divisions can be diagnosed and overall distribution patterns identified. They provide a means of evaluating, in a non-constrained capacity, commonly constrained evolutionary problems.",2007,0, 4801,Fault-tolerance and reliability of post-CMOS systems: a circuit perspective,"The reliability of systems made of unreliable nanoelectronic devices is discussed in this paper. The massive defect density which may affect future fabrication technologies calls for novel solutions, where spatial redundancy and the voting scheme play a significant role. The averaging and thresholding voting mechanism that was used in CMOS technologies is presented in the context of nanodevices, based on SET circuits. Theoretical developments supported by numerical simulations show that the presented voter and circuit architecture are also applicable in nanoelectronic design, and are superior to classical voters.",2009,0, 4802,An Approach to the Development of Inference Engine for Distributed System for Fault Diagnosis,"Reliable and fault-tolerant computers are key to the success of the aerospace and communication industries. Designing a reliable digital system, and detecting and repairing the faults, are challenging tasks if the digital system is to operate without failures for a given period of time. The paper presents a new and systematic software engineering approach to performing fault diagnosis in parallel and distributed computing. The purpose of the paper is to demonstrate a method to build a fault diagnosis system for parallel and distributed computing. The paper chooses a model that has posed a tremendous challenge to the user for fault analysis: the classic PMC model, which happens to be a parallel and distributed computing model.
The paper also shows a method for building an optimal inference engine by obtaining subgraphs that preserve the necessary and sufficient conditions of the model. Keywords: Parallel and Distributed Computing, Artificial Intelligence.",2009,0, 4803,Research on dynamic simulation of the resonance fault current limiter,"The resonance fault current limiter (RFCL), which avoids the disadvantages of the series reactor, is feasible for EHV and UHV grids. To contribute to the project application of the RFCL, and to study its running characteristics and impact on relay protection, the dynamic simulation lab of the State Grid simulation center carries out research on simulation techniques for the RFCL. The principle and technical properties of the RFCL applied to EHV power transmission lines are analyzed. Considering the features of the dynamic simulation system in the lab, the approach to choosing the parameters of the simulation model and the corresponding structural design are described. This simulation model has been connected into the dynamic simulation system, and simulation experiments for inspecting the control and protection functions of the RFCL have been performed. Experimental results show that the performance of the developed simulation model for the RFCL meets the design requirements, and the model can be applied in dynamic simulation tests and research.",2010,0, 4804,On-line fault diagnosis study for roller bearing based on fuzzy fault tree,Roller bearings are commonly used in rotating machinery and play an important role. In this paper fuzzy theory and fault trees are used for on-line fault diagnosis of roller bearings. The principle of fuzzy diagnosis is taken as the basis, and symptom sets and fault sets are extracted through fault tree analysis. According to judgement rules, the location of faults is determined. Experiments show this method is effective for fault diagnosis of roller bearings.,2010,0, 4805,LAPACK-based condition and error estimators for Kalman filter design,The paper deals with the efficient computation of condition and error estimates in continuous-time Kalman filter design. LAPACK-based estimators are proposed involving the solution of triangular Lyapunov equations along with one-norm computation.,2003,0, 4806, ADC Based Frequency Error Measurement in Single Carrier Digital Modulations,"The paper deals with the functional block architecture implementing the method to measure the carrier frequency error of single carrier digital modulations. The method is able to operate on the modulations M-ary amplitude shift keying, M-ary phase shift keying and M-ary quadrature amplitude modulation used in telecommunication systems. The functional block architecture is obtained from the Software Radio architecture, and is based on the cascade of the analog to digital converter (ADC), the digital down converter and the baseband processing. Looking to the implementation in measurement instrumentation, the performance of this cascaded architecture is investigated by considering three different ADC architectures. They are based on: (i) the pipeline, (ii) the single quantizer loop sigma-delta modulator, and (iii) the multistage noise shaper (MASH) sigma-delta modulator. Numerical tests (i) confirm the important role of the ADC in this architecture, and (ii) highlight the interesting performance of the MASH-based sigma-delta modulator for use in advanced measurement instruments",2005,0, 4807,Path delay fault testability analysis,The paper deals with the problem of testing path delay faults.
We present results obtained with a newly developed test pattern generator. This generator is based on the use of reduced ordered binary decision diagrams (ROBDDs) and offers many advantages compared with other ATPGs published in the literature. An important contribution of the paper is the analysis of various testability features of digital circuits with respect to path delay faults. It was applied to the ISCAS89 benchmark and other circuits. The proposed testability measures are helpful in codesigning deterministic and pseudorandom delay tests as well as in improving circuit testability,2000,0, 4808,Experimental validation of fault detection and fault tolerance mechanisms,The paper deals with the problem of validating the effectiveness of hardware and software mechanisms decreasing system susceptibility to hardware faults. The validation process is based on the use of a software-implemented fault injector (FITS). The performed analysis concentrates on tuning the profile of faults and experiment set-ups. The presented simulation results are explained in the context of the considered applications.,2002,0, 4809,Performance of multicode DS/CDMA with noncoherent M-ary orthogonal modulation in the presence of timing errors,"The paper derives an accurate approximation to the bit error rate (BER) of multicode DS/CDMA with noncoherent M-ary modulation in wideband fading channels, when timing errors are made at the receiver. This reflects the practical scenario where the path delays are imperfectly estimated, leading to synchronization errors between the correlation receivers and the received signals. The analysis is applicable to any type of fading distribution, and is shown to match closely the Monte Carlo system simulations, especially for small timing errors.",2004,0, 4810,Experimental fault-tolerant control of a PMSM drive,"The paper describes a study and an experimental verification of remedial strategies against failures occurring in the inverter power devices of a permanent-magnet synchronous motor drive. The basic idea of this design consists of incorporating a fourth inverter pole, with the same topology and capabilities as the other three conventional poles. This minimal redundant hardware, appropriately connected and controlled, allows the drive to face a variety of power device fault conditions while maintaining smooth torque production. The achieved results also show the industrial feasibility of the proposed fault-tolerant control, which could fit many practical applications",2000,0, 4811,Unibus: Aspects of heterogeneity and fault tolerance in cloud computing,"The paper describes our on-going project, termed Unibus, in the context of facilitating fault-tolerant executions of MPI applications on computing chunks in the cloud. In general, Unibus focuses on resource access virtualization and automatic, user-transparent resource provisioning that simplify the use of heterogeneous resources available to users. In this work, we present the key Unibus concepts (the Capability Model, composite operations, mediators, soft and successive conditionings, meta-applications), and demonstrate how to employ Unibus to orchestrate resources provided by a commercial cloud provider into a fault-tolerant platform capable of executing message passing applications. In order to support fault tolerance we use DMTCP (Distributed MultiThreaded CheckPointing), which enables checkpointing at the user's level.
To demonstrate that the Unibus-created, FT-enabled platform allows MPI applications to be executed, we ran the NAS Parallel Benchmarks and measured the overhead introduced by FT.",2010,0, 4812,Application of fault current limitation techniques in a transmission system,"The paper describes the application of fault current limiting techniques to the Oman electricity transmission system to overcome high short-circuit currents in some parts of the grid. These include splitting busbars at selected grid stations, regrouping generators at power stations, opening transmission lines at critical points, and introducing fault current limiters at strategic places in the network. Computer simulation results, using the DIgSILENT software package, are presented to show the effectiveness of these techniques in reducing the short-circuit currents at critical busbars. The results have shown that the calculated short-circuit currents can be reduced to be within the fault level capacity of the existing switchgear. Splitting busbars and regrouping generators are considered short-term temporary solutions with no cost. Practical implementation of this technique at the Rusail power plant is described. Employing fault current limiting reactors is considered a long-term permanent solution.",2010,0, 4813,"On-line sensor fault detection, isolation, and accommodation in automotive engines","The paper describes the hybrid solution, based on Artificial Neural Networks (ANNs) and production rules, adopted in the realization of an Instrument Fault Detection, Isolation, and Accommodation scheme for automotive applications. Details on the ANN architectures and training are given together with the diagnostic and dynamic performance of the scheme.",2002,0, 4814,Power quality analysis and corrections,"The paper discusses the effect of short-circuit faults on sensitive loads and methods for reducing the duration of voltage sags, swells or interruptions by improving existing protection methods or adding advanced protection schemes. Requirements for the recording of power quality events in order to understand their effects on the manufacturing process are presented. Combining multiple overcurrent elements in a complex overcurrent characteristic, using negative sequence overcurrent protection, distribution bus and breaker failure protection, and fuse saving or selective backup tripping schemes are described.",2005,0, 4815,Experimental validation of fault injection analyses by the FLIPPER tool,The paper discusses the experimental validation of fault injection analyses accomplished with the FLIPPER tool. Validation has been accomplished through accelerated proton testing of a benchmark design provided by the European Space Agency.,2009,0, 4816,VHDL implementation of a neural diagnosis system: application to induction machine fault detection,"The purpose of this paper is to present an implementation method using a neural network dedicated to diagnosis. The example given is related to induction machine diagnosis, but we explain how the methodology can apply to various classification systems. The proposed implementation is based on the VHDL language in order to have a flexible solution in terms of implementation, the prototype being intended for realization in an FPGA architecture. We describe the VHDL modelling of the network, how it can be used for the ""training"" of the network, and the results obtained.
We demonstrate the accuracy of the model and its reliability.",2004,0, 4817,Impact of attenuation and scatter correction in SPECT for quantification of cerebral blood flow using 99mTc-ethyl cysteinate dimer,"The purpose of this study was to evaluate the effects of the attenuation correction and scatter correction methods, both validated previously, on the quantitative estimation of rCBF using 99mTc-ECD and SPECT. SPECT scans were performed on 7 subjects, and images were reconstructed by OSEM in which uniform and segmented maps were used for attenuation correction, with and without scatter correction, which is based on the transmission-dependent convolution subtraction technique. Segmented and uniform maps were generated from MR images. The authors also produced uniform maps using ECD images obtained at various threshold levels. Scatter correction improved the image contrast dramatically. The K1 image with attenuation and scatter corrections assuming a homogeneous CL map was consistent with those obtained by the segmented map in most regions, except in a deep structure (e.g. 7.3%). This small amount of error was also observed in a phantom study and Monte-Carlo simulation. Absolute K1 values in the reconstructed images were sensitive to the threshold level when the edge of the brain was determined from the ECD images, and varied from 14.2 to 42.3%, corresponding to threshold levels from 10 to 20%, respectively. Using the optimal threshold level, absolute K1 values varied by 8.8%. This suggests the need for further development of an appropriate edge detection technique. This study demonstrated that scatter correction is essential in quantitative SPECT studies with ECD in the brain. It was also demonstrated that the use of a uniform attenuation map can provide reasonable accuracy, despite small but significant errors in deep structure regions",2000,0, 4818,Positive Switching Impulse Discharge Performance and Voltage Correction of Rod-Plane Air Gap Based on Tests at High-Altitude Sites,"The Qinghai-Tibet Railway is the highest railway in the world. Up to now, there have been no test and service data for the external insulation of the power-supply project of the railway system above 4000 m above sea level (a.s.l.). The ""g"" parameter method recommended by IEC Publication 60.1 (1989) has a limited applicable range. Therefore, based on the former tests carried out in the artificial climate chamber (ACC), in this paper, a series of test investigations is conducted on the positive switching impulse (PSI) discharge performance of rod-plane air gaps with gap spacings of 0.25 to 3.0 m at six high-altitude sites along the Qinghai-Tibet Railway with altitudes of 2820 to 5050 m. With analyses of the mathematical optimization method on the test results, a new correction method for the discharge voltage is proposed. The results are also checked and compared with the test results obtained from the simulation tests carried out in the ACC. It is indicated that the 50% PSI discharge voltage U50 of the air gap at high altitude is a power function of gap spacing d, and also a power function of the relative pressure of dry air and absolute humidity. The influence law of atmospheric parameters on U50 obtained at high-altitude sites is the same as that obtained in the ACC. U50 obtained in the ACC is about 8.15% higher than that obtained at high-altitude sites due to the influence of nonsimulated factors, such as ultraviolet rays and cosmic radiation.
The ""g"" parameter method is not applicable to regions with altitudes above 2800 m.",2009,0, 4819,"A Theory of Mutations with Applications to Vacuity, Coverage, and Fault Tolerance","The quality of formal specifications and the circuits they are written for can be evaluated through checks such as vacuity and coverage. Both checks involve mutations to the specification or the circuit implementation. In this context, we study and prove properties of mutations to finite-state systems. Since faults can be viewed as mutations, our theory of mutations can also be used in a formal approach to fault injection. We demonstrate theoretically and with experimental results how relations and orders amongst mutations can be used to improve specifications and reason about coverage of fault tolerant circuits.",2008,0, 4820,LEAP: An accurate defect-free IDDQ estimator,"The quiescent current (IDDQ) consumed by a CMOS IC is a good indicator of the presence of a large class of defects. However, the effectiveness of IDDQ testing requires appropriate discriminability of defective and defect-free currents, and hence it becomes necessary to estimate the currents involved in order to design the IDDQ test. In this work, we present a method to accurately estimate the non-defective IDDQ consumption based on a hierarchical approach at the electrical (cell) and logic (circuit) levels. This accurate estimator is used in conjunction with an ATPG to obtain vectors having low/high defect-free IDDQ currents",2000,0, 4821,The RADARSAT-2 synthetic aperture radar antenna phased array error analysis and performance,"The RADARSAT-2 synthetic aperture radar is required to generate a wider range of data products than any preceding civilian satellite SAR. To achieve the required flexibility, the antenna on RADARSAT-2 is a fully active phased array, which provides two-dimensional beamforming and beamsteering. Phased arrays are subject to a wide variety of phase, amplitude and mechanical error sources, all of which vary with temperature. This paper discusses the various sources of error. These error sources are translated into beampattern variations in azimuth and elevation, and the resulting pattern variations are then translated into image quality performance variations in sensitivity, resolution and ambiguity ratios.",2003,0, 4822,Research on fault line detection and restraining transient over-voltage under arc-suppression coil with paralleled resistance grounding mode,"The principle of fault line detection and restraining transient over-voltage under the arc-suppression coil with paralleled resistance grounding mode is proposed in this paper and verified by simulation. Compared with other grounding modes, its superiority is demonstrated. Finally, the value of the paralleled resistance is determined.",2010,0, 4823,Automatic defects classification with p-median clustering technique,The problem of automatic defect recognition and classification for vision systems development is addressed. The main objectives of such systems are defect recognition and classification based on known features. The classification function is designed using cluster analysis. A two-stage approach is proposed. In the first, offline stage of classification, a teaching process is employed. In the second, online stage, the inspection image is classified in real time by comparing its features with the closest medians. Comparative analysis with state-of-the-art classification methods has demonstrated the efficiency of the proposed approach.
Examples described here relate specifically to the semiconductor industry but can be adapted to other manufacturing processes.,2008,0, 4824,Budget-Dependent Control-Flow Error Detection,The problem of detecting control flow errors in software has been studied extensively in the literature and many detection techniques have been proposed. These techniques typically have high memory and performance overheads and hence are unusable for real-time embedded systems, which have tight memory and performance budgets. This paper presents two algorithms by which the overheads associated with any detection technique can be lowered by trading off fault coverage. These algorithms are generic and can be applied to any detection technique. They can be applied either individually or cumulatively. The algorithms are validated on a previously proposed detection technique using SPEC benchmark programs. Fault injection experiments suggest that massive savings in overheads can be achieved using the algorithms with just a minor drop-off in fault coverage.,2008,0, 4825,Spectral Properties and Interpolation Error Analysis for Variable Sample Rate Conversion Systems,"The problem of variable sample rate conversion (SRC) has received much attention on account of its applications in software defined radios (SDRs) that must support a wide variety of data rates. In this paper, we investigate the spectral properties of variable SRC and focus on the interpolation error obtained using any two interpolation kernels. We show that SRC is a generalization of decimation for both rational and irrational conversion ratios. In addition, a frequency domain expression for the mean-squared interpolation error is derived and simplified. Simulations presented show the degradation effects of using practical piecewise polynomial based interpolants as opposed to the underlying bandlimited sinc function for several input signals.",2007,0, 4826,Assessing software implemented fault detection and fault tolerance mechanisms,The problems of hardware fault detection and correction using software techniques are analysed. They relate to our experience with a large class of applications. The effectiveness of these techniques is studied using a special fault injector with various statistical tools. A new error-handling scheme is proposed.,2003,0, 4827,Characterizing the effects of transient faults on a high-performance processor pipeline,"The progression of implementation technologies into sub-100 nanometer lithographies renews the importance of understanding and protecting against single-event upsets in digital systems. In this work, the effects of transient faults on high performance microprocessors are explored. To perform a thorough exploration, a highly detailed register transfer level model of a deeply pipelined, out-of-order microprocessor was created. Using fault injection, we determined that fewer than 15% of single bit corruptions in processor state result in software visible errors. These failures were analyzed to identify the most vulnerable portions of the processor, which were then protected using simple low-overhead techniques. This resulted in a 75% reduction in failures. Building upon the failure modes seen in the microarchitecture, fault injections into software were performed to investigate the level of masking that the software layer provides.
Together, the baseline microarchitectural substrate and software mask more than 9 out of 10 transient faults from affecting correct program execution.",2004,0, 4828,Educational software for the numerical correction of experimental magnetization curves,"The proposed software allows students to become aware of the importance of experimental data accuracy in magnetism. A common error source for magnetization curves (including hysteresis cycles) is the demagnetization effect and the influence of the magnetic sensor position. Our software helps the user to understand the principle and the effect of each correction method. The Graphical User Interface (GUI) is designed as a wizard, assisting students in deciding which correction procedure could be best and in obtaining the intrinsic magnetic material characteristic to be used in electromagnetic field computation.",2010,0, 4829,CCTV Automatic Fault Detection system,"The purpose of the Automatic Fault Detection (AFD) system is to provide efficient and early fault detection for Closed Circuit Television (CCTV) cameras. The system is used to automatically select cameras from a predefined list, perform test algorithms, and finally report found faults to the relevant department. The paper discusses the system's architecture, fault finding techniques, current results and future expansion.",2008,0, 4830,Correction of Structured Noise in fMRI Using Spatial Independent Component Analysis: Corsica,"The physiological fluctuations (breathing and heartbeat) and brain movements are the main sources of confounds in activation and functional connectivity studies in functional magnetic resonance imaging (fMRI). The main difficulty in coping with these effects is the aliasing of cardiac and possibly respiratory signals for acquisitions with long TR (typically TR > 1s). We propose a method of structured noise correction based on spatial independent component analysis, able to extract components linked to cardio-respiratory activity and brain movements. The automatic selection of noise-related components is based on a stepwise regression procedure using ""true"" physiological noise time courses as reference (extracted from regions of interest in the cerebro-spinal fluid and near major blood vessels). We evaluated the sensitivity of the selection on long-TR and short-TR datasets and showed that our method was efficient even for long-TR datasets",2006,0, 4831,Fault-tolerant design of the IBM pSeries 690 system using POWER4 processor technology,"The POWER4-based p690 systems offer the highest performance of the IBM eServer pSeries line of computers. Within the general-purpose UNIX server market, they also offer the highest levels of concurrent error detection, fault isolation, recovery, and availability. High availability is achieved by minimizing component failure rates through improvements in the base technology, and through design techniques that permit hard- and soft-failure detection, recovery, and isolation, repair deferral, and component replacement concurrent with system operation. In this paper, we discuss the fault-tolerant design techniques that were used for the array, logic, storage, and I/O subsystems of the p690. We also present the diagnostic strategy, fault-isolation, and recovery techniques. New features such as the POWER4 synchronous machine-check interrupt, PCI bus error recovery, array dynamic redundancy, and minimum-element dynamic reconfiguration are described.
The design process used to verify error detection, fault isolation, and recovery is also described.",2002,0, 4832,Self fault-tolerance of protocols: A case study,"The prerequisite for existing protocols' correctness is that the protocols operate normally under normal conditions, rather than dealing with abnormal conditions. In other words, existing protocols cannot provide fault tolerance when some fault occurs. This paper discusses the self fault-tolerance of protocols. It describes some concepts and methods for achieving self fault-tolerance of protocols. Meanwhile, it provides a case study, investigates a typical protocol that does not satisfy self fault-tolerance, and gives a newly redesigned version of this existing protocol using the proposed approach.",2000,0, 4833,Timely use of the CAN protocol in critical hard real-time systems with faults,The presence of network errors such as electrical interference affects the timing properties of a CAN (Controller Area Network) bus. In hard real-time systems it is often better not to receive a message than to receive it too late. Aborting late messages is a form of real-time error confinement which prevents late messages from affecting the timeliness of other messages and processes. This can be used to help guarantee hard real-time performance in a distributed system using CAN in the presence of unbounded network errors,2001,0, 4834,VirtualWire: a fault injection and analysis tool for network protocols,"The prevailing practice for testing protocol implementations is direct code instrumentation to trigger specific states in the code. This leaves very little scope for reuse of the test cases. In this paper, we present the design, implementation, and evaluation of VirtualWire, a network fault injection and analysis system designed to facilitate the process of testing network protocol implementations. VirtualWire injects user-specified network faults and matches network events against anticipated responses based on high-level specifications written in a declarative scripting language. With VirtualWire, testing requires no code instrumentation and fault specifications can be reused across versions of a protocol implementation. We illustrate the effectiveness of VirtualWire with examples drawn from testing Linux's TCP implementation and a real-time Ethernet protocol called Rether. In each case, 10 to 20 lines of script is sufficient to specify the test scenario. VirtualWire is completely transparent to the protocols under test, and the additional overhead in protocol processing latency it introduces is below 10% of the normal.",2003,0, 4835,Fault tolerant insertion and verification: a case study,"The particular circuit structures that allow the building of a Fault Tolerant (FT) circuit have been extensively studied in the past, but currently there is a lack of CAD support in the design and evaluation of FT circuits. The aim of the AMATISTA European project (IST project 11762) is to develop a set of tools devoted to the design of FT digital circuits. The toolset is composed of an automatic insertion tool and a simulation tool to validate the FT design.
This paper is a case study describing how this set of FTI (Fault Tolerant Insertion) and FTV (Fault Tolerant Verification) tools has been used to increase the reliability in a typical automotive application.",2002,0, 4836,PDC patterns of biodegradable transformers insulation oil after experienced faults and at different moisture levels,"The PDC method is a simple and reliable analysis tool to monitor transformer insulation systems. Based on PDC measurements of the biodegradable oil in a healthy oil condition at different moisture levels and after faults (partial discharge, overheating and arcing), fault identification can be achieved by comparing the PDC pattern of an oil sample against the PDC fingerprints. Furthermore, the relationship curves between the oil moisture level and its conductivity level and measured capacitance value have been established. These curves can be used to predict the oil moisture condition and the fault type.",2010,0, 4837,Using transparent files in a fault tolerant distributed file system,"The peer-to-peer model and the availability of bandwidth are fostering the creation of new distributed file systems. However, files belonging to local applications and distributed applications are usually handled in the same way by the local file system, so both contribute equally to consuming storage space. This paper presents a peer-to-peer distributed file system which uses the ""transparent file"" concept to improve its fault tolerance and file availability. Files are kept as transparent/volatile replicas, using the free space available in each local file system. When a replica is invalidated, peers cooperate to restore it. The proposed architecture was implemented and tested; experiments showed its feasibility, and that its costs are proportional to the size of the files being replicated. The occurrence of multiple simultaneous replica invalidations did not impose a significant overhead.",2009,0, 4838,Analysis of a SPECT OSEM reconstruction method with 3D beam modeling and optional attenuation correction: phantom studies,"The performance of a fast OSEM reconstruction method with 3D collimator beam modeling and optional attenuation correction (""Flash-3D""), developed by Siemens, is investigated using various phantoms and acquisition parameters. Data Spectrum's cylinder phantom with rod and sphere inserts is acquired with varying protocols by systematically changing image-quality-relevant parameters, such as orbit or collimator. The data are then reconstructed with different methods and varying reconstruction parameters. The rod phantom is acquired with its long axis parallel and perpendicular to the rotation axis. A comparison between the 3D, 2D, and 1D iterative methods, with and without attenuation correction, and FBP is done using qualitative and semiquantitative measures of image quality. The attenuation correction is done using CT-derived μ-maps. The contrast of the hot rods in parallel alignment is 17% larger for OSEM-3D AC than for OSEM-2D AC for the 3 largest rod sections. In this configuration, where each slice shows the same structure, the difference between 3D and 2D is minor. For the perpendicular alignment, the contrast of the OSEM-3D AC reconstruction is 118% larger than that of OSEM-2D AC. OSEM-2D AC shows contrast values almost identical to FBP, which is mainly due to the distortions of the rods caused by the 2D beam modeling. OSEM-3D AC comes closest to the expected hot contrast of 3.7 for the hot-cold sphere phantom.
Future work will further develop the analysis tools for the phantoms and investigate the effect of varying image-quality-relevant acquisition and reconstruction parameters.",2003,0, 4839,An Improved ADC-Error-Correction Scheme Based on a Bayesian Approach,"The paper presents an improved method for analog-to-digital-converter (ADC) nonlinearity correction based on a Bayesian-filtering approach. In particular, the dependence of a previous version of the method on the statistical characterization of the input signal has been removed. Now, the method can work with whatever stimulus signal is used, without a priori knowledge about it. The proposed improvement has been validated by numerical simulation using behavioral models provided by an ADC manufacturer and by experiments on real ADCs.",2008,0, 4840,Fault diagnosis of a chemical process using identification techniques,"The paper presents the application results concerning the fault diagnosis of a chemical process using dynamic system identification and model-based residual generation techniques. The considered approach consists of identifying different families of models for the monitored system. Then, dynamic output observers or Kalman filters are used as residual generators. The proposed fault diagnosis and identification scheme has been tested on a real chemical process in the presence of sensor, actuator and component faults as well as disturbances.",2002,0, 4841,Reliability modeling incorporating error processes for Internet-distributed software,"The paper proposes several improvements to conventional software reliability growth models (SRGMs) to describe actual software development processes by eliminating an unrealistic assumption that detected errors are immediately corrected. A key part of the proposed models is the ""delay-effect factor"", which measures the expected time lag in correcting the detected faults during software development. To establish the proposed model, we first determine the delay-effect factor to be included in the actual correction process. For the conventional SRGMs, the delay-effect factor is basically non-decreasing. This means that the delayed effect becomes more significant as time moves forward. Since this phenomenon may not be reasonable for some applications, we adopt a bell-shaped curve to reflect the human learning process in our proposed model. Experiments on a real data set for Internet-distributed software have been performed, and the results show that the proposed new model gives better performance in estimating the number of initial faults than previous approaches",2001,0, 4842,Refinement Patterns for Fault Tolerant Systems,"The paper puts forward the idea of using fault tolerance refinement patterns to assist system developers in the disciplined application of software fault tolerance mechanisms in rigorous system design. Two patterns are proposed to support a correct introduction of recovery blocks and N-version programming into a system model; these are formally defined and their correctness proven. We also discuss several important issues involved in the use of these patterns in engineering systems, including tool support and pattern composition.",2008,0, 4843,Fault and Equipment Failure Analysis in Distribution Systems Using Intelligent Techniques,"The paper describes a distribution-wide monitoring and analysis system that employs intelligent methods to provide utilities with near-real-time automated fault and equipment analysis.
Fault analysis is provided in terms of supporting restoration activities and power quality diagnostics, and equipment analysis in terms of equipment degradation monitoring and apparatus failure prediction. Over the past few years utilities have deployed a significant number of IEDs, resulting in a vast untapped source of data, as well as advanced communications technologies including BPL, fiber, and wireless that allow for the rapid collection of this data. Also, a number of utilities have deployed or will deploy advanced systems that include a dynamic distribution network model. The presentation describes how the system takes advantage of these new assets and specific methods of integration. This includes the description of data access, communications, and storage technologies as well as the structure and capacities of the analysis modules.",2007,0, 4844,Fault Tolerance Issues in Non-Traditional Grids Implemented with Intelligent Agents,"The scope of this paper is to present the necessity of improving the existing ambient intelligence model with respect to data security. The security of such a model poses a great challenge due to the fact that different networks are mixed in order to provide such an environment. In addition, these networks are generally deployed and then left unattended. All these aspects joined together make it unfeasible to directly apply traditional security mechanisms. Therefore, there is a need to analyze and better understand the security requirements of these networks. This paper describes the specific security attacks on such a model and supplies a solution for these attacks.",2008,0, 4845,Research on Structure Security Evaluation and Fault Diagnosis of Gate Crane Jib,"Gate cranes in service are reaching their age limit, and there are many problems, such as fallen rivets and steel plates thinned by corrosion. In this paper, the author performs three-dimensional modeling and finite element computation of the jib system with the I-DEAS software, and then carries out failure mode and effects analyses and fault tree analyses based on the computations. Simulation results show that the computed stress values of the jib system at certain observation points have surpassed the allowable stress of structural strength, but they are smaller than the strength limit of the metal structure material. The RPN values and the minimal fault tree cut sets of the jib system obtained by the reliability analysis can provide an important reference for enterprise users in the use and maintenance of gate cranes in the future. The research work indicates that users should inspect jib systems regularly and carry out reasonable repairs in order to avoid grave accidents.",2009,0, 4846,Simulation of electrical faults of three phase induction motor drive system,"The simulation of a three-phase induction motor drive system is presented with emphasis on the electrical faults of the inverter and induction motor. Using the basic components of MATLAB SIMULINK toolboxes, the drive system is modeled. An induction motor with mechanical faults is modeled using the winding function method (WFM) and is integrated with the drive model using an S-function. The influence of the harmonics produced by synchronous or asynchronous PWM power converters on the induction motor stator current fault-related components is studied. The current waveform patterns for various modes of inverter device failures are investigated.
It is shown that the stator current can be used for detection of electrical faults in either the induction motor or the inverter",2001,0, 4847,An Efficient Algorithm for Skew-Correction of Document Image Based on Cyclostyle Matching,"Skew correction of a scanned document image is a necessary step before further processing. To make the algorithm practical, robust, and real-time, this paper presents a new skew-detection algorithm based on gray projection cyclostyle matching of document images. The algorithm analyzes the gray-scale projection curves in the horizontal and vertical directions of the tilted images, compares them when they are not homologous, and thus solves the problem of finding the accurate skew angle. The skew-adjustment algorithm solves the problem of image distortion using a dual linear interpolation algorithm. In the end, the authors illustrate their experimental results, which show that the presented algorithm can solve the problems of skew-detection and skew-adjustment of scanned document images. The skew-correction algorithm is of important practical significance for large numbers of scanned document images.",2008,0, 4848,Development of Worm Gear and Worm Transmission Error Measurement System,"The paper briefly introduces the importance of transmission error (TE) measurement and presents the electrical diagram of a worm gear transmission error measurement system based on field programmable gate arrays (FPGA) and the USB 2.0 transfer protocol. The system achieves high-speed transmission of the acquired data, which are displayed in real time by the host computer program. Finally, the paper describes the accuracy test process of the instrument.",2010,0, 4849,Design of gearbox fault diagnosis system based on LabVIEW,"The paper introduces the method for designing a gearbox fault diagnosis system based on LabVIEW. The hardware system consists of an acceleration transducer, a signal conditioning card, a data acquisition card, and a PC. The software system, with a friendly interface, is designed based on LabVIEW, and the combination of wavelet transform and ensemble empirical mode decomposition is used to extract fault features from the non-stationary signal. The running results show that the system can diagnose the faults of gearboxes rapidly and accurately and has a promising future.",2010,0, 4850,Improving fault coverage in system tests,The paper is devoted to the problem of self-testing in a system environment (field diagnosis and maintenance at the end user). It discusses test process decomposition in the context of increasing hardware complexity and the proliferation of embedded DFT and BIST circuitry in commercial off-the-shelf (COTS) VLSI chips. Test observability is improved with the use of various on-line monitoring mechanisms. To optimize test effectiveness we use special tools based on direct and indirect fault coverage analysis,2000,0, 4851,Design of elliptic filters with phase correction by using genetic algorithm,The paper presents a general algorithm for designing elliptic filters with phase correctors. The proposed algorithm uses a genetic solver to adjust the corrector's transfer function of a suitable order, minimizing the filter's non-linearities and thus increasing the design's immunity to signal distortion.
The solution was implemented in the Matlab environment and investigated using Matlab built-in functions as well as HSpice circuit analysis.,2009,0, 4852,Adaptive affine error concealment scheme during the H.264/AVC decoding process,"The use of H.264/AVC is increasing in various applications. In this paper, we propose an effective method that recovers the video data when the H.264/AVC-coded bitstream is corrupted. It uses an affine transform and OBMA cost criteria, and applies concealment methods adaptively. Simulation results show that the proposed method yields better video quality than other conventional approaches.",2009,0, 4853,A Network-Level Distributed Fault Injector for Experimental Validation of Dependable Distributed Systems,"The use of Java for distributed systems and highly available applications demands the validation of their fault tolerance mechanisms to avoid unexpected behavior during execution. We present an extension of FIONA (fault injector oriented to network applications), a fault injection environment to experimentally validate the dependability of distributed Java applications. The main features of this extension are its distributed architecture, which allows centralized configuration of multiple fault scenarios, and the support for a wider fault model associated with distributed systems, which includes network partitioning. For monitoring, FIONA supports the collection of log information and includes a helper application to integrate this information in a global log for post-mortem dependability analysis. FIONA is simple to operate, and we expect it to facilitate the conduct of validation experiments by application developers and testers",2006,0, 4854,A Wavelet Approach Applied to Transformer Fault Protection: Signal Processing,The use of multiresolution analysis and wavelets is explored in this paper as a means of discriminating between inrush and faults in power transformers. The discrimination is based on the analysis of the high frequency components of the differential signal after using the Clarke transformation. High values of the high frequency spectrum have been found to be present at least once per fundamental cycle if the transformer presents an internal fault. Finite element analysis as well as laboratory tests on a 15 kVA transformer are presented to discuss the technique,2005,0, 4855,FIONA: a fault injector for dependability evaluation of Java-based network applications,"The use of network applications for high availability systems requires the validation of their fault tolerance mechanisms to avoid unexpected behavior during execution. FIONA is a fault injection tool to experimentally validate these mechanisms in Java distributed applications. The tool uses JVMTI, a new interface for the development of debugging and monitoring tools that enables the instrumentation of Java applications. This approach provides complete transparency between the application under test and the fault injection tool, as well as portability. FIONA injects communication faults, making it possible to conduct the dependability evaluation of UDP-based network protocols developed in Java.",2004,0, 4856,A validation fault model for timing-induced functional errors,"The violation of timing constraints on signals within a complex system can create timing-induced functional errors which alter the value of output signals. These errors are not detected by traditional functional validation approaches because functional validation does not consider signal timing.
Timing-induced functional errors are also not detected by traditional timing analysis approaches because the errors may affect output data values without affecting output signal timing. A timing fault model, the Mis-Timed Event (MTE) fault model, is proposed to model timing-induced functional errors. The MTE fault model formulates timing errors in terms of their effects on the lifespans of the signal values associated with the fault. We use several examples to evaluate the MTE fault model. MTE fault coverage results show that it efficiently captures an important class of errors which are not targeted by other metrics",2001,0, 4857,Data preprocessing for prediction of recirculating water chemistry faults,"The water quality data of a petrochemical company are stored in the LIMS database; in order to support the field operation's decision-making according to these data, it is necessary to do some appropriate data mining. However, the accuracy of the results of data mining is directly associated with the quality of the source data, so preprocessing of the raw data is necessary in the data mining process. The process of data preprocessing is as follows. First, the raw data are classified, based on the frequency difference of the data acquisition. Second, data cleaning is performed on the raw data, including cleaning the noisy data, missing data and redundant data (here mainly referring to attribute redundancy): for the noisy data, a method combining automatic processing with manual checking is taken; for the missing data, interpolation using the mean values is mainly applied; and the redundant attribute items are deleted. Third, data transformation is applied for later data processing. This mainly involves normalization to eliminate the differences of dimension and magnitude in the raw data, so that all data can be put together for a comprehensive analysis. Both the mean method and the standard deviation method are used to preprocess the raw data, and a comparison of the results of the two different methods leads to the conclusion that the mean method is the better normalization method. Fourth, data reduction is conducted, which refers to reducing the data storage space as far as possible while ensuring data integrity; the principal component analysis method is used to do this job.",2010,0, 4858,Correction of tropospheric water vapour effect on ASAR interferogram using synchronous MERIS data,"Water vapour in the troposphere has been identified as one of the major error sources in SAR interferograms, as it can cause a spatially variable delay between two non-simultaneous acquisitions. The microwave-signal propagation path delay due to water vapour may reduce the reliability of deformation measurements. This paper aims to assess the water vapour effect on interferograms and to apply synchronous MERIS data to reduce the effect on ASAR interferograms. Due to the co-existence of MERIS and ASAR on board the ENVISAT satellite, they can acquire data co-located in time and space. This offers a unique advantage in combining MERIS and ASAR data to reduce the tropospheric water vapour effect on ASAR interferograms. However, the method is not yet fully operational, and some remaining problems need further discussion, such as how to deal with cloud coverage in the MERIS water vapour image, and how to register MERIS to ASAR from different reference systems. These will be discussed in this paper, and novel ideas are proposed to deal with them.
The discussions are based on the application of the test site in the middle and lower reaches of the Yangtze River, southwest Hubei province, China.",2007,0, 4859,A framework for fault-tolerance in HLA-based distributed simulations,"The widespread use of simulation in future military systems depends, among other things, on the degree of reuse and availability of simulation models. Simulation support in such systems must also cope with failure in software or hardware. Research in fault-tolerant distributed simulation, especially in the context of the high level architecture (HLA), has been quite sparse. Nor does the HLA standard itself cover fault-tolerance extensively. This paper describes a framework, named distributed resource management system (DRMS), for robust execution of federations. The implementation of the framework is based on Web services and semantic Web technology, and provides fundamental services and a consistent mechanism for description of resources managed by the environment. To evaluate the proposed framework, a federation has been developed that utilizes the time-warp mechanism for synchronization. In this paper, we describe our approach to fault tolerance and give an example to illustrate how DRMS behaves when it faces faulty federates",2005,0, 4860,Methods to achieve zero human error in semiconductors manufacturing,"Automation has been a reality in the semiconductor industry since assembly machines became automatic in the mid-80s. The major enabler was the implementation of pattern recognition systems (PRS) to replace eye-oriented operations such as bonding location targeting. Over the last two decades, many machine manufacturers have managed to achieve fast, accurate, and repeatable bonding without much manual assistance. However, none of the assembly machine manufacturers have achieved a one-point process window, meaning major parameters still need to be within a specific optimum range. In many machines, password control has been the most often used method to control risks of human error. Even so, several major risks of human error could not be contained or controlled at some operation steps. One of those steps was the requirement to perform recipe downloads, which was necessary to run various devices and packages in production. Meanwhile, recipe editing could not be avoided, especially because obtaining one hundred percent PRS lighting robustness over time is almost impossible. Despite that, some of the non-closed-loop machine parameters such as actual bond force, temperature and ultrasonic power would still require manual recording in a checklist. On the other hand, most material handling, such as bonding wire and leadframe, also requires operator assistance and judgment. In order to resolve these potential human errors, a new advanced automated method was created based on the concept of computer integrated manufacturing (CIM). With commercial communication protocols, machines work interactively with a host computer at various operation steps. The new CIM system has enabled various human-dependent steps to be either automated or effectively controlled. The first few steps govern possible errors in production lot, machine ID, operator ID, and material handling. This information would be verified against multiple system databases, including planning, bill of material, machine maintenance record, operator skill certification, etc.
After that, all major machine parameters would be obtained and sent to the host computer at a predetermined frequency, without human assistance. Upon receiving these data, they would be compared against the process window, and any violation would trigger machine shutdown. In the final stage, the correct recipe would be downloaded into the machine, allowing material flow in production to start. The whole system was designed with Web-based access incorporated, making it convenient to extract any information at any time and anywhere. One significant difference in comparison to an in-line automated system is that this CIM system's development and deployment do not require high cost, and hence it applies to a wide range of machines. This CIM system has proven to be a cost-effective method to achieve zero human error in semiconductor manufacturing",2006,0, 4861,Determination of the Sea Surface Salinity Error Budget in the Soil Moisture and Ocean Salinity Mission,"The Soil Moisture and Ocean Salinity mission will provide sea surface salinity maps over the oceans, beginning in late 2009. In this paper an ocean salinity error budget is described, an analysis needed to identify the magnitude of the error sources associated with the retrieval. Instrumental, external noise sources, and geophysical errors have been analyzed, stressing their relative impact. This paper includes results from previous studies, addressing the impact of multisource auxiliary sea surface temperature and wind speed data on the final salinity error. It provides, moreover, a sensitivity analysis to the uncertainty of the auxiliary salinity field. Salinity retrieval has been addressed in a wide set of configurations of the inversion algorithm.",2010,0, 4862,Assessment of the Effect of Memory Page Retirement on System RAS Against Hardware Faults,"The Solaris 10 operating system includes a number of new features for predictive self-healing. One such feature is the ability of the fault management software to diagnose memory errors and drive automatic memory page retirement (MPR), intended to reduce the negative impact of permanent memory faults that generate either correctable or uncorrectable errors on system reliability, availability, and serviceability (RAS). The MPR technique allows memory pages suffering from correctable errors and relocatable clean pages suffering from uncorrectable errors to be removed from use in the virtual memory system without interrupting user applications. It also allows relocatable dirty pages associated with uncorrectable errors to be isolated with limited impact on affected user processes, avoiding an outage for the entire system. This study applies analytical models, with parameters calibrated by field experience, to quantify the reduction that can be made by this operating system self-healing technique on the system interruptions, yearly downtime, and number of services introduced by hardware permanent faults, for typical low-end and mid-range server systems. The results show that significant improvements can be made on these three system RAS metrics by deploying the MPR capability",2006,0, 4863,Automatic defects detection in industrial C/C++ software,"The solution to the problem of automatic defect detection in industrial software is covered in this paper. The results of experiments with the existing tools are presented. These results indicate the inadequate efficiency of the implemented analysis. Existing source code static analysis methods and defect detection algorithms are covered.
The program model and the analysis algorithms based on existing approaches are proposed. The problems of co-execution of different analysis algorithms are explored. Ways to improve analysis precision and algorithm performance are proposed. Advantages of the approaches developed are: soundness of the solution, full support of the features of the target programming languages, and analysis of programs lacking full source code using an annotation mechanism. The algorithms proposed in the paper are implemented in the automatic defect detection tool.",2009,0, 4864,Investigation of resonance phenomena in a DVR protecting a load with PF correction capacitor,The source impedance of the supply voltage of a dynamic voltage restorer (DVR) may change due to a change of power system grid configuration. Hence the resonance frequency of the DVR system will change, and that results in harmonic resonance in the DVR. This paper presents a detailed study on possible resonance phenomena in a DVR protecting a load with a power factor correction capacitor. The study reveals that the harmonic resonance cannot be reliably prevented with the existing open-loop control of the DVR due to the lack of damping and stability margin. It is proved that a closed-loop controller consisting of an inner current loop and an outer voltage loop can dampen out possible harmonic resonance in the DVR system by properly selecting the gain parameters of the controller.,2003,0, 4865,Exploration of beam fault scenarios for the Spallation Neutron Source target,"The Spallation Neutron Source (SNS) accelerator systems will provide a 1 GeV, 1.44 MW proton beam to a liquid mercury target for neutron production. In order to ensure adequate lifetime of the target system components, requirements on several beam parameters must be maintained. A series of error studies was performed to explore credible fault scenarios which can potentially violate the various beam-on-target parameters. The response of the beam-on-target parameters to errors associated with the phase-space painting process in the ring and field setpoint errors in all the ring-to-target beam transport line elements was explored and will be presented. The plan for ensuring beam-on-target parameters will also be described.",2003,0, 4866,Validation of hardware error recovery mechanisms for the SPARC64 V microprocessor,"The SPARC64 V microprocessor is designed for use in high-reliability, large-scale unix servers. In addition to implementing ECC for large SRAM arrays, the SPARC64 V microprocessor incorporates error detection and recovery mechanisms for processor logic circuits and smaller SRAM arrays. The effectiveness of these error recovery mechanisms was validated via accelerated neutron testing of Fujitsu's commercial unix server, the PRIMEPOWER 650. Soft errors generated in SRAM arrays were completely recovered by the implemented hardware mechanisms, and only 6.4% of the estimated neutron-induced logic circuit faults manifested as errors, 76% of which were recovered by hardware. From these tests, the soft error failure rate of the SPARC64 V microprocessor due to atmospheric neutron hits was confirmed to be well below 10 FIT.",2008,0, 4867,Global CD uniformity improvement using dose modulation pattern correction of pattern density-dependent and position-dependent errors,"The specification of mask global CD uniformity (GCDU) is ever tightening. There is no exception at the 65-nm node.
Some of the key contributors affecting GCD non-uniformity are pattern-density effects such as the fogging effect from the e-beam writer and the macro loading effect from the etcher. In addition, the contributions from position-dependent effects are significant; these contributions include resist development and baking, as well as aberrations of the wafer-imaging lens. It is challenging to quantify these effects and even more so to correct them to improve the GCDU. Correction of the fogging and etch loading effects had been reported by various authors. In addition to correction for these effects, we are reporting the position-dependent effects in this paper.",2004,0, 4868,Byzantine Fault Tolerance for Electric Power Grid Monitoring and Control,"The stability of the electric power grid is crucial to every nation's security and well-being. As revealed by a number of large-scale blackout incidents in North America, the data communication infrastructure for the power grid is in urgent need of transformation to modern technology. It has been shown by extensive research work that such blackouts could have been avoided if there were more prompt information sharing and coordination among the power grid monitoring and control systems. In this paper, we point out the need for Byzantine fault tolerance and investigate the feasibility of applying Byzantine fault tolerance technology to ensure a high degree of reliability and security of power grid monitoring and control. Our empirical study demonstrated that Byzantine fault tolerant monitoring and control can easily sustain the 60 Hz sampling rate needed for supervisory control and data acquisition (SCADA) operations with sub-millisecond response time under the local-area network environment. Byzantine fault tolerant monitoring and control is also feasible under the wide-area network environment for power grid applications that demand sub-second reaction time.",2008,0, 4869,Computer Model of Geological Faults in 3D and the Application in Beijing Olympic Green District,"The structural modeling techniques for complex geological entities containing reverse faults are discussed and a series of approaches is proposed. We discuss the principle and the process of computer modeling of geological faults in 3D, and establish a series of applied technical proposals. Two kinds of modeling approaches for faults, a fault modeling technique based on stratum recovery and one based on interpolation in subareas, are compared, and a novel approach, named the Unified Modeling Technique for stratum and fault, is presented to solve the puzzling problems of reverse faults, syn-sedimentary faults and faults terminated within geological models. A case study of the bedrock fault model in Beijing Olympic Green District is presented and shows the practical result of this method. It deepens the comprehension of geological phenomena and the modeling approach, and establishes the basic techniques of 3D geological modeling for practical application in the geosciences field",2006,0, 4870,Fault Tolerant Network on Chip Switching With Graceful Performance Degradation,"The structural redundancy inherent to on-chip interconnection networks [networks on chip (NoC)] can be exploited by adaptive routing algorithms in order to provide connectivity even if network components are out of service due to faults, which will appear at an increasing rate with future chip technology nodes.
This paper is based on a new, fine-grained functional fault model and a corresponding distributed fault diagnosis method that facilitate determining the fault status of individual NoC switches and their adjacent communication links. Whereas previous work on network fault-tolerance assumes switches to be either available or fully out of service, we present a novel adaptive routing algorithm that employs the remaining functionality of partly defective switches. Using diagnostic information, transient faults are handled with a retransmission scheme that avoids the latency penalty of end-to-end repeat requests. Thereby, graceful degradation of NoC communication performance can be achieved even under high failure rates.",2010,0, 4871,A Research on Identification System for Image Defects Based on Grating Technology,The structure and mechanism of an identification system for image defects based on grating technology are introduced and some crucial technologies in the system are discussed. This paper then proposes some new ideas on how to satisfy the real-time requirement of the system. The detection experiment demonstrates that these methods are useful and can satisfy the demands of the system design.,2009,0, 4872,Tracking behaviour in the presence of conductive interfacial defects,The study presented in this paper aims at extending the knowledge on how interfacial defects in composite insulation systems may affect surface tracking under contaminated conditions. Model samples mimicking an interfacial conducting defect were made of high temperature vulcanised silicone rubber moulded on an epoxy substrate with a circular metallic foil inserted at the interface. They were tested for tracking resistance by means of the inclined plane test procedure. The samples with defects exhibited shorter time to track compared to reference samples without defects. In addition the tracking was more severe on the defected samples. Electric field simulation performed in parallel revealed that the presence of surface contamination increases the distortion of the electric field around the defect and thus explains both the more severe damage of the material surface and the lower time to tracking experienced in this study.,2009,0, 4873,A Software Tool for Registration Based Distortion Correction in Echo Planar Imaging,"There exists a need for more accurate functional localization through the registration of functional to anatomical magnetic resonance brain images. This paper introduces a new software tool, named NPTK, that has been developed at the Signal and Image Processing Laboratory of the University of Texas at Dallas for improved functional localization in single-subject analysis. This software toolkit is developed based on recently introduced non-rigid registration and field map distortion correction techniques, which have been validated in vivo for retrospective distortion correction in Echo-Planar Imaging. The paper provides an overview of the entire functional localization pipeline, an algorithmic description of the developed software, and a discussion on the performance and applications of the software.",2007,0, 4874,ERSA: Error Resilient System Architecture for probabilistic applications,"There is a growing concern about the increasing vulnerability of future computing systems to errors in the underlying hardware. Traditional redundancy techniques are expensive for designing energy-efficient systems that are resilient to high error rates.
We present Error Resilient System Architecture (ERSA), a low-cost robust system architecture for emerging killer probabilistic applications such as Recognition, Mining and Synthesis (RMS) applications. While resilience of such applications to errors in low-order bits of data is well-known, execution of such applications on error-prone hardware significantly degrades output quality (due to high-order bit errors and crashes). ERSA achieves high error resilience to high-order bit errors and control errors (in addition to low-order bit errors) using a judicious combination of 3 key ideas: (1) asymmetric reliability in many-core architectures, (2) error-resilient algorithms at the core of probabilistic applications, and (3) intelligent software optimizations. Error injection experiments on a multi-core ERSA hardware prototype demonstrate that, even at very high error rates of 20,000 errors/second/core or 2×10-4 errors/cycle/core (with errors injected in architecturally-visible registers), ERSA maintains 90% or better accuracy of output results, together with minimal impact on execution time, for probabilistic applications such as K-Means clustering, LDPC decoding and Bayesian networks. Moreover, we demonstrate the effectiveness of ERSA in tolerating high rates of static memory errors that are characteristic of emerging challenges such as Vccmin problems and erratic bit errors. Using the concept of configurable reliability, ERSA platforms may also be adapted for general-purpose applications that are less resilient to errors (but at higher costs).",2010,0, 4875,Fault tolerant automotive systems: an overview,There is a trend in the automobile industry for an increasing number of safety-related electronic systems in vehicles that are directly responsible for active and passive vehicle safety. These applications will increase overall vehicle safety by liberating the driver from routine tasks and assisting the driver to find solutions in critical situations. Thus it is of the utmost importance to solve fault tolerance issues of the electronic systems themselves. This paper gives an overview of the strategies and structures employed in the automotive environment to assure a good degree of fault tolerance both for complete systems and for integrated circuits,2001,0, 4876,A Neural network based approach for modeling of severity of defects in function based software systems,"There is a lot of work done on predicting the fault proneness of software systems. However, the severity of the faults is more important than the number of faults existing in the developed system, as the major faults matter most for a developer and need immediate attention. Neural networks have already been applied in software engineering applications to build reliability growth models and to predict gross change or reusability metrics. Neural networks are sophisticated non-linear modeling techniques that are able to model complex functions. Neural network techniques are used when the exact nature of inputs and outputs is not known. A key feature is that they learn the relationship between input and output through training. In this paper, five neural network based techniques are explored and a comparative analysis is performed for modeling the severity of faults present in function based software systems. The NASA's public domain defect dataset is used for the modeling. The comparison of different algorithms is made on the basis of Mean Absolute Error, Root Mean Square Error and Accuracy Values.
It is concluded that, out of the five neural network based techniques, the Resilient Backpropagation algorithm based neural network is the best for modeling the software components into different levels of fault severity. Hence, the proposed algorithm can be used to identify modules that have major faults and require immediate attention.",2010,0, 4877,Characterization of defects in photovoltaics using thermoreflectance and electroluminescence imaging,Thermal and electroluminescence (EL) imaging techniques are widely accepted as powerful tools for analyzing solar cells. We have identified and characterized various defects in photovoltaic devices with sub-micron spatial resolution using a novel thermoreflectance imaging technique that can simultaneously obtain thermal and EL images with a mega-pixel silicon-based CCD. Linear and non-linear shunt defects are investigated as well as electroluminescent breakdown regions at reverse biases as low as -5 V. Pre-breakdown sites with electroluminescence are observed. The wavelength flexibility of thermoreflectance imaging is explored and thermal images of sub-micrometer defects are obtained through glass that would typically be opaque for infrared light. Image sequences show a 10 s thermal transient response of a 15 m defect in a polysilicon solar cell. Nanosecond reverse bias voltage pulses are used to detect breakdown regions in thin-film a-Si solar cells with EL.,2010,0, 4878,A Novel High Performance Boost Power Factor Correction (PFC) Converter with an improved Zero Voltage Transition (ZVT) technique,"A novel power factor correction (PFC) converter employing a zero voltage transition (ZVT) technique based on the boost topology is proposed in this paper. It operates at a fixed frequency while achieving zero voltage turn-on of the main switch and zero current turn-off of the boost diode. This is accomplished by employing resonant operation only during switch transitions. During the rest of the cycle, the resonant network is essentially removed from the circuit and converter operation is identical to its non-resonant counterpart. This technique increases the efficiency to 95% and the power factor to 0.99. Soft switching of the diode also reduces EMI, an important system consideration. The principle of operation, theoretical analysis, simulation results and experimental results are presented. A prototype of 500 W is built to test the proposed topology. The input voltage is from 170 V rms to 230 V rms. The output voltage is 400 V. The operation frequency is 100 kHz.",2006,0, 4879,"Utilizing third harmonic 100% stator ground fault protection, a cogeneration experience","Third harmonic, 100% stator ground fault protection schemes are becoming economically viable for small and mid size generators used in cogeneration applications. Practical considerations must be observed in order that such schemes are successfully applied for different machines. This paper introduces an experience with the applications of 3rd harmonic schemes in cogeneration applications and depicts their advantages and limitations. For a 50 MVA generator, actual measurements of produced third harmonics are portrayed and analyzed.
In light of the experience and analysis, applications of certain third harmonic scheme configurations are contemplated",2000,0, 4880,Fault analysis study using modeling and simulation tools for distribution power systems,"This article describes a fault analysis study using some of the best available simulation and modeling tools for electrical distribution power systems. Several software tools were identified and assessed in L. Nastac, et al (2005). The fault analysis was conducted with the assessed software tools using the recorded fault data from a real circuit system. The recorded fault data, including the topology and the line data with more than 1000 elements, were provided by Detroit Edison (DTE) Energy for validation purposes. The effects of pre-fault loading and arcing impedance on the predicted fault current values were also investigated. Then, to ensure that the validated software tools are indeed capable of analyzing circuits with DCs, fault management and relay protection problems were developed and solved using a modified IEEE 34-bus feeder with addition of DCs.",2005,0, 4881,Bit-error-rate performance of intra-chip wireless interconnect systems,This Letter evaluates the bit-error rate (BER) performance of a coherent binary phase-shift keying interconnect system operating on an intra-chip wireless channel at 15 GHz. Results show that the system performance degrades with the separation distance and the data rate. A high data rate at 2 Gb/s with a low BER < 10-5 over the entire chip of size 20 × 20 mm can be achieved with a transmitted power of 10 dBm.,2004,0, 4882,Harmonic suppression with photonic bandgap and defected ground structure for a microstrip patch antenna,"This letter presents a microstrip patch antenna integrated with a two-dimensional photonic bandgap (PBG) and a one-dimensional defected ground structure (DGS) jointly in the ground plane. It is demonstrated that the application of both PBG and DGS eliminates the second and third harmonics and improves the return loss level. Moreover, the combined use of PBG and DGS decreases the occupied area by 70% compared to the conventional PBG patch antenna.",2005,0, 4883,Transmission Line Fault Location Using Two-Terminal Data Without Time Synchronization,"This letter presents a new transmission line fault location method that uses current and voltage sinusoidal phasors at both ends, without the necessity of data synchronization. The main difference from the classical Johns method resides in the fact that the proposed method is based on the magnitude of the fault point voltage and does not demand exact phase angles of the acquired signals. Simulated and real case results are presented, showing that the proposed algorithm is robust, accurate, and provides adequate performance. Practical applications confirm that the synchronization is not really necessary, making the method faster and easier to apply than classical methods in many real situations",2007,0, 4884,Compact Dual-Band Filter Using Defected Stepped Impedance Resonator,"This letter presents a novel approach for designing a dual-band bandpass filter using a defected stepped impedance resonator (DSIR). The resonant frequency of the DSIR is found to be much lower than that of the conventional microstrip stepped impedance resonator (SIR), which reduces the circuit size effectively. Two types of second-order DSIR microstrip bandpass filter operating at 1.85 and 2.35 GHz, respectively, are well designed according to the classical theory of coupled resonator filters.
Then they are combined to construct a compact dual-band filter with a common parallel microstrip feed line; the measured results of the fabricated filter are in good agreement with the simulation.",2008,0, 4885,Fault Classification for SRAM-Based FPGAs in the Space Environment for Fault Mitigation,"This letter proposes a classification algorithm to discriminate between recoverable and non-recoverable faults occurring in static random access memory (SRAM)-based field-programmable gate arrays (FPGAs), with the final aim of devising a methodology to enable the exploitation of these devices also in space applications, typically characterized by long mission times, where permanent faults become an issue. By starting from a characterization of the radiation effects and aging mechanisms, we define a controller able to classify such faults and consequently to apply the appropriate mitigation strategy.",2010,0, 4886,An alternative expression for the symbol-error probability of MPSK in the presence of I/Q unbalance,"This letter provides a new expression for the probability of the phase angle between two vectors perturbed by Gaussian noise with in-phase/quadrature-phase (I/Q) unbalance, in order to simplify the analysis of the error probabilities of M-ary phase-shift keying (MPSK). The error probabilities, symbol-error rates, or bit-error rates for MPSK with I/Q balance or unbalance can be presented straightforwardly from the derived expression. Because the newly derived result is provided in terms of the conventional first-order Gaussian Q-function and the joint Gaussian Q-function with a correlation coefficient dependent on M, it readily allows rapid evaluation for various cases of practical interest.",2004,0, 4887,Estimation of Systematic Errors of MODIS Thermal Infrared Bands,"This letter reports a statistical method to estimate detector-dependent systematic error in Moderate Resolution Imaging Spectroradiometer (MODIS) thermal infrared (TIR) Bands 20-25 and 27-36. There exist scan-to-scan overlapped pixels in MODIS data. By analyzing a sufficiently large amount of those most overlapped pixels, the systematic error of each detector in the TIR bands can be estimated. The results show that the Aqua MODIS data are generally better than the Terra MODIS data in 160 MODIS TIR detectors. There are no detector-dependent systematic errors in Bands 31 and 32 for both Terra and Aqua MODIS data. The maximum detector errors are 3.00 K in Band 21 of Terra and -8.15 K in that of Aqua for brightness temperatures of more than 250 K",2006,0, 4888,Sensors for fault detection using chemical off-gassing analysis,"This novel and revolutionary new approach to printed circuit board testing utilizes nanostructured sensor technology to detect elements of electronic component degradation and failure directly at the circuit board itself, combined with imaging techniques to literally see defects within the circuit board and electronic components. Specific achievements include the development of electronic and chemoresistive sensors based on novel, purpose-developed organic materials and the evaluation of their responses to a number of various failure events. Sample collection mechanisms used for the testing procedures are discussed along with the data acquisition processes, followed by feature extraction and pattern recognition procedures.
Finally, the design of a functional prototype system will also be presented along with the conceptualized integration of the chemical sensor-based off-gassing analysis within a multi-technology fault detection system.",2008,0, 4889,Measures to improve delay fault testing on low-cost testers - a case study,"This paper addresses delay test for SOC devices on low-cost testers. The case study focuses on the at-speed testing of a state-of-the-art microcontroller device using an on-chip high-speed clock generator. The experimental results show that the simple on-chip high-speed clock generator is not sufficient to reach both high fault coverage and acceptable pattern count. Meanwhile, at-speed test constraints, required to enable the delay test on low cost testers, have a significant impact on test generation results. DFT techniques to increase fault coverage and to reduce pattern count are discussed.",2005,0, 4890,Fault detection and protection system for the power converters with high-voltage IGBTs,"This paper addresses problems related to the design and implementation of a fault detection and protection system for high-voltage (HV) NPT IGBT-based converters. An isolated half-bridge power converter topology is investigated, which seems to be very attractive for high-power electronic converters due to its overall simplicity, small component count and low realization costs. This converter is to be applied in rolling stock with its demanding reliability and safety requirements. Clearly, a robust control and protection system is essential.",2008,0, 4891,Hardware-in-the-loop test for fault diagnosis system of tilt rotor UAV,"The tilt rotor UAV developed in KARI has fault diagnosis functions to enhance system reliability, which were implemented using the operational flight program (OFP) of the flight control computer (FCC). Basically, they conduct built-in-test (BIT) between the FCC and onboard systems such as the communication system, navigation system, and actuators. If a system has no BIT function, fault diagnosis can be performed by checking if the corresponding physical data exist within the available range. In order to test each function, a test device for fault diagnosis was developed and used for hardware-in-the-loop simulation (HILS). The test apparatus consists of the operation control computer (OCC) and the verification computer (VC), which can be interfaced with the simulation computer via Ethernet. The OCC injects faults into the VC and displays the result of fault diagnosis. The VC has simulation models for each onboard system and provides output information to the FCC. The FCC performs fault diagnosis and returns the result of fault diagnosis. This paper describes the fault diagnosis functions of the tilt rotor UAV, the test environment used to evaluate them and the test results in the hardware-in-the-loop simulation environment.",2008,0, 4892,FAUST: fault-injection script-based tool,"The tool described in this paper aims at evaluating the effectiveness of software-implemented fault-tolerant techniques used in safety-critical systems. The target application is stressed with the injection of transient or permanent faults. The user can therefore observe the real behaviour of the application in the presence of a fault, and, if necessary, take the appropriate countermeasures.
Emphasis is put on its extreme ease of use and its portability to all UNIX platforms.",2003,0, 4893,A Review of Error Resilience Technique for Video Coding Using Concealment,"The traditional error resilience technique has been widely used in video coding. Many studies have shown that with the technique's help, the video coding bitstream can be protected and the reconstructed image is greatly improved. In this paper, we review error resilience for video coding and describe the improvements of this technology. These techniques are based on coding simultaneously for synchronization and error protection or detection. We apply the techniques to improve the performance of the multiplexing protocol and also to improve the robustness of the coded video. The techniques proposed for the video also have the advantage of simple transcoding with bit streams complying with H.263.",2009,0, 4894,A new approach to solve dynamic fault trees,"The traditional static fault trees with AND, OR and voting gates cannot capture the dynamic behavior of system failure mechanisms such as sequence-dependent events, spares and dynamic redundancy management, and priorities of failure events. Therefore, researchers introduced dynamic gates into fault trees to capture these sequence-dependent failure mechanisms. Dynamic fault trees are generally solved using automatic conversion to Markov models; however, this process generates a huge state space even for moderately sized problems. In this paper, the authors propose a new method to analyze dynamic fault trees. In most cases, the proposed method solves the fault trees without converting them to Markov models. They use the best methods that are applicable for static fault tree analysis in solving dynamic fault trees. The method is straightforward for modular fault trees; and for the general case, they use conditional probabilities to solve the problem. In this paper, the authors concentrate only on the exact methods. The proposed methodology solves the dynamic fault tree quickly and accurately.",2003,0, 4895,Research on reliability modeling of complex system based on dynamic fault tree,"The traditional static fault trees with AND, OR, and Voting gates cannot capture the dynamic behavior of complex computer system failure mechanisms such as sequence-dependent events, spares and dynamic redundancy management, and priorities of failure events. In this paper, the dynamic fault tree modeling method is applied to a complex computer system. This paper starts from the dynamic fault tree analysis method and its logic gates. A complex computer system simulation example is given. Each model is established in turn; the system-level dynamic fault tree is then obtained, modularized, and analyzed further. Finally, we conclude that this modeling method analyzes well the dynamic behaviors in a computer system combining hardware, software and human causes.",2009,0, 4896,Self-adjusting Component-Based Fault Management,"The Trust4All project aims to define an open, component-based framework for the middleware layer in high-volume embedded appliances that enables robust and reliable operation, upgrading and extension. To improve availability of each individual application in a Trust4All system, we propose a runtime configurable fault management mechanism (FMM) which detects deviations from given service specifications by intercepting interface calls. There are two novel contributions associated with FMM.
First, when repair is necessary, FMM picks a repair action that incurs the best tradeoff between the success rate and the cost of repair. Second, considering that it is rather difficult to obtain sufficient information about third party components during their early stage of usage, FMM is designed to be able to accumulate appropriate knowledge, e.g. the success rate of a specific repair action in the past and rules that can avoid a specific failure, and self-adjust its capability accordingly",2006,0, 4897,Adaptive runtime fault management for service instances in component-based software applications,"The Trust4All project aims to define an open, component-based framework for the middleware layer in high-volume embedded appliances that enables robust and reliable operation, upgrading and extension. To improve the availability of each individual application in a Trust4All system, a runtime configurable fault management mechanism (FMM) is proposed, which detects deviations from given service specifications by intercepting interface calls. When repair is necessary, FMM picks a repair action that incurs the best tradeoff between the success rate and the cost of repair. Considering that it is rather difficult to obtain sufficient information about third party components during their early stage of usage, FMM is designed to be able to accumulate knowledge and adapts its capability accordingly",2007,0, 4898,Cancer-radiotherapy equipment as a cause of soft errors in electronic equipment,"The undesirable production of secondary neutrons by cancer-radiotherapy linear accelerators (linacs) has been demonstrated to cause soft errors in nearby electronics through the 10B(n,α)7Li reaction. 10B is a component in the BPSG used as a dielectric material in some integrated-circuit (IC) fabrication processes.",2005,0, 4899,Error-Aware Design,"The universal underlying assumption made today is that systems on chip must maintain 100% correctness regardless of the application. This work advocates the concept that some applications - by construction - are inherently error tolerant and therefore do not require this strict bound of 100% correctness. In such cases, it is possible to exploit this tolerance by aggressively reducing the supply voltage, thereby reducing power consumption significantly. This approach is demonstrated on several case studies in imaging, video and wireless communication fields.",2007,0, 4900,Frictional Defects in Cold Rolling of Tin Plate Diagnostic,"This article describes the development of an expert system to determine possible causes of friction-related surface defects in steel processing and to establish courses of action to correct the disturbance. The outputs of the system were validated by comparing historical records with expert opinion for the same facts. The soundness of the results allows its application to decision making by advising when a deviation is detected, producing sound and timely diagnostics that yield economic benefits for the whole business and make the diagnosis independent of the expert's presence.",2007,0, 4901,A reconfiguration-based defect-tolerant design paradigm for nanotechnologies,"This article discusses a novel probabilistic design paradigm targeting reconfigurable architected nanofabrics and points to a promising foundation for comprehensively addressing, at the system level, the density, scalability, and reliability challenges of emerging nanotechnologies.
The approach exposes a new class of yield, delay, and cost trade-offs that must be jointly considered when designing computing systems in defect-prone nanotechnologies.",2005,0, 4902,How to recognize and remove qualitative errors in time-of-flight laser range measurements,"This article presents results concerning recognition and classification of qualitative-type range measurement errors in a time-of-flight principle based 2D laser scanner used on mobile robots for navigation. The main source of the qualitative uncertainty is the mixed measurements. This effect has been investigated experimentally and explained by analyzing the physical phenomena underlying the sensor operation. A local grid map has been used to remove the erroneous range measurements. A novel fuzzy-set-based algorithm has been employed to update evidence in the grid. The results of tests show that this algorithm is superior to the common Bayesian approach when qualitative errors in range measurements are present.",2008,0, 4903,Learning defect classifiers for visual inspection images by neuro-evolution using weakly labelled training data,"This article presents results from experiments where a detector for defects in visual inspection images was learned from scratch by EANT2, a method for evolutionary reinforcement learning. The detector is constructed as a neural network that takes as input statistical data on filter responses from a bank of image filters applied to an image region. Training is done on example images with weakly labelled defects. Experiments show good results of EANT2 in an application area where evolutionary methods are rare.",2008,0, 4904,Active-Fault-Alarm based Dynamic Temporary Protection Mechanism for MPLS-TP optical networks,"This article proposes an Active-Fault-Alarm (AFA) based Dynamic Temporary Protection Mechanism (DTPM) that is applied to the multi-protocol label switching transport profile (MPLS-TP) optical network. The new DTPM, by using the AFA function, can dynamically establish the so-called temporary protection path (TPP) for the work path in advance, before a possible fault occurs, and flexibly keep the TPP only during the low-quality or fault period of the work path. Therefore, the novel DTPM is greatly different from the 1:1 Traditional Linear Protection Mechanism (TLPM). Experimental results show that the DTPM has better performance in terms of flexibility and efficiency.",2010,0, 4905,A fault diagnosis approach for hybrid systems,"This communication presents a first attempt at the use of a dynamic hybrid simulator within a fault diagnosis system coupled to a classification methodology. The use of hybrid modelling has the advantage of clearly separating the continuous aspects from the discrete ones; this allows an analysis of causalities resulting from the state changes. Our approach begins with the simulation of the system in normal conditions until a dysfunction has been detected (by comparing process measurements and simulation results). Then, a backward simulation through the chain of causality is performed using the same Petri net, and the method is propagated until coherence between simulation and real behaviour is reached. Nevertheless, according to this procedure, all the possible fault scenarios should be explored. A classification method based on measurement data would be used to restrict the tree of fault possibilities to be explored.
In the framework of this study, the simulation aspects have been entrusted to the general object-oriented environment PrODHyS (process object dynamic hybrid simulator). Its major characteristic is its ability to simulate systems described with the object differential Petri nets (ODPN) formalism",2006,0, 4906,A versatile structure of S31-GGA-casc switched-current memory cell with complex suppression of memorizing errors,"This contribution describes the circuit structure of a high precision S31-GGA-casc switched-current (SI) memory cell providing complex suppression of memorizing errors. The low-frequency relative current error caused by charge injection and the input/output conductance ratio is typically 50 ppm. Moreover, the DC current offset of this cell type is lower than 0.2 μA within the signal range of 350 μA. At this point, excellent DC stability was achieved by a new solution of the current biasing circuitry. The cell provides an optimized input/output impedance ratio which can further minimize the errors arising in an SI system. All the results were proven by circuit simulation and chip measurement. In the field of circuit testing, a new concept of a high-precision current-sensing circuit was invented and is also described in this paper.",2003,0, 4907,The study of analog circuit fault diagnosis method based on circuit transform function,"This paper uses the Laplace transform to compute the analog circuit transform function, and computes the fault transform functions under different kinds of faults to generate a fault diagnosis table. The table is used to complete the fault diagnosis. Finally, the software Multisim is used to simulate an analog circuit and perform the fault diagnosis, and the usefulness of this method is verified.",2010,0, 4908,Compact Bandpass Ring Resonator Filter With Enhanced Wide-Band Rejection Characteristics Using Defected Ground Structures,"This letter addresses a modified bandpass ring resonator filter providing compact size, low insertion-loss, wide bandwidth, sharp rejection, and suppressed higher order modes. It is demonstrated that the use of internal folded stubs translates into an overall size reduction of more than 70% through the exploitation of the internal ring area. The introduction of defected ground structures enhances the rejection of higher order modes besides offering a further advantage in terms of size reduction. It is furthermore shown that electromagnetic simulations, the transmission line model, as well as measurement results are in very good agreement.",2008,0, 4909,X-ray micro radiography using beam hardening correction,"The system presented provides high quality micro radiographs, including very low contrast and low absorption objects. An important source of image distortion arises from beam hardening effects. When left uncorrected, distortion blurs low contrast image elements. Using a cooled digital X-ray semiconductor detector together with the proposed beam hardening correction procedure brings high dynamic range and very low noise of the acquired radiographs over the entire X-ray source spectrum. Both soft and hard parts of the object appear with high contrast and spatial resolution in the resulting radiographs.
The beam hardening correction procedure is fully automated using a set of calibrators and appropriate software modules",2005,0, 4910,Correction of extrinsic information for iterative decoding in a serially concatenated multiuser DS-CDMA system,"The system under study is a coded asynchronous DS-CDMA system with orthogonal modulation in time-varying Rayleigh fading multipath channels. Information bits are convolutionally encoded, block interleaved, and mapped to M-ary orthogonal Walsh codes, where the last step is essentially a process of block coding. This paper aims at tackling the problem of joint iterative decoding of this serially concatenated inner block code and outer convolutional code and estimating frequency-selective fading channels in multiuser environments. The (logarithmic) maximum a posteriori probability, or Log-MAP, criterion is used to derive the iterative decoding schemes. In our system, the soft output from the inner block decoder is used as a priori information for the outer decoder. The soft output from the outer convolutional decoder is used for two purposes. First, it may be fed back to the inner decoder as extrinsic information for the systematic bits of the Walsh codeword. Secondly, it is utilized for channel estimation and multiuser detection (MUD). We also show that the inner decoding can be accomplished without extrinsic information, and in some cases, e.g., when the system is heavily loaded, yields better performance than the decoding with unprocessed extrinsic information. This implies the need for correcting the extrinsic information obtained from the outer decoder. Different schemes are examined and compared numerically, and it is shown that iterative decoding with properly corrected extrinsic information or with non-extrinsic/extrinsic adaptation enables the system to operate reliably in the presence of severe multiuser interference, especially when the inner decoding is assisted by decision directed channel estimation and interference cancellation techniques.",2006,0, 4911,Vertical Velocity Measurement - Processing of Sensor Data Using Altitude Corrections,"The system was designed for sensor measurement and data transfer. A new architecture of a multisensor system for temperature measurement using wireless communications is used in the paper. Sensors with digital or analog outputs are used. The control software of the system has been created, and different software was designed for the wireless units. The integrated RF chip nRF9E5 was used for the wireless units. The chip ensures wireless communication between the control unit and the sensors as well as the wireless switch unit. The control unit controls system operation, i.e. communication and sensor data processing, as well as the operation of the actuator unit. Communication is ensured in a range of 300 m in free space. The system was designed to operate with different types of sensors, and the number of sensors can vary. The system can use a PC, PDA or mobile phone to communicate with the control unit.",2008,0, 4912,A Perceptron Neural Network for Asymmetric Comparison-Based System-Level Fault Diagnosis,"The system-level fault diagnosis problem aims at answering the very simple question ""Who's faulty and who's fault-free?"", in systems known to be diagnosable. In this paper, we answer such a question using neural networks. Our objective is to identify faulty nodes based on an input syndrome that has been generated using the asymmetric comparison model.
In such a model, the system, which is composed of interconnected independent heterogeneous nodes, is modeled using an undirected comparison graph. Tasks are assigned to pairs of nodes and the results of executing these tasks are compared. Based on the agreements and disagreements among the nodes' outputs, the diagnosis algorithm must identify faulty nodes. In general, it is assumed that faults are permanent, and that at most t nodes can fail simultaneously. The new solution we introduce in this paper uses a perceptron neural network to solve the fault identification problem. The neural network is first trained using various input syndromes with known fault sets. Extensive simulations have been conducted next using randomly generated diagnosable systems. Surprisingly, the neural network was able to identify all the millions of faulty situations we have tested, including those that are unlikely to occur. Simulation results indicate that the perceptron-based diagnosis algorithm is a viable addition to present diagnosis problems.",2009,0, 4913,Fault tolerant control allocation for a thruster-controlled floating platform using parametric programming,"The task in control allocation is to allocate a specified generalized force to a redundant set of control effectors where the associated actuator control inputs are constrained, and other physical and operational constraints and objectives should be met. In this paper we consider a convex quadratic approximation to a control allocation problem for a thruster-controlled floating platform with eight rotatable azimuth thrusters where the high level controller is assumed to specify three generalized forces: surge, sway and yaw. The optimization problem is solved explicitly by viewing the generalized forces as a vector of parameters and utilizing parametric programming techniques, leading to a continuous piecewise affine (PWA) function implementing the optimal control allocation. Experimental results from a test basin with a 1:100 scale model of a platform are presented. It is shown how thruster and machinery failure scenarios can be handled by automatic reconfiguration of the control allocation, exploiting symmetry of the thruster configuration.",2009,0, 4914,Research of Remote Fault Diagnosis System Based on Internet,"The technology of intelligent multi-agents is applied to design a remote fault diagnosis system based on the Internet. The system comprises the kernel, the remote fault diagnosis platform (RFDP), and the members: manufacturers and enterprise clients. It is a distributed, remote monitoring and on-line diagnosis system. The system has overcome the functional limitations of the two kinds of traditional client/server architecture: the equipment client-end remote diagnostic mode and the manufacturer-end remote diagnostic mode. The platform has a rapid diagnosis response and makes it easy to realize timely information transmission between the platform and the diagnosis members. For this reason, the RFDP, which holds a great ability for enabling on-line monitoring, general fault diagnosis, repair service and rapid knowledge base updating, can provide good service for remote distributed multi-equipment, multi-manufacturer diagnosis. As a common diagnosis platform, the system can be applied to various remote mechanical and electronic equipment diagnosis areas such as CNC and press machine diagnosis.",2007,0, 4915,Fault tolerance for distributed process control system,"The Yokogawa Electric Corporation has been developing distributed process control systems (DCS) since 1975.
As the core of plant control, the DCS faces increasingly severe reliability demands. This paper describes the fault tolerant technology of the Duplexed Field Control Station in the DCS, and the technology for online exchange and concurrent operation of current and new CPU cards, supporting long-term use.",2002,0, 4916,Fault Injection Analysis of Bidirectional Signals,"There are large subsets of digital circuits that are designed with bidirectional ports, like microprocessors, peripherals and certain communication circuits. During the design phase, the reliability of these circuits can be tested by means of fault injection. Traditional fault injection techniques have to arrange the design in order to perform the testing of the bidirectional ports, because these tests have to take into account not only errors in the values, but also possible damage to the direction of the data. The present paper presents a solution for bidirectional signals adopted in an existing fault injection system, FT-UNSHADES, and the new incoming platform, FT-UNSHADES2, together with the solutions to some practical problems encountered.",2009,0, 4917,A algorithm of concave-convex defect recognition based on finite element mesh model structure,"There are many concave-convex defects in the finite element mesh model structure of composite materials after the layer optimal design. Aimed at this kind of defect, this paper proposes an algorithm that defines a series of specific recognition rules according to an analysis of defect attributes, matches the finite element models with concave-convex defects recursively, and finally recognizes all defects; the recognition results can be used as a basis for further defect repair. Application examples show that the algorithm can recognize concave-convex defects in finite element meshes accurately.",2010,0, 4918,Usage of Weibull and other models for software faults prediction in AXE,"There are several families of software quality prediction techniques in development projects. All of them can be classified into several subfamilies. Each of these techniques has its own distinctive features and may not give a correct prediction of quality for a scenario different from the one for which the technique was designed. All these techniques for software quality prediction are dispersed. One of them is the statistical and probabilistic technique. The paper deals with software quality prediction techniques in development projects. Four different models based on a statistical and probabilistic approach are presented and evaluated for the prediction of software faults in very large development projects.",2008,0, 4919,Error analysis and a new arithmetic study on computing horizontal temperature gradient in marine GIS,"There are three problems in using current GIS software to calculate the horizontal temperature gradient of a fishing grid. This work analyzes the errors caused by these problems, presents a new GIS algorithm for calculating the horizontal temperature gradient of a fishing grid, and also presents a multisection & single point change surface interpolation method for interpolating grid point temperature based on SST isolines. The results of our algorithms are better than those of ArcGIS.",2004,0, 4920,Optimizing Internet flows over IEEE 802.11b wireless local area networks: a performance-enhancing proxy based on forward error correction,"The success of the IP and its associated technologies has led to new challenges as we try to use it more widely in everyday communications.
In particular, the drive toward wireless and highly heterogeneous infrastructures supporting IP services transparently and independently of the underlying physical layer is a challenge. In this context, this article focuses on introducing an implementation of a generic performance-enhancing proxy, called the wireless adaptation layer, and particularly its forward error control enhancement module. The error control module is a potentially important tool for achieving better UDP and TCP performance over the inherently unreliable wireless channels and for providing some adaptation to them. In order to assess the benefits and drawbacks of the selected design, we have also conducted performance measurements over IEEE 802.11b WLANs",2001,0, 4921,"Development, installation, field test and operational experience of a superconducting Fault Current Limiter in a distributed generation Test Facility","The Superconducting Fault Current Limiter (SFCL) is an innovative component able to instantaneously limit large short-circuit currents to an acceptable level. In this paper we report on the installation, commissioning and field testing of an SFCL prototype in a distributed generation Test Facility. Details of the SFCL installation and of the developed monitoring and acquisition system are described. Simulation results for three-phase faults are compared with successful short-circuit field tests, showing the good current-limiting capability and fast action of the SFCL prototype.",2009,0, 4922,Performance analysis of HTS fault current limiter combined with a ZnO varistor,"Superconducting technology is nowadays an innovation in the field of electrical power supply. Using HTS in a fault current limiter (FCL) represents a new category of electrical equipment and a novel configuration of the electrical network. In fact, high temperature superconductors (HTS) make a relatively sharp transition to a highly resistive state when the critical current density is exceeded, and this effect has suggested their use in resistive fault current limiters. The FCL is an important element for reducing system impedance, which permits an increase in power transmission. Furthermore, it allows additional meshing of a power system, which increases power availability. The most significant features required of SCFCLs by power system operating conditions are a limiting impedance, a trigger current level and a recovery time. In this paper, a model of an HTS FCL using a ZnO varistor is proposed. The effectiveness of this model is investigated through simulation results in the MATLAB/Simulink software. In addition, the limiting feature of the HTS FCL and the protective role of the ZnO varistor are illustrated",2005,0, 4923,Custom error control schemes for energy efficient bluetooth sensor networks,"This paper analyzes the effect of custom error control schemes on energy efficiency in Bluetooth sensor networks. The energy efficiency metric combines in just one parameter the energy and reliability constraints of wireless sensor networks. New packet types are introduced using some error control strategies in the AUX1 packet, such as Hamming and BCH codes, with and without CRC for error detection. Two adaptive techniques are proposed that change the error control strategy based on the number of hops traversed by a packet through the network.
The performance results are obtained through simulations in a channel with Rayleigh fading for networks with different numbers of hops, showing that error control can improve the energy efficiency of a Bluetooth-based sensor network.",2006,0, 4924,Effects of Systematic and Stochastic Errors on Estimated Failure Probability of Anisotropic Conductive Film,"This paper analyzes the effects of systematic and stochastic errors on the failure probability of anisotropic-conductive-film (ACF) assemblies estimated using the V-shaped-curve method. It is shown that the effect of systematic errors varies as a function of the volume fraction and the volume-fraction bias. The effects of stochastic errors are investigated by using an in-house software program to generate random conductive-particle distributions in the pad and inter-pad regions of the ACF package for the given volume fractions and package geometries. The dependences of the coefficient of variation (CV), essentially the degree of uniformity of the particle distribution, and the failure probability on the volume fraction are examined, and the corresponding results are used to derive the correlation between the stochastic error and the CV for a given volume fraction. In general, the current results indicate that the effects of systematic errors on the accuracy of the estimated failure probability can be controlled by improving the accuracy with which the resin and conductive-particle components of the ACF-compound material are weighed during the ACF fabrication process. However, the effects of stochastic errors cannot be controlled and vary as a function of the volume fraction and the degree of nonuniformity of the particle distribution. Nevertheless, the results indicate that the effects of both systematic and stochastic errors can be suppressed by specifying the volume fraction as the value corresponding to the tip of the V-shaped curve when designing the ACF compound.",2007,0, 4925,Error sources in in-plane silicon tuning-fork MEMS gyroscopes,"This paper analyzes the error sources defining tactical-grade performance in silicon, in-plane tuning-fork gyroscopes such as the Honeywell-Draper units being delivered for military applications. These analyses have not yet appeared in the literature. These units incorporate crystalline silicon anodically bonded to a glass substrate. After general descriptions of the tuning-fork gyroscope, the ordering of modal frequencies, the fundamental dynamics, and the force and fluid coupling which dictate the need for vacuum packaging, mechanical quadrature, and electrical coupling are analyzed. Alternative strategies for handling these engineering issues are discussed by introducing the Systron Donner/BEI quartz rate sensor, a successful commercial product, and the Analog Devices ADXRS, which is designed for automotive applications.",2006,0, 4926,Symbol Error Rate of Wireless Multiuser Relay Networks in Nakagami-m Fading Channels,"This paper analyzes the performance of wireless multiuser relay networks (MRN) in unbalanced Nakagami-m fading channels. For such networks, we consider a single channel state information (CSI)-based amplify-and-forward (AaF) relay. We derive a new exact expression for the symbol error rate (SER), which is in closed form and applies to a wide variety of modulations. Subsequently we present a simplified asymptotic expression for the SER in the high signal-to-noise ratio (SNR) regime to identify key performance metrics such as the diversity order and array gain.
Our asymptotic result explicitly reveals the direct relationship between the diversity order and both the number of destinations and the per-hop fading parameters. Moreover, we highlight the effect of the number of destinations on the optimal relay location aimed at minimizing the SER. The validity of our analysis is substantiated by numerical results.",2010,0, 4927,High throughput Byzantine fault tolerance,"This paper argues for a simple change to Byzantine fault tolerant (BFT) state machine replication libraries. Traditional BFT state machine replication techniques provide high availability and security but fail to provide high throughput. This limitation stems from the fundamental assumption of generalized state machine replication techniques that all replicas execute requests sequentially in the same total order to ensure consistency across replicas. We propose a high throughput Byzantine fault tolerant architecture that uses application-specific information to identify and concurrently execute independent requests. Our architecture thus provides a general way to exploit application parallelism in order to provide high throughput without compromising correctness. Although this approach is extremely simple, it yields dramatic practical benefits. When sufficient application concurrency and hardware resources exist, CBASE, our system prototype, provides orders of magnitude improvements in throughput over BASE, a traditional BFT architecture. CBASE-FS, a Byzantine fault tolerant file system that uses CBASE, achieves twice the throughput of BASE-FS for the IOZone micro-benchmarks even in a configuration with modest available hardware parallelism.",2004,0, 4928,Fault-Tolerant Algorithm for Distributed Primary Detection in Cognitive Radio Networks,"This paper attempts to identify the reliability of detection of licensed primary transmission based on cooperative sensing in cognitive radio networks. With a parallel fusion network model, the correlation issue of the received signals between the nodes in the worst case is derived. Leveraging the property of false sensing data due to malfunctioning or malicious software, an optimizing strategy, namely the fault-tolerant algorithm for distributed detection (FTDD), is proposed, and a quantitative analysis of false alarm reliability and detection probability under the scheme is presented. In particular, the tradeoff between licensed transmissions and user cooperation among nodes is discussed. Simulation experiments are also used to evaluate the fusion performance under practical settings. The model and analytic results provide useful tools for reliability analysis for other wireless decentralization-based applications (e.g., those involving robust spectrum sensing).",2009,0, 4929,A comparison of sliding mode and unknown input observers for fault reconstruction,"This paper compares the use of a recently proposed sliding mode fault detection and isolation scheme with a linear scheme based on an unknown input observer. Both methods seek to reconstruct actuator and sensor fault signals for a class of uncertain systems. Although the explicit details of the two approaches appear quite different, an underlying link between them is exposed and investigated.
The methods are compared on data collected from a laboratory-scale crane rig on which (known) faults were deliberately introduced.",2004,0, 4930,A comparison of methods for fault prediction in the broadband networks,"This paper compares two short-term prediction models of the expected number of faults in broadband telecommunication networks (BB networks). These faults occur due to various causes. In order to improve functionality, various routine or special maintenance actions (upgrades, replacement of equipment, addition of new functions, ...) are often carried out in BB networks on various network elements, and these may cause unintended and unexpected degradation of services. On the other side, the access and customer parts of the network are susceptible to errors induced by various causes, which likewise increase the number of faults in the system. In some cases this degradation is not recognized immediately from the systemic alarms, but appears later in the form of random disturbances reported by the users of these services. This study examines the prediction of faults by using two different models, the Hidden Markov Model (HMM) and the Kalman filter. The model is built on the basis of one-year monitoring of broadband faults, analyzed by service and by the exact time of appearance. Assessment of the accuracy of both models is made by comparing the results obtained by modeling with the actual data.",2010,0, 4931,Automated correction of spin-history related motion artefacts in fMRI: Simulated and phantom data,"This paper concerns the problem of correcting spin-history artefacts in fMRI data. We focus on the influence of through-plane motion on the history of magnetization. A change in object position will disrupt the tissue's steady-state magnetization. The disruption will propagate to the next few acquired volumes until a new steady state is reached. In this paper we present a simulation of spin-history effects, experimental data, and an automatic two-step algorithm for detecting and correcting spin-history artefacts. The algorithm determines the steady-state distribution of all voxels in a given slice and indicates which voxels need a spin-history correction. The spin-history correction is meant to be applied before standard realignment procedures. To obtain experimental data a special phantom and an MRI-compatible motion system were designed. The effect of motion on spin-history is presented for data obtained using this phantom inside a 1.5-T MRI scanner. We show that the presented algorithm is capable of detecting the occurrence of a displacement, and it determines which voxels need a spin-history correction. The results of the phantom study show good agreement with the simulations.",2005,0, 4932,Adaptive Cancellation of a Sinusoidal Disturbance with Rapidly Varying Frequency Using an Augmented Error Algorithm,"This paper considers a compensator for a sinusoidal disturbance with known but rapidly varying frequency. The compensator is obtained as an adaptive feedforward cancellation algorithm using an augmented error. The system is shown to be Lyapunov stable and equivalent to a linear time-varying controller that includes an internal model of the disturbance. The stability and robustness properties of the augmented error algorithm are validated by simulation results",2005,0, 4933,State estimation of discrete-time Markov jump linear systems based on linear minimum mean-square error estimate,"This paper considers the state estimation problem for discrete-time Markov jump linear systems.
For this, two algorithms are presented. The first algorithm is an optimal algorithm for state estimation in the sense of the linear minimum mean-square error estimate, which requires an ever-increasing computation and storage load with the length of the noise observation sequence. The second algorithm is a suboptimal algorithm proposed to reduce the computation and storage load of the optimal algorithm. A numerical example is presented to evaluate the performance of the proposed suboptimal algorithm.",2010,0, 4934,Importance sampling for error event analysis of HMM frequency line trackers,"This paper considers the problem of designing efficient and systematic importance sampling (IS) schemes for the performance study of hidden Markov model (HMM) based trackers. Importance sampling is a powerful Monte Carlo (MC) variance reduction technique, which can require orders of magnitude fewer simulation trials than ordinary MC to obtain the same specified precision. We present an IS technique applicable to error event analysis of HMM based trackers. Specifically, we use conditional IS to extend our work in another of our papers to estimate average error event probabilities. In addition, we derive upper bounds on these error probabilities, which are then used to verify the simulations. The power and accuracy of the proposed method are illustrated by application to an HMM frequency tracker.",2002,0, 4935,Supporting fault-tolerance in streaming grid applications,"This paper considers the problem of supporting and efficiently implementing fault-tolerance for tightly-coupled and pipelined applications, especially streaming applications, in a grid environment. We provide an alternative to basic checkpointing and use the notion of a light-weight summary structure (LSS) to enable efficient failure-recovery. The idea behind LSS is that at certain points during the execution of a processing stage, the state of the program can be summarized by a small amount of memory. This allows us to store copies of LSS for enabling failure-recovery, which results in low-overhead fault-tolerance. Our work can be viewed as an optimization and adaptation of the idea of application-level checkpointing to a different execution environment, and for a different class of applications. Our implementation and evaluation of LSS-based failure-recovery has been in the context of the GATES (grid-based adaptive execution on streams) middleware. An observation we use for providing very low overhead support for fault-tolerance is that algorithms analyzing data streams are only allowed to take a single pass over data, which means they only perform approximate processing. Therefore, we believe that in supporting fault-tolerant execution for these applications, it is acceptable to not analyze a small number of packets of data during failure-recovery. We show how we perform failure-recovery and also demonstrate how we could use additional buffers to limit data loss during the recovery procedure. We also present an efficient algorithm for allocating a new computation resource for failure-recovery at runtime.
We have extensively evaluated our implementation using three stream data processing applications, and shown that the use of LSS allows effective and low-overhead failure-recovery.",2008,0, 4936,Evaluating the effectiveness of a software fault-tolerance technique on RISC- and CISC-based architectures,"This paper deals with a method able to provide a microprocessor-based system with safety capabilities by modifying only the source code of the executed application. The method exploits a set of transformations which can be applied automatically, thus greatly reducing the cost of designing a safe system and increasing the confidence in its correctness. Fault injection experiments have been performed on a sample application using two different systems based on CISC and RISC processors. Results demonstrate that the method's effectiveness is rather independent of the adopted platform",2000,0, 4937,Use of accelerometers for material inner defect detection,"This paper deals with a possibility of non-destructive diagnostics of solid objects by software analysis of the vibration spectrum measured by accelerometers. Using the MATLAB platform, the information from the accelerometer can be processed and evaluated. The accelerometer is placed on the measured object. The analog signal needs to be digitized by a special I/O device to be processed offline with the FFT (Fast Fourier Transform). The power spectrum is then examined by the developed evaluation procedures.",2010,0, 4938,A corrective control for angle and voltage stability enhancement on the transient time-scale,"This paper deals with the development of a nonlinear programming methodology for evaluating load shedding (LS) as a corrective action to improve the dynamic security of power systems when angle or voltage instability is detected. A centralized corrective control is developed, on the basis of online DSA computations, in order to ensure corrective actions when a threatening contingency actually occurs. The algorithm is implemented and tested on the actual Italian grid managed by ENEL S.p.A",2000,0, 4939,A Novel Monitoring of Load Level and Broken Bar Fault Severity Applied to Squirrel-Cage Induction Motors Using a Genetic Algorithm,"This paper deals with the diagnosis of the signature of broken rotor bars, whether or not an induction machine is fed by an unbalanced line voltage. These signatures are given by the complex spectrum modulus of the line current. In order to make the diagnosis, a genetic algorithm is used to extract the amplitude of all faulty lines. Moreover, a fuzzy logic approach allows us to determine the operating load level and to inform the operator of the rotor fault severity. Several experimental results prove the performance of this method under various load levels and various fault severities. Notwithstanding, this approach requires a steady-state operating condition. The conclusion resulting from this paper is highlighted by experimental results which prove the efficiency of the suggested approach.",2009,0, 4940,ADC-Based Frequency-Error Measurement in Single-Carrier Digital Modulations,"This paper deals with the functional-block architecture implementing the new method to measure the carrier frequency error of single-carrier digital modulations. The method is able to operate on the modulations M-ary amplitude shift keying, M-ary phase shift keying, and M-ary quadrature amplitude modulation used in telecommunication systems.
The functional-block architecture obtained from the software-radio (SR) architecture is based on the cascade of the analog-to-digital converter (ADC), the digital down converter, and the base-band processing. Looking at the implementation in measurement instrumentation, the performance of this cascaded architecture is investigated by considering three different ADC architectures. They are based on 1) a pipeline; 2) a single quantizer loop Sigma-Delta (ΣΔ) modulator; and 3) a multistage noise shaper (MASH) ΣΔ modulator. Numerical tests confirm the important role of the ADC in this architecture and highlight the interesting performance of the MASH-based ΣΔ modulator for use in advanced measurement instruments",2006,0, 4941,Maximum time interval error assessment based on the sequential reducing data volume,"This paper deals with the problem of efficient assessment of maximum time interval error (MTIE) for a series of observation intervals. The method expands the concept of the extreme fix method, which is also briefly described. Then the idea of the extreme fix with sequential data reduction method of MTIE estimate calculation is thoroughly developed. The mechanism of sequentially reducing the data volume is explained. The time efficiency of the proposed method is evaluated through comparison of the time taken to compute the MTIE estimate by means of this method, the method resulting directly from the MTIE estimator formula (the direct method), and the extreme fix method of MTIE estimate calculation. The distinctive performance of the proposed method is confirmed by the MTIE calculation experiment.",2002,0, 4942,A Strategy for fault tolerant control in Networked Control Systems in the presence of Medium Access Constraints,"This paper deals with the problem of fault-tolerant control of a Networked Control System (NCS) for the case in which the sensors, actuators and controller are interconnected via various Medium Access Control protocols, which define the access scheduling and collision arbitration policies in the network, employing the so-called periodic communication sequence. A new procedure for controlling a system over a network using the concept of an NCS-Information-Packet is described, which comprises an augmented vector consisting of control moves and fault flags. The size of this packet is used to define a Completely Fault Tolerant NCS. The fault-tolerant behaviour and control performance of this scheme are illustrated through the use of a process model and controller. The plant is controlled over a network using Model-based Predictive Control and implemented via MATLAB and LABVIEW software.",2007,0, 4943,Fault Detection on Critical Instrumentation Loops of Gas Turbines With Reflectometry,"This paper deals with the testing of instrumentation loops (ILs) in gas turbines during scheduled maintenance. To both guarantee safe operation and save the costs of periodically changing cables, an innovative measurement device has been developed that monitors the status of the IL and indicates whether a replacement is necessary. The goal of the research described in this paper is twofold: First, the cable network is checked for faults with time-domain reflectometry by comparing the recorded reflectogram with a reference reflectogram. Second, the sensor state is investigated by calculating its impedance through one-port scattering measurements. For this, a de-embedding technique is described to remove the cabling influences.
Changes in the reflectogram or the impedance point to aging or a fault and indicate the location of the defect, so that specific actions can be taken to replace the malfunctioning cable or sensor.",2009,0, 4944,Quantifying the value proposition of advanced fault management systems in mpls core networks,"This paper delineates and quantifies the benefits of advanced fault management systems (FMSs) in multiservice, converged Multiprotocol Label Switching (MPLS) core networks. MPLS is a relatively new technology, where fault management is still on a steep learning curve for both infrastructure and services. The FMS value proposition is quantified for a particular system, Lucent Technologies' Navis Network Fault Manager (NFM), with a comprehensive network-level cost/savings model. This new, bottom-up approach integrates a detailed single facility failure model (analyzing impacts of network failure alarms and estimating the reduction in time-to-repair and time-to-restore when advanced alarm filtering and correlation rules are applied) with an overall network reliability model and then calculates the total cost of ownership and operational savings. Results of this analysis show that an advanced FMS is advantageous to any network, but, in particular, to converged MPLS networks where the multiple protocols required for the multiple applications create complexities in fault management.",2005,0, 4945,"Performance of Multicode DS/CDMA With Noncoherent M-ary Orthogonal Modulation in the Presence of Timing Errors","This paper derives an accurate approximation to the bit error rate (BER) of multicode direct-sequence code-division multiple access (DS/CDMA) with noncoherent M-ary modulation in wideband fading channels when timing errors are made at the receiver employing equal-gain combining (EGC). This reflects the practical scenario where the path delays are estimated imperfectly, leading to synchronization errors between the correlation receivers and the received signals. The analysis can be applied to any type of fading distribution for both independent and correlated diversity branches. It is shown that the derived analytical expressions are in close agreement with the Monte Carlo system simulations, particularly in the case of small timing errors.",2008,0, 4946,Achieving Zero-Defects for Automotive Applications,"This paper describes a comprehensive flow for achieving Zero Defect semiconductor chips in a cost-effective manner. The focus is on Designed-In Quality achieved through harmonious deployment of defect prevention and defect detection methods during the chip design phase (i.e., through DFM and DFT). The paper also identifies opportunities for improving the Zero-Defect design flow.",2008,0, 4947,Configurable PVC checking for fault identification,"This paper describes a configurable passive voltage contrast (PVC) checker for fault identification. The checker uses an on-line trace technique to query the connectivity of design layout features downward, and color-codes the checked features according to pre-defined terminations. The checker was used for fault identification on a scan-based design.
The emulated PVC behavior expedites comparison to scanning electron microscope (SEM) images, shortening the time for fault localization.",2009,0, 4948,Demonstration of fault tolerance for CORBA applications,"This paper describes a fault tolerant CORBA infrastructure for a simple air defense application in which the interception management and guidance program is replicated over three computers to protect it against faults. The fault tolerant CORBA infrastructure maintains continuous service of the interception management and guidance program, through faults and recovery, and maintains strong replica consistency, despite asynchrony and faults. With the fault tolerant CORBA infrastructure, fault tolerance is achieved with minimal modification of the application program, no special programming skills, and shorter program development timescales. The fault tolerant CORBA infrastructure being demonstrated was developed originally under DARPA sponsorship at the University of California, Santa Barbara, and is now available commercially from Eternal Systems Incorporated.",2003,0, 4949,A fault-tolerant control architecture for induction motor drives in automotive applications,"This paper describes a fault-tolerant control system for a high-performance induction motor drive that propels an electrical vehicle (EV) or hybrid electric vehicle (HEV). In the proposed control scheme, the developed system takes into account the smoothness of the controller transition in the event of sensor failure. Moreover, due to the EV or HEV requirements for sensorless operation, a practical sensorless control scheme is developed and used within the proposed fault-tolerant control system. This requires the presence of an adaptive flux observer. The speed estimator is based on the approximation of the magnetic characteristic slope of the induction motor to the mutual inductance value. Simulation results, in terms of speed and torque responses, show the effectiveness of the proposed approach.",2004,0, 4950,A Physics-Based Engineering Methodology for Calculating Soft Error Rates of Bulk CMOS and SiGe Heterojunction Bipolar Transistor Integrated Circuits,"This paper describes a new methodology for characterizing the electrical behavior and soft error rate (SER) of CMOS and SiGe HBT integrated circuits that are struck by ions. A typical engineering design problem is to calculate the SER of a critical path that commonly includes several circuits such as an input buffer, several logic gates, logic storage, clock tree circuitry, and an output buffer. Using multiple 3D TCAD simulations to solve this problem is too costly and time-consuming for general engineering use. The new and simple methodology handles the problem with ease using simple SPICE simulations. The methodology accurately predicts the measured threshold linear energy transfer (LET) of a bulk CMOS SRAM. It solves for circuit currents and voltage spikes that are close to those predicted by expensive 3D TCAD simulations. It accurately predicts the measured event cross-section vs. LET curve of an experimental SiGe HBT flip-flop. The experimental cross-section vs. frequency behavior and other subtle effects are also accurately predicted.",2010,0, 4951,Sensorless Control and Fault Diagnosis of Electromechanical Contactors,"This paper describes a novel algorithm for closed-loop sensorless control of electromagnetic devices. It integrates a novel fault diagnosis algorithm for the whole electromagnetic circuit.
Applied to a contactor with a dc core, it eliminates armature and contact bounce, whether the contactor is ac or dc powered, at either 50 or 60 Hz. The position and velocity of the moving armature and contacts are calculated by means of the online determination of the inductance, using only the measured current and voltage values of the contactor coil as control inputs. A fuzzy controller takes the position and velocity of the armature as input and provides the current set point as output, which controls the velocity of closure of the contacts. The algorithm has been implemented in a low-cost electronic module.",2008,0, 4952,Multiresolution sensor fusion approach to PCB fault detection and isolation,"This paper describes a novel approach to printed circuit board (PCB) testing that fuses the products of individual, non-traditional sensors to draw conclusions regarding overall PCB health and performance. This approach supplements existing parametric test capabilities with the inclusion of sensors for electromagnetic emissions, laser Doppler vibrometry, off-gassing and material parameters, and X-ray and Terahertz spectral images of the PCB. This approach lends itself to the detection and prediction of entire classes of anomalies, degraded performance, and failures that are not detectable using current automatic test equipment (ATE) or other test devices performing end-to-end diagnostic testing of individual signal parameters. This greater performance comes with a smaller price tag in terms of non-recurring development and recurring maintenance costs over currently existing test program sets. The complexities of interfacing diverse and unique sensor technologies with the PCB are discussed from both the hardware and software perspective. Issues pertaining to creating a whole-PCB interface, not just at the card-edge connectors, are addressed. In addition, we discuss methods of integrating and interpreting the unique software inputs obtained from the various sensors to determine the existence of anomalies that may be indicative of existing or pending failures within the PCB. Indications of how these new sensor technologies may be incorporated into future test systems, as well as retrofit into existing test systems, are also provided.",2008,0, 4953,A novel fault-detection technique of high-impedance arcing faults in transmission lines using the wavelet transform,"This paper describes a novel fault-detection technique for high-impedance faults (HIFs) in high-voltage transmission lines using the wavelet transform. The wavelet transform (WT) has been successfully applied in many fields. The technique is based on using the absolute sum value of coefficients in multiresolution signal decomposition (MSD) based on the discrete wavelet transform (DWT). A fault indicator and fault criteria are then used to detect the HIF in the transmission line. In order to discriminate between HIFs and nonfault transient phenomena, such as capacitor and line switching and arc furnace loads, the concept of duration time (i.e., the transient time period) is presented. On the basis of extensive investigations, optimal mother wavelets for the detection of HIFs are chosen. It is shown that the technique developed is robust to fault type, fault inception angle, fault resistance, and fault location. The paper demonstrates a new concept and methodology in HIF detection in transmission lines.
The performance of the proposed technique is tested under a variety of fault conditions on a typical 154-kV Korean transmission-line system.",2002,0, 4954,A Robust High Speed Serial PHY Architecture With Feed-Forward Correction Clock and Data Recovery,"This paper describes a robust architecture for high speed serial links for embedded SoC applications, implemented to satisfy the 1.5 Gb/s and 3 Gb/s Serial-ATA PHY standards. To meet the primary design requirements of a sub-system that is very tolerant of device variability and is easy to port to smaller nanometre CMOS technologies, a minimum of precision analog functions are used. All digital functions are implemented in rail-to-rail CMOS with maximum use of synthesized library cells. A single fixed-frequency low-jitter PLL serves the transmit and receive paths in both modes so that tracking and lock time issues are eliminated. A new oversampling CDR with a simple feed-forward error correction scheme is proposed which relaxes the requirements for the analog front-end as well as for the received signal quality. Measurements show that the error corrector can almost double the tolerance to incoming jitter and to DC offsets in the analog front-end. The design occupies less than 0.4 mm² in 90 nm CMOS and consumes 75 mW.",2009,0, 4955,Modeling transformer internal faults using Matlab,"This paper describes a technique for modeling transformer internal faults using MATLAB. In this technique a model for simulating a two-winding, single-phase transformer is modified to be suitable for simulating an internal fault in one of the windings. The transformer is represented by three windings in this case; one winding is the healthy winding, while the other two represent the faulty windings. Three differential equations representing these windings are simulated and solved using MATLAB/SIMULINK. Simulation results include the inrush magnetizing current and the internal fault current. A fast Fourier transform (FFT) is used to analyze the simulated currents, and it shows that the second harmonic component of the internal fault current is not predominant, which agrees with transformer theory.",2002,0, 4956,Reliable matching of 300 mm defect inspection tools @ sub 60 nm defect size,"This paper describes advanced methodologies for defect inspection studies and tool matching at sub-60 nm defect sizes. The methodologies were affirmed by experimental results achieved on different defect inspection systems designed for non-patterned wafers. The defect inspection has been described successfully by the binomial distribution. The reliability of matching of defect inspection systems was improved by using defect maps. It is anticipated that the methodologies will be applicable for defect inspection of both patterned and non-patterned wafers.",2005,0, 4957,Automatic toolset for fault tolerant design: results demonstration on a running industrial application,"This paper describes an automatic toolset to validate fault tolerant designs. The toolset has been produced as a demonstrator for the AMATISTA European project (IST project 11762), to test the effectiveness of the new FT (Fault Tolerant) tools developed as the main target of the project.
This paper describes the set-up of the demonstrator and the significant results obtained.",2003,0, 4958,A VHDL error simulator for functional test generation,"This paper describes an efficient error simulator able to analyze functional VHDL descriptions. The proposed simulation environment can be based on commercial VHDL simulators. All components of the simulation environment are automatically built starting from the VHDL specification of the description under test. The effectiveness of the simulator has been measured by using a random functional test generator. Functional test patterns produce, on some benchmarks, a higher gate-level fault coverage than the fault coverage achieved by a very efficient gate-level test pattern generator. Moreover, functional test generation requires a fraction of the time necessary to generate tests at the gate level. This is due to the possibility of effectively exploring the test pattern space, since error simulation is directly performed at the VHDL level",2000,0, 4959,Error prediction for multi-classification,"This paper describes an error prediction mechanism for multiclassification systems. First, a multiclassification system is constructed by combining a suite of two-class classifiers. During training, each sub-classifier does not utilize all the training data, and the remaining data are used for testing purposes. Thus, the classification system can predict its own performance after training. We have tested this mechanism on several well-known benchmark datasets. Experimental results demonstrate its effectiveness.",2005,0, 4960,Alternative methods for attenuation correction for PET images in MR-PET scanners,"This paper describes and compares procedures to obtain attenuation maps used for the absorption correction (AC) of PET brain scans if a transmission scan is not available, as in the case of future MR-PET scanners. A previously reported approach called MBA (MRT-based attenuation correction) used T1-weighted MR images which were segmented into four tissue types representing brain tissue, bone, other tissue and sinus, to which appropriate attenuation coefficients were assigned. In this work a template-based attenuation correction (TBA) is presented which applies an attenuation template to single subjects. A common attenuation template was created from transmission scans of 10 normal volunteers and spatially normalized to the SPM2 standard brain shape. For each subject the T1-MR template of SPM2 was warped onto the subject's individual MR image. The resulting warping matrix was applied to the common attenuation template so that an attenuation map matching the subject's brain shape was obtained. The attenuation maps of MBA and TBA were forward projected into attenuation factors which were alternatively used for AC. FDG scans of four subjects were reconstructed after AC with MBA and TBA and compared to images whose ACs were based on conventional attenuation maps (PBA=PET-based attenuation correction). Using PBA as reference in a region of interest analysis, MBA and TBA showed similar under- and overestimation of the reconstructed radioactivity up to -10% and 9%, respectively. The procedure to obtain the attenuation template still needs some improvements.
Nevertheless, the TBA method of attenuation correction is a promising alternative to MBA, whose accurate segmentation of MR images is still complex and not yet fully resolved.",2007,0, 4961,Handling ambiguity and errors: visual languages for calligraphic interaction,"This paper describes error handling and ambiguity in a class of applications organized around drawing and sketching, which we call Calligraphic Interfaces. While errors and imprecision are unavoidable features of human input, these have long been considered nuisances and problems to circumvent in user interface design. However, the transition away from WIMP interface styles and into continuous media featuring recognition requires that we take a fresh approach to errors and imprecision. We present new tools and interaction styles to allow designers to develop error-tolerant and simpler interaction dialogues",2001,0, 4962,Sources of error in hybrid pxi-benchtop vector signal generators,"This paper describes the effects of common errors in direct upconversion on the accuracy of digitally modulated signals. In addition, it describes common techniques used to model system error in a hybrid PXI-benchtop vector signal generator.",2007,0, 4963,Earth faults and related disturbances in distribution networks,"This paper describes the characteristics of the earth faults and related disturbances recorded in medium voltage overhead distribution networks during the years 1998-1999. Altogether 316 real cases were analyzed. The use of subharmonic oscillation and harmonic distortion was investigated as a means of anticipating faults. Arcing faults made up at least half of all the disturbances, and were especially predominant in the unearthed network. Fault resistances reached their minimum values near the beginning of the disturbances. The maximum currents that allowed for autoextinction in the unearthed network were comparatively small.",2001,0, 4964,Adaptive fault tolerance for spacecraft,"This paper describes the design and implementation of software infrastructure for real-time fault tolerance for applications on long duration deep space missions. The infrastructure has advanced capabilities for Adaptive Fault Tolerance (AFT), i.e., the ability to change the recovery strategy based on the failure history, available resources, and the operating environment. The AFT technology can accommodate adaptive or fixed recovery strategies. Adaptive fault tolerance allows the recovery strategy to be changed on the basis of the mission phase, failure history, and environment. For example, during a phase when power consumption must be minimized, there would be only one processor in operation. Thus, the recovery strategy would be to restart and retry. On the other hand, if the mission phase were in a time-critical mode (e.g., orbital insertion, encounter, etc.), then multiple processors would be running, and the recovery strategy would be to switch from a leader copy to a follower copy of the control software. In a fixed recovery strategy, there is a specified redundant resource which is committed when certain failure conditions occur.
The most obvious example of a fixed recovery strategy is to switch over to the standby processor in the event of any failure of the active processor",2000,0, 4965,A Robustness Approach for Handling Modeling Errors in Parallel-Plate Electrostatic MEMS Control,"This paper addresses the control of electrostatic parallel-plate microactuators in the presence of such modeling errors as unmodeled fringing field effects and deformations. In general, accurate descriptions of these phenomena often lead to very complicated mathematical models, while ignoring them may result in significant performance degradation. In this paper, it is shown by finite-element-method-based simulations that the capacitance due to fringing field effects and deformations can be compensated by introducing a variable serial capacitor. When a suitable robust controller is used, full knowledge of the introduced serial capacitor is not required, but merely its boundaries of variation. Based on this model, a robust control scheme is derived using the theory of input-to-state stability combined with backstepping state feedback design. Since the full state measurement may not be available under practical operational conditions, an output feedback control scheme is developed. The stability and performance of the system using the proposed control schemes are demonstrated through both stability analysis and numerical simulation. The present approach allows the loosening of the stringent requirements on modeling accuracy without compromising the performance of control systems.",2008,0, 4966,A new design method for the clawpole transverse flux machine. Application to the machine no-load flux optimization. Part I: Accurate magnetic model with error compensation,"This paper addresses the difficulty of modeling and optimizing transverse flux machines (TFMs). 3D flux line patterns, complex leakage paths and saturation of the magnetic material significantly add to the complexity of building accurate magnetic models to optimize TFMs. Therefore, common TFM design approaches usually rely on time-consuming finite element analyses, guided by the designer's knowledge. In this paper, a new design method is presented and applied to maximize the no-load flux of a Clawpole TFM. An error compensation mechanism combined with an analytical reluctance model is proposed as a solution to overcome the inherent inaccuracies of TFM analytical models. It is shown how finite-element-derived factors applied to selected reluctances of an analytical model can compensate for the model errors and validate the optimal solution found in a TFM design process, with a limited number of finite element simulations.",2010,0, 4967,Comparison of physical and software-implemented fault injection techniques,"This paper addresses the issue of characterizing the respective impact of fault injection techniques. Three physical techniques and one software-implemented technique that have been used to assess the fault tolerance features of the MARS fault-tolerant distributed real-time system are compared and analyzed. After a short summary of the fault tolerance features of the MARS architecture, and especially of the error detection mechanisms that were used to compare the erroneous behaviors induced by the fault injection techniques considered, we describe the common distributed testbed and test scenario implemented to perform a coherent set of fault injection campaigns.
The main features of the four fault injection techniques considered are then briefly described, and the results obtained are finally presented and discussed. Emphasis is put on the analysis of the specific impact and merit of each injection technique.",2003,0, 4968,Defect tolerance of solid dielectric transmission class cable,"This paper addresses the issue of determining the level of defect that is likely to cause the failure of solid dielectric transmission class cables. It also proposes methods for predicting the level of defect that is likely to cause failure and provides a simple analytic approximation for doing so in the case of conducting spheroids aligned with the electric field. A common assumption is that conducting particles > 100 μm in length are likely to cause failure of extruded dielectric transmission cable. This analysis suggests that when the effects of operation at elevated temperature are included in the analysis, this is probably an appropriate criterion with a sound technical basis. For maximum background fields in the range of 15 kV/mm, as presently seen near the conductor shield of some transmission class cables, a worst-case particle length in the range of 0.1 mm is likely to be required to cause failure for the worst-case local polymer morphology in the range of the maximum operating temperature.",2005,0, 4969,Using Simulated Fault Injection for Fault Tolerance Assessment of Quantum Circuits,"This paper addresses the problem of evaluating the fault tolerance algorithms and methodologies (FTAMs) designed for quantum systems, by adopting the simulated fault injection methodology from classical computation. Due to their wide spectrum of applications (including quantum circuit simulation) and hierarchical features, HDLs were employed for performing fault injection, as prescribed by the guidelines of the QUERIST project. At the same time, the injection techniques taken from classical circuit simulation had to be adapted to quantum computation requirements, including the specific quantum error models. The experimental simulated fault injection campaigns are thoroughly described along with the experimental results, which confirm the analytical expectations",2007,0, 4970,Reducing No Fault Found using statistical processing and an expert system,"This paper describes a method for capturing avionics test failure results from Automated Test Equipment (ATE) and statistically processing these data to provide decision support for software engineers in reducing No Fault Found (NFF) cases at various testing levels. NFFs have plagued the avionics test and repair environment for years at enormous cost to readiness and logistics support. The costs in terms of depot repair and user exchange dollars that are wasted annually on unresolved cases are graphically illustrated. A diagnostic data model is presented which automatically captures, archives and statistically processes test parameters and failure results, which are then used to determine whether an NFF at the next testing level resulted from a test anomaly. The model includes statistical process methods which produce historical trend patterns for each part- and serial-numbered unit tested. An Expert System is used to detect statistical pattern changes and stores that information in a knowledge base. A Decision Support System (DSS) provides advisories for engineers and technicians by combining the statistical test pattern with unit performance changes in the knowledge base.
Examples of specific F-16 NFF reduction results are provided.",2002,0, 4971,A method for fault location in distribution systems,"This paper describes a method for fault location in power distribution systems. The method is fundamentally based on artificial neural networks. It also employs transmission line theory and the resistive nature of the fault impedance. The method is applicable to feeders containing lateral derivations (single or three phase) and nonhomogeneous characteristics. Phase current and voltage fundamental frequency phasors at the origin of the feeder are needed as input data for a diagnosis. Studies regarding the influence of system modeling errors on the results are also presented.",2004,0, 4972,Probabilistic analysis of MPEG-4 error resilience tools in W-CDMA environments,"This paper describes a method for a probabilistic analysis of MPEG-4 error resilience tools. It reduces the burden of computer simulation. In order to determine design parameters for MPEG-4 error resilience tools, many combinations with respect to bitrate, video packet size, bit error rate, etc. should be evaluated, which usually requires tedious computer simulation. Without computer simulation, the probabilistic analysis provides the average behavior of the error resilience tools. The validity of the probabilistic analysis is evaluated by comparison to results of actual computer simulation",2000,0, 4973,A new algorithm for the identification of defects generating partial discharges in rotating machines,"This paper describes a new algorithm for the automatic identification of the type of partial discharge sources in insulation systems of rotating machine stator windings. The proposed algorithm is based on fuzzy rules that associate each partial discharge phenomenon to a fuzzy space class through a tree path selection. The output is a simple frame in which the defect type and a membership value are displayed. Applications of the algorithm to analyze data collected both in the field and in the laboratory are reported and discussed to show its validity and robustness.",2004,0, 4974,Wavelet transform in the accurate detection of high impedance arcing faults in high voltage transmission lines,"This paper describes a new fault detection technique which involves capturing the current signals generated in a transmission line under high impedance faults (HIFs). Its main thrust lies in the utilisation of the absolute sum value of signal components based on the discrete wavelet transform (DWT). A sophisticated decision logic is also designed for the determination of a trip decision. The results presented relate to a typical 154 kV Korean transmission system, the faulted signals for which are attained using the well known Electromagnetic Transients Program (EMTP) software. The simulation also includes an embodiment of a realistic nonlinear HIF model",2001,0, 4975,A new fault generator suitable for reliability analysis of digital circuits,"This paper deals with fault injection issues for reliability analysis. We propose a fault generator IP suitable for hardware emulation of single and multiple simultaneous fault occurrences. The proposed IP is based on a very useful approach that allows the designer to control the complexity and completeness of the fault injection process. We provide models for cost and performance estimation of the IP.
Also, synthesis results of its implementation on an FPGA are given.",2010,0, 4976,Open phase faults detection in PMSM drives based on current signature analysis,"This paper deals with condition monitoring of electrical failures in variable-speed permanent magnet synchronous motor (PMSM) drives by stator current signature analysis. The objective of this study is to develop a detection method for the open phase fault in PMSM drives. The main idea consists in minimizing the number of sensors needed to detect the open stator phase fault in the system under study. The fault-related harmonic components revealed by current spectral analysis are studied. The current waveform patterns for various modes of open phase winding are investigated. Simulation and experimental results are presented using a 1.1 kW, 6-pole three-phase PMSM. Comparison of simulation and experimental results shows that the method is able to detect the open-phase fault in a PMSM drive.",2010,0, 4977,Sensor fault detection by sliding mode observer applied to an autonomous vehicle,"This paper deals with sensor fault detection and isolation using observer-based methods. The principle is to reconstruct the state vector and measurements of the system by sliding mode observers and compare the estimated outputs with those measured. In this work, a multi-observer technique is used. It consists in constructing many observers, at least one observer for each output (sensor measurement). Each observer must be robust to noises and to other uncertainties but sensitive to sensor faults. The residual is the estimation error, which is the difference between the sensor measurement and its estimate. Without a failure in the sensor, this residual remains around zero, and if a fault occurs, it deviates significantly from zero. This method is applied to an autonomous electric vehicle called RobuCar. Simulation results are given at the end to show the effectiveness of the approach.",2009,0, 4978,An approach to system-wide fault tolerance for FPGAs,"This paper deals with the construction of an entire FPGA-based, fault-tolerant computer system spanning all layers of modern computer architecture. This starts with the protection of the fundamental FPGA configuration matrix, continues with the HDL design of the multiple hardware components essentially required to run regular applications on FPGAs, including processor, memory and interfaces, and ends up in the implementation of an operating system running radiation-hardened software. Joining all these separate layers with their individual approaches to fault tolerance increases the overall radiation tolerance to a maximum and enables the use in high-energy physics particle accelerators. The current design phase is shown exemplarily for a fault-tolerant soft core CPU, including validation results.",2009,0, 4979,Fault diagnostic testing using Partial Discharge measurements on High Voltage rotating machines,"This paper addresses the testing of High Voltage (HV) insulation systems associated with large motors and generators. The condition of HV insulation systems associated with rotating machines will be assessed using Partial Discharge (PD) as a predictive tool alongside the other techniques.
These other tests include the measurement of Insulation Resistances (IR), and the equivalent parallel resistance and capacitance to calculate tan delta values.",2009,0, 4980,Detection and classification of impulse faults in transformer using wavelet transform and artificial neural network,This paper aims at describing a method for the detection and classification of impulse faults in a transformer winding using wavelet transform and an artificial neural network. The method is explained by considering the lumped parameter model of a winding. The WT decomposes the signal and RMS value of the detailed signal is extracted to train the ANN. The simulation results are satisfactory in detection and classification of faults.,2008,0, 4981,Fault Location and Voltage Estimation in Transmission Systems by Evolutionary Algorithms,"This paper aims at presenting the development and implementation of an approach to deal with fault location in transmission systems by Evolutionary Algorithms. The algorithm allows for the estimation of voltage sags and swells in non-monitored buses as well. An Evolutionary Strategy was applied to address the optimization problem, from the generation of a population of individuals that represent possible solutions to the problem to the evolutionary operators such as selection, mutation and crossover. Some variants of the basic approach were also considered in such a way to determine the fault location with better performance. A case study, based on the 30 bus IEEE study system, demonstrates the potential of the proposed methodology.",2009,0, 4982,Error analysis of an optical current transducer operating with a digital signal processing system,"This paper analyzes errors associated with the analog-to-digital (A/D) conversion process of a digital signal processing unit (DSP) within the operation of an optical current transducer (OCT). Quantization of the analog current measurement signal leads to measurement errors which are a direct consequence of the uncertainty with which an N-bit resolution A/D assigns a binary word for a given analog input value. This paper presents comprehensive simulations of the performance of different current sensors monitored by the DSP unit and discusses aspects of compatibility between the sensor dynamic range and the resolution of an A/D conversion process. Recommendations are given on how to match the OCT to the given A/D parameters, and vice versa, in order to meet specified accuracy requirements",2000,0, 4983,Power Factor Correction on Distribution Networks Including Distributed Generation,"This paper describes the mathematical formulation and implementation of a procedure to design power factor correction in unbalanced distribution systems. The methodology is based on the Four-Conductor Current Injection Method - FCIM, that applies the Newton-Raphson method to solve the power flow problem on four-conductor distribution systems. The set of nonlinear current injection equations are derived using phase coordinates, and the complex variables are written in rectangular form. The mathematical formulation can be especially useful in systems having distributed generators such as induction machines, as well as in systems feeding large industrial consumers. 
Some test systems are used to demonstrate the effectiveness of the method.",2007,0, 4984,A simulator evaluation of a model-reference sliding mode fault tolerant controller,This paper describes the real-time implementation of a model-reference sliding mode control allocation scheme on the SIMONA flight simulator. The scheme allows automatic redistribution of the control signals to the remaining functioning actuators when a fault or failure occurs. The system's closed-loop stability under certain classes of faults and failures is analyzed. It is shown that faults and even certain total actuator failures can be handled directly without reconfiguring the controller. The results obtained from the SIMONA show good performance in both nominal and failure scenarios.,2009,0, 4985,Finite Element Analysis of Switched Reluctance Motor under Dynamic Eccentricity Fault,"This paper describes the results of a two-dimensional finite element analysis carried out on an 8/6 switched reluctance motor for studying the effects of dynamic eccentricity on the static characteristics of the motor. Flux contours, flux-linkage profiles and mutual fluxes are obtained for both the healthy and the faulty motor. Besides, static torque profiles of phases are obtained for different degrees of eccentricity, and it is shown that at low current the effect of eccentricity is considerable compared to that of the rated current case. Finally, Fourier analysis of the torque profiles is performed to make their difference visible.",2006,0, 4986,Design and implementation of informatized assessment system on electrical fault maintenance-- taken electrical fault detection of refrigeration equipment as an example,"This paper describes the design of an informatized training and assessment system for refrigeration equipment. Through the analysis of functional modules such as RS485, intelligent embedded controllers and the simulation training platform, the characteristics of higher efficiency, low loss, low cost, and guaranteed fairness and justice of the examination are mainly discussed.",2008,0, 4987,A neural network based fault detection and identification scheme for pneumatic process control valves,"This paper outlines a method for detection and identification of actuator faults in a pneumatic process control valve using a neural network. First, the valve signature and dynamic error band tests, used by specialists to determine valve performance parameters, are carried out for a number of faulty operating conditions. A commercially available software package is used to carry out the diagnostic tests, thus eliminating the need for additional instrumentation of the valve. Next, the experimentally determined valve performance parameters are used to train a multilayer feedforward network to successfully detect and identify incorrect supply pressure, actuator vent blockage, and diaphragm leakage faults",2001,0, 4988,Attack and Fault Identification in Electric Power Control Systems: An Approach to Improve the Security,"This paper presents a technique to extract rules in order to identify attacks and faults to improve the security of electric power control systems. By using a rough sets classification algorithm, a set of rules can be defined. The approach tries to reduce the number of input variables and the number of examples, offering a more compact set of examples to fix the rules to the anomaly detector.
An illustrative example is presented.",2007,0, 4989,A .NET framework for an integrated fault diagnosis and failure prognosis architecture,"This paper presents a .NET framework as the integrating software platform linking all constituent modules of the fault diagnosis and failure prognosis architecture. The inherent characteristics of the .NET framework provide the proposed system with a generic architecture for fault diagnosis and failure prognosis for a variety of applications. Functioning as data processing, feature extraction, fault diagnosis and failure prognosis, the corresponding modules in the system are built as .NET components that are developed separately and independently in any of the .NET languages. With the use of Bayesian estimation theory, a generic particle-filtering-based framework is integrated in the system for fault diagnosis and failure prognosis. The system is tested in two different applications - bearing spalling fault diagnosis and failure prognosis and brushless DC motor turn-to-turn winding fault diagnosis. The results suggest that the system is capable of meeting performance requirements specified by both the developer and the user for a variety of engineering systems.",2010,0, 4990,A software-based concurrent error detection technique for power PC processor-based embedded systems,"This paper presents a behavior-based error detection technique called control flow checking using branch trace exceptions for the PowerPC processor family (CFCBTE). This technique is based on the branch trace exception feature available in the PowerPC processor family for debugging purposes. This technique traces the target addresses of program branches at run-time and compares them with reference target addresses to detect possible violations caused by transient faults. The reference target addresses are derived by a preprocessor from the source program. The proposed technique is experimentally evaluated on a 32-bit PowerPC microcontroller using software implemented fault injection (SWIFI). The results show that this technique detects about 91% of the injected control flow errors. The memory overhead is 39.16% on average, and the performance overhead varies between 110% and 304% depending on the workload used. This technique does not modify the program source code.",2005,0, 4991,Hadamard-Craigen error correcting codes,"This paper presents a block coding scheme based on a new construction of block-circulant Hadamard matrices. We describe this unified construction of Hadamard matrices of order 2^t p, where p is any odd number and t >= 4 (depends on p), explaining how the block-circulant structure of the matrices contributes to more efficient coding. We introduce HC codes that can be implemented in software with relatively short encoding and decoding times. An HC matrix is compact in that it can be represented in its entirety by its first 2t-1 rows. Experiments show that the HC(64,7,32) code is better than the BCH(63,7,31) code by 1.5 dB and 2.8 dB over the corresponding uncoded data at SNR of 3 dB.",2002,0, 4992,Sequential Element Design With Built-In Soft Error Resilience,"This paper presents a built-in soft error resilience (BISER) technique for correcting radiation-induced soft errors in latches and flip-flops. The presented error-correcting latch and flip-flop designs are power efficient, introduce minimal speed penalty, and employ reuse of on-chip scan design-for-testability and design-for-debug resources to minimize area overheads.
Circuit simulations using a sub-90-nm technology show that the presented designs achieve more than a 20-fold reduction in cell-level soft error rate (SER). Fault injection experiments conducted on a microprocessor model further demonstrate that chip-level SER improvement is tunable by selective placement of the presented error-correcting designs. When coupled with error correction code to protect in-pipeline memories, the BISER flip-flop design improves chip-level SER by 10 times over an unprotected pipeline with the flip-flops contributing an extra 7-10.5% in power. When only soft errors in flip-flops are considered, the BISER technique improves chip-level SER by 10 times with an increased power of 10.3%. The error correction mechanism is configurable (i.e., can be turned on or off), which enables the use of the presented techniques for designs that can target multiple applications with a wide range of reliability requirements",2006,0, 4993,Threshold-based mechanisms to discriminate transient from intermittent faults,"This paper presents a class of count-and-threshold mechanisms, collectively named α-count, which are able to discriminate between transient faults and intermittent faults in computing systems. For many years, commercial systems have been using transient fault discrimination via threshold-based techniques. We aim to contribute to the utility of count-and-threshold schemes, by exploring their effects on the system. We adopt a mathematically defined structure, which is simple enough to analyze by standard tools. α-count is equipped with internal parameters that can be tuned to suit environmental variables (such as transient fault rate, intermittent fault occurrence patterns). We carried out an extensive behavior analysis for two versions of the count-and-threshold scheme, assuming, first, exponentially distributed fault occurrences and, then, more realistic fault patterns",2000,0, 4994,Comparative study on a fault current limiter with thyristor-controlled impedances,"This paper presents a comparative study of three types of thyristor-controlled impedances, i) resistance, ii) inductance and iii) capacitance, on a solid-state fault current limiter (SSFCL) for electric power distribution systems. The aim of the proposed study is to assess thyristor-controlled elements of a resonant-type fault current limiter. The impedance of the SSFCL is controlled using the firing controller of the thyristors. Regarding the assessment of PCC (point of common coupling) voltage and fault current limitability, the simulation results showed that the inductance type gives the best performance among those three impedances.",2008,0, 4995,Analysis of the ground effect in the single phase fault location for power distribution systems,"This paper presents a comparison of two different alternatives for locating single phase faults, considering the influence of the soil resistivity and the fault resistance. The compared methods are the one proposed by a commercial software developer and one of the classic impedance-based fault location methods. Tests were performed using three different soil resistivity models, based on measurements taken directly from the field. According to the results in a 34 kV power distribution system, the performance of the simple impedance-based fault locator is better than that obtained using the commercial software, especially in cases of high resistance faults.
Additionally, the soil resistivity models which best represent the real systems are those which give better results in the fault location, showing the influence of this variable.",2008,0, 4996,An Adaptive Fault-Tolerant Component Model,This paper presents a component model for building distributed applications with fault-tolerance requirements. The AFT-CCM model selects the configuration of replicated services during execution time based on QoS requirements specified by the user. The configuration is managed using a set of components that deal with the non-functional aspects of the application. The characteristics of this model and the results obtained with its implementation are described throughout this paper.,2003,0, 4997,Complex-Valued Neural Networks Fault Tolerance in Pattern Classification Applications,"This paper investigates the fault-tolerance ability of complex-valued neural networks (CVNNs) in classification applications. An analysis of the effect of weight loss at the unit (neuron) level revealed that the loss of weight in complex neural networks is more critical than in real-valued neural networks. A novel weight decay technique for fault tolerance of real-valued neural networks (RVNNs) is proposed and applied to CVNNs. The simulation results indicate that complex-valued neural networks are less fault tolerant than real-valued neural networks. It is also found that while the weight decay technique substantially improves the fault tolerance ability of RVNNs, the technique does not necessarily improve the fault tolerance of CVNNs.",2010,0, 4998,Error performance and throughput evaluation of a multi-Gbps millimeter-wave WPAN system in the presence of adjacent and co-channel interference,"This paper investigates the impact of adjacent channel interference (ACI) and co-channel interference (CCI) on error performance and throughput of a multi-Gbps millimeter-wave wireless personal area network (WPAN) system in a realistic residential line-of-sight (LOS) and non-line-of-sight (NLOS) multipath environment. The main contribution of this paper is providing a multi-Gbps WPAN system design in the challenging multipath environment in the presence of ACI/CCI. Based on the investigation results, we have provided ACI/CCI rejection as a reference for victim receiver protection design. In the NLOS environment, the ACI rejection (i.e. the ACI that causes 0.5 dB degradation in the required signal-to-noise ratio (SNR) to achieve a bit error rate (BER) of 10^-6) for π/2-BPSK, QPSK, 8 PSK and 16 QAM is 13, 7, 0 and -6 dB respectively, and the CCI rejection for the same modulation schemes is -18, -20, -26 and -29 dB respectively. Secondly, we have clarified the LOS-NLOS relationship of the ACI/CCI impact on system performance. ACI in the multipath NLOS environment causes an additional 5 dB degradation in error performance as compared to ACI in the LOS environment. CCI, on the other hand, has a similar impact on error performance in both LOS and NLOS environments. Thirdly, we have clarified the relationship between modulation spectral efficiency and robustness against ACI/CCI. In an environment with no or low ACI/CCI, the maximum achievable throughput for π/2-BPSK, QPSK, 8 PSK and 16 QAM in the LOS environment is 1.2, 2.5, 3.8 and 5 Gbps respectively. In the NLOS environment, the achievable throughput decreases to 1, 1.9, 2.8 and 3.8 Gbps respectively. As ACI/CCI increases, the throughput of higher-order modulation schemes such as 16 QAM decreases the most rapidly, followed by 8 PSK and QPSK.
The throughput for π/2-BPSK has the highest tolerance against increasing ACI/CCI, at the expense of lower maximum achievable throughput.",2009,0, 4999,Satellite Ozone Retrieval Under Broken Cloud Conditions: An Error Analysis Based on Monte Carlo Simulations,"This paper investigates the influence of horizontally inhomogeneous clouds on the accuracy of total ozone column retrievals from space. The focus here is on retrievals based on backscattered ultraviolet light measurements in Huggins bands in the range of 315-340 nm. It is found that simplifying the description of cloud properties in the ozone-retrieval algorithm studied can produce errors of up to 6%, depending on the error in the assumed cloud parameters. Yet another finding is the fact that the independent pixel approximation suffices for ozone-retrieval algorithms. This was found using three-dimensional Monte Carlo radiative transfer calculations in the Huggins bands",2007,0, 5000,Fault-tolerant scheduling of real-time tasks having software faults,"This paper investigates the problem of fault-tolerant scheduling of a set of real-time tasks where each task has primary and alternate implementations. A similar scheduling problem has been studied before; however, we propose an enhanced scheme for scheduling real-time periodic tasks with software faults. Alternate-primary recovery (APR) based scheduling employs a special backup-primary that can replace the primary when it fails often. The new scheduling technique saves the CPU time wasted on executing unsuccessful primaries again and again. APR scheduling is implemented and tested for a TRC (TCP-to-RS232 converter) embedded system that connects Ethernet to serial-RS232 devices. It is also compared with an existing fault-tolerant scheduling method to verify the proposed enhancement",2005,0, 5001,Fault detection of open-switch damage in voltage-fed PWM motor drive systems,This paper investigates the use of different techniques for fault detection in voltage-fed asynchronous machine drive systems. With the proposed techniques it is possible to detect and identify the power switch in which the fault has occurred. Such detection requires the measurement of some voltages and is based on the analytical model of the voltage source inverter. Simulation and experimental results are presented to demonstrate the correctness of the proposed techniques. The results obtained so far indicate that it is possible to embed some fault-tolerant properties for the voltage-fed asynchronous machine drive system.,2003,0, 5002,A Fuzzy-Based Approach for the Diagnosis of Fault Modes in a Voltage-Fed PWM Inverter Induction Motor Drive,"This paper investigates the use of fuzzy logic for fault detection and diagnosis in a pulsewidth modulation voltage source inverter (PWM-VSI) induction motor drive. The proposed fuzzy technique requires the measurement of the output inverter currents to detect intermittent loss of firing pulses in the inverter power switches. For diagnosis purposes, a localization domain made with seven patterns is built with the stator Concordia current vector. One is dedicated to the healthy domain and the six others to each inverter power switch. The fuzzy bases of the proposed technique are extracted from the current analysis of the fault modes in the PWM-VSI.
Experimental results on a 1.5-kW induction motor drive are presented to demonstrate the effectiveness of the proposed fuzzy approach.",2008,0, 5003,Fault tolerant event localization in sensor networks using binary data,"This paper investigates the use of wireless sensor networks for estimating the location of an event that emits a signal which propagates over a large region. In this context, we assume that the sensors make binary observations and report the event if the measured signal at their location is above a threshold; otherwise they remain silent. Based on the sensor binary beliefs we use 4 different estimators to localize the event: CE (centroid estimator), ML (maximum likelihood), SNAP (subtract on negative add on positive) and AP (add on positive). The main contribution of this paper is the fault tolerance analysis of the proposed estimators. Furthermore, the analysis shows that SNAP is the most fault tolerant of all estimators considered.",2008,0, 5004,Improved fault tolerance of active power filter system,"This paper investigates the utilization of a relatively simple topology that provides fault-tolerant operation for a three-phase system containing a shunt active power filter. With such a topology, when one of the filter converter legs is lost, the filter can still operate by connecting the grid neutral to a fourth converter leg. The structure proposed and the operating principles of the system are presented. The system model under fault condition is derived and a suitable control strategy is proposed. Experimental results are presented to demonstrate the correctness of the proposed solution",2001,0, 5005,Progress in real-time fault tolerance,"This paper discusses progress in the field of real-time fault tolerance. In particular, it considers synchronous vs. asynchronous fault tolerance designs, maintaining replica consistency, alternative fault tolerance strategies, including checkpoint restoration, transactions, and consistent replay, and custom vs. generic fault tolerance.",2004,0, 5006,The Analysis of Losses in High-Power Fault-Tolerant Machines for Aerospace Applications,"This paper discusses the design of a fault-tolerant electric motor for an aircraft main engine fuel pump. The motor in question is a four-phase fault-tolerant motor with separated windings and a six-pole permanent magnet rotor. Methods of reducing machine losses in both the rotor and stator are introduced and discussed. The methods used to calculate rotor eddy current losses are examined. Full three-dimensional finite-element (FE) time stepping, two-dimensional (2-D) FE time stepping, and 2-D FE harmonic methods are discussed, and the differences between them and the results they produce were investigated. Conclusions are drawn about the accuracy of the results produced and how the methods in question will help the machine designer",2006,0, 5007,On the Automation of Software Fault Prediction,"This paper discusses the issues involved in building a practical automated tool to predict the incidence of software faults in future releases of a large software system. The possibility of creating such a tool is based on the authors' experience in analyzing the fault history of several large industrial software projects, and constructing statistical models that are capable of accurately predicting the most fault-prone software entities in an industrial environment. 
The emphasis of this paper is on the issues involved in the tool design and construction and an assessment of the extent to which the entire process can be automated so that it can be widely deployed and used by practitioners who do not necessarily have any particular statistical or modeling expertise",2006,0, 5008,Development of a novel humidity sensor with error-compensated measurement system,"This paper documents the creation of a complete PC-based humidity sensing system, implemented using LabVIEW from National Instruments. The humidity sensor, which has a measured sensitivity of 0.25%/RH%, is manufactured by thin film technology from a novel combination of SiO/In2O3. Both the humidity sensor and a standard temperature sensor are interfaced to a PC using a front-end signal conditioning circuit. The entire system has been analyzed mathematically and the necessary algorithms for error-compensation have been developed. The resulting measurement system is efficient, accurate and flexible.",2002,0, 5009,Performance Evaluation of an Adaptive-Network-Based Fuzzy Inference System Approach for Location of Faults on Transmission Lines Using Monte Carlo Simulation,"This paper employs a wavelet multiresolution analysis (MRA) along with the adaptive-network-based fuzzy inference system to overcome the difficulties associated with conventional voltage- and current-based measurements for transmission-line fault location algorithms, due to the effect of factors such as fault inception angle, fault impedance, and fault distance. This proposed approach is different from conventional algorithms that are based on deterministic computations on a well-defined model to be protected, employing wavelet transform together with intelligent computational techniques, such as the fuzzy inference system (FIS), adaptive neurofuzzy inference system (ANFIS), and artificial neural network (ANN) in order to incorporate expert evaluation so as to extract important features from wavelet MRA coefficients for obtaining coherent conclusions regarding fault location. A comparative study establishes that the ANFIS approach has superiority over ANN- and FIS-based approaches for the location of line faults. In addition, the efficacy of the ANFIS is validated through the Monte Carlo simulation for incorporating the stochastic nature of fault occurrence in practical systems. Thus, this ANFIS-based digital relay can be used as an effective tool for real-time digital relaying purposes.",2008,0, 5010,Evaluation of atmospheric correction using FLAASH,"This paper evaluated the capability of FLAASH in ENVI software to make atmospheric correction for Hyperion hyperspectral images and ALI images. The Hyperion and ALI sensors are two of the three instruments onboard the NASA EO-1 satellite, New Millennium Program (NMP), launched on November 21, 2000, and Hyperion is the first spaceborne hyperspectral imaging spectrometer. The study area is Zhangye city (37°28'N-39°57'N, 97°20'E-102°12'E) in the Heihe River valley of Gansu province, China. Using TM data with UTM projection, Hyperion hyperspectral data acquired on September 10, 2007 and ALI data acquired on September 20, 2007 were geometrically and radiometrically corrected, and then atmospherically corrected using FLAASH. Surface reflectance spectra of corn, water body, desert and buildings were extracted from these two images and compared with apparent reflectance.
Canopy reflectance spectra of corn were recorded using an ASD FieldSpec spectroradiometer in near-real time to coincide with the EO-1 satellite sensor overpass. According to the filter function of ALI, and the central wavelength and Gaussian filter function based on the full width at half maximum (FWHM) of Hyperion, the ASD reflectance spectra were resampled to the corresponding Hyperion and ALI bands. Results showed that the resampled ASD spectra of corn were consistent with the spectra on the Hyperion and ALI images after FLAASH. This demonstrated the effectiveness of atmospheric correction using FLAASH.",2008,0, 5011,Investigation of a fault tolerant and high performance motor drive for critical applications,"This paper evaluates a fault-tolerant electric motor drive with redundancy for a potential all-electric aircraft (AEA), and investigates increasing the redundancy against partial or complete motor failures. A flexible motor configuration is proposed to increase the redundancy against motor failures. The paper highlights the importance of effectively simulating the system on computer, and the way in which the ideas and techniques used to simulate the system have been adapted to the physical prototype. The computer modelling section of the paper outlines the steps involved in simulating the complete system using an object orientated programming language, LabVIEW, and some simulation results are provided",2001,0, 5012,"Diagnosis of stator, rotor and airgap eccentricity faults in three-phase induction motors based on the multiple reference frames theory","This paper describes the use of multiple reference frames for the diagnosis of stator, rotor, and eccentricity faults in line-fed and direct torque controlled (DTC) inverter-fed induction motors. The use of this new technique, which was proposed by the authors for the diagnosis of inter-turn short circuits, is extended for the detection and classification of different types of faults. Each fault causes a different disturbance or introduces different components in the motor supply currents. Based on the multiple reference frames theory, by choosing a proper reference frame, it is possible to transform each one of these current components to a d-q frame. In these d-q reference frames, those current harmonics will appear as constants, thus being easily measured, extracted or manipulated. Because each fault causes a different disturbance, the multiple reference frames technique can easily discriminate between different faults. Simulation and experimental results demonstrate the effectiveness of the proposed technique for the diagnosis of stator, rotor, and airgap eccentricity faults in three-phase induction motors. Moreover, due to the operating philosophy of the multiple reference frames technique, its integration into the control system of a DTC induction motor drive is a straightforward task and is briefly addressed at the end of the paper.",2003,0, 5013,Fault-tolerant and scalable TCP splice and web server architecture,"This paper describes three enhancements to the TCP splicing mechanism: (1) Enable a TCP connection to be simultaneously spliced through multiple machines for higher scalability; (2) Make a spliced connection fault-tolerant to proxy failures; and (3) Provide flexibility of splitting a TCP splice between a proxy and a backend server for further increasing the scalability of a Web server system. A Web server architecture based on this enhanced TCP splicing is proposed.
This architecture provides a highly scalable, seamless service to the users with minimal disruption during server failures. In addition to the traditional Web services in which users download Web pages, multimedia files and other types of data from a Web server, the proposed architecture supports newly emerging Web services that are highly interactive, and involve relatively longer, stateful client-server sessions. A prototype of this architecture has been implemented as a Linux 2.6 kernel module, and the paper presents important performance results measured from this implementation",2006,0, 5014,Signatures of Electromechanical Faults in Stress Distribution and Vibration of Induction Motors,"This paper develops a method for determining the signatures of electromechanical faults in the airgap stress distribution and stator vibration of induction machines. Two types of induction motors are analyzed: a 2-pole-pair, star-connected motor and a 1-pole-pair, delta-connected motor. The radial electromagnetic stress distribution along the airgap is calculated and developed into double Fourier series in space and time. The computations show the existence of low frequency and low order stress distributions acting on the stator of the electrical machine when it is working under faulty conditions. These stress waves are able to produce forced vibration in the stator surface. The simulation results are corroborated by vibration measurements. Modal testing is also carried out to determine the natural frequencies. The measurements and simulations show that low frequency components of the vibrations can be used as identifiable signatures for condition monitoring of induction motors. The patterns presented by the faults allow discrimination between them.",2007,0, 5015,Diagnostics of bar and end-ring connector breakage faults in polyphase induction motors through a novel dual track of time-series data mining and time-stepping coupled FE-state space modeling,"This paper develops the fundamental foundations of a technique for detection of faults in induction motors that is not based on the traditional Fourier transform frequency domain approach. The technique can extensively and economically characterize and predict faults from the induction machine adjustable speed drive design data. This is done through the development of dual-track proof-of-principle studies of fault simulation and identification. These studies are performed using our proven time stepping coupled finite element-state space method to generate fault case data. Then, the fault cases are classified by their inherent characteristics, so-called "signatures" or "fingerprints." These fault signatures are extracted or mined here from the fault case data using our novel time series data mining technique. The dual-track of generating fault data and mining fault signatures was tested here on three, six, and nine broken bar and broken end-ring connectors in a 208-volt, 60-Hz, 4-pole, 1.2-hp, squirrel cage 3-phase induction motor",2002,0, 5016,Fault tolerant complex FIR filter architectures using a redundant MRRNS,"This paper discusses a fault-tolerant procedure for computing with complex signals based on the modulus replication residue number system (MRRNS). Using the MRRNS mapping technique, we are able to compute over identical channels using modular arithmetic. We discuss an extension to the complex MRRNS which allows the detection and correction of faults in the mapped channels.
The technique provides redundancy by adding channels to the existing parallel structure. The paper discusses the overall fault tolerant technique and provides an application example of a complex adaptive filter for a wireless LAN system.",2001,0, 5017,Kinematic and geometric error verification and compensation of a three axes vertical machining center,"This paper discusses kinematic and geometric errors, which are the basic inaccuracies of the machine tool. For a three-axis machine, there are 21 error components (3 linear position errors, 6 straightness errors, 9 angular errors and 3 squareness errors). Each individual error is measured directly by a laser interferometer. The derivation of the kinematic and geometric error is based on the assumption of rigid body motions. It utilizes homogeneous transformation matrices to create a kinematic and geometric mathematical error model which is useful for identifying all errors within the working volume. The developed mathematical error model is used to calculate and predict the error vector to compensate the error in each machine movement. The error vector at an intermediate position between the measured points is estimated by a back-propagation neural network.",2002,0, 5018,Online Fault Diagnosis of Discrete Event Systems. A Petri Net-Based Approach,"This paper is concerned with an online model-based fault diagnosis of discrete event systems. The model of the system is built using the interpreted Petri nets (IPN) formalism. The model includes the normal system states as well as all possible faulty states. Moreover, it assumes the general case when events and states are partially observed. One of the contributions of this work is a bottom-up modeling methodology. It describes the behavior of system elements using the required state variables and assigning a range to each state variable. Then, each state variable is represented by an IPN model, herein named module. Afterwards, using two composition operators over all the modules, a monolithic model for the whole system is derived. It is a very general modeling methodology that avoids tuning phases and the state combinatorics found in finite state automata (FSA) approaches. Another contribution is a definition of diagnosability for IPN models built with the above methodology and a structural characterization of this property; polynomial algorithms for checking diagnosability of IPN are proposed, avoiding the reachability analysis of other approaches. The last contribution is a scheme for online diagnosis; it is based on the IPN model of the system and an efficient algorithm to detect and locate the faulty state. Note to Practitioners-The results proposed in this paper allow: 1) building discrete event system models in which faults may arise; 2) testing the diagnosability of the model; and 3) implementing an online diagnoser. The modeling methodology helps to conceive in a natural way the model from the description of the system's components leading to modules that are easily interconnected. The diagnosability test is stated as a linear programming problem which can be straightforwardly programmed. Finally, the algorithm for online diagnosis leads to an efficient procedure that monitors the system's outputs and handles the normal behavior model.
This provides an opportune detection and location of faults occurring within the system",2007,0, 5019,Semiphysical Models of a Hydraulic Servo Axis for Fault Detection,"This paper is concerned with the model-based fault detection and diagnosis of hydraulic servo axes. For the application of model-based methods, a very detailed model of the plant must be derived. Such a model can be derived by means of physical equations (i.e. white-box models) or by data-driven methods (so-termed black-box models). The paper at hand tries to combine the best of both worlds by applying the LOLIMOT neural net and deriving semi-physical models. Since only physical relationships but no physical laws must be known, the model structure can be set up easily. Conversely, the data-driven modeling can be expedited since a model structure can be assumed. In this paper, models will be derived for the pressure supply of the hydraulic servo axis as well as the valve and cylinder unit. Based on these models, parity equations will be compiled, which in turn will be used for fault detection and diagnosis. Measurements at a testbed verify these results.",2007,0, 5020,Fault-Tolerant Encryption for Space Applications,"This paper is concerned with the use of commercial security algorithms like the Advanced Encryption Standard (AES) in Earth observation small satellites. The demand to protect the sensitive and valuable data transmitted from satellites to ground has increased and hence the need to use encryption on board. AES, which is a very popular choice in terrestrial communications, is slowly emerging as the preferred option in the aerospace industry including satellites. This paper first addresses the encryption of satellite imaging data using five AES modes - ECB, CBC, CFB, OFB and CTR. A detailed analysis of the effect of single event upsets (SEUs) on imaging data during on-board encryption using different modes of AES is carried out. The impact of faults in the data occurring during transmission to ground due to noisy channels is also discussed and compared for all the five modes of AES. In order to avoid data corruption due to SEUs, a novel fault-tolerant model of AES is presented, which is based on the Hamming error correction code. A field programmable gate array (FPGA) implementation of the proposed model is carried out and measurements of the power and throughput overhead are presented.",2009,0, 5021,Investigation of Transformer Electromagnetic Forces Caused by External Faults Using FEM,"This paper is focused on the investigation of transformer internal electromagnetic forces caused by external faults. The simulations are based on the finite element method (FEM) to model a three-phase transformer and also to calculate the forces via traditional mechanical and electrical equations. Normal operation and symmetrical fault conditions are used for the estimation of internal radial and axial forces. This work, which leads to specific transformer performances, allows verification of the potential of the method used and also the physical consistency of the computational simulations. The results shown in this paper are flux distribution, losses and other information regarding the equipment operation.
The FEM approach has been shown to be a powerful tool for the estimation of mechanical stress within transformers and it can be quite useful at the design stage of such devices",2006,0, 5022,Survey on Fault Operation on Multilevel Inverters,"This paper is related to faults that can appear in multilevel (ML) inverters, which have a high number of components. This is a subject of increasing importance in high-power inverters. First, methods to identify a fault are classified and briefly described for each topology. In addition, a number of strategies and hardware modifications that allow for operation in faulty conditions are also presented. As a result of the analyzed works, it can be concluded that ML inverters can significantly increase their availability and are able to operate even with some faulty components.",2010,0, 5023,Static error identification: Application to line parameter estimation,"This paper presents a review of static error identification techniques in power system state estimation. It then illustrates the application of the Kalman filter in the estimation of static errors, such as line parameters. Using the test cases for line parameter estimation, the paper demonstrates that the simple scheme of resetting the parameter covariance matrix will improve the estimation performance.",2010,0, 5024,Automated in-camera detection of flash-eye defects,"This paper examines the problem of performing automated real-time detection of flash-eye defects (redeye) in the firmware of a digital camera. Several different algorithms are compared, and timing and memory requirements on several embedded architectures are presented. A discussion on advanced in-camera techniques to improve on standard algorithms is also presented.",2005,0, 5025,Comparing code reading techniques applied to object-oriented software frameworks with regard to effectiveness and defect detection rate,"This paper first reasons on understanding software frameworks for defect detection, and then presents experimental research comparing the effectiveness and defect detection rate of code-reading techniques when applied to C++-coded object-oriented frameworks. We present and discuss the functionality-based approach to framework understanding. Then, we present an experiment that compared three reading techniques for inspection of software frameworks. Two of those reading techniques, namely checklist-based reading and systematic order-based reading, were adopted from the scientific literature, while the third one, namely functionality-based reading, was derived from the functionality-based approach. The results of the experiment are that (1) functionality-based reading is much more effective and efficient than checklist-based reading; (2) functionality-based reading is significantly more effective and efficient than systematic order-based reading; and (3) systematic order-based reading performs significantly better than checklist-based reading with regard to defect detection rate. However, because we used checklist-based reading and systematic order-based reading largely as they are, with limited adaptation to frameworks, it is too early to draw strong conclusions from the experiment results, and improving and replicating this study is strongly recommended.",2004,0, 5026,A Hierarchical DHT for Fault Tolerant Management in P2P-SIP Networks,"This paper focuses on fault tolerance of super-nodes in P2P-SIP systems. The large-scale environments such as P2P-SIP networks are characterized by high volatility (i.e.
a high frequency of failures of super-nodes). Most proposed fault-tolerance solutions address only physical defects. They do not take into account timing faults, which are very important for multimedia applications such as telephony. We propose HP2P-SIP, which is a timing and physical fault tolerant approach based on a hierarchical approach for P2P-SIP systems. Using the OverSim simulator, we demonstrate the feasibility and the efficiency of HP2P-SIP. The obtained results show that our proposition significantly reduces the localization time of nodes, and increases the probability of finding the called nodes. This optimization allows improving the efficiency of applications that have strong time constraints such as VoIP systems in dynamic P2P networks.",2010,0, 5027,Fault-Tolerant Operation of a Battery-Energy-Storage System Based on a Multilevel Cascade PWM Converter With Star Configuration,"This paper focuses on fault-tolerant control for a battery-energy-storage system based on a multilevel cascade pulsewidth-modulation (PWM) converter with star configuration. During the occurrence of a single-converter-cell or single-battery-unit fault, the fault-tolerant control enables continuous operation and maintains state-of-charge balancing of the remaining healthy battery units. This enhances both system reliability and availability. A 200-V, 10-kW, 3.6-kWh laboratory system combining a three-phase cascade PWM converter with nine nickel-metal-hydride battery units is designed, constructed, and tested to verify the validity and effectiveness of the proposed fault-tolerant control.",2010,0, 5028,New Error Containment Schemes for H.264 Decoders,"This paper focuses on new error containment schemes for the H.264 advanced video codec (AVC). A lossless flexible macroblock ordering (FMO) removal scheme that allows playback of FMO-encoded videos on any H.264 player and a novel error concealment method have been developed. H.264 introduces powerful error resilience tools such as FMO to mitigate the effect of errors on the decoded videos. However, many commercial H.264 players cannot handle FMO. We have developed a new method to remove the FMO structure, thereby allowing the video to be decoded on any commercial player. We also present a model that accurately predicts the average overheads incurred by our scheme. At the same time, we developed a new error concealment method for I-frames to enhance video quality without relying on channel feedback. This method is shown to be superior to existing methods, including that from the JM reference software.",2009,0, 5029,Fault diagnosis of motor bearing using self-organizing maps,"This paper focuses on the application of self-organizing maps (SOM) in motor bearing fault diagnosis and presents an approach for motor rolling bearing fault diagnosis using SOM neural networks and time/frequency-domain bearing analysis. The SOM is a neural network algorithm which is based on unsupervised learning and combines the tasks of vector quantization and data projection. The objective of this paper is to detect and diagnose motor faults adaptively, with emphasis on faults occurring in the bearing part of the motor. The experiment results show that the SOM is an efficient tool for the fault visualization and diagnosis of motor bearings",2005,0, 5030,Fault Tolerant Parallel FFT Using Parallel Failure Recovery,"This paper introduces a new method based on parallel failure recovery for the fault tolerance issue of parallel programs.
In case a process fails, other surviving processes will compute the task of the failed one in parallel, so that the overhead for fault tolerance is reduced. The paper presents the design and implementation of the parallel FFT using the new approach, and works on finding an optimum number of processes that participate in parallel failure recovery. Finally, an experiment is done to show the better performance of the parallel failure recovery over that of checkpointing, and to show the effectiveness of our solution for the best number of processes participating in parallel failure recovery.",2009,0, 5031,Design of a fault-tolerant voter for safety related analog inputs,"This paper introduces a voting scheme for a safety-related analog input module to arbitrate between the results of redundant channels in a fault-tolerant system. The design approach is a distributed system using a sophisticated form of duplication. For each running process, there is a backup process running on a different CPU. The voter is responsible for checkpointing its state to duplex CPUs. In order to increase the dependability of safety-related controllers, the I/O modules use redundancy to reduce the risk associated with relying upon any single component operating flawlessly. The 1oo2D voting principle is commonly used in fault tolerant I/O modules to provide passive redundancy for masking runtime faults at the hardware and software levels. A dual architecture (1oo2D) which provides high safety integrity to a rating of SIL 3 is presented. The outputs from two identical channels operating in parallel with the same inputs are supplied to a voting unit that arbitrates between them to produce an overall output. Based on the hardware logic model and FPGA technique, the study adopts a hardware voter, which has advantages in speed and reliability. Finally, using ModelSim simulations, we verify the effectiveness of the proposed voter design in preserving the hazard-free property of the response of an analog input module.",2010,0, 5032,Fault-tolerant routing on Complete Josephus Cubes,"This paper introduces the Complete Josephus Cube, a fault-tolerant class of the recently proposed Josephus Cube and proposes a cost-effective, fault-tolerant routing strategy for the Complete Josephus Cube. For a Complete Josephus Cube of order r, the routing algorithm can tolerate up to (r+1) encountered component faults in its message path and generates routes that are both deadlock-free and livelock-free. The message is guaranteed to be optimally (respectively, sub-optimally) delivered within a maximum of r (respectively, 2r+1) hops. The message overhead incurred is only a single (r+2)-bit routing vector accompanying the message to be communicated",2001,0, 5033,FPGA-based fault-tolerant current controllers for induction machine,"This paper investigates an FPGA design of a fault-tolerant current-control (FTCC) algorithm for three-phase inverter-fed electric drives. The focus of the proposed method is on the identification of the faulty current sensor in AC machine drives, and the actual reconfiguration between two control-sampling times. The developed FTCC is simple, and can be readily incorporated into existing drive applications with relatively minor effort and cost. The FPGA design ensures a very fast execution time of the algorithm, which allows safe continuous operation of the faulty system.
Simulations and experimental results are presented to emphasize the effectiveness of the method.",2009,0, 5034,Predicting error rate for microprocessor-based digital architectures through C.E.U. (Code Emulating Upsets) injection,"This paper investigates an approach to study the effects of upsets on the operation of microprocessor-based digital architectures. The method is based on the injection of bit-flips, randomly in time and location, by using the capabilities of typical application boards. Experimental results, obtained on programs running on two different digital boards, built around an 80C51 microcontroller and a 320C50 Digital Signal Processor, illustrate the potential of this new strategy",2000,0, 5035,Fault diagnosis and isolation in aircraft gas turbine engines,"This paper formulates and validates a novel methodology for diagnosis and isolation of incipient faults in aircraft gas turbine engines. In addition to abrupt large faults, the proposed method is capable of detecting and isolating slowly evolving anomalies (i.e., deviations from the nominal behavior), based on analysis of time series data observed from the instrumentation in engine components. The fault diagnosis and isolation (FDI) algorithm is based upon Symbolic Dynamic Filtering (SDF) that has been recently reported in literature and relies on the principles of Symbolic Dynamics, Statistical Pattern Recognition and Information Theory. Validation of the concept is presented and a real-life software architecture is proposed based on the simulation model of a generic two-spool turbofan engine for diagnosis and isolation of incipient faults.",2008,0, 5036,Transient-based protection as a solution for earth-fault detection in unearthed and compensated neutral medium voltage distribution networks,"This paper highlights the earth fault problem in unearthed (isolated) and compensated neutral medium voltage distribution networks and the transient-based protection as a solution. The technique of transient-based analysis will be discussed considering the sampling window, frequency and data updating. The required transient simulation analysis and its validation will be presented. Many important practical investigations will also be discussed such as the required measured quantities, polarity criterion, magnitude criterion and suitable earth-fault indication tools. Finally, some of the developed techniques and algorithms from the previous work in this area will be summarized.",2010,0, 5037,A Fault-Tolerant Best-Effort Multicast Algorithm,"This paper integrates into this base algorithm the fault tolerance capability. We present a protection-based and a restoration-based reroute algorithm which is executed by every node in the network, and an algorithm to compute the degree of tolerance. The algorithms do not use any unicast routing protocol. We demonstrate and verify the operations of the algorithms using computer simulations.",2006,0, 5038,Fault Testing And Diagnosis System Of Armored Vehicle Based On Information Fusion Technology,"This paper introduces a fault testing and diagnosis system for bearings in armored vehicles. It can realize real-time testing and precise fault diagnosis of bearings on the vehicle chassis. In the hardware, it adopts a PCL1800 data acquisition card to acquire sample data and send it to the software.
In the software, we implement the fault testing and diagnosis system for armored vehicle bearings on a portable computer; its core diagnostic method is based on virtual instrument and multi-sensor information fusion technology. The virtual testing subsystem has various functions, including on-line data acquisition, signal displaying, analysis of different kinds of signals, and data management. It only monitors the status of the bearings and raises warning alarms. The further work of judging the type and the severity factor of the faults lies with the fault diagnosis subsystem, based on neural network information fusion technology.",2007,0, 5039,A Fault Tolerant and Multi-Paradigm Grid Architecture for Time Constrained Problems. Application to Option Pricing in Finance.,"This paper introduces a Grid software architecture offering fault tolerance, dynamic and aggressive load balancing and two complementary parallel programming paradigms. Experiments with financial applications on a real multi-site Grid assess this solution. This architecture has been designed to run industrial and financial applications, that are frequently time constrained and CPU consuming, feature both tightly and loosely coupled parallelism requiring a generic programming paradigm, and adopt a client-server business architecture.",2006,0, 5040,Analog circuits for thermistor linearization with Chebyshev-optimal linearity error,"This paper investigates analog linearization circuits for NTC thermistors, and the theoretical limits to linearity performance. In theory, by using multiple identical thermistors, the circuit may have a linearity error that is arbitrarily small and optimal in the Chebyshev norm. This result also applies to many other impedance-type sensors (e.g. resistive, capacitive). Experimental data from a three-thermistor circuit shows approximately 16 mK (160 ppm) linearity over a span of 100 °C, consistent with a theoretical limit of ±12 mK (120 ppm).",2007,0, 5041,Fault restoration and spare capacity allocation with QoS constraints for MPLS networks,This paper investigates distributed fault restoration techniques for multiprotocol label switching (MPLS) to automatically reroute label switched paths in the event of link or router failures while maintaining quality of service (QoS) requirements. Protocols for path and partial path restoration are evaluated. A backup route selection algorithm based on optimization of equivalent bandwidth is formulated and demonstrated for an example network,2000,0, 5042,A comparative study on current control methods for load balancing and power factor correction using STATCOM,"This paper investigates several current control methods for load balancing and power factor correction based on a distribution static compensator (STATCOM). Two different configurations are considered for the STATCOM: a three-leg inverter, and three single-phase inverters. It is assumed that the STATCOM is associated with a load that is remote from the supply. After a brief introduction, the control structure based on the PWM method and simulation results using PSCAD are presented. Next, the same system is simulated using the hysteresis control method. Both methods employ the instantaneous symmetrical components theory for load balancing and power factor correction.
At the end, a comparison between the two methods is presented.",2005,0, 5043,Modeling and bounding low cost inertial sensor errors,"This paper presents a methodology for developing models for the post-calibration residual errors of inexpensive inertial sensors in the class normally referred to as ""automotive"" or ""consumer"" grade. These sensors are increasingly being used in real-time vehicle navigation and guidance systems. However, manufacturer-supplied specification sheets for these sensors seldom provide enough detail to allow constructing the type of error models required for analyzing the performance or assessing the risk associated with navigation and guidance systems. A methodology for generating error models that are accurate and usable in navigation and guidance systems' sensor fusion and risk analysis algorithms is developed and validated. Use of the error models is demonstrated by a simulation in which the performance of an automotive navigation and guidance system is analyzed.",2008,0, 5044,Integrating reliability into the design of fault-tolerant power electronics systems,"This paper presents a methodology for integrating reliability considerations into the performance analysis carried out during the design of fault-tolerant power converters. The methodology relies on using a state-space representation of the power converter, based on averaging, similar to the ones used when analyzing linear time-invariant systems, and assumes an unknown-but-bounded uncertainty model for the converter uncontrolled inputs, such as load or variations in input voltage. The converter must be designed such that, for any uncontrolled input, the state variables remain within a region of the state space defined by performance requirements, e.g., output voltage tolerance or switch ratings. In the presence of component faults, and depending on the uncontrolled inputs, the converter may or may not meet performance requirements. Given the uncertain nature of these uncontrolled inputs, and for each particular fault, we introduce an analytical method to compute the probability that the performance requirements are met, which will define the reliability of the converter for each particular fault. By including these probabilities in a Markov reliability model, it is possible to obtain the overall converter reliability. The application of the methodology is illustrated with a case study of a fault-tolerant interleaved buck converter.",2008,0, 5045,An expert system for substation fault detection in thermoelectric generation plants,"This paper presents a methodology to develop a Fault Diagnosis Integrated System in a Thermoelectric Generation Plant (TGP). The proposed methodology is based on three methods: Discrete Fourier Transform (DFT), Fuzzy Logic and Artificial Neural Network (ANN). The test electric system was built in BPA's ATP/EMTP software, in conformance with the needs of a 711 MW, 230 kV Thermoelectric Generation Plant (TGP) located in southern Brazil.
Simulated test cases demonstrate the generalization capability of the protection system developed, now used in a Southern Brazilian utility.",2010,0, 5046,Assessment on Fault-tolerance Performance Using Neural Network Model Based on Ant Colony Optimization Algorithm for Fault Diagnosis in Distribution Systems of Electric Power Systems,"This paper presents a model based on a neural network optimized by the ant colony optimization algorithm (ACOA) for fault section diagnosis in distribution systems of electric power systems, and the simulation results show that it can effectively improve the fault-tolerance ability of fault section diagnosis. It has better fault-tolerance ability than the BP-NN model and the DGA-NN model. It must be pointed out that the degree of improvement is correlated with the spatial distribution of the samples; it is not an essential improvement, but rather a mining of the latent potential of the neural network.",2007,0, 5047,On a filter bank correction scheme for mitigating mismatch distortions in time-interleaved converter systems,"This paper presents a multirate filterbank architecture that mitigates the distortion caused by non-ideal samplers in a time-interleaved system. Closed-form fractional delay filters of finite impulse response (FIR) and infinite impulse response (IIR) type are employed to model the behaviour of the non-ideal converters. Based on a polyphase description of the system, the reconstruction filters are derived for the IIR case, which can be regarded as a generalization of the FIR design scheme. Furthermore, the achieved performance of filter banks for various fractional delay filters is compared. To investigate the numerical robustness, the impact of limited coefficient lengths on different figures-of-merit was explored. Finally, the reconstruction of a non-uniformly sampled sequence was used as an example to verify the reconstruction scheme.",2009,0, 5048,An adaptive precise one-end power transmission line fault locating algorithm based on multilayer complex adaptive artificial neural networks,"This paper presents a new algorithm for precise fault locating in power transmission lines based on a novel two-layer complex adaptive linear neural network (CADALINE). The first layer is used for a direct estimation of the symmetrical components. The second layer is used to estimate the fault location directly from the time-domain positive sequence components of only one end of the transmission line. The proposed algorithm is evaluated through extensive simulations in PSCAD/EMTDC and MATLAB software. The results presented in this paper clearly show that the CADALINE algorithm is highly robust and very accurate in performing the requested task of locating the fault.",2009,0, 5049,Total Sensitivity Index Calculation via Error Propagation Equation,"This paper presents a new and convenient method to calculate the total sensitivity indices defined by variance-based sensitivity analysis. By decomposing the output variance using error propagation equations, this method can transform the ""double-loop"" sampling procedure into a ""single-loop"" one and significantly reduce the computation cost of the analysis. In contrast with the Sobol and Fourier amplitude sensitivity test (FAST) methods, which are limited to non-correlated variables, the new approach is suitable for correlated input variables.
An application in a semiconductor assembly and test manufacturing (ATM) factory indicates that this approach performs well for both an additive model and a simple non-additive mathematical model.",2007,0, 5050,Fault location algorithms for power transmission lines based on Monte-Carlo method,"This paper presents a new approach for calculation of fault location on high voltage transmission lines. The proposed fault location algorithm utilizes statistical information about the equivalent impedances of the system at the unmonitored end of the transmission line. Knowledge about the distribution laws of these values, or their numerical characteristics, which are more accessible for practical use, results in more accurate fault location for lines with grounded neutral, especially in the case of distant short circuits through large transient resistance. The proposed algorithm is based on modeling of the faulted line and the Monte-Carlo method. The algorithm calculates not only the expected value of the distance to the fault, but also another important additional characteristic for the fault location, namely, the length of the line segment where the short circuit may have occurred",2001,0, 5051,A deterministic approach for hardware fault injection in asynchronous QDI logic,"This paper presents a new approach for hardware-based fault injection in Quasi Delay Insensitive (QDI) asynchronous circuits. Configurable saboteurs are placed at points of interest in the circuit and allow the injection of various types of faults on an arbitrary number of signals. These saboteurs redefine not only the logic value of a faulty signal but also the exact moment of the fault occurrence. In asynchronous logic, signal events rather than time are used to trigger on a circuit's state. Our concept allows precise control of the order of concurrent signal events. It can be shown that the fault sensitivity highly depends on that event ordering. Thereby a deterministic and reproducible investigation of QDI circuits in the presence of transient and permanent faults in hardware is obtained. The work is evaluated by fault injection experiments on different circuits.",2010,0, 5052,Bit-plane-wise unequal error protection for Internet video applications,"This paper presents a new bit-plane-wise unequal error protection algorithm for Internet video applications. The proposed algorithm protects a compressed embedded bitstream generated by a 3-D SPIHT algorithm by assigning unequal forward error correction (FEC) codes to each bit-plane. The proposed algorithm reduces the amount of side information needed to send the size of each code to the decoder by limiting the number of quality levels to the number of bit-planes to be sent while providing a graceful degradation of picture quality as packet losses increase. To get additional error-resilience at high packet loss rates, we extend our algorithm to multiple-substream unequal error protection. Simulation results show that the proposed algorithm is simple, fast and robust in hostile network conditions and therefore can provide reasonable picture quality for Internet video applications under varying network conditions.",2002,0, 5053,A traveling wave based single-ended fault location algorithm using DWT for overhead lines combined with underground cables,"This paper presents a single-ended traveling wave based fault location method for power transmission systems where overhead lines are combined with underground cables. Bewley diagrams are used to determine the traveling wave patterns.
The proposed method utilizes the Discrete Wavelet Transform (DWT) to extract the transient information from the recorded voltage signals. The squares of the wavelet transform coefficients (WTC²) are calculated in order to determine the energy of the signal, which is used to identify the faulted section (underground cable or overhead line) and subsequently to locate the fault. The transient voltages for different types of faults and locations along the overhead section as well as the underground cable section are obtained through EMTP simulations. The MATLAB Wavelet Toolbox is used to process the simulated transients and apply the proposed method. The performance of the method is tested on various scenarios.",2010,0, 5054,Automatically translating dynamic fault trees into dynamic Bayesian networks by means of a software tool,"This paper presents a software tool allowing the automatic analysis of a dynamic fault tree (DFT) exploiting its conversion to a dynamic Bayesian network (DBN). First, the architecture of the tool is described, together with the rules implemented in the tool to convert dynamic gates into DBNs. Then, the tool is tested on a case-study system: its DFT model and the corresponding DBN are provided and analyzed by means of the tool. The obtained unreliability results are compared with those returned by other tools, in order to verify their correctness.",2006,0, 5055,Software-based rerouting for fault-tolerant pipelined communication,"This paper presents a software-based approach to fault-tolerant routing in networks using wormhole or virtual cut-through switching. When a message encounters a faulty output link, it is removed from the network by the local router and delivered to the messaging layer of the local node's operating system. The message passing software can reroute this message, possibly along nonminimal paths. Alternatively, the message may be addressed to an intermediate node, which will forward the message to the destination. A message may encounter multiple faults and pass through multiple intermediate nodes. The proposed techniques are applicable to both obliviously and adaptively routed networks. The techniques are specifically targeted toward commercial multiprocessors where the mean time to repair (MTTR) is much smaller than the mean time between router failures (MTBF), i.e., it is sufficient to tolerate a maximum of three failures. This paper presents requirements for buffer management, deadlock freedom, and livelock freedom. Simulation results are presented to evaluate the degradation in latency and throughput as a function of the number and distribution of faults. There are several advantages of such an approach. Router designs are minimally impacted, and thus remain compact and fast. Only messages that encounter faulty components are affected, while the machine is ensured of continued operation until the faulty components can be replaced. The technique leverages existing network technology, and the concepts are portable across evolving switch and router designs. Therefore, we feel that the technique is a good candidate for incorporation into the next generation of multiprocessor networks",2000,0, 5056,Application of Fourier Descriptors and Pearson Correlation for fault detection in Sucker Rod Pumping System,"This paper presents a study of a new proposal of down-hole dynamometer card patterns for the automatic fault diagnosis of the Sucker Rod Pumping System.
The accurate and quick identification of down-hole problems is essential to reducing risk and improving production in the petroleum industry. The main idea is card recognition using digital image processing (Fourier Descriptors) and statistical (Pearson Correlation) tools. Successful results were obtained when this proposal was applied to simulated cards and to real PETROBRAS cards.",2009,0, 5057,Fault tolerance in a TWC/AWG based optical burst switching node architecture,"This paper presents a study of dependability aspects of a switch architecture for optical burst switching based on tuneable lasers and array waveguide gratings. For this architecture, we carry out analyses of failure modes, alternative fault tolerance schemes and the relation between redundancy dimensioning and the dependability properties one might expect.",2005,0, 5058,An Investigation of Fault Tolerance Behavior of 32-Bit DLX Processor,"This paper presents a study of the fault tolerance behavior of a 32-bit DLX processor. A simulation-based method has been applied to analyze the fault tolerance characteristics of this processor. The experiment is based on the injection of 14000 faults at 70 points of the components most frequently used in the VHDL model. The experimental results have been considered from different aspects. Up to 55% of the injected faults cause system failure, and approximately 42% of them are masked before turning into errors. Less than 3% of the injected faults remain as latent errors. The average fault latency is reported to be between 47 and 59 clock cycles, depending on the workload. Among the observed components, the ALU is the most sensitive, with an approximate failure rate of 50%.",2009,0, 5059,PMSM Bearing Fault Detection by means of Fourier and Wavelet transform,"This paper presents a study of permanent magnet synchronous machines (PMSM) with a bearing fault using a two-dimensional (2-D) finite element analysis (FEA). The fast Fourier and wavelet transforms were used for fault detection of bearing damage under stationary and non-stationary working conditions. Simulations were carried out and compared with experimental results.",2007,0, 5060,A Synchronization Approach for the Minimization of Contouring Errors of CNC Machine Tools,"This paper presents a synchronization control approach for the minimization of contouring errors of multi-axis CNC machine tools. The contouring errors are represented by the position synchronization errors, which are defined as differential position errors between each axis and its adjacent ones. Using the cross-coupling concept, a decentralized tracking controller is developed with feedback of both position and synchronization errors, formed as a combination of feedforward, feedback, and saturation control. It is proven that this controller can asymptotically stabilize both position and synchronization errors to zero. The proposed controller does not require significant use of the system dynamic models. Experiments performed on a multi-axis machine tool demonstrate improved performance, especially in contouring error minimization.",2009,0, 5061,A Systematic Sensor-Placement Strategy for Enhanced Defect Detection in Rolling Bearings,"This paper presents a systematic approach to determining optimal sensor locations with the goal of improving data-acquisition quality and effectiveness in machine condition monitoring.
The presented method ranks an initial set of candidate sensor locations based on their respective effective-independence (EfI) values, where EfI is a measure of the contribution of each location to the measurement data matrix. Through an iterative procedure, locations having relatively low EfI values are progressively eliminated from the candidate set. The remaining locations with high EfI values have been shown to be more effective in providing sensing coverage of the machine component being monitored. The method was applied to selecting locations for four accelerometers to monitor a rolling element bearing with a localized defect. Experimental tests have confirmed the effectiveness of the method in improving sensing quality",2006,0, 5062,Automated Derivation of Application-aware Error Detectors using Static Analysis,"This paper presents a technique to derive and implement error detectors to protect an application from data errors. The error detectors are derived automatically using compiler-based static analysis from the backward program slice of critical variables in the program. Critical variables are defined as those that are highly sensitive to errors, and deriving error detectors for these variables provides high coverage for errors in any data value used in the program. The error detectors take the form of checking expressions and are optimized for each control flow path followed at runtime. The derived detectors are implemented using a combination of hardware and software. Experiments show that the derived detectors incur low performance overheads while achieving high detection coverage for errors that impact the application.",2007,0, 5063,Nonlinear phase correction with an extended statistical algorithm,"This paper presents a new magnetic resonance imaging (MRI) phase correction method. The linear phase correction method using autocorrelation proposed by Ahn and Cho (AC method) is extended to handle nonlinear terms, which are often important for polynomial expansion of phase variation in MRI. The polynomial coefficients are statistically determined from a cascade series of n-pixel-shift rotational differential fields (RDFs). The n-pixel-shift RDF represents local vector rotations of a complex field relative to itself after being shifted by n pixels. We have found that increasing the shift enhances the signal significantly and extends the AC method to handle higher order nonlinear phase error terms. The n-pixel-shift RDF can also be applied to improve other methods such as the weighted least squares phase unwrapping method proposed by Liang. The feasibility of the method has been demonstrated with two-dimensional (2-D) in vivo inversion-recovery MRI data.",2005,0, 5064,A Novel Test Application Scheme for High Transition Fault Coverage and Low Test Cost,"This paper presents a new method for improving transition fault coverage in hybrid scan testing. It is based on a novel test application scheme designed to break the functional dependence of broadside testing. The new technique analyzes the automatic test pattern generation conflicts in broadside test generation and skewed-load test generation, and tries to control the flip-flops with the most influence on fault coverage. The conflict-driven selection method selects some flip-flops that work in the enhanced scan mode or skewed-load scan mode, and the conflict-driven reordering method distributes the selected flip-flops into different chains.
In the multiple scan chain architecture, to avoid too many scan-in pins, some chains are driven by the same scan-in pin to construct a tree-based architecture. Based on this architecture, the new test application scheme allows some flip-flops to work in enhanced scan or skewed-load mode, while most others work in the traditional broadside scan mode. With the efficient conflict-driven selection and reordering schemes, fault coverage is improved greatly, which can also reduce test application time and test data volume. Experimental results show that fault coverage based on the proposed method is comparable to that of enhanced scan.",2010,0, 5065,Sensor fault diagnosis based on a new method of feature extraction in time-series,"This paper presents a new method for choosing the key points of monotone sequences, based on the basic theory of time series segmentation: the key points are selected from monotone sequences by calculating their curvatures. With this method, time series can be well fitted by linear segments. The method is also used for sensor fault diagnosis: the key point sequence with the maximum difference can be obtained by comparisons among the time series of different sensors, and thus the faulty sensor can be determined.",2010,0, 5066,A glass bottle defect detection system without touching,"This paper presents a new remote detection system for defects in the upper portion of glass bottles in a production line. The system uses eight video cameras installed beside the production line to capture the images of the mouth, lip and neck of each bottle. Then, two computers collect and process these images to detect any defects. The detection process includes two parts: one is to detect defects occurring in the mouth and lip of a glass bottle using images captured from the top of a bottle. The other is for the defects occurring in the neck and shoulder of a bottle, by processing the images shot of the upper side of the bottle. The processing includes image orientation, preprocessing, and image recognition. The system sends a control signal to the mainframe once it recognizes a defective bottle, which can then be ejected from the production line. Test results are given to show the system's recognition rate for all defects occurring in the upper part of a glass bottle.",2002,0, 5067,A low cost fault tolerant packet routing for parallel computers,"This paper presents a new switching mechanism to tolerate arbitrary faults in interconnection networks with a negligible implementation cost. Although our routing technique can be applied to any regular or irregular topology, in this paper we focus on its application to k-ary n-cube networks when managing both synthetic and real traffic workloads. Our mechanism is effective regardless of the number of faults and their configuration. When the network is working without any fault, no overhead is added to the original routing scheme. In the presence of a low number of faults, the network sustains a performance close to that observed under fault-free conditions. Finally, when the number of faults increases, the system exhibits a graceful performance degradation.",2003,0, 5068,Positional protection of transmission line using fault generated high frequency transient signals,"This paper presents a new technique for high-speed protection of transmission lines, the positional protection technique. The technique uses a fault transient detector unit at the relaying point to capture fault-generated high frequency transient signals contained in the primary currents.
The decision to trip is based on the relative arrival times of these high frequency components as they propagate through the system. Extensive simulation studies of the technique were carried out to examine the response to different power system and fault conditions. Results show that the scheme is insensitive to fault type, fault resistance, fault inception angle and system source configuration, and that it is able to offer both very high accuracy and speed in fault detection",2000,0, 5069,A New High Speed FPGA based Travelling Wave Fault Recorder for MV Distribution Systems,"This paper presents a newly developed, FPGA-based, travelling wave fault recorder capable of simultaneously recording up to six input channels at 40 Mega samples per second (MSPS) at a 14-bit resolution. The fault recorder integrates a GPS receiver to provide accurate time tagging of recorded transients, allowing both double-ended and single-ended schemes of fault location to be applied.",2008,0, 5070,A novel method to measure and correct the odometry errors in mobile robots,"This paper presents a novel yet simple method for odometry error measurement and compensation. For a typical differential drive mobile robot, the three dominant systematic error sources (unequal wheels' diameters, imprecise average wheel diameter and uncertainty about the effective wheelbase) are investigated. It is shown that the mobile robot, calibrated by using the proposed algorithm, has noticeably improved odometric accuracy. Calibration coefficients are obtained by accurate measurements of the wheels' diameter ratio and the effective wheelbase. These coefficients are then applied to the mobile robot through the driver's controller software.",2008,0, 5071,Diagnosis and Protection of Stator Faults in Synchronous Generators Using Wavelet Transform,"This paper presents a novel application of wavelet transform for the diagnosis and protection of stator faults in synchronous generators. The instantaneous powers of the wavelet packet transformed coefficients (dd2) of line voltages and currents, decomposed up to the second level of resolution of the wavelet packet tree using a mother wavelet, are used to detect and differentiate different faulted conditions from normal (unfaulted) conditions. The performance of this newly devised protection technique is evaluated by simulation results as well as by experimental results. The complete protection scheme incorporating the wavelet-power-based detection algorithm was successfully implemented in real time using the DS1102 digital signal processor board for a laboratory 1.6 kW three-phase synchronous generator. The stator phase unbalance, single line to ground (L-G) fault, line-to-line (L-L) fault and turn-to-turn fault are investigated in order to evaluate the performance of the protection technique. In order to prove the superiority of the proposed protection technique over the conventional techniques, a comparison between the proposed and the conventional discrete Fourier transform (DFT) based diagnosis scheme is made under different dynamic operating conditions.",2007,0, 5072,"Multi-viewpoint silhouette extraction with 3D context-aware error detection, correction, and shadow suppression","This paper presents a novel approach for silhouette extraction from multi-viewpoint images. The main contribution of this paper is a new algorithm for 1) 3D context-aware error detection and correction of 2D multi-viewpoint silhouette extraction and 2) 3D context-aware classification of cast-shadow regions.
Experiments demonstrate advantages over previous approaches.",2007,0, 5073,Distributed recovery block based fault-tolerant routing in hypercube networks,"This paper presents a fault-tolerant routing algorithm that employs a modified distributed recovery block (DRB) approach. The section of a parallel or distributed system spanning between the source and destination nodes is partitioned into a series of overlapping DRB groups. Each DRB group consists of three nodes: a current node and two successor nodes. The primary successor executes the primary try while the alternate successor executes an alternate try. The primary successor node delivers the message, whereas the alternate is ready to take over if the primary fails. The successful successor in an active DRB group becomes the current node of the next DRB group on the routing path. A prototype version of the routing method is implemented for a hypercube topology and its performance is compared with adaptive routing techniques based on backtracking.",2002,0, 5074,Fault-Tolerant Scheduling Framework for MedioGRID System,"This paper presents a fault-tolerant scheduling framework that is mapped on the architecture of MedioGRID, a real-time satellite image processing system operating in a grid environment. Our study addresses the problem of fault-tolerant task execution. In MedioGRID, scheduling various computationally intensive and data intensive applications requires information from satellite images. The proposed solution provides a fault-tolerant mechanism for mapping the image processing applications onto the available resources in MedioGRID clusters, with uniform access. Various applications require simultaneous processing. Our approach demonstrates a very good behavior in the case of scheduling and executing groups of applications, while also achieving an optimized utilization of the resources in the system. We also survey the features of the Globus framework and the way these techniques can be applied to the MedioGRID architecture, and highlight some solutions, both simple and parallel, to enhance the fault tolerance at the MedioGRID application level.",2007,0, 5075,Single-stage Flyback converter for constant current output LED driver with power factor correction,"This paper presents an LED driver circuit based on a single-stage Flyback converter with power factor correction. Voltage following control and nonlinear carrier control are applied to a constant-current-mode Flyback converter combined with a constant-current driver technique. By designing the outer loop to stabilize the output voltage and the inner loop to shape the input current waveform, the system achieves unity power factor. The circuit does not need the expensive multiplier required by classical multiplier control. To verify the performance, the proposed circuit was confirmed by simulation and experiment. From the results, the proposed system shows a high power factor of 0.96 and constant current output.",2009,0, 5076,Identifiability of geometric parameters of 6-DOF PKM systems using a minimum set of pose error data,"This paper presents a mathematical proof for the identifiability of geometric parameters of 6-DOF parallel kinematic machines (PKM). Rank deficiency of the identification matrix is justified if the end-effector undergoes all possible degrees of freedom but only a set of position errors normal to a plane is measured.
An approach is proposed to tackle this problem, which enables the full set of parameter errors to be identified by using only a minimum set of pose errors: 1) the ""flatness"" of a fictitious plane generated by the tip of the endpoint sensor; 2) the ""squareness"" of two orthogonal axes; and 3) the orientation of the end-effector at the initial configuration. Consequently, the burden arising from orientation error measurement may be dramatically reduced. The proposed method is so general that it can also be used to handle the parameter identification problems of PKMs with fewer than six degrees of freedom.",2003,0, 5077,Fault diagnosis of analog circuits with tolerances by using RBF and BP neural networks,"This paper presents a method for analog circuit fault diagnosis by using neural networks. This method exploits a DC approach for constructing a dictionary in fault diagnosis using the neural network's classification capability. Also, Radial Basis Function (RBF) and backward error propagation (BEP) networks are considered and compared for analog fault diagnosis. The primary focus of the paper is to provide robust diagnosis using a mechanism to deal with the problem of component tolerance and reduce testing time. Simulation results show that the radial basis function network with reasonable dimension has double precision in fault classification but its classification is local, while the backward error propagation network with reasonable dimension has single precision in fault classification but its classification is global.",2002,0, 5078,A method for dead reckoning parameter correction in pedestrian navigation system,"This paper presents a method for correcting dead reckoning parameters, which are heading and step size, for a pedestrian navigation system. In this method, the compass bias error and the step size error can be estimated during the period that the Global Positioning System (GPS) signal is available. The errors are used for correcting those parameters to improve the accuracy of position determination using only the dead reckoning system when the GPS signal is not available. The results show that the parameters can be estimated with reasonable accuracy. Moreover, the method also helps to increase the positioning accuracy when the GPS signal is available.",2003,0, 5079,Rolling element bearing fault classification using soft computing techniques,"This paper presents a method, based on classification techniques, for automatically detecting and diagnosing various types of defects which may occur on a rolling element bearing. In the experiments we have used vibration signals coming from a mechanical device including more than ten rolling element bearings monitored by means of four accelerometers: the signals have been collected both with all bearings faultless and after substituting one faultless bearing with an artificially damaged one; four different defects have been taken into account. The proposed technique considers all the aspects of classification: feature selection, different base classifiers (two statistical classifiers, namely LDC and QDC, and MLP neural networks) and classifier fusion.
Experiments, performed on the vibration signals represented in the frequency domain, have shown that the proposed classification method is highly sensitive to different types of defects and to different severity degrees of the defects.",2009,0, 5080,Generation of classification rules using artificial immune system for fault diagnosis,"This paper presents an artificial immune system based generation of classification rules for fault diagnosis of induction motors. To implement the proposed method effectively, feature extraction and fuzzification processes are used for choosing fault-related attributes from motor current signals. The idea behind the method is mainly based on concepts from both data mining and artificial immune systems. An association rule set is generated using clonal selection, based on the confidence and support measures of each rule. Afterwards, an efficiency evaluation method is utilized to construct a memory set of classification rules. Each rule is evaluated based on three measures, sensitivity, simplicity, and coverage, to select an optimal rule for classification. The proposed approach was experimentally implemented on a 0.37 kW induction motor and its performance verified under various working conditions of the induction motors. The performance results have shown that a high accuracy rate has been achieved.",2010,0, 5081,Fault location for distribution systems based on decision rules and wavelet transform,"This paper presents an efficient fault location scheme for distribution systems. The proposed method was developed by making use of the available measurements in the substation. The first step identifies the number of reclosings from the information about the current wavelet coefficients. In the second one, the fault location is provided by a set of decision rules, which are obtained from the voltage and current steady-state system simulations for different protection configurations. The diagnosis consists of determining the protection zone where the disturbance happened. The methodology's performance is evaluated using data obtained from a model of a real distribution feeder.",2005,0, 5082,CEDA: control-flow error detection through assertions,"This paper presents an efficient software technique, control flow error detection through assertions (CEDA), for online detection of control flow errors. Extra instructions are automatically embedded into the program at compile time to continuously update run-time signatures and to compare them against pre-assigned values. The novel method of computing run-time signatures results in a huge reduction in the performance overhead, as well as the ability to deal with complex programs and the capability to detect subtle control flow errors. The widely used C compiler, GCC, has been modified to implement CEDA, and the SPEC benchmark programs were used as the target to compare with earlier techniques. Fault injection experiments were used to evaluate the fault detection capabilities. Based on a new comparison metric, method efficiency, which takes into account both error coverage and performance overhead, CEDA is found to be much better than previously proposed methods",2006,0, 5083,Experimental Evaluation of Three Concurrent Error Detection Mechanisms,"This paper presents an experimental evaluation of the effectiveness of three hardware-based control flow checking mechanisms, using the software-implemented fault injection (SWIFI) method.
The fault detection technique uses reconfigurable off-the-shelf FPGAs to concurrently check the execution flow of the target program. The technique assigns signatures to the target program at compile time and verifies the signatures using an FPGA as a watchdog processor to detect possible violations caused by transient faults. A total of 3000 faults were injected into the experimental embedded system, which is based on an 8051 microcontroller, to measure the error detection coverage. The experimental results show that these mechanisms detect about 90% of the transient errors injected by the software-implemented method.",2006,0, 5084,Software Implemented Detection and Recovery of Soft Errors in a Brake-by-Wire System,"This paper presents an experimental study of the impact of soft errors in a prototype brake-by-wire system. To emulate the effects of soft errors, we injected single bit-flips into ""live"" data in the architected state of an MPC565 microcontroller. We first describe the results of an error injection campaign with a brake-by-wire controller in which hardware exceptions are the only means for error detection. In this campaign, 30% of the injected errors passed undetected and caused the controller to produce erroneous outputs to the brake actuator. Of these, 15% resulted in critical failures. An analysis showed that a majority of the critical failures were caused by errors affecting either the stack pointer or the controller's integrator. Hence, we designed two software implemented error handling mechanisms that protect the stack pointer and the integrator state, inducing an overhead of 4% in data and 8% in speed. A second error injection campaign showed that these mechanisms reduced the proportion of critical failures by one order of magnitude, from 4.6% to 0.4% of the injected soft errors.",2008,0, 5085,A Novel SBST Generation Technique for Path-Delay Faults in Microprocessors Exploiting Gate- and RT-Level Descriptions,"This paper presents an innovative approach for the generation of functional programs to test path-delay faults within microprocessors. The proposed method takes advantage of both the gate- and RT-level descriptions of the processor. The former is used to build binary decision diagrams (BDDs) for deriving fault excitation conditions; the latter is exploited for the automatic generation of test programs able to excite and propagate fault effects, based on an evolutionary algorithm and fast RTL simulation. Experimental results on a simple microcontroller show that the proposed methodology is able to generate suitable test sets in reduced times.",2008,0, 5086,Fault ride through techniques of DFIG-based wind energy systems,"This paper presents a novel control technique for fault ride through of a DFIG-based wind energy system. Both unbalanced faults and balanced faults are considered. For unbalanced faults, a component close to 120 Hz will be present in the rotor currents. For balanced faults, the rotor currents will experience a surge. The control technique presented in this paper is effective in mitigating the high frequency harmonic components and reducing the surge in the rotor currents. The novelty of the control technique is its simplicity and effectiveness: (i) only one reference transformation (abc/αβ) is required and (ii) harmonic filters are not required.
Simulations performed in Matlab/Simulink are presented to illustrate the effectiveness of the proposed control strategy.",2009,0, 5087,A Novel Multiphase Fault Tolerant Permanent Magnet Motor Drive for Fuel cell Powered Vehicles,"This paper presents a novel five-phase fault tolerant interior permanent magnet motor drive with higher performance and reliability for use in fuel cell powered vehicles. A new machine design along with an efficient control strategy is developed for fault tolerant operation of the electric drive without severely compromising the drive performance. Fault tolerance is achieved by employing a five-phase fractional-slot concentrated-winding IPM motor drive, with each phase electrically, magnetically, thermally and physically independent of all others. The proposed electric drive system offers high torque density, negligible cogging torque, and about ±0.5% torque ripple. Power converter requirements are discussed and control strategies to minimize the impact of a machine or converter fault are developed. Besides, all the requirements of fault tolerant operation, including high phase inductance and negligible mutual coupling between phases, are met. Analytical and finite element analyses and comparison case studies are presented.",2007,0, 5088,An efficient quasi-active power factor correction scheme,"This paper presents a novel input current shaper based on a quasi-active power factor correction (PFC) scheme. The power factor is improved by adding two auxiliary windings coupled to the transformer of a cascade dc/dc flyback converter. The auxiliary windings are placed between the input rectifier and the low-frequency filter capacitor to serve as a magnetic switch to drive an input inductor. Since the dc/dc converter is operated at high switching frequency, the auxiliary windings produce a high frequency pulsating source such that the input current conduction angle is significantly lengthened and the input current harmonics are reduced. It eliminates the use of an active switch and control circuit for PFC. The input inductor can be designed to operate in discontinuous current mode (DCM) with lower harmonic content or continuous conduction mode (CCM) with higher efficiency. However, a trade-off between efficiency and harmonic content must be made. Operating principles, analysis, simulation and practical results of the proposed method are presented.",2009,0, 5089,Morphological undecimated wavelet decomposition for fault location on power transmission lines,"This paper presents a novel morphological undecimated wavelet (MUDW) decomposition scheme for fault location on power transmission lines. The MUDW scheme is developed based on morphological wavelet (MW) theory for both the extraction of transient features and noise reduction in signal processing. The analysis operators and the synthesis operator of the MUDW scheme strictly satisfy the pyramid condition. In this paper, the MUDW scheme is used to extract features from noise-corrupted fault-generated transient voltage and/or current signals of power transmission lines. The efficiency of the MUDW scheme for noise reduction and the extraction of sudden changes in the transient signals is evaluated in simulation studies.
The simulation results show that the fault location can be accurately detected in noisy environments",2006,0, 5090,Quasi-active power factor correction using transformer-assisted driving voltage,"This paper presents a novel, simple input current shaper based on a quasi-active power factor correction (PFC) scheme. In this method, high power factor and low harmonic content are achieved by providing an auxiliary PFC circuit with a driving voltage derived from a third winding of the transformer of a DC-DC flyback converter. It eliminates the use of an active switch and control circuit for PFC. Operating principles, analysis, and simulation results of the proposed method are presented.",2009,0, 5091,Design of a low voltage cable fault detector,"This paper presents the design and construction of a low voltage cable fault location detector. The principle of the design is based on the reflection of waves. A pulse is sent into the defective cable and is reflected back. The travel interval of the waves is identified in order to determine the fault location and fault type. The prototype consists of a pulse generator, a wave detection circuit, a counter circuit, a control and processing part, and a display unit based on a microcontroller. All are assembled in a small, compact package, and the portable detector is suitable for outdoor use. In addition, it is safer than the conventional high voltage impulse method. It has been tested and found that the error of the fault location is within 5%",2000,0, 5092,System-on-chip oriented fault-tolerant sequential systems implementation methodology,"This paper presents a design methodology for fault tolerant sequential systems implemented on a System on Chip (SoC). In the paper, as an example, a complex fault tolerant finite state machine has been mapped onto the FPGA contained in the SoC. Fault identification is obtained by using a checker permitting the identification of classes of faults. When a fault is detected, an interrupt for the microcontroller is generated and the interrupt handling routine partially reprograms the FPGA to override the part of memory configuring the faulty block. The architectures of SoCs that have recently appeared on the market are characterized by a very efficient interaction between the microcontroller and the FPGA, allowing a very efficient implementation of the fault detection and fault recovery strategy. A test bed of the proposed methodology has been implemented on the recently presented Atmel AT94K FPSLIC (Field Programmable System Level Integrated Circuits)",2001,0, 5093,The limitations of software signature and basic block sizing in soft error fault coverage,"This paper presents a detailed analysis of the efficiency of software-only techniques to mitigate SEU and SET in microprocessors. A set of well-known rules is presented and implemented automatically to transform an unprotected program into a hardened one. SEU and SET are injected in all sensitive areas of a MIPS-based microprocessor architecture. The efficiency of each rule and of combinations of them is tested. Experimental results show the limitations of the control-flow techniques in detecting the majority of SEU and SET faults, even when different basic block sizes are evaluated. A further analysis of the undetected faults with control flow effects is performed and five causes are explained.
The conclusions can guide designers in developing more efficient techniques to detect these types of faults.",2010,0, 5094,Combining learning methods and time-scale analysis for defect diagnosis of a tramway guiding system,"This paper presents a diagnosis system for detecting tramway roller defects. First, the continuous wavelet transform is applied to vibration signals measured by specific accelerometers. Then, Singular Value Decomposition (SVD) is applied to the time-scale representations to extract a set of singular values as classification features. The resulting multi-class classification problem is decomposed into several 2-class sub-problems. The predicted probabilities are coupled using a pairwise coupling method. Empirical results demonstrate the efficiency and robustness of the overall diagnosis system on measurement data.",2008,0, 5095,Wireless Sensor Network Modeling Using Modified Recurrent Neural Networks: Application to Fault Detection,"This paper presents a dynamic model of wireless sensor networks (WSNs) and its application to sensor node fault detection. Recurrent neural networks (RNNs) are used to model a sensor node, its dynamics, and its interconnections with other sensor network nodes. The modeling approach is used for sensor node identification and fault detection. The input to the neural network is chosen to include delayed output samples of the modeled sensor node and the current and previous output samples of neighboring sensors. The model is based on a new structure of backpropagation-type neural network. The input to the neural network and the topology of the network are based on a general nonlinear dynamic sensor model. A simulation example has demonstrated the effectiveness of the proposed scheme.",2007,0, 5096,A platooning controller robust to vehicular faults,"This paper presents a platooning controller for a four-wheel-drive, four-wheel-steering vehicle to follow another. The controller is based on full-state tracking theory and utilizes a vehicular model that allows it to continue to operate when faults are detected in its steering systems or driving motors, which are then disabled accordingly. The unified controller is also able to track and follow the target whether it is moving forward in front of the vehicle or backward behind it, making the real-time implementation of different tracking modes simple. Tracking stability is secured by the proper selection of design parameters. Simulations show the proposed control scheme works properly even in the presence of faults in several different parts.",2004,0, 5097,Network-Integrated DSP-based Adaptive High Impedance Ground Fault Feeder Protection,"This paper presents a practical adaptive strategy for live substation feeder protections with new implementations using state-of-the-art digital signal processing (DSP) technology and modern computer networking technology. This network-integrated DSP-based feeder protection adapts its trip settings to provide improved selectivity and speed of fault detection for varying power system configurations and loadings. This protection accurately senses and de-energizes downed conductors that often could not be detected by conventional non-adaptive feeder protections. Several people are killed every year by contact with live downed conductors. This adaptive protection can significantly reduce this potentially fatal condition. This paper presents the design of this adaptive feeder protection. Several power system faults were studied under varying load and system conditions.
Results were analyzed to verify the design, to determine the improvement over conventional static protections, and to verify correct protection operations. Field tests were conducted in a substation in Canada to validate this adaptive feeder protection.",2007,0, 5098,Fault prediction in electrical valves using temporal Kohonen maps,"This paper presents a proactive maintenance scheme for the prediction of faults in electrical valves. In our case study, these valves are used for controlling the oil flow in a distribution network. A system implements temporal self-organizing maps for the prediction of faults. These faults lead to deviations in torque, in the valve end position, or in the opening/closing time. For fault prediction, one map is trained using data from a mathematical model devised for the electrical valve. The training is performed by fault injection based on three parameter deviations over this same mathematical model. The map learns the energies of the torque and the position, which are computed using the wavelet packet transform. Once the map is trained, the system is ready for on-line monitoring of the valve. During the on-line testing phase, the system computes the Euclidean distance and the activation of data series. The biggest activation determines the winner neuron of the map for one data series. The obtained results demonstrate a new solution for predicting the behavior of these valves.",2010,0, 5099,Robust Detection of Faults in Frequency Control Loops,"This paper presents a robust approach to real-time detection of faults in the load-frequency control loop of interconnected power systems. The detection of faults takes place under different operating conditions, in the presence of modeling uncertainties, unknown changes in the load demand, and other external disturbances, such as plant and sensor noise. Although the approach is applicable to N-area systems, a two-area interconnected power system example is considered for simplicity",2007,0, 5100,ADAPTATION - Algorithms to Adaptive Fault Monitoring and their implementation on CORBA,"This paper presents ADAPTATION - Algorithms to Adaptive Fault Monitoring for asynchronous distributed systems and their implementation on CORBA. Our algorithms vary the timeouts based on a recent history of the last elapsed times of the monitoring messages. The aim of the proposed algorithms is to provide a better response time to crashes and a minimal discrepancy between a suspicion due to network overload and one due to a real process crash. The proposed approach extends the Fault Tolerant CORBA OMG specification with the push model and the definition of pull and push ADAPTATION fault monitors. Some ADAPTATION experiments on ACE+TAO were made to observe their behavior under changing network workloads",2001,0, 5101,Extrapolative Model of DGPS Corrections using a Multilayered Neural Network Based on the Extended Kalman Filter,"This paper presents an accurate DGPS land vehicle navigation system using a multilayered neural network (NN) based on the extended Kalman filter (EKF). The network setup is developed based on a mathematical model to avoid excessive training. The proposed method uses an EKF training rule, which achieves the optimal training criterion. The NN predicts the DGPS corrections for accurate positioning. The proposed method is suitable for DGPS systems sampled at different rates. The experimental results on collected real data demonstrate the suitability of this method in developing an accurate DGPS land vehicle navigation method.
The experiments show that the total prediction RMS error is less than 1.65 m and 0.67 m, before and after SA, respectively. Also, tests with real data demonstrate that the prediction accuracy is better than 1.1 m for 10-second prediction and 1.9 m for 30-second prediction, which can maintain the vehicle navigation within the required accuracy for a period of 30 seconds",2006,0, 5102,Torque ripple analysis of a 42V fault tolerant six-phase permanent magnet synchronous machine,"This paper presents an analysis of the torque ripple of a six-phase fault tolerant permanent magnet synchronous machine (PMSM6) for electrical power steering system (EPS) applications. An open stator phase is the most realistic fault condition that can be imagined for a machine fed by a power converter drive system. The behavior of the machine was analyzed under three different winding fault conditions (open phase). All the results obtained using the finite element method show the performance of the PMSM6 for the EPS application.",2010,0, 5103,Fault detection of a gas turbine fuel actuator based on qualitative causal models,"This paper presents an application of the Ca~En model-based diagnosis software to a GE Frame 6 gas turbine. The paper focuses on the fault detection task and presents a mixed strategy, which combines an observer strategy with a simulation strategy, to achieve a good robustness/sensitivity trade-off. The presented application results have been obtained by running Ca~En in real-time on the GE Frame 6 turbine owned by National Power at Aylesford (UK).",2001,0, 5104,Automating Cellular Network Faults Prediction Using Mobile Intelligent Agents,"This paper presents an approach for the prediction of faults in cellular networks using mobile intelligent agents. Cellular networks are uncertain and dynamic in their behavior; hence, different artificial intelligence techniques can be applied for the prediction, detection and identification of network failures, which can lead to more robust handling of unforeseen anomalies within a network environment. In this paper, different artificial intelligence techniques are applied in developing platform-independent, autonomous and robust agents that can report on any unforeseen anomaly within the infrastructure of a cellular network service provider. The specific design and implementation is done using the Java Agent Development Framework (JADE). The experimental results obtained show 79% prediction accuracy.",2009,0, 5105,STATCOM-Based Indirect Torque Control of Induction Machines During Voltage Recovery After Grid Faults,"This paper proposes a control method for limiting the torque of grid-connected cage induction machines during the recovery process after grid faults, by using a static synchronous compensator (STATCOM) connected at the machine terminals. When a STATCOM is used for transient stability improvement, common practice is to design the control system to keep reactive current at maximum level until the voltage has returned to its initial value. This will result in high torques during the recovery process after grid faults. The control method proposed in this paper is intended to limit such torque transients by temporarily defining a new voltage reference for the STATCOM control system. As torque is controlled through the voltage reference of the STATCOM, the method is labeled indirect torque control (ITC).
The presented concept is a model-based approach derived from a quasi-static equivalent circuit of the induction machine, the STATCOM and a Thévenin representation of the power system. For illustration and verification, time-domain simulations of a wind power generation system with a STATCOM at the terminals of an induction generator are provided. As the objective of limiting the torque of the induction machine is achieved, the derivation of the concept proves to be reasonable. The approach is presented in its most general form, oriented to torque limitation of induction machines both in generating and motoring mode, and is not restricted to the presented example.",2010,0, 5106,Fault Diagnosis Using a Timed Discrete-Event Approach Based on Interval Observers: Application to Sewer Networks,This paper proposes a fault diagnosis method using a timed discrete-event approach based on interval observers that improves the integration of fault detection and isolation tasks. The interface between fault detection and fault isolation considers the activation degree and the occurrence time instant of the diagnostic signals using a combination of several theoretical fault signature matrices that store the knowledge of the relationship between diagnostic signals and faults. The fault isolation module is implemented using a timed discrete-event approach that recognizes the occurrence of a fault by identifying a unique sequence of observable events (fault signals). The states and transitions that characterize such a system can directly be inferred from the relation between fault signals and faults. The proposed fault diagnosis approach has been motivated by the problem of detecting and isolating faults of Barcelona's urban sewer system limnimeters (level meter sensors). The results obtained in this case study illustrate the benefits of using the proposed approach in comparison with the standard fault detection and isolation approach.,2010,0, 5107,A framework for distributed fault management using intelligent software agents,"This paper proposes a framework for distributed management of network faults by software agents. Intelligent network agents with advanced reasoning capabilities address many of the issues for the distribution of processing and control in network management. The agents detect, correlate and selectively seek to derive a clear explanation of alarms generated in their domain. The causal relationship between faults and their effects is presented as a Bayesian network. As evidence (alarms) is gathered, the probability of the presence of any particular fault is strengthened or weakened. Agents having a narrower view of the network forward their findings to another with a much broader view of the network. Depending on the network's degree of automation, the agent can carry out local recovery actions. A prototype reflecting the ideas discussed in this paper is under implementation.",2003,0, 5108,Overflow and Roundoff Error Analysis via Model Checking,"This paper proposes a framework for statically analyzing overflow and roundoff errors of C programs. First, a new range representation, ""extended affine interval"", is proposed to estimate overflow and roundoff errors. Second, the overflow and roundoff error analysis problem is encoded as a weighted model checking problem. To avoid widening, currently we focus on programs with bounded loops, which typically appear in encoder/decoder reference algorithms. Last, we implement the proposed framework as a static analysis tool CANA. 
Experimental results on small programs show that the extended affine interval is much more precise than the classical interval.",2009,0, 5109,Reliability and Safety Modeling of Fault Tolerant Control System,"This paper proposes a generalized approach of reliability and safety modeling for fault tolerant control systems based on a Markov model. The reliability and safety function, computed from the transition probability of the Markov process, provides a proper quantitative measure of the fault tolerant control system because it incorporates the deadline, failure detection and fault isolation, and permanent and correlated faults. A state transition diagram was established based on the state transitions of the system. The state transition equation could be obtained from the state transition diagram. Different state probability diagrams were acquired with different parameters of failure rate, recovery rate from transient fault, failure detection rate and fault isolation rate.",2008,0, 5110,Specifying and Constructing a Fault-Tolerant Composite Service,"This paper proposes a means to specify the semantics of fault tolerant Web services at an abstract level using semantics adapted from queuing system theory. A framework that supports the implementation of specified fault-tolerance is also described. Based on our work, we show how the redundancy and diversity characteristics of a service-oriented system can be expressed and implemented in a Web-service application.",2008,0, 5111,A Method of Detecting Vulnerability Defects Based on Static Analysis,"This paper proposes a method for detecting vulnerability defects caused by tainted data based on a state machine. It first uses the state machine to define various defect patterns. If the states of the state machine are considered as the value propagated in dataflow analysis and the union operation of the state sets as the aggregation operation of dataflow analysis, the defect detection can be treated as a forward dataflow analysis problem. To reduce the false positives caused by intraprocedural analysis, the dynamic information of the program was represented approximately by abstract values of variables, and then an infeasible path can be identified when some variable's abstract value is empty in the state condition. A function summary method is proposed to get the information needed for performing interprocedural defect detection. The method proposed has been implemented in a defect testing tool.",2010,0, 5112,Grey Clustering Analysis Based Classifier for Steam Turbine-Generator Fault Diagnosis,"This paper proposes a method for steam turbine-generator fault diagnosis using grey clustering analysis (GCA). According to the field records, diagnostic information can be provided to monitor mechanical condition by the spectrum of the vibration signal. Frequency-based features are computed by fast Fourier transformation (FFT), the frequency ranges are <0.4f, 1f, 2f, 3f, and >3f. The maximum and minimum values of power spectrum indicate mechanical vibration fault at a particular frequency, and frequency patterns are applied to diagnose faults. For numerical tests with practical field records, tests were conducted to show that the proposed method demonstrates computational efficiency and high accuracy.",2007,0, 5113,A Method for Detecting Defects in Source Codes Using Model Checking Techniques,This paper proposes a method of detecting troublesome defects in the Java source codes for enterprise systems using a model checking technique. 
A supporting tool also provides a function to automatically translate source code into a model which is simulated by the UPPAAL model checker.,2010,0, 5114,Determination of sag disturbing and sag vulnerable zones in a distribution network using stochastic fault simulation,"This paper proposes a methodology to predict the sag levels caused by faults that may affect users in a distribution network. The methodology determines the trend of the possible fault location, the fault type and failed phases by analyzing historical data. These trends are modeled using probability densities and are simulated using Monte Carlo techniques (Gibbs algorithm). With the simulation results, a statistical analysis is performed to determine the average sag depth in each user as well as the most vulnerable areas against this type of disturbance. Finally, a sensitivity analysis is performed with the aim of identifying zones where these disturbances cause a great impact on the average sag depth (disturbing areas) in order to implement the required solutions.",2008,0, 5115,Fault-tolerant dynamically reconfigurable NoC-based SoC,"This paper proposes a network-on-chip (NoC)-based dynamically reconfigurable platform which can perform multiple applications simultaneously. A tile attached to a router in the NoC consists of a core container which can host a core permanently or temporarily. The tile also has a hardwired controller and a cache-like memory to control the hosted cores. A core, which runs a task, may be described by a bitstream (called hardware core) or a programme code (called software core). Because of the dynamic behaviour of the proposed platform, using the task identifier, a stochastic dynamic routing algorithm will find (or map) the task in the platform. Because of using the task identifier in the routing algorithm and the reconfigurability of tiles, the proposed platform can tolerate probable faults. The proposed SoC architecture is easily able to run new protocols and tasks. Our results show that the proposed platform follows the user's interests, such that it runs tasks with higher temporal locality much faster than the tasks with lower temporal locality.",2008,0, 5116,A Fault Tolerant Wired/Wireless Sensor Network Architecture for Monitoring Pipeline Infrastructures,"This paper proposes a new fault-tolerant sensor network architecture for monitoring pipeline infrastructures. This architecture is an integrated wired and wireless network. The wired part of the network is considered the primary network while the wireless part is used as a backup among sensor nodes when there is any failure in the wired network. This architecture solves the current reliability issues of wired networks for pipelines monitoring and control. This includes the problem of disabling the network by disconnecting the network cables due to artificial or natural reasons. In addition, it solves the issues raised in recently proposed network architectures using wireless sensor networks for pipeline monitoring. These issues include the issues of power management and efficient routing for wireless sensor nodes to extend the life of the network. 
Detailed advantages of the proposed integrated network architecture are discussed under different application and fault scenarios.",2008,0, 5117,Fault recovery for a distributed QoS-based multicast routing algorithm,"This paper proposes a new minimum spanning tree (MST) based distributed QoS-based multicast routing algorithm which is capable of constructing a delay constrained multicast tree when node failures occur during the tree construction period and recovering from any node failure in a multicast tree during the on-going multicast session without interrupting the running traffic on the unaffected portion of the tree. The proposed algorithm performs the failure recovery efficiently, which gives better performance in terms of the number of exchanged messages and the convergence time than the existing MST based distributed multicast routing algorithms",2001,0, 5118,Forward error protection for robust video streaming based on distributed video coding principles,"This paper proposes an error resilient coding scheme that employs distributed video coding tools. A bitstream, produced by any standard motion-compensated predictive codec (MPEG-x, H.26x), is sent over an error-prone channel. In addition, a Wyner-Ziv encoded auxiliary bitstream is sent as redundant information to serve as a forward error correction code. At the decoder side, error concealed reconstructed frames are used as side information by the Wyner-Ziv decoder, and the corrected frame is used as a reference by future frames, thus reducing drift. We explicitly target the problem of rate allocation at the encoder side, by estimating the channel induced distortion in the transform domain. Experimental results conducted over a simulated error-prone channel reveal that the proposed scheme has comparable or better performance than a scheme where forward error correction codes are used. Moreover the proposed solution shows good performance when compared to a scheme that uses the intra-macroblock refresh procedure.",2008,0, 5119,SIMPREBAL: An expert system for real-time fault diagnosis of hydrogenerators machinery,"This paper proposes an expert system to aid plant maintenance and operations personnel in solving hydroelectric equipment troubleshooting. The expert system was implemented in an intelligent maintenance system called SIMPREBAL (Predictive Maintenance System of Balbina). The SIMPREBAL knowledge base, the architecture and the inference machine are presented in detail. The knowledge base is based on experts' empirical knowledge, work orders, manuals, technical documents and operation procedures. The predictive maintenance system architecture is based on the OSA-CBM framework that has seven layers. The software application has been successfully implemented in a client-server computational framework. The data acquisition and intelligent processing tasks were developed on the server side and the user interface on the client side. The intelligent processing task is an expert system that uses the JESS inference machine. During two years, the SIMPREBAL has been used for monitoring and diagnosing hydrogenerators machinery malfunctions. The industrial application of the SIMPREBAL proved its high reliability and accuracy. Finally, satisfactory fault diagnostics have been verified using maintenance indicators before and after the SIMPREBAL installation in the hydroelectric power plant. 
These valuable results are being used in the decision support layer to pre-schedule maintenance work, reduce inventory costs for spare parts and minimize the risk of catastrophic failure.",2010,0, 5120,Video quality prediction in the presence of MAC contention and wireless channel error,"This paper proposes an integrated model to predict the quality of video, expressed in terms of mean square error (MSE) of the received video frames, in an IEEE 802.11e wireless network. The proposed system takes into account contention at the MAC layer, wireless channel error, queueing at the MAC layer, parameters of different 802.11e access categories (ACs), and video characteristics of different H.264 data partitions (DPs). To the best of the authors' knowledge, this is the first system that takes these network and video characteristics into consideration to predict video quality in an IEEE 802.11e network. The proposed system consists of two components. The first component predicts the packet loss rate of each H.264 data partition by using a multi-dimensional discrete-time Markov chain (DTMC) coupled to an M/G/1 queue. The second component uses these packet loss rates and the video characteristics to predict the MSE of each received video frame. We verify the accuracy of our combination system by using discrete event simulation and real H.264 coded video sequences.",2010,0, 5121,GRIDTS: A New Approach for Fault-Tolerant Scheduling in Grid Computing,"This paper proposes GRIDTS, a grid infrastructure in which the resources select the tasks they execute, contrary to traditional infrastructures where schedulers find resources for the tasks. This solution allows scheduling decisions to be made with up-to-date information about the resources, which is difficult in the traditional infrastructures. Moreover, GRIDTS provides fault-tolerant scheduling by combining a set of fault tolerance techniques to cope with crash faults in components of the system. The solution is mainly based on a tuple space, which supports the scheduling and also provides support for the fault tolerance mechanisms.",2007,0, 5122,Design of the Autonomous Fault Processing Mechanism for Home Network,"This paper proposes the design of an autonomous fault processing mechanism that can be used to solve abnormal faults in a home network. In home network environments that consist of several kinds of networks and devices, their unrevealed functionalities can cause non-instinct faults (combination or inferred situation). Many previous studies have tried to build a model-based fault processing mechanism, but those works depended on the characteristics of the model for the specific environment. We focus on the process in which faults are induced and establish fault categories that can be caused in a home network and the autonomous fault processing mechanism.",2007,0, 5123,Application level error recovery using active network nodes,"This paper proposes the use of active networking technology for error correction with application level error recovery code. The focus of this paper lies on real-time video transmission supported by active network nodes. 
Experiments with a prototype model show a considerable improvement with respect to video quality, error correction time and bandwidth usage",2000,0, 5124,A comparative study of induction motor current signature analysis techniques for mechanical faults detection,"This paper provides a comprehensive comparison of motor current signature analysis techniques for broken rotor bars and air gap eccentricity detection in induction motors. Four characteristic categories of processing methods are investigated, which are Fourier based analysis of the phase currents, parametric and eigenanalysis methods for spectrum estimation of the phase currents, wavelet analysis of the phase currents and current space vector analysis. The analysis is at first performed through simulation of a faulty asynchronous machine in Matlab-Simulink. Subsequent to the simulation results analysis, an experimental fault test rig was designed and implemented for the investigation of the aforementioned malfunctions. In this preliminary investigation, each analysis technique along with its advantages and disadvantages is presented and briefly discussed.",2005,0, 5125,Analogous view transfer for gaze correction in video sequences,"This paper provides a framework for doing facial gaze correction in video sequences. The proposed framework involves stages of face registration, face parameter mapping, and face synthesis. We introduce the concept of analogous views, and derive a novel formulation which extends view transfers based on epipolar geometry to cope with non-rigid motion. Additionally, a disparity mapping function is derived which is learned from training data and handles both spatial disparities as well as pixel-value changes. The disparity mapping function generalizes to facial expressions, illumination conditions and individuals not in the training set, as shown by the results obtained.",2002,0, 5126,A generalized framework for digital adjustment or correction,"This paper provides a unified technique to deal with a number of open-ended, digital adjustment and/or correction techniques that have grown up as special cases. These include corrections for system imperfections in channel matching, in frequency response, and/or in transient response. In the situations under consideration there is always a controlling variable, such as signal amplitude, and a controlled or correctable variable, such as the transient response. All of the systems to be corrected require a combination of: stability, the ability to sort the controlling variable, a digital storage mechanism, a control or correction mechanism, and a standard to provide the means to adjust the final results to within the desired tolerance. The methods are illustrated with results from an actual, digitally sampling, two-channel, seven-amplitude range, 10 MHz, 100 A, pulse current generating and measuring system. These results include the digital correction of the transient response of the system and introduce a 100 A flatness standard. By using these concepts one may usually achieve at least a ten times reduction in the uncertainty in the transfer function of the system that is adjusted or corrected",2001,0, 5127,Biologically-Based Signal Processing Chips with Emphasis on Telecommunication Defect Tracking and Reliability Estimation,"This paper provides observations and motivations to mimic biological information processing. 
Alternative bio-inspired systems definitions, basics, approaches, algorithms, and chip implementations will be illustrated to offer a base of choice for bio-based Intelligent Information Processing (IIP) systems. Hybrid biological and bio-based IIP are briefly presented. Two specific applications with embedded bio-based systems follow: bio-chemical sensing and detection (E-nose); and tracking improvements in the reliability of the software used in telecommunication network deployments. The biologically-based processing discoveries gleaned from observing the spikes in the brain activity of monkeys introduced the concept of plasticity in synapses, used in our embedded Spiking Neural Network (SNN) system for the E-Nose. The mathematical construct of a defect tracking classifier is nonlinear, and the event to be recognized involves a sequentially varying or non-stationary phenomenon for telecommunication defect tracking and reliability estimation. Thus, an Adaptive Recurrent Dynamic Neural Network (ARDNN) system using a wavelet function as the basis improved the failure event estimation of software defect tracking in telecommunications and reduced the error from 88% to L25-8%.",2007,0, 5128,Study of SINS/GPS/DVL integrated navigation system's fault tolerance,This paper puts forward several fault tolerant algorithms combined with engineering applications on the AUV (autonomous underwater vehicle). The algorithms are based on the traditional centralized Kalman Filter and can improve the SINS/GPS/DVL integrated navigation system's precision and fault tolerance capability. The simulation and the vehicle tests validate the algorithms.,2005,0, 5129,Phase-locked loop automatic layout generation and transient fault injection analysis: a case study,"This paper reports a case study about the automatic layout generation and transient fault injection analysis of a phase-locked loop (PLL). A script methodology was used to generate the layout based on transistor level specifications. After layout validation, experiments were performed on the PLL in order to evaluate its sensitivity to transient faults. The circuit was generated using the STMicroelectronics HCMOS8D process (0.18 μm). Results report the PLL's sensitive points, allowing the study and development of techniques to protect this circuit against transient faults",2006,0, 5130,New resonance type Fault Current Limiter,"This paper proposes a new parallel LC resonance type Fault Current Limiter (FCL). This structure has low cost, because it uses a dry capacitor and a non-superconducting inductor, and offers fast operation. The proposed FCL is able to limit the fault current to a constant value near the pre-fault value, in contrast to the series resonance type FCL. In this way, the voltage of the point of common coupling (PCC) will not change during the fault. Analytical analysis is presented in detail and simulation results are included to validate the effectiveness of this structure.",2010,0, 5131,New Supervisory Control and Data Acquisition (SCADA) based fault isolation system for low voltage distribution systems,"This paper proposes a new supervisory control and data acquisition (SCADA) based fault isolation system for the low voltage (415/240 V) distribution system. It presents a customized distribution automation system (DAS) for automatic operation and secure fault isolation tested in the Malaysian utility distribution system: the Tenaga Nasional Berhad (TNB) distribution system. 
It presents the first research work on customer side automation for operating and controlling between the consumer and the substation in an automated manner. The paper focuses on the development of very secure automated fault isolation tested to TNB distribution operating principles, as the fault is detected, identified, isolated and cleared in a few seconds by just clicking the mouse of a laptop or desktop connected to the system. A Supervisory Control and Data Acquisition (SCADA) technique has been developed and utilized to build a Human Machine Interface (HMI) that provides Graphical User Interface (GUI) functions for the engineers and technicians to monitor and control the system. Microprocessor based Remote Monitoring Devices have been used for customized software integrated to the hardware. Power Line Carrier (PLC) has been used as the communication medium between the consumer and the substation. As a result, a complete DAS and fault isolation system has been developed for remote automated operation, cost reduction, maintenance time saving and less human intervention during fault conditions.",2010,0, 5132,Novel fault diagnostic technique for permanent Magnet Synchronous Machines using electromagnetic signature analysis,"This paper proposes a novel alternative scheme for permanent magnet synchronous machine (PMSM) health monitoring and multi-faults detection using direct flux measurement with search coils. The phase current spectrum is not used for analysis and therefore the method is not influenced by power supply introduced harmonics any more. In addition, it is also not necessary to specify the load condition for accurate diagnosis. In this study, numerical models of a healthy machine and of machines with various faults are developed and examined. Simulation by means of a two-dimensional finite-element analysis (FEA) software package is presented to verify the application of the proposed method over different motor operation conditions.",2010,0, 5133,Fault tolerant adaptive mission planning with semantic knowledge representation for autonomous underwater vehicles,"This paper proposes a novel approach for autonomous mission plan recovery for maintaining operability of unmanned underwater vehicles. It combines the benefits of knowledge-based ontology representation, autonomous partial ordering plan repair and robust mission execution. The approach uses the potential of ontology reasoning in order to orient the planning algorithms adapting the mission plan of the vehicle. It can handle uncertainty and action scheduling in order to maximize mission efficiency and minimise mission failures due to external unexpected factors. Its performance is presented in a set of simulated scenarios for different concepts of operations for the underwater domain. The paper concludes by showing the results of a trial demonstration carried out on a real underwater platform. The results of this paper are readily applicable to land and air robotics.",2008,0, 5134,A simple fault detection of the open-switch damage in BLDC motor drive systems,This paper proposes a novel fault detection algorithm for a brushless DC (BLDC) motor drive system. This proposed method is configured without an additional sensor for fault detection and identification. The fault detection and identification are achieved by a simple algorithm using the operating characteristic of the BLDC motor. 
The drive system after the fault identification is reconfigured into a four-switch topology connecting the faulty leg to the middle point of the DC-link using bidirectional switches. This proposed method can also be embedded into existing BLDC motor drive systems as a subroutine without excessive computational effort. The feasibility of the novel fault detection algorithm is validated in simulation.,2007,0, 5135,A novel fault tolerant design and an algorithm for tolerating faults in digital circuits,"This paper proposes a novel fault tolerant algorithm for tolerating stuck-at-faults in digital circuits. We consider in this paper single stuck-at type faults, occurring either at a gate input or at a gate output. A stuck-at-fault may adversely affect the functionality of the user implemented design. A novel fault tolerant design based on hardware redundancy (replication) is presented here for the single fault model to tolerate transient as well as permanent faults. The design is also suitable to be used for highly dependable systems implemented by means of Field Programmable Gate Arrays (FPGAs) at the RTL level. This approach offers the possibility of using larger and more cost effective devices that contain interconnect defects without compromising on performance or configurability. The algorithm presented here demonstrates the fault tolerance capability of the design and is implemented for a full adder circuit but can be generalized for any other digital circuit. Using exhaustive testing, the functioning of all the three full adders can be easily verified. In case of occurrence of stuck-at-faults, the circuit will configure itself to select the other fault-free outputs. We have evaluated our novel fault tolerant technique (NFT) in five different circuits: full adder, encoder, counter, shift register and microprocessor. The proposed design approach also scales well to larger digital circuits and does not require fault detection. We have also presented and compared the results of the triple modular redundancy (TMR) method with our technique. All possible faults are tested by injecting the faults using a multiplexer.",2008,0, 5136,Fault-Tolerant Optimal Neurocontrol for a Static Synchronous Series Compensator Connected to a Power Network,"This paper proposes a novel fault-tolerant optimal neurocontrol scheme (FTONC) for a static synchronous series compensator (SSSC) connected to a multimachine benchmark power system. The dual heuristic programming technique and radial basis function neural networks are used to design a nonlinear optimal neurocontroller (NONC) for the external control of the SSSC. Compared to the conventional external linear controller, the NONC improves the damping performance of the SSSC. The internal control of the SSSC is achieved by a conventional linear controller. A sensor evaluation and (missing sensor) restoration scheme (SERS) is designed by using autoassociative neural networks and particle swarm optimization. This SERS provides a set of fault-tolerant measurements to the SSSC controllers, and therefore, guarantees a fault-tolerant control for the SSSC. The proposed FTONC is verified by simulation studies in the PSCAD/EMTDC environment.",2008,0, 5137,Generic Fault Tolerant Software Architecture Reasoning and Customization,"This paper proposes a novel heterogeneous software architecture GFTSA (Generic Fault Tolerant Software Architecture) which can guide the development of safety critical distributed systems. 
GFTSA incorporates an idealized fault tolerant component concept and a coordinated error recovery mechanism in the early system design phase. It can be reused in the high level model design of specific safety critical distributed systems with reliability requirements. To provide precise common idioms & patterns for the system designers, the formal language Object-Z is used to specify GFTSA. Formal proofs based on Object-Z reasoning rules are constructed to demonstrate that the proposed GFTSA model can preserve significant fault tolerant properties. The inheritance & instantiation mechanisms of Object-Z can contribute to the customization of the GFTSA formal model. By analyzing the customization process, we also present a template of GFTSA, expressed in x-frames using the XVCL (XML-based Variant Configuration Language) methodology to make the customization process more direct & automatic. We use an LDAS (Line Direction Agreement System) case study to illustrate that GFTSA can guide the development of specific safety critical distributed systems",2006,0, 5138,Fault detection of power transformers using genetic programming method,"This paper proposes a novel method for insulation fault detection of power transformers using the genetic programming (GP) method. Fault detection can be seen as a problem of multi-class classification. GP is a way of automatically constructing computer programs using a process analogous to biological evolution. GP methods of problem solving have a great advantage in their power to represent solutions to complex classification problems. The flexibility of representation gives GP the capacity to represent classification problems with means unavailable to other techniques such as neural networks. A binary tree (Bi-tree) structure is presented to transfer an N-class problem into N-1 two-class problems. The proposed method has been tested on actual records and compared with the conventional methods, the fuzzy system method and the artificial neural network method. The result shows that GP has advantages over the existing diagnosis methods and provides a new way to solve the problem of fault detection.",2004,0, 5139,Intelligent Wireless System to Monitoring Mechanical Fault in Power Transmission Lines,"This paper presents the development of an intelligent wireless system for monitoring mechanical faults in electric power transmission lines under oscillating movement, collapse, overheating or short circuit faults in the cable, caused by natural or man-made effects. The system has a remote module hanging on a cable and a central operation module communicating wirelessly. The remote module performs data acquisition, can record the location where it is going to be installed using a GPS receiver (Global Positioning System), and is capable of sensing temperature, electric current and acceleration of the cable; the last one is an integrated circuit based on MEMS technology (micro electromechanical system). All of these operations are controlled and processed with a microcontroller. The central operation module is connected to the serial port of a personal computer. 
The user can query the remote module, retrieve the stored historical data, assign an identification number and assign global positioning coordinates wirelessly, with the system-user software created in LabVIEW.",2008,0, 5140,Research of BMP and PCB Image-position in the Digital Circuit Fault Diagnosis System,"This paper presents the different methods and processes of the implementation of the image-position technique, by which the user can locate the probe quickly and accurately during the process of circuit-fault diagnosis. The article introduces the fault diagnosis program that can retrieve information from the fault dictionary, presents the image-position technique of BMP and PCB, and describes the main functions and controls that can be used to locate the probe accurately. During the test, the real circuit graph guides the user and points out the fault location, thus raising the efficiency of fault removal.",2009,0, 5141,Image-Position Technology of the Digital Circuit Fault Diagnosis Based on Lab Windows/CVI,"This paper presents the different methods and processes of the implementation of the image-positioning technique, by which the user can locate the probe quickly and accurately during the process of circuit-fault diagnosis. The article introduces the fault diagnosis program that can retrieve information from the fault dictionary, presents the image-position technique of BMP and PCB, and describes the main functions and controls that can be used to locate the probe accurately. During the test, the real circuit graph guides the user and points out the fault location, thus raising the efficiency of fault removal.",2008,0, 5142,Steward: Scaling Byzantine Fault-Tolerant Replication to Wide Area Networks,"This paper presents the first hierarchical byzantine fault-tolerant replication architecture suitable to systems that span multiple wide-area sites. The architecture confines the effects of any malicious replica to its local site, reduces message complexity of wide-area communication, and allows read-only queries to be performed locally within a site for the price of additional standard hardware. We present proofs that our algorithm provides safety and liveness properties. A prototype implementation is evaluated over several network topologies and is compared with a flat byzantine fault-tolerant approach. The experimental results show considerable improvement over flat byzantine replication algorithms, bringing the performance of byzantine replication closer to existing benign fault-tolerant replication techniques over wide area networks.",2010,0, 5143,Fault detection methods for wireless sensor networks using neural networks,"This paper presents the implementation of a neural network-based fault detection for wireless sensor networks (WSNs). Two approaches are discussed: a centralized and a distributed fault detection scheme. The methods are validated using a wireless sensor network that takes environmental measurements every 60 seconds. Various fault scenarios were considered, which were detected and corrected by the implemented structure. Neural networks (NNs) are used to model the environmental parameter's dynamics. The physical measurement is compared against the predicted value and a given threshold of error to determine a sensor fault.",2010,0, 5144,Bug analysis and corresponding error models in real designs,"This paper presents the item-missing error model. 
It stems from the analysis of real bugs that were collected in two market-oriented projects: (1) the AMBA interface of a general-purpose microprocessor IP core; (2) a wireless sensor network oriented embedded processor. The bugs are analyzed via code structure comparison, and it is found that item-missing errors merit attention. A test generation method for the item-missing error model is proposed. Structural information obtained from this error model is helpful to reach a greater probability of bug detection than that in random-generation verification with only functional constraints. Finally, the proposed test method is applied in the verification of our designs, and experimental results demonstrate the effectiveness of this method.",2007,0, 5145,Applying Dynamic Reconfiguration for Fault Tolerance in Fine-Grained Logic Arrays,"This paper presents the realization of a fault tolerance technique for a dynamically reconfigurable array of programmable cells. The three parts of the technique, fault detection, fault reconfiguration, and fault recovery, are implemented completely in hardware and form a self-contained system. Each of the parts can be exchanged by an alternative implementation without affecting the remaining parts too much, thus making the concept adaptable to different reconfigurable circuits. A hardware realization for the core mechanism is discussed and a prototypical design of a field-programmable gate array implementing the complete system is described. The technological development towards nanoscale feature sizes and the growing influence of deep-submicrometer effects will result in an inherent unreliability of the individual components of future circuit implementations and a higher vulnerability towards external influences. The technique discussed can be used to exploit dynamic reconfiguration capabilities of programmable arrays to alleviate system vulnerability towards these effects and thus to enhance their overall reliability.",2008,0, 5146,Mechanical fault detection in a medium-sized induction motor using stator current monitoring,"This paper presents the results of an experimental study of the detection of mechanical faults in an induction motor. As is reasonably well known, by means of analysis of combinations of permeance and magneto-motive force (MMF) harmonics, it is possible to predict the frequency of air gap flux density harmonics which occur as a result of certain irregularities in an induction motor. In turn, analysis of flux density harmonics allows the prediction of induced voltages and currents in the stator windings. Reviewing this theory, equations which may aid in the identification of mechanical faults are presented. These equations include both those which indicate eccentric conditions and those which have been suggested to help identify bearing faults. The development of a test facility to create eccentricity faults and bearing fault conditions is described. This test facility allows rapid access to the motor bearings, allowing an investigation into the ability to detect faulted bearing conditions using stator current monitoring. 
Experimental test results are presented, indicating that it may be possible to detect bearing degradation using relatively simple and inexpensive equipment.",2005,0, 5147,An investigation into formatting and layout errors produced by blind word-processor users and an evaluation of prototype error prevention and correction techniques,"This paper presents the results of an investigation into tools to support blind authors in the creation and checking of word processed documents. Eighty-nine documents produced by 14 blind authors are analyzed to determine and classify common types of layout and formatting errors. Based on the survey result, two prototype tools were developed to assist blind authors in the creation of documents: a letter creation wizard, which is used before the document is produced; and a format/layout checker that detects errors and presents them to the author after the document has been created. The results of a limited evaluation of the tools by 11 blind computer users are presented. A survey of word processor usage by these users is also presented and indicates that: authors have concerns about the appearance of the documents that they produce; many blind authors fail to use word processor tools such as spell checkers, grammar checkers and templates; and a significant number of blind people rely on sighted help for document creation or checking. The paper concludes that document formatting and layout is a problem for blind authors and that tools should be able to assist.",2003,0, 5148,Fault classification for power distribution systems via a combined wavelet-neural approach,"This paper presents an integrated design of a fault classifier which uses a hybrid wavelet-artificial neural network (ANN) based approach. The data for the fault classifier is produced by the PSCAD/EMTDC simulation program for the 34.5 kV Sagmalcilar-Maltepe distribution system in Istanbul, Turkey. The aim is to design a classifier capable of recognizing ten classes of three-phase distribution system faults. A database of line currents and line-to-ground voltages is built up including system faults at different fault inception angles and fault locations. The characteristic information over six channels of current and voltage samples is extracted by the wavelet multiresolution analysis technique. Then, an ANN-based tool was employed for the classification task. The main idea of this approach is to solve the complex fault (three-phase short-circuit) classification problem under various system and fault conditions. A self-organizing map, with Kohonen's learning algorithm and the type-one learning vector quantization technique, is employed in this study. The performance of the wavelet-neural fault classifier is presented and the results are analyzed in the paper. It is shown that the technique correctly recognizes and discriminates the fault types and faulted phases with a high degree of accuracy in the simulated model distribution system.",2004,0, 5149,An Online Procedure for Linear-Phase 2-D FIR Filters of Smallest Size with Magnitude Error Constraint,"This paper presents an online procedure that produces the smallest feasible size of two-dimensional (2-D) FIR filters with prescribed magnitude error constraint. 
The procedure uses the mean square normalized error of constrained and unconstrained least-square filters to produce the initial and the subsequent sizes that converge to the smallest feasible one in a few iterations, where the constrained least-square filters are defined as the least-square filters satisfying the magnitude error constraint. The procedure finally returns a smallest size filter that satisfies the magnitude error constraint and has least total squared magnitude error. Design examples of diamond-shaped, rectangular, and elliptic filters are provided, and comparisons with an exhaustive search are given.",2007,0, 5150,Aiding Navy helicopter aircrews in handling mechanical fault emergencies,"This paper presents an overview of a research program which is investigating how to generate and present information to Navy helicopter aircrews to help them handle mechanical problems in-flight. This work seeks to provide a mechanism for alerting aircrews to problems that are identified and potentially diagnosed by a Health and Usage Monitoring System (HUMS). HUMS alerting is combined with a concept for an interactive, electronic flight manual to produce a complete aircrew aiding system. Since the basis for the flight manual information is the Navy's NATOPS, the system is designated as Interactive Electronic NATOPS (IE-NATOPS). A prototype design, which will be presented, is planned for implementation on an electronic kneeboard device with aircraft 1553 bus connection to the Warning Caution and Advisory (WCA) cockpit alerting display and HUMS. Aircrew information requirements for HUMS-based aiding have been investigated through two studies that were conducted in a Navy H-46 simulator. The first study addressed aircrew performance in a baseline aircraft with no aiding. The second study examined performance with a hypothetical (scripted) aid which provided information in the categories of problem identification, diagnosis, confirmation, and action recommendation. Aircrew performance and preferences in using these various categories of information were studied. Results of both of these studies will be summarized. Human factors design issues will be discussed, focusing primarily on the cognitive issues of information characteristics. Implementation issues will also be addressed, including document information management, aircrew interface, and aircrew training implications",2000,0, 5151,Fault classification for distance protection,This paper presents an overview of power system fault classification methods and challenges. It also contains some ideas about structured testing.,2002,0, 5152,CARP: Handling Silent Data Errors and Site Failures in an Integrated Program and Storage Replication Mechanism,"This paper presents CARP, an integrated program and storage replication solution. CARP extends program replication systems which do not currently address storage errors, builds upon a record-and-replay scheme that handles nondeterminism in program execution, and uses a scheme based on recorded program state and I/O logs to enable efficient detection of silent data errors and efficient recovery from such errors. CARP is designed to be transparent to applications with minimal run-time impact and is general enough to be implemented on commodity machines. We implemented CARP as a prototype on the Linux operating system and conducted extensive sensitivity analysis of its overhead with different application profiles and system parameters. 
In particular, we evaluated CARP with standard unmodified email, database, and web server benchmarks and showed that it imposes acceptable overhead while providing sub-second program state recovery times on detecting a silent data error.",2009,0, 5153,An alternative fault location algorithm based on Wavelet Transforms for three-terminal lines,"This paper presents the study and development of a complete fault location scheme for three-terminal transmission lines using wavelet transforms (WT). The methodology is based on the low and high frequency components of the transient signals originating from a fault situation registered in the terminals of a system. By processing these signals and using the WT, it is possible to determine the time of traveling waves of voltages and/or currents from the fault point to the terminals, as well as to estimate the fundamental frequency components. As a consequence, both faulted leg and fault location can be estimated with reference to one of the system terminals. A new approach is presented to develop a reliable and accurate fault location scheme combining the best that the methods can offer. The main idea is to have a decision routine in order to select which method should be used in each situation presented to the algorithm. The algorithm was tested for different fault conditions by simulations using the ATP (alternative transients program) software. The results obtained are promising and demonstrate a highly satisfactory degree of accuracy and reliability of the proposed method.",2008,0, 5154,Thermal Performance of a Three-Phase Induction Motor Under Fault Tolerant Operating Strategies,"This paper presents the thermal behaviour of a three-phase induction motor under direct torque control, when supplied by a three-phase voltage source inverter with fault tolerant capabilities. For this purpose, a fault tolerant operating strategy based on the connection of the faulty inverter leg to the DC link middle point was considered. The motor thermal profile is obtained through the use of nine thermocouples positioned in both stator and rotor circuits. The experimental results obtained under fault compensated operation show that, as far as the motor thermal characteristics are concerned, it is not necessary to reinforce the motor insulation properties since it is already prepared for such an operation",2005,0, 5155,Digitally controlled three-phase power factor correction circuit with partially resonant circuit,"This paper presents the three-phase PFC (power factor correction) circuit with partially resonant circuit using a DSP (digital signal processor). The power supply systems for telecommunication and data communication systems have been able to achieve a high power factor and a low input current harmonic distortion factor using the PFC circuit. However, the PFC circuit requires many components and many adjusting processes, especially in the case of the three-phase input PFC. In this paper, the advantages of the digital control with DSP are shown and compared with the analog control. With the digital control using DSP, better regulation of the output voltage, fewer components and a reduced adjusting process are achieved. The partially resonant circuit, which is effective for the PFC circuit, is also shown. In the partially resonant circuit, both the main switches and the resonant switches switch under soft switching conditions. 
Over 97% efficiency of the PFC circuit is realized in the 2.5 kW power supply system.",2003,0, 5156,Research and Realization of Digital Circuit Fault Probe Location Process,"This paper presents three core files relating to circuit fault diagnosis which are generated by LASAR (logic automated stimulus response), i.e. the fault dictionary, node truth table and pin connection table, analyses the content of the fault dictionary, pin connection table and node truth table, finds the necessary information for fault location, and summarizes the procedure of circuit test and fault location. Finally the digital circuit diagnosis system which can locate the fault on the pin of components is designed. With the help of a probe, the fault location of component pins can be accurately pinpointed.",2008,0, 5157,Genetic programming approach for fault modeling of electronic hardware,"This paper presents two variants of genetic programming (GP) approaches for intelligent online performance monitoring of electronic circuits and systems. Reliability modeling of electronic circuits can be best performed by the stressor - susceptibility interaction model. A circuit or a system is deemed to have failed once the stressor has exceeded the susceptibility limits. For on-line prediction, validated stressor vectors may be obtained by direct measurements or sensors, which after preprocessing and standardization are fed into the GP models. Empirical results are compared with artificial neural networks trained using the backpropagation algorithm. The performance of the proposed method is evaluated by comparing the experiment results with the actual failure model values. The developed model reveals that GP could play an important role for future fault monitoring systems.",2005,0, 5158,End-to-end latency of a fault-tolerant CORBA infrastructure,"This paper presents measured probability density functions (pdfs) for the end-to-end latency of two-way remote method invocations from a CORBA client to a replicated CORBA server in a fault-tolerance infrastructure. The infrastructure uses a multicast group-communication protocol based on a logical token-passing ring imposed on a single local-area network. The measurements show that the peaks of the pdfs for the latency are affected by the presence of duplicate messages for active replication, and by the position of the primary server replica on the ring for semi-active and passive replication. Because a node cannot broadcast a user message until it receives the token, up to two complete token rotations can contribute to the end-to-end latency seen by the client for synchronous remote method invocations, depending on the server processing time and the interval between two consecutive client invocations. For semi-active and passive replication, careful placement of the primary server replica is necessary to alleviate this broadcast delay to achieve the best possible end-to-end latency. The client invocation patterns and the server processing time must be considered together to determine the most favorable position for the primary replica. 
Assuming that an effective sending-side duplicate suppression mechanism is implemented, active replication can be more advantageous than semi-active and passive replication because all replicas compete for sending and, therefore, the replica at the most favorable position will have the opportunity to send first",2002,0, 5159,Implementation and error performance evaluation of an iterative decoding algorithm,"This paper presents new experimental results about the error correction performance of an iterative threshold decoder at relatively high signal to noise ratio. To accomplish this task, an accelerated characterization platform has been developed. Without this platform, it would take approximately 37.23 years in computational time with a software version of the decoder to get the error correction performances over an extended signal to noise range. An acceleration factor of 4812 is obtained by using the accelerated characterization platform. The platform constitutes a new way for characterizing quickly and efficiently a new error correction algorithm. The accelerated characterization platform has allowed verifying the error correction performance at high SNR, which would require a prohibitive computational time on a conventional computer.",2005,0, 5160,Development of Automated Fault Location Software for Distribution Network,"This paper presents an ongoing development of automated fault location software for a distribution network. The proposed method of the software is able to identify the most probable faulty section based on voltage sag features, i.e. magnitude and phase shift. The actual voltage sags caused by a fault are compared with simulated voltage sags stored in a database. The matching will give all possible faulty sections that have been stored in the database. The software is developed by integrating various software modules. Each module will be developed into a component using a component based development (CBD) approach. By developing the software from components, replacement or modification can be done to any of the modules without affecting the whole software. The test results of the proposed method show satisfactory performance. Future work to improve the software and method is also discussed in this paper.",2006,0, 5161,Fault simulation and response compaction in full scan circuits using HOPE,"This paper presents results on fault simulation and response compaction on ISCAS 89 full scan sequential benchmark circuits using HOPE-a fault simulator developed for synchronous sequential circuits that employs parallel fault simulation with heuristics to reduce simulation time in the context of designing space-efficient support hardware for built-in self-testing of very large-scale integrated circuits. The techniques realized in this paper take advantage of the basic ideas of sequence characterization previously developed and utilized by the authors for response data compaction in the case of ISCAS 85 combinational benchmark circuits, using simulation programs ATALANTA, FSIM, and COMPACTEST, under conditions of both stochastic independence and dependence of single and double line errors in the selection of specific gates for merger of a pair of output bit streams from a circuit under test (CUT). 
These concepts are then applied to designing efficient space compression networks in the case of full scan sequential benchmark circuits using the fault simulator HOPE.",2005,0, 5162,Multi-frame error concealment for H.264/AVC frames with complexity adaptation,"This paper proposes a novel multi-frame error concealment algorithm for the whole-frame losses in video transmission based on H.264/AVC. In order to minimize the error propagation, the proposed algorithm estimates the distortion of both the lost frame and its succeeding frame, and determines the modes for recovering the motion vectors of the lost frame. In addition, based on the analysis of the computational complexity and the distortion of concealed frames, we introduce complexity adaptation into the proposed algorithm to achieve optimal complexity-distortion under different real-time constraints and different decoder computational power. Experimental results demonstrate that the proposed algorithm outperforms conventional algorithms in both objective and subjective quality of the concealed frames.",2008,0, 5163,Robust sensor fault estimation for tolerant control of a civil aircraft using sliding modes,This paper proposes a sensor fault tolerant control scheme for a large civil aircraft. It is based on the application of a robust method for sensor fault reconstruction using sliding mode theory. The novelty lies in the application of the sensor fault reconstruction scheme to correct the corrupted measured signals before they are used by the controller and therefore the controller does not need to be reconfigured to adapt to sensor faults,2006,0, 5164,Coordinated forward error recovery for composite Web services,"This paper proposes a solution based on forward error recovery, oriented towards providing dependability of composite Web services. While exploiting their possible support for fault tolerance (e.g., transactional support at the level of each service), the proposed solution has no impact on the autonomy of the individual Web services. Our solution lies in system structuring in terms of co-operative atomic actions that have a well-defined behavior, both in the absence and in the presence of service failures. More specifically, we define the notion of Web Service Composition Action (WSCA), based on the Coordinated Atomic Action concept, which allows structuring composite Web services in terms of dependable actions. Fault tolerance can then be obtained as an emergent property of the aggregation of several potentially non-dependable services. We further introduce a framework enabling the development of composite Web services based on WSCAs, consisting of an XML-based language for the specification of WSCAs.",2003,0, 5165,DVHMM: variable length text recognition error model,"This paper proposes a text recognition error model called the dual variable length output hidden Markov model (DVHMM) and gives a parameter estimation algorithm based on the EM algorithm. Although existing probabilistic error models are limited to substitution (1, 1), insertion (1, 0), and deletion (0, 1) errors, the DVHMM can handle error patterns of any pair (i, j) of lengths including substitution, insertion, and deletion.",2002,0, 5166,A 160×120-pixel range camera with on-pixel correlated double sampling and nonuniformity correction in 29.1 μm pitch,"This paper presents the design and test of a CMOS integrated circuit implementing a 160×120-pixel 3D camera. 
The on-pixel processing allows the use of the Indirect Time-Of-Flight technique for distance measurement with reset noise removal through Correlated Double Sampling and embedded fixed-pattern noise reduction, while a fast readout operation allows the pixel values to be streamed out at a maximum rate of 10MSample/s. The imager can operate as a fast 2D camera up to 458fps, a 3D camera up to 80fps, or both. The chip has been fabricated using a standard 0.18μm 1P4M 1.8V CMOS technology with MIM capacitors. The resulting pixel has a pitch of 29.1μm with a fill-factor of 34% and consists of 66 transistors. Distance measurements up to 4.5m have been performed with pulsed laser light, achieving 2.5cm precision at 2m in real-time.",2010,0, 5167,Event-orthogonal error-insensitive multiple fault detection with cascade correlation network,"This paper presents the design of a fault detection system with a cascade correlation network (CCN) for a power system. Associating fault components with the states of protective devices forms symptomatic patterns used to create training data. The proposed method makes use of information from both the primary and back-up devices, involving single faults, multiple faults, data communication with errors, or faults with failed operation of relays and circuit breakers. With a sample power system, computer simulations were conducted to show the effectiveness of the proposed system.",2003,0, 5168,Development of a technique for calculation of the influence of generator design on power system balanced fault behaviour,"This paper presents the development of a method for quantitatively determining the potential impact that the design of a single generator may have upon the performance of a power system under fault conditions. Initially it is illustrated that the impact that a single generator may have on network fault behaviour is limited by the configuration of the existing network to which the new generator is connected. These constraints are then used to develop a quantitative measure of the variability in network-wide fault currents and the subsequent voltage disturbances that can be produced under balanced fault conditions by changing the design of a new generator, irrespective of its point of connection. Finally, comparisons with the observed variation in network fault behaviour obtained from the simulation in PSS/E of a realistic 600-bus transmission network are used to demonstrate the technique's apparent effectiveness.",2002,0, 5169,Multilevel full-chip gridless routing considering optical proximity correction,"To handle modern routing with nanometer effects, we need to consider designs of variable wire widths and spacings, for which gridless routers are desirable due to their great flexibility. Gridless routing is much more difficult than grid-based routing because the solution space of gridless routing is significantly larger than that of the grid-based one. In this paper, we present the first multilevel, full-chip gridless detailed router. The router integrates global routing, detailed routing, and congestion estimation together at each level of the multilevel routing. It can handle non-uniform wire widths and consider routability and optical proximity correction (OPC). Experimental results show that our approach obtains significantly better routing solutions than previous works.
For example, for a set of 11 commonly used benchmark circuits, our approach achieves 100% routing completion for all circuits, while the well-known state-of-the-art three-level routing and multilevel routing (multilevel global routing + flat detailed routing) cannot complete routing for any of the circuits. Besides, experimental results show that our multilevel gridless router can handle non-uniform wire widths efficiently and effectively (still maintaining 100% routing completion for all circuits). In particular, our OPC-aware multilevel gridless router achieves an average reduction of 11.3% in pattern features and still maintains 100% routability for the 11 benchmark circuits.",2005,0, 5170,Analysis and modeling on the induction machine faults,"To obtain high performance and a high degree of safety in induction machine operation, researchers' attention is directed towards monitoring and evaluating machine reliability. This paper analyzes the connection between failure causes and their effects on the induction machine, by using Failure Mode and Effects Analysis (FMEA) and Fault Tree Analysis (FTA). The first method identifies possible faults in induction machine operation and assesses their consequences on the entire electric system, while the second one is a logical and graphical method used to assess the failure probability. These methods are applied in a case study on the induction machines produced by a Romanian company. The faults that appear in the manufacturing phase for induction machines with powers in the range of 0.75-1.5 kW are monitored and analyzed. Based on experimental data, the fault space for the induction machine is modeled and the failure rate & probability of failure for this electrical system are established. The case study is useful for determining weaknesses of the induction machine, based on which the optimization of manufacturing procedures can be done.",2010,0, 5171,Company-Wide Implementation of Metrics for Early Software Fault Detection,"To shorten time-to-market and improve customer satisfaction, software development companies commonly want to use metrics for assessing and improving the performance of their development projects. This paper describes a measurement concept for assessing how good an organization is at finding faults when it is most cost-effective, i.e. in most cases early. The paper provides results and lessons learned from applying the measurement concept widely at a large software development company. A major finding was that on average, 64 percent of all faults found would have been more cost effective to find during unit tests. An in-depth study of a few projects at a development unit also demonstrated how to use the measurement concept for identifying which parts of the fault detection process need to be improved to become more efficient (e.g. reduce the amount of time spent on rework).",2007,0, 5172,Fault Diagnosis for Engine Based on EMD and Wavelet Packet BP Neural Network,"In engine fault diagnosis, due to the complexity of the equipment and the particularity of the operating environments, there is generally no one-to-one correspondence between characteristic parameters and states, so the methods of diagnosis are very complicated. A novel fault diagnosis method based on empirical mode decomposition (EMD) and a wavelet packet BP neural network is proposed in this paper.
Firstly, the given signal is analyzed by wavelet packet decomposition to remove the noise; then the de-noised data is decomposed into a number of IMFs by EMD and their frequency eigenvectors are extracted; these eigenvectors are then used as the training samples of the BP network, which is trained to identify the faults. Finally, the simulation experiments show that the proposed method for engine fault diagnosis is effective and that the de-noising process using the wavelet packet transform is essential.",2009,0, 5173,Hybrid intelligent fault diagnosis based on granular computing,"To solve the problem of lacking hybrid modes and common algorithms in hybrid intelligent diagnosis, this paper presents a new approach to hybrid intelligent fault diagnosis of mechanical equipment based on granular computing. The hybrid intelligent diagnosis model based on a neighborhood rough set is constructed at different granular levels, and the results of support vector machines (SVMs) and an artificial neural network (ANN) at granular levels are combined by a criterion matrix algorithm as the output of hybrid intelligent diagnosis. Finally, the proposed model is applied to fault diagnosis in roller bearings of a high-speed locomotive. The applied results show that the classification accuracy of the hybrid model reaches 97.96%, which is 8.49% and 39.12% higher than the classification accuracy of SVMs and ANN, respectively. It shows that the proposed model, as a new common algorithm, can reliably recognize different fault categories and effectively enhance the robustness of the hybrid intelligent diagnosis model.",2009,0, 5174,Research on Framework of Error Handling and Controlling Protocol in Mobile Payment,"To study and solve the mobile payment problems based on MPTP in China, this paper puts forward the Error Handling and Controlling Protocol (EHCP) and describes the framework of this protocol and its execution process. EHCP aims to perfect the mobile payment communication mechanism and further solve mobile payment security issues, in order to make mobile payment more widely accepted.",2010,0, 5175,A Dynamic Fault Tolerant Algorithm Based on Active Replication,"In wide-area-network-oriented distributed computing such as Web services, a slow service is equivalent to an unavailable service. This places demands on replication algorithms to effectively improve performance without damaging availability. Aiming to improve the performance of the active replication algorithm, we propose a new replication algorithm named AAR (adaptive active replication). Its basic idea is: all replicas receive requests, but only the fastest one returns the response. Its main advantages are: (1) The response is returned by the fastest replica; (2) The algorithm is based on the active replication algorithm, but it avoids the redundant nested invocation problem. We demonstrate these advantages by analysis and experiments.",2008,0, 5176,An Effective Error Concealment for H.264/AVC,"To transmit video bit streams over low bandwidth such as mobile channels, a high-compression codec like H.264/AVC is exploited. In transmitting highly compressed video bit-streams over low bandwidth, packet loss causes severe degradation in image quality. In this paper, a new error concealment algorithm for the recovery of missing or erroneous motion vectors is proposed, considering that the missing or erroneous motion vectors of blocks are closely correlated with those of neighboring blocks.
The proposed approach recovers a missing or erroneous motion vector by grouping the movements of neighboring blocks by their homogeneity. Motion vectors of neighboring blocks are grouped according to average-linkage clustering, and a representative value for each group is determined to obtain the candidate motion vector sets. Simulation results show that the proposed method dramatically reduces computation time compared to the existing H.264/AVC error concealment method. Also, the proposed method is similar to the existing H.264/AVC method in terms of visual quality.",2008,0, 5177,A Fault-Tolerant Middleware Architecture for High-Availability Storage Services,"Today organizations and business enterprises of all sizes need to deal with unprecedented amounts of digital information, creating challenging demands for mass storage and on-demand storage services. The current trend of clustered scale-out storage systems is to use symmetric active-replication-based clustering middleware to provide continuous availability and high throughput. Such architectures provide significant gains in terms of cost, scalability and performance of mass storage and storage services. However, a fundamental limitation of such an architecture is its vulnerability to application-induced massive dependent failures of the clustering middleware. In this paper, we propose hierarchical middleware architectures that improve availability and reliability in scale-out storage systems while continuing to deliver the cost and performance advantages and a single system image (SSI). Hierarchical middleware architectures organize critical cluster management services into an overlay network that provides application fault isolation and eliminates symmetric clustering middleware as a single point of failure. We present an in-depth evaluation of hierarchical middlewares based on an industry-strength storage system. Our results show that hierarchical architectures can significantly improve the availability and reliability of scale-out storage clusters.",2007,0, 5178,Intelligent PC-based user control interface for on-line correction of robot programs,"Today the automotive sector is dominated by a high variety of types and increasing product diversification. Thus, OEM manufacturers reintegrate key technologies back into the enterprise, due to competitive reasons and to stabilize leadership in innovation. As an example, state-of-the-art fuel and diesel engine cylinder heads achieve higher complexity and filigree structures; therefore careful treatment of specimens as well as net-shape machining processes with high quality output are a major requirement in this sector. A key issue is the surface fretting and finishing of engine parts subsequent to foundry procedures. An advanced factory automation system, based on an industrial robot (6-axis joint coordinate), has been developed in order to obtain the described results. Various measuring procedures are applied on the system in order to recognize and remove burr formation (particularly with optical techniques). A graphical interactive programming system enables simple and user-friendly correction of deburring results for individual workpiece types. The system user can easily remove deburring errors through short operator interaction, mainly the input of correction values at the controller system.
Thereby, each authorized robot cell operator can contribute to guaranteeing process quality without special technical training.",2002,0, 5179,A Fault Propagation Approach for Highly Distributed Service Compositions,"Today, the techniques for realizing service compositions (e.g. WS-BPEL) have become mature. Nevertheless, when it comes to execution faults within service compositions, many problems are still unsolved. In particular, the propagation and global handling of errors in service compositions remains an open issue. In this paper, we describe some preliminary results of our ongoing work in the field of fault propagation and exception handling in service compositions. We provide some service classification criteria and show how they relate to service composition fault handling. Further, we present a fault propagation approach for service compositions.",2008,0, 5180,Using transient/persistent errors to develop automated test oracles for event-driven software,"Today's software-intensive systems contain an important class of software, namely event-driven software (EDS). All EDS take events as input, change their state, and (perhaps) output an event sequence. EDS is typically implemented as a collection of event-handlers designed to respond to individual events. The nature of EDS creates new challenges for test automation. In this paper, we focus on those relevant to automated test oracles. A test oracle is a mechanism that determines whether software executed correctly for a test case. A test case for an EDS consists of a sequence of events. The test case is executed on the EDS, one event at a time. Errors in the EDS may ""appear"" and later ""disappear"" at several points (e.g., after an event is executed) during test case execution. Because of the behavior of these transient (those that disappear) and persistent (those that don't disappear) errors, EDS require complex and expensive test oracles that compare the expected and actual output multiple times during test case execution. We leverage our previous work to study several applications and observe the occurrence of persistent/transient errors. Our studies show that in practice, a large number of errors in EDS are transient and that there are specific classes of events that lead to transient errors. We use the results of this study to develop a new test oracle that compares the expected and actual output at strategic points during test case execution. We show that the oracle is effective at detecting errors and efficient in terms of resource utilization",2004,0, 5181,High-Value Design Techniques for Mitigating Random Defect Sensitivities,"Today's sophisticated design-for-manufacturability (DFM) methodologies provide a designer with an overwhelming number of choices, many with significant costs and unclear value. The technology challenges of subwavelength lithography, new materials, device types/sizes, etc., can mask the underlying random defect yield contribution which ultimately dominates mature manufacturing, and the distinction between technology limitations and process excursions must also be understood. The best DFM strategy fully exploits all of the available techniques that mitigate a design's sensitivity to random defects where the value is clearly quantifiable, yet few designers seize this opportunity.
This paper provides a roadmap through the entire design flow and gives an overview of the various options.",2008,0, 5182,Mutual-Aid: Diskless Checkpointing Scheme for Tolerating Double Faults,"Tolerating double faults is an important issue for diskless checkpointing due to increasing system sizes and execution times. Mutual-Aid checkpointing is the first scheme to achieve this goal. Mutual-Aid checkpointing combines the advantages of neighbor-based and parity-based diskless approaches. It tolerates all double processor faults by bitwise exclusive-ORing snapshots from neighbor processors in a virtual assistant ring. Because Mutual-Aid checkpointing and recovery are simple and efficient, the scheme increases performance, reduces application running time, and allows more frequent checkpoints. Moreover, it could be employed in very large-scale and high performance computing because of its distributed methods as well as localized operations. The degree of fault tolerance achieved is higher than that of other schemes.",2008,0, 5183,A scalable on-line multilevel distributed network fault detection/monitoring system based on the SNMP protocol,"Traditional centralized network management solutions do not scale to present-day large-scale computer/communication networks. Decentralized/distributed solutions can solve some of these problems (Goldszmidt, G. and Yemini, Y., 1995), and thus there is considerable interest in distributed/decentralized network management applications. We present the design and evaluation of an SNMP-based distributed network fault detection/monitoring system. We integrate into the SNMP framework our ML-ADSD algorithm (Su, M.-S. et al., Proc. 39th Annual Allerton Conf. on Commun., Control, and Computers, 2001; Su, ""Multilevel distributed diagnosis and the design of a distributed network fault detection system based on the SNMP protocol"", Ph.D. Thesis, School of Computer Science, University of Oklahoma, 2002) for fault diagnosis in a distributed processor system. The algorithm uses the multilevel paradigm and requires only minor modifications to be scalable to networks of varying sizes. The system is fault tolerant, allowing processor failure and/or recovery during the diagnosis process. We have implemented the system on an Ethernet network of 32 machines. Our results show that the diagnosis latency (or time to termination) is much better than that of earlier solutions. Also, the system's bandwidth utilization is insignificant, demonstrating the practicality of its deployment in a real network. We have successfully integrated three modern disciplines: network management, distributed computing and system level diagnosis.",2002,0, 5184,A fault location and protection scheme for distribution systems in presence of dg using MLP neural networks,"Traditional electric distribution systems are radial in nature. These networks are protected by very simple protection devices such as over-current relays, fuses, and re-closers. The useful advantages of recent trends in distributed generation (DG) can be fully achieved only when the relevant concerns are deliberately taken into account. For example, penetration of DG disturbs the radial nature of conventional distribution networks. Therefore, protection coordination will be changed in some cases, and in some other cases it will be lost. The penetration of DG into distribution networks reinforces the necessity of designing new protection systems for these networks.
One of the main capabilities that can improve the efficiency of new protection relays in distribution systems is accurate fault location. In this paper, a novel fault location and protection scheme is presented for distribution networks with DG. The suggested approach is able to determine the type and accurate location of faults using MLP neural networks. As a case study, the proposed scheme has been assessed using MATLAB-based software and DIgSILENT Power Factory 13.2 on a sample distribution network.",2009,0, 5185,Intelligent Fault Diagnosis System in Large Industrial Networks,"Traditional fault diagnosis systems in large industrial networks are not intelligent enough, cannot predict faults, and are too expensive for industrial corporations. This paper brings forward an intelligent fault diagnosis system, IFDS, which uses new types of intelligent database technology and has the ability to effectively solve the fault diagnosis and prediction issues of current industrial Ethernet networks. In addition, this paper discusses some methods which can be implemented in the IBM DB2 database.",2008,0, 5186,Fail-stutter fault tolerance,"Traditional fault models present system designers with two extremes: the Byzantine fault model, which is general and therefore difficult to apply, and the fail-stop fault model, which is easier to employ but does not accurately capture modern device behavior. To address this gap, we introduce the concept of fail-stutter fault tolerance, a realistic and yet tractable fault model that accounts for both absolute failure and a new range of performance failures common in modern components. Systems built under the fail-stutter model will likely perform well, be highly reliable and available, and be easier to manage when deployed.",2001,0, 5187,Diagnosing configuration errors in virtual private networks,"Traditional network fault management systems diagnose hard, localized errors such as fiber cuts or hardware/software component failures. It is quite possible, however, that network components work correctly yet end-to-end services do not. This happens if there are configuration errors, i.e., configuration parameters of components are set to incorrect values. Configuration is a fundamental operation for integrating components to implement end-to-end services. Configuration errors are frequent because transforming end-to-end service requirements into configurations is inherently difficult: in realistic networks there are many components, configuration parameters, values, protocols and requirements. Yet, such transformation is largely performed manually. This paper describes a toolkit called Service Grammar to diagnose configuration errors. The toolkit is illustrated in the context of an IP virtual private network with routing and security services. It is based on the following assumptions: (1) every component has configuration parameters which can be set to definite values; (2) these values remain fixed during the normal operation of a component; and (3) the set of values of all configuration parameters in a network, called the configuration vector, determines the behavior of the network as a whole",2001,0, 5188,Application of time series analysis to fault management in MANETs,"Traditional network management systems are not usually able to differentiate between mobility and other causes of communication degradation in wireless mobile ad hoc networks.
A fault management system needs the ability not only to detect changes in performance but also to reason about their possible causes and how to fix the problems. We propose a system called TimeSAFE (Time Series Analyzer Front End) that performs time series analysis as a front-end input to a central fault management system. We show how such an analysis can help to distinguish between motion, obstacles, and interference as possible causes of changes in SINR (Signal to Interference and Noise Ratio). We propose three different methods of time series analysis and also methods for dynamic order selection and dynamic window selection. We describe our implementation of TimeSAFE and provide some preliminary results of detecting changes in SINR and identifying their causes.",2010,0, 5189,A research of protection characteristics analysis method based on fault recording data,"Traditional fault data analysis software embedded in fault recorder devices can hardly satisfy the requirements of protection characteristic analysis in power systems. Some main algorithms for fault recording data processing are presented, and the software for protection characteristic analysis is also designed in this paper. It can be applied in new protection principle testing and relay checking on site.",2005,0, 5190,The utility of hybrid error-erasure LDPC (HEEL) codes for wireless multimedia,"Traditional wireless communication protocols do not relay corrupted packets towards the application layer, nor do they forward such packets over multiple hops. Such an approach can lead to a significant number of packet drops and thus a severe deterioration in the performance of high bandwidth applications. Cross-layer protocols which do relay and forward corrupted packets have exhibited substantial promise to mitigate the above problem, and thus their utility for wireless multimedia needs to be explored further. Moreover, there is a need to identify efficient channel coding methods for the cross-layer channel. Unlike the traditional schemes, where the channel observed at the application layer is a pure erasure channel, in the cross-layer schemes the application layer channel exhibits hybrid erasure-error impairments. Thus in this paper, we use a rather abstract link-layer model on the basis of which we compare the performance of cross-layer and conventional schemes. We identify the modifications required to be made to RS and LDPC based FEC schemes in order to use them over hybrid erasure-error channels. Finally we compare the considered schemes in terms of video quality using the emerging H.264 video standard. Our video analysis is based on employing a hybrid error-erasure channel coding FEC for the cross-layer schemes versus employing erasure recovery FEC for the traditional protocols. We show that cross-layer schemes can lead to a significant improvement in video quality.",2005,0, 5191,Induced error-correcting code for 2 bit-per-cell multi-level DRAM,"Traditionally, memories employ SEC-DED (Single Error Correcting and Double Error Detecting) Error Correcting Codes (ECC). While such codes have been considered for MLDRAM (Multi-Level Dynamic Random Access Memory), their use is inefficient, due to likely double-bit errors in a single cell. For this reason we propose an induced ECC architecture that uses ECC in such a way that no common error corrupts two bits.
Induced ECC allows a significant increase in the reliability of the MLDRAM",2001,0, 5192,Fault diagnosis and recovery from structural failures (icing) in unmanned aerial vehicles,"This paper tackles the problem of fault diagnosis and recovery of unmanned aerial vehicles (UAVs) resulting from structural failures (icing). The proposed system consists of two units: one for fault diagnosis and another for fault recovery. The goal of the fault diagnosis unit is to detect and estimate the severity of a fault. The recovery unit utilizes information on the estimated fault and adjusts the controller parameters to recover the system from the faulty condition. This methodology is useful mostly for small UAVs, where the available methods for structural fault detection and recovery are too expensive to be applied. The icing problem, a well-known structural problem in the aerospace industry, is considered as a demonstration of this method. Our proposed approach takes advantage of control theory to detect and recover from faults within the flight control unit. Simulation results are provided to demonstrate the effectiveness and practical use of our proposed approach.",2009,0, 5193,Comparing fail-silence provided by process duplication versus internal error detection for DHCP server,"This paper uses fault injection to compare the ability of two fault-tolerant software architectures to protect an application from faults. These two architectures are Voltan, which uses process duplication, and Chameleon ARMORs, which use self-checking. The target application is a Dynamic Host Configuration Protocol (DHCP) server, a widely used application for managing IP addresses. NFTAPE, a software-based fault injection environment, is used to inject three classes of faults, namely random memory bit-flip, control-flow and high-level target specific faults, into each software architecture and into baseline Solaris and Linux versions",2001,0, 5194,System independent and distributed fault management system,"This paper outlines a distributed and dynamic fault management system and its practice. This work shows that the proposed platform-independent, distributed and reusable fault management system architecture can be an integral part of the next generation of network management systems. Another feature of the proposed fault management system is that it is an extensible fault management computing framework for researchers. With the proposed infrastructure, researchers can carry out their original work on this framework with the help of the event, correlator and alarm programming interfaces. The proposed architecture is applicable not only to network management systems but also to intrusion detection systems and business systems. We present this architecture and the use of advanced technologies such as RMI and Java. Finally, we demonstrate how these technological solutions have been implemented in the distributed fault management system called JADFAME - Java distributed fault management engine.",2003,0, 5195,Design process error-proofing: benchmarking the NASA development life-cycle,"This report describes the practices of the development life-cycle observed at NASA. Through a number of interviews and surveys of the NASA experience, including current projects such as Aura, CALIPSO, Kepler, SAGE and SOFIA, this research attempts to capture the design methods and culture present. The NASA development methods are shared and compared with industry practices, including gated realization processes, portfolio management, and platform design.
The goal of this work is to identify best practices and lessons learned from NASA's design and review experience, benchmark against industry techniques, and develop strategies to improve the process. With a better understanding of not only the execution but also the motivation for the current development life-cycle, any organization can better improve and error-proof its design process",2005,0, 5196,Perspective correction for improved visual registration using natural features.,"This research proposes a perspective invariant registration algorithm which improves on popular registration algorithms such as SIFT and SURF by correcting for perspective distortion using optical flow. A novel addition to the natural feature based registration process is proposed, which uses orientation information from previously correctly registered frames to attempt perspective correction. This process is evaluated when applied to the Natural Feature algorithms described. This research overcomes the cause of registration failings based on perspective distortion in Natural Feature Tracking, and attempts to find a better resolution than just pruning invalid matches. The results show that the proposed algorithm improved registration for two prominent Natural Feature based Registration algorithms, SIFT and SURF.",2008,0, 5197,Applying Decision Tree in Fault Pattern Analysis for HGA Manufacturing,"This research proposes the design of a fault pattern analysis algorithm based on the C4.5 decision tree technique. We study actual data collected from a disk drive manufacturing company. Our work emphasizes the HGA manufacturing data. However, the data from the Wafer and the Slider processes are also explored as they may affect the yield of the HGA production. In our algorithm, the data is first retrieved from the data warehouse, and then pre-processed using regular data cleaning techniques. The critical external and internal data from all operations that are related to the HGA production (machine parameters and product attributes) are used as inputs in our algorithm. The data preparation steps are added to improve the raw data quality. Subsequently, our decision tree technique is employed to categorize decision options that indicate problems in the actual manufacturing environment. Finally, the root causes of the yield degradation will be identified in three categories of attributes (machine, material and method). The data analysts in an HDD company can use this tool to automatically summarize the problems on the manufacturing line. Yield can then be improved by adjusting parameters and/or attributes as suggested by the algorithm. In this paper, we also describe the algorithm through a simple example. Further study will be performed and the experiments will be elaborated in the near future.",2009,0, 5198,"Reasons for software effort estimation error: impact of respondent role, information collection approach, and data analysis method","This study aims to improve analyses of why errors occur in software effort estimation. Within one software development company, we collected information about estimation errors through: 1) interviews with employees in different roles who are responsible for estimation, 2) estimation experience reports from 68 completed projects, and 3) statistical analysis of relations between characteristics of the 68 completed projects and estimation error.
We found that the role of the respondents, the data collection approach, and the type of analysis had an important impact on the reasons given for estimation error. We found, for example, a strong tendency to perceive factors outside the respondents' own control as important reasons for inaccurate estimates. Reasons given for accurate estimates, on the other hand, typically cited factors that were within the respondents' own control and were determined by the estimators' skill or experience. This bias in types of reason means that the collection only of project managers' viewpoints will not yield balanced models of reasons for estimation error. Unfortunately, previous studies on reasons for estimation error have tended to collect information from project managers only. We recommend that software companies combine estimation error information from in-depth interviews with stakeholders in all relevant roles, estimation experience reports, and results from statistical analyses of project characteristics",2004,0, 5199,Wavelet-based switching faults detection in direct torque control induction motor drives,"This study describes a method of detection and identification of IGBT-based drive open-circuit faults in direct torque control (DTC) induction motor drives. The detection mechanism is based on wavelet decomposition. The Symlet2 wavelet was selected as the wavelet base to perform stator current analysis during transients. In this method, the stator currents are used as an input to the system. The MATLAB program was used to compute the discrete wavelet transform (DWT) of the signals. The stator current was used for the detection of the fault. When an open-circuit fault appears in an inverter IGBT, the fault information is included in each frequency region of the signal. There are spikes in the sixth level detail for incipient faults. The time of the spikes in the DWT is correlated with the time of the fault. As a result of time domain studies, a faulty system can be easily discriminated from a healthy one. In this study, the electrical transients arising during switch open-circuit faults in a three-phase power inverter feeding DTC induction motor drives were decomposed using the wavelet transform. The results demonstrate that the proposed fault detection and diagnosis system has very good capabilities.",2010,0, 5200,A verification of fault tree for safety integrity level evaluation,"This study focuses on a novel approach which automatically proves the correctness and completeness of fault trees based on a formal model by model checking. This study demonstrates that the model checking technique is useful when validating the correctness of informal safety analysis such as FTA. The benefit of this study is that it provides the possibility of formally validating FTA by proving the correctness and completeness of the fault trees. In addition to this benefit, the CTL technique can prove the FTA-based SIL.",2009,0, 5201,Aircraft conflict probe sensitivity to weather forecasts errors,"This study investigated the user request evaluation tool's (URET) prediction sensitivity to weather forecast error. A quantitative experiment was designed and performed by the Federal Aviation Administration's Conflict Probe Assessment Team (CPAT) to evaluate the impact of weather forecast errors on URET trajectory and conflict predictions. The experiment used approximately two hours of traffic data recorded at the Indianapolis en route center in May 1999.
The flights were time shifted to generate a sufficient number of test conflicts using a genetic algorithm technique developed by CPAT. The resulting scenario was input into the URET prototype system. To induce weather forecast error, the weather input file (rapid update cycle, RUC) was altered by adding 20 or 60 knots to the wind magnitude, 45 or 90 degrees to the wind direction, and 5 or 15 degrees Kelvin to the air temperature. This produced seven URET runs for the experiment - the unaltered control run and six treatment runs. The analysis compared the control run against the treatment runs. A methodology was developed to compare the trajectory and conflict prediction accuracy of these runs. A statistical analysis provided evidence that the forecast errors in wind magnitude and direction had a significant effect on the longitudinal trajectory error and a modest impact on retracted false alerts, which caused at most an increase in the false alert probability of six percent. It also showed that the air temperature runs did not have a significant effect. Based on this experiment, a controller suspecting errors in the input wind forecast should expect only a modest impact on URET predictions. The impact would mainly be a moderate increase in the number of retractions of its conflict predictions (defined in this study as a retracted false alert). If the controller notices an increase in retractions, it may be symptomatic of inaccurate wind forecasts, which should be investigated.",2007,0, 5202,Inhomogeneity correction of magnetic resonance images by minimization of intensity overlapping,"This work presents a new algorithm (NIC; nonuniform intensity correction) for the correction of intensity inhomogeneities in magnetic resonance images. The algorithm has been validated by means of realistic phantom images and a set of 24 real images. Evaluation using previously proposed phantom images for inhomogeneity correction algorithms allowed us to obtain results fully comparable to the previous literature on the topic. This new algorithm was also compared, using a real image dataset, to other widely used methods which are freely available on the Internet (N3, SPM'99 and SPM2). Standard quality criteria have been used for determining the goodness of the different methods. The new algorithm showed better results in removing the intensity inhomogeneities and did not produce degradation when used on images free from this artifact.",2003,0, 5203,Detection of high impedance fault in distribution feeder using wavelet transform and artificial neural networks,"This work presents a novel analysis method that can simulate the potential effect of a high impedance fault (HIF). The proposed method offers a new scheme for protecting the overhead distribution feeder. The wavelet transform (WT) method has been successfully applied in many fields. The scaling and translation characteristics of the WT can be used to identify stable and transient signals. Discrete wavelet transforms (DWT) are initially used to extract distinctive features of the voltage and current signals, which are transformed into a series of detail and approximation wavelet components. The coefficients of variation of the wavelet components are then calculated. This information is introduced into the training of artificial neural networks (ANN) to distinguish an HIF from the operations of the switches.
The simulated results clearly reveal that the proposed method can accurately identify the HIF in the distribution feeder.",2004,0, 5204,Power System Fault Data Compression Wavelet Parametric Investigation,"This work presents a performance comparison method for wavelet based compression of three-phase signals. Input signals are simulated in ATP software, while the mother wavelets, compression thresholds and transform detail levels are varied. Thousands of compressed output signals are thus obtained, and the compression rate and error are calculated. These performance indicators are analyzed, based on which a group of wavelet compression parameters is obtained.",2009,0, 5205,Fault Current Limiter Based on Resonant Circuit Controlled by Power Semiconductor Devices,This work presents a resonant fault current limiter (FCL) controlled by power semiconductor devices. Initially the operation of two ideal resonant circuit topologies as fault current limiters is discussed. The analysis of these circuits is used to derive an alternative topology for the fault current limiter based on the connection of a series and a parallel resonant circuit. Digital models are implemented in the SimPowerSystem/Matlab simulation package to investigate the performance of the proposed FCL in protecting transmission and distribution electric networks against short circuit currents. Transfer functions of the linear limiter models are used to identify the effect of each element of the FCL on its stability and its transient response. The developed analysis is used to derive modifications in the FCL topology in such a way as to improve its dynamic response.,2007,0, 5206,Robust sensor fault reconstruction for an inverted pendulum using right eigenstructure assignment,"This work presents a robust sensor fault reconstruction scheme applied to an inverted pendulum. The scheme utilized a linear observer, using right eigenstructure assignment in order to minimize the effect of nonlinearities and disturbances on the fault reconstruction. The design method is adapted and modified from existing work in the literature. A suitable interface between the pendulum and a computer enabled the application. Very good results were obtained.",2004,0, 5207,Fault detection and diagnosis for stand alone system based on induction generator,"This work presents an intelligent stand-alone system that is monitored and analyzed with the insertion of faults into the system. For fault detection, the root mean square value combined with a neural network is used. To establish the fault signature, different harmonic analysis simulations were performed. The results obtained so far indicate that it is possible to detect the failure in the stand-alone system, thus improving the repair of this system.",2009,0, 5208,"Current Sensor Fault Detection, Identification, and Reconfiguration for Doubly Fed Induction Generators","This work presents current sensor fault detection, identification and reconfiguration for a voltage oriented controlled doubly fed induction generator. The focus of this analysis is on the identification of the faulty sensor, and the actual reconfiguration. It is proposed to temporarily switch from closed loop into open loop control to decouple the drive from faulty sensor readings. During a short period of open loop operation, the fault is identified. Then replacement signals from observers are used to reconfigure the drive and re-enter closed loop control.
Measurement results are included to prove that the proposed concept leads to good results.",2007,0, 5209,Fast motion estimation based on adaptive search range adjustment and matching error prediction,"This work presents fast motion estimation (ME) using both an adaptive search range adjustment and a matching error prediction. The basic idea of the proposed scheme is based on adjusting a given search range adaptively and predicting block matching errors effectively. The adaptive search range adjustment is first performed by analyzing the contents of a scene. Next, the total block matching error is estimated by using partial errors of sub-sampled blocks to eliminate invalid blocks earlier for ME. In order to evaluate the proposed scheme, several baseline approaches are described and compared. The experimental results show that the proposed algorithm can reduce the computational cost by more than 81% for ME at the cost of 0.01 dB image quality degradation versus the conventional PDE algorithm. The main contributions of the proposed approach are that 1) it can reduce the computational cost considerably; 2) it can be applied to conventional PDE algorithms without significant changes; and 3) it can be a useful tool for fast ME in the consumer electronics-related field.",2009,0, 5210,"K-Bug, A New Bug Approach for Mobile Robot's Path Planning","This work presents the K-Bug algorithm, a new mobile robot path planning method belonging to the Bug family. The main idea of the algorithm may be used to improve the performance of existing path planning methods that use local information, or as an entirely new method if global information is available. A short comparison with the methods found in the literature is also presented, demonstrating its efficiency, low computational cost and high robustness, even in complex environments.",2007,0, 5211,On per-test fault diagnosis using the X-fault model,This work proposes a new per-test fault diagnosis method based on the X-fault model. The X-fault model represents all possible behaviors of a physical defect or defects in a gate and/or on its fanout branches by using different X symbols on the fanout branches. A novel technique is proposed for analyzing the relation between observed and simulated responses to extract diagnostic information and to score the results of diagnosis. Experimental results show the effectiveness of our method.,2004,0, 5212,A Test-oriented Architecture for Network Fault Management,"This work proposes a novel architecture for network fault management. The architecture is based on Tests and comprises four main modules: Troubleshooting Server, Topology Server, Operator GUI and Test Designer. Tests are created using a high level visual language, which presents several advantages for network fault management. Our proposal was tested in real network scenarios using the RNP national backbone and in an experimental laboratory network. It was shown to be robust, flexible and general, so that it serves for a large variety of faults that may occur in networks.
As an example, we implemented a test to re-route traffic flows and bypass a congested (faulty) link in a typical traffic engineering procedure.",2007,0, 5213,Faults location in transmission lines through neural networks,"This work studies the viability of applying computational techniques, more specifically artificial neural networks (ANN), to the identification and location of faults in transmission lines, using voltage and current signals registered by numeric relays at one of the terminals of a transmission line. One ANN model was used for fault identification and four others were used for fault location, one for each fault type: single-phase, two-phase, two-phase-earth and three-phase.",2004,0, 5214,Formal development of software for tolerating transient faults,"Transient faults constitute a wide-spread class of faults typical in control systems. These are faults that appear for some time during system operation and might disappear and reappear later. However, even by appearing for a short time, they might cause dangerous system errors. Hence designing mechanisms for tolerating transient faults is an acute issue, especially in the development of safety-critical control systems. In this paper we propose a formal approach to specifying software-based mechanisms for tolerating transient faults in the B method. We focus on deriving a general specification and development pattern which can be applied in the development of various control systems. We illustrate an application of the proposed patterns by an example from an avionics software product line.",2005,0, 5215,Phase Characterization and Classification for Micro-architecture Soft Error,"Transient faults have become a key challenge to modern processor design. Processor designers take the Architectural Vulnerability Factor (AVF) as an estimation method for a micro-architecture's soft error rate. Dynamic, phase-based system reliability management, which tunes system hardware and software parameters at runtime for different phases, has become a focus in the field of processor design. The phase characterization technique (PCT) and phase classification algorithm (PCA) determine the accuracy of phase identification, which is the foundation of dynamic, phase-based system management. To our knowledge, this paper is the first to give a comprehensive evaluation and comparison of PCTs and PCAs for micro-architecture soft errors. We first compare the efficiency of basic block vector (BBV) and performance metric counter (PMC) based PCTs in reliability-oriented phase characterization on three micro-architectural structures (i.e. instruction queue, function unit and reorder buffer). Experimental results show that the PMC based PCT performs better than the BBV based PCT for most programs studied. Also, we compare the accuracy of three clustering algorithms (i.e. hierarchical clustering, k-means clustering and regression tree) in reliability-oriented phase classification. The regression tree method is demonstrated to improve the accuracy of classification by 30% on average compared with the other two PCAs. Furthermore, based on the comparisons of PCTs and PCAs, we propose the optimal combination of PCT and PCA for soft error reliability-oriented phase identification - the combination of PMC and regression tree. In addition, we quantify the upper bound of the predictability of AVF using BBV/PMC.
Overall, an average of 82% of AVF can be explained by PMC, while BBV can explain 78% of AVF on average.",2010,0, 5216,An improved inter frame error concealment in H.264/AVC,"Transmission of compressed video over error prone channels may result in packet losses or errors, which can significantly degrade the image quality. Aimed at this problem, non-normative error concealment (EC) is recommended in H.264/AVC to recover the lost image. By analyzing the non-normative inter EC, this paper focuses on its improvement. Firstly, since the ""guessed""/recovered MV (motion vector) in existing EC is not so close to the real MV, a refined MV recovery is proposed to get a more accurate ""guessed"" MV. Secondly, an improved MV selection criterion (side matching criterion) is proposed, which tries to ""guess""/approach a lost MB (macroblock)'s real/original side match distortion, instead of ""guessing""/approaching the value 0 as in traditional inter EC. In addition, for the MV refinement scheme, inspired by MV searching in motion estimation, we adopt diamond search for MV recovery. Both objective and subjective image quality evaluations in experiments show that our proposal achieves better image recovery compared with the non-normative inter EC.",2007,0, 5217,Side match distortion based adaptive error concealment order for 1Seg video broadcasting application,"Transmission of compressed video over error prone channels may result in packet losses or errors, which can significantly degrade the image quality. Such degradation becomes even worse in the 1Seg video broadcasting application, which is widely used in Japan and Brazil for mobile phone TV service, where errors are drastically increased and huge contiguous areas inside a picture may be corrupted. In this case the error concealment order - to decide which MB should be concealed earlier - may highly influence image quality. Aimed at this problem, this paper proposes an adaptive concealment order based on the well-known boundary matching algorithm (BMA). The concealment order is carefully chosen according to a lost MB's priority, which is formulated considering a concealed MB's side match distortion: an MB with smaller distortion should be concealed earlier than an MB with larger distortion. In the formulation of side match distortion, not only the current corrupted MB's distortion but also that of the neighboring MBs, caused by error propagation, is included. Compared with reference work [10], the experiments show our proposal achieves better video recovery performance under channels with different error rates in the 1Seg application.",2009,0, 5218,Error concealment considering error propagation inside a frame,"Transmission of compressed video over error prone channels may result in packet losses or errors, which can significantly degrade the image quality. Such degradation becomes even worse in 1Seg video broadcasting, which has recently been widely used in Japan and Brazil for mobile phone TV service, where errors are drastically increased and lost areas are contiguous. Therefore the errors in earlier concealed MBs (macro blocks) may propagate to the MBs concealed later inside the same frame (spatial domain). Error concealment (EC) is used to recover the lost data by exploiting the redundancy in videos. Aiming at spatial error propagation (SEP) reduction, this paper proposes a SEP reduction based EC (SEPEC). In SEPEC, besides the mismatch distortion in the current MB, the potential propagated mismatch distortion in the MBs to be concealed subsequently is also minimized.
Also, two extensions of SEPEC, namely SEPEC with refined search and SEPEC with multiple layer match, are discussed. Compared with previous work, the experiments show that SEPEC achieves much better video recovery performance and an excellent trade-off between quality and computation cost in 1Seg broadcasting.",2010,0, 5219,Implementing Probabilistic Risk Assessment with Fault Trees to support space exploration missions,This paper seeks to illustrate the implementation of a Probabilistic Risk Assessment (PRA) methodology as a foundation for the space mission support risk assessment and management process. Identifying the risks to delivering expected spacecraft data services to a mission is only the first part of the risk assessment. Arriving at a quantified probability (likelihood) of the manifestation of these risks is the desired outcome of the process.,2010,0, 5220,Fault-tolerant voltage-fed PWM inverter AC motor drive systems,This paper shows how to integrate fault compensation strategies into two different types of configurations of induction motor drive systems. The proposed strategies provide compensation for open-circuit and short-circuit failures occurring in the converter power devices. The fault compensation is achieved by reconfiguring the power converter topology with the help of isolating and connecting devices. These devices are used to redefine the post-fault converter topology. This allows for continuous free operation of the drive after isolation of the faulty power switches in the converter. Experimental results demonstrate the validity of the proposed systems.,2004,0, 5221,The harmonic impact of self-generating in power factor correction equipment of industrial loads: real cases studies,"This paper shows the impact of self-generating installations on industrial loads, the problems that occurred in the field and the proposed solutions from the harmonic point of view. To illustrate these points, the paper describes two facilities that installed self-generation and all the measurements and studies performed to analyze the electrical problems detected. Some study results are shown and the implemented solutions are also described.",2002,0, 5222,Formal Fault Tolerant Architecture,"This paper shows the need for development by refinement: from the most abstract specification to the implementation, in order to ensure 1) the traceability of the needs and requirements, 2) good management of the development and 3) a reliable and fault-tolerant design of systems. We propose a formal architecture of models and methods for critical requirements and fault-tolerance. System complexity is increasing and the choices for implementation are numerous, so architecture verification achieves a prominent role in the system design cycle. Fault detection at this early level decreases the time and costs of correction. We show how a formal method, the B method, may be used to write the abstract specification of a system and then to produce a correct-by-construction architecture through many steps of formal refinement. During these steps, a fault scenario is injected with a suitable introspective reaction by the system. All refinement steps, including the introspective correction, should be proven to be correct and to satisfy the initial specification of the system. At the lower levels, design is separated between hardware and software communities.
But even at these levels many design traces could be captured to prove not only the consistency of each design unit but also the coherence between the different sub-parts: software, digital or other technologies.",2010,0, 5223,Fault diagnosis of water hydraulic motor by Hilbert transform and adaptive spectrogram,This paper studies the fault diagnosis of fluid machines through the analysis of the periodic impulse vibration signal. The method is based on the Hilbert transform and adaptive wavelet analysis. We apply the Hilbert transform to obtain the characteristic envelope of the periodic impulsive signal and reveal the fundamental frequencies. The Gaussian functions are optimized to get the optimal parameters that show the characteristic frequencies of the Hilbert transform-based envelope of the vibration signals. A simulated signal and an experimental signal are used to test the method. The results show it is applicable and effective for the fault diagnosis of fluid machines by analyzing the periodic impulsive signal.,2010,0, 5224,A fault-tolerant scheduling scheme for hybrid tasks in distributed real-time systems,"This paper studies a fault-tolerant real-time scheduling algorithm for hybrid task sets, which contain periodic tasks and aperiodic tasks. A previously proposed fault-tolerant periodic scheduling was extended to hybrid scheduling based on time redundancy and space redundancy. Periodic tasks are assigned to processors using a fast and simple heuristic scheme, and joint scheduling is used for periodic and aperiodic tasks. A simulation study shows the effectiveness of the proposed approach.",2005,0, 5225,An Area-Efficient Approach to Improving Register File Reliability against Transient Errors,"This paper studies approaches to exploiting the space both within or across registers efficiently for improving the register file reliability against transient errors. The idea of our approach is based on the fact that a large number of register values are narrow (i.e., less than or equal to 16 bits for a 32-bit architecture); therefore, the upper 16 bits of the registers can be used to replicate the short operands for enhancing register integrity. This paper also adapts a prior register replication approach by selectively copying register values (i.e., long operands only) to the unused physical registers for enhancing reliability without incurring significant hardware cost. Our experiments indicate that, on average, 99.3% of register reads (regardless of short or long operands) can find their replicas available, implying significant improvement of register file integrity against transient errors.",2007,0, 5226,Detecting Defects in Golden Surfaces of Flexible Printed Circuits Using Optimal Gabor Filters,"This paper studies the application of advanced computer image processing techniques for solving the problem of automated defect detection for golden surfaces of flexible printed circuits (FPC). A special defect detection scheme based on a semi-supervised mechanism is proposed, which consists of an optimal Gabor filter and a smoothing filter. The aim is to automatically discriminate between ""known"" non-defective background textures and ""unknown"" defective textures of golden surfaces of FPC. In developing the scheme, the parameters of the optimal Gabor filter are searched for with the help of a genetic algorithm based on constrained minimization of a Fisher cost function. The performance of the proposed defect detection scheme is evaluated off-line by using a set of golden images acquired from a CCD.
The results exhibit accurate defect detection with low false alarms, thus showing the effectiveness and robustness of the proposed scheme.",2008,0, 5227,Scheduling for energy efficiency and fault tolerance in hard real-time systems,"This paper studies the dilemma between fault tolerance and energy efficiency in frame-based real-time systems. Given a set of K tasks to be executed on a system that supports L voltage levels, the proposed heuristic-based scheduling technique minimizes the energy consumption of task execution when faults are absent, and preserves feasibility under the worst case of fault occurrences. The proposed technique first finds the optimal solution in a comparable system that supports continuous voltage scaling, then converts the solution to the original system. The runtime complexity is only O(LK^2). Experimental results show that the proposed approach produces near-optimal results in polynomial time.",2010,0, 5228,Analysis and comparison of several fault line selective methods in small current grounding power system,"Through steady-state and transient analysis of single-phase grounding faults in a small current grounding system, the existing line selection methods are analyzed and compared, and their advantages and shortcomings are identified. The future direction of line selection methods is pointed out. Wavelet analysis is introduced into fault line selection, and the fault lines can be selected with the wavelet analysis method.",2008,0, 5229,Switching-based fault-tolerant control for an F-16 aircraft with thrust vectoring,"The thrust vectoring technique enables aircraft to perform various maneuvers not available to conventional-engined planes. This paper presents an application of switching control concepts to fault-tolerant control design for an F-16 aircraft model augmented with thrust vectoring. Two controllers are synthesized using a switching logic, and they are switched based on a fault parameter. During normal flight conditions, the F-16 aircraft relies on the elevator alone, with no vectored thrust. The thrust vectoring nozzle is only turned on in the presence of elevator failures. Two elevator fault scenarios, lock and loss of effectiveness, are considered. Nonlinear simulation results show that the switching control can guarantee the stability and performance of the faulted system.",2009,0, 5230,On the probability of detecting data errors generated by permanent faults using time redundancy,"Time redundant execution of tasks and comparison of results is a well-known technique for detecting transient faults in computer systems. However, time redundancy is also capable of detecting permanent faults that occur during or between the executions of two task replicas, provided the faults affect the results of the two tasks in different ways. In this paper, we derive an expression for estimating the probability of detecting data errors generated by permanent faults with time redundant execution. The expression is validated experimentally by injecting permanent stuck-at faults into a multiplier unit of a microprocessor. We use the derived expression to show how tasks can be scheduled to improve the detection probability of errors generated by permanent faults. We also show that the detection capability of permanent faults is low for the Temporal Error Masking (TEM) technique (i.e. triplicated execution and voting to mask transient faults) and may not be increased by scheduling.
Thus, we propose complementing TEM with special test tasks.",2003,0, 5231,Optimizing Joint Erasure- and Error-Correction Coding for Wireless Packet Transmissions,"To achieve reliable packet transmission over a wireless link without feedback, we propose a layered coding approach that uses error-correction coding within each packet and erasure-correction coding across the packets. This layered approach is also applicable to an end-to-end data transport over a network where a wireless link is the performance bottleneck. We investigate how to optimally combine the strengths of error- and erasure-correction coding to optimize the system performance with a given resource constraint, or to maximize the resource utilization efficiency subject to a prescribed performance. Our results determine the optimum tradeoff in splitting redundancy between error-correction coding and erasure-correction coding, which depends on the fading statistics and the average signal-to-noise ratio (SNR) of the wireless channel. For severe fading channels, such as Rayleigh fading channels, the tradeoff leans towards more redundancy on erasure-correction coding across packets, and less so on error-correction coding within each packet. For channels with better fading conditions, more redundancy can be spent on error-correction coding. The analysis has been extended to a limiting case with a large number of packets, and a scenario where only discrete rates are available via a finite number of transmission modes.",2008,0, 5232,Cost drivers of software corrective maintenance: An empirical study in two companies,"To estimate the corrective software maintenance effort, we must know the factors that have the strongest influence on corrective maintenance activities. In this study, we have analyzed the activities and effort of correcting 810 software defects in one Norwegian software company and 577 software defects in another. We compared the defect profiles according to the defect correction effort. We also analyzed defect descriptions and recorded discussions between developers in the course of correcting defects in order to understand what led to the high cost of correcting some types of defects. The study shows that size and complexity of the software to be maintained, maintainers' experience, and tool and process support are the most influential cost drivers of corrective maintenance in one company, while domain knowledge is one of the main cost drivers of corrective maintenance in the other company. This illustrates that models for estimating software corrective maintenance effort have to be customized based on the defect profiles and cost drivers of each company and project to be useful.",2010,0, 5233,Synthesis of Fault-Tolerant Embedded Systems,"This work addresses the issue of design optimization for fault-tolerant hard real-time systems. In particular, our focus is on the handling of transient faults using both checkpointing with rollback recovery and active replication. Fault tolerant schedules are generated based on a conditional process graph representation.
The formulated system synthesis approaches decide the assignment of fault-tolerance policies to processes, the optimal placement of checkpoints and the mapping of processes to processors, such that multiple transient faults are tolerated, transparency requirements are considered, and the timing constraints of the application are satisfied.",2008,0, 5234,Error analysis of MPEG-4 HVXC parameters at high frequencies,"This work deals with digital speech communications at very low bit rates, like 2 and 4 kbps, transmitted over long distance SW channels at HFs such as 20 and 30 MHz. The low bandwidth at HF, usually 3 kHz, imposes serious challenges, namely in terms of speech compression algorithms and their transmission over such a hostile channel. Thus, this work evaluates the coding parameter significance of the MPEG-4 harmonic vector excitation coding (HVXC) algorithm. As typical transmission parameters at these frequencies, 2400 symb/s, 8-QAM, 16-QAM, 32-QAM and 64-QAM were assumed. The results show that the most significant parameter is V/UV at SNR less than 6 dB and the least significant ones are the parameters related to the unvoiced frames. Therefore, any access control, unequal error protection, data partitioning and grouping strategies covering a wide range of SNRs can then be designed, based on the significance results presented in this work. We also propose a solution to increase the error resilience of MPEG-4 speech encoded signals over HF channels.",2004,0, 5235,A precise sample-and-hold circuit topology in CMOS for low voltage applications with offset voltage self correction,"This work describes a new topology for CMOS sample-and-hold circuits in low voltage with self-correction of the offset voltage caused by mismatches in the differential input pair of the operational amplifier. The charge injection of the NMOS switches, although not properly modeled by the simulators, is an important factor and it is minimized in this topology. The results were obtained using the ACCUSIM II simulator on the AMS CMOS 0.8 μm CYE process and they reveal the circuit has a reduced error of just 0.03% at the output.",2002,0, 5236,An offset self-correction sample and hold circuit for precise applications in low voltage CMOS,"This work describes a new topology for CMOS sample-and-hold circuits, in low voltage, with self-correction of the offset voltage caused by mismatches in the differential input pair of the operational amplifier. The charge injection of the NMOS switches is an important factor and it is minimized in this topology. The results were obtained using the ACCUSIM II simulator on the AMS CMOS 0.8 μm CYE process and they reveal the circuit has a reduced error of just 0.03% at the output.",2002,0, 5237,Automated in-camera detection of flash eye-defects,"This work examines the problem of performing automatic real-time detection of flash eye defects (redeye) in the firmware of a digital camera. Several different algorithms are compared, and timing and memory requirements on several embedded architectures are presented. A discussion on advanced in-camera techniques to improve on standard algorithms is also presented.",2005,0,5024 5238,Fast algebraic fault diagnosis for the electrolytic filter capacitor of buck converter,"This work focuses on fault detection applied to static converters. A new method for estimating the parameters of the dynamic model of the buck converter is proposed, based on recent algebraic parameter estimators.
The diagnosis scheme developed in this paper computes online the filter capacitor aging by determining the equivalent series resistance (ESR). The capacitance and inductance values are also estimated by this method. Finally, the estimators detailed in this study are validated experimentally on a prototype buck converter.",2010,0, 5239,Transformer fault diagnosis based on rough sets theory and artificial neural networks,"Transformer fault diagnosis based on artificial neural networks (ANN) is widely used, because an ANN has an essentially nonlinear character, parallel processing ability, and the ability to self-organize and self-learn. But problems exist if we use the traditional ANN method alone to diagnose transformer faults: the large input vector dimension and complex training database greatly increase the computational complexity and space requirements, leading to long training times, slow convergence and low judgement accuracy. In this paper, a hybrid fault diagnosis method combining rough set (RS) theory and ANN (RS-ANN) is presented. Taking advantage of the strong ability of RS theory in processing large data and eliminating redundant information, this method can remove irrelevant factors from the original data and reduce the amount of training data, which helps to overcome the ANN's defect when processing large databases. A number of simulation results show that RS-ANN simplifies the network's structure, reduces the network's training epochs and improves the judgement accuracy.",2008,0, 5240,Computing cache vulnerability to transient errors and its implication,"Transient errors caused by particle strikes have become a critical challenge for microprocessor design. Being the major consumer of on-chip real estate, cache memories are particularly susceptible to transient errors. However, not all cache soft errors can be propagated to the processor. For instance, soft errors can be corrected by write operations before they are read. In this paper, we define the cache vulnerability factor (CVF) to be the probability that a fault in the cache can be propagated to the processor or other levels of the memory hierarchy. We also propose an approach to compute the CVF based on the cache line access patterns. Building upon the CVF, we evaluate the reliability of different cache memories. Our results show that 83.5% of soft errors from a write-through data cache can be masked without affecting other components. We also propose two early write-back strategies to improve the reliability (i.e., by reducing the CVF) of write-back data caches without compromising the high performance.",2005,0, 5241,Goal trees and fault trees for root cause analysis,"Typical enterprise applications are built upon different platforms, operate in a heterogeneous, distributed environment, and utilize different technologies, such as middleware, databases and Web services. Diagnosing the root causes of problems in such systems is difficult in part due to the number of possible configuration and tuning parameters. Today a variety of tools are used to aid operators of enterprise applications in identifying root causes. For example, a user input validation tool detects and prevents website intrusions, or a log analysis tool identifies malfunctioning components. Searching for the root causes of such failures in a myriad of functional and non-functional requirements poses significant challenges, not only for users, but also for experienced operators, when monitoring, auditing, and diagnosing systems.
We propose the notion of a guide map (a set of goal trees and fault trees) to aid users in the process of choosing (supported by high level goal trees) and applying (supported by low level fault trees) suitable diagnostic tools. In this paper we discuss two case studies to illustrate how the guide map aids users in applying two home-grown diagnostic tools.",2008,0, 5242,Use of code error and beat frequency test method to identify single event upset sensitive circuits in a 1GHz analog to digital converter,"Typical test methods for characterizing the single event upset performance of an analog to digital converter (ADC) have involved holding the input at static values. As a result, output error signatures are seen for only a few input voltages and output codes. A test method using an input beat frequency and output code error detection allows an ADC to be characterized with a dynamic input at a high frequency. With this method, the impact of an ion strike can be seen over the full code range of the output. The error signatures from this testing can provide clues as to which area of the ADC is sensitive to an ion strike.",2007,0, 5243,An Ultrasonic System for Detecting Channel Defects in Flexible Packages,"An ultrasonic system was developed for detecting channel defects embedded in bonded two-sheet flexible packaging film. The hardware system consisted of a spherically focused 22.66-MHz ultrasonic transducer, a four-axis precision positioning system, an NI PXI-bus embedded controller and an ultrasonic pulser-receiver. The software system was designed based on modularization and realized on-line echo signal processing using the ultrasonic backscattered echo envelope integral (BEEI) imaging method. Some experimental results were presented, and the BEEI-mode imaging of a channel defect was shown. The system can be easily used to detect channel defects in flexible packages.",2008,0, 5244,Fast Single-Turn Sensitive Stator Inter-Turn Fault Detection of Induction Machines Based on Positive and Negative Sequence Third Harmonic Components of Line Currents,"Unambiguous detection of stator inter-turn faults for induction machines at their incipient stage, i.e., faults of only a few turns, has recently received great attention. Traditionally, inter-turn faults are detected using negative sequence current and impedance. However, their effectiveness under supply unbalance conditions is questionable. Recently, the line current third harmonic (+3f) has also been used in an attempt to achieve this goal. But issues such as inherent structural asymmetry and voltage unbalance also influence the +3f. In this paper, positive and negative sequence third harmonics (±3f) of line current under different operating conditions have been explored by combining space and time harmonics. The suggested fault signature was obtained by removing residual components from the tested quantities. Simulation and experimental results using one second of data indicate the proposed ±3f signatures are capable of very effectively detecting even a single-turn fault and distinguishing it from voltage unbalance and structural asymmetry.",2008,0, 5245,Influence of faults in electric railway systems,"Under normal operating conditions, assessing the condition of a railway system is quite simple. However, when a fault occurs, the analysis will be more complicated and, at the same time, much more important due to the conclusions which could be inferred from the results achieved in the analysis.
This paper presents a simulation of an electric traction system, with double conversion AC/DC and DC/AC to feed AC traction motors, using the MATLAB/SIMULINK package. The mutual influence, in a fault situation, between an electric traction system and the distribution network it is connected to has been analysed. The aim of this simulation is the optimisation of protection systems to avoid undesirable effects in both the distribution network and the traction system as a consequence of a fault situation in the traction system. This optimisation will be possible if deep knowledge about fault currents in different operating conditions is achieved.",2004,0, 5246,Enhancing the low quality images using Unsupervised Colour Correction Method,"Underwater images are affected by reduced contrast and non-uniform colour cast due to the absorption and scattering of light in the aquatic environment. This affects the quality and reliability of image processing and therefore colour correction is a necessary pre-processing stage. In this paper, we propose an Unsupervised Colour Correction Method (UCM) for underwater image enhancement. UCM is based on colour balancing, contrast correction of the RGB colour model and contrast correction of the HSI colour model. Firstly, the colour cast is reduced by equalizing the colour values. Secondly, an enhancement to a contrast correction method is applied to increase the Red colour by stretching the red histogram towards the maximum (i.e., right side); similarly, the Blue colour is reduced by stretching the blue histogram towards the minimum (i.e., left side). Thirdly, the Saturation and Intensity components of the HSI colour model have been applied for contrast correction, to increase the true colour using Saturation and to address the illumination problem through Intensity. We compare our results with three well-known methods, namely Gray World, White Patch and Histogram Equalisation using Adobe Photoshop. The proposed method has produced better results than the existing methods.",2010,0, 5247,Software Assumptions Lead to Preventable Errors,"Undocumented assumptions are often the cause of serious software system failure. Thus, to reduce such failures, developers must become better at discovering and documenting their assumptions. In this article, we focus on common categories of assumptions in software, discuss methods for recognizing when developers are making them, and recommend techniques for documenting them.",2009,0, 5248,Unequal Error Protection Schema for Wireless H.264 Video Transmission Based on Perceived Motion Energy Model,"Unequal error protection on video transmission is widely used to combat bit errors in the wireless channel. However, current UEP schemas are based on heuristic approaches and take no account of the characteristics of the human visual system. In this paper, a novel unequal error protection schema for wireless H.264 video transmission based on a modified perceived motion energy (PME) model is presented. According to the human eye's sensitivity to video motion, the proposed modified PME model takes account of the encoding features of the H.264/AVC standard to analyze and model the motions in H.264/AVC encoded video. Based on this model, the video bitstream is divided into several quality layers and unequal error protection is designed to protect the layered bitstream for the transmission over wireless channels.
Experimental results show that higher video transmission quality is obtained.",2008,0, 5249,Fault-Tolerance in Universal Middleware Bridge,The universal middleware bridge (UMB) provides seamless interoperation among heterogeneous home network middleware. There have been high demands for the UMB components (UMB core and adaptors) to have fault-tolerance capabilities. This paper presents a TMO structuring approach together with new implementation techniques for the fault-tolerant TMO-replica structuring scheme called PSTR. PSTR implementations of UMB components provide fault tolerance capabilities essential in realizing high reliability for the UMB facility.,2008,0, 5250,Fault Behavior in Fire Control System Based on Extendable Petri Net,"Using Petri net theory, a synthetic expression method for the static and dynamic fault behavior in fire control system fault diagnosis is developed. The method is convenient and succinct for fault phenomena with multiple hierarchical levels and many propagation routes, and the amount of work in the fault diagnosis system is simplified. The fault information characteristics, fault information model and propagation behavior are expounded. An algorithm for the system fault information characteristics is given. The Petri net model of hierarchical and mixed fault diagnosis is set up, and the fault information propagation route is obtained through system analysis. The dynamic transitions of fault behavior are analyzed to find the system's minimal cut sets and paths.",2010,0, 5251,Study on Computer-Aided Fault Tree Construction for Geological Disasters,"Using the object-oriented platform software OEC, an expert system for aiding the construction of fault trees for geological disasters is developed. A friendly user interface is provided by the system. The system also affords the user other functions, such as knowledge base maintenance, reasoning-trace following and help. By means of the computer expert system for aiding fault tree construction, non-experts can construct fault trees for geological disasters at the level of an expert, and subsequently analyze the causes of the accidents. The principle, construction process and important functions of computer-aided fault tree construction are discussed in this paper.",2009,0, 5252,A Model-based Simulation Approach to Error Analysis of IT Services,"Utility computing environments provide on-demand IT services to customers. Such environments are dynamic in nature and continuously adapt to changes in requirements and system state. Errors are an important category of environment state changes, as such environments consist of a large number of components and hence are subject to errors. In this paper, we design and implement a model-based simulation framework that leverages information about existing service components and their interactions, and provides concrete service behavior in the presence of a variety of errors. To evaluate the framework, experiments are conducted on a virtualized blade-server based environment. Results show that the framework is effective and practical in analyzing error impacts on IT services.",2007,0, 5253,History-based weighted average voter: a novel software voting algorithm for fault-tolerant computer systems,"Voting algorithms have been widely used in the realisation of fault-tolerant systems. We propose a new software voting algorithm which uses the history record of redundant modules to compute the final output.
Two versions of the novel algorithm are introduced. In the first version, any module result is assigned a weighting value such that module results with a higher history record value are assigned a higher weighting value than those with a lower history record value. In the second version of the novel voter, those module results which have a history record value less than the average record value are allocated a weight of zero and removed from the contribution toward the voter output. Furthermore, a novel method for the creation of a history record of modules is proposed. Empirical results show that both versions of the novel voter give higher safety performance than the Standard Weighted Average voter under permanent and transient errors.",2001,0, 5254,AMISP: a complete content-based MPEG-2 error-resilient scheme,"We address a new error-resilient scheme for broadcast quality MPEG-2 video streams to be transmitted over lossy packet networks. A new scene-complexity adaptive mechanism, namely Adaptive MPEG-2 Information Structuring (AMIS), is introduced. AMIS modulates the number of resynchronization points (i.e., slice headers and intra-coded macroblocks) in order to maximize the perceived video quality, assuming that the encoder is aware of the underlying packetization scheme, the packet loss ratio (PLR), and the error-concealment technique implemented at the decoding side. The end-to-end video quality depends both on the encoding quality and the degradation due to data loss. Therefore, AMIS constantly determines the best compromise between the rate allocated to encode pure video information and the rate aiming at reducing the sensitivity to packet loss. Experimental results show that AMIS dramatically outperforms existing structuring techniques, thanks to its efficient adaptivity. We then extend AMIS with a forward-error-correction (FEC)-based protection algorithm to become AMISP. AMISP triggers the insertion of FEC packets in the MPEG-2 video packet stream. Finally, the performance of the AMISP scheme in an MPEG-2 over RTP/UDP/IP scenario is evaluated.",2001,0, 5255,Coding scheme for low energy consumption fault-tolerant bus,"We address the problem of devising the error correcting code which, if used to encode the information on a very deep submicron (VDSM) bus, allows us to achieve fault-tolerance with the minimal impact on bus power consumption and power-delay product. In particular, we first report the results of an analysis that we performed on power dissipation in VDSM fault-tolerant busses using Hamming single error correcting codes. We show that no power saving is possible by choosing between different optimal Hamming codes with the same redundancy. We then propose a new coding scheme which provides a reduction in energy consumption and power-delay product of over 11.5% and 45%, respectively, with respect to the optimal (7,4) Hamming code, for a 0.13 μm CMOS technology bus.",2002,0, 5256,Improvement of myocardial perfusion defect severity quantitation in cardiac SPECT: A simulation study,"We aim at improving the quantitative assessment of the severity of myocardial perfusion defects in cardiac SPECT imaging. The idea of a numerical heart template is utilized, which enables a patient-specific measurement of defect severity as opposed to the more traditional population-based approaches. Using NCAT, we developed three male thorax phantoms with different orientations and sizes of the left ventricle.
Each heart contained a small (5%) inferior wall defect with a severity of 20-80%. The SimSET code was used to perform 21 simulations modeling cardiac SPECT acquisitions with a Tc-99m radiotracer, LEHR collimator, 64×64 matrix, and 60 camera stops. A conventional method (CM) of defect severity assessment included the MLEM reconstruction with 40 iterations and a calculation of the ratio, RCM, of the average activity concentrations in the defect and the normal heart. Our template method (TM) calculates a new ratio, RTM, which is a correction to RCM by rescaling it between two reference levels corresponding to a completely non-perfused defect and a healthy myocardium. These levels are calculated by projecting and reconstructing two numerical heart templates (which may be based on CT in clinical studies) with activity ratios in defect to normal heart set to zero and unity, respectively. The proposed TM method was more accurate and more sensitive to small changes in defect severity than CM. While CM showed no defect (RCM was equal to 0.98-1.02) in the case with 20% severity (true ratio is 0.8), TM led to RTM values of 0.84-0.92. On average, our TM technique exhibited a 17% improvement in defect to normal ratios relative to the CM method. Our proposed method offers a patient-specific assessment of perfusion defect severity in SPECT without the limitations intrinsic to traditional methodologies (e.g. extreme heart geometries).",2009,0, 5257,Error-rate analysis for multirate DS-CDMA transmission schemes,"We analyze and compare the error performance of a dual-rate direct-sequence code-division multiple-access (DS-CDMA) system using multicode (MCD) and variable-spreading gain (VSG) transmission in the uplink. Specifically, we present two sets of results. First, we consider an ideal additive white Gaussian noise channel. We show that the bit-error rate (BER) of VSG users is slightly lower than that of MCD users if the number of low-rate interferers is smaller than a specific threshold. Otherwise, they exhibit similar error performance. Second, we look at multipath fading channels. We show that with diversity RAKE reception, the VSG user suffers from a larger interference power than the MCD user if the channel delay spread is small. The reverse is true for a large delay spread. However, a larger interference power in this case does not necessarily lead to higher error probability. Essentially, our results for both cases show that: 1) in addition to the signal-to-interference ratio (SIR), the difference in error performance between the two systems strongly depends on the distributions of multiple-access and multipath interference; 2) for practical cellular communications, performances for both systems are expected to be similar most of the time.",2003,0, 5258,Impact of channel estimation errors in amplify-and-forward cooperative transmission,"We analyze the impact of channel estimation errors in amplify-and-forward (AF) cooperative communication systems over Rayleigh fading channels with multiple assisting relays. We derive the exact closed-form and approximate expressions of the cumulative distribution function (CDF) of the received signal-to-noise ratio (SNR) for each link. The variance of the channel estimation errors varies for different values of the transmit SNR, dominated by the channel estimation quality order. We also present the approximate expression of the average symbol error rate (SER) under different levels of channel estimation errors by using the moment-generating function (MGF) based approach.
Numerical results confirm that our theoretical analysis for the SER is very accurate.",2010,0, 5259,ExPert: Dynamic Analysis Based Fault Location via Execution Perturbations,"We are designing dynamic analysis techniques to identify executed program statements where a fault lies, i.e. the fault candidate set. To narrow the set of statements in the fault candidate set, automated dynamic analyses are being developed which consider not only a failed run of a program but also execution perturbations of the failed run. The goal of this work is to focus the user's attention on a small subset of statements in the fault candidate set.",2007,0, 5260,Development of component-based normalization correction for the Clear-PEM system,We are developing a component-based normalization correction for the Clear-PEM positron emission mammography system. This system consists of two opposing parallel planar detectors that rotate around the breast. The distance between detector plates can vary to adapt to the patient and scintillation light is read at two ends of the crystals for Depth of Interaction (DOI) information. The normalization model currently accounts for intrinsic and geometric efficiencies using new methods specifically developed for this purpose. Both efficiencies are calculated from data obtained with a planar source that is parallel to the two detector plates. Support for other components (deadtime and DOI) is currently being developed. The whole normalization scheme is in the process of being assessed with real data using planar and cylindrical sources.,2009,0, 5261,Real-Time Distributed Discrete-Event Execution with Fault Tolerance,"We build on PTIDES, a programming model for distributed embedded systems that uses discrete-event (DE) models as program specifications. PTIDES improves on distributed DE execution by allowing more concurrent event processing without backtracking. This paper discusses the general execution strategy for PTIDES, and provides two feasible implementations. This execution strategy is then extended with tolerance for hardware errors. We take a program transformation approach to automatically enhance DE models with incremental checkpointing and state recovery functionality. Our fault tolerance mechanism is lightweight and has low overhead. It requires very little human intervention. We incorporate this mechanism into PTIDES for efficient execution of fault-tolerant real-time distributed DE systems.",2008,0, 5262,Correction of in-band self-interference due to imperfect frequency synthesizer,"We consider a synthesizer proposed for multiband OFDM. This paper presents a simple way to correct the in-band defect caused by the non-ideal frequency synthesizer, in the case of one user. This compensation is a first step toward the design of a less constrained synthesizer.",2009,0, 5263,Toward Optimal Network Fault Correction via End-to-End Inference,"We consider an end-to-end approach to inferring network faults that manifest in multiple protocol layers, with an optimization goal of minimizing the expected cost of correcting all faulty nodes. Instead of first checking the most likely faulty nodes as in conventional fault localization problems, we prove that an optimal strategy should start with checking one of the candidate nodes, which are identified based on a potential function that we develop. We propose several efficient heuristics for inferring the best node to be checked in large-scale networks.
By extensive simulation, we show that we can infer the best node in at least 95% of cases, and that checking first the candidate nodes rather than the most likely faulty nodes can decrease the checking cost of correcting all faulty nodes by up to 25%.",2007,0, 5264,ATPG for timing-induced functional errors on trigger events in hardware-software systems,We consider timing-induced functional errors in inter-process communication. We present an Automatic Test Pattern Generation (ATPG) algorithm for the co-validation of hardware-software systems. Events on trigger signals (signals contained in the sensitivity list of a process) implement the basic synchronization mechanism in most hardware-software description languages. Timing faults on trigger signals can have a serious impact on system behavior. We target timing faults on trigger signals by enhancing a timing fault model proposed in previous work. The ATPG algorithm which we present targets the new timing fault model and provides significant performance benefits over manual test generation which is typically used for co-validation.,2002,0, 5265,Multi-error-correcting amplitude damping codes,"We construct new families of multi-error-correcting quantum codes for the amplitude damping channel. Our key observation is that, with proper encoding, two uses of the amplitude damping channel simulate a quantum erasure channel. This allows us to use concatenated codes with quantum erasure-correcting codes as outer codes for correcting multiple amplitude damping errors. Our new codes are degenerate stabilizer codes and have parameters which are better than the amplitude damping codes obtained by any previously known construction.",2010,0, 5266,Experimental checking of fault susceptibility in a parallel algorithm,We deal with the problem of analyzing the fault susceptibility of a parallel algorithm designed for a multiprocessor array (MIMD structure). This algorithm realizes quite a complex communication protocol in the system. We present an original methodology for the analysis based on the use of a software-implemented fault injector. The considered algorithm is modeled as a multithreaded application. The experimental setup and results are presented and commented upon. The performed experiments proved the relatively high natural robustness of the analyzed algorithm and showed further possibilities for its improvement.,2002,0, 5267,Exact bit-error probability for optimum combining with a Rayleigh fading Gaussian cochannel interferer,"We derive expressions for the exact bit-error probability (BEP) for the detection of coherent binary phase-shift keying signals by the optimum combiner employing space diversity when both the desired signal and a Gaussian cochannel interferer are subject to flat Rayleigh fading. Two different methods are employed to reach two different, but numerically identical, expressions. With the direct method, the conditional BEP is averaged over the fading of both signal and interference. With the moment generating function based method, expressions are derived from an alternative representation of the Gaussian Q-function.",2000,0, 5268,Tight exponential upper bounds on the ML decoding error probability of block codes over fully interleaved fading channels,"We derive tight exponential upper bounds on the decoding error probability of block codes operating over fully interleaved Rician fading channels, coherently detected and maximum-likelihood decoded.
It is assumed that the fading samples are statistically independent and that perfect estimates of these samples are provided to the decoder. These upper bounds on the bit and block error probabilities are based on certain variations of the Gallager bounds. These bounds do not require integration in their final version and they are reasonably tight in a certain portion of the rate region exceeding the cutoff rate of the channel. By establishing interconnections between these bounds, we show that they are generalized versions of some reported bounds for the binary-input additive white Gaussian noise channel.",2003,0, 5269,An error tolerant software equipment for human DNA characterization,We describe a learning algorithm for the prediction of splice site locations in human DNA in the presence of sequence annotation errors in the training data. Experimental results on a common dataset including errors are reported. We also give an efficient implementation. The resulting software package is publicly available.,2004,0, 5270,Traveling wave-based fault location experiences,"Transmission utility companies continuously strive for high availability of their lines. When a line is out of service following a permanent fault, a significant amount of the time needed to restore the line can be attributed to locating the point of the fault. This paper shows how this time can be significantly reduced using highly accurate traveling wave (TW) fault locators. It describes the location results for several faults in three separate transmission lines. The collected data are compared to those obtained by considering one- and two-end impedance location algorithms for the same faults. It also highlights the unique characteristics of Reason fault locators in overcoming common problems of other TW solutions existing in the market.",2010,0, 5271,An Analysis Based on Fault Injection of Hardening Techniques for SRAM-Based FPGAs,"Triple Modular Redundancy (TMR) is recognized as one of the possible solutions to harden circuits implemented on SRAM-based FPGAs against soft-errors affecting configuration memory and user memory. Several works have already shown cross-section figures confirming the soundness of the TMR principle; however, some faults still escape the TMR's fault masking mechanism. In this work we analyzed the TMR architecture by means of extensive fault-injection experiments. We identified some of the causes that are responsible for the escaped faults, and we proposed possible solutions. In our analyses we considered both the TMR and one of its enhanced versions, the XTMR.",2006,0, 5272,Assessment and implementation of NOAA NWP-based tropospheric correction model,"Tropospheric delay is one of the dominant Global Positioning System (GPS) errors, which degrades the positioning accuracy. Recent developments in tropospheric modeling rely on implementation of more accurate Numerical Weather Prediction (NWP) models. In North America one of the NWP-based tropospheric correction models is the NOAA model, which has been developed by the US National Oceanic and Atmospheric Administration (NOAA). Because of its potential to improve the GPS positioning accuracy, the NOAA tropospheric correction model became the focus of many researchers. In this paper, we analyzed the performance of the NOAA tropospheric correction model and examined its effect on the precise point positioning (PPP) solution.
We generated three-year-long tropospheric zenith total delay (ZTD) data series for the NOAA model, the Hopfield model, and the IGS final tropospheric correction product, respectively. These data sets were generated at ten IGS reference stations spanning Canada and the United States. We analyzed the NOAA ZTD data series and compared them with those of the Hopfield model. The IGS final tropospheric product was used as a reference. The analysis shows that the performance of the NOAA model is a function of both season (time of the year) and geographical location. However, its performance was superior to that of the Hopfield model in all cases. We further investigated the effect of implementing the NOAA model on the PPP solution convergence and accuracy, which again showed superior performance in comparison with the Hopfield model.",2009,0, 5273,A Trustworthy Network Fault Diagnosis Approach,"A trustworthy network fault diagnosis approach is a critical management item for enhancing network trustworthiness. Aiming at gaining highly trustworthy fault diagnosis in the Internet, we present a trustworthy fault diagnosis approach based on the integration of an Artificial Neural Network and Rule-based Reasoning. Supported by a hierarchical and distributed multi-domain topology, a reasoning rule matrix and its operations are studied to acquire parallel reasoning capability. Moreover, the quantitative trustworthy degree is defined, and information entropy is applied to define the threshold function marked on arcs and nodes in the Artificial Neural Network. Our approach possesses higher parallel capability, guaranteed by the matrix operations, and trustworthiness, guaranteed by the trustworthy degree definition and its calculation using the Artificial Neural Network. Further, it is general, so it can be transplanted into various application fields.",2009,0, 5274,Iterative (TURBO) IQ Imbalance Estimation and Correction in BICM-ID for Flat Fading Channels,"The TURBO principle has been exploited gainfully to implement many receiver functions. RF front-end impairments are a serious issue in highly spectrally efficient applications. IQ imbalance is one of these impairments, and in this work we study the issue of IQ imbalance correction using baseband signal processing techniques. In particular, we propose an estimation technique based on the EM algorithm. Such a technique is developed rather intuitively for the case of a Bit Interleaved Coded Modulation - Iterative Detection (BICM-ID) receiver for burst mode communications. The resulting TURBO IQ decorrelator, embedded in the BICM-ID loop, is blind in the sense that it does not require any training symbols or tones. Performance is simulated for 64QAM under flat fading channel conditions.",2007,0, 5275,Ensemble Dependent Matrix Methodology for Probabilistic-Based Fault-tolerant Nanoscale Circuit Design,"Two probabilistic-based models, namely the ensemble-dependent matrix model (Chen and Li, 2006), (Patel et al., 2003) and the Markov random field model (Chen et al., 2003), have been proposed to deal with faults in nanoscale systems. The MRF design can provide excellent noise tolerance in nanoscale circuit design. However, it is complicated to apply to modeling circuit behavior at the system level. The ensemble-dependent matrix methodology is more effective and suitable for CAD tool development and for optimizing nanoscale circuit and system design. In this paper, we show that the ensemble-dependent matrices describe the actual circuit performances when signal errors are present.
We then propose a new criterion to compare circuit error-tolerance capability. We also prove that the matrix model and the Markov model converge when signals are digital.",2007,0, 5276,A comparison of phase space reconstruction and spectral coherence approaches for diagnostics of bar and end-ring connector breakage faults in polyphase induction motors using current waveforms,"Two signal (waveform) analysis approaches are investigated in this paper for motor drive fault identification, one linear and the other nonlinear. Twenty-one different motor-drive operating conditions, including healthy, 1 through 10 broken bars, and 1 through 10 broken end-ring connectors, are investigated. Highly accurate numerical simulations of current waveforms for the various operating conditions are generated using the time stepping coupled finite element-state space method for a 208-V, 60-Hz, 2-pole, 1.2-hp, squirrel cage 3-phase induction motor. The linear signal analysis method is based on spectral coherence, whereas the nonlinear signal analysis method is based on stochastic models of reconstructed phase spaces. Conclusions resulting from the comparisons of these two methods are drawn.",2002,0, 5277,A hybrid scatter correction for 3D PET based on an estimation of the distribution of unscattered coincidences: implementation on the ECAT EXACT HR+,"We implemented a hybrid scatter correction method for 3D PET that combines two scatter correction methods in a complementary way. The implemented scheme uses a method based on the discrimination of the energy of events (the estimation of trues method, ETM) and an auxiliary method (the single scatter simulation method, or the convolution-subtraction method), in an attempt to increase the accuracy of the correction over a wider range of acquisitions. The ETM takes into account the scatter from outside the field-of-view (FOV), which is not estimated with the auxiliary method. On the other hand, the auxiliary method accounts for events that have scattered with small angles, which have an energy that cannot be discriminated from that of unscattered events using the ETM. The ETM uses the data acquired in an upper energy window above the photopeak (550-650 keV) to obtain a noisy estimate of the unscattered events in the standard window (350-650 keV). Our implementation uses the auxiliary method to correct the residual scatter in the upper window. After appropriate scaling, the upper window data is subtracted from the total coincidences acquired in the standard window, resulting in the final scatter estimate, after smoothing. We compare the hybrid method with the corrections used by default in the 2D and 3D modes of the ECAT EXACT HR+, using phantom measurements. Generally, the contrast was better with the hybrid method, although the relative errors of quantification were similar. We conclude that hybrid techniques such as the one implemented in this work can provide an accurate, general-purpose and practical way to correct the scatter in 3D PET, taking into account the scatter from outside the FOV.",2000,0, 5278,Theoretical Lower Error Bound for Comparative Evaluation of Sensor Arrays in Magnetostatic Linear Inverse Problems,"We introduce a theoretical lower error bound for solutions to magnetostatic linear inverse problems and we propose it as a figure of merit for the comparative evaluation of sensor arrays.
With the help of the proposed error bound, we demonstrate the superiority of three-axial biomagnetic sensor arrays by applying truncated singular value decomposition analysis to a kernel matrix computed from boundary-element-method (BEM) models of the human torso for a biomagnetic application. In simulations, we found that, for a more complex five-compartment BEM model, the advantage of using three-axial measurements is more pronounced, compared to a three-compartment BEM model.",2006,0, 5279,Identifying efficient combinations of error detection mechanisms based on results of fault injection experiments,"We introduce novel performance ratings for error detection mechanisms. Given a proper setup of the fault injection experiments, these ratings can be directly computed from raw readout data. They allow the evaluation of the overall performance of arbitrary combinations of mechanisms without the need for further experiments. By this means we can determine a minimal subset of mechanisms that still provides the required performance.",2002,0, 5280,A different view of fault prediction,"We investigated a different mode of using the prediction model to identify the files associated with a fixed percentage of the faults. The tester could ask the tool to identify which files are likely to contain the bulk of the faults, with the tester selecting any desired percentage of faults. Again the tool would return a list ordered in decreasing order of the predicted numbers of faults in the files the model expects to be most problematic. If the number of files identified is too large, the tester could reselect a smaller percentage of faults. This would make the number of files requiring particular scrutiny manageable. We expect both modes to be valuable to professional software testers and developers.",2005,0, 5281,Spectral RTL Test Generation for Gate-Level Stuck-at Faults,"We model RTL faults as stuck-at faults on primary inputs, primary outputs, and flip-flops. Tests for these faults are analyzed using Hadamard matrices for Walsh functions and the random noise level at each primary input. This information then helps generate vector sequences. At the gate level, a fault simulator and an integer linear program (ILP) compact the test sequences. We give results for four ITC'99 and four ISCAS'89 benchmark circuits, and an experimental processor. The RTL spectral vectors performed equally well on multiple gate-level implementations. Compared to a gate-level ATPG, RTL vectors produced similar or higher coverage in shorter CPU times.",2006,0, 5282,Distributed fault diagnostics for tactical networks,"We present a design and an evaluation of a distributed fault diagnostic system (FDS) that copes with changing wireless network topology, the complexity and size of fault propagation patterns, constrained bandwidth, and the limited computing power of mobile devices. The presented FDS consists of several components: a run-time synthesis algorithm to generate a network-wide fault propagation model (FPM), scalable Bayesian inference algorithms, and novel techniques for optimally distributing inference to ensure the scalability of our approach. We describe three algorithms for distributing inference, each of them using a different technique for maximizing the fault-symptom locality: the Fault-based Adaptive algorithm, the Topology-based Adaptive algorithm, and the Topology-based Probabilistic algorithm.
We have evaluated the performance of the proposed approach in a simulated environment using abstract models of a real-life tactical network, and compared it to a centralized approach. We found that our techniques allow for a significant gain in processing time (a 30-times improvement for the best performing technique), and exhibit only a minimal reduction (3 percentage points) in the accuracy of the fault diagnostics.",2010,0, 5283,Re-engineering fault tolerance requirements: a case study in specifying fault tolerant flight control systems,We present a formal specification of fault tolerance requirements for an analytical redundancy based fault tolerant flight control system. The development of the specification is driven by the performance and fault tolerance requirements contained in the US Air Force military specification MIL-F-9490D. The design constraints imposed on the system by adopting the analytical redundancy approach are captured within the specification. We draw some preliminary conclusions from our study.,2001,0, 5284,Implementation of quantum corrections in a 3D parallel drift-diffusion simulator,"We describe an implementation of density-gradient quantum corrections in a 3D drift-diffusion (D-D) semiconductor simulator based on the finite element method. Mesh efficiency of the 3D semiconductor device simulator with quantum mechanical corrections is achieved by parallelisation of the code for a distributed-memory multiprocessor environment. The Poisson equation, the current continuity equation, and the density gradient equation with an appropriate finite element discretisation have to be solved iteratively. Moreover, parallel algorithms are employed to speed up the self-consistent solution. In order to test our 3D semiconductor device simulator, we have carried out a careful calibration against experimental I-V characteristics of a 67 nm Si MOSFET, achieving excellent agreement. Then we demonstrate the relative impact of quantum mechanical corrections in this device.",2007,0, 5285,Hybrid fault-tolerant control of aerospace vehicles,"We describe our recent results (2001) related to the design of hybrid online failure detection and identification and adaptive reconfigurable control algorithms for aerial and space vehicles. Our approach is based on the multiple models, switching and tuning methodology and its extensions, and has been demonstrated as an efficient tool for hybrid fault tolerant control under subsystem and component failures and structural damage.",2001,0, 5286,The design and implementation of a fault-tolerant RPC system: Ninf-C,"We describe the design and implementation of a fault-tolerant GridRPC system, Ninf-C, designed for easy programming of large-scale master-worker programs that take from a few days to a few months to execute in a grid environment. Ninf-C employs Condor, developed at the University of Wisconsin, as the underlying middleware supporting remote file transmission and checkpointing for system-wide robustness for application users on the grid. Ninf-C layers all the GridRPC communication and task parallel programming features on top of Condor in a non-trivial fashion, assuming that the entire program is structured in a master-worker style; in fact, older Ninf master-worker programs can be run directly or trivially ported to Ninf-C.
In contrast to the original Ninf, Ninf-C exploits and extends Condor features extensively for robustness and transparency, such as 1) checkpointing and stateful recovery of the master process, 2) the master and workers mutually communicating using (remote) files, not IP sockets, and 3) automated throttling of parallel GridRPC calls; and in contrast to using Condor directly, programmers can set up complex dynamic workflows as well as master-worker parallel structures with almost no learning curve involved. To prove the robustness of the system, we performed an experiment on a heterogeneous cluster that consists of x86 and SPARC CPUs, and ran a simple but long-running master-worker program with staged rebooting of multiple nodes to simulate serious fault situations. The program execution finished normally despite all the fault scenarios, demonstrating the robustness of Ninf-C.",2004,0, 5287,Software release control using defect based quality estimation,"We describe two case studies to investigate the application of a state variable model to control the system test phase of software products. The model consists of two components: a feedback control portion and a model parameter estimation portion. The focus in this study is on the assessment of the goodness of the estimates and predictions of the model parameters and their utility in the management of the system test phase. Two large network management applications developed and tested at Sun Microsystems served as the subjects in these studies. Unlike the release of products based on marketing or deadline pressure, estimates of the number of residual defects are used to control the quality of the product being released. The estimates of the number of defects in the application when the test phase began and at the current checkpoint are obtained. In addition, a prediction is made regarding the reduction in the number of remaining defects over the remaining period. The estimates and predictions assist management in planning the test phase and allow inferring the level of customer support needed subsequent to product release. The results of both case studies are satisfactory and, when viewed in light of other studies conducted at Sun Microsystems, show the applicability of the state variable model to the management of the software test process.",2004,0, 5288,Uncertainty of Timebase Corrections,We develop a covariance matrix describing the uncertainty of a new timebase for waveform measurements determined with the National Institute of Standards and Technology's timebase correction algorithm. This covariance matrix is used with covariance matrices associated with other random and systematic effects in the propagation of uncertainty for the measured waveform.,2009,0, 5289,Fault tolerant control of spacecraft in the presence of sensor bias,"We develop a stable scheme for bias estimation in the case of attitude tracking. The scheme is based on the design of nonlinear observers for unknown bias identification and state estimation. In the case of gyro bias, our nonlinear observer design, based on the quaternion dynamics, leads to an error model and adjustment laws that result in guaranteed convergence of the unknown bias estimates to their true values. We demonstrate that our scheme results in a stable overall system, and achieves highly accurate pointing in the presence of unknown sensor bias.
The properties of the proposed scheme are evaluated through simulations using a generic spacecraft model",2000,0, 5290,Dynamic Causality Diagram in Vehicular Engine's Fault Diagnosis,"We discuss the knowledge expression, reasoning and probability computing in the dynamic causality diagram, which is developed from the belief network and overcomes some of the shortcomings of belief networks. The model of the causality diagram used for the vehicular engine's fault diagnosis is brought forward, and the model construction method and reasoning algorithm are also presented. Finally, an application example in vehicular engine fault diagnosis is given which shows that the method is effective.",2009,0, 5291,Adaptive shape and texture intra refreshment schemes for improved error resilience in object-based video coding,"Video encoders may use several techniques to improve error resilience. In particular, for video encoders that rely on predictive (inter) coding to remove temporal redundancy, intra coding refreshment is especially useful to stop temporal error propagation when errors occur in the transmission or storage of the coded streams, since these errors may cause the decoded quality to decay very rapidly. In the context of object-based video coding, intra coding refreshment can be applied to both the shape and texture data. In this paper, novel shape and texture intra refreshment schemes are proposed which can be used by object-based video encoders, such as MPEG-4 video encoders, independently or combined. These schemes make it possible to adaptively determine when the shape and texture of the various video objects in a scene should be refreshed in order to maximize the decoded video quality for a certain total bit rate.",2004,0, 5292,Design and analysis of a fault-tolerant mechanism for a server-less video-on-demand system,"Video-on-demand (VoD) systems have traditionally been built on the client-server architecture, where a video server stores, retrieves, and transmits video data to video clients for playback. This paper investigates a radically different approach to building VoD systems, one where the server, and hence the primary bottleneck, is completely eliminated. This server-less architecture comprises homogeneous hosts, called nodes, which serve both as client and as mini-server. Video data are distributed over all nodes and these nodes cooperatively stream video data to one another for playback. However, unlike a traditional video server that runs on high-end server hardware in a carefully controlled and protected data centre, a node in a server-less system is likely to be far more unreliable. Therefore it is essential that sufficient data and capacity redundancies are incorporated to maintain an acceptable service reliability. This paper presents and analyzes a fault tolerant mechanism based on inter-node striping and erasure correction codes to tackle this challenge. By formulating the system's reliability as a Markov chain model, we obtain insights into the feasible operating region of the system, such as the amount of redundancy required and the node-level reliability that can be tolerated.
Numerical results show that a server-less VoD system of 200 nodes can achieve reliability surpassing that of a dedicated video server using a redundancy overhead of only 21.2%, even though individual nodes are highly unreliable.",2002,0, 5293,Design on the Fault Diagnostic System Based on Virtual Instrument Technique,"Virtual instrument (VI) is one of the most prevalent technologies in the domain of testing, control and fault diagnosis systems. In order to entirely update the means of testing shipboard equipment, a fault diagnostic system for shipboard equipment is developed based on VI technology, Delphi and a database. The performance and constituents of VI are introduced briefly. Modularization and universalization are proposed in its database-based design concept, realizing the design of the software and hardware. The ODBC technique is applied for the interconnection of databases to ensure the generality and flexibility of the system. The aim of this design is to resolve the problems existing in the usage of testing equipment. The system made the best use of the VI platform and the grey diagnosis method, broke through conventional check-diagnosis patterns for warship equipment, and solved the problems of state prediction and trouble-mode recognition of warship equipment. It has been proved through application that the system has merits in both testing speed and accuracy.",2009,0, 5294,"""A Bug's Life"" Visualizing a Bug Database","Visualization has long been accepted as a viable means to comprehend large amounts of information. Especially in the context of software evolution, a well-designed visualization is crucial to be able to cope with the sheer amount of data that needs to be analyzed. Many approaches have been investigated to visualize evolving systems, but most of them focus on structural data and are useful to answer questions about the structural evolution of a system. In this paper we consider an often neglected type of information, namely the one provided by bug tracking systems, which store data about the problems that various people, from developers to end users, detected and reported. We first briefly introduce the context by reporting on the particularities of the present data, and then propose two visualizations to render bugs as first-level entities.",2007,0, 5295,Extended Minimum Classification Error Training in Voice Activity Detection,"Voice activity detection (VAD) is a fundamental part of speech processing. Combination of multiple acoustic features is an effective approach to make VAD more robust against various noise conditions. Several feature combination methods have been proposed in which weights for feature values are optimized based on minimum classification error (MCE) training. We improve these MCE-based methods by introducing a novel discriminative function for whole frames. The proposed method optimizes combination weights taking into account the ratio between false acceptance and false rejection rates as well as the effect of the use of shaping procedures such as hangover.",2009,0, 5296,Vehicle fault diagnostics using a sensor fusion approach,"Vehicle electronics and mechanical systems continue to become more complex and interdependent. Automotive electronic control units (ECUs) used in vehicle sub-systems execute high performance algorithms requiring robust fault detection and diagnostics.
Our technical approach begins by presenting an overview of several techniques used for processing information from multiple input sources (sensors in particular) in the context of ECU fault detection and diagnostic schemes. We assert that inter-relationships between groups of sensors can be exploited through sensor fusion for signal integrity estimation, and present two vehicle application examples (chassis and powertrain). We propose a virtual fusion/estimation technique which provides basic signal redundancy and fault tolerance. Dynamic vehicle sensor information was used to develop an uncompounded fusion-processing algorithm, expressed through a Matlab/Simulink model. The results of these modeling experiments are presented and subsequent conclusions are drawn regarding the implementation of a sensor fusion-based or sensor fusion-enhanced ECU algorithm as part of a more comprehensive diagnostic, control and service methodology.",2002,0, 5297,A fault model for fault injection analysis of dynamic UML specifications,"Verification and validation (V&V) tasks, as applied to software specifications, enable early detection of analysis and design flaws prior to implementation. Several fault injection techniques for software V&V have been proposed at the code level. In this paper, we address V&V analysis methods based on fault injection at the software specification level. We present a fault model and a fault injection process for UML dynamic specifications. We use a case study based on a cardiac pacemaker for illustrating the developed approach.",2001,0, 5298,Statistical algorithms in fault detection and prediction: Toward a healthier network,"Very high reliability/availability at affordable cost requires a proactive approach to system faults and failures. This calls for sophisticated fault detection algorithms that ultimately could evolve into fault prediction strategies. This paper presents statistical algorithms, the Operational Fault Detection (OFD) class of algorithms, toward reaching these goals. OFD algorithms analyze system performance metrics to detect fault signatures. The concept behind OFD is to raise alarms for conditions that adversely impact customer revenue or system performance. Initial versions of OFD, deployed in the field, count meaningful events and raise alarms when a test statistic, based on the event counts, exceeds a predefined threshold. Setting the thresholds required human intervention. This is considered time-consuming by our customers, even though the concepts of OFD have been well received. This paper suggests a new generation, the second-generation OFD, that is inherently adaptive and requires minimal human intervention. These new algorithms are designed to detect system performance degradations, paving the way to more mature fault prediction strategies. Detecting degradations is a precursor to fault prediction, as degradations are often early signatures of potentially catastrophic faults.",2005,0, 5299,Self-Organizing Map-Based Fault Dictionary Application Research on Rolling Bearing Faults,"The vibration signal resulting from rolling bearing defects presents rich physical information, and appropriate analysis methods can lead to the clear identification of the nature of the fault. A novel procedure is presented for the construction of a fault diagnosis dictionary through a self-organizing map (SOM).
The experiments show that the bearing fault diagnosis dictionary can be effectively applied in vibration pattern recognition for a roller bearing system.",2008,0, 5300,An Efficient Technique for Error-Free Implementation of H.264 Using Algebraic Integer Encoding,"Video coding technology plays a key role in various multimedia applications. H.264 is the newest video coding standard and has achieved a significant improvement in coding efficiency. The 4×4 integer transform, as one of the key techniques in the H.264 video compression standard, is very important for the overall performance of the H.264 codec. In this paper we propose a novel algorithm for fast and error-free (infinite-precision) implementation of H.264 based on an algebraic integer encoding scheme. The proposed algorithm has a regular structure. Simulation results show that this algorithm reduces computational complexity while simultaneously enhancing the quality of the obtained image. Determining the quality of an image is an open problem that is highly dependent on the specific application the image will be used for. We propose new quantities for measuring image quality.",2010,0, 5301,Clinical evaluation of real-time phase-aberration correction system [medical ultrasound],"We have developed a phase-aberration correction system that correlates signals in real time. In this paper we evaluate the clinical performance of the system in vivo. This system 1) constructs a cross-sectional image and 2) calculates the correlation of signals at neighboring sensor elements. Both 1) and 2) are carried out in real time. The system was used for imaging of living tissue. First, the beam was formed using initial focus delay settings and a real-time cross-sectional image was constructed. The correlation between neighboring signals was calculated simultaneously with the beam formation and the time difference between the pairs of signals was acquired. The time difference was then used to compensate for the initial delay. The image of the living tissue was substantially improved after the compensation. A further experiment is in progress to collect a statistically significant number of clinical results",2000,0, 5302,Automatic Recognition of Defect Signatures and Notification of Tool Malfunctions - IECON'06,"We have developed an automatic method that efficiently detects the defect signatures of substrates and identifies possible problems pertaining to LSI/TFT-LCD manufacturing processes and tools. This system, which has no built-in libraries, can be applied to the mass production line of thin film devices. This method is useful for quickly detecting problems that have been overlooked thus far",2006,0, 5303,On-line sensor calibration and error modeling using single actuator stimulus,"We have developed an on-line in-field nonparametric calibration and error modeling approach. The approach employs a single excitation source as the external stimulus to create differential sensor readings. Under very mild assumptions imposed on the calibration functions, the error model and the environment model, the technique utilizes the maximum likelihood principle and a nonlinear function minimization to derive simultaneously both the calibration function and the error model of a specified accuracy. Resubstitution is then used in order to establish the interval of confidence.
The approach is intrinsically localized and we present two variants: i) one where only pairs of neighboring sensors communicate in order to conduct calibration and construct the error model; ii) one where a provably minimum amount of communication is achieved. While the idea of employing external actuators to conduct calibration is generic in the sense that it can be applied to any sensor modality, in this paper we demonstrate and evaluate the approach using traces from light sensors and acoustic signal-based distance measurements recorded by in-field deployed sensors.",2009,0, 5304,Soft-Errors Phenomenon Impacts on Design for Reliability Technologies,"We will mainly address here the ""alter ego"" of quality, namely reliability, which is becoming a growing concern for designers using the latest technologies. After the DFM nodes at 90nm and 65nm, we are entering the DFR era, or Design For Reliability, straddling from 65nm to 45nm and beyond. Because of the random character of reliability (failures can happen anytime, anywhere), executives should mitigate reliability problems in terms of risk, whose costs include the cost of recalls, warranty costs, and loss of goodwill.",2007,0,6399 5305,Fault Tolerance Connectors for Unreliable Web Services,"Web Services are commonly used to implement service oriented architectures/applications. Service-oriented applications are large-scale distributed applications, typically highly dynamic, by definition loosely coupled and often unstable due to the unreliability of Web Services, which can be moved, deleted, and are subject to various sources of failures. In this paper, we propose customizable fault-tolerance connectors to add fault-tolerance to unreliable Web Services, thus filling the gap between clients and Web Service providers. Connectors are designed by clients, providers or dependability experts using the original WSDL description of the service. These connectors insert detection actions (e.g. runtime assertions) and recovery mechanisms (based on various replication strategies). The connectors can use identical or equivalent available service replicas. The benefits of this approach are demonstrated experimentally.",2007,0, 5306,Web Services for Automated Fault Analysis in Electrical Power System,"Web Services for Automated Fault Analysis (WS-AFA) in Electrical Power System is described in this paper. WS-AFA is a new solution to investigate and analyze fault and disturbance records from Digital Fault Recorders (DFRs) or other Intelligent Electronic Devices (IEDs) in substations. The paper describes the overall system architecture as well as the implementation of the services. C# and .NET technology has been successfully used for efficient implementation of the Web services. WS-AFA is composed of signal segmentation, signal analysis, fault type classification, fault record viewer and fault location services. Such services are designed to enhance manual investigation performed by engineers in power utilities.",2009,0, 5307,A Fault Tolerant Web Service Architecture,"Web services have been identified as a suitable technology for the development and execution of distributed applications. However, the Web service architecture still lacks facilities to support fault tolerance. The goal of this paper is to propose a fault tolerant Web service architecture. The architecture provides service mediation and monitoring.
The main contribution of this paper is the use of Web service standards to include fault tolerance in the Web service architecture.",2007,0, 5308,Corrective maintenance maturity model (CM3): maintainer's education and training,"What is the point of improving maintenance processes if the most important asset, people, is not properly utilised? Knowledge of the product(s) maintained, maintenance processes and communication skills is very important for achieving quality software and for improving maintenance and development processes. We present CM3: Maintainer's Education and Training, a maturity model for educating and training maintenance engineers. This model is the result of a comparative study of two industrial processes utilised at ABB, and of process models such as IEEE 1219, ISO/IEC 12207, CMM, People CMM, and TickIT",2001,0, 5309,On Using Simplification and Correction Tables for Integrity Maintenance in Integrated Databases,"When a database is defined as views over autonomous sources, inconsistencies with respect to global integrity constraints are to be expected. This paper investigates the possibility of using simplification techniques for integrity constraints in order to maintain, in an incremental way, a correction table of virtual updates which, if executed, would restore consistency; access can be made through auxiliary views that take the table into account. The approach employs assumptions about local source consistency as well as cross-source constraints whenever possible",2006,0, 5310,An Analytic Model for Fault Diagnosis in Power Systems Considering Malfunctions of Protective Relays and Circuit Breakers,"When a fault occurs on a section or a component in a given power system, if one or more of the associated protective relays (PRs) and/or circuit breakers (CBs) do not work properly, or in other words, malfunctions happen with these PRs and/or CBs, the outage area could be extended. As a result, the complexity of the fault diagnosis could be greatly increased. The existing analytic models for power system fault diagnosis do not systematically address the possible malfunctions of PRs and/or CBs, and hence may lead to incorrect diagnosis results if such malfunctions do occur. Given this background, based on the existing analytic models, an effort is made to develop a new analytic model that properly takes into account the possible malfunctions of PRs and/or CBs, and further improves the accuracy of fault diagnosis results. The developed model not only estimates the faulted section(s), but also identifies the malfunctioning PRs and/or CBs as well as the missing and/or false alarms. A software system is developed for practical applications, and realistic fault scenarios from an actual power system are used to demonstrate the correctness of the presented model and the efficiency of the developed software system.",2010,0, 5311,Study on faulty feeder selection methods of single-phase earthed fault in non-solidly grounded systems,"When a single-phase earthed fault happens in non-solidly grounded systems, it is very important to select the faulty feeder rapidly in order to improve the power supply reliability. All currently used detection principles for single-phase earthed faults are analyzed in the paper. The advantages and disadvantages of each selection principle are also summarized.
Finally, the paper presents the author's view on single-phase earthed fault detection: detection methods using transient signals have better sensitivity than those using steady-state signals, and they are applicable not only to Petersen coil grounded systems but also to intermittent arc earthed faults. With the development of modern microelectronic techniques, the transient signals generated by earthed faults can be easily recorded and processed by sophisticated digital algorithms; hence, transient detection methods will be more applicable in future faulty feeder selection devices.",2007,0, 5312,On the relation between design contracts and errors: a software development strategy,"When designing a software module or system, a systems engineer must consider and differentiate between how the system responds to external and internal errors. External errors cannot be eliminated and must be tolerated by the system, while the number of internal errors should be minimized and the resulting faults should be detected and removed. This paper presents a development strategy based on design contracts and a case study of an industrial project in which the strategy was successfully applied. The goal of the strategy is to minimize the number of internal errors during the development of a software system while accommodating external errors. A distinction is made between weak and strong contracts. These two types of contracts are applicable to external and internal errors, respectively. According to the strategy, strong contracts should be applied initially to promote the correctness of the system. Before release, the contracts governing external interfaces should be weakened and error management of external errors enabled. This transformation of a strong contract to a weak one is harmless to client modules",2002,0, 5313,Fast vignetting correction and color matching for panoramic image stitching,"When images are stitched together to form a panorama there is often color mismatch between the source images due to vignetting and differences in exposure and white balance between images. In this paper a low complexity method is proposed to correct vignetting and differences in color between images, producing panoramas that look consistent across all source images. Unlike most previous methods which require complex non-linear optimization to solve for correction parameters, our method requires only linear regressions with a low number of parameters, resulting in a fast, computationally efficient method. Experimental results show the proposed method effectively removes vignetting effects and produces images that are highly visually consistent in color and brightness.",2009,0, 5314,Fault Diagnosis for Analogy Circuits Based on Support Vector Machines,"When it is hard to obtain training samples, a fault classifier based on support vector machines (SVM) can still diagnose faults with high accuracy. It can easily be generalized and put to practical use. In this paper, a fault classifier based on SVM is proposed for analog circuits. It can classify the faults in the target circuit effectively and accurately. In order to test the algorithm, an analog circuit fault diagnosis system based on SVM is designed for the measurement circuit that approximates the square curve with a broken line. After being trained with practical measurement data, the system is shown to be capable of diagnosing faults hidden in real measurement data accurately.
Therefore, the effectiveness of the algorithm is verified.",2009,0, 5315,The Wackamole approach to fault tolerant networks,"We present Wackamole, a high availability tool for clusters of servers. Wackamole ensures that a server handles the requests that arrive on any of the service's public IP addresses. Wackamole is a completely distributed software solution based on a provably correct algorithm that negotiates the assignment of IP addresses among the available servers upon detection of faults and recoveries, and provides N-way fail-over, so that any one of a number of servers can cover for any other. Using a simple algorithm that utilizes strong group communication semantics, Wackamole demonstrates the application of group communication to address a critical availability problem at the core of the system, even in the presence of cascading network or server faults and recoveries. The same architecture is extended to provide a similar service for highly available routers.",2003,0, 5316,Introduction to fault attacks on smartcard,We present what can be achieved by attacks through fault induction on smart cards. We first describe the different means to perform fault attacks on chips and explain how fault attacks on cryptographic algorithms are used to recover secret keys. We next study the impact of fault attacks when focused on the disruption of the functional software layer. We conclude with the overall impact of this type of attack on the smartcard environment and the need for software countermeasures and their limits.,2005,0, 5317,Dynamic Test Compaction for Transition Faults in Broadside Scan Testing Based on an Influence Cone Measure,"We propose a compact test generation method for transition faults, which is driven by a conflict-avoidance scheme employed during test generation. Based on an influence-cone function for transition faults in broadside scan testing, two dynamic test compaction schemes, named selfish test compaction and unselfish test compaction respectively, are proposed. The selfish test compaction tries to compact as many faults as possible into the current test, while the unselfish scheme attempts to compact the tests of the hard-to-compact faults into the current test. Potential conflicts produced by the signal requirements at the pseudo-primary outputs in the first frame are avoided through the use of an input dependency graph. Experimental results and comparison with existing approaches demonstrate the efficiency and effectiveness of the proposed method.",2009,0, 5318,Compression-free Checksum-based Fault-Detection Schemes for Pipelined Processors,"We propose a fault-detection scheme for pipelined, multithreaded processors. The scheme is based on checksums and improves on previous schemes in terms of fault coverage and detection latency by not using compression but storing complete checksums from several pipeline stages. We validate the scheme experimentally and derive checksum polynomials that lead to perfect fault coverage.",2007,0, 5319,A fault tolerant approach to microprocessor design,"We propose a fault-tolerant approach to reliable microprocessor design. Our approach, based on the use of an online checker component in the processor pipeline, provides significant resistance to core processor design errors and operational faults such as supply voltage noise and energetic particle strikes.
We show through cycle-accurate simulation and timing analysis of a physical checker design that our approach preserves system performance while keeping area overheads and power demands low. Furthermore, analyses suggest that the checker is a fairly simple state machine that can be formally verified, scaled in performance, and reused. Further simulation analyses show virtually no performance impacts when our simple checker design is coupled with a high-performance microprocessor model. Timing analyses indicate that a fully synthesized unpipelined 4-wide checker component in 0.25 μm technology is capable of checking Alpha instructions at 288 MHz. Physical analyses also confirm that costs are quite modest; our prototype checker requires less than 6% of the area and 1.5% of the power of an Alpha 21264 processor in the same technology. Additional improvements to the checker component are described which allow for improved detection of design, fabrication and operational faults.",2001,0, 5320,Proxy-Based Reference Picture Selection for Error Resilient Conversational Video in Mobile Networks,We propose a frame dependency management strategy for error robust transmission of conversational video in mobile networks. We consider an end-to-end video transmission scenario that involves both a wireless uplink and a wireless downlink plus some intermediate wireline network transmission. We also investigate the special cases of an end-to-end scenario where only a wireless uplink or a wireless downlink is present. We cope with packet loss on the downlink by retransmitting lost packets from the base station to the receiver for error recovery. Retransmissions are enabled by using fixed-distance reference picture selection during encoding with a prediction distance that corresponds to the round-trip time of the downlink combined with accelerated decoding. We deal with transmission errors on the uplink by sending acknowledgments and predicting the next frame to encode from those slices that have been correctly received by the base station. We show that these two separate approaches for uplink and downlink efficiently complement one another and the resulting end-to-end scheme is characterized by very low computational complexity. We compare our scheme to several state-of-the-art error resiliency approaches and report significant improvements.,2009,0, 5321,Error concealment for spatially Scalable Video Coding using hallucination,"We propose a new error concealment method based on hallucination for Scalable Video Coding with spatial scalability. In this method, parts of the frames which lose the enhancement layer are up-sampled from the base layer and ""hallucinated"" as concealment frames. The database for hallucination is generated from the high-resolution and low-resolution frame-pairs near the lost frames in the video sequence. The effectiveness of hallucination here lies in the similarity between the nearby frames in the video sequence. Experiments show that the proposed method has superior results over the state-of-the-art error concealment method for spatially Scalable Video Coding.",2009,0, 5322,Keynote: Hierarchical Fault Detection in Embedded Control Software,"We propose a two-tiered hierarchical approach for detecting faults in embedded control software during their runtime operation: The observed behavior is monitored against the appropriate specifications at two different levels, namely, the software level and the controlled-system level.
(The additional controlled-system level monitoring safeguards against any possible incompleteness of the software level monitoring.) A software fault is immediately detected when an observed behavior is rejected by a software level monitor. In contrast, when a system level monitor rejects an observed behavior it indicates a system level failure, and an additional isolation step is required to conclude whether a software fault occurred. This is done by tracking the executed behavior in the system model comprising the models for the software and those for the nonfaulty hardware components: An acceptance by such a model indicates the presence of a software fault. The design of both the software-level and system-level monitors is modular and hence scalable (there exists one monitor for each property), and further the monitors are constructed directly from the property specifications and do not require any software or system model. Such models are required only for the fault isolation step when the detection occurs at the system level. We use input-output extended finite automata (I/O-EFA) for software as well as system level modeling, and also for modeling the property monitors. Note that, since the control changes only at the discrete times when the system/environment states are sampled, the controlled system has discrete-time hybrid dynamics which can be modeled as an I/O-EFA.",2008,0, 5323,"A Distributed (Constant of R, 2)-Approximation Algorithm for Fault-Tolerant Facility Location","We propose an approximation algorithm for the problem of Fault-Tolerant Facility Location which is implemented in a distributed and asynchronous manner within O(n) rounds of communication. Here n is the number of vertices in the network. As far as we know, the performance guarantee of similar (centralized) algorithms remains unknown except for a special case where all cities have a uniform connectivity requirement. In this paper, we assume that a shortest-path routing scheme is deployed, as well as a constant (given) size of R, which represents the distinct levels of fault-tolerant capability provided by the system (i.e., distinct connectivity requirements), and prove that the cost of our solution is no more than |R|·F* + 2·C* in the general case, where F* and C* are respectively the facility cost and connection cost in an optimal solution. Furthermore, extensive numerical experiments showed that the quality of our solutions is comparable to the optimal solutions when |R| is no more than 10.",2009,0, 5324,"Cluster delegation: high-performance, fault-tolerant data sharing in NFS","We present cluster delegation, an enhancement to the NFSv4 file system, that improves both performance and recoverability in computing clusters. Cluster delegation allows data sharing among clients by extending the NFSv4 delegation model so that multiple clients manage a single file without interacting with the server. Based on cluster delegation, we implement a fast commit primitive, cooperative caching, and the ability to recover the uncommitted updates of a failed computer. Cluster delegation supports both read and write operations in the cooperative cache, while preserving the consistency guarantees of NFSv4.
We have implemented cluster delegation by modifying the Linux NFSv4 client and show that it improves client performance and reduces server load by more than half.",2005,0, 5325,ConfErr: A tool for assessing resilience to human configuration errors,"We present ConfErr, a tool for testing and quantifying the resilience of software systems to human-induced configuration errors. ConfErr uses human error models rooted in psychology and linguistics to generate realistic configuration mistakes; it then injects these mistakes and measures their effects, producing a resilience profile of the system under test. The resilience profile, capturing succinctly how sensitive the target software is to different classes of configuration errors, can be used for improving the software or to compare systems to each other. ConfErr is highly portable, because all mutations are performed on abstract representations of the configuration files. Using ConfErr, we found several serious flaws in the MySQL and Postgres databases, Apache web server, and BIND and djbdns name servers; we were also able to directly compare the resilience of functionally-equivalent systems, such as MySQL and Postgres.",2008,0, 5326,Evaluation of replication and fault detection in P2P-MPI,"We present in this paper an evaluation of fault management in the grid middleware P2P-MPI. One of P2P-MPI's objectives is to support environments using commodity hardware. Hence, running programs is failure prone and particular attention must be paid to fault management. The fault management covers two issues: fault-tolerance and fault detection. P2P-MPI provides a transparent fault tolerance facility based on replication of computations. Fault detection concerns the monitoring of the program execution by the system. The monitoring is done through a distributed set of modules called failure detectors. In this paper, we report results from several experiments which show the overhead of replication, and the cost of fault detection.",2009,0, 5327,The role of defects on CdTe detector performance,"We present results from a characterisation of bulk defects in CdTe wafers, and their role in the degradation of charge transport performance of CdTe radiation detectors. Sub-bandgap IR microscopy and X-ray Lang topography have been used to characterise material quality prior to device processing. IR microscopy clearly identifies extended defects such as tellurium precipitates in the material bulk, whilst Lang topography characterises stacking faults, crystallite boundaries and other crystallographic features in the near-surface region. After fabrication of contacts onto the material, ion beam induced charge imaging is used to investigate the correlations between material defects and charge transport. Digital ion beam induced charge imaging is used to produce high resolution maps of charge signal amplitude, carrier drift time, and carrier drift mobility.",2003,0, 5328,Monitoring a tunneling in an urbanized area with Terrasar-X interferometry Surface deformation measurements and atmospheric error treatment,"We present results from a deformation monitoring campaign to demonstrate the potential and limitations of TerraSAR-X interferometry for measuring vertical displacements due to the tunneling of main sewerage pipes along the river Emscher in Germany. In spite of higher sensitivity to deformation gradients, the potential for deformation monitoring benefits from the high spatial and temporal resolution of the TerraSAR-X data.
We analyzed a large stack of TerraSAR-X stripmap scenes to derive regional patterns of vertical displacements with differential SAR interferometry, and small-scale displacements and deformation of objects (infrastructure and houses) in time series of SAR scenes with Persistent Scatterer Interferometry (PSI). First results from PSI are promising, with a great number of detected PS. We show deformation measurements with Artificial Corner Reflectors. Short-time interferograms (11 or 22 days) show high coherence for large areas and are therefore likely less affected by unwrapping errors. Atmospheric errors are important for X-band SAR. The expected deformation in our application is in the range of mm to cm, similar to tropospheric delay features in their spatial and temporal extent. The atmospheric phase screen in PSI and stacking procedures smooth the nonuniform deformation history of the progressing tunneling.",2009,0, 5329,A pragmatic approach to concurrent error detection in sequential circuits implemented using FPGAs with embedded memory,"We present several low-cost concurrent error detection schemes for a sequential circuit implemented using FPGAs with embedded memory blocks. The experimental results show that for many of the examined circuits, a reasonable level of error detection can be obtained at a circuitry overhead of less than 10%, a level recommended by proponents of a ""pragmatic"" approach to on-line testing.",2005,0, 5330,Computer Modeling of YBCO Fault Current Limiter Strips Lines in Over-Critical Regime With Temperature Dependent Parameters,"We present the results of an advanced numerical model for a fault current limiter (FCL) based on HTS thin films, in which both thermal and electromagnetic aspects are taken into account. This model allows simulating the behavior of the FCL in the over-critical current regime, and we used it for studying strip lines of a YBCO/Au FCL on a sapphire substrate. The electromagnetic and thermal equations have been implemented in finite-element method (FEM) software in order to obtain a model for investigating the behavior of the superconductor when the current exceeds Ic. In particular, materials equations have been implemented in order to simulate the electrical behavior of superconducting devices with strong over-critical currents. We report results of simulations in voltage source mode where currents largely exceed Ic. The global behavior of the FCL is compared with measurements, showing a good agreement. The use of FEM simulations offers the advantage of giving access to local variables such as current density or temperature. Studies with this model can replace expensive experiments where very high current density might damage or destroy the FCL device.",2007,0, 5331,The effect of 3D building reconstruction errors on propagation prediction using geospatial data in cyberspace,"When the 3D building structures visualized in Google Earth are reconstructed using a photogrammetric method, errors or inaccuracies will occur in the building vertices and the building heights. In this paper the statistics of these errors are discussed and the effect of these errors on the propagation prediction results is examined in detail. It is found that our reconstruction method introduces distance errors in the building vertices and height errors in the building heights. These errors are less than 0.5 meters in 95% of the cases.
The vertex error will cause an average mean error of -0.2 dB and an average standard deviation of 5.1 dB in the predicted path gains compared to the reference case. The height error, in the cases investigated in this paper, is very small and can be ignored. These results match the observations in the literature for different propagation environments. The 3D reconstruction method is then shown to be of satisfactory accuracy in terms of propagation prediction.",2009,0, 5332,Application of expert system based on mixing reasoning in traction substation fault diagnosis,"When there is a fault in a traction substation, the faulty components and their causes must be identified by operators quickly and accurately. A fault diagnosis expert system based on mixing reasoning is built in this paper. It can reason according to the logical relations of relay protection and breakers and the fault waveforms of voltage and current provided by the Fault Record System (FRS). Mistrips and failures to trip of relay protection and breakers can both be distinguished. The reasoning result has good credibility. The traction substation fault diagnosis expert system may be used as a module of integrated automation.",2002,0, 5333,Content-Adaptive Interpolation for Spatial Error Concealment,"When transmitting encoded images over a communication channel, the reconstructed image quality can be substantially degraded by channel errors. This paper presents a spatial error concealment algorithm that utilizes the variance of surrounding pixels and then classifies each error block (EB) into two categories: uniform blocks and edge blocks. For uniform blocks, nearest border prior spatial interpolation is adopted to restore missing pixels. We use the Wiener interpolation algorithm and a special interpolation sequence for edge blocks. Experimental results indicate the proposed algorithm can attain well-restored quality of intra-frames both subjectively and objectively. Meanwhile, the computational cost of the proposed algorithm can be significantly reduced compared to Li's method, while the restored quality is almost the same and sometimes even much better.",2009,0, 5334,Radiometric correction of RADARSAT-1 images for mapping the snow water equivalent (SWE) in a mountainous environment,"When trying to monitor the snow characteristics from RADARSAT-1 SAR data in a mountainous environment like the Coast Mountains (B.C., Canada), radiometric corrections must first be applied to correct for the distortions induced by the slant projection of SAR systems and by the highly variable terrain. This paper presents and discusses the results obtained from the implementation of two radiometric slope correction methods on Fine beam RADARSAT-1 images. For slopes of less than 30°, both algorithms have almost the same effect. However, for very steep slopes, both algorithms are deficient and may not compensate enough.",2002,0, 5335,Fault detection and isolation in cooperative manipulators via artificial neural networks,"When two or more robotic manipulators are working cooperatively, faults can put at risk the task, the robots, or the manipulated load. In this work, two artificial neural networks are employed in a fault detection and isolation system for cooperative robotic manipulators. A multilayer perceptron is utilized to reproduce the dynamics of the cooperative system. The difference between its outputs and the actual velocity measurements generates the residual vector. This vector is classified by a radial basis function network that produces the fault information.
Simulations with two robotic manipulators performing a cooperative task are presented, indicating that free-swinging joint faults are correctly detected and isolated. The main contribution of this work is the first application of fault detection and isolation to cooperative manipulators with faults at the robots' joints",2001,0, 5336,The GPS Contribution to the Error Budget of Surface Elevations Derived From Airborne LIDAR,"When using airborne LIDAR to produce digital elevation models, the global positioning system (GPS) positioning of the LIDAR instrument is often the limiting factor, with accuracies typically quoted as being 10-30 cm. However, a comprehensive analysis of the accuracy and precision of GPS positioning of aircraft over large temporal and spatial scales is lacking from the literature. Here, an assessment is made of the likely GPS contribution to the airborne LIDAR measurement error budget by analyzing more than 500 days of continuous GPS data over a range of baseline lengths (3-960 km) and elevation differences (400-2000 m). Height errors corresponding to the 95th percentile are <0.15 m when using algorithms commonly applied in commercial software over 3-km baselines. These errors increase to 0.25 m at 45 km and <0.5 m at 250 km. At aircraft altitudes, relative heights are shown to be potentially biased by additional errors approaching 0.2 m, partly due to unmodeled tropospheric zenith total delay (ZTD). The application of advanced algorithms, including parameterization of the residual ZTD, gives error budgets that are largely constant despite baseline length and elevation differences. In this case, height errors corresponding to the 95th percentile are <0.22 m out to 960 km, and similar levels are shown for one randomly chosen day over a 2300-km baseline.",2009,0, 5337,Red-eye detection and correction using inpainting in digital photographs,"When we take pictures with a flash, the red-eye effect often appears in photographs. Flash light passing through the pupil is reflected by the blood vessels and arrives at the camera lens. This phenomenon produces red eyes in photographs. Several algorithms have been proposed for the removal of red eyes in digital photographs. This paper proposes a red-eye removal algorithm using inpainting and eye-metric information, which is largely composed of two parts: red-eye detection and red-eye correction. For red-eye detection, face regions are detected first. Next, red-eye regions are segmented in the face regions using multiple cues such as redness, shape, and color information. By region growing, we select regions which are to be completed with iris texture by an exemplar-based inpainting method. Then, for red-eye correction, pupils are painted with the appropriate radii calculated from the iris size and size ratio. Experimental results with a large number of test photographs with red-eye effect show that the proposed algorithm is effective and the corrected eyes look more natural than those processed by the conventional algorithms.",2009,0, 5338,Identifying the root causes of memory bugs using corrupted memory location suppression,"We present a general approach for automatically isolating the root causes of memory-related bugs in software. Our approach is based on the observation that most memory bugs involve uses of corrupted memory locations. By iteratively suppressing (nullifying) the effects of these corrupted memory locations during program execution, our approach gradually isolates the root cause of a memory bug.
Our approach can work for common memory bugs such as buffer overflows, uninitialized reads, and double frees. However, our approach is particularly effective in finding root causes for memory bugs in which memory corruption propagates during execution until an observable failure such as a program crash occurs.",2008,0, 5339,A hardware Gaussian noise generator using the Box-Muller method and its error analysis,"We present a hardware Gaussian noise generator based on the Box-Muller method that provides highly accurate noise samples. The noise generator can be used as a key component in a hardware-based simulation system, such as for exploring channel code behavior at very low bit error rates, as low as 10^-12 to 10^-13. The main novelties of this work are accurate analytical error analysis and bit-width optimization for the elementary functions involved in the Box-Muller method. Two 16-bit noise samples are generated every clock cycle and, due to the accurate error analysis, every sample is analytically guaranteed to be accurate to one unit in the last place. An implementation on a Xilinx Virtex-4 XC4VLX100-12 FPGA occupies 1,452 slices, three block RAMs, and 12 DSP slices, and is capable of generating 750 million samples per second at a clock speed of 375 MHz. The performance can be improved by exploiting concurrent execution: 37 parallel instances of the noise generator at 95 MHz on a Xilinx Virtex-II Pro XC2VP100-7 FPGA generate seven billion samples per second and can run over 200 times faster than software running on an Intel Pentium-4 3 GHz PC. The noise generator is currently being used at the Jet Propulsion Laboratory, NASA, to evaluate the performance of low-density parity-check codes for deep-space communications",2006,0, 5340,Fault-tolerant static scheduling for real-time distributed embedded systems,"We present a heuristic for automatically producing a distributed fault-tolerant schedule of a given data-flow algorithm onto a given distributed architecture. The faults considered are processor failures, with a fail-silent behavior. Fault-tolerance is achieved with the software redundancy of computations and the time redundancy of data-dependencies",2001,0, 5341,Real-word spelling correction using Google Web 1T n-gram with backoff,"We present a method for correcting real-word spelling errors using the Google Web 1T n-gram data set and a normalized and modified version of the longest common subsequence (LCS) string matching algorithm. Our method is focused mainly on how to improve the correction recall (the fraction of errors corrected) while keeping the correction precision (the fraction of suggestions that are correct) as high as possible. Evaluation results on a standard data set show that our method performs very well.",2009,0, 5342,Frame loss error concealment for spatial scalability using hallucination,"We present a new error concealment algorithm for spatially scalable video coding with frame loss in the enhancement layer, based on the technique of hallucination. For a lost enhancement layer frame, the error concealment is performed by hallucinating its base layer frame, using a database trained from previously decoded frames near the lost one.
Simulation results show that the proposed method can significantly outperform the state-of-the-art error concealment algorithms of SVC.",2009,0, 5343,A New Approach of Fault Localization Using Value Replacement,"We present a new method based on value replacement that considers both control dependence and data dependence. The key idea of value replacement is to see which program statements exercised during a failing run use values that can be altered so that the execution instead produces correct output. This approach is effective in locating statements that are either faulty or directly affect the faulty statements. Our approach also analyzes the likelihood that statements are faulty, which can be applied to more areas.",2010,0, 5344,Geometric and shading correction for images of printed materials: a unified approach using boundary,"We present a novel approach that uses boundary interpolation to correct (1) geometric distortion and (2) shading artifacts present in images of printed materials. Unlike existing approaches, our algorithm can simultaneously correct a variety of geometric distortions, including skew, fold distortion, binder curl, and combinations of these. In addition, the same interpolation framework can estimate the intrinsic illumination component of the distorted image to correct shading artifacts.",2004,0, 5345,Combinatorial designs in multiple faults localization for battlefield networks,We present an application of combinatorial designs and variance analysis to correlating events in the midst of multiple network faults. The network fault model is based on a probabilistic dependency graph that accounts for the uncertainty about the state of network elements. Orthogonal arrays help reduce the exponential number of failure configurations to a small subset on which further analysis is performed. The preliminary results show that statistical analysis can pinpoint the probable causes of the observed symptoms with high accuracy and a significant level of confidence. An example demonstrates how multiple soft link failures are localized in MIL-STD 188-220's datalink layer to explain the end-to-end connectivity problems in the network layer. This technique can be utilized for networks operating in an unreliable environment such as wireless and/or military networks.,2001,0, 5346,Automatic Fault Localization for Property Checking,"We present an efficient fully automatic approach to fault localization for safety properties stated in linear temporal logic. We view the failure as a contradiction between the specification and the actual behavior and look for components that explain this discrepancy. We find these components by solving the satisfiability of a propositional Boolean formula. We show how to construct this formula and how to extend it so that we find exactly those components that can be used to repair the circuit for a given set of counterexamples. Furthermore, we discuss how to efficiently solve the formula by using the proper decision heuristics and simulation-based preprocessing. We demonstrate the quality and efficiency of our approach by experimental results.",2008,0, 5347,A H.263 compatible error resilient video coder,"We present an error resilient video coder compatible with the ITU-T H.263 standard. Resynchronization flag insertion, error detection, localization and concealment in the decoder, and dynamic programming mode selection based on error tracking are the three main adopted error-resilient strategies.
An information feedback method, which utilizes the H.263 video bit stream but does not modify its syntax, is described. Simulation results for the binary symmetric channel (BSC) with random bit errors are given to show the robustness of the proposed video coder",2000,0,
5348,Requirements specification and analysis of fault-tolerant digital systems,"We present an integrated computer-aided design environment, the PrT (predicate/transition) net system, in order to systematically introduce fault-tolerant properties into the design of complicated digital systems. This is accomplished by exploiting a formal specification of the system requirements in which the amount of necessary redundancy can be determined. The system is based on an integration of PrT nets with regular expressions. PrT nets are used to describe and analyze a high-level system and regular expressions are used to describe and analyze the more detailed system structures. Both models provide us with well-defined levels of fault diagnosis needed in the digital system design. An S-invariant technique can be used to check the constancy of PrT nets, and a finite state automaton can be used to check the acceptability of regular expressions. Furthermore, the regular expression can also enable a system designer to determine redundancy in order to perform error correction. In consequence, our approach is superior to the current techniques for requirements analysis. Finally, the main results are presented in the form of four propositions and supported by experiments",2002,0,
5349,Concurrent bug patterns and how to test them,We present and categorize a taxonomy of concurrent bug patterns. We then use the taxonomy to create new timing heuristics for ConTest. Initial industrial experience indicates that these heuristics improve the bug finding ability of ConTest. We also show how concurrent bug patterns can be derived from concurrent design patterns. Further research is required to complete the concurrent bug taxonomy and formal experiments are needed to show that heuristics derived from the taxonomy improve the bug finding ability of ConTest.,2003,0,
5350,Using an RBF Neural Network to Locate Program Bugs,"We propose an RBF (radial basis function) neural network-based fault localization method to help programmers locate bugs in a more effective way. An RBF neural network with a three-layer feed-forward structure is employed to learn the relationship between the statement coverage of a test case and its corresponding execution result. The trained network is then given as input a set of virtual test cases, each covering only a single statement. The output of the network for each test case is considered to be the suspiciousness of the corresponding statement; a statement with a higher suspiciousness has a higher likelihood of containing a bug. The statements, ranked in descending order of suspiciousness, are then examined by programmers one by one until a bug is located. Three case studies on different programs (space, grep and make) were conducted with each faulty version having exactly one bug. An additional program gcc was also used to demonstrate the concept of extending the proposed method to programs with multiple bugs. 
Our experimental data suggest that an RBF neural network-based fault localization method is more effective in locating a program bug (by examining less code before the first faulty statement containing the bug is identified) than another popular method, Tarantula, which also uses the coverage and execution results to compute the suspiciousness of each statement.",2008,0,
5351,Lowering Error Floors Using Dithered Belief Propagation,"We propose dithered belief propagation decoding algorithms to reduce the number of decoding failures of a belief propagation decoder and lower the error floor. The random nature of the algorithms enables a low hardware complexity compared to previously reported techniques. We introduce two dithering methods that target check node operations and channel input values, respectively. We present simulation results that confirm the error rate gains in the floor region, and that relate those gains with the maximum number of decoding iterations. The results show that the first algorithm can achieve good error rate gains with a low iteration limit. For the second algorithm, results show that with a large iteration limit, high FER gains are possible. Furthermore, the average time complexity remains the same as that of a standard belief propagation algorithm.",2010,0,
5352,RACE: a software-based fault tolerance scheme for systematically transforming ordinary algorithms to robust algorithms,"We propose the robust algorithm-configured emulation (RACE) scheme for efficient parallel computation and communication in the presence of faults. A wide variety of algorithms originally designed for fault-free meshes, tori, and k-ary n-cubes can be transformed into corresponding robust algorithms through RACE. In particular, optimal robust algorithms can be derived for total exchange (TE) and ascend/descend operations with a factor of 1+o(1) slowdown. Also, RACE can tolerate a large number of faulty elements, without relying on hardware redundancy or any assumption about the availability of a complete subarray",2001,0,
5353,Honeypots: practical means to validate malicious fault assumptions,"We report on an experiment run with several honeypots for 4 months. The motivation of this work resides in our wish to use data collected by honeypots to validate fault assumptions required when designing intrusion-tolerant systems. This work in progress establishes the foundations for a feasibility study in that direction. After a review of the state of the art with respect to honeypots, we present our test bed, discuss results obtained and lessons learned. Avenues for future work are also proposed.",2004,0,
5354,Propagating Bug Fixes with Fast Subgraph Matching,"We present a powerful and efficient approach to the problem of propagating a bug fix to all the locations in a code base to which it applies. Our approach represents bug and fix patterns as subgraphs of a system dependence graph, and it employs a fast, index-based subgraph matching algorithm to discover unfixed bug-pattern instances remaining in a code base. We have also developed a graphical tool to help programmers specify bug patterns and fix patterns easily. We evaluated our approach by applying it to bug fixes in four large open-source projects. The results indicate that the approach exhibits good recall and precision and excellent efficiency.",2010,0,
5355,A statistical model to locate faults at input levels,"We present a statistical model to locate faults at the input level based on the failure patterns and the success patterns. 
The model neither needs to be fed with software module, code, or trace information, nor does it require re-executing the program. To evaluate the model, precision and recall are adopted as the criteria. Five programs are examined and 17 testing experiments are conducted, in which the model achieves 0.803 in precision and 0.697 in recall on average.",2004,0,
5356,Fault tree analysis for software design,"We present a study on software fault tree analysis (SFTA) conducted at the Software Assurance Technology Center at NASA Goddard Space Flight Center. While researchers have made various attempts at SFTA, software assurance practitioners have been slow to adopt it. One reason is the intense manual effort needed to identify and draw the fault trees for the code of large software projects. Another is the lack of commercial tools to assist in the technique for software. Most SFTA research efforts have been directed at requirements or code. Performing SFTA on the design may enable application of SFTA to critical code only, thus reducing the amount of effort. We attempt to develop a relationship between UML design diagrams and fault tree symbology to enable adaptation of a commercial FTA tool to at least one software design language. Such a result would reduce the amount of fault tree effort both in size (design instead of code) and in manual effort.",2002,0,
5357,Empirical method for topographic correction in aerial photographs,"We suggest an empirical method to correct topographic effects on vegetation classification of panchromatic aerial photographs. The method is based on the use of a spatial interpolation technique that constructs a luminance surface from targets of high brightness values. The luminance surface is then used to correct the topographic effects differentially, by increasing brightness values in shaded areas and decreasing brightness values of lightened areas. For this purpose, the use of a trapezoidal function was found successful in reducing the standard deviation of brightness values of trees, shrubs, and herbaceous plants after empirical correction. This method outperformed a frequently used digital elevation model-based topographic correction in terms of overall classification accuracy of the resulting images.",2005,0,
5358,Introducing SW-based fault handling mechanisms to cope with EMI in embedded electronics: are they a good remedy?,"We summarize a study on the effectiveness of two software-based fault handling mechanisms in terms of detecting conducted electromagnetic interference (EMI) in microprocessors. One of these techniques deals with processor control flow checking. The second one is used to detect errors in code variables. In order to check the effectiveness of such techniques in an RF environment, an IEC 61000-4-29-compliant conducted RF generator was implemented to inject spurious electromagnetic noise into the supply lines of a commercial off-the-shelf (COTS) microcontroller-based system. 
Experimental results suggest that the considered techniques are quite effective in detecting this type of fault, despite the multiple-fault injection nature of EMI in the processor control and data flows, which in most cases results in a complete system functional loss (the system must be reset).",2003,0,
5359,A Bayesian belief network for assessing the likelihood of fault content,"To predict software quality, we must consider various factors, because software development consists of various activities which the software reliability growth model (SRGM) does not consider. In this paper, we propose a model to predict the final quality of a software product by using the Bayesian belief network (BBN) model. By using the BBN, we can construct a prediction model that focuses on the structure of the software development process, explicitly representing complex relationships between metrics and handling uncertain metrics, such as residual faults in the software products. In order to evaluate the constructed model, we perform an empirical experiment based on the metrics data collected from development projects in a certain company. As a result of the empirical evaluation, we confirm that the proposed model can predict the number of residual faults that the SRGM cannot handle.",2003,1,
5360,A Constructive RBF Neural Network for Estimating the Probability of Defects in Software Modules,"Much of the current research in software defect prediction focuses on building classifiers to predict only whether a software module is fault-prone or not. Using these techniques, the effort to test the software is directed at modules that are labelled as fault-prone by the classifier. This paper introduces a novel algorithm based on constructive RBF neural networks aimed at predicting the probability of errors in fault-prone modules; it is called RBF-DDA with Probabilistic Outputs and is an extension of RBF-DDA neural networks. The advantage of our method is that we can inform the test team of the probability of a defect in a module, instead of indicating only if the module is fault-prone or not. Experiments carried out with static code measures from well-known software defect datasets from NASA show the effectiveness of the proposed method. We also compared the performance of the proposed method in software defect prediction with kNN and two of its variants, the S-POC-NN and R-POC-NN. The experimental results showed that the proposed method outperforms both S-POC-NN and R-POC-NN and that it is equivalent to kNN in terms of performance with the advantage of producing less complex classifiers.",2007,1,
5361,Empirical assessment of machine learning based software defect prediction techniques,"The wide variety of real-time software systems, including telecontrol/telepresence systems, robotic systems, and mission planning systems, can entail dynamic code synthesis based on runtime mission-specific requirements and operating conditions. This necessitates dynamic dependability assessment to ensure that these systems perform as specified and do not fail in catastrophic ways. One approach to achieving this is to dynamically assess the modules in the synthesized code using software defect prediction techniques. Statistical models, such as stepwise multi-linear regression models and multivariate models, and machine learning approaches, such as artificial neural networks, instance-based reasoning, Bayesian-belief networks, decision trees, and rule inductions, have been investigated for predicting software quality. 
However, there is still no consensus about the best predictor model for software defects. In this paper, we evaluate different predictor models on four different real-time software defect data sets. The results show that a combination of 1R and instance-based learning along with the consistency-based subset evaluation technique provides relatively better consistency in prediction accuracy compared to other models. The results also show that ""size"" and ""complexity"" metrics are not sufficient for accurately predicting real-time software defects.",2005,1,
5362,Software Fault Prediction Model Based on Adaptive Dynamical and Median Particle Swarm Optimization,"Software quality prediction can play an important role in software management, and thus improve the quality of software systems. By mining software with data mining techniques, predictive models can be induced that give software managers the insights they need to tackle these quality problems in an efficient way. This paper deals with adaptive dynamical and median particle swarm optimization (ADMPSO), based on the PSO classification technique. ADMPSO can act as a valid data mining technique to predict erroneous software modules. The predictive model in this paper extracts relationship rules between software quality and metrics. An information entropy approach is applied to simplify the extracted rule set. The empirical results show that the rule set can be streamlined and the forecast accuracy improved.",2010,1,
5363,Towards logistic regression models for predicting fault-prone code across software projects,"In this paper, we discuss the challenge of making logistic regression models able to predict fault-prone object-oriented classes across software projects. Several studies have obtained successful results in using design-complexity metrics for such a purpose. However, our data exploration indicates that the distribution of these metrics varies from project to project, making the task of predicting across projects difficult to achieve. As a first attempt to solve this problem, we employed simple log transformations for making design-complexity measures more comparable among projects. We found these transformations useful in projects whose data are not as spread out as the data used for building the prediction model.",2009,1,
5364,On the Relationship Between Change Coupling and Software Defects,"Change coupling is the implicit relationship between two or more software artifacts that have been observed to frequently change together during the evolution of a software system. Researchers have studied this dependency and have observed that it points to design issues such as architectural decay. It is still unknown whether change coupling correlates with a tangible effect of design issues, i.e., software defects. In this paper we analyze the relationship between change coupling and software defects on three large software systems. We investigate whether change coupling correlates with defects, and if the performance of bug prediction models based on software metrics can be improved with change coupling information.",2009,1,
5365,Estimating software fault-proneness for tuning testing activities,"The article investigates whether a correlation exists between the fault-proneness of software and the measurable attributes of the code (i.e. the static metrics) and of the testing (i.e. the dynamic metrics). The article also studies how to use such data for tuning the testing process. 
The goal is not to find a general solution to the problem (a solution may not even exist), but to investigate the scope of specific solutions, i.e., to what extent homogeneity of the development process, organization, environment and application domain allows data computed on past projects to be projected onto new projects. A suitable variety of case studies is selected to investigate a methodology applicable to classes of homogeneous products, rather than investigating if a specific solution exists for a few cases.",2000,1,
5366,Software defect prediction using static code metrics underestimates defect-proneness,"Many studies have been carried out to predict the presence of software code defects using static code metrics. Such studies typically report how a classifier performs with real world data, but usually no analysis of the predictions is carried out. An analysis of this kind may be worthwhile as it can illuminate the motivation behind the predictions and the severity of the misclassifications. This investigation involves a manual analysis of the predictions made by Support Vector Machine classifiers using data from the NASA Metrics Data Program repository. The findings show that the predictions are generally well motivated and that the classifiers were, on average, more confident in the predictions they made that were correct.",2010,1,
5367,Empirical validation of object-oriented metrics on open source software for fault prediction,"Open source software systems are becoming increasingly important these days. Many companies are investing in open source projects and lots of them are also using such software in their own work. But, because open source software is often developed with a different management style than industrial ones, the quality and reliability of the code need to be studied. Hence, the characteristics of the source code of these projects need to be measured to obtain more information about them. This paper describes how we calculated the object-oriented metrics given by Chidamber and Kemerer to illustrate how fault-proneness detection of the source code of the open source Web and e-mail suite called Mozilla can be carried out. We checked the values obtained against the number of bugs found in its bug database - called Bugzilla - using regression and machine learning methods to validate the usefulness of these metrics for fault-proneness prediction. We also compared the metrics of several versions of Mozilla to see how the predicted fault-proneness of the software system changed during its development cycle.",2005,1,
5368,Predicting defects in SAP Java code: An experience report,"Which components of a large software system are the most defect-prone? In a study on a large SAP Java system, we evaluated and compared a number of defect predictors, based on code features such as complexity metrics, static error detectors, change frequency, or component imports, thus replicating a number of earlier case studies in an industrial context. We found the overall predictive power to be lower than expected; still, the resulting regression models successfully predicted 50-60% of the 20% most defect-prone components.",2009,1,
5369,Variance Analysis in Software Fault Prediction Models,"Software fault prediction models play an important role in software quality assurance. They identify software subsystems (modules, components, classes, or files) which are likely to contain faults. These subsystems, in turn, receive additional resources for verification and validation activities. 
Fault prediction models are binary classifiers typically developed using one of the supervised learning techniques from either a subset of the fault data from the current project or from a similar past project. In practice, it is critical that such models provide a reliable prediction performance on the data not used in training. Variance is an important reliability indicator of software fault prediction models. However, variance is often ignored or barely mentioned in many published studies. In this paper, through the analysis of twelve data sets from a public software engineering repository from the perspective of variance, we explore the following five questions regarding fault prediction models: (1) Do different types of classification performance measures exhibit different variance? (2) Does the size of the data set imply a more (or less) accurate prediction performance? (3) Does the size of the training subset impact the model's stability? (4) Do different classifiers consistently exhibit different performance in terms of the model's variance? (5) Are there differences between variance from 1000 runs and 10 runs of 10-fold cross validation experiments? Our results indicate that variance is a very important factor in understanding fault prediction models and we recommend the best practice for reporting variance in empirical software engineering studies.",2009,1,
5370,Building a genetically engineerable evolvable program (GEEP) using breadth-based explicit knowledge for predicting software defects,"There has been extensive research in the area of data mining over the last decade, but relatively little research in algorithmic mining. Some researchers shun the idea of incorporating explicit knowledge within a Genetic Programming (GP) environment. At best, very domain-specific knowledge is hard-wired into the GP modeling process. This work proposes a new approach called the Genetically Engineerable Evolvable Program (GEEP). In this approach, explicit knowledge is made available to the GP. It is considered breadth-based, in that all pieces of knowledge are independent of each other. Several experiments are performed on a NASA-based data set using established equations from other researchers in order to predict software defects. All results are statistically validated.",2004,1,
5371,Modeling fault-prone modules of subsystems,"Software developers are very interested in targeting software enhancement activities prior to release, so that reworking of faulty modules can be avoided. Credible predictions of which modules are likely to have faults discovered by customers can be the basis for selecting modules for enhancement. Many case studies in the literature build models to predict which modules will be fault-prone without regard to the subsystems defined by the system's functional architecture. Our hypothesis is this: models that are specially built for subsystems will be more accurate than a system-wide model applied to each subsystem's modules. In other words, the subsystem that a module belongs to can be valuable information in software quality modeling. This paper presents an empirical case study which compared software quality models of an entire system to models of a major functional subsystem. The study modeled a very large telecommunications system with classification trees built by the CART (classification and regression trees) algorithm. 
For predicting subsystem quality, we found that a model built with training data on the subsystem alone was more accurate than a similar model built with training data on the entire system. We concluded that the characteristics of the subsystem's modules were not similar to those of the system as a whole, and thus, information on subsystems can be valuable",2000,1,
5372,Predicting Faults from Cached History,"We analyze the version history of 7 software systems to predict the most fault-prone entities and files. The basic assumption is that faults do not occur in isolation, but rather in bursts of several related faults. Therefore, we cache locations that are likely to have faults: starting from the location of a known (fixed) fault, we cache the location itself, any locations changed together with the fault, recently added locations, and recently changed locations. By consulting the cache at the moment a fault is fixed, a developer can detect likely fault-prone locations. This is useful for prioritizing verification and validation resources on the most fault-prone files or entities. In our evaluation of seven open source projects with more than 200,000 revisions, the cache selects 10% of the source code files; these files account for 73%-95% of faults - a significant advance beyond the state of the art.",2007,1,
5373,Modeling the Effect of Size on Defect Proneness for Open-Source Software,"Quality is becoming increasingly important with the continuous adoption of open-source software. Previous research has found that there is generally a positive relationship between module size and defect proneness. Therefore, in open-source software development, it is important to monitor module size and understand its impact on defect proneness. However, traditional approaches to quality modeling, which measure specific system snapshots and obtain future defect counts, are not well suited because open-source modules usually evolve and their size changes over time. In this study, we used Cox proportional hazards modeling with recurrent events to study the effect of class size on defect-proneness in the Mozilla product. We found that the effect of size was significant, and we quantified this effect on defect proneness.",2007,1,
5374,Finding predictors of field defects for open source software systems in commonly available data sources: a case study of OpenBSD,"Open source software systems are important components of many business software applications. Field defect predictions for open source software systems may allow organizations to make informed decisions regarding open source software components. In this paper, we remotely measure and analyze predictors (metrics available before release) mined from established data sources (the code repository and the request tracking system) as well as a novel source of data (mailing list archives) for nine releases of OpenBSD. First, we attempt to predict field defects by extending a software reliability model fitted to development defects. We find this approach to be infeasible, which motivates examining metrics-based field defect prediction. Then, we evaluate 139 predictors using established statistical methods: Kendall's rank correlation, Pearson's rank correlation, and forward AIC model selection. The metrics we collect include product metrics, development metrics, deployment and usage metrics, and software and hardware configuration metrics. 
We find the number of messages to the technical discussion mailing list during the development period (a deployment and usage metric captured from mailing list archives) to be the best predictor of field defects. Our work identifies predictors of field defects in commonly available data sources for open source software systems and is a step towards metrics-based field defect prediction for quantitatively-based decision making regarding open source software components",2005,1,
5375,Evaluating Defect Prediction Models for a Large Evolving Software System,"A plethora of defect prediction models has been proposed and empirically evaluated, often using standard classification performance measures. In this paper, we explore defect prediction models for a large, multi-release software system from the telecommunications domain. A history of roughly 3 years is analyzed to extract process and static code metrics that are used to build several defect prediction models with random forests. The performance of the resulting models is comparable to previously published work. Furthermore, we develop a new evaluation measure based on the comparison to an optimal model.",2009,1,
5376,Data Mining Static Code Attributes to Learn Defect Predictors,"The value of using static code attributes to learn defect predictors has been widely debated. Prior work has explored issues like the merits of ""McCabes versus Halstead versus lines of code counts"" for generating defect predictors. We show here that such debates are irrelevant since how the attributes are used to build predictors is much more important than which particular attributes are used. Also, contrary to prior pessimism, we show that such defect predictors are demonstrably useful and, on the data studied here, yield predictors with a mean probability of detection of 71 percent and mean false alarm rates of 25 percent. These predictors would be useful for prioritizing a resource-bound exploration of code that has yet to be inspected",2007,1,
5377,Change Bursts as Defect Predictors,"In software development, every change induces a risk. What happens if code changes again and again in some period of time? In an empirical study on Windows Vista, we found that the features of such change bursts have the highest predictive power for defect-prone components. With precision and recall values well above 90%, change bursts significantly improve upon earlier predictors such as complexity metrics, code churn, or organizational structure. As they only rely on version history and a controlled change process, change bursts are straightforward to detect and deploy.",2010,1,
5378,Assessing UML design metrics for predicting fault-prone classes in a Java system,"Identifying and fixing software problems before implementation is believed to be much cheaper than after implementation. Hence, it follows that predicting the fault-proneness of software modules based on early software artifacts like software design is beneficial, as it allows software engineers to perform early predictions to anticipate and avoid faults early enough. Taking this motivation into consideration, in this paper we evaluate the usefulness of UML design metrics to predict the fault-proneness of Java classes. We use historical data of a significant industrial Java system to build and validate a UML-based prediction model. Based on the case study we have found that the level of detail of messages and import coupling, both measured from sequence diagrams, are significant predictors of class fault-proneness. 
We also learn that the prediction model built exclusively using the UML design metrics demonstrates better accuracy than the one built exclusively using code metrics.",2010,1,
5379,Use of DGPS corrections with low power GPS receivers in a post SA environment,"With the removal of the dithering effects of Selective Availability (SA), Differential GPS (DGPS) corrections can now be applied for extended periods of time, allowing enhanced performance for low power configurations of a SiRF based GPS receiver. The software selectable low power settings, implemented by SiRF, employ three states: track, navigate and trickle. During the track and trickle states there is no UART communication, making reception of DGPS corrections unavailable. During the NAV state (when the navigation calculation is performed), corrections may be received. Previously, SA induced error shortened the viable extrapolation time to less than 30 seconds; otherwise significant navigation error would build up between measurements. Additionally, the need to return to a full power state every 30 seconds significantly increased the overall average power dissipation over standard TricklePower(TM) operation. Now that SA (the dominant error source of the DGPS correction) has been removed, the time limit for which a DGPS correction can be applied has been extended from 30 seconds to several minutes without significant degradation in navigation performance. This opens up the opportunity for low power GPS receiver operation to make use of the DGPS correction to improve navigation without severely impacting the average power requirements. SiRF's implementation of low power operation leverages its unique architecture, which allows 100 ms signal reacquisition and a pseudorange measurement in as little as 200 ms. The chipset is then shut down for 800 ms, significantly reducing the power consumption, while still maintaining 1 Hz navigation updates",2001,0,
5380,Efficient techniques for reducing error latency in on-line periodic BIST,"With transient and intermittent operational faults becoming a dominant failure mode in modern digital systems, the deployment of on-line test technology is becoming a major design objective. On-line periodic BIST is a testing method for the detection of operational faults in digital systems. The method applies a near-minimal deterministic test sequence periodically to the circuit under test and checks the circuit responses to detect the existence of operational faults. On-line periodic BIST is characterized by full error coverage, bounded error latency, and moderate space and time redundancy. In this paper, we present various techniques to minimize the error latency without sacrificing the full error coverage. These techniques are primarily based on the reordering of test vectors or the selective repetition of test vectors. Our analytical and preliminary experimental results demonstrate that our techniques lead to a significant reduction in the error latency.",2009,0,
5381,Applying multifractal spectrum combined with fractal discrete brownian motion model to wood defects recognition for sawing,"Wood nondestructive testing technology is a new and comprehensive subject. In recent years it has developed rapidly. X-ray computed tomography (CT) scanning technology has been applied to the detection of internal defects in logs for the purpose of obtaining prior information, which can be used to arrive at better wood sawing decisions. 
Fractal geometry and its multifractal extension are new tools which can be used for describing, modeling, analyzing and processing different complex shapes and images. A method for CT image edge detection using multifractal theory combined with fractal Brownian motion is applied in the paper. First, the multifractal spectrum is estimated. Then different types of pixels, smooth edge points and singular edge points, are classified by the spectrum.",2009,0,
5382,Efficient computation of confidence intervals for word error rates,"Word error rate is a standard measure of quality for different tasks such as speech recognition, OCR or machine translation. As such, it is important to compute it together with confidence intervals. Previous works in the literature employ Monte Carlo methods in order to compute those intervals. We show how to compute them without simulations. We also adapt a method that compares two systems over the same test data so that it can be used without simulations.",2008,0,
5383,Joint write policy and fault-tolerance mechanism selection for caches in DSM technologies: Energy-reliability trade-off,"Write-through caches potentially have higher reliability than write-back caches. However, write-back caches are more energy efficient. This paper provides a comparison between the write-back and write-through policies based on the combination of reliability and energy consumption criteria. In the experiments, the SIMPLESCALAR tool and the CACTI model are used to evaluate the characteristics of the caches. The results show that a write-through cache with one parity bit per word is as reliable as a write-back cache with a SEC-DED code per word. Furthermore, the results show that the energy saving of the write-through cache over the write-back cache increases if any of the following changes happens: i) a decrease in the feature size, ii) a decrease in the L2 cache size, and iii) an increase in the L1 cache size. The results also show that when the feature size is bigger than 32 nm, the write-back cache is usually more energy efficient. However, for 32 nm and smaller feature sizes, the write-through cache can be more energy efficient.",2009,0,
5384,The implementation of a COTS based fault tolerant avionics bus architecture,"X2000 is a technology program at the Jet Propulsion Laboratory to develop enabling technologies for future flight missions at affordable cost. The cost constraints mandate the use of commercial-off-the-shelf (COTS) components and standards in the X2000 multi-mission avionics system architecture. The X2000 has selected two commercial bus standards, the IEEE 1394 and I2C, as the avionics system buses. These two buses work together to provide the performance, scalability, low power consumption, and fault tolerance required by long-life deep space missions. We report our approach to implementing a fault-tolerant bus architecture for the X2000 avionics system using these two COTS buses. The system approach is described first. Then, the focus of the rest of the discussion is on the implementation of two ASICs, digital I/O and mixed signal I/O ASICs, which are the key components of this COTS based fault-tolerant bus architecture",2000,0,
5385,Applying transformation-based error-driven learning to structured natural language queries,"XML information retrieval (XML-IR) systems aim to provide users with highly exhaustive and highly specific results. To interact with XML-IR systems, users must express both their content and structural requirements in the form of a structured query. 
Traditionally, these structured queries have been formatted using formal languages such as XPath or NEXI. Unfortunately, formal query languages are very complex and too difficult to be used by experienced users, let alone casual ones. Therefore, recent research has investigated the idea of specifying users' content and structural needs via natural language queries (NLQs). In previous research we developed NLPX, a natural language interface to an XML-IR system. Here we present additions we have made to NLPX. The additions involve the application of transformation-based error-driven learning (TBL) to structured NLQs, to derive special connotations and group words into an atomic unit of information. TBL has successfully been applied to other areas of natural language processing; however, this paper presents the first time it has been applied to structured NLQs. Here, we investigate the applicability of TBL to NLQs and compare the TBL-based system with our previous system and a system with a formal language interface. Our results show that TBL is effective for structured NLQs, and that structured NLQs are a viable interface for XML-IR systems",2005,0,
5386,Comparison of different ANN techniques for automatic defect detection in X-Ray images,"X-ray imaging is extensively used in NDT. In the conventional method, interpretation of the large number of radiographs for defect detection and evaluation is carried out manually by an operator or expert, which makes the system subjective. Interpretation of a large number of images is also tedious and may lead to misinterpretation. Automation of non-destructive evaluation techniques is gaining greater relevance, but automatic analysis of X-ray images is still a complex problem, as the images are noisy and low in contrast, with a number of artifacts. ANNs are systems which can be trained to analyze input data based on provided conditions to derive the required output. This makes the system automatic, reducing subjective interference in the analysis of data. Artificial neural network based systems are thus a feasible solution to this problem of X-ray NDT. Due to the complex nature of the input images and the noise present, noise removal is a challenge in X-ray images. Preprocessing techniques based on statistical analysis have shown improvement in image noise reduction. Pixels or groups of pixels which deviate from the general structural pattern and grey scale distribution are located. The statistically processed pixel values are used to obtain feature vectors from defective as well as non-defective areas. Software for pre-processing and analyzing NDT images has been developed. The software allows the user to train neural networks for defect detection. Once trained satisfactorily, the software scans a new input image and uses the trained ANN for defect detection. The final image, with defect regions marked, is then displayed. This system can be used to obtain the probable defective areas in a given input image. This paper presents the performance of MLP and RBF networks for defect detection. The effect of different types of input, viz. templates and moments, on the performance of the ANN is discussed.",2009,0,
5387,A neural-network approach to recognize defect spatial pattern in semiconductor fabrication,"Yield enhancement in semiconductor fabrication is important. Even though IC yield loss may be attributed to many problems, the existence of defects on the wafer is one of the main causes. 
When the defects on the wafer form spatial patterns, this is usually a clue for the identification of equipment problems or process variations. This research intends to develop an intelligent system which will recognize defect spatial patterns to aid in the diagnosis of failure causes. The neural-network architecture named adaptive resonance theory network 1 (ART1) was adopted for this purpose. Actual data obtained from a semiconductor manufacturing company in Taiwan were used in experiments with the proposed system. A comparison between ART1 and another unsupervised neural network, the self-organizing map (SOM), was also conducted. The results show that the ART1 architecture can recognize similar defect spatial patterns more easily and correctly",2000,0,
5388,Polarization Rotation Correction in Radiometry: An Error Analysis,"Yueh proposed a method of using the third Stokes parameter T_U to correct brightness temperatures such as T_v and T_h for polarization rotation. This paper presents an extended error analysis of the estimation of T_v, T_h, and T_Q ≡ T_v - T_h by Yueh's method. In order to carry out the analysis, we first develop a forward model of polarization rotation that accounts for the random nature of thermal radiation, receiver noise, and (to first order) calibration. Analytic formulas are then derived for the bias, standard deviation (STD), and root-mean-square error (RMSE) of estimated T_Q, T_v, and T_h, as functions of scene and radiometer parameters. These formulas are validated through independent calculation via Monte Carlo simulation. Examination of the formulas reveals that: 1) natural T_U from planetary surface radiation, of the magnitude expected on Earth at L-band, has a negligible effect on correction for polarization rotation; 2) RMSE is a function of the rotation angle Omega, but the value of Omega that minimizes RMSE is not known prior to instrument fabrication; and 3) if residual calibration errors can be sufficiently reduced via postlaunch calibration, then Yueh's method reduces the error incurred by polarization rotation to negligibility.",2007,0,
5389,5B: emerging technologies - reliable and fault-tolerant wireless sensor networks,"Wireless sensor networks create invisible interconnections with the physical world for the measurement, monitoring, and management of data from multiple sensors and probes with little constraint on location. These networks provide distributed processing, data storage, wireless communication, and dedicated application software with high reliability, inherent redundancy, failure-tolerant security and easily encrypted privacy. They have enormous potential to transform our society and are subjects of intense current research and application development. The three enabling hardware technologies which constitute a network node are microprocessors, MEMS sensors, and low-power radios. Sensor networks represent a paradigm shift in computing, where they anticipate our needs and sometimes act on our behalf. The objective of this presentation is to discuss reliable and fault-tolerant wireless sensor networks, focusing on environmental, behavioral, and biomedical areas. Special focus will be on wearable monitors and body wireless sensor networks. 
An example of physiological monitoring by a body area network will be discussed.",2005,0,
5390,Anshan: Wireless Sensor Networks for Equipment Fault Diagnosis in the Process Industry,"Wireless sensor networks provide an opportunity to enhance the current equipment diagnosis systems in the process industry, which so far have been based on wired networks. In this paper, we use our experience at the Anshan Iron and Steel Factory, China, as an example to present the issues arising in real process industry settings, and our solutions. The challenges are threefold: first, very high reliability is required; second, energy consumption is constrained; and third, the environment is very challenging and constrained. To address these issues, it is necessary to put systematic effort into network topology and node placement, network protocols, embedded software, and hardware. In this paper, we propose two techniques, i.e., design for reliability and energy efficiency (DRE) and design for reconfiguration (DRC). Using these techniques we developed Anshan, a wireless sensor network for monitoring the temperature of rollers in a continuous annealing line and detecting equipment failures. Project Anshan includes 406 sensor nodes and has been running continuously for four months.",2008,0,
5391,Application of video error resilience techniques for mobile broadcast multicast services (MBMS),"With data throughput for mobile devices constantly increasing, services such as video broadcast and multicast are becoming feasible. The 3GPP (3rd Generation Partnership Project) committee is currently working on a standard for mobile broadcast and multicast services (MBMS). MBMS is expected to enable easier deployment of video and multimedia services on 3G networks. We present an overview of the standard, including the proposed architecture and requirements, focusing on radio aspects. We discuss the issue of video error resilience in such services, which is critical to maintaining consistent quality for terminals. The error resilience techniques currently used in video streaming services are not suitable for MBMS services. We analyze the error resilience techniques that are applicable within the context of the MBMS standard and present our early research in this area.",2004,0,
5392,A Probabilistic Characterization of Fault Rings in Adaptively-Routed Mesh Interconnection Networks,"With increasing concern for reliability in the current and next generation of multiprocessor systems-on-chip (MP-SoCs), multi-computers, cluster computers, and peer-to-peer communication networks, fault-tolerance has become an integral part of these systems. One of the fundamental issues regarding fault-tolerance is how to route efficiently in a faulty network where each component is associated with some probability of failure. Adaptive fault-tolerant routing algorithms have been frequently suggested in the literature as a means of improving communication performance and meeting fault-tolerance demands in computer systems. Also, several results have been reported on the use of fault rings in providing detours to messages blocked by faults and in routing messages adaptively around rectangular faulty regions. In order to analyze the performance of such routing schemes, one must investigate the characteristics of fault rings. In this paper, we derive mathematical expressions to compute the probability of a message facing fault rings in the well-known mesh interconnection network. 
We also conduct extensive simulation experiments using a variety of faults, the results of which are used to confirm the accuracy of the proposed models.",2008,0,
5393,Design and analysis of a reduced phase error digital carrier recovery architecture for high-order quadrature amplitude modulation signals,"With increasing order of quadrature amplitude modulation (QAM), the bandwidth efficiency is improved in digital communication. However, in practice, the modulation order is limited, since conventional digital carrier recovery (CR) algorithms give rise to unacceptable phase error. The authors present an efficient software-aided technique for phase error reduction in CR for high-order QAM, based on the simple and well-known fourth-power CR loop. Analytical and simulation results indicate that the new technique has several attractive features, such as approximate invariance of the phase error improvement over modulation order and low hardware complexity for modulation orders as high as 256-QAM. Experimental results for 64- and 256-QAM illustrate a phase error variance of less than -110 dBc/Hz at a frequency offset of 10 kHz, that is, a 30 dB reduction of phase error variance or a 3 dB increase in system processing gain compared to the conventional fourth-power CR loop. This allows a significant improvement of bandwidth efficiency by increasing the modulation order, at the cost of a slight complexity overhead.",2010,0,
5394,Performance Evaluation of Probe-Send Fault-tolerant Network-on-chip Router,"With increasing reliability concerns for current and next generation VLSI technologies, fault-tolerance is fast becoming an integral part of system-on-chip and multi-core architectures. Another trend for such architectures is network-on-chip (NoC) becoming a standard for on-chip global communication. In an earlier work, a generic fault-tolerant routing algorithm in the context of NoCs was presented. The proposed routing algorithm works in two phases, namely path exploration (PE) and normal communication. This paper presents fundamental insights into various novel PE approaches, their feasibility and performance trade-offs for k-ary 2-cube NoCs. The dependence of the normal communication phase on the probability of finding paths and their quality in the first phase emphasizes PE's significance. One major contribution of this work is the investigation of applying constrained randomness to PE to optimize the quality of paths. Another contribution is the proposed use of traffic merging to reduce the reconfiguration time by a large amount (73.8% on average).",2007,0,
5395,Fault diagnosis of electronic system using artificial intelligence,"With increasing system complexity, shorter product life cycles, lower production costs, and changing technologies, the need for intelligent tools for all stages of a product's lifecycle is becoming increasingly important. The purpose of this article is to give a brief review of how AI has been used in the field of electronic fault diagnosis. Topics discussed include: rule-based diagnostic systems; model-based diagnostic systems; case-based reasoning (CBR); fuzzy reasoning and artificial neural networks (ANN); hybrid approaches; IEEE diagnostic standards; and future developments in automated diagnostic tools.",2002,0,
5396,IVF: Characterizing the vulnerability of microprocessor structures to intermittent faults,"With the advancement of CMOS manufacturing processes to the nano-scale, microprocessors shipped in the future will be increasingly vulnerable to intermittent faults. 
Quantitatively characterizing the vulnerability of microprocessor structures to intermittent faults at an early design stage is significantly helpful for balancing system performance and reliability. Prior research has proposed several metrics to characterize the vulnerability of microprocessor structures to soft errors and permanent faults; however, the vulnerability of these structures to intermittent faults is still rarely considered. In this work, we propose a metric, the intermittent vulnerability factor (IVF), to characterize the vulnerability of microprocessor structures to intermittent faults. A structure's IVF is the probability that an intermittent fault in that structure causes an externally visible error. We instrument a cycle-accurate execution-driven simulator, Sim-Alpha, to compute IVFs for the reorder buffer and register file. Experimental results show that the IVF of the reorder buffer is much higher than that of the register file. Besides, IVF varies significantly across different structures and workloads, which implies that partial protection of the most vulnerable structures can improve system reliability with less overhead.",2010,0,
5397,Designing quantum adder circuits and evaluating their error performance,"With the advent of efficient quantum algorithms and technological advances, the design of quantum circuits has gained importance. Minimization of the gate count and of the number of gate levels are the two major objectives in quantum circuit design. The peculiar nature of quantum decoherence, which leads to quantum errors, mandates completion of all quantum gate operations within a time bound; hence, reducing the gate count and the number of circuit levels lowers the errors and the overall cost of quantum circuits. In this paper, we propose the design of adder circuits using CNOT and C^kNOT gates, with significant reductions in gate count and number of gate levels over their existing counterparts in the literature. We then present a software model for evaluating errors in quantum computing circuits and employ it to evaluate the error performance of our proposed quantum adder circuits.",2008,0,
5398,Fault-tolerant Video on Demand in RSerPool Architecture,"With the advent of the Internet, video over IP is gaining popularity. In such an environment, scalability and fault tolerance will be the key issues. Existing video on demand (VoD) service systems are usually neither scalable nor tolerant of server faults, and hence are poorly suited to multi-user, failure-prone networks such as the Internet. Current research concerning VoD often focuses on increasing the throughput and reliability of a single server, but rarely addresses the smooth provision of service during server as well as network failures. Reliable Server Pooling (RSerPool), being capable of providing high availability by using multiple redundant servers as a single source point, can be a solution to overcome the above failures. During a server failure, the continuity of service is retained by another server. In order to achieve transparent failover, efficient state sharing is an important requirement. 
In this paper, we present an elegant, simple, efficient and scalable approach which has been developed to facilitate the transfer of state by the client itself, using an extended cookie mechanism, which ensures that there is no noticeable disruption or change in video quality.",2006,0,
5399,An RT-level fault model with high gate level correlation,"With the advent of new RT-level design and test flows, new tools are needed to migrate to the RT level the activities of fault simulation, testability analysis, and test pattern generation. This paper focuses on fault simulation at the RT level, and aims at exploiting the capabilities of VHDL simulators to compute faulty responses. The simulator was implemented as a prototypical tool, and experimental results show that simulation of a faulty circuit is no more costly than simulation of the original circuit. The reliability of the fault coverage figures computed at the RT level is increased thanks to an analysis of inherent VHDL redundancies, and by foreseeing classical synthesis optimizations. A set of rules is used to compute a fault list that exhibits good correlation with stuck-at faults",2000,0,
5400,An empirical analysis of fault persistence through software releases,"This work is based on the idea of analyzing, over the whole life-cycle, the behavior of source files having a high number of faults at their first release. In terms of predictability, our study helps to understand whether files that are faulty in their first release tend to remain faulty in later releases, and investigates ways to assure higher reliability for the faultiest programs, by testing them carefully or lowering the complexity of their structure. The purpose of this paper is to verify our hypothesis empirically, through an experimental analysis of two different projects, and to find causes by observing the structure of the faulty files. As a conclusion, we can say that the number of faults at the first release of a source file is an early and significant index of its expected defect rate and reliability.",2003,1,
5401,Does calling structure information improve the accuracy of fault prediction?,"Previous studies have shown that software code attributes, such as lines of source code, and history information, such as the number of code changes and the number of faults in prior releases of software, are useful for predicting where faults will occur. In this study of an industrial software system, we investigate the effectiveness of adding information about calling structure to fault prediction models. The addition of calling structure information to a model based solely on non-calling structure code attributes provided noticeable improvement in prediction accuracy, but only marginally improved the best model based on history and non-calling structure code attributes. The best model based on history and non-calling structure code attributes outperformed the best model based on calling and non-calling structure code attributes.",2009,1,
5402,Application of neural network for predicting software development faults using object-oriented design metrics,"In this paper, we present the application of neural networks for predicting software development faults, including object-oriented faults. Object-oriented metrics can be used in quality estimation. In practice, quality estimation means estimating either reliability or maintainability. In the context of object-oriented metrics work, reliability is typically measured as the number of defects. 
Object-oriented design metrics are used as the independent variables and the number of faults is used as the dependent variable in our study. The software metrics used include inheritance measures, complexity measures, coupling measures, and object memory allocation measures. We also test the goodness of fit of the neural network model by comparing its predictions of software faults with those of a multiple regression model. Our study is conducted on three industrial real-time systems that contain a number of natural faults reported over three years (Mei-Huei Tang et al., 1999).",2002,1, 5403,Using search-based metric selection and oversampling to predict fault prone modules,"Predictive models can be used in the detection of fault prone modules, using source code metrics as inputs for the classifier. However, there exist numerous structural measures that capture different aspects of size, coupling, and complexity. Identifying a metric subset that enhances performance for the predictive objective would not only improve the model but also provide insights into the structural properties that lead to problematic modules. Another difficulty in building predictive models comes from unbalanced datasets, which are common in empirical software engineering, as a majority of the modules are not likely to be faulty. Oversampling attempts to overcome this deficiency by generating new training instances from the faulty modules. We present the results of applying search-based metric selection and oversampling to three NASA datasets. For these datasets, oversampling results in the largest improvement. Metric subset selection was able to remove up to 52% of the metrics without decreasing the predictive performance gained with oversampling.",2010,1, 5404,A Rough Set Model for Software Defect Prediction,"High assurance software requires extensive and expensive assessment. Many software organizations frequently do not allocate enough resources for software quality. We study defect detectors, focusing on the data sets used in software defect prediction. A rough set model is presented in this paper to deal with the attributes of software defect prediction data sets. Applying this model to the well-known public domain data set created by NASA's metrics data program demonstrates its excellent performance.",2008,1, 5405,An in-site system based-on ARM of faults diagnostic with the amplitude recovery method,"A new harmonic analysis method - the amplitude recovery method - is presented and used for on-site fault diagnosis of induction motors. To implement this method, a portable, on-site system for induction motor fault diagnosis based on ARM (Advanced RISC Machines) is also described. The S3C2410 ARM is chosen as the core of this system. Its fast, multi-channel ADC subsystem and small size meet the needs of real-time, on-site testing of induction motors. With the help of a PC and current spectrum analysis, this ARM system can be used to diagnose faults in induction motors. Moreover, with the aid of a small-sized PC, the Vortex86-6082, a reliable and portable system can be realized for testing and diagnosing induction motor faults on site.",2008,0, 5406,High Continuous Availability Digital Information System Based on Stratus Fault-Tolerant Server,"With the construction of a harmonious society, improvements in health care, and the rapid development of information technology, people place ever higher demands on hospitals.
A hospital information system, as an online service system, requires continuous operation, and the server system is the key to supporting hospital operations. Outages caused by server system failures are not uncommon. To address the insufficient reliability of the traditional cluster server system, this article presents an in-depth technical analysis of the performance of the Stratus fault-tolerant server. Combining this with the characteristics of hospital information systems, it proposes a digital hospital information system architecture based on the Stratus fault-tolerant server and explores and analyzes the economic and technical advantages of the approach. Application results demonstrate that the approach is economical and can realize continuous availability.",2010,0, 5407,Infrared technology in the fault diagnosis of substation equipment,"With the development of infrared technology and its further application in the electric power system, it plays an increasingly important role in electrical equipment fault diagnosis. Improving the accuracy of infrared diagnosis technology and its application effect is of great practical value to research on infrared diagnosis application technology. From the perspective of daily electric power system inspection, the paper explains how to diagnose the most common radiation faults and troubles using infrared imaging equipment, describes the procedure for obtaining infrared images of electrical equipment, and discusses the analysis of infrared images. In addition, the paper presents a series of management methods associated with daily infrared diagnosis work.",2008,0, 5408,1st workshop on fault-tolerance for HPC at extreme scale FTXS 2010,"With the emergence of many-core processors, accelerators, and alternative/heterogeneous architectures, the HPC community faces a new challenge: a scaling in the number of processing elements that supersedes the historical trend of scaling in processor frequencies. The attendant increase in system complexity has first-order implications for fault tolerance. Mounting evidence invalidates traditional assumptions of HPC fault tolerance: faults are increasingly multiple-point instead of single-point and interdependent instead of independent; silent failures and silent data corruption are no longer rare enough to discount; stabilization time consumes a larger fraction of useful system lifetime, with failure rates projected to exceed one per hour on the largest systems; and application interrupt rates are apparently diverging from system failure rates.",2010,0, 5409,Research on calibration system error of 6-axis force/torque sensor integrated in humanoid robot foot,"With the rapid development of humanoid robots of high intelligence and accuracy, improvement of the comprehensive performance of the six-axis force/torque sensor (F/T sensor) has been constantly emphasized and is subject to ever higher demands. Apart from a proper mechanical design to guarantee the precision of the F/T sensor, calibration quality is one of the most important factors influencing its precision.
In this correspondence, the factors influencing the precision of the F/T sensor and the sources of system error are analysed from the viewpoint of calibration, with the aim of improving the optimization design of the sensor structure and the calibration system so as to reduce or eliminate error effects; this offers a theoretical foundation for improving the comprehensive performance and measurement accuracy of the F/T sensor.",2010,0, 5410,Error spreading: a perception-driven approach to handling error in continuous media streaming,"With the growing popularity of the Internet, there is increasing interest in using it for audio and video transmission. Perceptual studies of audio and video viewing have shown that viewers find bursty losses, mostly caused by congestion, to be the most annoying disturbance, and hence these are critical issues to be addressed for continuous media streaming applications. Classical error handling techniques have mostly been geared toward ensuring that the transmission is correct, with no attention to timeliness. For isochronous traffic like audio and video, timeliness is a key criterion, and given the high degree of content redundancy, some loss of content is quite acceptable. We introduce the concept of error spreading, a transformation technique that permutes the input sequence of packets (from a continuous stream of data) before transmission. The packets are unscrambled at the receiving end. The transformation is designed to ensure that bursty losses in the transformed domain get spread over the whole sequence in the original domain, thus improving the perceptual quality of the stream. Our error spreading idea deals with both cases where the stream has or does not have inter-frame dependencies. We next describe a continuous media transmission protocol based on this idea and experimentally validate its performance. We also show that our protocol can be used as a complement to other error handling protocols",2002,0, 5411,Defect Tracing System Based on Orthogonal Defect Classification,"With the increase of software complexity, defect measurement becomes a task of high priority. We present a new software defect analysis methodology based on orthogonal classification, which is twofold. First, an orthogonal defect classification (ODC) reference model is given, which includes the activity, trigger, severity, origin, content, and type of a defect. Then a support tool and the concrete workflow of defect tracing are presented. In contrast with the traditional method, this method not only has the advantages of popularity, robustness, and low cost, but also notably improves the accuracy of identifying defects. Thus it offers strong support for the prevention of defects in software products.",2008,0, 5412,Nova: A Robustness-oriented Byzantine Fault Tolerance Protocol,"With their increased complexity, malicious faults have become an important factor affecting the reliability of distributed systems, especially web-scale infrastructures such as Amazon S3 and Google AppEngine. Most such systems assume a benign fault model, which cannot capture malicious actions. The goal of a Byzantine Fault Tolerance protocol (BFT for short) is to mask malicious behavior, and it has been shown that some newly proposed BFTs are suitable for supporting practical applications. But these BFTs still lack robustness: a simple fault injection may cause a significant decrease in throughput, or the system may run at low throughput without violating the BFT safety property.
We propose a new robustness-oriented BFT named Nova. Experiments show that Nova has throughput comparable to PBFT in the normal case and behaves stably under malicious attack. Compared with other BFTs, Nova can support practical applications more effectively.",2010,0, 5413,Incremental fault-tolerant design in an object-oriented setting,"With the increasing emphasis on dependability in complex, distributed systems, it is essential that system development can be done gradually and at different levels of detail. We propose an incremental treatment of faults as a refinement process on object-oriented system specifications. An intolerant system specification is a natural abstraction from which a fault-tolerant system can evolve. With each refinement step a fault and its treatment are introduced, so the fault tolerance of the system increases during the design process. Different kinds of faults are identified and captured by separate refinement relations according to how the tolerant system relates to abstract properties of the intolerant one in terms of safety and liveness. The specification language utilized is object-oriented and based upon first-order predicates on communication traces. Fault-tolerance refinement relations are formalized within this framework",2001,0, 5414,Design and Implementation of Failover Federates Supporting Fault Tolerance for HLA Based Simulations,"With the increasing scale and complexity of HLA based simulations, fault tolerance is gradually becoming a pressing problem. This paper addresses the challenges in realizing a failover federate to support fault tolerance for HLA based simulations. Based on an analysis of the fault tolerance problem, the failover federate, which comprises a primary federate and a standby federate, is first described. Next, the failover federate is designed by examining the fault detection problem and the fault-tolerance dispatching method. Additionally, the implementation details are explained. Testing shows that the reliability of the whole simulation system can be improved through the introduction of standby federates.",2010,0, 5415,System RAS implications of DRAM soft errors,"While attention in the realm of computer design has shifted away from the classic DRAM soft-error rate (SER) and focused instead on SRAM and microprocessor latch sensitivities as sources of potential errors, DRAM SER nonetheless remains a challenging problem. This is true even though both cosmic-ray-induced and alpha-particle-induced DRAM soft errors have been well modeled and, to a certain degree, well understood. However, the often-overlooked alignment of a DRAM hard error and a random soft error can have major reliability, availability, and serviceability (RAS) implications for systems that require an extremely long mean time between failures. The net effect is that what appears to be a well-behaved, single-bit soft error ends up overwhelming a seemingly state-of-the-art mitigation technique. This paper describes some of the history of DRAM soft-error discovery and the subsequent development of mitigation strategies.
It then examines some architectural considerations that can exacerbate the effect of DRAM soft errors and may have system-level implications for today's standard fault-tolerance schemes.",2008,0, 5416,A simulation technique for the evaluation of random error effects in time-domain measurement systems,"While many papers deal with time-domain network analyzer calibration procedures for the correction of systematic errors, little work has been published on the treatment of random errors. This paper focuses on the evaluation of random error effects in time-domain measurement systems. As a first step, an experimental identification of the measurement system's random errors is performed. The random errors addressed are jitter, vertical noise, and fast time drifts. Based on this identification, mathematical models are developed to simulate the random errors. In a second step, time-domain measurements are simulated with these random errors. These simulations are used to predict measurement system repeatability and dynamic range. Then, as an application example, simulations of the measurement of the complex propagation coefficient and S parameters of a lossy mismatched microstrip line are performed. By comparison with real measurements, it is shown that random error effects can be accurately predicted by Monte Carlo simulations",2001,0, 5417,Zapmem: A Framework for Testing the Effect of Memory Corruption Errors on Operating System Kernel Reliability,"While monolithic operating system kernels are composed of many subsystems, at runtime they all share a common address space, making fault propagation a serious issue. The code quality of each subsystem differs, as OS development is a complex task commonly divided among different groups with different degrees of expertise. Since the memory space in which this code runs is shared, bugs or errors in one of the subsystems may propagate to others and affect overall OS reliability. It is necessary, then, to test how errors propagate between the different kernel subsystems and how they affect reliability. This work presents a simple new technique for injecting memory corruption faults and Zapmem, a fault injection tool that uses this technique to test the effect on reliability of corrupting statically allocated kernel data. Zapmem associates runtime memory addresses with the corresponding high-level (source code) memory structure definitions, which indicate which kernel subsystem allocated each memory region, and the tool has minimal intrusiveness, as our technique does not require kernel instrumentation. The efficacy of our approach and preliminary results are also presented.",2009,0, 5418,Exploring the Maintenance Process through the Defect Management in the Open Source Projects - Four Case Studies,"While open source software is becoming ever more widespread and widely used these days, its maintenance is becoming an important issue. Earlier studies have shown that defect and version management systems are rich and valuable sources for the evaluation of maintenance, but they have not studied the use of separate management systems for support and feature requests. Therefore, in this research we study defect reports and support and feature requests of open source software projects through four case studies from SourceForge. The results show that most of the case study projects used those systems actively, but their discussion forums were even more active.
Although reports and requests were submitted, most of them did not lead to any changes or further actions, because they were closed shortly afterwards as duplicates, as invalid, or without any resolution.",2006,0, 5419,Speeding up Fault Injection for Asynchronous Logic by FPGA-Based Emulation,"While the stability and robustness of synchronous circuits become increasingly problematic due to shrinking feature sizes, delay-insensitive asynchronous circuits are supposed to provide inherent protection against various fault types. However, results on the experimental evaluation and analysis of these fault tolerance properties are scarce, mainly due to the lack of suitable prototyping platforms. Using a soft-core processor as an example, this paper shows how an off-the-shelf FPGA can be used for asynchronous four-state logic designs, on which future fault injection experiments will be conducted.",2009,0, 5420,Consensus-based fault-tolerant total order multicast,"While total order broadcast (or atomic broadcast) primitives have received a lot of attention, this paper concentrates on total order multicast to multiple groups in the context of asynchronous distributed systems in which processes may suffer crash failures. Multicast to multiple groups means that each message is sent to a subset of the process groups composing the system, distinct messages possibly having distinct destination groups. Total order means that all message deliveries must be totally ordered. This paper investigates a consensus-based approach to solve this problem and proposes a corresponding protocol to implement this multicast primitive. This protocol is based on two underlying building blocks, namely, uniform reliable multicast and uniform consensus. Its design characteristics lie in the two following properties. The first is a minimality property: only the sender of a message and the processes of its destination groups have to participate in the total order multicast of the message. The second is a locality property: no execution of a consensus has to involve processes belonging to distinct groups (i.e., consensus is executed on a per-group basis). This locality property is particularly useful when one is interested in using the total order multicast primitive in large-scale distributed systems. In addition to a correctness proof, an improvement that reduces the cost of the protocol is also suggested",2001,0, 5421,A fault injection tool for SRAM-based FPGAs,"A fault injection tool for SRAM-based FPGAs based on the fault emulation technique is presented. Faults are injected by modifying the configuration bitstream while it is loaded into the device, without using standard synthesis tools or available commercial software such as Jbits. This makes our tool independent of the system used for design development and allows quick fault injection. Also, any device configuration cell can be accessed, which permits studying the effects of possible contentions or shorts that cannot be analyzed using commercial tools. An example of the use of the tool is described.",2003,0, 5422,Fault Tolerant Active Pixel Sensors in 0.18 and 0.35 Micron Technologies,"A fault tolerant active pixel sensor (FTAPS) has been designed and fabricated to correct for point defects that occur in CMOS image sensors both during manufacturing and over the lifetime of the sensor.
For some time it has been known that fabrication of CMOS image sensors in processes below 0.35 μm would generate significant performance changes, yet imagers are being fabricated in 0.18 μm technology or smaller. Therefore, the characteristics of the FTAPS are presented for pixels fabricated in both a standard 0.18 μm and a 0.35 μm CMOS process and compared for consistency",2006,0, 5423,Fault secure datapath synthesis using hybrid time and hardware redundancy,"A fault-secure datapath either generates a correct result or signals an error. This paper presents a register transfer level concurrent error detection (CED) technique that uses hybrid time and hardware redundancy to optimize the time and area overhead associated with fault security. The proposed technique combines the idle computation cycles in a datapath with selective breaking of data dependences of the normal computation. Designers can trade off time and hardware overhead by varying these design parameters. We present an algorithm to synthesize fault-secure designs and validate it using Synopsys' Behavioral Compiler.",2004,0, 5424,Characteristics of fault-tolerant photodiode and photogate active pixel sensor (APS),"A fault-tolerant APS has been designed by splitting the APS pixel into two halves operating in parallel, where the photo-sensing element has been divided in two and the readout transistors have been duplicated while maintaining a common row select transistor. This split design allows for a self-correcting pixel scheme such that if one half of the pixel is faulty, the other half can be used to recover the entire output signal. The fault-tolerant APS design has been implemented in a 0.18 μm CMOS process for both a photodiode-based and a photogate-based APS. Test results show that the fault-tolerant pixels behave as expected: a non-faulty pixel behaves normally, and a half-faulty pixel, where one half is stuck either low or high, produces roughly half the sensitivity. Preliminary results indicate that the sensitivity of a redundant pixel is approximately three times that of a traditional pixel for the photodiode APS and approximately twice that for the photogate APS.",2004,0, 5425,Fault-Tolerant Tension Allocation of Ship Mobile Mooring System under Sea Wind,"A fault-tolerant tension allocation framework is suggested for a ship mobile mooring system consisting of eight anchor windlasses, and its fault-tolerant performance is evaluated under sea wind disturbance. The simulation results indicate that ship mobile mooring systems have a certain fault-tolerance capability. When one windlass fails, the ship is able to completely counteract the sea wind disturbance. When two windlasses fail, the ship's fault tolerance becomes worse, and the remaining six windlasses are only able to roughly balance the wind disturbance. When three windlasses fail, the ship's fault tolerance becomes even worse, and the remaining five windlasses are unable to counteract the lateral force from the sea wind. In summary, the more windlasses fail, the more the performance of the mooring system degrades.",2009,0, 5426,Fixed-point error analysis of two DCT algorithms,"A fixed-point error analysis of two fast DCT algorithms proposed by Hou (1987) and Makhoul (1980) is presented. Expressions for the error variances are derived and compared with simulation results. The simulation and analysis results are found to agree quite closely, which demonstrates the validity of the analysis.
In addition, the two algorithms are compared in terms of their advantages and disadvantages",2000,0, 5427,Multiple-description coding of speech using forward error correction codes,"A flexible framework is presented which performs multiple-description coding of speech signals with two or more channels. The use of forward error correction codes together with a layered speech codec permits encoding into more than two descriptions without an excessive increase in complexity. Results of a formal MOS listening test reveal considerable improvements in robustness as long as the base layer quality and the number of descriptions are chosen appropriately. A modification of the original encoding scheme allows trading off bit rate savings against robustness to extreme channel conditions. Different coding schemes can easily be compared using real-time demonstrator software.",2007,0, 5428,Error Analysis on Top-Surface-Flatness of Diesel Engine Body Using FEM,"A four-cylinder diesel engine model is established using the Pro/Engineer 3D modeling software. The model is loaded in the ANSYS software; meshes are imposed on the model together with contact settings and boundary conditions, and finite element analysis is performed on the contact state of the diesel engine body. According to the minimum-zone principle for flatness error analysis, the Particle Swarm Optimization (PSO) algorithm is applied, and a mathematical model and method for the engine body top-surface-flatness error analysis are established. Based on the established method, the engine body top-surface-flatness errors can be obtained under maximum-preload working conditions, providing an accurate and effective way to further improve engine tightness and body design.",2010,0, 5429,"System technology and test of CURL 10, a 10 kV, 10 MVA resistive high-Tc superconducting fault current limiter","A full scale three-phase resistive high-Tc superconducting fault current limiter (SCFCL) designed for 10 kV, 10 MVA has been developed, manufactured, and tested within a publicly funded German project called CURL 10. The device is based on 90 bifilar coils of MCP BSCCO-2212 bulk material. The operating temperature of 66 K is achieved by cooling liquid nitrogen with two Stirling cryocoolers. To date, this is the largest HTS fault current limiter worldwide. We report on the design features, the composition, and the operating parameters of the SCFCL system. Since April 2004, CURL 10 has been installed and tested within the network of the utility RWE at Netphen near the city of Siegen, Germany. The results of the laboratory test and the field test of CURL 10 are given.",2005,0, 5430,A fully automatic redeye detection and correction algorithm,"A fully automatic redeye detection and correction algorithm was developed at Eastman Kodak Company Research Laboratories. The algorithm is sophisticated enough to distinguish most redeye pairs from scene content. It is also highly optimized for execution speed and memory usage, enabling it to be included in a variety of products. Detected redeyes are corrected so that the red color is removed but the eye maintains a natural look.",2002,0, 5431,A fault tolerance infrastructure for high-performance COTS-based computing in dependable space systems,"A fundamental solution that allows the use of high-performance but poorly checked processors in dependable space systems is the use of a generic, hierarchical, fault-tolerant hardware infrastructure (FTI).
This FTI is a software-independent innermost defense for an autonomous, fault-tolerant, long-life system that may also employ other, especially software-based, fault tolerance techniques. The entire FTI is fault-tolerant and contains no software, thus being immune to malicious software intrusions.",2004,0, 5432,A Fuzzy Neural Network Based Fault Detection Scheme for Synchronous Generator with Internal Fault,"A fuzzy neural network (FNN) based inter-turn short circuit fault detection scheme for generators is proposed. The second harmonic magnitude of the field current and the negative sequence components of the voltages and currents are used as inputs to the FNN fault detector. The negative sequence voltage and current are obtained from the phase voltages and currents using the symmetrical component analysis method, and the second harmonic magnitude of the field current is obtained using the FFT technique. The FNN fault detector with Gaussian membership functions is trained off-line using training data from the Multi-Loop simulation program. The proposed fault detection scheme can perform inter-turn short circuit fault detection, fault type classification, and fault location identification. Experimental results, obtained with an implementation on a TI DSP, corroborate the effectiveness of the proposed scheme.",2009,0, 5433,An Agent-Based Migration Transparency and Fault Tolerance in Computational Grid,"A grid is a large-scale, geographically distributed hardware and software infrastructure for flexible, secure, and coordinated sharing of vast amounts of heterogeneous resources within large, dynamic, and distributed communities of users belonging to virtual organizations, enabling the solution of complex scientific problems. From the perspective of one computer, a network partition may appear as a failure of other computers. Such failures can have a major impact on an entire application that has been executing on the grid for many days. Failures in the grid computing environment can thus be handled to some extent by migrating applications through node agents. As applications executing on nodes must provide continuous service, it becomes important to handle and mask faults and to migrate the current job to another grid node without stopping ongoing processes. In this paper, we use agents to provide communication between grid nodes and to implement fault tolerance techniques on the grid. An agent is dynamically reallocated to grid nodes through a transparent migration mechanism, as a way to provide fault tolerance in computational grids.",2009,0, 5434,XOR retransmission in multicast error recovery,"A growing number of network applications require the use of a reliable multicast protocol to disseminate data from a source to a potentially large number of receivers. We present a new error recovery mechanism, called XOR, which is based on selective repeat ARQ protocols. The idea is to combine several NACKed packets by XORing them. Analytical and simulation results show that XOR achieves better throughput than both `N1' and `N2' do",2000,0, 5435,Fault Tolerance Virtual Router for Linux Virtual Server,"A growing variety of edge network access devices appear on the marketplace, performing various functions meant to complement generic routers' capabilities, such as firewalling, intrusion detection, virus scanning, network address translation, traffic shaping, and route optimization.
These edge network access devices are deployed on the critical path between a user site and its Internet service provider. Nowadays the availability of network services is very important for many businesses, and it is essential that the overload or failure of one network cannot prevent the normal use of all other services. This paper focuses on the analysis of various protocols and on an implementation of the redundant gateway protocol (RGP), which can address the above problem. It runs in user space on the Linux operating system.",2009,0, 5436,Empirical comparison of software-based error detection and correction techniques for embedded systems,"""Function Tokens"" and ""NOP Fills"" are two methods proposed by various authors to deal with instruction pointer corruption in microcontrollers, especially in the presence of high electromagnetic interference levels. An empirical analysis to assess and compare these two techniques is presented in this paper. Two main conclusions are drawn: [1] NOP Fills are a powerful technique for improving the reliability of embedded applications in the presence of EMI, and [2] the use of Function Tokens can lead to a reduction in overall system reliability.",2001,0, 5437,Accurate DS-CDMA bit-error probability calculation in Rayleigh fading,"A binary direct-sequence spread-spectrum multiple-access system with random sequences in flat Rayleigh fading is considered. A new explicit closed-form expression is obtained for the characteristic function of the multiple-access interference signals. It is shown that the overall error rate can be expressed by a single integral whose integrand is nonnegative and exponentially decaying. Bit-error rates (BERs) are obtained with this expression to any desired accuracy with minimal computational complexity. The dependence of the system BER on the number of transitions in the target user's signature chip sequence is explicitly derived. The results are used to examine definitively the validity of three Gaussian approximations and to compare the performance of synchronous systems with that of asynchronous systems",2002,0, 5438,Automatic Software Bug Triage System (BTS) Based on Latent Semantic Indexing and Support Vector Machine,"A bug triage system is used for the validation and allocation of bug reports to the most appropriate developers. An automatic bug triage system may reduce software maintenance time and improve software quality by the correct and timely assignment of new bug reports to the appropriate developers. In this paper, we present the techniques behind an automatic bug triage system based on the categorization of bug reports, and we performed comparative experiments with these techniques. We downloaded 1,983 resolved bug reports along with the developer activity data from the Mozilla open source project. We extracted relevant features, such as the report title and report summary, from each bug report, and extracted the names of the developers who resolved the bug reports from the developer activity data. We processed the extracted textual data and obtained the term-to-document matrix using parsing, filtering, and term weighting methods. For term weighting we used simple term frequency and TF×IDF (term frequency-inverse document frequency) methods. Furthermore, we reduced the dimensionality of the obtained term-to-document matrix by applying feature selection and latent semantic indexing methods.
Finally, we used seven different machine learning methods for the classification of bug reports. The best bug triage system obtained is based on latent semantic indexing and a support vector machine, with 44.4% classification accuracy. The average precision and recall values are 30% and 28%, respectively.",2009,0, 5439,Sampling rate of digital fault recorders influence on fault diagnosis,"A case study of fault classification in transmission lines using artificial neural networks (ANN) is presented. The database is built from current and voltage waveform samples obtained from fault simulations with the ATP. Utility companies usually have digital fault recorders with different sampling rates, so it is important to evaluate how well the classifier performs when the sampling rate changes; this is the main purpose of the paper. A routine to reduce the sampling rate with no loss of accuracy in classifying faults was implemented.",2004,0, 5440,Modeling a fault-tolerant distributed system,"A C-based simulation model of the time-triggered protocol (TTP/C) has been designed and implemented as a tool for verifying the properties of a system designed on the basis of it. The model has been provided with a user-friendly interface to allow easy visualization and evaluation of the results. The functionality of this general-purpose model is demonstrated on a simple TTP/C cluster application running under the influence of fault injection. The first round of experiments shows that the system is tolerant of some typical transient faults, such as memory data distortion.",2001,0, 5441,Dynamic data replication: an approach to providing fault-tolerant shared memory clusters,"A challenging issue in today's server systems is to transparently deal with failures and application-imposed requirements for continuous operation. In this paper we address this problem in shared virtual memory (SVM) clusters at the programming abstraction layer. We design extensions to an existing SVM protocol that has been tuned for low-latency, high-bandwidth interconnects and SMP nodes, and we achieve reliability through dynamic replication of application shared data and protocol information. Our extensions allow us to tolerate single (or multiple, but not simultaneous) node failures. We implement our extensions on a state-of-the-art cluster and evaluate the common, failure-free case. We find that, although the complexity of our protocol is substantially higher than that of its failure-free counterpart, by taking advantage of architectural features of modern systems our approach imposes low overhead and can be employed for transparently dealing with system failures.",2003,0, 5442,FPGA based realization of a reduced complexity high speed decoder for error correction,"A chip for high speed two-bit error correction in the received signal has been designed and implemented on a Xilinx FPGA using VHDL. The design is based on a modified step-by-step decoding algorithm which does not require the calculation of the error location polynomial. The use of complex, computation-intensive inverse operations is also avoided. Efforts have been made to reduce the complexity of the decoder. A modified circuit has been used for multiplication of field elements within the Galois field. For squaring the field elements within the Galois field, a modified square circuit with much less complexity has been successfully designed.
The average number of operation cycles for decoding each received word is just equal to the block length of the coded word.",2003,0, 5443,A parallel implementation of fault simulation on a cluster of workstations,"A cluster of workstations may be employed to reduce fault simulation time greatly. Fault simulation can be parallelized by partitioning the fault list, the test vectors, or both. In this study, a parallel fault simulation algorithm called PAUSIM has been developed by parallelizing AUSUM, which consists of logic simulation and two steps of fault simulation for sequential logic circuits. Compared to other algorithms, PAUSIM-CY avoids redundant work by a judicious task decomposition. Also, it adopts a cyclic fault partitioning method based on LOG partitioning and local redistribution, resulting in a well-balanced load distribution. The results from the parallel implementation using MPI show a significant speed-up by PAUSIM-CY over other existing parallel algorithms.",2008,0, 5444,"The Perfect Binary One-Error-Correcting Codes of Length 15: Part I - Classification","A complete classification of the perfect binary one-error-correcting codes of length 15, as well as their extensions of length 16, is presented. There are 5983 such inequivalent perfect codes and 2165 extended perfect codes. Efficient generation of these codes relies on the recent classification of Steiner quadruple systems of order 16. Utilizing a result of Blackmore, the optimal binary one-error-correcting codes of length 14 and the (15, 1024, 4) codes are also classified; there are 38 408 and 5983 such codes, respectively.",2009,0, 5445,Analytic representation of eddy current sensor data for fault diagnostics,"A complex representation of eddy-current sensor data is proposed and used for the detection of blade fault conditions in turbine engines. The representation is applied to the problem of detecting synchronous vibrations using a single sensor",2005,0, 5446,An Approach to Error Concealment for Entire Right Frame Loss in Stereoscopic Video Transmission,"A compressed stereoscopic video stream is sensitive to the loss of data packets when transmitted over the Internet. One approach to combating the impact of such losses is the use of error concealment at the decoder. In this paper, an error concealment algorithm is proposed for restoring an entirely lost right frame. First, the characteristics of the stereoscopic video sequence are analyzed; then, based on the correlation of prediction modes for right frames, the prediction mode of each MB in the lost frame is chosen and finally utilized to restore the lost MB according to the estimated motion vector or disparity vector. Experimental results show that the proposed algorithm can restore the lost frame with good quality and is efficient for concealing entirely lost right frames in stereoscopic video sequences",2006,0, 5447,Identification of human errors during device-related accident investigations,"A minisystem is the smallest system that can deliver a single clinical benefit. Healthcare is delivered to patients through an assemblage of minisystems. It is the failure of these minisystems that reportedly results in between 44,000 and 98,000 iatrogenic deaths in the United States annually. Accident investigations are intended to identify the latent defects within these minisystems and to recommend corrective actions that will prevent a recurrence.
Since human error is involved in approximately 69% of these deaths, understanding the fundamental causes of human error is important to an effective investigation. Device-related accident investigations require a three-step process to identify the deficiencies: 1) gathering the clinical and engineering data surrounding an event, 2) analyzing the data to identify the components of the minisystem that contributed to the event, and, when operator error is identified, 3) translating the clinical actions of the operator into human error concepts. A case study of an incident involving a defibrillator, which illustrates the process, and recommendations for adjusting the minisystem to increase compatibility with the device operator are presented.",2004,0, 5448,"Fault detection, isolation and restoration using a multiagent-based Distribution Automation System","A multiagent-based Distribution Automation System (DAS) is developed for service restoration of distribution systems after a fault contingency. In this system, Remote Terminal Unit (RTU) agents, Main Transformer (MTR) agents, Feeder Circuit Breaker (FCB) agents, and Feeder Terminal Unit (FTU) agents of the Multiagent System (MAS) are used to derive the proper restoration plan after the fault location is identified and isolated. To assure that the restoration plan complies with operating regulations, heuristic rules based on the standard operating procedures of Taipower's distribution system are included in the best-first search of the MAS. For fault contingencies during the summer peak season, when the capacity reserves of the supporting feeders and main transformer are not enough to cover the fault restoration, a load shedding scheme is derived for the MAS to restore service to as many key customers and loads as possible. A Taipower distribution system with 43 feeders is selected for computer simulation to demonstrate the effectiveness of the proposed methodology. The results show that by applying the proposed multiagent-based DAS, service of distribution systems can be efficiently restored.",2009,0, 5449,Multi-agents Based Fault Diagnosis Systems in MSW Incineration Process,"A multi-agent based fault diagnosis reference model for the MSW incineration process is important for achieving high speed and automation as well as the desired high levels of incineration efficiency and plant facility reliability. Fault diagnosis and maintenance are vital aspects of the MSW incineration process; in this sense, diagnosis and maintenance systems should support decision-making tools, new maintenance approaches and techniques, enterprise thinking, and flexibility. In this paper a multi-agent based fault diagnosis reference model for the MSW incineration process is presented which combines existing models with multi-agents. This model is based on a generic framework using multi-agent systems for MSW on-line monitoring; in this sense, the fault diagnosis problem is viewed as a feedback control process, and the actions are related to decision-making in the scheduling of preventive maintenance tasks and the running of specific preventive and corrective maintenance tasks. The results of an evaluation of the multi-agent based fault diagnosis reference model for the MSW incineration process are presented.
This new model is compared to some important existing models and applied to a real investigation.",2010,0, 5450,Noise Reduction and Confidence Level Analysis in MMG-based Transient Fault Location,"A multi-resolution morphological gradient (MMG) filtering technique has been applied successfully to the transient fault locator. The performance of the MMG inevitably deteriorates when various disturbances are imposed on the transient signals; these disturbances can be considered as noise. In this paper, a median filter is applied to reduce noise in the transient signals and to improve the performance of accurate fault location using the MMG. A confidence analysis is introduced to discuss the reliability of fault location under different signal-to-noise ratios in noisy environments. Since the accuracy of fault location relies on the accurate extraction of the maxima of the MMG, a confidence level index (CLI) is defined based on the analysis of these maxima. Hypothesis testing is used to determine whether an assertion about the maxima of the time-tags is reasonable. By using the CLI, the reliability of fault location can be measured. The analysis of receiver operating characteristic (ROC) curves with respect to the CLI has shown that the median filter can improve the discrimination between accurate and inaccurate locations, and the performance of the MMG detection scheme is thereby improved",2005,0, 5451,Fuzzy logic based fault detection of PMSM stator winding short under load fluctuation using negative sequence analysis,"A negative sequence analysis coupled with a fuzzy logic based approach is applied to fault detection of permanent magnet synchronous motors (PMSM). First, the fundamental components of the motor terminal currents and voltages are separated effectively, based on which the negative sequence components are calculated. A fuzzy logic based approach is implemented to generate robust detection using the adjusted negative sequence current and the negative sequence impedance. The adjusted negative sequence current is obtained by separating the high frequency components caused by load fluctuation from the total negative sequence current, and it provides a qualitative evaluation of the severity of the stator fault. Validation of the method is performed online using a PMSM experimental setup in a dSPACE® and Matlab®/Simulink® environment. The use of fuzzy logic improves the sensitivity of fault detection while reducing the false alarm rate under load fluctuations.",2008,0, 5452,Network fault management based on SNMP agent groups,"A network management system must be fault-tolerant in order to provide the required fault management functionality. It is often useful to examine the MIB objects of a faulty agent in order to determine why it is faulty. This paper presents a new framework for the replication of SNMP management objects in local area networks. The framework is based on groups of agents that communicate with each other using reliable multicast. A group of agents provides fault-tolerant object functionality. An SNMP service is proposed that allows replicated MIB objects of a faulty agent of a given group to be accessed through fault-free agents of that group. The presented framework allows the dynamic definition of agent groups and of the management objects to be replicated in each group. A practical fault-tolerant tool for local area network fault management was implemented and is presented.
The system employs SNMP agents that interact with a group communication tool. As an example, we show how the examination of TCP-related objects of faulty agents has been used in the fault diagnosis process. The impact of replication on network performance is evaluated, and a probabilistic analysis of replicated object consistency is given",2001,0, 5453,Novel AC line conditioner for power factor correction,"A new ac line conditioner is presented for achieving a high input power factor and clean ac output voltages for isolating linear or nonlinear loads. A three-phase two-leg switching-mode rectifier with neutral-point-clamped topology is proposed to draw sinusoidal line currents from the ac mains. The carrier-based current controller is used in the inner control loop to track the line current commands with unity power factor. The dc bus voltage controller is adopted in the outer control loop to regulate the dc-link voltage. A voltage compensator is used to balance the neutral point voltage on the dc tank. A three-phase two-leg inverter with neutral-point-clamped topology is adopted in the system to provide clean ac output voltages to critical or sensitive loads. The carrier-based current control scheme is adopted to improve the instantaneous output voltages. Experimental results show the validity and effectiveness of the proposed control strategy.",2004,0, 5454,Actinic inspection of sub-50 nm EUV mask blank defects,"A new actinic mask inspection technology to probe nano-scale defects buried underneath the Mo/Si multilayer reflection coating of an Extreme Ultraviolet Lithography mask blank has been implemented using EUV Photoemission Electron Microscopy (EUV-PEEM). EUV-PEEM images of programmed defect structures of various lateral and vertical sizes, recorded at around 13 nm wavelength, show that 35 nm wide and 4 nm high buried line defects are clearly detectable. The imaging technique proves to be sensitive to small phase jumps, enhancing the visibility of the edges of the phase defects, which is explained in terms of standing-wave-enhanced image contrast at resonant EUV illumination.",2007,0, 5455,Remote Synchronization of Onboard Crystal Oscillator for QZSS Using L1/L2/L5 Signals for Error Adjustment,"A new error adjustment method for remote synchronization of the onboard crystal oscillator of the quasi-zenith satellite system (QZSS), using three positioning signals of different frequencies (L1/L2/L5), is proposed. The error adjustment method that uses the L1/L2 positioning signals was demonstrated in the past. In both methods, the frequency-dependent part and the frequency-independent part were considered separately, and the total time information delay was estimated. By adopting L1/L2/L5, synchronization was improved by approximately 15% compared with that using L1/L2 and approximately 10% compared with that using L1/L5.",2007,0, 5456,Error Analysis of Explosion-Height Controlling Method Based on Geomagnetism Information,"A new explosion-height control scheme for ballistic missiles based on geomagnetic information is put forward, aimed at the deficiencies of the traditional explosion-height control method. A simplified geomagnetic model is presented in this study for the error analysis and precision calculation.
Although the scheme is not yet feasible, it can provide a beneficial reference for the design of the explosion-height control system of a ballistic missile.",2009,0, 5457,A fast forward error correction allocation algorithm for unequal error protection of video transmission over wireless channels,"A new fast forward error correction (FEC) allocation algorithm for unequal error protection (UEP) of video transmission over wireless channels is proposed in this paper. First, a UEP scheme is proposed which takes into consideration the non-uniformly distributed importance of frames in a group of pictures (GOP) and of macroblocks in a video frame. An enhanced video error propagation model, namely the expected number of macroblocks error propagation (ENMEP) model, is also proposed to measure the amount of error propagation caused by transmission errors. Finally, by making use of the proposed ENMEP error propagation model, a fast FEC allocation algorithm for the proposed UEP scheme is derived. Simulation results demonstrate that our scheme outperforms the previous UEP scheme and the classical equal error protection (EEP) scheme.",2008,0, 5458,A new impedance-based fault location method for radial distribution systems,"A new impedance-based fault location method suitable for radial distribution systems is presented in this paper. The method uses the fundamental phasor components of the voltage and current signals available at the distribution substation end only. Considering the unbalanced nature of the distribution network, with single-phase and two-phase laterals and unbalanced loads, the fault location algorithm is derived using phase-component analysis. The multiple-estimation problem, which is one of the main drawbacks of one-end impedance-based algorithms, is solved in the present method using the estimated current information in the healthy phases. The methodology is based on the during-fault and pure-fault values of the current phasors at the measuring end of the feeder and the distribution matrix derived using the pure-fault equivalent circuit. The efficacy of the method is demonstrated by simulating different distribution system configurations with different source impedances and fault types. The limitations of the proposed method are also discussed.",2010,0, 5459,Design of the distributed fault recorder based on TCP/IP,"A new kind of fault recorder is presented to overcome the shortcomings of existing fault recorders in power systems, such as unreasonable structural design, low communication speed, small data storage capacity, and low reliability. The recorder, with an MC68332 as its core, adopts high-speed synchronized sampling and computer network communication techniques. Network communication based on TCP/IP between the master station and the recording modules is adopted. The principles and the hardware and software design are introduced in detail. An RTOS named VxWorks and a modular design are applied, which make the system functions easy to expand and maintain. The recorder has been put into practical operation in several substations. The operational results show that it has the advantages of easy extension, convenient installation, good anti-interference, and high reliability.",2008,0, 5460,Rotor broken bars fault diagnosis for induction machines based on the wavelet ridge energy spectrum,"A new method for rotor broken bar fault diagnosis in induction machines based on the startup electromagnetic torque signal is presented.
The variation of the fault characteristic torque frequency during startup can be extracted using the wavelet ridge, which can be used to identify rotor broken bar faults. The wavelet coefficient modulus indicates the signal energy at the corresponding scale, so the wavelet coefficient modulus of the fault characteristic ridge gives the magnitude variation law of the fault characteristic torque. Accordingly, the wavelet ridge energy spectrum is defined. Using the wavelet ridge energy spectrum of the fault characteristic torque as the fault severity index, the number of adjacent broken rotor bars can thus be given. Experimental results verify the feasibility of the proposed fault diagnosis method",2005,0, 5461,Research on a New Fabric Defect Identification Method,"A new method for recognizing fabric defect features is proposed to alleviate the difficulty of extracting complicated fabric defect features. First, a fast Fourier transform and self-adaptive power spectrum decomposition are performed. The sector-regional energy of the spectrum is extracted, and its mean and standard deviation are calculated as fabric features. Then, the spectral energy distribution is projected onto the Y direction, and the local peaks after projection are extracted as defect recognition features. Fabric defect recognition using the proposed method shows high performance in on-line detection.",2008,0, 5462,Fast fault simulation for nonlinear analog circuits,"A new method of transient fault simulation uses dc bias grouping of faulty circuits and decreases the number of Newton-Raphson iterations needed to reach a solution. An experimental tool implementing this method achieves a speedup of 20% to 30% on a flat netlist.",2003,0, 5463,PCA data preprocessing for neural network-based detection of parametric defects in analog IC,"A new methodology for the algorithmic selection of a proper training vector set for neural network learning in 2D PCA space is presented. In feed-forward neural networks with unsupervised learning, training set selection plays a crucial role. In this paper, we propose a new approach to this selection using convex-hull graphics algorithms. A feed-forward neural network has been used for detecting parametric defects in a band-pass filter circuit. As is shown, a well-trained neural network is not only able to detect faulty devices by classifying the analysed circuit's parameter into the proper category but also identifies the direction of an undesired deviation of the parameter",2006,0, 5464,Dynamic displacement control error of multifunction and all-electric rheometer,"A new multi-function, all-electric rheometer (MAR) has been designed by the authors, in which a sinusoidal vibration displacement is superimposed in parallel on the steady-state displacement of the piston. In order to study the dynamic displacement control error of the MAR, the frequency and amplitude of the dynamic displacement were obtained by decomposing the measured resultant displacement into the steady-state displacement and the dynamic displacement, and transforming the time-domain signal into a frequency-domain representation. The results show that both the frequency and amplitude control errors of the MAR are small enough for the test requirements. However, the repeatability of the measured amplitude values is not as good as that of the frequency, and they are always smaller than the set values. The mean values of the amplitude relative errors are about 2%, so it may be useful to set the amplitude 2% higher than the desired values.
The relative errors of amplitude at 5 Hz are slightly larger than the errors in the other cases. This may be caused by the fact that 5 Hz is close to the lower limit of the vibration table's rated frequency.",2010,0, 5465,Fault diagnosis of high voltage direct current system based on particle filter,"A new particle filter based fault diagnosis method for nonlinear stochastic systems with non-Gaussian noise and disturbances is proposed by combining the particle filter algorithm with fault diagnosis theory. One of the appealing advantages of the new approach is that the complete probability distribution information of the state estimates from the particle filter is utilized for fault detection; another is its applicability to general non-linear systems with non-Gaussian noise and disturbances. Experimental results show the method is efficient and applicable to the HVDC system.",2009,0, 5466,Recursive prediction error identification and scaling of non-linear systems with midpoint numerical integration,"A new recursive prediction error algorithm (RPEM) based on a non-linear ordinary differential equation (ODE) model of black-box state space form is presented. The selected model is discretised by a midpoint integration algorithm and compared to an Euler forward algorithm. When the algorithm is applied, scaling of the sampling time is used to improve performance further. This affects the state vector, the parameter vector and the Hessian. This impact is analysed and described in three theorems. Numerical examples are provided to verify the theoretical results obtained.",2010,0, 5467,Fault Injection for Semi-Parametric Reliability Models,A new result about the reliability models of reconfigurable digital systems is derived and then applied to the problem of establishing ultra reliability by fault injection experiments. The result shows that the complicated fault recovery procedure can be adequately described by a few parameters. The resulting reduction in modeling and experimental effort brings establishing extremely low probabilities of failure within experimental reach. There is a discussion of the differences between this approach and previous efforts. The result is used to design experiments for several example systems,2005,0, 5468,Adaptive reclosure using high frequency fault transients,"A new signal processing algorithm for arcing fault detection based on the high frequency current transient is presented in this paper. In transient faults, the arc current extinguishes and then reignites, and this periodically disturbs the fault current. After several cycles, when the transient signals caused by the fault have largely decayed, the arc disturbance can be identified by the wavelet transform. The feasibility of this algorithm has been tested by computer simulation",2001,0, 5469,An iron core probe based inter-laminar core fault detection technique for generator stator cores,"A new technique for detecting incipient interlaminar insulation failure of laminated stator cores of large generators is proposed in this paper. The proposed scheme is a low flux induction method that employs a novel probe for core testing. The new probe configuration, which uses magnetic material and is scanned in the wedge depression area, significantly improves the sensitivity of fault detection as well as user convenience compared to existing methods. 
Experimental results from various test generators tested in factory, field and lab environments under a number of fault conditions are presented to verify the sensitivity and reliability of the proposed scheme.",2003,0, 5470,Minimum measurements at minimum set of test nodes for analog circuit fault diagnosis,"A new technique for isolating all separable hard faults in analog circuits using minimum measurements at a minimum pre-selected set of test nodes is presented. The spectrum of the circuit response to a sinusoidal input test signal is simulated at all circuit nodes. A clustering algorithm is then applied to evaluate the separability performance of all simulated measurements. Uniquely classified faults are each isolated in one set, while faults having similar responses are grouped in one ambiguity set. The proposed algorithm is then applied in two consecutive steps. In the first step, selection of an optimal minimal set of test nodes is achieved. In the second step, selection of minimum measurements at the pre-selected set of test nodes is realized. The resultant overall classification accuracy is comparable to that obtained using all measurements at all circuit nodes. The proposed strategy is demonstrated with a three-stage active filter circuit example. Moreover, classification results are verified using a learning vector quantization neural network.",2002,0, 5471,Traveling Wave Based Distribution Lines Fault Location Using Hilbert-Huang Transform,"A new traveling wave based distribution line fault location method using the Hilbert-Huang transform (HHT) is presented in this paper. The intrinsic mode function (IMF) components of the fault generated traveling wave are extracted by empirical mode decomposition (EMD), and the instantaneous frequency changing time of each IMF component is obtained through the Hilbert transform. The traveling wave arrival time in the global positioning system (GPS) time base can thus be detected according to its high frequency sudden change point, which is extracted in the instantaneous frequency diagram. The HHT based fault location scheme is compared with that based on the wavelet transform. ATP simulation results show that the HHT method can more effectively extract the characteristics of the traveling wave, and the fault location error is not more than ±150 m.",2008,0, 5472,On mismatch errors in analog-VLSI error correcting decoders,"A new type of nonlinear analog transistor network has recently been proposed for turbo decoding of error correcting codes. However, the influence of various nonidealities on the performance of such analog decoders is not yet well understood. The paper addresses the performance degradation due to transistor mismatch. Some analytical results are derived that allow the accuracy of analog decoders to be compared with that of digital decoders. Moreover, these results enable transistor mismatch to be incorporated into fast high-level simulations",2001,0, 5473,Estimation of timing error and frequency offset for M-DPSK and FSK,"A new universal algorithm based on the MMSE criterion is presented to estimate timing error and frequency offset for most signals of digital phase-related modulation schemes such as M-DPSK, MSK and FSK. It can easily deal with signals with different modulation schemes and data rates by only setting a few input arguments of the versatile software modules. The algorithm consumes far fewer operating resources than the traditional methods, and the average processing capacity of the algorithm is about 40 instructions per symbol. 
It is applied to process high-data-rate signals in software radios.",2002,0, 5474,A novel method for defect location using Iddq,A novel algorithm for fault location using Iddq is presented in this paper. A significant advantage of this method is that it can effectively locate multiple defects in a circuit. An experiment used to illustrate this algorithm is discussed in detail.,2004,0, 5475,Fault Tolerant Dynamic Antenna Array in Smart Antenna System Using Evolved Virtual Reconfigurable Circuit,"A majority of applications require the cooperation of two or more independently designed, separately located, but mutually affecting subsystems. In addition to good behavior of each of the subsystems, effective coordination is very important to achieve the desired overall performance. However, such coordination is very difficult to attain, mainly due to the lack of precise system models and/or dynamic parameters. In such situations, evolvable hardware (EHW) techniques, which can achieve the sophisticated level of information processing the brain is capable of, can excel. In this paper, a new virtual reconfigurable circuit based drive circuit for array elements in a smart antenna using the techniques of evolved operators is presented. The idea of this work is to develop a system that is tolerant to array element failure (fault tolerance) by utilizing a phased array input programmer connected to a programmable VLSI chip. The approach chosen here is based on functional level evolution whose architecture contains many nonlinear functions and uses an evolutionary algorithm to evolve the best configuration. The system is tested for its effectiveness by choosing real-time phase control in a three-element array of a smart antenna with three input phases and introducing different element failures such as: an element failing as an open circuit, a sensor failing as a short circuit, noise added to an individual element, multiple element failures, etc. In each case the mean square error is computed and used as the performance index.",2008,0, 5476,FAME: a fault-pattern based memory failure analysis framework,"A memory failure analysis framework is developed: the Failure Analyzer for MEmories (FAME). FAME integrates the Memory Error Catch and Analysis (MECA) system and the Memory Defect Diagnostics (MDD) system. The fault-type based diagnostics approach used by MECA can improve the efficiency of the test and diagnostic algorithms. The fault-pattern based diagnostics approach used by MDD further improves the defect identification capability. FAME also comes with a powerful viewer for inspecting the failure patterns and fault patterns. It provides an easy way to narrow down the potential cause of failures and identify possible defects more accurately during the memory product development and yield ramp-up stage. An experiment has been done on an industrial case, demonstrating very accurate results in a much shorter time as compared with the conventional way.",2003,0, 5477,ANN-based error reduction for experimentally modeled sensors,"A method for correcting the effects of multiple error sources in differential transducers is proposed. The correction is carried out by a nonlinear multidimensional inverse model of the transducer based on an artificial neural network. The model exploits independent information provided by the difference in the actual characteristics of the sensing elements, and by an easily controllable auxiliary quantity (e.g., the supply voltage of the conditioning circuit). 
Experimental results of the correction of an eddy-current displacement transducer, subject to the combined interference of structural and geometrical parameters, highlight the practical effectiveness of the proposed method",2002,0, 5478,Beam-Based Non-Linear Optics Corrections in Colliders,"A method has been developed to measure and operationally correct the non-linear effects of the final focusing magnets in colliders, which gives access to the effects of multi-pole errors by applying closed orbit bumps and analyzing the resulting tune and orbit shifts. This technique has been tested and used during 4 years of RHIC (the Relativistic Heavy Ion Collider at BNL) operations. I will discuss here the theoretical basis of the method, the experimental set-up, the correction results, the present understanding of the machine model, and the potential and limitations of the method itself as compared with other non-linear correction techniques.",2005,0, 5479,Error concealment using layer structure for JPEG2000 images,"A method of error concealment for JPEG2000 images is proposed in this paper. The proposed method uses the layer structure that is a feature of JPEG2000. The most significant layer is hidden in the lowest layer of the JPEG2000 bit stream, and this embedded layer is used for error concealment. The most significant layer is duplicated because JPEG2000 uses bit-plane coding. In this coding, when the upper layers are affected by errors, the coefficients of the lower layers become meaningless. A bit stream encoded using the proposed method has the same data structure as standard JPEG2000. Therefore, it can be decoded by a standard decoder. Our simulation results demonstrated the effectiveness of the proposed method.",2002,0, 5480,Research on location of single-phase earth fault based on pulse injection method in distribution network,"A method of fault location based on pulse signal injection is proposed in this paper. It is independent of factors such as the system operating mode, topology, neutral grounding mode and the randomness of the fault; moreover, the site of signal injection and the width and period of the pulse are flexible and adjustable. In this paper the design scheme of the software and hardware for the signal source is proposed: the high-voltage pulse generator is implemented with a C8051F310 MCU, and the signal detector is designed based on the principle of electromagnetic induction.",2009,0, 5481,Digital compensation scheme for coefficient errors of complex filter bank parallel A/D converter in low-IF receivers,A digital compensation scheme for coefficient errors of a complex filter bank in low-IF receivers is presented. The complex filter bank is employed to suppress DC offset and image signals in the low-IF receivers and relax the requirements on the conversion rate and resolution of A/D converters. The proposed compensation scheme regenerates interference due to the coefficient errors and subtracts it from the digital signal converted by an A/D converter. The proposed scheme also improves the effective resolution of A/D converters.,2002,0, 5482,A DSP-based FFT-analyzer for the fault diagnosis of rotating machine based on vibration analysis,"A DSP-based measurement system dedicated to the vibration analysis of rotating machines was designed and realized. Vibration signals are acquired on-line and processed to obtain continuous monitoring of the machine status. In case of a fault, the system is capable of isolating the fault with high reliability. 
The paper describes in detail the approach followed to build up fault and non-fault models together with the chosen hardware and software solutions. A number of tests carried out on small-size three-phase asynchronous motors highlight the excellent promptness in detecting faults, low false alarm rate, and very good diagnostic performance.",2002,0, 5483,Hybrid fault adaptive control of a wheeled mobile robot,"A fault adaptive control methodology for mobile robots is presented. The robot is modeled as a continuous system with a supervisory controller. The physical processes of the robot are modeled using bond graphs, and this forms the basis of a combined qualitative reasoning and quantitative model-based estimation scheme for online fault detection and isolation during robot operation. A hierarchical-control accommodation framework is developed for the supervisory controller that determines a suitable control strategy to accommodate the isolated fault. It is shown that for small degradations in actuation effort, a robust controller achieves fault accommodation without significant loss of performance. However, for larger faults, the supervisor needs to switch among several controllers to maintain acceptable performance. The switching stability among a set of trajectory tracking controllers is presented. Simulation results verify the proposed fault adaptive control technique for a mobile robot.",2003,0, 5484,Research on fault diagnosis expert system based on VXI bus for charger I circuit board,"A fault diagnosis expert system based on VXI is designed to perform automatic fault detection for a certain type of high-tech information electronic equipment and to improve the efficiency and accuracy of diagnosis. This paper mainly introduces the research on the algorithm and realization of the fault diagnosis expert system for the charger I circuit board of such equipment, and example verification on the hardware platform is described as well. With this method, it is quicker and more convenient to locate faults on the circuit boards of this equipment. It is proved that this expert system can solve the problems of high cost and long intervals of maintenance and keep the equipment in a stable status.",2009,0, 5485,Fault Diagnosis on Hermetic Compressors Based on Sound Measurements,"A fault identification study is made to identify five common faults in hermetic compressors manufactured in a large plant. Sound power level is used as raw data. Sound measurements were made in a room where microphones were located at different places on a virtual hemisphere, designed according to international standards. The obtained data are analyzed using the artificial neural networks method, where the multilayer perceptron model is used. Two different analysis approaches are carried out. In the first approach, only the summary data that emanated from the information coming from all microphones are used. In the second approach, all data coming from all microphones are used. The results indicate that the first approach is partially successful and the second is successful.",2007,0, 5486,Fast Emulation of Permanent Faults in VLSI Systems,"A confident use of deep-submicron VLSI systems requires the study of their behaviour in the presence of faults, which has traditionally been conducted via model-based fault injection techniques. Although field-programmable gate arrays (FPGAs) allow for a fast execution of models, their use to emulate the occurrence of permanent faults in VLSI models has been restricted so far to the well-known stuck-at fault model. 
Recent studies in fault representativeness point out the need to consider a wider set of fault models covering aspects like delays or short circuits. This paper presents new and different alternatives for the emulation of permanent faults. Several experiments have been performed using an automated tool that allows for the injection of all the studied fault models. Results from these experiments show both the feasibility of the proposed approach and the time saving achieved by executing the models on FPGAs",2006,0, 5487,Optimized fault location,"A continuous and reliable electrical energy supply is the objective of any power system operation. However, faults inevitably occur in power systems due to bad weather conditions, equipment damage, equipment failure, environmental changes, human or animal interference and many other reasons. Since it is very important that correct information about the fault location and its nature is provided as fast as possible, an automated system is proposed to track the status of equipment and to calculate the fault location. Calculated results are available to users through a detailed graphical representation. This paper presents the elements of the proposed solution and describes its benefits.",2007,0, 5488,Innovative airborne inventory and inspection technology for electric power line condition assessments and defect reporting,"A cost-effective and innovative airborne inventory and inspection patrol system for distributed assets such as transmission lines, pipelines, and roadways has been developed and evaluated. Results show that aerial high-resolution digital visual and spectral images tagged by Global Positioning Satellite (GPS) coordinates can be successfully used to cost-effectively identify the majority of conditions/defects on electric power lines. Experiments show that the condition and defect detection rate of the airborne inventory and inspection system is significantly higher than rates derived from traditional patrols and comparable to values achieved from driving patrols. Geographic information systems (GIS) based mapping tools can be used to quickly and efficiently interpret digital images collected from aerial platforms. Digital images provide an archival record of the condition of the distributed assets to estimate the long-term performance of the assets and to define cost-effective maintenance and replacement schedules",2000,0, 5489,Diagnosis of Multiple Scan Chain Timing Faults,"A diagnosis technique is presented to locate multiple timing faults in scan chains. Jump simulation is a novel parallel simulation technique which quickly searches for the upper and the lower bounds of every individual fault. The proposed technique takes into account the interaction of multiple faults, so the diagnosis results are deterministic, not probabilistic. This technique is very useful in the production test environment because it requires only regular automated test pattern generator patterns, not specialized diagnosis patterns. Experiments on ISCAS'89 benchmark circuits show that this technique can successfully pinpoint almost every single one of 16 hold-time faults in a scan chain of more than 800 scan cells. The proposed technique is still effective when failure data are limited or faults are clustered.",2008,0, 5490,Reflex HMD to compensate lag and correction of derivative deformation,"A head-mounted display (HMD) system suffers largely from the time lag between human motion and the display output. 
The concept of a reflex HMD to compensate for the time lag is proposed and discussed. Based on this notion, a prototype reflex HMD is constructed. The rotational movement of the user's head is measured by a gyroscope, modulating the driving signal for the LCD panel, and this shifts the viewport within the image supplied from the computer. The derivative distortion was investigated, and the dynamic deformation of the watched world was identified as the essential drawback. Cylindrical rendering is introduced to solve this problem and is proved to cancel this dynamic deformation, and also to decrease the static distortion",2002,0, 5491,Joint error resilient and rate control for H.264,"A joint error resilience and rate control method for H.264 over Internet channels is presented in this paper. An accurate rate model and a fast encoding mode selection for packet loss channels are used in this method. By rate-distortion based frame-layer bit allocation and slice-layer quantization parameter (QP) calculation, good error resilience and rate control results are demonstrated.",2005,0, 5492,Structural Error Verification in Active Rule-Based Systems using Petri Nets,"A knowledge base needs to be verified so that it works correctly. To date, approaches to production rule base verification have been adequately reported; however, work on active rule base verification is lacking. In this paper, we first define structural errors in active rule bases. Then, a verification approach is proposed based on Conditional Colored Petri Nets.",2006,0, 5493,A Control Circuit With Load-Current Injection for Single-Phase Power-Factor-Correction Rectifiers,"A load-current-injection control technique for boost-derived power-factor-correction (PFC) rectifiers with average current-mode control is proposed in this paper. By adding a load-current loop to the conventional inductor current loop, the output voltage response to load steps is speeded up, almost eliminating the voltage overshoots typical of this kind of converter. Although techniques based on load-current injection are traditionally called ""load feedforward,"" this paper shows that an additional feedback loop, which modifies the linear small-signal model of the converter, is also introduced. In order to validate the concept, a converter prototype working from a universal input line has been designed and tested, showing that a very fast dynamic response of PFC rectifiers may be achieved in a cost-effective way",2007,0, 5494,Low-Complexity Mobile Video Error Concealment Using OBMA,A low-complexity error concealment technique called the outer boundary matching algorithm (OBMA) for mobile video applications is studied extensively in this work. The OBMA technique is shown to provide an excellent tradeoff between the complexity and the quality of concealed video for a wide range of test video sequences.,2008,0, 5495,Identification of transient and permanent faults,"A new algorithm was developed for arcing fault detection based on high-frequency current transients analyzed with wavelet transforms to avoid automatic reclosing on permanent faults. The characteristics of arc currents during transient faults were investigated. The current curves of transient and permanent faults are quite similar since the current variation from the fault arc is much less than the voltage variation. However, the fault current details are quite different because of the arc extinguishing and reigniting. 
Dyadic wavelet transforms were used to identify the current variation since the wavelet transform has time-frequency localization ability. Many electromagnetic transients program (EMTP) simulations have verified the feasibility of the algorithm.",2003,0, 5496,VirtCFT: A Transparent VM-Level Fault-Tolerant System for Virtual Clusters,"A virtual cluster consists of a multitude of virtual machines and software components that are doomed to fail eventually. In many environments, such failures can result in unanticipated, potentially devastating failure behavior and in service unavailability. The ability of failover is essential to the virtual cluster's availability, reliability, and manageability. Most of the existing methods have several common disadvantages: requiring modifications to the target processes or their OSes, which is usually error prone and sometimes impractical; only targeting at taking checkpoints of processes, not entire OS images, which limits the areas to be applied. In this paper we present VirtCFT, an innovative and practical system of fault tolerance for virtual clusters. VirtCFT is a system-level, coordinated distributed checkpointing fault tolerant system. It coordinates the distributed VMs to periodically reach the globally consistent state and take the checkpoint of the whole virtual cluster including states of CPU, memory, disk of each VM as well as the network communications. When faults occur, VirtCFT will automatically recover the entire virtual cluster to the correct state within a few seconds and keep it running. Superior to all the existing fault tolerance mechanisms, VirtCFT provides a simpler and totally transparent fault tolerant platform that allows existing, unmodified software and operating systems (version unawareness) to be protected from the failure of the physical machine on which they run. We have implemented this system based on the Xen virtualization platform. Our experiments with real-world benchmarks demonstrate the effectiveness and correctness of VirtCFT.",2010,0, 5497,Detecting processor hardware faults by means of automatically generated virtual duplex systems,"A virtual duplex system (VDS) can be used to increase safety without the use of structural redundancy on a single machine. If a deterministic program P is calculating a given function f, then a VDS contains two variants Pa and Pb of P which are calculating the diverse functions fa and fb in sequence. If no error occurs in the process of designing and executing Pa and Pb, then f = fa = fb holds. A fault in the underlying processor hardware is likely to be detected by the deviation of the results, i.e. fa(i) ≠ fb(i) for input i. Normally, VDSs are generated by manually applying different diversity techniques. This paper, in contrast, presents a new method and a tool for the automated generation of VDSs with a high detection probability for hardware faults. Moreover, for the first time the diversity techniques are selected by an optimization algorithm rather than chosen intuitively. The generated VDSs are investigated extensively by means of software-implemented processor fault injection.",2002,0, 5498,A Real-Time Vision System for Defect Detection in Printed Matter and Its Key Technologies,"A high-speed, on-line vision inspection system is proposed to detect defects in printed matter, such as smudges, doctor streaks, pin holes, character misprints, foreign matter, hazing, and wrinkles. 
Any tiny defect can be revealed by using a design with high and low illumination angles together with anti-blooming techniques. A new image reference method based on morphological pre-processing eliminates all false defects caused by slight distortion of the printed matter and chromatography mistakes. A fast object searching algorithm based on run-length encoding can locate the coordinates of defects and define their shape. A client/server parallel network structure was used, in which image data were processed in a distributed manner and quality data were managed centrally. Experimental results verify the speed, reliability and accuracy of the proposed system.",2007,0, 5499,Agent-based wide area protection with high fault tolerance,"A wide area protection agent with a high degree of fault tolerance is designed in this paper by using fault direction information and distance action information. The working principle of the proposed agent is to first locate the fault by comparing different fault direction information, and then conduct fault-tolerance processing by utilizing the fault confirmation ring and distance action information. After giving an introduction to the fault location algorithm and the fault tolerance identification algorithm, the paper designs the structure of the proposed protection agent and presents the dynamic cooperation mechanism that connects protection agents by use of a wide area protection algorithm. Finally, the cooperation process and the performance of the proposed protection agents are analyzed through simulation tests. Test results show that the proposed wide area protection agent is equipped with good fault tolerance capability.",2010,0, 5500,Fault-tolerant and energy-efficient permutation routing protocol for wireless networks,"A wireless network (WN) is a distributed system where each node is a small hand-held commodity device called a station. Wireless sensor networks have received increasing interest in recent years due to their usage in monitoring and data collection in a wide variety of environments like remote geographic locations, industrial plants, toxic locations or even office buildings. Two of the most important issues related to a WN are their energy constraints and their potential for developing faults. A station is usually powered by a battery which cannot be recharged while on a mission. Hence, any protocol run by a WN should be energy-efficient. Moreover, it is possible that all stations deployed as part of a WN may not work perfectly. Hence, any protocol designed for a WN should work well even when some of the stations are faulty. We design a protocol which is both energy-efficient and fault-tolerant for permutation routing in a WN.",2003,0, 5501,Prediction of service life of pre-stressed concrete bridge by Fault Tree Analysis model,"Fault Tree Analysis uses logical models and mapping methods to analyse, calculate or estimate the occurrence probability of system failure events. Thus, the reliability, safety and risk of the system can be assessed. It has been widely used in system reliability studies and to quantify the potential risks of a system. In existing reliability analyses, the time factor generally does not need to be taken into account. However, the degradation of material properties leads to a reduction in the level of structural reliability; a structure's reliability is therefore actually a time-varying quantity. 
Considering the above factors comprehensively, the various factors that impact the service life of a pre-stressed concrete girder bridge are analysed to build a structural analysis model of the bridge. From the calculation of the time-varying reliability of the structural system, the service life of the bridge can be evaluated to assess the safety and reliability of the structure.",2010,0, 5502,"A Generic Model, and its Validation, for the Translational Systematic Errors in Synchronous Drive Robots","Synchronous Drive Robots (SDR) are seeing increasing use as service robots in dynamic environments. Due to the changing scenery in dynamic environments, the accuracy of proprioceptive sensors such as odometry is of greater importance. This paper proposes a generic kinematic model for the translational systematic odometry error in an n-wheeled SDR (n ≥ 3). An unexpected behaviour of SDR is the curved path when commanded to translate, which varies with wheel orientation (which changes when commanded to rotate). This is caused by the traction force of each wheel around the centre of mass of the robot acting as a moment. There is a further odometry error due to wheel misalignment, which does not affect the path curvature, but creates a yaw. Compared to existing works, the proposed model is explicitly validated in the instance of a 3-wheeled SDR.",2010,0, 5503,Power system fault data compression based on wavelet packet transform and vector quantization,"According to the characteristics of fault transient signals in a power system, some principles of fault signal compression are proposed: some wavebands are kept under lossless compression while other wavebands undergo lossy compression. An integrated signal compression algorithm based on the wavelet packet transform, Huffman coding and vector quantization is proposed. Programming software is also given. The simulation result shows this algorithm can precisely meet the requirements of signal compression in a power system and has great application potential.",2002,0, 5504,Rate control for low delay H.264/AVC transmission over channels with burst error,"A rate control approach is proposed to deal with low delay H.264/AVC video transmission over channels with burst errors by applying a stochastic optimization technique. Based on the exponential rate-distortion model and the linear variance prediction model, the one-pass rate control algorithm takes into account the channel state and round trip delay, and makes an immediate decision on the optimal rate allocation for the video frame. Simulation results show that for different end-to-end delay constraints and round trip delays, the number of lost frames is significantly reduced, and the average reconstruction peak signal-to-noise ratio is improved by 0.5-1.6 dB, compared with the reference rate control scheme [ARA, 01]",2006,0, 5505,Real-Time Error-Feedback Output Regulation of Nonhyperbolically Nonminimum Phase System,A real-time implementation of an error feedback output regulation problem for the gyroscopic platform is presented here. It is based on a numerical method for the solution of the so-called regulator equation. The regulator equation consists of partial differential equations combined with algebraic ones and arises when solving the output-regulation problem. The error-feedback output regulation problem aims to find a dynamic feedback compensator using only tracking error measurements to ensure tracking of a given reference and/or rejection of an unknown disturbance. 
Solving the regulator equation becomes difficult especially for non-minimum phase systems, where eliminating variables using the algebraic part may lead to an unsolvable differential part. The proposed numerical method is based on the successive approximation of the differential part of the regulator equation by the finite-element method while trying to minimize a functional expressing the error of its algebraic part. This solution is then used to design a real-time controller, which is successfully tested experimentally.,2007,0, 5506,Effectiveness of a new inductive fault current limiter model in MV networks,"A realistic model for a novel saturable core superconducting FCL (SCFCL) prototype is presented, and incorporated into the time-domain power simulation software PSCAD/EMTDC. The present work incorporates non-linear material property data of the magnetic core with inductance to produce a limiting effect on the line current in real time. The novelty of this core design is the inclusion of a superconducting material as the magnetisation DC coil to saturate the core, instead of using the superconductor directly within the magnetic circuit. The FCL model's accuracy was validated against experimental test results, and its performance analysed by its placement in a UK generic network at MV level. Implementation simulations showed the device could achieve a 50% current clipping capacity when placed in an MV network. Other standard FCL tests were performed on this model and their results are presented.",2010,0, 5507,Parallel computation of configuration space on reconfigurable mesh with faults,"A reconfigurable mesh (RMESH) can be used to compute robotic paths in the presence of obstacles, where the robot and obstacle images are represented and processed in mesh processors. For a non-point-like robot, we can compute the so-called configuration space to expand the obstacles, so that the robot can be reduced to a reference point to facilitate the robot's motion planning. In this paper, we present algorithms to compute the configuration space in a reconfigurable mesh that contains sparsely distributed faulty processors. Robots of rectangular and circular shapes are treated. It is seen that, in terms of computing the configuration space, a reconfigurable mesh can tolerate faulty processors without much extra cost: the computation takes the optimal O(1) time in both fault-free and faulty reconfigurable meshes",2000,0, 5508,Diagnosis of resistive-open and stuck-open defects in digital CMOS ICs,"A resistive-open defect is an imperfect circuit connection that can be modeled as a defect resistor between two circuit nodes that should be connected. A stuck-open (SOP) defect is a complete break (no current flow) between two circuit nodes that should be connected. Conventional single stuck-at fault diagnosis cannot precisely diagnose these two defects because the test results of defective chips depend on the sequence of test patterns. This paper presents precise diagnosis techniques for these two defects. The diagnosis techniques take the test-pattern sequence into account and therefore produce precise diagnosis results. Also, our diagnosis technique handles multiple faults of different fault models. The diagnosis techniques are validated by experimental results. 
Twelve SOP chips and one resistive-open chip are diagnosed out of a total of 459 defective chips.",2005,0, 5509,Development of a motion correction system for respiratory-gated PET study,"Respiratory motion during whole-body imaging has been recognized as a source of image quality degradation and reduces the quantitative accuracy of positron emission tomography (PET) studies. The aim of this study is to evaluate a respiratory gating system and to develop a respiratory motion correction system using a trigger generating device built in-house and a gated-PET data acquisition mode. We utilized a commercially available laser optical sensor to detect respiratory motion during PET scanning. Each respiratory cycle is divided into 4 bins defined from the average peak interval and irregular peaks within the breathing motion. The data acquired within the time bins correspond to different positions within the breathing cycle and are stored for post-acquisition motion correction. Motion data of the diaphragm and chest wall were calculated from CT image acquisitions during the normal inspiration and expiration positions. In the images of a phantom, the blurring artifact due to breathing motion was reduced by our correction method. This technique improves the quantitative specific activity of the tracer, which is distorted because of the respiratory motion.",2004,0, 5510,Error performance of a duplex retrodirective array system,"A retrodirective antenna array has the property that it is capable of independently phasing the array elements so that it provides an improved link gain in the direction of the incoming signal without prior knowledge of the position of the source radiator. Many schemes have been proposed to achieve the retrodirectivity, including the use of digital signal processing techniques. However, the diverse digital RF communication schemes pose a considerable challenge when it comes to quantitatively measuring the quality of a retrodirective system. Therefore, it becomes necessary to perform the analysis in the time, frequency and modulation domains in order to obtain an insight into the RF system performance. Error vector magnitude (EVM) measurements, by virtue of their ability to process the signals in full vector form (magnitude and phase), provide us with a quantitative figure-of-merit for a digitally modulated signal. This paper presents the simulation model of a retrodirective communication system and its error performance in terms of EVM. The simulation and EVM measurements were performed for the first time on a DSP based half duplex retrodirective system.",2007,0, 5511,A new textual/non-textual classifier for document skew correction,A robust approach is proposed for document skew detection. We use Fourier analysis and SVM to classify textual areas from non-textual areas of documents. We also propose a robust method to determine the skew angle from textual areas. Our approach achieves good performance on documents with large areas of non-textual content.,2002,0, 5512,WiMAX bandpass filter using hybrid microstrip Defected-Ground-Structure,"A novel and compact WiMAX microstrip bandpass filter is proposed. The filter employs a very wide bandwidth from 2 to 11 GHz under the NLOS environment of the IEEE 802.16-2004 standard, with a 3-dB fractional bandwidth of greater than 138%. 
The proposed hybrid WiMAX band-pass filter had a return loss of more than 17 dB in the passband, and also demonstrated a rejection band from 12.4 GHz to more than 20 GHz at -20 dB.",2010,0, 5513,Artificial neural network approach to fault classification for double circuit transmission lines,"A novel application of the neural network approach to the protection of double circuit transmission lines is demonstrated in this paper. Different system faults on a protected transmission line should be detected and classified rapidly and correctly. The proposed method uses current signals to learn the hidden relationship in the input patterns. Using the proposed approach, fault detection, classification and faulted phase selection could be achieved within a quarter of a cycle. An improved performance is experienced once the neural network is trained sufficiently and suitably, thus performing correctly when faced with different system parameters and conditions. Results of performance studies show that the proposed neural network-based module can improve the performance of conventional fault selection algorithms.",2004,0, 5514,Earth fault distance computation with artificial neural network trained by neutral voltage transients,"A novel application of the neural network approach for transient based earth fault location in 20 kV radial power distribution networks is presented. The items discussed are earth fault transients, signal pre-processing, ANN training and the performance of the proposed distance estimation method. The distribution networks considered are either unearthed or resonant earthed. Neural networks trained by the harmonic content of neutral voltage transients were found to be applicable to fault distance computation in the case of very low fault resistance. The mean error in fault location was about 1 km in the field tests using staged faults, which were recorded in real power systems.",2001,0, 5515,EBIST: a novel test generator with built-in fault detection capability,"A novel design methodology for test pattern generation in built-in self-test (BIST) is proposed. Experimental results are presented to demonstrate how a fault in the test pattern generator (TPG) itself can have serious consequences, a problem that has not been investigated. A solution is presented here, where the faults and errors in the generator itself are detected during the test in the TPG itself. This provides several major advantages, including the ability to distinguish between TPG and circuit under test (CUT) faults. In addition, this will ensure that there is no loss of fault coverage for the CUT caused by a fault in the TPG. Two different design methodologies are presented: the first guarantees detection of all single faults/errors, while the second is capable of detecting multiple faults and errors. The proposed linear feedback shift registers (LFSRs) do not have additional hardware overhead. Importantly, the test patterns generated have the potential to achieve superior fault coverage for both stuck-at and transition faults.",2005,0, 5516,A hardware immune system for benchmark state machine error detection,A novel error detection mechanism is demonstrated for integration into a hardware fault tolerant system. Inspiration is taken from principles of immunology to create a hardware immune system that runs in real-time hardware and continuously monitors a finite state machine architecture for errors. 
The work is demonstrated through immunisation of the ISCAS'89 benchmark state machine data set,2002,0, 5517,Image integrity and correction using parities of error control coding,"A novel function for image watermarking is proposed. We show how a watermarking sequence can be used for the correction of illegal modifications. To make this task possible, the parities generated by the conventional error control coding (ECC) technique are used for the watermarking sequence; then the receiver can correct any alterations by applying ECC decoding to the received image data. To increase the correction capability, we also adopt a scrambling scheme prior to the encoding to change burst-type content modifications (i.e. errors) into random noise. The scrambling key can also be used as a key for the authentication of the sender",2000,0, 5518,Study on flue gas turbine fault diagnosis technology based on EMD and VPRS,"A novel intelligent fault diagnosis model for flue gas turbines based on EMD (empirical mode decomposition) and VPRS (variable precision rough set) theories is proposed in order to solve the difficult problems of knowledge information acquisition and improve fault diagnosis accuracy in practice. This model combines EMD and VPRS techniques. First, the EMD signal processing technique is employed to excavate the underlying fault information from dynamic signals. The features that reflect the equipment operation conditions are extracted from the EMD analysis of the original dynamic vibration signals, and a series of IMF (intrinsic mode function) feature sets are obtained. Then the energy features of the calculated IMFs are used as the condition attributes of the knowledge acquisition decision table, while the fault modes are used as the decision attributes. The decision table is processed through attribute reduction, attribute value reduction and rule reduction based on VPRS theory. The system fault diagnosis rules are extracted on the condition that the model's classification ability remains and the redundant information is removed. The model is applied to flue gas turbine diagnosis knowledge acquisition and fault diagnosis in Yanshan. The desired diagnosis effect is obtained via the fault diagnosis model based on EMD and VPRS. Moreover, the application result also validates the power and practicality of the model.",2009,0, 5519,Compact multilayer coupled stripline LTCC filter with defected ground structure,"A novel multilayer coupled stripline resonator structure is introduced to realize a miniature broadband band-pass filter using the low temperature co-fired ceramic (LTCC) process with a defected ground structure (DGS). Wide bandwidth and good selectivity are obtained by exploiting four resonators, and the filter exhibits a high rejection in the stopband by adopting the tapered DGS. Moreover, an inductive feedback between the output and input is introduced to produce transmission zeros. A filter with a size of λ0/12 × λ0/12 × h (λ0 is the wavelength at the midband frequency; h is the substrate height) is designed, fabricated and measured. The measured responses agree well with simulation results.",2009,0, 5520,Chiller Unit Fault Detection and Diagnosis Based on Fuzzy Inference System,"A series of adverse effects will occur after a chiller unit develops faults. Chiller units are strongly non-linear systems with long time delays, so fault detection and diagnosis (FDD) is limited if regular fault judgment methods are adopted. 
A fuzzy inference system (FIS) has an outstanding control effect because its reasoning is close to human thinking. Moreover, the result of an FIS is easy for operators to understand. Firstly, a fuzzy inference system for the chiller unit was modelled in the paper, and then the FIS was trained and checked with operation data obtained from a real building chiller unit under normal and unhealthy conditions respectively. The results indicated that chiller unit FDD based on FIS is feasible and prompt. Furthermore, this method is simple and easy to automate.",2006,0, 5521,Reduction of mutual coupling between closely-packed antenna elements using defected ground structure,A simple ground plane structure that can reduce mutual coupling between closely-packed antenna elements is proposed. The structure consists of a dumb-bell-like pattern etched in a single ground plane. It is found that isolation of better than -40 dB can be achieved between two parallel individual planar inverted F antennas sharing a common ground plane. The influence of the designed defected ground plane structure on the radiation pattern is also investigated.,2009,0, 5522,Implementation of Fault Detection and Diagnosis System for Control Systems in Thermal Power Plants,"A software-based system was developed for fault detection and diagnosis (FDD) of control systems in typical thermal power plants. For large-scale industrial processes such as thermal power plants, a three-level configuration to build a real-time FDD system is proposed. This configuration includes the component level, loop level and system level. Developed algorithms for each level and systematic approaches and steps to implement the overall diagnostic system are discussed. The developed system has been applied to a real-world thermal power plant to monitor its main control systems",2006,0, 5523,Enhanced frequency synchronization for OFDM systems using timing error feedback compensation,"A strategy for improving the frequency synchronization performance of the conventional maximum-likelihood (ML) algorithm for OFDM systems is proposed. In the conventional ML algorithm, the accuracy of the frequency offset estimation directly depends on the block timing estimation. In the proposed algorithm, the block timing error caused by the ML algorithm is estimated at the post-FFT stage of the OFDM receiver using pilot symbols. An improved frequency offset estimation is obtained from the ML algorithm with timing error feedback compensation. Simulation results show that the proposed algorithm can: (i) eliminate the error floor caused by the ML algorithm and (ii) achieve normalized frequency synchronization errors as low as 1 ppm (< 10^-6) for both AWGN and multipath channels with high SNR.",2004,0, 5524,Four-leg based Matrix Converter with Fault Resilient Structures and Controls for Electric Vehicle and Propulsion Systems,"A study of a four-leg based fault-tolerant matrix converter is presented, covering remedial topological structures and control techniques against both open-faults and short-faults occurring in AC-AC matrix converter drive based electric vehicles and propulsion systems. Topologies of matrix converter drives with an additional backup leg have been proposed to allow the matrix converter based drives to tolerate both open and short phase failures. 
Switching function algorithms with closed form expressions, based on switching matrices, have been developed to provide the matrix converter drives with continuous and disturbance-free operation after opened phase faults and shorted phase failures. The developed switching function matrix and modified topological configuration allow the synthesis of redefined output waveforms under open-switch, open-phase, and shorted load motor winding faults. In addition, the proposed matrix converter topology can produce three-phase balanced sinusoidal output currents even after short-switch failures. Simulation and experimental results show the feasibility of the proposed topologies and the developed switching function techniques in the case of both open and short faults.",2007,0, 5525,What do we know about defect detection methods? [software testing],"A survey of defect detection studies comparing inspection and testing techniques yields practical recommendations: use inspections for requirements and design defects, and use testing for code. Evidence-based software engineering can help software practitioners decide which methods to use and for what purpose. EBSE involves defining relevant questions, surveying and appraising available empirical evidence, and integrating and evaluating new practices in the target environment. This article helps define questions regarding defect detection techniques and presents a survey of empirical studies on testing and inspection techniques. We then interpret the findings in terms of practical use. The term defect always relates to one or more underlying faults in an artifact such as code. In the context of this article, defects map to single faults",2006,0, 5526,Formal Analysis of a Distributed Fault Tolerant Clock Synchronization Algorithm for Automotive Communication Systems,"A synchronized time base is indispensable for a time-triggered system since all activities in such a system are triggered by the passage of time. Distributed fault-tolerant clock synchronization algorithms are normally used to achieve the synchronized time base. As a state-of-the-art representative of the time-triggered systems for automotive applications, FlexRay uses a fault-tolerant mid-point algorithm to achieve the synchronized time base. Correctness of the algorithm plays a crucial role as most of the protocol services rely on the fact that there exists a synchronized time base in the system. Due to the distinguished characteristics of the algorithm, we propose a case-analysis based technique for the formal analysis of the algorithm. We show that the case analysis technique can greatly facilitate our formal analysis of the algorithm. Mechanical support with Isabelle/HOL, a theorem prover, is also discussed.",2008,0, 5527,Introducing residual errors in accuracy assessment for remotely sensed change detection,"Accuracy assessment for map comparison is commonly found in urban planning research, especially for detecting errors in remotely sensed imagery data. Its purpose is to compare two sources of spatial information. In analyzing such information quantitatively, the two datasets are summarized in a confusion matrix, which is represented in the form of percentages of predicted values against the actual data (ground truth). The commonly acceptable percentage is eighty percent and above. In this paper, we present a new way of accuracy assessment by introducing an additional value called the residual error (or predicted error). 
The residual error is the percentage of error that exists when two major error sources, called mis-classification and mis-location, are integrated. This residual error is incorporated into the assessment so that the results are more accurate and comprehensive. As a case study, we calculate the residual errors of five independent image classifications from six different datasets. The accuracy assessment is therefore performed in more detail, including not only the confusion matrix but also the residual errors. In this way, the results of the change detection process can help in further analysis of urban growth and land development, particularly for town areas.",2009,0, 5528,Atmospheric Corrections of Low Altitude Thermal Infrared Airborne Images Acquired over a Tropical Cropped Area,"Accurate corrections of atmospheric effects on thermal infrared remote sensing data are an essential pre-requisite for the development of thermal infrared airborne-derived crop water stress indices. These corrections can be performed using ground surface temperature measurements, which are time consuming and expensive. Atmospheric effects can also be corrected using radiative transfer models that require knowledge of the atmospheric status. The latter can be accurately characterized from radiosoundings, but these are usually unavailable. It can also be derived from meteorological model simulations, but the spatial and temporal resolution are often too coarse. This study proposes performing atmospheric corrections by using temperature and relative humidity profiles acquired in flight from on-board sensors during data collection. Such measurements are used to document the atmospheric radiative transfer model MATISSE. First results from an experiment over a tropical cropped area show that corrections are made with an accuracy of 1.46 K.",2008,0, 5529,Study of the quantification of FBP SPECT images with a correction for partial volume effects,"Accurate quantification of emission computed tomography data (PET-SPECT) is limited by partial volume effects. This paper compares two approaches, described in the literature, to measure accurately the true tissue tracer activity within a tissue compartment, defined by anatomical side information. The first approach is based on the selection of an appropriate number of regions. The second method is based on a minimum norm least square solution, taking into account the whole image. These methods assume a constant activity within one tissue compartment and have similar performance when this assumption is correct. We demonstrate the equivalence of both methodologies in the case of an appropriate region selection for the first technique. We also propose two new methods allowing activity fluctuations within the same anatomical tissue compartment. The first technique uses the minimum norm least square solution to estimate a higher number of activities within one tissue compartment while taking into account the whole reconstructed emission computed tomography image. The second method estimates the activity of a tissue compartment locally by linear regression analysis within a sliding window. 
A simple simulation study shows that these techniques yield a more accurate quantification in the case of a nonhomogeneous activity distribution within one tissue compartment",2002,0, 5530,Statistical reconstruction-based scatter correction: a new method for 3D PET,"Accurate scatter correction is one of the major problems facing quantitative 3D PET, and many methods have been developed to reduce the resultant degradation of image contrast and loss of quantitative accuracy. A new scatter correction method called Statistical Reconstruction-Based Scatter Correction (SRBSC) is proposed in this paper and evaluated using Monte Carlo simulations, experimental phantoms and clinical studies. For accurate modeling, the scatter fraction and scatter response function for uniformly attenuating media are parametrised using Monte Carlo simulations",2000,0, 5531,Improving the Accuracy of Software Effort Estimation Based on Multiple Least Square Regression Models by Estimation Error-Based Data Partitioning,"Accurate software effort estimation is one of the key factors in a successful project, as it enables better software project planning. To improve the accuracy of software effort estimation, many studies have aimed at proposing novel effort estimation methods or combining several existing ones. However, those studies did not consider the distribution of historical software project data, an important factor affecting effort estimation accuracy. In this paper, to improve the accuracy of effort estimation by least squares regression, we propose a data partitioning method based on the accuracy measures MRE and MER, which are usually used to measure effort estimation accuracy. Furthermore, empirical experiments are performed using two industry data sets (ISBSG Release 9 and the Bank data set, which consists of project data from a bank in Korea).",2009,0, 5532,Reducing cost and tolerating defects in page-based intelligent memory,"Active Pages is a page-based model of intelligent memory specifically designed to support virtualized hardware resources. Previous work has shown substantial performance benefits from offloading data-intensive tasks to a memory system that implements Active Pages. With a simple VLIW processor embedded near each page on DRAM, Active Page memory systems achieve up to 1000X speedups over conventional memory systems. In this study, we examine Active Page memories that share, or multiplex, embedded VLIW processors across multiple physical Active Pages. We explore the trade-off between individual page-processor performance and page-level multiplexing. We find that hardware costs of computational logic can be reduced from 31% of DRAM chip area to 12%, through multiplexing, without significant loss in performance. Furthermore, manufacturing defects that disable up to 50% of the page processors can be tolerated through efficient resource allocation and associative multiplexing",2000,0, 5533,Handling Crash and Software Faults Efficiently in Distributed Event Stream Processing,"Active replication is a common approach to handling failures in distributed systems, including Event Stream Processing (ESP) systems. However, one weakness of conventional active replication is that replicas, being equal and in the same state, are susceptible to common-mode crashes due to software bugs.
We propose a new approach to active replication that assumes a failure model stronger than fail-stop but weaker than models permitting arbitrary failures. We combine transactional memory and extended runtime checking to achieve: (i) low processing latency in failure-free runs, by allowing downstream nodes to use speculative results and thus circumvent the overhead added by the extended runtime checks; (ii) a reduced MTTR, by enabling localized rollbacks (with word granularity) in several cases. We show that major limitations of n-variant active replication (e.g., multi-threading support, complex and slow recovery) can be overcome and that tolerance to software bugs is orthogonal to Byzantine fault tolerance.",2010,0, 5534,A Case Study: A Model-Based Approach to Retrofit a Network Fault Management System with Self-Healing Functionality,"Adding self-healing capabilities to network management systems holds great promise for delivering important goals, such as QoS, while simultaneously lowering capital expenditure, operation cost, and maintenance cost. In this paper, we present a model-based approach to add self-healing capabilities to a fault management system for cellular networks. We propose a generic modeling framework to categorize software failures and specify their dispositions at the model level for the target system. This facilitates the deployment of a control loop for adding autonomic capabilities into the system architecture, which include self-monitoring, self-healing, and self-adjusting functionality. While self-monitoring oversees the environmental conditions and system behavior, self-healing is accomplished by instrumenting the system with self-adjusting operations. We include a case study on a prototype intelligent network fault management system to illustrate this approach by showing how these autonomic capabilities can be added and deployed. Specifically, these autonomic capabilities are derived from self-model specifications, and are used to mitigate the risk of specified failures and maintain the health of the system in response to different types of faults encountered.",2008,0, 5535,Efficient Fault-Tolerant Addition by Operand Width Consideration,"Addition is a central operation in microcontrollers, and hence faults should be detected for safety reasons. We extend the principle of recomputing with shifted operands (RESO) by performing the re-computation concurrently with the computation in the case of small operands. Thus, we obtain a solution cheaper than two adders and faster than simple repetition. To extend RESO, we consider the actual bit-widths of the operands. We validate our method with data from static code analysis of two application kernels.",2009,0, 5536,Automatic fault detection and execution monitoring for AUV missions,"Advanced AUVs, particularly those capable of long-duration missions, need increasing amounts of autonomy in order to carry out sophisticated missions without requiring constant support from a ship. One important aspect of this autonomy is fault detection and execution monitoring. In this paper we describe the application of the Livingstone 2 diagnosis system on Autosub 6000 and examine in particular the issue of ensuring the mission is being executed correctly. Most AUV operations require a fast turnaround between retrieving the vehicle and redeploying it, typically constrained only by the time required to charge batteries. In many circumstances missions are planned very quickly, so an automatic way to monitor mission execution is required.
We describe an approach to automatically generate diagnosis models that correspond to the mission instructions, which can then be combined with pre-built models of the physical components of the AUV to improve fault detection and monitor execution. We show that by incorporating this program model we are able to diagnose faults that were previously undiagnosable, and demonstrate that an actual fault that occurred on Autosub 6000 can successfully be detected.",2010,0, 5537,A novel single-phase AC/DC converter for power factor correction,"A novel single-phase AC/DC converter with two pulsewidth modulation (PWM) schemes is proposed to draw a sinusoidal line current with nearly unity power factor, achieve a balanced neutral point voltage and regulate the dc bus voltage. With the aid of a neutral-point-clamped scheme, a three-level voltage pattern is generated on the AC side of the proposed rectifier. To track the line current command derived from a voltage controller and a phase-locked loop circuit, a hysteresis current control scheme is used in the inner control loop. A capacitor voltage compensator is employed to achieve the balanced neutral point voltage. Simulations are performed to investigate the effectiveness of the proposed control scheme.",2002,0, 5538,Periodic errors elimination in CVCF PWM DC/AC converter systems: repetitive control approach,A plug-in digital repetitive learning (RC) controller is proposed to eliminate periodic tracking errors in constant-voltage constant-frequency (CVCF) pulse-width modulated (PWM) DC/AC converter systems. The design of the RC controller is systematically developed and the stability analysis of the overall system is discussed. The periodic errors are forced toward zero asymptotically and the total harmonic distortion (THD) of the output voltage is substantially reduced under parameter uncertainties and load disturbances. Simulation and experimental results are provided to illustrate the validity of the proposed scheme,2000,0, 5539,Dual-Processor Design of Energy Efficient Fault-Tolerant System,"A popular approach to guaranteeing fault tolerance in safety-critical applications is to run the application on two processors. A checkpoint is inserted at the completion of the primary copy. If there is no fault, the secondary processor terminates its execution. Otherwise, should a fault occur, the second processor continues and completes the application before its deadline. In this paper, we study the energy efficiency of such a dual-processor system. Specifically, we first derive an optimal static voltage scaling policy for a single periodic task. We then extend it to multiple periodic tasks based on worst-case execution time (WCET) analysis. Finally, we discuss how to further reduce the system's energy consumption at run time by taking advantage of actual execution times, which are less than the WCET. Simulation on real-life benchmark applications shows that our technique can save up to 80% energy while still providing fault tolerance",2006,0, 5540,Convergency and Error Estimate of Nonlinear Fredholm Fuzzy Integral Equations of the Second Kind by Homotopy Analysis Method,"A powerful, easy-to-use analytic tool for nonlinear problems in general, namely the homotopy analysis method, is further improved and systematically described through a typical nonlinear problem. In this paper, a nonlinear Fredholm fuzzy integral equation is solved using the homotopy analysis method (HAM).
The approximate solution of this equation is calculated in the form of a series whose components are easily computed. The convergence and error estimate of the proposed method are proved.",2010,0, 5541,Proactive service migration for long-running Byzantine fault-tolerant systems,"A proactive recovery scheme based on service migration for long-running Byzantine fault-tolerant systems is described. Proactive recovery is an essential method for ensuring the long-term reliability of fault-tolerant systems that are under continuous threats from malicious adversaries. The primary benefit of our proactive recovery scheme is a reduced vulnerability window under normal operation. This is achieved in two ways. First, the time-consuming reboot step is removed from the critical path of proactive recovery. Second, the response time and the service migration latency are continuously profiled, and an optimal service migration interval is dynamically determined during runtime based on the observed system load and the user-specified availability requirement.",2009,0, 5542,Prognosis of faults in gas turbine engines,"A problem of interest to aircraft engine maintainers is the automatic detection, classification, and prediction (or prognosis) of potential critical component failures in gas turbine engines. Automatic monitoring offers the promise of substantially reducing the cost of repair and replacement of defective parts, and may even result in saving lives. Current processing for prognostic health monitoring (PHM) uses relatively simple metrics or features and rules to measure and characterize changes in sensor data. An alternative solution is to use neural nets coupled with appropriate feature extractors. We have developed techniques that couple neural nets with automated rule extractors to form systems that have: good statistical performance; easy system explanation and validation; potential new data insights, new rule discovery, and novelty detection; and real-time performance. We apply these techniques to data sets collected from operating engines. Prognostic examples using the integrated system are shown and compared with current PHM system performance. Rules for performing the prognostics will be developed and the rule performance compared",2000,0, 5543,A fault-tolerant transactional agent model on distributed object systems,"A transactional agent is a mobile agent that manipulates objects distributed over computers under some commitment condition, such as atomic commitment. Computers may stop due to faults. In the client-server model, servers can be made fault-tolerant through replication and checkpointing technologies. However, an application program cannot be executed if a client computer is faulty. A transactional agent can move to another operational computer if the destination computer to which it is to move is faulty. In this paper, we discuss how a program that reliably manipulates objects can be realized as a mobile agent in the presence of computer faults.",2006,0, 5544,Fault-tolerant mobile agents in distributed objects systems,"A transactional agent is a mobile agent that manipulates objects in one or more object servers so as to satisfy some constraints. There are several types of constraints, depending on the application. ACID is one such constraint, corresponding to traditional atomic transactions. There are other constraints, such as the at-least-one constraint, where a transaction can commit if at least one object server is successfully manipulated.
An agent leaves a surrogate agent on an object server when it leaves that server. A surrogate holds the objects manipulated by the agent and recreates the agent if the agent is faulty. In addition, an agent replicates itself. Thus, transactional agents are fault-tolerant. We discuss how transactional agents with various types of commitment constraints can commit, and how transactional agents can be implemented.",2003,0, 5545,Design and implementation of inference engine for fault prognosis in Power System,"To address the device faults and line faults that exist in power systems, this paper designs a componentized inference engine that combines forward and backward inference, based on an analysis of the categories and characteristics of the faults. A knowledge base supporting the inference engine is also designed. The engine and knowledge base are implemented with the .NET framework and MySQL, respectively.",2010,0, 5546,A Root-fault Detection System of Grid Based on Immunology,"Based on immunological principles from bionics, a grid root-fault detection system is presented. In this paper, event detection sequences are viewed as analogous to peptides. Using the principle of positive selection in immunology, the system builds up its event database, and behaviors with higher frequency are analyzed and processed first to improve the speed and effectiveness of fault detection. The experimental system implemented with this method shows good diagnostic ability",2006,0, 5547,Data mining-based fault detect and diagnosis for the video amplifier circuit,"Based on the principles of fault detection and diagnosis, this paper presents a new data mining technique to deal with the large volume of data obtained from fault detection and diagnosis. A method using database technology for fault detection and diagnosis is developed.",2003,0, 5548,RI2N: High-bandwidth and fault-tolerant network with multi-link Ethernet for PC clusters,"Although recent high-end interconnection network devices and switches provide a high performance/cost ratio, most small to medium sized PC clusters are still built on the commodity network, Ethernet. To enhance performance on commonly used gigabit Ethernet networks, link aggregation or binding technology is used. Currently, the Linux kernel is equipped with a software solution named Linux channel bonding (LCB), which is based on IEEE802.3ad Link Aggregation technology. However, standard LCB matches poorly with the commonly used TCP protocol, which consequently causes both large latency and instability in bandwidth improvement. A fault-tolerant feature is also supported, but its usability is not sufficient. We have developed a new implementation similar to LCB named RI2N/DRV (redundant interconnection with inexpensive network with driver) for use on gigabit Ethernet, with a complete software stack that is highly compatible with the TCP protocol. Our algorithm suppresses unnecessary ACK packets and packet retransmissions even under imbalanced network traffic and link failures on multiple links. It provides both high-bandwidth and fault-tolerant communication on multi-link gigabit Ethernet.
We confirmed that this system improves the performance and reliability of the network, and our system can be applied to ordinary UNIX services such as NFS without any modification of other modules.",2008,0, 5549,Physical design oriented DRAM Neighborhood Pattern Sensitive Fault testing,"Although the Neighborhood Pattern Sensitive Fault (NPSF) model is recognized as a high-quality fault model for memory arrays, the excessive test application time associated with it, compared to other fault models, restricts its wide adoption for memory testing. In this work we exploit the physical design (layout) of folded DRAM memory arrays to introduce a new neighborhood type for NPSF testing and a pertinent test-and-locate algorithm. This algorithm drastically reduces the test application time (by about 58% with respect to the well-known Type-1 neighborhood), aiming to make the NPSF model a cost-attractive choice as well. In addition, we introduce the Neighborhood Word-Line Sensitive Fault model and the corresponding test algorithm to cover those faults along with NPSFs, achieving a test application time reduction of 33% to 41%, depending on various assumptions, with respect to the Type-1 neighborhood.",2009,0, 5550,Task-based Dynamic Fault Tolerance for Humanoid Robots,"Although the performance of humanoid robots is rapidly improving, very few dependability schemes suitable for humanoid robots have been presented thus far. In particular, the fault tolerance of the engine (i.e., CPU module) has not been discussed. In the future, various tasks ranging from daily chores to safety-related tasks will be carried out by individual humanoid robots. If the characteristics and importance of the given tasks differ, the required fault-tolerant capabilities will also vary accordingly. Therefore, for mobile humanoid robots operating under power constraints, a dynamic fault tolerance capable of reducing power consumption is desirable, because fault-tolerant designs involving hardware redundancy are power-intensive. In addition, an appropriate safety operation must be determined for each task. This paper discusses the dependability of humanoid robots and proposes a task-based dynamic fault tolerance scheme as a vital concept for humanoid robot applications; this scheme is based on hardware redundancy of the engine.",2006,0, 5551,Waveform matching approach for fault diagnosis of a high-voltage transmission line employing harmony search algorithm,"An accurate and effective technology for fault diagnosis of a high-voltage transmission line plays an important role in supporting rapid system restoration. The fault diagnosis of a high-voltage transmission line involves three major tasks, namely fault-type identification, fault location and fault time estimation. The diagnosis problem is formulated as an optimisation problem in this work: the variables involved in the fault diagnosis problem, such as the fault location, and the unknown variables, such as ground resistance, are taken as optimisation variables; the sum of the discrepancies between the approximation components of the actual and expected waveforms is taken as the optimisation objective. Then, according to the characteristics of the formulated optimisation problem, harmony search, an effective heuristic optimisation algorithm developed in recent years, is employed to solve this problem.
Test results for a sample power system have shown that the developed fault diagnosis model and method are correct and efficient.",2010,0, 5552,Adaptive Multi-path Prediction for Error Resilient H.264 Coding,"An adaptive reference selection (ARS) scheme is proposed in this work to enhance the error resilience of H.264 video, where multiple prediction paths can be created in the compressed video stream at the macroblock level without a large amount of bit-rate overhead. We first develop a method to measure the expected distortion at the decoder when the H.264 video is transmitted through error-prone channels. Then, we use an updated rate-distortion cost function to incorporate this measurement into the mode decision process. The best prediction for each macroblock is selected with the objective of achieving the highest expected rate-distortion performance of the GOP in the received video stream. It is shown by experimental results that error propagation is largely reduced and the quality of the received video stream is improved significantly by the proposed scheme",2006,0, 5553,An approach for improving yield with intentional defects,"An advanced methodology was implemented using intentionally created defect arrays to enhance the understanding of defect detection tools, thus improving yield learning. Intentional Defect Array (IDA) reticles were designed at International SEMATECH to target current and future ITRS requirements. Each IDA die pattern contains separate inspection areas for metal line widths of 0.18 µm, 0.25 µm, and 0.35 µm. Defects with known shapes and locations, sized at 25%, 50%, and 100% of the design feature size, are placed in patterns of memory, logic, and electrical test arrays. Advanced lithographic capabilities, short-loop recipes, and dual damascene copper process flows were used to establish the IDA patterns on 200 mm wafers. The IDA wafers are being used in a variety of wafer inspection applications that require calculating capture and false count rates for defect detection. This paper describes the approach used for creating IDA wafers and the way these wafers can be applied to enhance product wafer yield.",2002,0, 5554,A novel approach to fault classification using sparse sets of exemplars,"An algorithm is proposed for determining if a pattern classifier/recognizer can be developed based upon a sparse set of exemplars. Specifically, we address fault classification issues associated with cable television distribution networks and use signatures of observed faults to train our neural networks. Our focus is to derive a training set of exemplars which will ensure that the training of a neural network classifier will result in a system capable of generalization.",2003,0, 5555,An analytic model and optimization technique based methods for fault diagnosis in power systems,"An analytic model for fault diagnosis of power systems using optimization techniques is expressed as an unconstrained 0-1 integer programming problem, so that faulty equipment identification can be solved by refined mathematical operations. Considering the configuration of automatic devices in modern power systems, such as protective relays and reclosing relays, an improved analytic model and optimization technique based method for fault diagnosis of power systems is proposed in this paper. The evaluation criteria of the presented model are improved by considering the relationships among multiple main protective relays, backup protective relays, malfunctioning protective relays and reclosing relays.
Improvements of the analytic model for fault diagnosis of electric power systems based on optimization techniques are presented first. A brief description is given of the modules and functions of the online fault diagnosis software developed by the authors for Jiangsu Provincial Power Company. The adopted EMS data acquisition method and simulated online test results for the power system of Jiangsu Power Company are described.",2008,0, 5556,An approach for fault detection and isolation in dynamic systems from distributed measurements,"An application is presented for online model-based fault detection and isolation (FDI) in a multitank fluid system. The tank system is equipped with a distributed measurement and control system that implements components of the IEEE standard for smart transducers, IEEE 1451. This standard includes an information model that provides programming constructs to support high-level application functionality on a distributed network of smart transducers. The model-based FDI methodology in this work has several aspects that may be realized on such a distributed network. In the current work, the FDI application operates on a workstation that appears on the network as another (virtual) transducer node. The concurrent tasks in the application may be associated with actual transducer nodes. It represents a first effort toward constructing capabilities for distributed FDI in complex dynamic systems",2002,0, 5557,An AS-DSP for forward error correction applications,"An application-specific digital signal processor for channel coding is presented. The vector operations can improve both the performance of memory accesses and program code density. The special function units and datapaths for channel decoding accelerate the decoding speed and facilitate algorithm implementation. The processor was fabricated in a 0.18 µm CMOS 1P6M technology. The chip size is 7.73 mm², including 18k bits of embedded memory, and the power consumption is 141 mW while decoding Reed-Solomon code and convolutional code. In contrast with general-purpose processor designs, the results show this chip has at least a 50% improvement in code density and a 66% data rate enhancement.",2005,0, 5558,Fault Tolerant Algorithm for Functional and Data Flow Parallel Programs Performance on Clusters,"An approach and algorithm providing fault-tolerant execution of functional and data flow parallel processes on clusters are presented in the paper. The main emphasis of the paper is on a decentralized solution supporting fault-tolerant computations and minimizing the time and resources of the recovery process.",2008,0, 5559,A neural network approach for fault diagnosis of large-scale analogue circuits,"An approach for fault diagnosis of large-scale analogue circuits using neural networks is presented in the paper. This method is based on the fault dictionary technique, but it can deal with soft faults due to the robustness of neural networks. Because the neural networks can create the fault dictionary, and memorize and verify it simultaneously, computation time is drastically reduced. Rather than dealing with the whole circuit directly, the proposed approach partitions a large-scale circuit into several small sub-circuits and then tests each sub-circuit using the neural network method. The principle and diagnosis procedure of the method are described.
Two examples are given to illustrate the method for both small and large-scale circuits.",2002,0, 5560,Effect of fiducial configuration on target registration error in intraoperative cone-beam CT guidance of head and neck surgery,"Advances in image-guided surgery have led to minimally-invasive, high-precision procedures that increase the efficacy of treatment, minimize surgical complications, and reduce patient recovery time. A recent advance in intraoperative 3D imaging includes cone-beam CT (CBCT) implemented on a mobile C-arm. This paper investigates the effect of the number and configuration of fiducials on target registration error (TRE) and identifies fiducial configurations that minimize TRE for rigid point-based registration in CBCT-guided head and neck surgery. The best configurations were those that minimized the distance between the centroid of the fiducials and the surgical target while maximizing fiducial separation (distance from principal axes). Configurations with as few as 4 fiducials could be identified that minimized TRE (e.g., TRE < 0.3 mm for the pituitary, cochlea, and nasion), with more fiducials (6 or more) providing improved TRE uniformity throughout the volume of clinical interest. If possible, fiducials affixed to the skin or cranium (e.g., 4–6 markers) should include a majority about the target (to minimize centroid-to-target distance) with others at a distance (to maximize separation). A greater number of fiducials distributed evenly can provide low, uniform TRE for all targets - e.g., 8 markers, TRE 0.2–0.6 mm throughout the volume of interest. Such work helps guide the implementation of C-arm CBCT in head and neck surgery in a manner that maximizes surgical precision and exploits intraoperative image guidance to its full potential.",2008,0, 5561,Dependability analysis of a fault-tolerant processor,"Advances in semiconductor technology have improved the performance of integrated circuits in general, and microprocessors in particular, at a dazzling pace. However, smaller transistor dimensions, lower supply voltages and higher operating frequencies have significantly increased circuit sensitivity to transient and intermittent faults. In this paper we present the architecture of a fault-tolerant processor and analyze its dependability with the aid of a generalized stochastic Petri net (GSPN) model. The effect of transient and intermittent faults is evaluated. It is concluded that fault tolerance mechanisms, usually employed by custom-designed systems, have to be integrated into commercial-off-the-shelf (COTS) devices in order to mitigate the impact of the higher rates of occurrence of transient and intermittent faults",2001,0, 5562,Combined Use of Fuzzy Set-Covering Theory and Mode Identification Technique for Fault Diagnosis in Power Systems,"After a fault occurs in a power system, some operating information from protective relays and circuit breakers can generally be obtained. Because protective relays and circuit breakers might operate improperly or fail to operate, and because errors and distortion may exist in data acquisition and communication, uncertainties can be involved in the received information. A fault diagnosis model based on fuzzy set-covering theory and mode identification technique is proposed in this paper. With fuzzy technology, the above-mentioned uncertainties can be dealt with very well.
Meanwhile, as the protective relays and circuit breakers may fail to operate in some cases, there are several different operating modes of protective relays and circuit breakers even for the same electrical device failure. Based on the received information, the proposed model can identify the most probable operating mode, and then the information corresponding to a fault hypothesis can be obtained. In the proposed model, the fault diagnosis problem is described as a 0-1 integer programming problem, and can thus be solved by widely employed search technology, i.e., the well-known Tabu search method. The feasibility and efficiency of the proposed model are demonstrated by a sample power system.",2007,0, 5563,Attribute selection for fault location recognition in transmission lines,"After a severe disturbance due to an insulation failure in a transmission line, the precise fault location is a critical problem for the maintenance crew. In order to avoid further economic and social costs, fault diagnosis has to be performed as soon as possible. Fault diagnosis has been a major area of investigation among power system problems and intelligent system applications. Several approaches have been proposed for solving this problem. This paper advocates the application of support vector machines for mapping the relationship between electrical signals and fault locations in transmission lines. The significance of voltages and currents is analyzed using steady-state and electromagnetic transient information, i.e., two ways of feeding the fault location models are compared. The tests consider different operating and fault conditions, including different types of fault, a variety of fault impedances, fault angles, line loading, equivalent system impedances, and fault locations",2006,0, 5564,Estimation and prediction for tracking trajectories in cellular networks using the recursive prediction error method,"Given the intrinsically erratic behavior of nodes in mobile networks, mobility prediction has been extensively used to improve the quality of services. Many methods have been proposed, drawing on technologies developed for signal processing, self-learning techniques and/or stochastic methods. Among the latter, the Extended Kalman Filter (EKF), using the received power as a measurement, is the most widely used. However, because the measurement is not linear with distance, the EKF loses stability under certain circumstances and must be reset. Moreover, it requires a priori knowledge of the disturbance and measurement noise covariance matrices, which are difficult to obtain. In this work, starting from the non-linear model, we derive a stable time-variant first-order auto-regressive and moving average (ARMA) model, and propose a prediction mechanism based on the well-known Recursive Prediction Error Method (RPEM) to predict the mobile location, which we then compare with the EKF.
Simulation results show that the RPEM has a prediction error variance that is lower in most cases, and similar in others, to that obtained with the EKF, with the additional advantages that it has guaranteed stability and does not require a priori knowledge of the disturbance and measurement noise covariance matrices, as the EKF does.",2010,0, 5565,Rebars and defects detection by a GPR survey at a L'Aquila school damaged by the earthquake of April 2009,"After the earthquake of L'Aquila (Abruzzo Region, Italy) that occurred on 6 April 2009, Ground Penetrating Radar investigations of public infrastructures were performed in order to provide an early damage assessment of structural elements such as beams and pillars. In particular, here we present the results of a 1500 MHz GPR survey on a cracked beam of a public building of L'Aquila to verify the quality of a restoration work based on epoxy resin injections. First, a classical processing routine was performed in order to focus the rebar hyperbolas and possible reflections coming from the injections; a data volume was built and several depth-slices are presented here. The survey allowed checking of the rebar geometry and the reliability of the intervention based on the epoxy resin injections. Limitations of the classical data processing approach are a great complication in imaging the scene, due both to the risk of the operator introducing subjective elements into the data and to the poor resolution achieved for the deeper rebar layer. Repeating the measurements on the opposite face of the beam in order to detect the second rebar layer was adopted here as a solution to the problem; however, this way of operating becomes unacceptable during an earthquake post-crisis phase. The possibility of overcoming these drawbacks was offered by the microwave-tomography (MT) technique. The MT technique was first tested by processing one of the collected profiles acquired on the joist, and the result confirmed the ability of the technique to achieve good focusing also of the deeper layer of rebars.",2010,0, 5566,Fault tolerant PVFS2 based on data replication,"Aggregating the capacity and bandwidth of the commodity disks in the nodes of a cluster provides cost-effective, high-performance storage systems. Nevertheless, this strategy can be a feasible approach only if the mean time to failure of disks and nodes is addressed. The number of failures increases with the number of nodes, and this is especially important in parallel file systems, like PVFS, because having a file striped over server disks increases the probability of failure. This work proposes a strategy to include data replication in the second version of PVFS in order to provide fault tolerance. We also analyze the performance of the implementation of this approach.",2010,0, 5567,Intermittent Fault Detection and Isolation System,"Aging aircraft electronic boxes often pose a maintenance challenge in that, after malfunctioning during flight in the aircraft, they often test good, or “No Fault Found” (NFF), during ground test. The reason many of these boxes behave in this manner is that they have intermittent faults, which are momentary opens in one or more circuits due to a cracked solder joint, corroded contact, sprung connector receptacle, or any number of other reasons. These NFF boxes often account for a substantial number of boxes processed through a maintenance facility, where no repair can be performed because no problem can be detected.
Conventional test equipment is designed to test the electronic box for nominal operation, and usually “averages out,” and hence hides, any short-term anomalous event. This paper describes a tester that was specifically designed to detect and isolate the intermittent circuits in an electronic box chassis. This new and innovative tester has been designated the intermittent fault detection and isolation system (IFDIS). The IFDIS very effectively complements conventional testers. It includes an environmental chamber and shake table to subject the box to simulated operational conditions, which greatly enhances the probability that the intermittent circuit will manifest itself. The IFDIS also includes an intermittent fault detector which continuously and simultaneously monitors every electrical path in the chassis under test, while the box is exposed to a simulated operational environment. To determine the effectiveness of this new tester in detecting and isolating intermittent circuits, several dozen electronic boxes, identified by serial number, that had been to the repair facility and tested NFF multiple times were selected for IFDIS testing. One or more intermittent faults were detected, isolated and repaired in nearly every box. These boxes were then tested on the conventional tester and returned to service. We are currently monitoring their performance to determine their increased service life and reduced number of NFF incidents.",2008,0, 5568,Design of intelligent fault diagnosis system based on naval vessel's cooling system of diesel engine,"To address the low accuracy and inefficiency of traditional fault diagnosis, an intelligent fault diagnosis system for the diesel engine cooling system of a naval vessel is designed. It is based on an improved back-propagation neural network algorithm and realized in Visual Basic, loading a Dynamic Link Library written in C++. Application results show that it has strong self-learning and self-adaptation abilities.",2010,0, 5569,Implementation of a Distributed Fault-Tolerant Computer for UAV,"Aiming at the flight safety of a high-altitude long-endurance unmanned aerial vehicle (UAV), a distributed fault-tolerant computer (FTC) was designed based on a controller area network (CAN). According to the requirements of UAV control and the system structure of the FTC, solutions to key issues (redundancy management, synchronization technology, scheduling strategy, CAN communication and software implementation technology) were given. Special testing and simulation results showed that the FTC was well-functioning and met the requirements of the UAV flight control system.",2010,0, 5570,Recognition and Extraction Algorithm Design for Defect Characteristics of Armor-plate Flaw Detection Image,"For the detection images of strip steel rolled on the 15 mm plate production line of a steel plant, a defect characteristics recognition and extraction algorithm was analyzed and designed, based on computer image processing and pattern recognition theory, and the corresponding defect recognition and processing program was written in the VC++ 6.0 language. In the paper, an 8-direction pixel gray-value search algorithm was first developed based on computer image color grading theory to extract the pixel information of every gray level of the armor-plate detection image and to compute the corresponding distribution probability statistics for each gray level.
Based on the statistical results, a two-dimensional histogram Fisher evaluation function algorithm for armor-plate CCD image processing was designed. Practical application shows that the defect recognition system programmed from the above algorithm can accurately recognize and extract the defect characteristics data from the rolled armor-plate detection images, and can effectively satisfy the industrial production requirements of plate rolling.",2010,0, 5571,Support Vector Machine for Mechanical Faults Diagnosis,"To address the difficulty that model selection for the Support Vector Machine (SVM) classification algorithm affects classification accuracy, this paper investigates the relevant factors that influence the precision of fault classifiers, based on typical fault data samples obtained from an experimental rotor-bearing system setup. The results show that different SVM classifiers, adopting different kernel functions and different kernel function parameters, influence the precision of fault classifiers when the fault data sample is small. The approach can be conveniently applied to choose appropriate kernel functions and kernel function parameters in engineering applications.",2010,0, 5572,Bug Mining Model Based on Event-Component Similarity to Discover Similar and Duplicate GUI Bugs,"Almost all of the bugs related to the graphical user interface (GUI) module of applications are described in terms of events associated with GUI components. In this paper, a bug mining model for discovering duplicate and similar GUI bugs is presented and an approach for detecting similar and duplicate GUI bugs is described. The resolutions of similar and duplicate bugs are almost identical, so identifying them optimizes the time for fixing reported GUI bugs and can also help achieve faster development. A GUI bug can be transformed into a sequence of events, components and expected implementation requirements for each GUI event. This transformation is used in this paper to discover similar and duplicate GUI bugs. First, all the GUI bugs are transformed into event, component and requirement sequences; then these sequences are pairwise matched and a common subsequence is generated, which indicates the similarity of the GUI bugs.",2009,0, 5573,A Novel Method of Fault Diagnosis in Wind Power Generation System,"With growing environmental consciousness and the depletion of conventional energy sources, wind energy exploitation is expanding steadily owing to its renewable, pollution-free and abundant nature. Therefore, wind power generation (WPG) systems equipped with doubly fed induction generators are mushrooming. Larger rated capacities of power units, higher towers and variable pitch are the main trends in WPG systems. However, latent trouble grows likewise: if a fault occurs, it can be catastrophic for a WPG system. Consequently, fault detection technology will play an increasingly important role in WPG systems. Based on the present status, and after summarizing and analyzing the shortcomings of previous methods, a novel method is proposed in this paper. An example of detecting an inverter fault is then studied using PSCAD software.
Results indicated that the proposed method is effective and feasible.",2010,0, 5574,FACTS: A Framework for Fault-Tolerant Composition of Transactional Web Services,"Along with the standardization of Web services composition language and the widespread acceptance of composition technologies, Web services composition is becoming an efficient and cost-effective way to develop modern business applications. As Web services are inherently unreliable, how to deliver reliable Web services composition over unreliable Web services is a significant and challenging problem. In this paper, we propose FACTS, a framework for fault-tolerant composition of transactional Web services. We identify a set of high-level exception handling strategies and a new taxonomy of transactional Web services to devise a fault-tolerant mechanism that combines exception handling and transaction techniques. We also devise a specification module and a verification module to assist service designers in constructing fault-handling logic conveniently and correctly. Furthermore, we design an implementation module to automatically implement fault-handling logic in WS-BPEL. A case study demonstrates the viability of our framework and experimental results show that FACTS can improve fault tolerance of composite services with acceptable overheads.",2010,0, 5575,The Empirical Type I Error of Dynamic Statistical Parameter Mapping Estimates,"Although functional neuroimaging has provided methods for measuring and localizing change in metabolic or electromagnetic brain activity, the interpretation of the significance of such changes is challenging. This is at least partially due to the lack of a solid statistical foundation for many of these methods, as well as the obvious problem associated with simultaneously performing thousands of statistical tests. Dynamic statistical parameter mapping (dSPM) is a well-developed, anatomically constrained, noise-normalized t2 minimum-norm technique used to functionally map magnetoencephalography (MEG) data. dSPM values are theoretically statistically distributed as the square root of an F-distribution with 1 or 3 degrees of freedom in the numerator, depending on how loosely the source orientation is fixed. The validity of this statistical assumption for dSPM values is examined in this study. MEG data from six participants were recorded while the participants looked straight ahead at a cross on a screen for 60 seconds during which no task was performed. Triggers were randomly placed throughout the interval and dSPM values were derived as if the participant were performing a task lasting 100 ms. When the entire mock task was analyzed, the number of significant dSPM values was close to the expected probability. However, when the dSPM images were examined over time on an individual level, it was clear that the significant dSPM values clustered spatiotemporally. This suggests that a systematic non-stochastic process may have resulted in this pattern of Type I error.",2007,0, 5576,Automated Bug Neighborhood Analysis for Identifying Incomplete Bug Fixes,"Although many static-analysis techniques have been developed for automatically detecting bugs, such as null dereferences, fewer automated approaches have been presented for analyzing whether and how such bugs are fixed. Attempted bug fixes may be incomplete in that a related manifestation of the bug remains unfixed.
In this paper, we characterize the completeness of attempted bug fixes that involve the flow of invalid values from one program point to another, such as null dereferences, in Java programs. Our characterization is based on the definition of a bug neighborhood, which is a scope of flows of invalid values. We present an automated analysis that, given two versions P and P' of a program, identifies the bugs in P that have been fixed in P', and classifies each fix as complete or incomplete. We implemented our technique for null-dereference bugs and conducted empirical studies using open-source projects. Our results indicate that, for the projects we studied, many bug fixes are not complete, and thus, may cause failures in subsequent executions of the program.",2010,0, 5577,Extraction of the basic feature points of handwriting data by auto translation error map,"Analysis of online handwritten time series, acquired by pen tablet systems, is valuable in various fields such as handwriting recognition, person verification and skill analysis. Generally, online handwriting data is a multidimensional time series comprising time series of pen-tip position (x, y), pressure, altitude, azimuth, etc. In person verification applications, the use of such multivariate data improves verification accuracy. However, the increase in data volume increases the computational cost of analysis. In this study, in order to reduce data volume, we propose a new method for extracting the basic feature points of multidimensional handwriting time series from the viewpoint of testing determinism in the underlying dynamics behind handwriting. The proposed method is based on a two-dimensional recurrence map of translation error. Basic feature points denote the principal points in the trajectory of handwriting dynamics that preserve the rough form of the individual's handwriting speciality. A simulation experiment has been done with SVC 2004 online handwriting signature data. The result shows that the basic feature point series is quite sufficient for analyzing the data for identity detection, while the raw handwriting time series includes redundancy.",2009,0, 5578,Digital correction method for an H-field sensor in power system EMC measurements,"Analysis was conducted on the nature and characteristics of an H-field sensor for power system electromagnetic compatibility (EMC) measurements. Based on the original calibrated response curve of the magnetic-field (H) sensor and steepest descent theory, an analog model has been created to represent the sensor's transfer function. This function was converted into a set of digital algorithms by the bilinear transformation method, thus allowing subsequent signal processing on available software. The digital correction model for the sensor gives an estimated error of within 8% for all the relevant frequency components. The results present some general analysis methods for power system EMC measurements.",2004,0, 5579,Business-oriented fault localization based on probabilistic neural networks,"Analyzed here is a business-oriented fault localization algorithm based on a transitive-closure fault propagation model and probabilistic neural networks (PNN). Business-oriented fault localization constructs a fault propagation model for each large, complex software business. This strategy focuses on the availability of key business functions rather than on scattered fault information.
Because of the complex dependency relations between software, hardware, and middleware, a fault in one component may propagate to correlated components and produce multiple alarms (symptoms). The transitive closure is the domain of possible symptoms of the faults. Fault diagnosis can thus be transformed into a classification problem. In practice, PNN is often an excellent pattern classifier, outperforming other classifiers including back-propagation (BP). It trains quickly since the training is done in one pass over each training vector, rather than over several iterations. In our fault localization algorithm FLPNN, the conditional probability of a symptom is used as the weight of the hidden layer, and the probability of a fault is used as the weight of the output layer. The input of FLPNN is a binary vector representing whether or not each symptom has occurred. In order to adapt to changes in fault patterns, an incremental learning algorithm, DFLPNN, is also investigated. The simulation results show the validity and efficiency of FLPNN compared with MCA+ under lost and spurious symptom circumstances.",2009,0, 5580,Evergreen: A fault-tolerant application streaming technique,"Application streaming is a technology for streaming the code of an application from a central server and running it on a client computer without download and installation. The application can be executed while streaming of the application code is still in progress. However, since software streaming is based on networks, its service is affected by network failures. Network failures may cause the streamed application to stop and, worse, the client system may crash. This paper proposes a network fault-tolerant application streaming technique named Evergreen. This technique enables a client to work with the application continuously during a network failure by using the downloaded code of the previously used function. We also discuss the implementation details of the Evergreen model.",2009,0, 5581,Approximation method and formulas for average unavailability of systems with latent faults,An approximation method and formulas are provided for the average per-flight probability of failure for aircraft systems whose components can fail latently. The approximations are given for two- and three-component redundant systems. The accuracy of the approximate formulas is studied by comparing them to numerical results from a Markov solver. The comparison shows that the approximation is close to the exact result in ultra-high-reliability systems for a large range of component failure rates. The formulas are useful in representing event combinations involving latent events in fault trees. They could also be used to estimate the average per-flight failure probability of more complex systems by a combination of the rare event approximation and the application of the formulas to the minimal cutsets of the system failure event,2002,0, 5582,TMS320 DSP based neural networks on fault diagnostic system of turbo-generator,"Artificial neural networks (ANN) are massively parallel interconnections of simple neurons that function as a collective system. ANNs are attracting more and more attention for turbo-generator fault diagnosis because of their associative, memory and learning capabilities. However, the disadvantage of ANNs lies in their huge computational load and low speed of convergence. If an ANN is realized on a common CPU, computing the huge amount of data takes so much time that real-time fault diagnosis becomes impossible. In fact, most of the computation in an ANN is multiplication and addition.
Digital signal processors (DSPs), in contrast, have a great advantage in multiplication and addition, as they can perform parallel multiplication and addition in a single clock cycle. Consequently, we design a master/slave system to solve the problem. The slave system is mainly made up of a DSP, which performs the high-speed ANN calculation. The master system is made up of a PC, which performs data communication and real-time fault diagnosis. In this paper we put forward a practical system design and present detailed hardware and software design methods using the back-propagation (BP) network often used in fault diagnosis.",2003,0, 5583,ATP-Based Automated Fault Simulation,"As a free Electromagnetic Transient (EMT) simulation program, the Alternative Transient Program (ATP) cannot simulate in batch mode. Manual operation is very tedious and error-prone when thousands of faults are to be simulated. In order to automate the process, based on close observation and analysis of the operating mechanism of the ATP, this letter derives some useful rules and develops a software package to automate ATP-based EMT simulation.",2008,0, 5584,Research on immune reconstruction application technology for fault diagnosis,"As applied AI systems for fault diagnosis, intelligent fault diagnosis systems have achieved very good results in practice. However, as equipment and system functions become gradually more complex, leading to more complicated and exceptional faults, it is often difficult to determine the causes of all faults in real time in complex systems using traditional intelligent reasoning methods. After analyzing the immune mechanism, fault diagnosis multi-agents with characteristics of evolution and reconfiguration are presented; the relation between agents, named immune co-evolution, is discussed; the basic process of reconstructing a diagnosis model is described; and an algorithm is designed to reconstruct diagnosis models. Finally, the feasibility of this research is initially verified by simulation results.",2008,0, 5585,Research on high-speed fuzzy reasoning with FPGA for fault diagnosis expert system,"Although fuzzy reasoning is an effective method for diagnostic reasoning, it struggles to meet real-time requirements because of its complex and time-consuming process. Traditionally, implementing fuzzy reasoning in software consumes much time, so a new method for designing expert system fuzzy reasoning with an FPGA for fault diagnosis is presented in this paper. In the new method, fuzzy operations are realized by function transforms with ROM, and the FPGA provides logic control and process coordination for the fuzzy reasoning. The whole fuzzy reasoning process is carried out in hardware instead of software. Many experiments indicate that fuzzy reasoning with this method is faster than traditional modes, and it is applicable to many on-line diagnosis systems based on single-chip controllers or DSPs (Digital Signal Processors).",2009,0, 5586,Experimental investigation and analysis for gearbox fault,"The gearbox is an important component of the wind turbine drive chain, and its high failure rate can have a serious impact on the operation of wind turbine units. The fault diagnosis of the gearbox is therefore important for the safety of wind turbine units.
In this paper, a gearbox experiment rig was built, and experiments on the gear surface spalling fault were conducted on the rig under laboratory conditions. The shaft vibration displacement signals were collected, the related vibration signals were analyzed in the time domain, amplitude domain and frequency domain and by wavelet analysis, and the results of the different analysis methods were compared. The results show that the waveforms of normal and fault signals are similar, that the amplitude of the signals changes when the fault happens, and that wavelet analysis is more effective than frequency domain analysis for gearbox fault feature extraction.",2010,0, 5587,Analyzing the soft error resilience of linear solvers on multicore multiprocessors,"As chip transistor densities continue to increase, soft errors (bit flips) are becoming a significant concern in networked multiprocessors with multicore nodes. Large cache structures in multicore processors are especially susceptible to soft errors as they occupy a significant portion of the chip area. In this paper, we consider the impact of soft errors in caches on the resilience and energy efficiency of sparse linear solvers. In particular, we focus on two widely used sparse iterative solvers, namely Conjugate Gradient (CG) and Generalized Minimum Residuals (GMRES). We propose two adaptive schemes, (i) a Write Eviction Hybrid ECC (WEH-ECC) scheme for the L1 cache and (ii) a Prefetcher Based Adaptive ECC (PBA-ECC) scheme for the L2 cache, and evaluate the energy and reliability trade-offs they bring in the context of GMRES and CG solvers. Our evaluations indicate that WEH-ECC reduces the CG and GMRES soft error vulnerability by a factor of 18 to 220 in the L1 cache, relative to an unprotected L1 cache, and energy consumption by 16%, relative to a cache with strong protection. The PBA-ECC scheme reduces the CG and GMRES soft error vulnerability by a factor of 9 × 10^3 to 8.6 × 10^9, relative to an unprotected L2 cache, and reduces the energy consumption by 8.5%, relative to a cache with strong ECC protection. Our energy overheads over unprotected L1 and L2 caches are 5% and 14% respectively.",2010,0, 5588,Replication-Based Fault Tolerance for MPI Applications,"As computational clusters increase in size, their mean time to failure reduces drastically. Typically, checkpointing is used to minimize the loss of computation. Most checkpointing techniques, however, require central storage for storing checkpoints. This results in a bottleneck and severely limits the scalability of checkpointing, while also proving to be too expensive for dedicated checkpointing networks and storage systems. We propose a scalable replication-based MPI checkpointing facility. Our reference implementation is based on LAM/MPI; however, it is directly applicable to any MPI implementation. We extend the existing state of fault-tolerant MPI with asynchronous replication, eliminating the need for central or network storage. We evaluate centralized storage, a Sun-X4500-based solution, an EMC storage area network (SAN), and the Ibrix commercial parallel file system and show that they are not scalable, particularly after 64 CPUs. We demonstrate the low overhead of our checkpointing and replication scheme with the NAS Parallel Benchmarks and the High-Performance LINPACK benchmark with tests up to 256 nodes while demonstrating that checkpointing and replication can be achieved with a much lower overhead than that provided by current techniques.
Finally, we show that the monetary cost of our solution is as low as 25 percent of that of a typical SAN/parallel-file-system-equipped storage system.",2009,0, 5589,Design and Analysis of Synchronizable Error-Resilient Arithmetic Codes,"An error-resilient variable-length arithmetic code is presented whose codewords are represented by binary digits. The input sequence is partitioned into subsequences, each of which is individually encoded using an arithmetic coding scheme with an integrated bit-stuffing technique that restricts the number of consecutive ones in the output sequence. An all-ones sequence of fixed length is appended to serve as a sync marker when the codewords are concatenated. The bit-stuffing technique ensures that the sync markers do not occur anywhere except at the boundaries between the codewords. Expressions for the optimal choice of the marker length and the block length are derived. The performance of the proposed code is determined in terms of redundancy and error resilience. An upper bound on the average error rate is derived and its tightness is confirmed with computer simulations. The proposed code is shown to significantly suppress the error rate at the expense of a minimal increase in redundancy.",2009,0, 5590,A Fault Tolerant Adaptive Method for the Scheduling of Tasks in Dynamic Grids,"An essential issue in distributed high-performance computing is how to efficiently allocate the workload among the processors. This is especially important in a computational Grid, whose resources are heterogeneous and dynamic. Algorithms like Quadratic Self-Scheduling (QSS) and Exponential Self-Scheduling (ESS) are useful to obtain a good load balance while reducing the communication overhead. Here, a fault-tolerant adaptive approach to schedule tasks in dynamic Grid environments is proposed. The aim of this approach is to optimize the list of chunks that QSS and ESS generate, that is, the way the tasks are scheduled. For that, when the environment changes, new optimal QSS and ESS parameters are obtained to schedule the remaining tasks in an optimal way, maintaining a good load balance. Moreover, failed tasks are rescheduled. The results show that the adaptive approach achieves good performance for both QSS and ESS even in a highly dynamic environment.",2009,0, 5591,Residual Generators for Fault Diagnosis Using Computation Sequences With Mixed Causality Applied to Automotive Systems,"An essential step in the design of a model-based diagnosis system is to find a set of residual generators fulfilling stated fault detection and isolation requirements. To be able to find a good set, it is desirable that the method used for residual generation gives as many candidate residual generators as possible, given a model. This paper presents a novel residual generation method that enables simultaneous use of integral and derivative causality, i.e., mixed causality, and also handles equation sets corresponding to algebraic and differential loops in a systematic manner. The method relies on a formal framework for computing unknown variables according to a computation sequence. In this framework, mixed causality is utilized, and the analytical properties of the equations in the model, as well as the available tools for algebraic equation solving, are taken into account. The proposed method is applied to two models of automotive systems, a Scania diesel engine and a hydraulic braking system.
Significantly more residual generators are found with the proposed method in comparison with methods using solely integral or derivative causality.",2010,0, 5592,Evanescent Field Absorption Sensor Using a Pure-Silica Defected-Core Photonic Crystal Fiber,"An evanescent field absorption sensing technique in liquid solutions is demonstrated using a microstructured photonic crystal fiber (PCF). An aqueous solution of cobalt chloride was infiltrated into all air holes of the PCF. The defected core with a central hole in the PCF, together with an increase in the surface volume due to the evanescent field penetration into the air holes, enhanced the liquid absorption sensitivity. The effects of solution concentration, detection directions, and PCF length on the absorption sensitivity are investigated experimentally. We have obtained a linear relationship between the evanescent field absorption and liquid concentration and also between the evanescent field absorption and PCF length. The absorption sensitivity using the longitudinal detection method is increased by more than 60 times compared with that of the perpendicular measurement technique.",2008,0, 5593,Error Rate Expression for Perpendicular Magnetic Recording,"An expression for estimating the error rate in a perpendicular magnetic recording system is developed. The probability of error is estimated for a dominant dibit error event. Noise includes stationary white Gaussian head/electronics noise and nonstationary colored medium transition noise. The error rate is determined for a variety of parameter changes. In particular, it is shown how two systems with the same total signal-to-noise ratio can have different error rates. Expansion of the model to include additional signal and noise effects as well as the evaluation of different error events is discussed.",2008,0, 5594,Concept design of remote fault diagnosis system for autonomous mobile robots,"An idea of considering an autonomous mobile robot as a virtual local network system is proposed for developing a remote fault diagnosis system for mobile robots. Within the developed diagnosis system, a mobile robot is taken as one of the management objects of the network management system for autonomous mobile robots, and the simple network management protocol, which is widely used in network management systems, is applied and adapted to the developed system as the communication protocol for exchanging diagnosis information. Moreover, by taking advantage of the active moving and sensing abilities of autonomous mobile robots, an effective fault inference method is also discussed",2000,0, 5595,On the placement of software mechanisms for detection of data errors,"An important aspect in the development of dependable software is to decide where to locate mechanisms for efficient error detection and recovery. We present a comparison between two methods for selecting locations for error detection mechanisms, in this case executable assertions (EAs), in black-box, modular software. Our results show that by placing EAs based on error propagation analysis one may reduce the memory and execution time requirements as compared to experience- and heuristic-based placement while maintaining the obtained detection coverage. Further, we show the sensitivity of the EA-provided coverage estimation to the choice of the underlying error model.
Subsequently, we extend the analysis framework such that error-model effects are also addressed and introduce measures for classifying signals according to their effect on system output when errors are present. The extended framework facilitates profiling of software systems from varied dependability perspectives and is also less susceptible to the effects of having different error models for estimating detection coverage.",2002,0, 5596,Calibration and Profile based Synopses Error Estimation and Synopses Reconciliation,"An important factor in the effective utilization of data synopses is the ability to have good a priori estimates on their expected query approximation errors. Such estimates are essential for the appropriate decisions regarding which synopses to build and how much space to allocate to them, which are also at the heart of the synopses reconciliation problem. We present a novel synopses error estimation method based on the construction of synopses-dependent error estimation functions. These functions are computed in a pre-processing stage using a calibration method. Subsequently, they are used to provide ad hoc error estimation w.r.t. given data sets and query workloads based only on their statistical profiles. We also present a novel approach to synopses reconciliation, using the error-estimation functions within synopses reconciliation algorithms, gaining significant efficiency improvements by minimizing and even avoiding interference with the operational databases. Our method enables the first practical solution for the dynamic synopses reconciliation problem.",2007,0, 5597,Information Retrieval Based on OCR Errors in Scanned Documents,"An important proportion of documents are document images, i.e. scanned documents. For their retrieval, it is important to recognize their contents. Current technologies for optical character recognition (OCR) and document analysis do not handle such documents adequately because of the recognition errors. In this paper, we describe an approach that detects errors in scanned texts without relying on a lexicon and integrates this detection into the search process. The proposed algorithm consists of two basic steps. In the first step, we apply editing operations on OCR words to generate a collection of error-grams and correction rules. The second step uses query terms, error-grams, and correction rules to create searchable keywords, identify appropriate matching terms, and determine the degree of relevance of retrieved document images. The algorithm has been tested on 979 document images provided by the Media-team databases from Washington University, and the experimental results obtained show the effectiveness of our method and indicate improvement in comparison with standard methods such as exact or partial matching, N-gram overlaps, and Q-gram distance.",2003,0, 5598,Improved transient simulation of salient-pole synchronous generators with internal and ground faults in the stator winding,"An improved model for simulating the transient behavior of salient-pole synchronous generators with internal and ground faults in the stator winding is established using the multi-loop circuit method. The model caters for faults under different ground conditions for the neutral, and accounts for the distributed capacitances of the windings to ground. Predictions from the model are validated by experiments, and it is shown that the model accurately predicts the voltage and current waveforms under fault conditions.
Hence, it can be used to analyze important features of faults and to design appropriate protection schemes.",2005,0, 5599,Simulation of industrial AC drive system under fault conditions,"An industrial motor drive suffers from periodic failures of the DC link aluminum electrolytic capacitor bank. Simulations pinpoint likely causes, including problems with the design of the balancing resistors. Stress on the capacitor can be produced by faults in the inverter and by open-conductor faults in the balancing resistors.",2003,0, 5600,Integrated fault tolerant scheme with disturbance feedforward,"An integrated fault-tolerant scheme is presented with disturbance compensation. Fault detection and compensation are merged to provide an algorithm that is robust against model uncertainties. The GIMC control architecture is used as a feedback configuration for the fault-tolerant scheme. The synthesis procedure for the parameters of the fault-tolerant scheme is carried out by using tools of robust control theory. In order to increase the set of strongly detectable faults, the disturbance information is fed forward into the fault detection algorithm. A detection filter is designed for fault isolation taking into account the uncertainties in the mathematical model. Finally, the fault compensation strategy also incorporates the disturbance estimation to improve the performance of the closed-loop system after the fault is detected. In order to illustrate these ideas, the speed regulation of a dc motor is selected as a case study, and the experimental results are reported.",2004,0, 5601,A Simulation Environment for the On-Line Monitoring of a Fault Tolerant Flight Control Computer,"An approach to designing a simulation environment for the on-line monitoring of a fault-tolerant flight control computer is presented in this paper. The simulation environment is designed to evaluate an improved on-line monitoring technique for processors with a built-in cache. This technique assumes that a monitor checks on-line whether the execution of a program is in accordance with the control flow graph created for the program off-line by a preprocessor. The simulation environment consists of the target processor and the monitor, but also includes carefully chosen benchmark programs, fault injection modules and the preprocessor.",2009,0, 5602,A Roadmap for Autonomous Fault-Tolerant Systems,"An Autonomous Fault-Tolerant System (AFTS) refers to a system that is able to configure its own resources in the presence of permanent defects and spontaneous random faults occurring in its silicon substrate in order to maintain its functionality. This work analyzes how AFTS could be built, specifically focusing on hardware platform dependent issues, and gives an overview of the state-of-the-art in this field, which is still in its infancy. Three technological levels are used for classifying the research efforts conducted to date. By describing the current state-of-the-art and the constraints imposed by current technology, this work tries to envision future trends towards the ultimate objective of achieving a fully-adaptive system capable of modifying its architecture on-the-fly as needed. Finally, the general structure and organization of a Reliable Reconfigurable Real-Time Operating System (R3TOS) is presented.
This OS aims at making the aforementioned adaptability easily exploitable by future commercial applications.",2010,0, 5603,A Method to Evaluate Voltages to Earth During an Earth Fault in an HV Network in a System of Interconnected Earth Electrodes of MV/LV Substations,"An easy and swift method to evaluate, in a system of interconnected earth electrodes, earth potentials on earthing systems of medium-voltage/low-voltage (MV/LV) substations, in the event of a single-line-to-earth fault inside a high-voltage/medium-voltage (HV/MV) station, is presented. The advantage of the method is the simplicity of the mathematical model for solving complex systems of any size with sufficient accuracy for practical purposes. This paper shows the results of simulations, performed on networks with different extensions and characteristics, organized in easy-to-read graphs and tables. A comparison of these results with the values obtained according to the procedure explained in the IEC Standard 60909-3, and a study on the accuracy of the method, have been made. Moreover, some considerations on the inclusion of earth electrodes of HV/MV stations within global earthing systems are made.",2008,0, 5604,Bit error rate of a digital radio eavesdropper on computer CRT monitors,"An eavesdropper on computer CRT (cathode ray tube) monitors can be used to intercept video information. Its anti-noise performance is analyzed in this paper. Baseband transmission models of digital signals are established according to the operating principle of the eavesdropper. The relationship between the eavesdropper's bit error rate and some parameters, such as intercept distance, the number of superpositions and noise power, is discussed under conditions with and without ISI. Good agreement is obtained between experimental results and theoretical analysis.",2004,0, 5605,An Adaptive Error Concealment Scheme for H.264/AVC Transmission over Packet-Loss networks,"An effective error concealment (EC) scheme for H.264/AVC transmission over packet-loss networks is proposed in this paper. First, a lost macroblock (MB) is partitioned into adaptively sized blocks based on motion analysis around the lost MB. Then, the motion vector (MV) of each partition in the lost MB is recovered by an improved decoder motion vector estimation algorithm (DMVE). Finally, a modified overlapped block motion compensation method making use of the spatial smoothness property is utilized to reduce blocking artifacts caused by motion compensation with the recovered motion vector. Experimental results show that the proposed scheme outperforms other well-known methods in both PSNR and visual performance.",2007,0, 5606,Novel substrate integrated waveguide cavity filter with defected ground structure,"An open-structure eigenvalue problem of substrate integrated waveguide (SIW) cavity structures is investigated in detail by using a finite-difference frequency-domain method, and the quality (Q) factor of such SIW cavities is given. Based on the concept of a defected ground structure, a new class of SIW cavity bandpass filters is designed, fabricated, and measured around 5.8 GHz. With their fabrication on standard printed circuit boards, such filters present the advantages of a high Q factor, high power capacity, and small size.
Simulated and measured results are presented and discussed to show the promising performance of the proposed filters.",2005,0, 5607,Error propagation profiling of operating systems,"An operating system (OS) constitutes a fundamental software (SW) component of a computing system. The robustness of its operations, or lack thereof, strongly influences the robustness of the entire system. Targeting enhancement of robustness at the OS level via use of add-on SW wrappers, this paper presents an error propagation profiling framework that assists in a) systematic identification and location of design and operational vulnerabilities, and b) quantification of their potential impact. Focusing on data (value) errors occurring in OS drivers, a set of measures is presented that aids a designer in locating such vulnerabilities, either on an OS service (system call) basis or a per-driver basis. A case study and associated experimental process, using Windows CE .Net, is presented outlining the utility of our proposed approach.",2005,0, 5608,Analog Circuits Fault Diagnosis Based on SVMs,"Analog circuit fault diagnosis can be modeled as a pattern recognition problem and solved by machine learning algorithms. SVM is often chosen as the learning machine because of its good generalization ability in small-sample decision problems. However, in practical applications, because fault samples are hard to acquire, the number of fault samples is far smaller than that of normal samples, which makes fault diagnosis a typical imbalanced problem. It is found that traditional SVM cannot ensure good performance in this situation, so in this paper we propose an improved SVM, muSVM. In the new method, a parameter mu is introduced into the decision function so that the weight of the fault class can be adjusted and, consequently, the influence of the fault class in the decision function can be enlarged. Simulation experiments show that this method is effective in solving the problem of analog circuit fault diagnosis.",2009,0, 5609,Adaptive Fault Management of Parallel Applications for High-Performance Computing,"As the scale of high-performance computing (HPC) continues to grow, failure resilience of parallel applications becomes crucial. In this paper, we present FT-Pro, an adaptive fault management approach that combines proactive migration with reactive checkpointing. It aims to enable parallel applications to avoid anticipated failures via preventive migration and, in the case of unforeseeable failures, to minimize their impact through selective checkpointing. An adaptation manager is designed to make runtime decisions in response to failure prediction. Extensive experiments, by means of stochastic modeling and case studies with real applications, indicate that FT-Pro outperforms periodic checkpointing, in terms of reducing application completion times and improving resource utilization, by up to 43 percent.",2008,0, 5610,Fault-Aware Runtime Strategies for High-Performance Computing,"As the scale of parallel systems continues to grow, fault management of these systems is becoming a critical challenge. While existing research mainly focuses on developing or improving fault tolerance techniques, a number of key issues remain open. In this paper, we propose runtime strategies for spare node allocation and job rescheduling in response to failure prediction. These strategies, together with failure predictors and fault tolerance techniques, constitute a runtime system called FARS (Fault-Aware Runtime System).
In particular, we propose a 0-1 knapsack model and demonstrate its flexibility and effectiveness for reallocating running jobs to avoid failures. Experiments, by means of synthetic data and real traces from production systems, show that FARS has the potential to significantly improve system productivity (i.e., performance and reliability).",2009,0, 5611,Thread Relocation: A Runtime Architecture for Tolerating Hard Errors in Chip Multiprocessors,"As the semiconductor industry continues its relentless push for nano-CMOS technologies, device reliability and the occurrence of hard errors have emerged as a dominant concern in multicores. Although regular memory structures are protected against hard errors using error correcting codes or spare rows and columns, many of the structures within the cores are left unprotected. Even if the location of hard errors is known a priori, disabling faulty cores results in a substantial performance loss. Several proposed techniques use microarchitectural redundancy to allow defective cores to continue operation. These techniques are attractive, but limited due to either the added cost of additional redundancy that offers no benefits to an error-free core, or limited coverage due to the natural redundancy offered by the microarchitecture. We propose to exploit the intercore redundancy in chip multiprocessors for hard-error tolerance. Our scheme combines hardware reconfiguration to ensure reduced functionality of cores, and a runtime layer of software (microvisor) to manage the mapping of threads to cores. Microvisor observes the changing phase behavior of threads and initiates thread relocation to match the computational demands of threads to the capabilities of cores. Our results show that in the presence of degraded cores, microvisor mitigates performance losses by an average of two percent.",2010,0, 5612,Impact of intrinsic parameter fluctuation on the fault tolerance of L1 data cache,"As the semiconductor process technology continues to scale deeper into the nanometer region, intrinsic parameter fluctuations will aggressively affect the performance and reliability of future microprocessors and System-on-Chip (SoC) applications. These systems require large SRAM arrays that occupy an increasing fraction of the chip real estate. To investigate the impact of various sources of intrinsic parameter fluctuation (IPF) from a systems point of view, a framework bridging architecture-level and device-level simulation is utilized for data caches built from transistors at the 25 nm, 18 nm and 13 nm technology nodes. This study found that IPF will not have any significant impact on data cache memory systems built with the 25 nm node, while increasing the memory cell ratio to two will overcome the IPF impact for the 18 nm node. However, the 13 nm technology data cache could not operate even with a higher cell ratio. Common cache memory fault detection and correction techniques such as ECC and redundancy can only partially remove the transaction errors caused by these fluctuation sources.",2009,0, 5613,Research on domestic PV module structure based on fault detection,"As to detecting the location of faults in a photovoltaic (PV) module structure, this paper presents a new type of PV array connection: the CTCT structure (complex-total-cross-tied array). In the array of CTCT-type PV cells, by adding a certain number of current sensors and comparing the detected currents, the Hot Spot cells can be located to avoid their damage to PV panels.
This paper derives a formula to demonstrate the relationship between the total number of PV panels in parallel, the resolution and the total number of current sensors in the PV system. The algorithm has been verified on a 3*9 PV panel board. Given a certain number of PV panels, by selecting a reasonable resolution value to determine the number of sensors needed, we can detect the precise location of PV cells that have Hot Spots using as few current sensors as possible.",2010,0, 5614,Adaptive order-statistics multi-shell filtering for bad pixel correction within CFA demosaicking,"As today's digital cameras contain millions of image sensors, it is highly probable that the image sensors will contain a few defective pixels due to errors in the fabrication process. While these bad pixels would normally be mapped out in the manufacturing process, more defective pixels, known as hot pixels, could appear over time with camera usage. Since some hot pixels can still function at normal settings, they need not be permanently mapped out because they will only appear on a long exposure and/or at high ISO settings. In this paper, we apply an adaptive order-statistics multi-shell filter within CFA demosaicking to filter out only bad pixels whilst preserving the rest of the image. The CFA image containing bad pixels is first demosaicked to produce a full colour image. The adaptive filter is then applied only to the actual sensor pixels within the colour image for bad pixel correction. Demosaicking is then re-applied at those bad pixel locations to produce the final full colour image free of defective pixels. It has been shown that our proposed method outperforms a separate process of CFA demosaicking followed by bad pixel removal.",2009,0, 5615,An approach to calculating the bit-error rate of a coherent chaos-shift-keying digital communication system under a noisy multiuser environment,"Assuming ideal synchronization at the receivers, an approach to calculating the approximate theoretical bit-error rate (BER) of a coherent chaos-shift-keying (CSK) digital communication system under an additive white Gaussian noise environment is presented. The operation of a single-user coherent CSK system is reviewed and the BER is derived. Using a simple cubic map as the chaos generator, it is demonstrated that the calculated BERs are consistent with those found from simulations. A multiuser coherent CSK system is then defined and the BER is derived in terms of the noise intensity and the number of users. Finally, the computed BERs under a multiuser environment are compared with the simulation results",2002,0, 5616,Fault Tolerant Reconfiguration System for Asymmetric Multilevel Converters Using Bi-Directional Power Switches,"Asymmetric multilevel converters can optimise the number of levels by using H bridges scaled in powers of three. The shortcoming of this topology is that the H bridges are not interchangeable and that, under certain fault conditions, the converter cannot operate. A reconfiguration system based on bi-directional electronic valves has been designed for a 3-phase cascaded H-bridge converter. Once a fault is detected in any of the IGBTs of any H-bridge, the control is capable of reconfiguring the hardware of the faulty phase by eliminating the damaged bridge. If the faulty bridge is not the smallest one, then the bi-directional-valve system reconfigures the faulty phase to keep the higher power bridges in operation.
In this way, that phase can continue working at the same voltage level by adjusting its gating signals. Some simulations and experiments with a 27-level inverter, showing the operation of the system under a faulty condition, are presented.",2007,0, 5617,"Modelling, calibration and correction of nonlinear illumination dependent fixed pattern noise in logarithmic CMOS image sensors","At present, most CMOS image sensors use an array of pixels with a linear response. However, logarithmic CMOS sensors are also possible, which are capable of imaging high dynamic range scenes without saturating. Unfortunately, logarithmic sensors suffer from fixed pattern noise (FPN). Work reported in the literature generally assumes the FPN is independent of illumination. This paper develops a nonlinear model y=a+bln(c+x) of the pixel response y to an illuminance x, showing that FPN arises from variation of the offset a, gain b and bias c. Equations are derived which can be used to extract these parameters by calibration against a uniform illuminance of varying intensity. Experimental results, demonstrating parameter calibration and FPN correction, show that the nonlinear model outperforms previous models that assume either only offset or offset and gain variation",2001,0, 5618,High availability and fault management in Objective Architecture systems,"At the heart of many real-time and near real-time distributed computing systems, such as radar, navigational, missile defense, and command and control, you find mission-critical requirements. This designation demands continuous system availability even in the face of system faults. Lives are at stake and mission success is at risk. Standards-based high availability and computing resource management solutions with well-defined functional boundaries and interfaces are gaining traction. This is seen in such Navy examples as Aegis Modernization, Littoral Combat Ship, Common Processing System, and more recently within the PEO-IWS Objective Architecture DoD Information Technology Standards Registry (DISR). As implemented, Objective Architectures hold the promise of providing a path to meet current and future demands placed on these system designs. Examples of these demands include: increased functional density and complexity, resource consolidation, mixed-criticality systems and the ever-present size, weight and power concerns. The architectural approach must be both flexible and scalable to fit the broadest set of cases, minimizing or eliminating special-case solutions. There is a set of standards published by the Service Availability (SA) Forum that addresses these issues. SA Forum standards are currently being used in the programs mentioned above. We will also consider how SA Forum specifications can be used to support operational availability in legacy systems while laying the groundwork for evolution of the objective architecture. This paper reviews the above issues and the specific benefits brought about by an SA Forum-based solution.",2010,0, 5619,Middleware of real-time object based fault tolerant distributed computing systems: issues and some approaches,At this turn of the century the object-oriented (OO) distributed real-time (RT) programming movement is growing rapidly along with the networked embedded systems market. The motivations are reviewed and then a brief overview is given of the particular programming scheme which this author and his collaborators have been establishing.
The scheme is called the time-triggered message-triggered object (TMO) programming scheme and it is used to make specific illustrations of the issues and potentials of OO RT programming. Fault tolerance capabilities are required in many distributed RT computing applications. At this time the development of middleware which is capable of supporting reliable fault-tolerant execution of application-level RT distributed objects is an important challenge to the research community. Some major issues that need to be resolved and some promising approaches are discussed,2001,0, 5620,Application of Bayesian Theory in Fault Diagnosis of Turbo-generators,"Building on the analysis of the features of the sealing oil system faults in turbo-generators, this paper mainly discusses how to employ Bayesian theory to perform fault diagnosis by providing mathematical formulae concerning the solution to the fault diagnosis and determining the Bayesian network inference methodology based on the prior information of the samples. It is demonstrated that the application of Bayesian theory, combined with the leaky noisy-OR model which helps to reduce the amount of data required, is conducive to improving the diagnosis speed and efficiency. This paper verifies the validity of this approach and realizes a forecast of the faults at early stages and a rapid diagnosis of their possible causes as well",2005,0, 5621,Crystal defects and charge collection in CZT x-ray and gamma detectors,"Cadmium Zinc Telluride (CZT) is one of the most exploited materials for x-ray and gamma ray radiation detection. Nevertheless, CZT ingots are still affected by many defects; the most common are Te inclusions, dislocations and grain boundaries. In this work the results of many investigation techniques are put together and compared in order to obtain a better understanding of the role of each defect in the degradation of detector performance. A CZT ingot grown by the low-pressure Bridgman technique at the IMEM Institute, Parma, was analyzed. The material was studied by means of IR microscopy for the identification of Te inclusions, and then studied with the use of the synchrotron light source (NSLS, National Synchrotron Light Source) for the analysis of the crystalline structure and uniformity of the x-ray response.",2010,0, 5622,Transitive statistical sensor error characterization and calibration,"Calibration is the process of identifying and correcting for the systematic bias component of the error in the sensor measurements. On-line and in-field sensor measurement calibration is particularly crucial since manual calibration is expensive and sometimes infeasible. We have developed an on-line and in-field error modeling technique, which is a generalization of the calibration problem, that relies on a small number of inaccurate sensors with known error distributions to develop error models for the deployed in-field sensors. We demonstrate the applicability of our transitive error modeling technique and evaluate its performance in various scenarios by conducting experiments using traces of the light intensity measurements recorded by in-field deployed light sensors.
In addition, statistical validation and evaluation methods such as resubstitution are used in order to establish the confidence interval",2005,0, 5623,A high-performance application protocol for fault-tolerant CAN networks,"CAN is a communication protocol largely used in automotive and industrial appliances because of the simple procedure for its parameterization and the low cost of its circuitry. The native CAN, however, does not assure the fault-tolerance level required by safety-critical appliances, and somewhat sophisticated protocols have been introduced for their networking. In an attempt to retain the advantages of CAN, several application protocols have been developed to supplement the native CAN with the aim of giving CAN networks safety features. This paper continues such a research trend by proposing a CAN application protocol termed time-triggered bus-redundant CAN (TTBR-CAN) that outperforms the existing ones. After describing the architecture of TTBR-CAN, details are given on its implementation and experimental results are reported to demonstrate its effectiveness.",2010,0, 5624,Proactive fault management based on risk-augmented routing,"Carrier networks need to provide their customers with high availability of communication services. Unfortunately, failures are managed by recovery mechanisms that get involved only after a failure occurs to limit the impact on traffic flows. However, there are often forewarning signs that a network device will stop working properly. We propose to take this risk exposure into account in order to improve the performance of the existing restoration mechanisms, in particular for IP networks. Based on an embedded and real-time risk-level assessment, we can perform proactive fault management and isolate the failing routers from the routed topology, and thus totally avoid service unavailability. Our novel approach enables routers to preventively steer traffic away from risky paths by temporarily tuning OSPF link costs.",2010,0, 5625,Synthesis of robust digital correction filters for cascaded sigma-delta converters,"Cascaded sigma-delta converters relax the requirements on the oversampling ratio for a given resolution. However, their performance is sensitive to mismatches in the analog components. Previous work has explored adaptive calibration schemes to address the mismatch problem, but such methods lead to an increase in the implementation complexity. In the present work an alternative strategy to the adaptive one is pursued in which robust digital correction filters are synthesised. The main contributions reported in this paper are the development and validation of the synthesis framework and preliminary solutions to the synthesis problem. A robust correction filter is synthesised for the 2-1 cascaded architecture which provides improved performance over the nominal correction filter for the worst-case uncertainties in the analog parameter values. Since the filters are fixed at the design stage, the proposed scheme does not add an additional burden on the implementation.",2003,0, 5626,Automated red-eye detection and correction in digital photographs,"Caused by light reflected off the subject's retina, red-eye is a troublesome problem in consumer photography. Although most cameras have a red-eye reduction mode, the reality is that no on-camera system is completely effective. In this paper, we propose a fully automatic approach to detecting and correcting red-eyes in digital images.
In order to detect red-eyes in a picture, a heuristic yet efficient algorithm is first adopted to detect a group of candidate red regions, and then an eye classifier is utilized to confirm whether each candidate region is a human eye. Thereafter, each detected red-eye can be corrected by the correction algorithm. In case a red-eye cannot be detected automatically, another algorithm is also provided to detect red-eyes manually with the user's interaction by clicking on an eye. Experimental results on about 300 images with various red-eye appearances demonstrate that the proposed solution is robust and effective.",2004,0, 5627,AUV control in the presence of fin actuator faults,"Autonomous underwater vehicles (AUV) are rapidly becoming useful and versatile tools in the ocean environment. They have only recently matured to the level of commercial viability, but there is currently recognition of the need for AUV, and an expectation of increased benefit from using these vehicles. In order for AUV to be firmly established as a viable, mature technology there are several issues that need to be addressed, not the least of which is reliability. A systematic study was made involving simulations of a vehicle under fault conditions to identify the vehicle behaviours typical of such fault conditions. The simulation tool used is a linear model of the dynamics of the Canadian Self-Contained Off-the-shelf Underwater Testbed (C-SCOUT), and the maneuvers used were those most likely to be desired during normal operation: holding course, a controlled dive, and a turn in the horizontal plane. C-SCOUT is typical of many current AUV; therefore, the results of the study are applicable in a qualitative sense to a number of vehicles.",2002,0, 5628,Automatically Finding and Patching Bad Error Handling,Bad error handling is the cause of many service outages. We address this problem by a novel approach to detect and patch bad error handling automatically. Our approach uses error injection to detect bad error handling and static analysis of binary code to determine which type of patch can be instantiated. We describe several measurements regarding the effectiveness of our approach to detect and patch bad error handling in several open source programs,2006,0, 5629,Based on the Phonetic Spelling Correction System Research and Implementation,"Based on an English phonetic spelling correction algorithm, this paper addresses common spelling errors such as missing letters, extra letters and disordered letters, as well as phonetic spelling errors, from the perspective of identical and similar pronunciation. Through an analysis of the causes of the spelling errors, an algorithm combining phonetic spelling correction, phonetic spelling rules, edit distance and habit distance is put forward, and an overall exposition of the spelling correction system is given.",2009,0, 5630,Knowledge Model Based on Error Logic for Intelligent System,"Based on error logic, a new knowledge discovery method is presented in this paper. We use an error matrix built on error logic to represent knowledge. The error matrix is a useful method for scene modeling. By solving the error matrix equation, we can obtain an error logic transformation equation that can be used in reasoning about a question.
Combining elements such as the discussion domain, the things of u, space, characteristics, characteristic values, error values and object rules with the decomposed transformation, similarity transformation, increased transformation, replacement transformation, vanishing transformation and unit transformation, we can obtain several useful alternative change paths by permutations and combinations. This paper presents a general solution of the equation X∧A=B based on the decomposed transformation of the things of u. Finally, the future work to be studied further is outlined.",2009,0, 5631,Monitoring and fault diagnosing system design for power transformer based on temperature field model and DGA feature extraction,"Based on support vector machines, a new fault diagnosis model for power transformers combining on-line extracted dissolved gas analysis (DGA) data with three-dimensional temperature field information is proposed, which can realize fusion of the multivariate fault characteristic information. The finite element method is applied to establishing a three-dimensional temperature field for the power transformer. Some issues about applying the support vector machine (SVM) to power transformer fault diagnosis are further analyzed. In order to realize on-line state monitoring and remote communication of the measured data, an embedded on-line monitoring system composed of ARM and general packet radio service is designed for the power transformer in this paper. Simulation experimental results show that the proposed fault diagnosis model has excellent performance in terms of training speed and correctness ratio, and the developed embedded system can effectively monitor the power transformer's state during operation. Therefore, it will change the existing maintenance and repair pattern for power transformers and realize accurate fault diagnosis and forecasting.",2008,0, 5632,Immune Population Network Algorithm and Its Application in Fault Diagnosis,"Based on the biological immune system, the authors propose a new immune population network algorithm which combines a population immune algorithm and a network immune algorithm. This algorithm can execute multi-point parallel search from the local to the global search field, increasing the diversity of the antigen population and the ability to search for the maximum. Finally, we apply this algorithm to a fault diagnosis system, and simulation results indicate that the algorithm can recognize faults correctly.",2008,0, 5633,Design and Verification of Internet Service Automatic Fault-Heal System,"Based on the current situation of Internet service management, we put forward an Internet service automatic fault-heal system. Combining autonomic computing and service probe components, we put forward the organization model and the autonomic computing model of the system. Simultaneously, with the use of timed automata and UPPAAL - a model checking tool for timed automata - we modeled the system and simulated it. Then we made a detailed description and analysis of each component of this model.
Finally, using the UPPAAL verifier and TCTL formulas, we verified the deadlock freedom, safety and feasibility of the system.",2009,0, 5634,Research on the Remote Monitoring and Intelligent Fault Diagnosis System,"Based on the techniques of power electronics, fault diagnosis and networks, a remote monitoring and fault diagnosis system combining a three-layer C/S structure and the Internet is presented, which achieves high compatibility between different equipment. Furthermore, the paper focuses on the discussion of the structure of compatibility between systems, the compatibility design of remote monitoring and the realization of field control equipment, such as the data collection sub-system, the real-time monitoring sub-system, the fault diagnosis sub-system and the data disposal and management sub-system, and the design of the data transferring sub-system and the communication protocol format.",2008,0, 5635,Research on the Remote Monitoring and Fault Diagnosis System for Equipment,"Based on the techniques of power electronics, fault diagnosis and networks, a remote monitoring and fault diagnosis system combining a three-layer C/S structure and the Internet is presented, which achieves high compatibility between different equipment. Furthermore, the paper focuses on the discussion of the structure of compatibility between systems, the compatibility design of remote monitoring and the realization of field control equipment, such as the data collection sub-system, the real-time monitoring sub-system, the fault diagnosis sub-system and the data disposal and management sub-system, and the design of the data transferring sub-system and the communication protocol format.",2008,0, 5636,Co-Integration Analysis and an Error Correction Model for Urban Land Use Structure and Economic Growth in China,"Based on the theories of the information entropy of urban land use structure, co-integration and the error correction model, this thesis analyses the relationship between economic growth, urbanization level and the information entropy of urban land use structure. The information entropy values of China's urban land use structure from 1981 to 2004 have been calculated in this paper. The relationship between China's GDP and the information entropy of urban land use structure is investigated in detail. The results indicate a co-integration relationship between GDP and the information entropy of urban land use structure. However, there is no co-integration relationship between urbanization and the information entropy of urban land use structure. On the basis of the above analyses, the error correction model relating GDP and the information entropy of urban land use structure is established. The application of the model shows that every 1% decrease in the information entropy value of urban land use structure demands a 59.53% growth in GDP. Because of the existing co-integration relationship between economic growth and the information entropy of urban land use structure, the order degree of China's urban land use structure has relied heavily on economic growth for a long time. Economic growth will significantly affect the optimal regulation of urban land use structure in the future.
Further investigation also shows that economic growth and urbanization act as the direct driving forces of the optimization of urban land use structure, but the optimization of urban land use structure cannot in turn promote economic growth and urbanization. Therefore, it becomes a very pressing strategic issue to adjust the economic structure and industrial distribution to optimize the distribution of urban land use, and to optimize the structure of urban land use to improve efficiency and promote socio-economic development.",2009,0, 5637,An agent oriented proactive fault-tolerant framework for grid computing,"Because of computational grid heterogeneity, scale and complexity, faults become likely. Therefore, grid infrastructure must have mechanisms to deal with faults while also providing efficient and reliable services to its end users. Existing fault-tolerant approaches are inefficient because they are reactive and incomplete. They are reactive because they only deal with faults when they take place; they are incomplete because they only deal with certain types of faults. Proactive approaches increase efficiency by reducing the cost and time of operations and network resource usage by maintaining the state of executing applications and resuming operation when rescheduled. This paper presents an agent-oriented, fault-tolerant grid framework where agents deal with individual faults proactively. Agents maintain information about hardware conditions, executing process memory consumption, available resources, network conditions and component mean time to failure. Based on this information and critical states, agents can improve the reliability and efficiency of grid services",2005,0, 5638,Fault-tolerant inverter with real-time monitoring for aerospace applications,"Because of its great importance, power system reliability has been widely investigated in military, industrial and space applications. This paper presents an improved fault-tolerant inverter topology with its associated control strategy. This topology has been designed for the aerospace industry, which puts hard constraints on system reactivity and testability. In addition to the steady-state analysis of the nominal and faulty modes, the study emphasizes the transient analysis when switching from one mode to the other. Simulations and experimental results are presented to validate the operation of the proposed solution.",2010,0, 5639,A scalable rule-based Reasoning Algorithm for fault diagnosis,"Because of network connections and the relations among services and network devices, a network fault usually occurs together with a series of related faults. These conditions make it complicated to find the exact root fault that leads to fault storms. We have proposed a knowledge-based fault diagnosis model and a rule-based reasoning algorithm in our previous work. As we all know, control strategies and rules are often expanded or changed in fault reasoning. Based on the previous work, this paper proposes an Optimized Fault Reasoning Algorithm (OFRA) that is scalable and maintainable, based on two design patterns. We also implement OFRA in a real network and use real fault cases to validate the scalability and maintainability of OFRA.",2010,0, 5640,Gear Fault Diagnosis with Neural Network Based on Niche Genetic Algorithm,"Because of the complexity of gear working conditions, there is a non-linear relationship between characteristic parameters and fault types.
This paper proposes to apply artificial neural network theory and the genetic algorithm to solve the difficulties of gear fault diagnosis. A niche technique based on a crowding mechanism is used in the genetic algorithm, and a penalty function is adopted to adjust individual fitness so as to promote global search capability. Taking a gearbox fault signal acquisition experimental system as an example, Matlab software and its neural network toolbox are used for modeling and simulation. The experimental results show that the established network model has good performance for common gear fault diagnosis and can identify various types of faults stably and accurately.",2010,0, 5641,Spam Filter Based Approach for Finding Fault-Prone Software Modules,"Because of the increasing need for spam e-mail detection, the spam filtering technique has been improved as a convenient and effective technique for text mining. We propose a novel approach to detect fault-prone modules in which the source code modules are considered as text files and are applied to the spam filter directly. In order to show the applicability of our approach, we conducted experimental applications using source code repositories of Java-based open source developments. The experimental results show that our approach can classify more than 75% of software modules correctly.",2007,0, 5642,A Similar Resource Auto-Discovery Based Adaptive Fault-tolerance Method for Embedded Distributed System,"Because of the resource constraints and high reliability requirements of Embedded Distributed Systems (EDS), new fault-tolerance means, different from the traditional hardware-redundancy ones, should be studied. In this article, a fault-tolerance method based on similar resources and related technologies is proposed and discussed. First, several mathematical models of key elements, such as computing nodes, similar nodes and tasks, are constructed. Then, the similarity computation methods and evaluation criteria are presented from two different views: tasks and resources. Supported by the theories above, several methods, such as similar node auto-discovery (SNAD) and its optimized version (oSNAD), redundant task auto-deployment, and reconfiguration policies for faulty tasks and nodes, are highlighted. Simulation results show that these approaches and schemes can improve the adaptive fault-tolerance abilities of complicated embedded distributed systems.",2007,0, 5643,Displacement sensor axis misalignment error,"Because of valve displacement's great importance in valve systems of many marine power aggregates and other servomechanisms and equipment, controlling measurements of this parameter must be taken. Hysteresis measurements must also be taken after every mounting and periodic revision activity. The controlling measurement comprises a displacement sensor and a computer system with adequate hardware and software solutions. Displacement sensor axis misalignment error cannot be avoided in any situation where the sensor axis line of symmetry is not parallel with the axis line of valve movement. Therefore, the elevation error must be considered. Other errors and imperfections in the measuring process (measuring method, sensor class, nonlinearity, software signal transition) are treated as negligible in this paper for clarity of description.
The goal of this paper is to present the error values arising from sensor axis elevation in meaningful ways, together with their further influences",2006,0, 5644,Fault-tolerant Actuator System for Electrical Steering of Vehicles,"Being critical to the safety of vehicles, the steering system is required to maintain the vehicle's ability to steer until it is brought to a halt, should a fault occur. With electrical steering becoming a cost-effective candidate for electrically powered vehicles, a fault-tolerant architecture is needed that meets this requirement. This paper studies the fault-tolerance properties of an electrical steering system. It presents a fault-tolerant architecture where a dedicated AC motor design used in conjunction with cheap voltage measurements can ensure detection of all relevant faults in the steering system. The paper shows how active control reconfiguration can accommodate all critical faults. The fault-tolerant abilities of the steering system are demonstrated on the hardware of a warehouse truck",2006,0, 5645,An online scheme for the isolation of BGP misconfiguration errors,"Being the primary interdomain routing protocol, border gateway protocol (BGP) is the singular means of path establishment across the Internet. Therefore, misconfiguration errors in BGP routers result in failure to establish paths, which in turn can cause several networks to become unreachable. In this paper, we first analyze data from recent BGP tables to show that misconfiguration errors occur very frequently in the Internet today. We then show theoretically and using real-world events the impact of these errors on routing stability. A scheme for real-time isolation of large-scale BGP misconfiguration events is then proposed in this paper. Our methodology is based on statistical techniques and is evaluated using data from past well-known misconfiguration events. We show the effectiveness of our method as compared to the current state-of-the-art.",2008,0, 5646,Nanolab: a tool for evaluating reliability of defect-tolerant nano architectures,"As silicon manufacturing technology reaches the nanoscale, architectural designs need to accommodate the uncertainty inherent at such scales. These uncertainties stem from the minuscule dimensions of the devices, quantum physical effects, reduced noise margins, system energy levels reaching computing thermal limits, manufacturing defects, aging and many other factors. Defect tolerant architectures and their reliability measures gain importance for logic and micro-architecture designs based on nano-scale substrates. Recently, a Markov random field (MRF) has been proposed as a model of computation for nanoscale logic gates. In this paper, we take this approach further by automating this computational scheme and a belief propagation algorithm. We have developed MATLAB-based libraries and a toolset for fundamental logic gates that can compute output probability distributions and entropies for specified input distributions. Our tool eases evaluation of reliability measures of combinational logic blocks. The effectiveness of this automation is illustrated in this paper by automatically deriving various reliability results for defect-tolerant architectures, such as triple modular redundancy (TMR), cascaded triple modular redundancy (CTMR) and multi-stage iterations of these.
These results are used to analyze trade-offs between reliability and redundancy for these architectural configurations.",2004,0, 5647,"Software-Based Online Detection of Hardware Defects: Mechanisms, Architectural Support, and Evaluation","As silicon process technology scales deeper into the nanometer regime, hardware defects are becoming more common. Such defects are bound to hinder the correct operation of future processor systems, unless new online techniques become available to detect and to tolerate them while preserving the integrity of software applications running on the system. This paper proposes a new, software-based, defect detection and diagnosis technique. We introduce a novel set of instructions, called access-control extension (ACE), that can access and control the microprocessor's internal state. Special firmware periodically suspends microprocessor execution and uses the ACE instructions to run directed tests on the hardware. When a hardware defect is present, these tests can diagnose and locate it, and then activate system repair through resource reconfiguration. The software nature of our framework makes it flexible: testing techniques can be modified/upgraded in the field to trade off performance with reliability without requiring any change to the hardware. We evaluated our technique on a commercial chip-multiprocessor based on Sun's Niagara and found that it can provide very high coverage, with 99.22% of all silicon defects detected. Moreover, our results show that the average performance overhead of software-based testing is only 5.5%. Based on a detailed RTL-level implementation of our technique, we find its area overhead to be quite modest, with only a 5.8% increase in total chip area.",2007,0, 5648,Soft Errors in SRAM-FPGAs: A Comparison of Two Complementary Approaches,"As SRAM-based field-programmable gate arrays (FPGAs) are introduced in safety- or mission-critical applications, the availability of suitable Electronic Design Automation (EDA) tools for predicting system dependability becomes mandatory for designers. Nowadays designers can opt either for workload-independent EDA tools, which provide information about a system's dependability disregarding the workload the system is supposed to process when deployed in the mission, or workload-dependent approaches. In this paper, we compare two tools for predicting the effects of soft errors in circuits implemented using SRAM-based FPGAs, a workload-independent one (STAR) and a workload-dependent one (FLIPPER). Experimental results show that the two tools are complementary and can be used fruitfully for obtaining accurate predictions.",2008,0, 5649,Fault tolerant switched reluctance machine for fuel pump drive in aircraft,"As switched reluctance motors (SRM) generally offer a simple and robust design, they are very suitable for an aircraft main engine fuel pump drive, which needs to be actuated by fault tolerant drives. Based on an analytical comparison of the merits and demerits of several different machine topologies, including redundancies, a six-phase 12/8 fault-tolerant SRM is proposed and designed for the fuel pump drive application.
A finite element model based on field-circuit coupling is established, and the results indicate that this machine meets the needs of the demanding fuel pump drive system in aerospace environments.",2009,0, 5650,Fault-Tolerant Verification Platform for Systems Modeled at High Level of Abstraction,"As system-on-chip (SoC) designs become more and more complicated and contain a large number of transistors, an SoC could encounter reliability problems due to the increased likelihood of faults or radiation-induced soft errors as chip fabrication enters the deep submicron technology. Thus, it is essential to employ fault-tolerant techniques in the design of an SoC to guarantee high operational reliability in critical applications. An important issue in the design of a fault-robust SoC is how to validate the robustness of the system as early as possible in the development phase to reduce the re-design cost. The goal of this study is to propose a new fault-tolerant verification approach based on a high-level abstract system model that can significantly reduce the validation efforts. The fault-tolerant verification platform proposed here can save the time of detailed hardware implementation, benchmark program development, and fault injection campaigns. As a result, it efficiently reduces the implementation efforts and simulation time. However, since our approach employs a high level of abstraction to model the fault-robust systems, the accuracy of the simulation results will decrease. A fault-tolerant VLIW core developed by our team is used to demonstrate the feasibility of our approach by comparing the results obtained from this approach with the results derived from the simulation-based fault injection technique in VHDL.",2007,0, 5651,System-Bus Fault Injection Framework in SystemC Design Platform,"As system-on-chip (SoC) becomes prevalent in intelligent system applications, the reliability issue of SoC is getting more attention in the design industry as SoC fabrication enters the very deep submicron technology. In this study, we present a new approach to system-bus fault injection in a SystemC design platform, which can be used to assist us in performing the FMEA procedure during the SoC design phase. We demonstrate the feasibility of the proposed fault injection mechanism with an experimental ARM-based system.",2008,0, 5652,Fault injection technique approach for testbench analysis,"As chip complexity continues to grow, the importance of functional verification in the design flow has increased, together with the time consumed for developing the verification environment and verifying the design. Creating reliable environments and reducing their debug time increase verification productivity. Verification environments have complex testbenches surrounding the digital design, but these testbenches often contain undetected errors. This paper describes the usage of fault-injection techniques applied to SystemVerilog language constructs used in testbenches, to identify such hidden errors. The testbench implementation can be analyzed using fault-injection techniques, leading to a better overview of its functionality and improving the design verification process.
Possible faults to be injected by altering language constructs are presented, highlighting their impact on simulation results and how they can be used to detect potential testbench problems.",2010,0, 5653,Atmospheric Correction of the Hyperion Imagery for Turbid Estuary Water,"Atmospheric correction of Hyperion imagery over aquatic environments is generally more demanding than over land because the signal from the water column is small. In this paper, an atmospheric correction algorithm is designed for Hyperion hyperspectral imagery based on the analysis of the FLAASH atmospheric correction algorithm and the algorithm adopted for MODIS imagery. It makes use of band 111 and band 149 of the Hyperion hyperspectral imagery to obtain the aerosol type, then applies it to the near-infrared and visible wave bands, whereby the atmospheric correction of the whole image is realized. Hyperion hyperspectral imagery corrected with the algorithm referred to above is suitable for monitoring turbid estuary water.",2008,0, 5654,Is SPECT or CT based attenuation correction more quantitatively accurate for dedicated breast SPECT acquired with non-traditional trajectories?,"Attenuation correction is necessary for SPECT quantification. There are a variety of methods to create attenuation maps. For dedicated breast SPECT-CT imaging, it is unclear whether a SPECT- or CT-based attenuation map provides the most accurate quantification and whether or not segmenting the different tissue types will have an effect on the quantification. For these experiments, geometric and anthropomorphic breast phantoms filled with 99mTc diluted in methanol and water were imaged with a dedicated dual-modality SPECT-CT breast scanner. SPECT images were collected using a compact CZT camera with various 3D acquisition trajectories, including vertical and 30° tilted parallel beam, and complex sinusoidal trajectories. CT images were acquired using a quasi-monochromatic x-ray source and CsI(Tl) digital flat panel detector in a half-cone beam geometry. Measured scatter corrections for SPECT and CT were implemented. To compare photon attenuation correction in the reconstructed SPECT images, various volumetric attenuation maps were derived from 1) uniform SPECT, 2) uniform CT, and 3) segmented CT, populated with different attenuation coefficient values. Comparisons between attenuation masks using phantoms consisting of materials with different attenuation values show that at 140 keV the differences in the attenuation between materials do not affect the quantification as much as the size and alignment of the attenuation map. The CT-based attenuation maps give quantitative values 30% below the actual value, but are consistent. The SPECT-based attenuation maps can provide quantitative values accurate to within 10%, but are less consistent.",2010,0, 5655,Error-enhanced augmented proprioceptive feedback in stroke rehabilitation training: A pilot study,"Augmented feedback plays an essential role in stroke rehabilitation therapy. When a force is applied to the arm, an augmented sensory (proprioceptive) cue is provided. The question was to find out if stroke patients can learn reach-and-retrieval movements with error-enhanced augmented sensory feedback. The movements were performed over a predefined path, and when deviating from the path a force was provided, as if colliding with the wall of a tunnel. Two chronic stroke survivors (FM of 53 and 49) performed reach and retrieval movements in a virtual tunnel.
When two consecutive series of 15 repetitions of the same movements were performed, there was a consistent decrease in collisions with the wall in the second series of movements. This indicates that these patients were able to learn the predefined trajectory by means of augmented proprioceptive feedback. Despite the small number of patients tested, this finding is promising for the usage of error-enhanced augmented proprioceptive feedback in rehabilitation therapy.",2009,0, 5656,Improved Non-parametric Subtraction for Detection of Wafer Defect,"Automated defect inspection for wafers has been developed since the 1990s to replace defect detection by the human eye, for low cost and high quality. In wafer defect inspection, defects are detected by comparing an inspected die with a reference die. Referential methods compare against the reference image by computing the pixel-by-pixel intensity difference between a reference image and an inspected image, or by measuring the similarity between the two images using normalized cross correlation or eigenvalues. These methods are problematic for defect detection due to illumination change, noise and alignment error. To reduce the sensitivity to illumination change and noise, a new image subtraction method called non-parametric subtraction was proposed. Non-parametric subtraction can solve the problems of illumination change and noise, but the sensitivity to alignment remains unsolved. This paper introduces a new approach, less sensitive to alignment, using non-parametric subtraction for wafer defect inspection.",2007,0, 5657,Algorithm of fault diagnosis for satellite network,"Automated fault diagnosis becomes increasingly important to satellite work. In an earlier paper, we introduced system-level diagnosis theory into satellite networks and presented the two-level-node graph, a novel modeling method. Based on that work, a new test invalidation model under a certain fault pattern is presented and a diagnosis algorithm is proposed in this paper. Diagnosis can be divided into two steps, local diagnosis and centralized diagnosis. The former is distributed, which can reduce the diagnosis delay by collecting test results in the satellites in parallel. The latter is centralized, which can make use of the regularity of the satellite network. The procedure of local diagnosis is described by an activity cycle diagram. During diagnosis, little professional knowledge about the satellite needs to be involved. An example illustrates the diagnosis algorithm and its effect.",2004,0, 5658,Data mining approaches to software fault diagnosis,Automatic identification of software faults has enormous practical significance. This requires characterizing program execution behavior and the use of appropriate data mining techniques on the chosen representation. In this paper we use the sequence of system calls to characterize program execution. The data mining tasks addressed are learning to map system call streams to fault labels and automatic identification of fault causes. Spectrum kernels and SVM are used for the former while latent semantic analysis is used for the latter. The techniques are demonstrated for the intrusion dataset containing system call traces. The results show that kernel techniques are as accurate as the best available results but are faster by orders of magnitude.
We also show that latent semantic indexing is capable of revealing fault-specific features.,2005,0, 5659,Classification of power system faults using wavelet transforms and probabilistic neural networks,Automation of power system fault identification using information conveyed by the wavelet analysis of power system transients is proposed. A Probabilistic Neural Network (PNN) is used for detecting the type of fault. The work presented in this paper is focused on identification of simple power system faults. A Wavelet Transform (WT) of the transient disturbance caused as a result of the occurrence of a fault is performed. The detail coefficient for each type of simple fault is characteristic in nature. The PNN is used for distinguishing the detail coefficients and hence the faults.,2003,0, 5660,SCEMIT: A SystemC error and mutation injection tool,"As high-level models in C and SystemC are increasingly used for verification and even design (through high-level synthesis) of electronic systems, there is a growing need for compatible error injection tools to facilitate further development of coverage metrics and automated diagnosis. This paper introduces SCEMIT, a tool for the automated injection of errors into C/C++/SystemC models. A selection of 'mutation' style errors is supported, and injection is performed through a plugin interface in the GCC compiler, which minimizes the impact of SCEMIT on existing simulation flows. Experimental injected error detection results are presented for the set of OSCI SystemC Example Models as well as the CHStone C High-Level-Synthesis benchmark set. Aside from demonstrating compatibility with these models, the results show the value of high-level error injection as a coverage measure compared to conventional code coverage measures.",2010,0, 5661,Development of Predictive Hybrid Redundancy for Fault Tolerance in Safety Critical Systems,"As many systems depend on electronics, concern for fault tolerance is growing rapidly. For example, a car with its steering controlled by electronics and no mechanical linkage from steering wheel to front tires (steer-by-wire) should be fault tolerant because a failure can come without any warning and its effect is devastating. In order to make systems fault tolerant, there has been a body of research, mainly from the aerospace field. This paper presents the structure of predictive hybrid redundancy that can filter out most erroneous values. In addition, several numerical simulation results are given where the predictive hybrid redundancy outperforms well-known average and median voters.",2007,0, 5662,Analysis of mobile agents' fault-tolerant behavior,"As mobile agents are the medium for implementing various executions, their behaviors are paramount for network performance. Clearly, one of the pivotal tasks ahead, if mobile agents are to have significant impact, is to explore quantitative studies on the behaviors of mobile agents, which can reveal the inherent nature of the mobile agent approach and ultimately guide future research. This issue, unlike that of system design using mobile agents, has not yet been adequately addressed. We propose a fault-tolerant model for mobile agents executing in large distributed networks and analyze the life expectancy of mobile agents in our model. The key idea is the use of stochastic regularities of mobile agents' behavior - all the mobile agents in the network as a whole can be stochastically characterized though a single mobile agent may act randomly.
In effect, our analytical results reveal new theoretical insights into the statistical behaviors of mobile agents and provide useful tools for effectively managing mobile agents in large networks.",2004,0, 5663,An enhancement of fault-tolerant routing protocol for Wireless Sensor Network,"As more and more real Wireless Sensor Network (WSN) applications have been tested and deployed over the last decade, the WSN research community realizes that several issues need to be revisited from practical angles, such as reliability and availability. Furthermore, fault-tolerance is one of the main issues in WSNs since it becomes critical in real deployed environments where network stability and reduced inaccessibility times are important. Basically, wireless sensor networks suffer from resource limitations, high failure rates and faults caused by the defective nature of wireless communication and the wireless sensor itself. This can lead to situations where nodes are often interrupted during data transmission and blind spots occur in the network, isolating some of the devices. In this paper, we address the reliability issue by designing an enhanced fault-tolerant mechanism for the Ad hoc On-Demand Distance Vector (AODV) routing protocol applied in WSNs, called the ENhanced FAult-Tolerant AODV (ENFAT-AODV) routing protocol. We apply a backup route technique by creating a backup path for every node on a main path of data transmission. When a node fails to deliver a data packet through the main path, it immediately utilizes its backup route as the new main path for the next data packet delivery, to reduce the number of data packets dropped and to maintain the continuity of data packet transmission in the presence of faults (node or link failures). Furthermore, with increased failure rates, the proposed routing protocol improves the throughput, reduces the average jitter, provides low control overhead and decreases the number of data packets dropped in the network. As a result, the reliability, availability and maintainability of the network are achieved. The simulation results show that our proposed routing protocol is better than the original AODV routing protocol.",2010,0, 5664,Immune Systems Inspired Approach to Anomaly Detection and Fault Diagnosis for Engines,"As more electronic devices are integrated into automobiles to improve reliability, drivability and maintainability, automotive diagnosis becomes increasingly difficult to deal with. Unavoidable design defects, quality variations in the production process as well as different usage patterns make it infeasible to foresee all possible faults that may occur to the vehicle. As a result, many systems rely on limited diagnostic coverage provided by a diagnostic strategy which tests only for a priori known or anticipated failures, and presumes the system is operating normally if the full set of tests is passed. To circumvent these difficulties and provide more complete coverage for detection of any fault, a new paradigm for the design of automotive diagnostic systems is needed. An approach inspired by the functionalities and characteristics of the natural immune system is presented and discussed in the paper.
The feasibility of the newly proposed paradigm is also partially demonstrated through application examples.",2007,0, 5665,Shape watermarking based on minimizing the quadric error metric,"Blind and robust watermarking of 3D objects aims to embed codes into a 3D object such that the object is not visually distorted from the original shape. An essential condition is that the message should be securely extracted even after the graphical object has been processed. In this paper, we propose a novel blind and robust mesh watermarking method based on the quadric error metric. The vertices are first grouped into bins using a secret key according to their distances to the object center. The statistics of the distances in each bin are modified when embedding the message. A novel quadric selective vertex placement scheme is proposed for finding the best location of each vertex, following watermark embedding, such that the resulting shape distortion is minimal. Experimental results show that the proposed method reduces the distortion to a minimum in the 3D shape.",2009,0, 5666,Induction motor mixed fault diagnosis based on wavelet analysis of the current space vector,"Broken bars and eccentricity are common faults in induction motors. These two faults usually occur simultaneously, since most installed induction motors have a small inherent eccentricity. In this paper a detailed investigation of the possibilities for induction motor fault diagnosis based on wavelet analysis of the current space vector is provided. The main objective is to formulate a method for enhanced diagnosis of broken rotor bars in induction motors by applying wavelet analysis to the motor current space vector. To this purpose, a detailed wavelet analysis of the stator current space vector is implemented. The analysis is at first performed through simulation of a faulty asynchronous machine in Matlab-Simulink. Subsequent to the analysis of the simulation results, experimental tests were conducted. Characteristic results are presented and briefly discussed",2005,0, 5667,Statistical analysis on a case study of load effect on PSD technique for induction motor broken rotor bar fault detection,Broken rotor bars in an induction motor create asymmetries and result in abnormal amplitudes of the sidebands around the fundamental supply frequency and its harmonics. Monitoring the power spectral density (PSD) amplitudes of the motor currents at these frequencies can be used to detect the existence of broken rotor bar faults. This paper presents a study on an actual three-phase induction motor using the PSD analysis as a broken rotor bar fault detection technique. The distributions of PSD amplitudes of experimental healthy and faulty motor data sets at these specific frequencies are analyzed statistically under different load conditions. Results indicate that statistically significant conclusions on broken rotor bar detection can vary significantly under different load conditions and under different inspected frequencies. Detection performance in terms of the variation of PSD amplitudes is also investigated as a case study.,2003,0, 5668,Revisiting common bug prediction findings using effort-aware models,"Bug prediction models are often used to help allocate software quality assurance efforts (e.g. testing and code reviews). Mende and Koschke have recently proposed bug prediction models that are effort-aware.
These models factor in the effort needed to review or test code when evaluating the effectiveness of prediction models, leading to more realistic performance evaluations. In this paper, we revisit two common findings in the bug prediction literature: 1) process metrics (e.g., change history) outperform product metrics (e.g., LOC), and 2) package-level predictions outperform file-level predictions. Through a case study on three projects from the Eclipse Foundation, we find that the first finding holds when effort is considered, while the second finding does not hold. These findings validate the practical significance of prior findings in the bug prediction literature and encourage their adoption in practice.",2010,0, 5669,"Bugzilla, ITracker, and other bug trackers","Bug-tracking helps software developers know what the error is, resolve it, and learn from it. Working on a software project includes managing the bugs we find. At first, we might list them on a spreadsheet. But when the number of bugs becomes too large and a lot of people must access and input data on them, we have to give up the spreadsheet and instead use a bug- or issue-tracking system. Many software projects reach this point, especially during testing and deployment, when users tend to find an application's bugs. Nowadays we can choose among dozens of bug-tracking systems. This paper looks at two specific open source products and provides useful hints for working with any bug-tracking tool.",2005,0, 5670,Study of Superconducting Fault Current Limiters for System Integration of Wind Farms,"As electrical energy is increasingly provided from renewable sources, the connection of a large number of wind farms to existing distribution networks may increase fault levels beyond the capacity of existing switchgear, especially in urban areas. Fault current limiters (FCLs) are expected to control the prospective short circuit currents. In this paper, investigations were carried out to assess the effectiveness of resistive superconducting fault current limiters (SFCLs) for fault level management in wind power systems. System studies confirmed that the superconducting fault current limiter (SFCL) could not only control the fault currents but also suppress the inrush currents when a wind farm is adopted in the case of system interconnection. As a result, highly efficient operation of the wind power system becomes more feasible by introducing superconducting fault current limiters.",2010,0, 5671,Examining the complexity behind a medication error: generic patterns in communication,"Communication was the most frequently cited cause of medication errors reported between 1995 and 2003. More detailed models of how communication breakdowns contribute to adverse events are needed to intervene to improve communication processes. We describe in detail an incident where an oncology fellow physician erroneously substituted the medication navelbine for the intended etoposide during ordering, resulting in a prolonged hospitalization with severe leukopenia for the patient. A team of human factors and medical experts analyzed the case and identified communication patterns described in the human factors literature.
We discuss how the findings suggest targeted ideas for improving communication processes, media, and systems that may have higher ""traction"" for improving patient safety than is possible solely from aggregated analyses of coded descriptions of large sets of cases.",2004,0, 5672,A simulation model of focus and radial servos in compact disc players with disc surface defects,Compact disc players have been on the market for more than two decades. As a consequence, most of the control servo problems have been solved. One large remaining problem to solve is the handling of compact discs with severe surface defects like scratches and fingerprints. This paper introduces a method for making the design of controllers handling surface defects easier. A simulation model of compact disc players playing discs with surface defects is presented. The main novel element in the model is a model of the surface defects. That model is based on data from discs with surface defects.,2004,0, 5673,Fault detection and prognosis methods for a monitoring system of rotating electrical machines,"Companies are engaged in intense competition to reduce production costs in order to maintain their market shares. Since the costs of maintenance contribute a substantial portion of the production costs, companies must budget maintenance effectively. Machine deterioration prognosis can decrease the costs of maintenance by minimizing the loss of production due to machine breakdown and avoiding the overstocking of spare parts. This paper gives a review of some fault detection and prognosis methods to diagnose faults and failures in rotating electrical machines. To develop the monitoring system, accelerometers have been used to acquire vibration measurements. Performance is studied on a laboratory-scale experimental system.",2010,0, 5674,Adaptive upsampling with shift method for windmill artifact correction in multislice helical CT,"Compared to data-domain z-filtering, the proposed up-sampling with shift method provides stronger artifact suppression. On the other hand, it has the potential to preserve image z-resolution by incorporating adaptive techniques. The proposed method has some similarities with the zFFS techniques described in [6],[7]. The difference of our approach is that it is simple to implement and does not require hardware changes, like FFS, or changes in the backprojection module. It does not require reconstruction of several image slices for each final image, like image-domain z-filtering, so reconstruction speed is not sacrificed. In this work we apply the proposed upsampling method before reconstruction. However, for improved efficiency we suggest applying the proposed method after the convolution, just before the backprojection.",2008,0, 5675,An efficient framework for the conversion of fault trees to diagnostic Bayesian network models,"Complex aerospace systems cannot afford downtime to diagnose problems: the interruption of mission critical functions and the prohibitive cost of lost business are unacceptable. Such systems are characterized by having many components and require a team of experts to diagnose problems after they occur or to assemble a knowledge database suitable for rapid model based diagnostics. In this paper we present an efficient and largely automated method for developing diagnostic Bayesian network models.
The models are created by exploiting existing domain knowledge in the form of reliability fault trees and diagnostic observation lists. The algorithms for conversion of the trees and databases into Bayesian network models have been embedded in a C++ software tool and tested on examples of fault trees ranging from 10 to 800 nodes, which were developed for satellite systems",2006,0, 5676,Performance management via adaptive thresholds with separate control of false positive and false negative errors,"Component level performance thresholds are widely used as a basic means for performance management. As the complexity of managed systems increases, manual threshold maintenance becomes a difficult task. This may result from a) a large number of system components and their operational metrics, b) dynamically changing workloads, and c) complex dependencies between system components. To alleviate this problem, we advocate that component level thresholds should be computed, managed and optimized automatically and autonomously. To this end, we have designed and implemented a performance threshold management sub-system that automatically and dynamically computes two separate component level thresholds: one for controlling Type I errors and another for controlling Type II errors. We present the theoretical foundation for this autonomic threshold management system, describe a specific algorithm and its implementation, and evaluate it using real-life scenarios and production data sets. As our present study shows, with proper parameter tuning, our on-line dynamic solution is capable of nearly optimal performance threshold calculation.",2009,0, 5677,Error Modeling in Dependable Component-Based Systems,"Component-based development (CBD) of software, with its successes in enterprise computing, has the promise of being a good development model due to its cost effectiveness and potential for achieving high quality of components by virtue of reuse. However, for systems with dependability concerns, such as real-time systems, a major challenge in using CBD consists of predicting dependability attributes, or providing dependability assertions, based on the individual component properties and architectural aspects. In this paper, we propose a framework which aims to address this challenge. Specifically, we present a revised error classification together with error propagation aspects, and briefly sketch how to compose error models within the context of component-based systems (CBS). The ultimate goal is to perform the analysis on a given CBS, in order to find bottlenecks in achieving dependability requirements and to provide guidelines to the designer on the usage of appropriate error detection and fault tolerance mechanisms.",2008,0, 5678,"A full-featured, error-resilient, scalable wavelet video codec based on the set partitioning in hierarchical trees (SPIHT) algorithm","Compressed video bitstreams require protection from channel errors in a wireless channel. The 3-D set partitioning in hierarchical trees (SPIHT) coder has proved its efficiency and its real-time capability in the compression of video. A forward-error-correcting (FEC) channel (RCPC) code combined with a single automatic-repeat request (ARQ) proved to be an effective means for protecting the bitstream. There were two problems with this scheme: (1) the noiseless reverse channel ARQ may not be feasible in practice and (2) in the absence of channel coding and ARQ, the decoded sequence was hopelessly corrupted even for relatively clean channels.
We eliminate the need for ARQ by making the 3-D SPIHT bitstream more robust and resistant to channel errors. We first break the wavelet transform into a number of spatio-temporal tree blocks which can be encoded and decoded independently by the 3-D SPIHT algorithm. This procedure brings the added benefit of parallelization of the compression and decompression algorithms, and enables implementation of region-based coding. We demonstrate the packetization of the bitstream and the reorganization of these packets to achieve scalability in bit rate and/or resolution in addition to robustness. Then we encode each packet with a channel code. Not only does this protect the integrity of the packets in most cases, but it also allows detection of packet-decoding failures, so that only the cleanly recovered packets are reconstructed. In extensive comparative tests, the reconstructed video is shown to be superior to that of MPEG-2, with the margin of superiority growing substantially as the channel becomes noisier. Furthermore, the parallelization makes possible real-time implementation in hardware and software",2002,0, 5679,Component Based Proactive Fault Tolerant Scheduling in Computational Grid,"Computational Grids have the capability to provide the main execution platform for high performance distributed applications. Grid resources, having heterogeneous architectures and being geographically distributed and interconnected via unreliable network media, are extremely complex and prone to different kinds of errors, failures and faults. Grid is a layered architecture and most of the fault tolerant techniques developed on grids use its strict layering approach. In this paper, we have proposed a cross-layer design for handling faults proactively. In a cross-layer design, the top-down and bottom-up approach is not strictly followed, and a middle layer can communicate with the layer below or above it [1]. At each grid layer there would be a monitoring component that would decide, based on predefined factors, whether the reliability of that particular layer is high, medium or low. Based on the Hardware Reliability Rating (HRR) and Software Reliability Rating (SRR), the Middleware Monitoring Component / Cross-Layered Component (MMC/CLC) would generate a Combined Rating (CR) using CR calculation matrix rules. Each participating grid node will have a CR value generated through cross-layered communication using the HMC, MMC/CLC and SMC. All grid nodes will have their CR information in the form of a CR table, and highly rated machines would be selected for job execution on the basis of minimum CPU load along with different intensities of checkpointing. Handling faults proactively at each layer of the grid using the cross communication model would result in overall improved dependability and increased performance with lower checkpointing overheads.",2007,0, 5680,Faults in grids: why are they so bad and what can be done about it?,"Computational grids have the potential to become the main execution platform for high performance and distributed applications. However, such systems are extremely complex and prone to failures. We present a survey with the grid community in which several people shared their actual experience regarding fault treatment. The survey reveals that, nowadays, users have to be highly involved in diagnosing failures, that most failures are due to configuration problems (a hint of the area's immaturity), and that solutions for dealing with failures are mainly application-dependent.
Going further, we identify two main reasons for this state of affairs. First, grid components that provide high-level abstractions when working do expose all gory details when broken. Since there are no appropriate mechanisms to deal with the complexity exposed (configuration, middleware, hardware and software issues), users need to be deeply involved in the diagnosis and correction of failures. To address this problem, one needs a way to coordinate different support teams working at the grid's different levels of abstraction. Second, fault tolerance schemes implemented on grids today tolerate only crash failures. Since grids are prone to more complex failures, such as those caused by heisenbugs, one needs to tolerate tougher failures. Our hope is that the very heterogeneity that makes a grid a complex environment can help in the creation of diverse software replicas, a strategy that can tolerate more complex failures.",2003,0, 5681,Automated PET/CT brain registration for accurate attenuation correction,"Computed tomography (CT) is used for the attenuation correction of positron emission tomography (PET) to enhance the efficiency of the data acquisition process and to improve the quality of the reconstructed PET data in the brain. Due to the use of two different modalities, chances of misalignment between PET and CT images are quite significant. The main cause of this misregistration is the motion of the patient during the PET scan and between the PET and CT scans. This misalignment produces an erroneous CT attenuation map that can project the bone and water attenuation parameters onto the brain, thereby under- or over-estimating the attenuation. To avoid the misregistration artifact and potential diagnostic misinterpretation, automated software for PET/CT brain registration has been developed. This software extracts the brain surface information from the CT and PET images and compensates for the translational and rotational misalignment between the two scans. This procedure has been applied to the dataset of a patient with a visible perfusion defect in the brain, and the results show that the CTAC produced after the image registration eliminates the hypoperfusion artifact caused by the erroneous attenuation of the PET images.",2009,0, 5682,On the use of error propagation for statistical validation of computer vision software,"Computer vision software is complex, involving many tens of thousands of lines of code. Coding mistakes are not uncommon. When the vision algorithms are run on controlled data which meet all the algorithm assumptions, the results are often statistically predictable. This renders it possible to statistically validate the computer vision software and its associated theoretical derivations. In this paper, we review the general theory for some relevant kinds of statistical testing and then illustrate this experimental methodology to validate our building parameter estimation software. This software estimates the 3D positions of building vertices based on the input data obtained from multi-image photogrammetric resection calculations and 3D geometric information relating some of the points, lines and planes of the buildings to each other.",2005,0, 5683,A control theory approach for analyzing the effects of data errors in safety-critical control systems,"Computers are increasingly used for implementing control algorithms in safety-critical embedded applications, such as engine control, braking control and flight surface control.
Addressing the consequent coupling of control performance with computer-related errors, this paper develops a composite computer dependability/control theory methodology for analyzing the effects data errors have on control system dependability. The effect is measured as the resulting control error (defined as the difference between the desired value of a physical property and its actual value). We use maximum bounds on this measure as the criterion for control system failure (i.e., if the control error exceeds a certain threshold, the system has failed). In this paper we a) present suitable models of computer faults for analysis of control level effects and related analysis methods, and b) apply traditional control theory analysis methods for understanding the effects of data errors on system dependability. An automobile slip-control brake-system is used as an example showing the viability of our approach.",2002,0, 5684,Demonstration of the remote exploration and experimentation (REE) fault-tolerant parallel-processing supercomputer for spacecraft onboard scientific data processing,"Concerns a demonstration of the REE Project's work to date. The demonstration is intended to simulate an REE system that might exist on a Mars rover, consisting of multiple COTS processors, a COTS network, a COTS node-level operating system, REE middleware, and an REE application. The specific application performs texture processing of images. It was chosen as a building block of automated geological processing that will eventually be used for both navigation and data processing. Because the COTS hardware is not radiation hardened, single-event-upset-induced soft errors will occur. These errors are simulated in the demonstration by use of a software-implemented fault-injector, and are injected at a rate much higher than is realistic for the sake of viewer interest. Both the application and the middleware contain mechanisms for both detection of and recovery from these faults, and these mechanisms are tested by this very high fault-rate. The consequence of the REE system being able to tolerate this fault rate while continuing to process data is that the system will easily be able to handle the true fault rate",2000,0, 5685,Diversity techniques for concurrent error detection,"Concurrent error detection (CED) techniques are widely used to ensure data integrity in digital systems. Data integrity guarantees that the system outputs are either correct or an error is indicated when incorrect outputs are produced. This dissertation presents the results of theoretical and simulation studies of various CED techniques. The CED schemes studied are based on diverse duplication, simple duplication of identical implementations, and error-detection techniques like parity checking.
The study aimed at (1) a quantitative comparison of the effectiveness of different CED schemes, and (2) developing design techniques for efficient concurrent error detection",2001,0, 5686,Intelligent agent-based system using dissolved gas analysis to detect incipient faults in power transformers,Condition monitoring and software-based diagnosis tools are central to the implementation of efficient maintenance management strategies for many engineering applications including power transformers.,2010,0, 5687,"A new nonlinear directional overcurrent relay coordination technique, and banes and boons of near-end faults based approach","Considerations of weight factors and far-end faults in the directional overcurrent relay coordination problem formulation do not affect the optimal solution. This paper investigates this viewpoint and verifies that indeed by such an approach the optimality is not lost. However, this study reveals that in doing so, the coordination quality is sacrificed to some extent. It is also observed that if all remaining valid constraints (after relaxing a few constraints based on the back-up coordination philosophy and the strength of the fault level generated) are considered and if the objective function is changed to the running sum of all violating constraints, all valid considered constraints are satisfied. This study is done by simultaneously optimizing all settings in a nonlinear environment by the Sequential Quadratic Programming method using the Matlab Toolbox. The results of the analysis on sample 6-bus and IEEE 30-bus systems are presented in this paper.",2006,0, 5688,Research on remote intelligent fault-diagnosis of CNC lathe based on bayesian networks,"Considering the development of smart machine tools and Internet-based manufacturing, and in order to manage the manufacturing process more efficiently, a remote intelligent fault-diagnosis unit based on Bayesian Networks (BN) was designed, Internet-based software was realized, and a case study concerning a CNC lathe was carried out. The unit complements the machine tool's self-detection, whose major job is to find hardware and programming faults. The case study proved the reliability and advantages of the intelligent model based on BN.",2010,0, 5689,Modeling the effects of combining diverse software fault detection techniques,"Considers what happens when several different fault-finding techniques are used together. The effectiveness of such multi-technique approaches depends upon a quite subtle interplay between their individual efficacies. The modeling tool we use to study this problem is closely related to earlier work on software design diversity which showed that it would be unreasonable even to expect software versions that were developed truly independently to fail independently of one another. The key idea was a difficulty function over the input space. Later work extended these ideas to introduce a notion of forced diversity. In this paper, we show that many of these results for design diversity have counterparts in diverse fault detection in a single software version. We define measures of fault-finding effectiveness and diversity, and show how these might be used to give guidance for the optimal application of different fault-finding procedures to a particular program. The effects on reliability of repeated applications of a particular fault-finding procedure are not statistically independent; such an incorrect assumption of independence will always give results that are too optimistic.
For diverse fault-finding procedures, it is possible for effectiveness to be even greater than it would be under an assumption of statistical independence. Diversity of fault-finding procedures is a good thing and should be applied as widely as possible. The model is illustrated using some data from an experimental investigation into diverse fault-finding on a railway signalling application",2000,0, 5690,Project Pathogens: The Anatomy of Omission Errors in Construction and Resource Engineering Project,"Construction and engineering projects are typically complex in nature and are prone to cost and schedule overruns. A significant factor that often contributes to these overruns is rework. Omission errors, in particular, have been found to account for as much as 38% of the total rework costs experienced. To date, there has been limited research that has sought to determine the underlying factors that contribute to omission errors in construction and engineering projects. Using data derived from 59 in-depth interviews undertaken with various project participants, a generic systemic causal model of the key factors that contributed to omission errors is presented. The developed causal model can improve understanding of the archetypal nature and underlying dynamics of omission errors. Error management strategies that can be considered for implementation in projects are also discussed.",2009,0, 5691,mSWAT: Low-cost hardware fault detection and diagnosis for multicore systems,"Continued technology scaling is resulting in systems with billions of devices. Unfortunately, these devices are prone to failures from various sources, resulting in even commodity systems being affected by the growing reliability threat. Thus, traditional solutions involving high redundancy or piecemeal solutions targeting specific failure modes will no longer be viable owing to their high overheads. Recent reliability solutions have explored using low-cost monitors that watch for anomalous software behavior as a symptom of hardware faults. We previously proposed the SWAT system that uses such low-cost detectors to detect hardware faults, and a higher cost mechanism for diagnosis. However, all of the prior work in this context, including SWAT, assumes single-threaded applications and has not been demonstrated for multithreaded applications running on multicore systems. This paper presents mSWAT, the first work to apply symptom based detection and diagnosis for faults in multicore architectures running multithreaded software. For detection, we extend the symptom-based detectors in SWAT and show that they result in a very low silent data corruption (SDC) rate for both permanent and transient hardware faults. For diagnosis, the multicore environment poses significant new challenges. First, deterministic replay required for SWAT's single-threaded diagnosis incurs higher overheads for multithreaded workloads. Second, the fault may propagate to fault-free cores resulting in symptoms from fault-free cores and no available known-good core, breaking fundamental assumptions of SWAT's diagnosis algorithm. We propose a novel permanent fault diagnosis algorithm for multithreaded applications running on multicore systems that uses a lightweight isolated deterministic replay to diagnose the faulty core with no prior knowledge of a known good core. Our results show that this technique successfully diagnoses over 95% of the detected permanent faults while incurring low hardware overheads.
mSWAT thus offers an affordable solution to protect future multicore systems from hardware faults.",2009,0, 5692,Error propagation suppression in Self-servo Track Writer by time-domain control design,"Control design of the self-servo track writer (SSTW) has become an important issue in hard disk drive research. This paper discusses the error propagation problem in SSTW control. Although iterative learning control (ILC) has been suggested as a solution to suppress the error propagation in SSTW, existing suggestions design controllers in the iteration domain, requiring considerable computation and complicated optimization algorithms. For this reason, this paper suggests SSTW control design in the time domain. First, a reference correction to suppress the error propagation is suggested based on the error propagation study, and a novel reference correction is suggested to control the amount of the converged error. Then, a state-space approach is suggested, developing a Kalman filter to estimate the absolute head position. The state-space based design is extended and provides a formulation for a more general time-domain design of SSTW control.",2008,0, 5693,"A router for improved fault isolation, scalability and diagnosis in CAN","Controller Area Network (CAN) provides an inexpensive and robust network technology in many application domains. However, the use of CAN is constrained by limitations with respect to fault isolation, bandwidth, wire length, namespaces and diagnosis. This paper presents a solution to overcome these limitations by replacing the CAN bus with a star topology. We introduce a CAN router that detects and isolates node failures in the value and time domains. The CAN router ensures that minimum message interarrival times are satisfied and reserves CAN identifiers for individual CAN nodes. In addition, the CAN router exploits knowledge about communication relationships for a more efficient use of communication bandwidth through multicast messaging. An implementation of the CAN router based on a Multi-Processor System-on-a-Chip (MPSoC) shows the feasibility of the proposed solution.",2010,0, 5694,A method for inductor core loss estimation in power factor correction applications,"Conventional core loss estimation methods exhibit limitations in dealing with important aspects of switching power converter applications such as different duty cycles, discontinuous-conduction-mode operation, variable switching frequency, or variable duty cycle operation. These limitations are particularly evident when trying to estimate boost inductor core loss in power factor correction circuits. This paper first presents a core loss estimation method that addresses these limitations and then demonstrates an effective technique to estimate core losses in power factor correction circuits. Finally, the authors show examples of how this method can be conveniently incorporated into simulation software to automate the core loss estimation process. The inductor models that are developed to facilitate this automatic core loss estimation and the approaches to implement the calculation in simulation software, especially a program called SIMPLIS, are also provided",2002,0, 5695,Ultrasonic Waveguides Detection-based approach to locate defect on workpiece,"Conventional ultrasonic techniques, such as pulse-echo, have been limited to testing relatively simple geometries or interrogating the region in the immediate vicinity of the transducer. A novel, efficient methodology uses ultrasonic waveguides to examine structural components.
The advantages of this technique include: its ability to detect the entire structure in a single measurement over long distances with little attenuation; and its capacity to test inaccessible regions of complex components. However, in practical work, this technique suffers from dispersion and mode conversion phenomena, which degrade the signal-to-noise ratio and thereby limit its actual application. In order to solve this problem, simulation combined with experiments can not only verify the feasibility of this technique but also provide guidance for actual work. This paper reports on a novel approach to simplifying the simulation of Ultrasonic Waveguides Detection. The first step is the selection of the signal frequency which has the fastest group velocity and relatively small dispersion. The second step is the determination of Δt and le: owing to the numerical analysis characteristics of the general-purpose software ANSYS, two key parameters, the time step Δt and the mesh element size le, need to be carefully selected. This report finds the balance point between the accuracy of the results and the calculation time to determine these two key parameters, which significantly influence the simulation result. Finally, this report shows the experimental results on a two-dimensional flat panel structure and a three-dimensional triangle-iron structure, respectively. From the results shown, the error between the simulated and actual values is less than 0.4%, which proves the feasibility of this approach.",2010,0, 5696,Detection of Duplicate Defect Reports Using Natural Language Processing,"Defect reports are generated from various testing and development activities in software engineering. Sometimes two reports are submitted that describe the same problem, leading to duplicate reports. These reports are mostly written in structured natural language, and as such, it is hard to compare two reports for similarity with formal methods. In order to identify duplicates, we investigate using natural language processing (NLP) techniques to support the identification. A prototype tool is developed and evaluated in a case study analyzing defect reports at Sony Ericsson mobile communications. The evaluation shows that about 2/3 of the duplicates can possibly be found using the NLP techniques. Different variants of the techniques provide only minor result differences, indicating a robust technology. User testing shows that the overall attitude towards the technique is positive and that it has a growth potential.",2007,0, 5697,Enhanced error concealment with mode selection,"Delay sensitive video transmission over error prone networks can suffer from packet erasures when channel conditions are not favorable. Use of error concealment (EC) at the video decoder is necessary in such cases to prevent error induced artefacts making the affected video frames visibly intolerable. This paper proposes an EC method that incorporates enhanced temporal and spatial concealment elements, the use of which is controlled by a mode selection (MS) algorithm well matched to the characteristics of the temporal concealment approach. The performance of the individual enhancements and of the MS algorithm is compared with the respective features of the method employed in the H.264 joint model (JM) decoder and with other state of the art methods.
The overall performance of the proposed method is shown to offer significant gains (up to 9 dB) compared to that of the JM decoder for a wide range of natural and animation image sequences without any considerable increase in complexity.",2006,0, 5698,Fault Detection in a Multistage Gearbox by Demodulation of Motor Current Waveform,"Demodulation of the vibration signal to detect faults in machinery has been a prominent and prevalent technique discussed by a number of authors. This paper deals with the demodulation of the current signal of an induction motor driving a multistage gearbox for its fault detection. This multistage gearbox has three gear ratios, and thus, three rotating shafts and their corresponding gear mesh frequencies (GMFs). The gearbox is loaded electrically by a generator feeding an electrical resistance bank. Amplitude demodulation and frequency demodulation are applied to the current drawn by the induction motor for detecting the rotating shaft frequencies and GMFs, respectively. Discrete wavelet transform is applied to the demodulated current signal for denoising and removing the intervening neighboring features. The spectrum of a particular level, which comprises the GMFs, is used for gear fault detection.",2006,0, 5699,Dependability Evaluation with Dynamic Reliability Block Diagrams and Dynamic Fault Trees,"Dependability evaluation is an important, often mandatory, step in designing and analyzing (critical) systems. Introducing control and/or computing devices to automate processes increases the system complexity, with an impact on the overall dependability. This occurs as a consequence of interferences, dependencies, and other similar effects that cannot be adequately managed through formalisms such as reliability block diagrams (RBDs), fault trees (FTs), and reliability graphs (RGs), since the statistical independence assumption is not satisfied. In addition, more enhanced notations such as dynamic FTs (DFTs) might not be adequate to represent all the behavioral aspects of dynamic systems. To overcome these problems, we developed a new formalism derived from RBD: the dynamic RBD (DRBD). DRBD exploits the concept of dependence as the building block to represent dynamic behaviors, allowing us to compose the dependencies and adequately manage the arising conflicts by means of a priority algorithm. In this paper, we explain how we can use the DRBD notation by specifying a practical methodology. Starting from the system knowledge, the proposed methodology leads to the overall system reliability evaluation through the entire phases of modeling and analysis. Such a technique is applied to an example taken from the literature, consisting of a distributed computing system.",2009,0, 5700,Real time fault injection using a modified debugging infrastructure,"Dependability is a critical factor in computer systems, requiring high quality validation & verification procedures in the development stage. At the same time, digital devices are getting smaller and access to their internal signals and registers is increasingly complex, requiring innovative debugging methodologies. To address this issue, most recent microprocessors include an on-chip debug (OCD) infrastructure to facilitate common debugging operations. This paper proposes an enhanced OCD infrastructure with the objective of supporting the verification of fault-tolerant mechanisms through fault injection campaigns.
This upgraded on-chip debug and fault injection (OCD-FI) infrastructure provides an efficient fault injection mechanism with improved capabilities and dynamic behavior. Preliminary results show that this solution provides flexibility in terms of fault triggering and allows high speed real-time fault injection in memory elements.",2006,0, 5701,An Effective RM-Based Scheduling Algorithm for Fault-Tolerant Real-Time Systems,"Dependability is the representative property that, besides timeliness, predominantly distinguishes a hard real-time system from other computer systems. The primary/alternate version technique is a cost-effective means which trades the quality of computation results for promptness in order to tolerate software faults. The kernel algorithm proposed in this paper employs the off-line backwards-RM scheme to pre-allocate time intervals to the alternate version and the on-line RM scheme to dispatch the primary version. The methodology is a dual-purpose strategy, which aims to (1) tolerate potential software faults by ensuring the accomplishment of the alternate version once its primary fails to execute or re-execute and (2) achieve better quality of service by maximizing the success rate of the primary version.",2009,0, 5702,Ringing out fault tolerance. A new ring network for superior low-cost dependability,"Dependability properties of bi-directional and braided rings are well recognized in improving communication availability. However, current ring-based topologies have no mechanisms for extreme integrity and have not been considered for emerging high-dependability markets where cost is a significant driver, such as the automotive ""by-wire"" applications. This paper introduces a braided-ring architecture with superior guardian functionality and complete Byzantine fault tolerance while simultaneously reducing cost. This paper reviews anticipated requirements for high-dependability low-cost applications and emphasizes the need for regular safe testing of core coverage functions. The paper describes the ring's main mechanisms for achieving integrity and availability levels similar to SAFEbus but at low automotive costs. The paper also presents a mechanism to achieve self-stabilizing TDMA-based communication and design methods for fault-tolerant protocols on a network of simplex nodes. The paper also introduces a new self-checking pair concept that leverages braided-ring properties. This novel message-based self-checking-pair concept allows high-integrity source data at extremely low cost.",2005,0, 5703,The application of the root causes of human error analysis method based on HAZOP analysis in using process of weapon,"Cognitive reliability and error analysis method (CREAM) is one of the most representative second-generation human reliability analysis methods. Using CREAM, a retrospective analysis of the root causes of human error is given. However, CREAM provides no consistent standard for determining the event sequences caused by human errors in the root cause analysis of the weapon usage process. This paper presents a root cause analysis method for human error based on HAZOP analysis of the weapon usage process. According to the deficiencies of CREAM, the following improvement measures are put forward: (1) Following the HAZOP method, the weapon usage process is divided into parts according to the different operation steps.
The human operations are defined as the characteristic parameters; (2) according to the deviation analysis, the event-order diagram is obtained based on the correct operation steps and the deviations. Using the improved method, the event sequences which end in failure and are caused by human error can be obtained. After choosing the human error event, the root causes can be obtained by retrospective analysis. An example of diesel engine underwater starting is discussed in this paper, and the root causes of the human error leading to water intake are obtained. This example validates the correctness of the improved method.",2009,0, 5704,Multi-view video color correction using dynamic programming,"Color inconsistency between views is an important problem to be solved in multi-view video systems. A multi-view video color correction method using dynamic programming is proposed. Three-dimensional histograms are constructed with sequential conditional probability in HSI color space. Then, dynamic programming is used to seek the best color mapping relation with the minimum cost path between the target image histogram and the source image histogram. Finally, a video tracking technique is applied to correct the multi-view video. Experimental results show that the proposed method can obtain better subjective and objective performance in color correction.",2008,0, 5705,Introducing fault-based combinatorial testing to web services,"Combinatorial testing is considered effective for finding software faults. It is also efficient, since it keeps the number of tests relatively small. However, there seems to be very little research that considers combinatorial testing as a testing approach for web services. They are commonly tested by injecting fault-causing data perturbations into the network. It may be worthwhile to see if combinatorial testing can complement existing perturbation techniques. The approach proposed in this paper is called combinatorial fault-based testing. This type of testing combines existing fault-based testing techniques, such as fault injection, with combinatorial testing to attempt to find faults of varying strength within a web service. Combinatorial fault-based testing uses fault injection and helps reduce the problem of combinatorial explosion by focusing solely on fault-based combinations. This raises the following research question: Is there a way to take advantage of the benefits of combinatorial testing for web services, assuming that source code will not be available, while minimizing the possibility of a combinatorial explosion? Combinatorial fault-based testing looks very promising for answering this question. As a side effect, it could potentially offer a way to determine the maximum strength of interactions to test for web services.",2010,0, 5706,Methodology and Tools Developed for Validation of COTS-based Fault-Tolerant Spacecraft Supercomputers,"Commercial off-the-shelf (COTS) electronic components are attractive for space applications. However, fault-tolerant architectures are required to cope with the Single Event Effect sensitivity of these components. CNES has developed a methodology, and the related validation tools, for injecting faults into these fault-tolerant architectures for validation purposes. The methodology is a hybrid one, combining deterministic and random fault injection phases.
The main tools used are a boundary scan fault injector, made from an off-the-shelf JTAG tool, and software to analyse and process data obtained from the fault injection tests. This paper highlights the experience feedback relating to both the design and use of these tools, which were implemented to validate fault-tolerant architectures developed by CNES. Although this development has been done in the framework of the space domain, the methodology and tools are applicable to any fault-tolerant system.",2007,0, 5707,Software-Implemented Hardware Error Detection: Costs and Gains,"Commercial off-the-shelf (COTS) hardware is becoming less and less reliable because of the continuously decreasing feature sizes of integrated circuits. But due to economic constraints, more and more critical systems will be based on basically unreliable COTS hardware. Usually in such systems redundant execution is used to detect erroneous executions. However, arithmetic codes promise much higher error detection rates. Yet, they are generally assumed to generate very large slowdowns. In this paper, we assess and compare the runtime overhead and error detection capabilities of redundancy and several arithmetic codes. Our results demonstrate a clear trade-off between runtime costs and gained safety. However, unexpectedly, the runtime costs for arithmetic codes compared to redundancy increase only linearly, while the gained safety increases exponentially.",2010,0, 5708,Reconfigurable fault tolerance: A framework for environmentally adaptive fault mitigation in space,"Commercial SRAM-based FPGAs have the potential to provide aerospace applications with the necessary performance to meet next generation mission requirements. However, the susceptibility of these devices to radiation in the form of single-event upsets is a significant drawback. TMR techniques are traditionally used to mitigate these effects, but with an overwhelming amount of extra area and power. We propose a framework for reconfigurable fault tolerance which enables systems engineers to dynamically change the amount of redundancy and fault mitigation that is used in an FPGA design. This approach leverages the reconfigurable nature of the FPGA to allow significant processing to be performed safely and reliably when environmental factors permit. Phased-mission Markov modeling is used to estimate performability gains that can be achieved using the framework for two case-study orbits.",2009,0, 5709,Design of a novel soft error mitigation technique for reconfigurable architectures,"Commercial off-the-shelf (COTS) reconfigurable architectures are becoming popular for applications where high dependability, performance and low costs are mandatory constraints, such as space applications. We present a unique SEE (single event effect) mitigation technique based upon temporal data sampling and weighted voting for synchronous circuits and configuration bit storage for reconfigurable architectures. The design technique addresses both conventional static SEU (single event upset) and SET (single event transient) induced errors, which result in data loss for reconfigurable architectures in space.
The design technique not only eliminates all single event upsets and single event transients but eliminates all double event upsets as well.",2006,0, 5710,Extending Lifetime of Wireless Sensor Networks using Forward Error Correction,"Communication between nodes in wireless sensor networks (WSN) is susceptible to transmission errors caused by low signal strength or interference. These errors manifest themselves as lost or corrupt packets. This often leads to retransmission, which in turn results in increased power consumption, reducing node and network lifetime. In this paper, a convolutional code FEC with Viterbi decoding on Mica2 nodes was implemented and evaluated to explore the possibility of extending the lifetime of a degrading WSN. Results are presented which suggest that our approach could be used in a WSN when increasing distance and channel noise degrade the network.",2006,0, 5711,A Full- and Half-Cycle DFT-based technique for fault current filtering,"Decaying DC components in fault currents will cause the digital filter to obtain inaccurate phasors and cause false operations of a relay system. Many studies have focused on removing the decaying DC components from fault currents. Generally, they need extra samples or phasors to obtain the parameters of the decaying DC component. In order to accelerate the filter response, this paper presents a new Discrete Fourier Transform (DFT)-based algorithm which does not need extra samples to remove the decaying DC component. To achieve this, the samples used for a traditional DFT computation are split into four groups. Both the Full-Cycle DFT (FCDFT) and the Half-Cycle DFT (HCDFT)-based computations have been developed in this paper. The proposed algorithm is evaluated with MATLAB/SIMULINK generated data to show its effectiveness.",2010,0, 5712,Optimized assignment of developers for fixing bugs an initial evaluation for eclipse projects,"Decisions on ""Who should fix this bug"" have substantial impact on the duration of the process and its results. In this paper, optimized strategies for the assignment of the ""right"" developers for doing the ""right"" task are studied and the results are compared to manual (called ad hoc) assignment. The quality of assignment is measured by the match between the requested (from bugs) and available (from developers) competence profiles. Different variants of Greedy search with a varying look-ahead time parameter are studied. The quality of the results has been evaluated for nine milestones of the open source Eclipse JDT project. The optimized strategies with the largest look-ahead time are demonstrated to be substantially better than the ad hoc solutions in terms of the quality of the assignment and the number of bugs which can be fixed within the given time interval.",2009,0, 5713,Enhancement of Fault Injection Techniques Based on the Modification of VHDL Code,"Deep submicrometer devices are expected to be increasingly sensitive to physical faults. For this reason, fault-tolerance mechanisms are more and more required in VLSI circuits. So, validating their dependability is a primary concern in the design process. Fault injection techniques based on the use of hardware description languages offer important advantages with regard to other techniques. First, as these techniques can be applied during the design phase of the system, they permit reducing the time-to-market. Second, they present high controllability and reachability.
Among the different techniques, those based on the use of saboteurs and mutants are especially attractive due to their high fault modeling capability. However, automatically implementing these techniques in a fault injection tool is difficult. Especially complex are the insertion of saboteurs and the generation of mutants. In this paper, we present new proposals to implement saboteurs and mutants for models in VHDL which are easy to automate, and whose philosophy can be generalized to other hardware description languages.",2008,0, 5714,Inter-plane via defect detection using the sensor plane in 3D heterogeneous sensor systems,"Defect and fault tolerance is being studied in a 3D heterogeneous sensor using a stacked chip with sensors located on the top plane, and inter-plane vias connecting these to other planes which provide analog processing, digital signal processing, and wireless communication/networking. The sensor plane contains four types of transducers: visible imager (active pixel sensor), near IR and mid IR imager, and seismic and acoustic sensor arrays. This paper investigates ways of introducing defect and fault tolerance into the inter-plane via connections between the sensor and digital signal processing planes. The methodology detects failures in the inter-plane vias by inputting controlled signal patterns in each sensor type on the sensor plane. The sensor/via fault distribution in turn impacts the defect avoidance in the fault tolerant TESH network, which binds both the sensors and the processors that analyze and fuse the sensor plane data. Fault tolerance in the design and fabrication of the micromachined IR bolometers is also studied.",2005,0, 5715,Towards a Defect Prevention Based Process Improvement Approach,"Defect causal analysis (DCA) is a means of product-focused software process improvement. A systematic literature review to identify the DCA state of the art has been undertaken. The systematic review gathered unbiased knowledge and evidence and identified opportunities for further investigation. Moreover, some guidance on how to efficiently implement DCA in software organizations could be elaborated. This paper describes the initial concept of the DBPI (Defect Based Process Improvement) approach. It represents a DCA based approach for process improvement, designed considering the results of the systematic review and the obtained guidance. Its main contributions are tailoring support for DCA based process improvement and addressing an identified opportunity for further investigation by integrating organizational learning mechanisms regarding cause-effect relations into the conduct of DCA.",2008,0, 5716,An industrial case study of implementing and validating defect classification for process improvement and quality management,"Defect measurement plays a crucial role when assessing quality assurance processes such as inspections and testing. To systematically combine these processes in the context of an integrated quality assurance strategy, measurement must provide empirical evidence on how effective these processes are and which types of defects are detected by which quality assurance process. Typically, defect classification schemes, such as ODC or the Hewlett-Packard scheme, are used to measure defects for this purpose. However, we found it difficult to transfer existing schemes to an embedded software context, where specific document and defect types have to be considered.
This paper presents an approach to define, introduce, and validate a customized defect classification scheme that considers the specifics of an industrial environment. The core of the approach is to combine the software engineering know-how of measurement experts and the domain know-how of developers. In addition to the approach, we present the results and experiences of using the approach in an industrial setting. The results indicate that our approach results in a defect classification scheme that allows classifying defects with good reliability, that allows identifying process improvement actions, and that can serve as a baseline for evaluating the impact of process improvements.",2005,0, 5717,Colour Correction and Matching between Scenes Using 3D LUT,"Correct colour reproduction is an important factor in digital cinema. To achieve this, this paper proposes methods of colour correction by means of look-up-table (LUT) interpolations. Reference colour patches are captured by the movie camera, much like the clapperboard shot that is generally used for sound synchronisation. Once the shooting is completed, device characterisation is performed to generate the 3D LUT for correct colour reproduction. By using the LUTs, any colours captured by the camera under any illumination conditions can be converted to colours under any preferred illumination conditions by means of tri-linear interpolation. Finally, the real scenes can be colour-changed to any illumination conditions without visual discrepancies. This paper also presents another 3D LUT type in which input RGB colours can be directly converted to output RGB colours without device characterisation. This LUT can be easily adapted to stereoscopic cameras so that pairs of stereo images match each other.",2010,0, 5718,Approximating Deployment Metrics to Predict Field Defects and Plan Corrective Maintenance Activities,"Corrective maintenance activities are a common cause of schedule delays in software development projects. Organizations frequently fail to properly plan the effort required to fix field defects. This study aims to provide relevant guidance to software development organizations on planning for these corrective maintenance activities by correlating metrics that are available prior to release with parameters of the selected software reliability model that has historically best fit the product's field defect data. Many organizations do not have adequate historical data, especially historical deployment and field usage information. The study identifies a set of metrics calculable from available data to approximate these missing predictor categories. Two key metrics estimable prior to release surfaced with potentially useful correlations: (1) the number of periods until the next release and (2) the peak deployment percentage. Finally, these metrics were used in a case study to plan corrective maintenance efforts on current development releases.",2009,0, 5719,Weibull distribution in modeling component faults,"Cost efficiency and the issue of quality are pushing software companies to constantly invest in efforts to produce applications that will arrive in time and with good enough quality to the customer. Quality is not for free; it has a price. Using different methods of prediction, characteristic parameters can be obtained that lead to conclusions about quality even prior to the beginning of the project. The Weibull distribution is by far the world's most popular statistical model for life data.
On the other hand, the exponential distribution and the Rayleigh distribution are special cases of the Weibull distribution. If we want to model and predict software component quality with the mentioned distributions, we should make some assumptions regarding them. Prediction of component quality leads to preventive and corrective action in the organization. Based on the results of predicting and modeling software component faults prior to the project start, during project execution, and finally during the maintenance stage of the component lifecycle, some conclusions can be made. In this paper, software component prediction using different mathematical models will be presented.",2010,0, 5720,Expert software development estimation with uncertainty correction,"Creation of an effective metrics and estimation program is an important but daunting step for the maturing software development organization. This paper outlines a roadmap for implementing a process that establishes a program that will reap a large portion of the benefits early in the process with a minimum of implementation effort and cost. This process includes a mechanism to improve software estimation accuracy as historical data becomes available for more sophisticated methods. Furthermore, we present a practical proposal for software estimation in industry based on software task size, complexity, and uncertainty.",2010,0, 5721,A Hybrid Approach for Detection and Correction of Transient Faults in SoCs,"Critical applications based on Systems-on-Chip (SoCs) require suitable techniques that are able to ensure a sufficient level of reliability. Several techniques have been proposed to improve the fault detection and correction capabilities for faults affecting SoCs. This paper proposes a hybrid approach able to detect and correct the effects of transient faults in SoC data memories and caches. The proposed solution combines some software modifications, which are easy to automate, with the introduction of a hardware module, which is independent of the specific application. The method is particularly suitable to fit in a typical SoC design flow and is shown to achieve a better trade-off between the achieved results and the required costs than corresponding purely hardware or software techniques. In fact, the proposed approach offers the same fault-detection and -correction capabilities as a purely software-based approach, while it introduces nearly the same low memory and performance overhead of a purely hardware-based one.",2010,0, 5722,Middleware Fault Tolerance Support for the BOSS Embedded Operating System,"Critical embedded systems need a dependable operating system and application. Despite all efforts to prevent and remove faults in system development, residual software faults usually persist. Therefore, critical systems need some sort of fault tolerance to deal with these faults and also with hardware faults at operation time. This work proposes fault-tolerant support mechanisms for the BOSS embedded operating system, based on the application of proven fault tolerance strategies by middleware control software which transparently delivers the added functionality to the application software.
Special attention is paid to complexity control and resource constraints, targeting the needs of the embedded market.",2006,0, 5723,Blinded Fault Resistant Exponentiation Revisited,"Cryptographic algorithm implementations are subject to specific attacks, called side channel attacks, focusing on the analysis of their power consumption or execution time or on the analysis of faulty computations. At FDTC06, Fumaroli and Vigilant presented a generic method to compute an exponentiation resistant against different side channel attacks. However, even if this algorithm does not reveal information on the secrets in case of a fault attack, it cannot be used to safely implement a crypto-system involving an exponentiation. In this paper, we propose a new exponentiation method without this drawback and give a security proof of resistance to fault attacks. As an application, we propose an RSA algorithm implemented using the Chinese Remainder Theorem protected against side channel attacks. The exponentiation algorithm is also 33% faster than the previous method.",2009,0, 5724,Evaluating the effects of transient faults on vehicle dynamic performance in automotive systems,"Current automotive systems are integrating more and more electronic components in the handling and performance areas, for supporting advanced comfort and safety features. The effects of component or network failures raise serious concerns about the overall vehicle stability and safety. This work proposes a methodology for analyzing at the system level (taking into account both mechanical and electronic components) the implications of transient faults in the electronic part on the overall vehicle response. A prototypical fault injection environment is also presented, and experimental results show how safety specifications for components can be derived from performance objectives set at the vehicle level.",2004,0, 5725,Impact of X-ray Scatter When Using CT-based Attenuation Correction in PET: A Monte Carlo Investigation,"Current dual-modality PET/CT systems offer significant advantages over stand-alone PET including decreased overall scanning time and increased accuracy in lesion localization and detectability. However, the contamination of 3-D cone-beam CT data with scattered radiation during CT-based attenuation correction (CTAC) is known to generate artifacts in the attenuation map and thus the resulting PET images. The aim of this work is to quantitatively measure the impact of X-ray scatter in CT images on the accuracy of CTAC in future designs of volumetric PET/CT systems. Our recently developed MCNP4C-based Monte Carlo X-ray CT simulator, capable of modeling both fan- and cone-beam CT scanners, and the Eidolon dedicated 3D PET Monte Carlo simulator were used to generate realigned PET/CT data sets. The impact of X-ray scatter was investigated through simulation of a uniform cylindrical water phantom for both a commercial multi-slice and a prototype flat panel detector-based cone-beam CT scanner. The analysis of attenuation correction factors (ACFs) for the simulated cylindrical water phantom showed that the contamination of CT data with scattered radiation in the absence of scatter removal underestimates the true ACFs, namely by 7.3% and 28.2% in the centre for the two geometries, respectively.
It was concluded that without appropriate X-ray scatter compensation, the visual artifacts and quantitative errors in flat panel detector-based geometry are substantial and propagate cupping artifacts to PET images during CTAC.",2006,0, 5726,Generator dynamics influence on currents distribution in fault condition,"Current flow calculation results along the elements of a complex power system during a three-phase short circuit are analyzed in this work, taking into account relative rotor swing. The analysis is illustrated with examples in which the infinite-bus fault point is supplied by one or more generators. It is shown that neglecting generator swing during short circuits essentially changes the current distribution in the system and can lead to impermissible errors in calculation results.",2000,0, 5727,Defect-tolerant adder circuits with nanoscale crossbars,"Current manufacturing of molecular electronics introduces defects, but circuits can be implemented by including redundant components. We identify reliability thresholds for implementing binary adders in the crossbar approach to molecular electronics. These thresholds vary among different implementations of the same logical formula, giving a tradeoff between yield and circuit area. For instance, one implementation has at least 90% yield with up to 30% defects for an area 1.8 times larger than the minimum required for a defect-free crossbar.",2006,0, 5728,A highly resilient routing algorithm for fault-tolerant NoCs,"Current trends in technology scaling foreshadow worsening transistor reliability as well as greater numbers of transistors in each system. The combination of these factors will soon make long-term product reliability extremely difficult in complex modern systems such as systems on a chip (SoC) and chip multiprocessor (CMP) designs, where even a single device failure can cause fatal system errors. Resiliency to device failure will be a necessary condition at future technology nodes. In this work, we present a network-on-chip (NoC) routing algorithm to boost the robustness of interconnect networks by reconfiguring them to avoid faulty components while maintaining connectivity and correct operation. This distributed algorithm can be implemented in hardware with less than 300 gates per network router. Experimental results over a broad range of 2D-mesh and 2D-torus networks demonstrate 99.99% reliability on average when 10% of the interconnect links have failed.",2009,0, 5729,A Robust Error Detection Mechanism for H.264/AVC Coded Video Sequences Based on Support Vector Machines,"Current trends in wireless communications provide fast and location-independent access to multimedia services. Due to its high compression efficiency, H.264/AVC is expected to become the dominant underlying technology in the delivery of future wireless video applications. The error resilient mechanisms adopted by this standard alleviate the problem of spatio-temporal propagation of visual artifacts caused by transmission errors by dropping and concealing all macroblocks (MBs) contained within corrupted segments, including uncorrupted MBs. Concealing these uncorrupted MBs generally causes a reduction in quality of the reconstructed video sequence.",2008,0, 5730,Joint Data Partition and Rate-Distortion Optimized Mode Selection for H.264 Error-Resilient Coding,"Data partitioning (DP) is an efficient error-resilient video coding tool.
Its contribution to performance improvement in the error-prone environment arises from the superior error concealment mechanisms that are available with the help of protected data partitions. Since error-concealment in terms of DP is closely related to coding mode, it is desirable to have an optimized coding mode selection scheme. However, the existing coding mode selection techniques usually assume that the same error-concealment mechanism is used for a block when it is lost, and the associated distortion also remains the same. Obviously, this assumption is not true when DP is involved. In this paper, a generalized end-to-end distortion model is proposed for rate-distortion optimized coding mode selection, which fully utilizes the superior error-concealment mechanism in terms of DP. The proposed distortion model is also advantageous in the suppression of approximation errors caused by pixel averaging operations such as sub-pixel interpolation and the deblocking filter. Therefore, it can lead to a low-complexity solution for real-time applications such as live streaming.",2006,0, 5731,A novel adaptive nonlinear dynamic data reconciliation and gross error detection method,"Data reconciliation is a well-known method in online process control engineering aimed at estimating the true values of corrupted measurements under constraints. Most nonlinear dynamic data reconciliation methods have studied cases where the input variables are constant over relatively long periods of time separated by simple step changes (e.g., set-point changes). While this scenario is not uncommon in process control, it imposes strong limitations on a method's applicability. In this paper a novel adaptive nonlinear dynamic data reconciliation algorithm is presented that extends the method presented by Laylabadi and Taylor (2006) to cases where the input variables are ramps or slow sinusoidal functions or, for that matter, any slow, smooth variation.",2006,0, 5732,Automated Fault Analysis in the Indonesian power utility: A case study of South Sulawesi transmission system,"Data recorded during faults in transmission networks are used by control centre personnel to analyze the protection system and to decide on remedial actions that will restore normal network operation as fast as possible. The increasing number of installed digital fault recorders (DFRs) and other intelligent electronic devices (IEDs) in substations has resulted in a large number of fault records to be analyzed by the power system engineer. Manual analysis of these records is both time-consuming and complex, and for these reasons, many records may not be examined and much of their potential value would be lost. Determining how to make effective analysis of these records is a challenge being faced by many power utilities. The main purpose of this paper is to propose enhancements to the manual investigation of faults and disturbances that is currently performed by engineers in the Indonesian power utilities. In this paper, a new software framework for Automated Fault Analysis is proposed based on Application Service Provider (ASP) technology, which has lately received special attention in the development of distributed systems. Demonstrations of the following services currently implemented in the ASP are presented: signal pre-processing, fault analysis and protection performance analysis.
The fault scenario from the South Sulawesi transmission system is investigated to test the features of the services.",2009,0, 5733,Early Error Detection and Classification in Data Transfer Scheduling,"Data transfer in distributed environments is prone to frequent failures resulting from back-end system level problems, like connectivity failures which are technically untraceable by users. Error messages are not logged efficiently, and sometimes are not relevant/useful from the users' point-of-view. Our study explores the possibility of an efficient error detection and reporting system for such environments. Besides, early error detection and error classification have great importance in organizing data placement jobs. It is necessary to have well defined error detection and error reporting methods to increase the usability and serviceability of existing data transfer protocols and data management systems. Prior knowledge about the environment and awareness of the actual reason behind a failure would enable the data placement scheduler to make better and more accurate decisions. We investigate the applicability of the proposed early error detection and error classification techniques to improve the arrangement of data placement jobs and to enhance the decision making of data placement schedulers.",2009,0, 5734,Numerical round-off error in cellular phone services billing system,"Cellular phone service billing for a per-minute tariff plan and 1-second pulse involves floating point division and multiplication operations to calculate the per-call bill. Monthly customer billing involves addition operations on per-call bills, which are floating point numbers. Round-off errors occur due to the limitations of the computer representation of floating point numbers and the storing of limited significant figures during billing. The study analyzed post-paid itemized bills of a cellular phone service operator in Bangladesh and identified that the accumulated round-off error for active post-paid subscribers is significantly high for a large group of subscribers. The research recommends a per-second tariff plan to completely eliminate round-off error, which also reduces floating point number operations.",2007,0, 5735,Online assessment of fault current considering substation topologies,"Changes to the power system, especially the installation of new generation sources and new operating conditions, result in higher fault currents. Circuit breakers, which were previously designed and coordinated for the power systems before the appearance of new generation sources and new operating conditions, may have inadequate fault current interruption capability. This paper discusses those operating conditions that result in interruption capability violations of circuit breakers, based on substation topologies/circuit breaker connections. An online application of fault current assessment is proposed to identify those unsafe operating conditions limited by circuit breaker interruption capabilities. Based on the findings of the paper, this application can give system operators suggestions of remedy actions that eliminate unsafe operating conditions by changing substation topologies. The design of such an online application is described and a MATLAB version is tested on the IEEE 14-bus system.",2010,0, 5736,Estimation of channel parameters in a multipath environment via optimizing highly oscillatory error functions using a genetic algorithm,Channel estimation is of crucial importance for tomorrow's wireless mobile communication systems.
This paper focuses on the solution of the channel parameter estimation problem in a scenario involving multiple paths in the presence of additive white Gaussian noise. We assumed that the number of paths in the multipath environment is known and that the transmitted signal consists of attenuated and delayed replicas of a known transient signal. In order to determine the maximum likelihood estimates, one has to solve a complicated optimization problem. Genetic algorithms (GA) are well known for their robustness in solving complex optimization problems. A GA is considered to extract channel parameters by minimizing the derived error function. The solution is based on the maximum-likelihood estimation of the channel parameters. Simulation results also demonstrate the GA's robustness to channel parameter estimation errors.,2007,0, 5737,Study on pre-correction method for LED chip declination in sorting system,"Chip declination correction is a difficult problem in the LED chip sorting process. Usually, real-time vision is used to correct chip declination; in this way, one correction may need several adjustments, which may cause a great reduction in the die production rate. Aiming at an effective solution of this problem, a declination pre-correcting method is proposed in this paper, with techniques including a least-squares method to fit the location of the rotation center and a computational method for the positions to be corrected. The paper also proposes an iterative formula to revise the fitted rotation center position to ensure accuracy, and discusses the over-quadrant phenomenon in the chip declination pre-correction process. Experimental results show that the final deviation, which includes image recognition error, is less than 12 μm. It is acceptable for chip sorting. By contrast with the traditional method, the pre-correction method needs only one adjustment in most cases; it is more effective and efficient.",2010,0, 5738,Identification of Cross-Country Fault of Power Transformer for Fast Unblocking of Differential Protection,"Due to the different ratings of the current transformers (CT) located on the different sides of a power transformer, only the CT with low ratings will saturate when the power transformer experiences a heavy through fault, leading to false differential current. Such an external fault can be distinguished from an internal one if the differential protection is equipped with the percentage restraint characteristic together with the method using the operation time difference between the pickup element and the differential protection. However, the differential protection will be wrongly blocked as well if a cross-country fault occurs. According to the investigations in this paper, in the event of an external fault accompanied by CT saturation, the variation of most samples of the secondary current of the saturated CT is inversely proportional to the variation of the differential current. Comparatively, this law is not followed on the occasion of an internal fault. In this case, the locus of the variation of the saturated secondary current with the differential current can be used to dynamically discriminate whether an external fault develops into an internal fault. With the method proposed in this paper, the immunity of the differential protection to cross-country faults can be improved further.
The effectiveness of the proposed method is verified with simulation tests.",2009,0, 5739,Analysis and comparison of the two multicast error recovery mechanisms,"Due to the distributed nature and multi-customer interaction characteristics of the CSCW system, multicast becomes an ideal transmission instrument. Thus we have implemented a multicast protocol named TORM (totally ordered reliable multicast). In order to evaluate its performance, we conduct an analysis and comparison between TORM and another multicast protocol, SRM (scalable reliable multicast), with respect to their error recovery mechanisms. With queuing theory, we establish a queue model, based on which the expressions for error recovery latency are worked out. The analytical result demonstrates that TORM, with its higher error recovery efficiency, is often preferable in CSCW applications. Moreover, we simulate the two protocols with NS2, a network simulation tool, and reach the same conclusion.",2004,0, 5740,Increasing System Availability with Local Recovery Based on Fault Localization,"Due to the fact that software systems cannot be tested exhaustively, software systems must cope with residual defects at run-time. Local recovery is an approach for recovering from errors, in which only the defective parts of the system are recovered while the other parts are kept operational. To be efficient, local recovery must be aware of which component is at fault. In this paper, we combine a fault localization technique (spectrum-based fault localization, SFL) with local recovery techniques to achieve fully autonomous fault detection, isolation, and recovery. A framework is used for decomposing the system into separate units that can be recovered in isolation, while SFL is used for monitoring the activities of these units and diagnosing the faulty one whenever an error is detected. We have applied our approach to MPlayer, a large open-source software system. We have observed that SFL can increase the system availability by 23.4% on average.",2010,0, 5741,PMSG Wind Turbine Performance Analysis During Short Circuit Faults,"Due to the increasing price of fossil fuels and the security concerns of nuclear energy, electricity generation using wind turbines has recently attracted significant attention after a period of neglect. Among different types of wind turbine generators, PM synchronous generators (PMSG) offer better performance due to higher efficiency and less maintenance, since they do not have rotor current and can be used without a gearbox. In addition, the utilization of a double conversion results in higher flexibility compared with other wind turbine systems. This paper develops the model of a PMSG wind turbine and then simulates short circuits to evaluate the performance of the system during short circuit faults. Since PMSG wind turbine systems use a double conversion converter, this paper develops two methods for controlling the converter; a new protection method for capacitor over voltage is also evaluated in this paper.",2007,0, 5742,A Practical Framework of Realizing Actuators for Autonomous Fault Management in SOA,"Due to the key features of service-oriented architecture (SOA), namely the black-box nature of services, heterogeneity, service dynamism, and service evolvability, fault management in SOA is known to be more challenging than conventional system management. An efficient way of managing faults in SOA is to apply principles of autonomic computing (AC), whose process is specified in MAPE.
The first two phases of MAPE are to monitor target systems and diagnose faults to determine the underlying cause. The other two phases are to plan healing/actuation methods and to execute them. Devising methods to remedy service faults which can run in an autonomous manner is a hard problem, mainly due to the remoteness and the limited visibility and controllability. In this paper, we present a practical framework to design actuators which can be invoked autonomously. By considering the relationships among fault, cause, and actuator, we derive the abstract and concrete actuators. For some essential concrete actuators, we present their algorithms which can be implemented in practice. We believe our proposed service actuation framework makes the realization of autonomous service management more feasible.",2009,0, 5743,A hybrid between region-based and voxel-based methods for Partial Volume correction in PET,"Due to the limited resolution of Positron Emission Tomography (PET), loss of signal through Partial Volume is significant for small structures. Consequently, Partial Volume Correction (PVC) is often used in PET imaging to recover this lost signal within images. Numerous methods have been proposed, and can be divided in multiple ways. One division is the separation of methods utilising image based segmentation and those that perform image based deconvolution to recover resolution. We propose a new method for PVC, PARtially-Segmented Lucy-Richardson (PARSLR), that combines the image based deconvolution approach of the Lucy-Richardson (LR) Iterative Deconvolution Algorithm with a partial segmentation of homogeneous regions. Such an approach is of value where reliable segmentation is possible for part but not all of the image volume or sub-volume. We evaluated the performance of PARSLR with respect to a region-based method (Rousset's method) and a deconvolution voxel-based method (LR) for partial volume correction by comparing how each method behaves in an environment of complete and accurate segmentation, and of partial segmentation, on a 3D simulated medial temporal brain area including the hippocampus, as well as on a 2D physical brain phantom. Under complete and accurate segmentation, PARSLR showed agreement in recovery with the other methods. In an environment of partial segmentation, PARSLR recovered the hippocampus intensity with the most accuracy, with Rousset's method showing errors when too many regions were defined. With only one homogeneous background identified, errors were also observed when using Rousset's method, with the recovered value being smaller than the measured uncorrected data in these particular evaluations. In the 2D measured data for the brain phantom, PARSLR recovered with an error of -0.91%, with LR recovering to -5.23%, for a selected region of cortex. Rousset's method with a homogeneous background recovered with an error of -6.50%. With the remaining pixels set as individual regions, Rousset's method became ill-conditioned with an error of -157.00%. The method therefore showed good recovery in regions that are only partly segmentable.
We propose that the approach is of particular importance for: studies with pathological abnormalities where complete and accurate segmentation across or within a sub-volume of the image volume is challenging; and regions of the brain containing heterogeneous structures which cannot be accurately segmented from co-registered images.",2010,0, 5744,On the Threat of Metastability in an Asynchronous Fault-Tolerant Clock Generation Scheme,"Due to their handshake-based flow control, asynchronous circuits generally do not suffer from metastability issues as much as synchronous circuits do. We will show, however, that fault effects like single-event transients can force (sequential) asynchronous building blocks such as Muller C-Elements into a metastable state. Using the example of a fault-tolerant clock generation scheme, we will illustrate that metastability could overcome conventional error containment boundaries, and that, ultimately, a single metastable upset could cause even a multiple Byzantine fault-tolerant system to fail. In order to quantify this threat, we performed analytic modeling and simulation of the elastic pipelines, which are at the heart of our physical implementation of the fault-tolerant clocks. Our analysis results reveal that only transient pulses of some very specific width can trigger metastable behavior. So even without consideration of other masking effects, the probability of a metastable upset propagating through a pipeline is fairly small. Still, however, a thorough metastability analysis is mandatory for circuits employed in high-dependability applications.",2009,0, 5745,"A system for simultaneously measuring contact force, ultrasound, and position information for use in force-based correction of freehand scanning","During freehand ultrasound imaging, the sonographer places the ultrasound probe on the patient's skin. This paper describes a system that simultaneously records the position of the probe, the contact force between the probe and skin, and the ultrasound image. The system consists of an ultrasound machine, a probe, a force sensor, an optical localizer, and a host computer. Two new calibration methods are demonstrated: a temporal calibration to determine the time delay between force and position measurements, and a gravitational calibration to remove the effect of gravity on the recorded force. Measurements made with the system showed good agreement with those obtained from a standard materials testing machine. The system's uses include three-dimensional (3-D) ultrasound imaging, force-based deformation correction of ultrasound images, and indentation testing.",2005,0, 5746,Confidentiality and Real Errors: A Contradiction?,"During industrial software development and deployment, a wealth of data is accumulated, which could be used for the evolution and refinement of methods and tools for error analysis, statistical evaluation of errors, dynamic handling of errors, and the prediction of faults and failures. Unfortunately, this data is always classified as highly sensitive as it contains customer related information, quality and quality assurance related information, and gives insights into internal development processes. There is a need for neutralization techniques to overcome these hurdles.",2006,0, 5747,Floating-point error analysis based on affine arithmetic,"During the development of floating-point signal processing systems, an efficient error analysis method is needed to guarantee the output quality.
We present a novel approach to floating-point error bound analysis based on affine arithmetic. The proposed method not only provides a tighter bound than the conventional approach, but is also applicable to any arithmetic operation. The error estimation accuracy is evaluated across several different applications which cover linear operations, nonlinear operations, and feedback systems. The accuracy decreases with the depth of the computation path and is also affected by the linearity of the floating-point operations.",2003,0,
5748,Multiple fault diagnostics for communicating nondeterministic finite state machines,"During the last decade, different methods were developed to produce optimized test sequences for detecting faults in communication protocol implementations. However, the application of these methods gives only limited information about the location of detected faults. We propose a complementary step, which localizes the faults once detected. It consists of a generalized diagnostic algorithm for the case where more than one fault may be present in the transitions of a system represented by communicating nondeterministic finite state machines. If existing faults are detected, this algorithm permits the generation of a minimal set of diagnoses, each of which is formed by a set of transitions suspected of being faulty. A simple example is used to demonstrate the functioning of the proposed diagnostic algorithm. The complexity of each step in the algorithm is calculated.",2001,0,
5749,Downhole Pressure Gauge Temperature Correction Model Design,"During pressure measurement, the measured value may drift with changing temperature; therefore, a special mathematical model should be established to correct the measurement. Several temperature points were selected for calibration, and line fitting with the least-squares method was applied to the pressure test data at every temperature point. Between two adjacent temperature points, a piecewise linear interpolation function was used, so the original curve was replaced with a polyline. A slicing-plane fit was constructed in a three-dimensional space formed from the values of pressure, temperature and measurement; a three-dimensional graphic of the slicing-plane fit was built in Matlab. The results, generated from the actual test data and corrected according to the mathematical model, coincided with the facts. This model can also be applied to correct pressure in general application situations.",2010,0,
5750,Study on Intelligent Analysis System for Engine Fault Diagnosis,"Depending on the types of engine failures, we develop means to extract the characteristic values of the signal in the frequency and time domains, and design an intelligent analysis system based on the TMS320VC5402. The detailed hardware and software design based on the DSP device is presented in the paper. The analysis system is high in speed, low in power consumption, and small enough in size to be portable. It is fit for run-time supervision and analysis.",2006,0,
5751,Rigorous development of an embedded fault-tolerant system based on coordinated atomic actions,"Describes our experience using coordinated atomic (CA) actions as a system structuring tool to design and validate a sophisticated embedded control system for a complex industrial application that has high reliability and safety requirements. Our study is based on an extended production cell model, the specification and simulator for which were defined and developed by FZI (Forschungszentrum Informatik, Germany).
This ""fault-tolerant production cell"" represents a manufacturing process involving redundant mechanical devices (provided in order to enable continued production in the presence of machine faults). The challenge posed by the model specification is to design a control system that maintains specified safety and liveness properties even in the presence of a large number and variety of device and sensor failures. Based on an analysis of such failures, we provide details of: (1) a design for a control program that uses CA actions to deal with both safety-related and fault tolerance concerns and (2) the formal verification of this design based on the use of model checking. We found that CA action structuring facilitated both the design and verification tasks by enabling the various safety problems (involving possible clashes of moving machinery) to be treated independently. Even complex situations involving the concurrent occurrence of any pairs of the many possible mechanical and sensor failures can be handled simply yet appropriately. The formal verification activity was performed in parallel with the design activity, and the interaction between them resulted in a combined exercise in ""design for validation""; formal verification was very valuable in identifying some very subtle residual bugs in early versions of our design which would have been difficult to detect otherwise",2002,0, 5752,Simulation-based diagnosis for crosstalk faults in sequential circuits,"Describes two methods of diagnosing crosstalk-induced pulse faults in sequential circuits using crosstalk fault simulation. These methods compare with observed responses and simulated values at primary outputs to identify a set of suspected faults that are consistent with the observed responses. In these methods, if the simulated values agree with the observed responses, then the simulated fault is added to a set of suspected faults, otherwise the simulated fault is removed from the set of suspected faults. The diagnosis methods repeat the above process for each time frame to identify the suspected faults. The first method is a basic method which determines the suspected fault list by using the knowledge about the first and last failures of the test sequence. The second method uses state information and focuses on reducing the CPU time for diagnosing the faults. The CPU time is reduced by using stored state information to calculate the primary output values at the present time frame. Experimental results for ISCAS'89 benchmark circuits show that the number of suspected faults obtained by our methods is sufficiently small, and the second method is substantially faster than the first method",2001,0, 5753,On Bypassing Blocking Bugs during Post-Silicon Validation,"Design errors (or bugs) inadvertently escape the pre- silicon verification process. Before committing to a re-spin, it is expected that the escaped bugs have been identified during post-silicon validation. This is however hindered by the presence of blocking bugs in one erroneous module that inhibit the search for bugs in other parts of the chip that process data received from the erroneous module. 
In this paper we discuss how to design a novel embedded debug module that can bypass blocking bugs and aid the designer in validating the first silicon.",2008,0,
5754,H-RAFT - heuristic reachability analysis for fault tolerance protocols modelled in SDL,"Design flaws in fault tolerance techniques may lead to undesired consequences in particular fault cases under very special operating conditions. Such rare ""fault tolerance holes"" may be very difficult to reveal. This paper presents a novel approach directing the analysis towards potential weaknesses in a fault tolerance technique. A new algorithm based on special heuristics performs partial reachability analysis of SDL models describing fault-tolerant communication. It aims at finding violations of fault tolerance properties in an efficient way. The approach does not require knowledge of the model under investigation. The new algorithm is evaluated by experiments with realistic protocols - including a large model of an industrial system - and compared to the performance of known solutions.",2005,0,
5755,Relationship between design patterns defects and crosscutting concern scattering degree: an empirical study,"Design patterns are solutions to recurring design problems, aimed at increasing reuse, code quality and, above all, maintainability and resilience to changes. Despite such advantages, the usage of design patterns implies the presence of crosscutting code implementing the pattern usage and access from other system components. When the system evolves, the presence of crosscutting code can cause repeated changes, possibly introducing defects. This paper reports an empirical study showing that, for three open source projects, the number of defects in design-pattern classes is in several cases correlated with the scattering degree of their induced crosscutting concerns, and also varies among different kinds of patterns.",2009,0,
5756,Aspects of the Development of Secure and Fault-Resistant Hardware,"Designing ""secure hardware"", such as a chip card controller, is a challenging task for hardware manufacturers: ever more numerous and sophisticated attacks generate a need for ever more countermeasures. Developers of these devices have to live with certain additional constraints, and this does not make their life easier. The difficulties that the designer of such a system is confronted with are pointed out. This might give the scientific community some impression of the problems that would be interesting to solve.",2008,0,
5757,Representativeness analysis of injected software faults in complex software,"Despite the existence of several techniques for emulating software faults, there are still open issues regarding the representativeness of the faults being injected. An important aspect, not considered by existing techniques, is the non-trivial activation condition (trigger) of real faults, which causes them to elude testing and remain hidden until operation. In this paper, we investigate how the representativeness of injected software faults can be improved with respect to the representativeness of triggers, by proposing a set of generic criteria to select representative faults from a faultload. We used the G-SWFIT technique to inject software faults in a DBMS, resulting in over 40 thousand faults and 2 million runs of a real test suite. We analyzed faults with respect to their triggers, and concluded that a non-negligible share (15%) would not realistically elude testing.
Our proposed criteria decreased the percentage of non-elusive faults in the faultload, improving its representativeness.",2010,0,
5758,Router group monitoring: making traffic trajectory error detection more efficient,"Detecting errors in traffic trajectories (i.e., packet forwarding paths) is important to operational networks. Several different traffic monitoring algorithms such as Trajectory Sampling, PSAMP, and Fatih can be used for traffic trajectory error detection. However, a straightforward application of these algorithms will incur the overhead of simultaneously monitoring all network interfaces in a network for the packets of interest. In this paper, we propose a novel technique called router group monitoring to improve the efficiency of trajectory error detection by only monitoring the periphery interfaces of a set of selected router groups. We analyze a large number of real network topologies and show that effective router groups with high trajectory error detection rates exist in all cases. However, for router group monitoring to be practical, those effective router groups must be identified efficiently. To this end, we develop an analytical model for quickly and accurately estimating the detection rates of different router groups. Based on this model, we propose an algorithm to select a set of router groups that can achieve complete error detection and low monitoring overhead. Finally, we show that the router group monitoring technique can significantly improve the efficiency of trajectory error detection based on Trajectory Sampling or Fatih.",2010,0,
5759,Fault-Tolerant Algorithms for Detecting Event Regions in Wireless Sensor Networks Using Statistical Hypothesis Test,"Detecting event regions in a monitored environment is a canonical task of wireless sensor networks (WSNs). It is a hard problem because sensor nodes are prone to failures and have scarce energy. In this paper, we seek distributed and localized algorithms for fault-tolerant event region detection. Most existing algorithms only assume that events are spatially correlated, but we argue that events are usually both spatially and temporally correlated. By examining the temporal correlation of sensor measurements, we propose two detection algorithms by applying statistical hypothesis test (SHT). Our analyses show that the SHT-based algorithm is more accurate in detecting event regions. Moreover, it is more energy efficient since it avoids frequent measurement exchanges. In order to improve the capability of fault recognition, we extend the SHT-based algorithm by examining both spatial and temporal correlations of sensor measurements, and our analyses show that the extended SHT-based algorithm can recognize almost all faults when the sensor network is densely deployed.",2008,0,
5760,A new wavelet-based method for detection of high impedance faults,"Detecting high impedance faults is one of the challenging issues for electrical engineers. Over-current relays can only detect some of the high impedance faults. Distance relays are unable to detect faults with impedance over 100 Ω. In this paper, by using an accurate model for high impedance faults, a new wavelet-based method is presented. The proposed method, which employs a three-level neural network system, can successfully differentiate high impedance faults from other transients. The paper also thoroughly analyzes the effect of the choice of mother wavelet on the detection performance.
Simulation results, obtained using the PSCAD/EMTDC software, are summarized.",2005,0,
5761,Statistical software debugging: From bug predictors to the main causes of failure,"Detecting latent errors is a key challenging issue in the software testing process. Latent errors can best be detected by bug predictors. A bug predictor manifests the effect of a bug on the program execution state. The aim has been to find the smallest reasonable subset of the bug predictors manifesting all possible bugs within a program. In this paper, a new algorithm for finding the smallest subset of bug predictors is presented. The algorithm firstly applies a LASSO method to detect program predicates which have a relatively higher effect on the termination status of the program. Then, a ridge regression method is applied to select a subset of the detected predicates as independent representatives of all the program predicates. Program control and data dependency graphs can then be applied to find the causes of the bugs represented by the selected bug predictors. Our proposed approach has been evaluated on two well-known test suites. The experimental results demonstrate the effectiveness and accuracy of the proposed approach.",2009,0,
5762,Detecting Malicious Packet Dropping in the Presence of Collisions and Channel Errors in Wireless Ad Hoc Networks,"Detecting malicious packet dropping is important in ad hoc networks to combat a variety of security attacks such as blackhole, greyhole, and wormhole attacks. We consider the detection of malicious packet drops in the presence of collisions and channel errors and describe a method to distinguish between these types. We present a simple analytical model for packet loss that helps a monitoring node to detect malicious packet dropping attacks. The model is analyzed and evaluated using simulations. The results show that it is possible to detect malicious packet drops in the presence of collisions and channel errors.",2009,0,
5763,GPS cycle slip detection and correction based on high order difference and Lagrange interpolation,"Detection and correction of cycle slips is one of the key problems of GPS carrier phase measurement in GPS data processing. This paper takes the L1 carrier phase observation data as the research object and, on the MATLAB platform, makes an advanced study of cycle slip detection and correction. It puts forward a combined algorithm that uses high-order time differences to detect and correct large cycle slips (more than two cycles) and then uses Lagrange interpolation to correct small cycle slips. The experiments lead to the conclusion that the combined algorithm performs very well, correcting small cycle slips to within an error of one cycle.",2009,0,
5764,Error analysis of free-form surfaces for manufacturing applications,"Error analysis of free-form surfaces is a requirement to assure quality and to reduce manufacturing costs and rework. This paper proposes a new approach and algorithms for the error analysis of free-form surfaces. Given the measured surface as input, the approach first uses a statistical method to determine the number of test points with a suitable sample size for shape error analysis. Then, the system applies a robust mathematical model, implicit polynomials (IP), to construct the model of the test points. To perform a detailed comparison of the shapes, the CAD model is geometrically adjusted to the input using a model-based matching algorithm developed in this paper.
Once the CAD model is adjusted, it is compared with the input to reveal the errors between their shapes. To accomplish this task, a new shape matching algorithm is developed. Experimental results on error analysis of a variety of machined aircraft metal skins are reported to show the validity of the proposed methodology.",2009,0,
5765,Video Error Concealment Using Spatio-Temporal Boundary Matching and Partial Differential Equation,"Error concealment techniques are very important for video communication since compressed video sequences may be corrupted or lost when transmitted over error-prone networks. In this paper, we propose a novel two-stage error concealment scheme for erroneously received video sequences. In the first stage, we propose a novel spatio-temporal boundary matching algorithm (STBMA) to reconstruct the lost motion vectors (MV). A well defined cost function is introduced which exploits both the spatial and temporal smoothness properties of video signals. By minimizing the cost function, the MV of each lost macroblock (MB) is recovered and the corresponding reference MB in the reference frame is obtained using this MV. In the second stage, instead of directly copying the reference MB as the final recovered pixel values, we use a novel partial differential equation (PDE) based algorithm to refine the reconstruction. We minimize, in a weighted manner, the difference between the gradient field of the reconstructed MB in the current frame and that of the reference MB in the reference frame under a given boundary condition. A weighting factor is used to control the regulation level according to the local blockiness degree. With this algorithm, the annoying blocking artifacts are effectively reduced while the structures of the reference MB are well preserved. Compared with the error concealment feature implemented in the H.264 reference software, our algorithm is able to achieve significantly higher PSNR as well as better visual quality.",2008,0,
5766,Novel Convergence Model for Efficient Error Concealment using Information Hiding in Multimedia Streams,"Error concealment using information hiding has been an efficient tool to combat channel impairments that degrade the transmitted data quality by introducing channel errors/packet losses. The proposed model takes a stream of multimedia content, and the binarised stream is subjected to a bit-level enhanced mapping procedure (PRASAN - Enhanced NFD approach) accompanied by a set of convergence models that ensure a high degree of convergence for a given error norm. The mapping is performed between the current frame and the previous frame in the case of video data. This approach, often referred to as correlation generation, is followed by the generation of a convergence mathematical function. This function is derived by trying out the various convergence methodologies in a weighted round-robin environment and choosing the best matching function by computing the mean square error. This error is termed map-fault and is kept to a minimum. The test data are subjected to noisy channel environments, and the power signal-to-noise ratios obtained experimentally firmly support the advantage of the proposed methodology in comparison to existing approaches.",2007,0,
5767,A Fast and Efficient Spatial Error Concealment for Intra-coded Frames,"Error concealment with good restored video quality and low computational cost is strongly required for video streams transmitted over error-prone channels, especially for real-time applications.
A fast and efficient spatial error concealment algorithm is proposed for intra-frames in the decoding phase, which utilizes a new matching criterion and avoids computationally intensive tasks such as edge detection and motion estimation. Moreover, this algorithm is capable of reproducing the main edges in the image by maintaining consistency with neighboring spatial information. Experimental results indicate the proposed algorithm can attain well-restored quality of intra-frames both subjectively and objectively. Meanwhile, the computational cost of the proposed algorithm is significantly reduced compared to a temporal error concealment method, yet the restored quality is subjectively almost the same.",2008,0,
5768,On-line Monitoring of Real Time Applications for Early Error Detection,"Error confinement technologies have proven their efficiency in improving software dependability. Such mechanisms usually require efficient error detectors to swiftly signal any misbehaviour. Real-time systems, due to their timing constraints, require a richer description of correct and/or erroneous states that includes timing aspects. This paper presents real-time error detectors that can be automatically generated from formal models of the expected behaviours of software applications. The considered specifications provide the means to define quantitative temporal constraints on the execution of the application. These detectors check at run-time that the current execution matches its specification. The paper's contribution is twofold. Firstly, at the theoretical level, we provide a formal definition of the expected behaviour of such detectors, ensuring a predictable behaviour of the detector system. Secondly, at a practical level, we provide a description of the complete generation process, from the models to the code of the detector.",2008,0,
5769,Performance Analysis of Selected Error Control Protocols in Wireless Multimedia Sensor Networks,"Error control is an important mechanism for providing robust multimedia communication in wireless sensor networks. Although there have been several research works analyzing error control mechanisms in wireless multimedia networks and wireless sensor networks, none of them is directly applicable to wireless multimedia sensor networks (WMSNs), which combine the resource and performance constraints of WSNs with the QoS requirements of multimedia communications. In this paper, we comprehensively evaluate the performance of several error control mechanisms in WMSNs. The results of our analysis provide an extensive comparison between automatic repeat request (ARQ), forward error correction (FEC), and hybrid FEC/ARQ error control mechanisms in terms of frame loss rate, frame peak signal-to-noise ratio (PSNR), and energy efficiency.",2010,0,
5770,Modularizing error recovery,"Error recovery in compilers often involves significant amounts of cognitive effort to identify the code and execution points in the compiler that are related to identifying and handling input-program errors. This is because current implementations fail to explicitly identify error-related control dependencies, and to separately characterize the actions to take when programming errors are detected. As a result, compiler writers need to navigate and understand much of the compiler source in order to replace or extend error recovery actions.
In the context of the AspectJ compiler (ajc), this paper encapsulates error concerns as aspects, yielding improved modularity through pointcuts that explicitly declare the loci of error detection and advice that exposes the extension points for error handling.",2009,0,
5771,An error resilience method for depth in stereoscopic 3D video,"Error-resilient stereoscopic 3D video can ensure robust 3D video communication, especially over high-error-rate wireless channels. In this paper, an error resilience method is proposed for the depth data of stereoscopic 3D video using data partitioning. Although the data partitioning method is available for 2D video, its extension to depth information has not been investigated in the context of stereoscopic 3D video. Simulation results show that the depth data is less sensitive to error and should be partitioned towards the end of the data partition block. The partitioned depth data is then applied to an error resilience method, namely multiple description coding (MDC), to code the 2D video and the depth information. Simulation results show improved performance using the proposed depth partitioning on MDC compared to the original MDC in an error-prone environment.",2009,0,
5772,Error Vector Magnitude Analysis for OFDM Systems,"Error vector magnitude (EVM) is a popular figure of merit adopted by various communication standards for evaluating in-band distortions introduced in a communication system. In this paper, we regard EVM as a random variable and investigate its statistical distributions as the result of the following distortion mechanisms: phase noise, amplitude clipping, power amplifier nonlinearities, and gain/phase imbalances in orthogonal frequency division multiplexing (OFDM) systems. We relate key parameters characterizing the various distortion mechanisms to the statistical behavior of EVM; such statistical behavior can be used to verify compliance of the transmit signals with the requirements of the standard.",2006,0,
5773,Error vector magnitude to SNR conversion for nondata-aided receivers,"Error vector magnitude (EVM) is one of the widely accepted figures of merit used to evaluate the quality of communication systems. In the literature, EVM has been related to signal-to-noise ratio (SNR) for data-aided receivers, where preamble sequences or pilots are used to measure the EVM, or under the assumption of high SNR values. In this paper, this relation is examined for nondata-aided receivers and is shown to perform poorly, especially for low SNR values or high modulation orders. The EVM for nondata-aided receivers is then evaluated and its value is related to the SNR for quadrature amplitude modulation (QAM) and pulse amplitude modulation (PAM) signals over additive white Gaussian noise (AWGN) channels and Rayleigh fading channels, and for systems with IQ imbalances. The results show that the derived equations can be used to reliably estimate SNR values using EVM measurements made on detected data symbols. Thus, the presented work can be quite useful for measurement devices such as vector signal analyzers (VSA), where EVM measurements are readily available.",2009,0,
5774,Error-Correcting Codes Based on Quasigroups,"Error-correcting codes based on quasigroup transformations are proposed. For the proposed codes, similar to recursive convolutional codes, correlation exists between any two bits of a codeword, which can, theoretically, have infinite length.
However, in contrast to convolutional codes, the proposed codes are nonlinear and almost random: for codewords of sufficiently large length, the distribution of letters, pairs of letters, triples of letters, and so on, is uniform. Simulation results of bit-error probability for several codes over binary symmetric channels are presented.",2007,0,
5775,Experimental Validation of a Tool for Predicting the Effects of Soft Errors in SRAM-Based FPGAs,"Estimating the impact of single event effects (SEEs) on SRAM-based FPGA devices is a major issue in adopting them in radiation environments such as space or high altitude. Among the available approaches, we proposed an analytical method to predict SEE effects based on the analysis of the circuit the FPGA implements, which requires neither simulation nor fault injection. In this paper we provide an experimental validation of this approach, by comparing the results it provides with those coming from accelerated testing. We adopted our analytical method to compute the error cross section of a design implemented on SRAM-based FPGA devices. We then compared the obtained figure with that obtained by accelerated testing. Experimental analysis demonstrated that accelerated testing closely matches the figures the analytical method provides.",2007,0,
5776,The Study of Elevator Fault Diagnosis Based on Multi-Agent System,"The elevator is a special kind of complex mechanical and electrical equipment, and its faults are complex and dispersed. A single failure-matching mode cannot satisfy the requirements of fast and general fault diagnosis. Focusing on this problem, this paper applies intelligent agent technology to the fault diagnosis system, building on traditional fault diagnosis methods. The elevator door system (EDS) is taken as the target of comparative research. Firstly, a MAS (multi-agent system) fault diagnosis system for the elevator door system, based on BDI agents, is constructed. Then the paper expounds the framework and diagnosis process of the MAS model. Finally, it analyzes the methods and process of EDS fault diagnosis based on MAS.",2009,0,
5777,A New Mitigation Approach for Soft Errors in Embedded Processors,"Embedded processors, such as processor macros inside modern FPGAs, are becoming widely used in many applications. As soon as these devices are deployed in radiation environments, designers need hardening solutions to mitigate radiation-induced errors. When low-cost applications have to be developed, the traditional hardware redundancy-based approaches exploiting m-way replication and voting are no longer viable, as they are too expensive, and new mitigation techniques have to be developed. In this paper we present a new approach, based on processor duplication, checkpointing, and rollback, to detect and correct soft errors affecting the memory elements of embedded processors. Preliminary fault injection results on a PowerPC-based system confirmed the efficiency of the approach.",2008,0,
5778,SES-based framework for fault-tolerant systems,"Embedded real-time systems are often used in harsh environments, for example engine control systems in automotive vehicles. In such ECUs (Engine Control Units), faults can lead to serious accidents. In this paper we propose a safety embedded architecture based on coded processing. This framework needs only two channels to provide fault tolerance and allows the detection and identification of permanent and transient faults.
Once a fault is detected by an observer unit, the SES guard makes it visible and initiates a suitable failure reaction.",2010,0,
5779,Application of an automated PD failure identification system for EMC-assessment strategies of multiple PD defects at HV-insulators,"EMC-assessment of the field emission of high voltage insulators can be performed using phase-angle-resolved partial discharge diagnosis. However, with conventional PD-detection systems no satisfactory statements about multiple PD defects are possible because of the highly dynamic apparent charge values of different discharge phenomena. This measurement problem can be solved by a new approach. For this purpose, phase-resolved pulse sequence analysis methods are suitable diagnosis tools that do not use the apparent charge as a dominating influence. A recently developed feature extraction method based on consecutive u/ values shows good classification results. The problem of multiple PD defects occurring at the same time is a new challenge for PD diagnosis systems. For the investigation, two reference databases are generated. With the database which takes these multiple PD defects into account, the diagnosis system WinTED of the University of Wuppertal is able to identify, with high reliability, actual measurements made by the University of Dortmund.",2000,0,
5780,Accelerating error correction in high-throughput short-read DNA sequencing data with CUDA,"Emerging DNA sequencing technologies open up exciting new opportunities for genome sequencing by generating read data with massive throughput. However, the produced reads are significantly shorter and more error-prone compared to the traditional Sanger shotgun sequencing method. This poses challenges for de-novo DNA fragment assembly algorithms in terms of both accuracy (to deal with short, error-prone reads) and scalability (to deal with very large input data sets). In this paper we present a scalable parallel algorithm for correcting sequencing errors in high-throughput short-read data. It is based on spectral alignment and uses the CUDA programming model. Our computational experiments on a GTX 280 GPU show runtime savings between 10 and 19 times (for different error rates, using simulated datasets as well as real Solexa/Illumina datasets).",2009,0,
5781,Detection of a New Surface Killer Defect on Starting Si Material using the Nomarski Principle of Differential Interference Contrast,End-of-line device failure analysis and inline defectivity investigations revealed a previously undetected surface killer defect that was generated during a front-side polish process of integrated circuit Si substrate starting material. A surface defect impacting over 20% of the wafer was found to exist prior to Si epitaxial growth. This surface defect was discovered using the Surfscan SP1 TBI™ inspection tool and evaluated using the Leica INS 3000™ and the KLA-Tencor EV300 SEMI™. A bright field inspection based upon the Nomarski principle of Differential Interference Contrast (DIC) was employed and revealed this previously undetected polish defectivity mechanism; this inline detection allowed the Si supplier to implement root-cause fixes for the issue. These newly detected defects have a very low surface profile and were below the detection range of typical inspection methodologies currently in use for starting Si substrates.
The implementation and use of Nomarski DIC inspection principles were highly manufacturable with regard to tool-to-tool sensitivity matching and the portability of inspection recipes. The new DIC inspection method enabled the Silicon supplier to identify the root cause of a new defect mechanism and extend the defect detection capability of an existing tool set.,2007,0,
5782,A Fault-Tolerant Scheme for Complex Transaction Patterns in J2EE,"End-to-end reliability is an important issue in building large-scale distributed enterprise applications based on multi-tier architecture, but the support for reliability as adopted in conventional replication or transaction mechanisms is not enough due to their distinct objectives: replication guarantees the liveness of computational operations by using forward error recovery, while transactions guarantee the safety of application data by using backward error recovery. Combining the two mechanisms for stronger reliability is a challenging task. Current solutions, however, are typically based on the assumption of a simple transaction pattern where a request from a single client executes in the context of exactly one transaction at the middle-tier application server, and seldom consider complex patterns, such as a client transaction enclosing multiple client requests or nested transactions. In this paper, we first identify four transaction pattern classes, and then propose a fault-tolerant scheme that can uniformly provide exactly-once semantic reliability support for these patterns. In this scheme, application servers are passively replicated to endow business logic with high reliability and high availability. In addition, by replicating the transaction coordinator, the blocking problem of the 2PC protocol during distributed transaction processing is eliminated. We have implemented this approach and integrated it into our own J2EE application server, OnceAS. Also, its effectiveness is discussed for different transaction patterns and the corresponding performance is evaluated.",2006,0,
5783,Multistrategy ensemble learning: reducing error by combining ensemble learning techniques,"Ensemble learning strategies, especially boosting and bagging decision trees, have demonstrated impressive capacities to improve the prediction accuracy of base learning algorithms. Further gains have been demonstrated by strategies that combine simple ensemble formation approaches. We investigate the hypothesis that the improvement in accuracy of multistrategy approaches to ensemble learning is due to an increase in the diversity of ensemble members that are formed. In addition, guided by this hypothesis, we develop three new multistrategy ensemble learning techniques. Experimental results in a wide variety of natural domains suggest that these multistrategy ensemble learning techniques are, on average, more accurate than their component ensemble learning techniques.",2004,0,
5784,Error Pattern Analysis of Augmented Array Codes Using a Visual Debugging Tool,"Error analysis is very important in determining the error control capabilities of any code. Traditionally these are measured by analyzing BER performance. We have designed and implemented a visual debugging tool (VDT) for tracing the trellis decoding process of a class of augmented array codes. We have implemented a user-friendly interface in the VDT for introducing errors at specifically selected positions of the array. Also, certain error patterns can be added to all possible array positions in sequence.
The observation of the effect of these purposely inserted errors is facilitated by the VDT interface. The VDT can be effectively employed to measure the error detection and/or correction power of the Viterbi decoder. The tool allows users to trace the decoding process step by step, so it can be utilized for teaching and research activities in code design and their practical applications.",2006,0,
5785,Study of machine fault diagnosis system using neural networks,"Develops a machine fault diagnosis system using neural networks and spectral analysis. A neural network, which has learning and memory capability, is applied to the fault diagnosis of the machine. By learning the normal and abnormal states of the target system, a neural network method is proposed which can diagnose a fault of the machine. The proposed fault diagnosis system is based on the spectrum of vibrations or sounds obtained from the operating machine. The difference between normal and abnormal data becomes clearer when comparing time series data. Utilizing changes in spectral data is therefore suitable for fault detection. It is shown that this method can detect unknown fault patterns. Fault diagnosis experiments are performed on both a wood slicing machine and an electromagnetic valve. The possibility of an online fault diagnosis system is examined through the construction of an online data processing system for an electromagnetic valve, and it is shown that the fault diagnosis can be performed in real time. Through these results, the effectiveness of the proposed fault diagnosis system is verified.",2002,0,
5786,Limitations of the Linux Fault Injection Framework to Test Direct Memory Access Address Errors,"Device drivers can be traced as the source of most operating system (OS) bugs. The Linux kernel includes a fault injection framework which developers can use to implement simple fault injection tools to test device drivers. This paper presents our results in applying the fault injection framework to inject DMA address errors. Our experiments show that while the injected errors reach the device driver, the asynchronous nature of DMA makes the framework an ill-suited approach if the fault injection campaign expects the errors to reach the hardware I/O devices, as when trying to test IOMMU implementations.",2008,0,
5787,On handling dependent evidence and multiple faults in knowledge fusion for engine health management,"Diagnostic architectures that fuse outputs from multiple algorithms are described as knowledge fusion or evidence aggregation. Knowledge fusion using a statistical framework such as Dempster-Shafer (D-S) has been used in the context of engine health management. Fundamental assumptions made by this approach include the notion of independent evidence and a single fault. In most real-world systems, these assumptions are rarely satisfied. Relaxing the single-fault assumption in D-S based knowledge fusion involves working with a hyper-power set of the frame of discernment. Computational complexity limits the practical use of such an extension. In this paper, we introduce the notion of mutually exclusive diagnostic subsets. In our approach, elements of the frame of discernment are subsets of faults that cannot be mistaken for each other, rather than failure modes. These subsets are derived using a systematic analysis of the connectivity and causal relationships between various components within the system. Specifically, we employ a special form of reachability analysis to derive such subsets.
The theory of D-S can be extended to handle dependent evidence for simple and separable belief functions. However, in the real world the conclusions of diagnostic algorithms might not take the form of simple or separable belief functions. In this paper, we present a formal definition of algorithm dependency based on three metrics: the underlying technique an algorithm is using, the sensors it is using, and the feature of the sensor that the algorithm is using. With this formal definition, we partition evidence into highly dependent, weakly dependent, and independent evidence. We present examples from a Honeywell auxiliary power unit to illustrate our modified D-S method of evidence aggregation.",2006,0,
5788,Spatial correction of echo planar imaging deformation for subject specific diffusion tensor MRI analysis,"Diffusion tensor magnetic resonance imaging (DT-MRI or DTI) is a specialized mode of MRI, used for functional imaging of the body's internal structures. Echo planar imaging (EPI), the currently accepted protocol for acquiring DTI, suffers from severe image artefacts, which complicate analysis of the data. We present our current progress in the registration of T2-weighted MRI acquired with EPI and conventional protocols. The reported algorithm uses a multiresolution registration method with a B-spline transform to correct for spatial deformation. This is the first step in the registration of a full DTI dataset. It was found that direct application of a popular B-spline algorithm can correct some deformation in the images, but further refinements are required to achieve more accurate results.",2008,0,
5789,Fault Tolerant BBMD in the BACnet/IP Protocol,"Digital communication networks have become a core technology in advanced building automation systems. BACnet (building automation and control networks) is a standard data communication protocol designed specifically for building automation and control systems. BACnet provides the BACnet/IP protocol for data communication through the Internet. A BACnet/IP device uses a BBMD (BACnet broadcasting management device) to deliver BACnet broadcast messages. In this study, we propose a fault tolerant BBMD in the BACnet/IP protocol. The fault tolerant BBMD improves the connectivity of BACnet/IP networks by passing the operation of the original BBMD to backup BBMDs. The fault tolerant BBMD was designed to be backward compatible with the original BACnet/IP devices. In this study, we implemented the fault tolerant BBMD and examined its validity using an experimental model.",2006,0,
5790,Imaging panel skew correction and auto-focusing in radiotherapy cone beam imaging,"Digital transducers, notably amorphous silicon panels, are being used in image guided radiotherapy. Using kilovoltage X-ray sources, panel transducers provide cone beam volume reconstruction. Theoretical considerations show that even sub-degree skew can result in the wandering of projection points across many detector elements, producing serious blurring of the cone beam reconstruction away from the imaging isocentre. The importance of identifying and correcting transducer skew is highlighted and shown to be of potentially greater importance than mechanical flex. Radiographic imaging of a suspended radio-opaque test object is described for direct characterisation of skew. The results of skew correction by image processing are presented.
The use of 'auto-focusing' grey level change detection algorithms in cone beam reconstruction images, following programmed incremental angular transformation of the projection profiles of a 'Rando' anthropomorphic phantom, is shown to have the potential for automating skew correction.",2004,0,
5791,MPE-IFEC: An enhanced burst error protection for DVB-SH systems,"Digital video broadcasting-satellite services to handhelds (DVB-SH) is a new hybrid satellite/terrestrial system for the broadcasting of multimedia services to mobile receivers. To improve the link budget, DVB-SH uses a long interleaver to cope with land mobile satellite (LMS) channel impairments. Multi-protocol encapsulation-inter-burst forward error correction (MPE-IFEC) is an attractive alternative to the long physical interleaving option of the standard and is suited for terminal receivers with limited de-interleaving memory. In this paper, we present a tutorial overview of this powerful error-correcting technique and report new simulation results that show MPE-IFEC improves the quality of broadcast mobile television (TV) reception.",2009,0,
5792,Fault tolerant MPEG-4 Digital Video Recorder,"The digital video recorder (DVR), a security device that records video onto storage devices such as hard disks, is becoming more and more popular nowadays. Currently, many companies try to develop advanced DVR systems and add new state-of-the-art functions to them, such as motion detection and image enhancement. One of the most important factors of DVR systems is fault tolerance, the ability to record, play back, and store video data on a storage device without interruption or data loss. In this paper, we propose a fault-tolerant DVR system that supports the MPEG-4 codec for a high video compression ratio on an embedded PowerPC processor. Our fault-tolerant DVR system guarantees consumers against losing recorded video data by applying our new file system and overwriting policy. A graphical user interface with various functionalities can also satisfy consumers' demands.",2008,0,
5793,Dynamic Fault Tree Analysis Using Input/Output Interactive Markov Chains,"Dynamic fault trees (DFT) extend standard fault trees by allowing the modeling of complex system components' behaviors and interactions. Being a high level model and easy to use, DFT are experiencing growing success among reliability engineers. Unfortunately, a number of issues still remain when using DFT. Briefly, these issues are (1) a lack of formality (syntax and semantics), (2) limitations in modular analysis and thus vulnerability to the state-space explosion problem, and (3) a lack of modular model-building. We use the input/output interactive Markov chain (I/O-IMC) formalism to analyse DFT. I/O-IMC have a precise semantics and are an extension of continuous-time Markov chains with input and output actions. In this paper, using the I/O-IMC framework, we address and resolve issues (2) and (3) mentioned above. We also show, through some examples, how one can readily extend the DFT modeling capabilities using the I/O-IMC framework.",2007,0,
5794,Supporting Reconfigurable Fault Tolerance on Application Servers,"Dynamic reconfiguration support in application servers is a solution to meet the demands for flexible and adaptive component-based applications. However, when an application is reconfigured, its fault-tolerance mechanism should be reconfigured as well.
This is one of the crucial problems we have to solve before a fault-tolerant application can be dynamically reconfigured at runtime. This paper proposes a fault-tolerant sandbox to support reconfigurable fault-tolerant mechanisms on application servers. We present how the sandbox integrates multiple error detection and recovery mechanisms, and how to reconfigure these mechanisms at runtime, especially for coordinated recovery mechanisms. We implement a prototype and perform a set of controlled experiments to demonstrate the sandbox's capabilities.",2009,0,
5795,Razor: circuit-level correction of timing errors for low-power operation,Dynamic voltage scaling is one of the more effective and widely used methods for power-aware computing. We present a DVS approach that uses dynamic detection and correction of circuit timing errors to tune the processor supply voltage and eliminate the need for voltage margins.,2004,0,
5796,Learning Mechanisms for Intelligent Fault Diagnosis,"Early diagnosis of plant faults/deviations is a critical factor for optimized and safe plant operation and maintenance. Although smart controllers and diagnosis systems are available and widely used in chemical plants, some faults cannot be detected. A major reason is the lack of learning techniques that can learn from operational running data and previous abnormal cases. In addition, operator and maintenance engineer opinions and observations are not well used, and useful diagnosis knowledge is ignored. This research paper presents the framework of the proposed learning mechanisms in different stages of an integrated fault diagnostic system, called FDS. The proposed idea will support plant operation and maintenance planning as well as overall plant safety.",2006,0,
5797,A Comparative Study into Architecture-Based Safety Evaluation Methodologies Using AADL's Error Annex and Failure Propagation Models,"Early quality evaluation and support for decisions that affect quality characteristics are among the key incentives to formally specify the architecture of a software intensive system. The Architecture Analysis and Description Language (AADL) with its error annex is a new and promising architecture modeling language that supports analysis of safety and other dependability properties. This paper reviews the key concepts introduced by the error annex and compares it to existing safety evaluation techniques regarding its ability to provide modeling, process, and tool support. Based on this review and comparison, its strengths and weaknesses are identified and possible improvements to the model-driven safety evaluation methodology based on AADL's error annex are highlighted.",2008,0,
5798,Unequal Error Protection for Robust Streaming of Scalable Video Over Packet Lossy Networks,"Efficient bit stream adaptation and resilience to packet losses are two critical requirements in scalable video coding for transmission over packet-lossy networks. Various scalable layers have highly distinct importance, measured by their contribution to the overall video quality. This distinction is especially significant in scalable H.264/advanced video coding (AVC) video, due to the employed prediction hierarchy and the drift propagation that occurs when quality refinements are missing. Therefore, efficient bit stream adaptation and unequal protection of these layers are of special interest in scalable H.264/AVC video.
This paper proposes an algorithm to accurately estimate the overall distortion of decoder-reconstructed frames due to enhancement layer truncation, drift/error propagation, and error concealment in scalable H.264/AVC video. The method recursively computes the total expected decoder distortion at the picture level for each layer in the prediction hierarchy. This ensures low computational cost since it bypasses highly complex pixel-level motion compensation operations. Simulation results show accurate distortion estimation at various channel loss rates. The estimate is further integrated into a cross-layer optimization framework for optimized bit extraction and content-aware channel rate allocation. Experimental results demonstrate that precise distortion estimation enables our proposed transmission system to achieve a significantly higher average video peak signal-to-noise ratio compared to a conventional content-independent system.",2010,0,
5799,Model-based fault diagnosis in electric drives using machine learning,"The electric motor and the power-electronics-based inverter are the major components in industrial and automotive electric drives. In this paper, we present a model-based fault diagnostics system developed using machine learning technology for detecting and locating multiple classes of faults in an electric drive. The power electronics inverter can be considered the weakest link in such a system from the hardware failure point of view; hence, this work is focused on detecting faults and finding which switches in the inverter cause the faults. A simulation model has been developed, based on the theoretical foundations of electric drives, to simulate the normal condition, all single-switch faults, and post-short-circuit faults. A machine learning algorithm has been developed to automatically select a set of representative operating points in the (torque, speed) domain, which in turn is sent to the simulated electric drive model to generate signals for the training of a diagnostic neural network, the fault diagnostic neural network (FDNN). We validated the capability of the FDNN on data generated by an experimental bench setup. Our research demonstrates that with a robust machine learning approach, a diagnostic system can be trained based on a simulated electric drive model, which can lead to correct classification of faults over a wide operating domain.",2006,0,
5800,Computing distribution system fault currents and voltages via numerically computed Thevenin equivalents and sensitivity matrices,"Distribution system circuit topologies, connections, and equipment pose interesting challenges to system analysts. An approach based upon fundamental circuit analysis principles for calculating fault currents of various types is presented. The basic approach consists of obtaining the prefault phase voltages and a phase Thevenin matrix at the fault location, and using the phase Thevenin matrix to solve the equations imposed by the boundary conditions of a given fault. The approach also obtains sensitivity matrices to be used for calculating post-fault voltages and currents at locations different from the fault location. The approach requests services from a multiphase power flow software component and a circuit model software component. The approach works for both radial and looped distribution systems. The analysis can evaluate fault currents at a node regardless of whether it is three-phase or non-three-phase and grounded or ungrounded.
Experience obtained from solving circuits with more than 5000 nodes will be reviewed.",2004,0,
5801,Error Reduction on Automatic Segmentation in Microarray Image,"DNA microarray hybridization is a popular high-throughput technique in academic as well as industrial genomics research. The microarray image is considered an important tool and powerful technology for large-scale gene sequence and gene expression analysis. There are many methods for analyzing microarray images by automatic segmentation or spot gridding. These methods all face the same problems of noise and tilt in the spot array, and strongly noisy images are difficult to process automatically. In this paper, we reduce the edge detection error caused by noise and tilted spot arrays. We propose an automatic segmentation method that uses techniques from video segmentation to process the microarray image. With the proposed method, we can reduce automatic spot segmentation errors and obtain more exact spot positions. Our method has the advantages of low computation and easy implementation. Finally, we compare the results with the ScanAlyze tool, which extracts spot positions and edges through a manual interface. We obtain a 1.43% average difference in the spot analysis ratio compared with ScanAlyze.",2007,0,
5802,Study of the effect of beam spreading on systematic Doppler flow measurement errors,"Doppler ultrasonic flowmeters are based on a single-ray approximation. We have studied the effect of beam spreading on systematic Doppler flow measurement errors by developing a theoretical ray tracing model and validating it with experiments. This paper will discuss experimental work and the use of ray tracing and finite element models to investigate this effect. This paper indicates some early trends which we have identified, but should be treated as ""work in progress"".",2010,0,
5803,Minimum-Threshold Crowbar for a Fault-Ride-Through Grid-Code-Compliant DFIG Wind Turbine,"Doubly fed induction generator (DFIG) technology is the dominant technology in the growing global market for wind power generation, due to the combination of variable-speed operation and a cost-effective partially rated power converter. However, the DFIG is sensitive to dips in supply voltage, and without specific protection to ride through grid faults, a DFIG risks damage to its power converter due to overcurrent and/or overvoltage. Conventional converter protection via a sustained period of rotor-crowbar closed circuit leads to poor power output and sustained suppression of the stator voltages. A new minimum-threshold rotor-crowbar method is presented in this paper, improving the fault response by reducing crowbar application periods to 11-16 ms, successfully diverting transient overcurrents, and restoring good power control within 45 ms of both fault initiation and clearance, thus enabling the DFIG to meet grid-code fault-ride-through requirements. The new method is experimentally verified and evaluated using a 7.5-kW test facility.",2010,0,
5804,Research of Refractive Correction Forecasting Software Technology Based on WEB Database,"DTK (Diode Thermal Keratoplasty) technology is a new type of laser refractive technology. To accumulate large sample data and improve the forecast accuracy of laser refractive correction, this thesis adopts a web-based database in a multi-layer architecture, ADO.NET technology for data access, and the B/S (Browser/Server) mode as the network solution.
The thesis also develops refractive correction prediction software based on the Web database to support publishing, updating, and accessing information through the Web, so that a more flexible device management tool can adapt to the information age; it also presents the achieved results, including the data management interface and the results of data analysis. The refractive correction prediction software based on the Web database has realized data accumulation for laser refractive correction and improved prediction accuracy.",2009,0, 5805,Control of the matrix converter based WECS for fault ride-through enhancement,"Due to steadily increasing wind power penetration, regulatory standards for grid interconnection have evolved to require that wind generation systems ride through disturbances such as faults and support the grid during such events. Keeping the converter online during and after short-circuit faults, and guaranteeing the actual standards of the converter connected to the grid, is becoming a very critical issue. With these goals, in this paper, an optimal control of a matrix converter (MC) based wind turbine has been developed, where adaptive fuzzy logic controls along with improved SVPWM switching are used extensively to ensure that current levels remain within design limits, even at greatly reduced voltage levels, thus enhancing the fault ride-through capability.",2010,0, 5806,On Rigorous Design and Implementation of Fault Tolerant Ambient Systems,"Developing fault tolerant ambient systems requires many challenging factors to be considered due to the nature of such systems, which tend to contain a lot of mobile elements that change their behavior depending on the surrounding environment, as well as the possibility of their disconnection and reconnection. It is therefore necessary to construct the critical parts of fault tolerant ambient systems in a rigorous manner. This can be achieved by deploying a formal approach at the design stage, coupled with a sound framework and support at the implementation stage. In this paper, we briefly describe a middleware that we developed to provide system structuring through the concepts of roles, agents, locations and scopes, making it easier for the developers to achieve fault tolerance. We then outline our experience in developing an ambient lecture system using the combination of the formal approach and our middleware",2007,0, 5807,Middleware for Resource-Aware Deployment and Configuration of Fault-Tolerant Real-time Systems,"Developing large-scale distributed real-time and embedded (DRE) systems is hard in part due to complex deployment and configuration issues involved in satisfying multiple quality of service (QoS) properties, such as real-timeliness and fault tolerance. This paper makes three contributions to the study of deployment and configuration middleware for DRE systems that satisfy multiple QoS properties. First, it describes a novel task allocation algorithm for passively replicated DRE systems to meet their real-time and fault-tolerance QoS properties while consuming significantly fewer resources. Second, it presents the design of a strategizable allocation engine that enables application developers to evaluate different allocation algorithms. Third, it presents the design of a middleware agnostic configuration framework that uses allocation decisions to deploy application components/replicas and configure the underlying middleware automatically on the chosen nodes.
These contributions are realized in the DeCoRAM (Deployment and Configuration Reasoning and Analysis via Modeling) middleware. Empirical results on a distributed testbed demonstrate DeCoRAM's ability to handle multiple failures and provide efficient and predictable real-time performance.",2010,0, 5808,Selection of Tripping Times for Distance Protection Using Probabilistic Fault Trees with Time Dependencies,"Distance protection of an electrical power system is analyzed. This protection is based on local and remote relays. The hazard is the event of remote circuit breaker tripping provided the local circuit breaker can be opened. Coordination of the operation of protection relays in the time domain is an important and difficult problem. Incorrect values of the delay times of protection relays can cause the hazard. In the paper, the impact of delay time settings on the performance of the protective scheme is analyzed using probabilistic fault trees with time dependencies (PFTTD). PFTTD are introduced in the paper and used for the above-mentioned hazard analyses. The outcomes are hazard probabilities as a function of delay time values.",2009,0, 5809,An Approach for Detecting and Distinguishing Errors versus Attacks in Sensor Networks,"Distributed sensor networks are highly prone to accidental errors and malicious activities, owing to their limited resources and tight interaction with the environment. Yet only a few studies have analyzed and coped with the effects of corrupted sensor data. This paper contributes the proposal of an on-the-fly statistical technique that can detect and distinguish faulty data from malicious data in a distributed sensor network. Detecting faults and attacks is essential to ensure the correct semantics of the network, while distinguishing faults from attacks is necessary to initiate a correct recovery action. The approach uses hidden Markov models (HMMs) to capture the error/attack-free dynamics of the environment and the dynamics of error/attack data. It then performs a structural analysis of these HMMs to determine the type of error/attack affecting sensor observations. The methodology is demonstrated with real data traces collected over one month of observation from motes deployed on the Great Duck Island",2006,0, 5810,Reduced Instrumentation and Optimized Fault Injection Control for Dependability Analysis,"Fault-injection based dependability analysis has proved to be an efficient means of predicting the behavior of a circuit in the presence of faults. Instrumentation-based techniques are in general used to perform the injection during simulation or emulation. The weak point of these techniques remains the characteristics obtained after modification of either the high-level description or the circuit netlist, especially when emulation is used. This paper proposes an instrumentation technique reducing the extra hardware and accelerating the fault injection campaigns thanks to optimized fault location addressing and parallel injection",2006,0, 5811,Characterization of Upset-Induced Degradation of Error-Mitigated High-Speed I/O's Using Fault Injection on SRAM Based FPGAs,"Fault-injection experiments on Virtex-II FPGAs quantify failure and degradation modes in I/O channels incorporating triple module redundancy (TMR).
With increasing frequency (to 100 MHz), full TMR under both I/O standards investigated (LVCMOS at 3.3V and 1.8V) shows that more configuration bits have a measurable detrimental performance effect when in error",2006,0, 5812,Transparent Fault-Tolerance Based on Asynchronous Virtual Machine Replication,"Fault-tolerance of services has received great attention for years, yet it is not clear how this can be done efficiently for different types of services. Current research focuses on finding strategies to provide fault-tolerance using commodity hardware, as independent of the service and as transparent to clients as possible. We propose a replication system to provide fault-tolerance for any type of service by running the service instance in a virtual machine and replicating the entire virtual machine using an asynchronous replication strategy. The testing prototype was implemented using the Xen hypervisor and an asynchronous replication strategy that gives better replication time than the synchronous one.",2010,0, 5813,Power supply induced common cause faults-experimental assessment of potential countermeasures,"Fault-tolerant architectures based on physical replication of components are vulnerable to faults that cause the same effect in all replicas. Short outages in a power supply shared by all replicas are a prominent example of such common cause faults. For systems in which the provision of a replicated power supply would cause prohibitive effort, the identification of reliable countermeasures against these effects is vital to maintain the required dependability level. In this paper we propose several such countermeasures, namely parity protection, voltage monitoring and time diversity of the replicas. We perform extensive fault injection experiments on three fault-tolerant dual core processor designs, one FPGA based and two commercial ASICs. These experiments provide evidence for the vulnerability of a completely unprotected dual core solution, while time diversity and voltage monitoring in combination with increased timing margins turn out particularly effective for eliminating common cause effects.",2009,0, 5814,"Fault tolerance for highly available internet services: concepts, approaches, and issues","Fault-tolerant frameworks provide highly available services by means of fault detection and fault recovery mechanisms. These frameworks need to meet different constraints related to the fault model strength, performance, and resource consumption. One of the factors that led to this work is the observation that current fault-tolerant frameworks are not always adapted to existing Internet services. In fact, most of the proposed frameworks are not transport-level- or session-level-aware, although the concerned services range from regular services like HTTP and FTP to more recent Internet services such as multimodal conferencing and voice over IP. In this work we give a comprehensive overview of fault tolerance concepts, approaches, and issues. We show how the redundancy of application servers can be invested to ensure efficient failover of Internet services when the legitimate processing server goes down.",2008,0, 5815,Structured stochastic modeling of fault-tolerant systems,"Fault-tolerant mechanisms have been increasingly used to develop safety-critical systems. Therefore, the accurate description of these mechanisms is crucial if we do not want their use to bring any kind of unexpected result due to the misinterpretation of their features.
The paper presents a new way of precisely describing fault tolerant mechanisms using a formalism that has a Markovian behavior. More specifically, we describe how to apply stochastic automata networks (SAN) to describe a dependable multiparty interaction (DMI) mechanism.",2004,0, 5816,An exception handling software architecture for developing fault-tolerant software,"Fault-tolerant object-oriented software systems are inherently complex and have to cope with an increasing number of exceptional conditions in order to meet the system's dependability requirements. This work proposes a software architecture which uniformly integrates both concurrent and sequential exception handling. The exception handling architecture is independent of programming language or exception handling mechanism, and its use can minimize the complexity caused by the handling of abnormal behavior. Our architecture provides, during the architectural design stage, the context in which more detailed design decisions related to exception handling are made in later development stages. This work also presents a set of design patterns which describe the static and dynamic aspects of the components of our software architecture. The patterns allow a clear separation of concerns between the system's functionality and the exception handling facilities, applying the computational reflection technique",2000,0, 5817,Novel Classifier Fusion Approaches for Fault Diagnosis in Automotive Systems,"Faulty automotive systems significantly degrade the performance and efficiency of vehicles and are often major contributors to vehicle breakdown; they result in large expenditures for repair and maintenance. Therefore, intelligent vehicle health-monitoring schemes are needed for effective fault diagnosis in automotive systems. Previously, we developed a data-driven approach using a data-reduction technique, coupled with a variety of classifiers, for fault diagnosis in automotive systems. In this paper, we consider the problem of fusing classifier decisions to reduce diagnostic errors. Specifically, we develop three novel classifier fusion approaches: 1. class-specific Bayesian fusion; 2. joint optimization of the fusion center and individual classifiers; and 3. dynamic fusion. We evaluate the efficacies of these fusion approaches on automotive engine data. The results demonstrate that the proposed fusion techniques outperform traditional fusion approaches. We also show that learning the parameters of individual classifiers as part of the fusion architecture can provide better classification performance.",2009,0, 5818,Improving robustness of gene ranking by resampling and permutation based score correction and normalization,"Feature ranking, which ranks features via their individual importance, is one of the frequently used feature selection techniques. Traditional feature ranking criteria are apt to produce inconsistent ranking results even with light perturbations in training samples when applied to high-dimensional and small-sized gene expression data. A widely used strategy for resolving the inconsistencies is the multi-criterion combination. But one problem encountered in combining multiple criteria is the score normalization. In this paper, problems in existing methods are first analyzed, and a new gene importance transformation algorithm is then proposed.
Experimental studies on three popular gene expression datasets show that the multi-criterion combination based on the proposed score correction and normalization produces gene rankings with improved robustness.",2010,0, 5819,Selecting Discrete and Continuous Features Based on Neighborhood Decision Error Minimization,"Feature selection plays an important role in pattern recognition and machine learning. Feature evaluation and classification complexity estimation arise as key issues in the construction of selection algorithms. To estimate classification complexity in different feature subspaces, a novel feature evaluation measure, called the neighborhood decision error rate (NDER), is proposed, which is applicable to both categorical and numerical features. We first introduce a neighborhood rough-set model to divide the sample set into decision positive regions and decision boundary regions. Then, the samples that fall within decision boundary regions are further grouped into recognizable and misclassified subsets based on class probabilities that occur in neighborhoods. The percentage of misclassified samples is viewed as the estimate of classification complexity of the corresponding feature subspaces. We present a forward greedy strategy for searching the feature subset, which minimizes the NDER and, correspondingly, minimizes the classification complexity of the selected feature subset. Both theoretical and experimental comparison with other feature selection algorithms shows that the proposed algorithm is effective for discrete and continuous features, as well as their mixture.",2010,0, 5820,On the Evaluation of Radiation-Induced Transient Faults in Flash-Based FPGAs,"Field programmable gate arrays (FPGAs) are getting more and more attractive for military and aerospace applications, among other devices. The usage of non-volatile FPGAs, like Flash-based ones, reduces permanent radiation effects, but transient faults are still a concern. In this paper we propose a new methodology for effectively measuring the width of radiation-induced transient faults, thus allowing known mitigation techniques to be tuned accordingly. Radiation experiment results are presented and discussed, demonstrating that the proposed methodology is a viable solution for measuring the width of transient pulses.",2008,0, 5821,Towards Interactive Fault Localization Using Test Information,"Finding the location of a fault is a central task of debugging. Typically, a developer employs an interactive process for fault localization. To accelerate this task, several approaches have been proposed to automate fault localization. In practice, testing-based fault localization (TBFL), which uses test information to locate faults, has become a research focus. However, experimental results reported in the literature showed that current automation of fault localization can only serve as a means of confirming the search space and prioritizing search sequences, not a substitute for the interactive fault localization process. In this paper, we propose an approach based on test information to support the entire interactive fault localization process. During this process, the information gathered from previous interaction steps can be used to provide the ranking of suspicious statements for the current interaction step. As a feasibility study of our approach, we performed an experiment on applying our approach together with some other TBFL approaches on the Siemens programs, which have been used in the literature.
Our experimental results show the effectiveness of our approach.",2006,0, 5822,Research about Software Fault Injection Technology Based on Distributed System,"Firstly, the paper makes a comparison between the current domestic and international research conditions, and introduces the basic concepts of fault injection and distributed systems. Secondly, it discusses the classification and requirements of fault injection. There are mainly three types of distributed faults, namely memory faults, CPU faults, and communication faults. Besides, it discusses the DOCTOR method of distributed software fault injection and illustrates the comprehensive structure of DOCTOR and its respective parts in detail. Thirdly, it reaches a conclusion about the fault model of distributed-system fault injection and its realization method.",2010,0, 5823,Real-Time Fisheye Lens Distortion Correction Using Automatically Generated Streaming Accelerators,"Fisheye lenses are often used in scientific or virtual reality applications to enlarge the field of view of a conventional camera. Fisheye lens distortion correction is an image processing application which transforms the distorted fisheye images back to the natural-looking perspective space. This application is characterized by non-linear streaming memory access patterns that make main memory bandwidth a key performance limiter. We have developed a fisheye lens distortion correction system on a custom board that includes a Xilinx Virtex-4 FPGA. We express the application in a high level streaming language, and we utilize Proteus, an architectural synthesis tool, to quickly explore the design space and generate the streaming accelerator best suited for our cost and performance constraints. This paper shows that appropriate ESL tools enable rapid prototyping and design of real-life, performance critical and cost sensitive systems with complex memory access patterns and hardware-software interaction mechanisms.",2009,0, 5824,Categorization and Analysis of Activated Faults in the FlexRay Communication Controller Registers,"The FlexRay communication protocol is expected to become the de facto standard for distributed safety-critical systems. In this paper, transient single bit-flip faults were injected into the FlexRay communication controller to categorize and analyze the activated faults. In this protocol, an activated fault results in one or more error types, which are boundary violation, conflict, content, freeze, synchronization, and syntax. To study the activated faults, a FlexRay bus network, composed of four nodes, was modeled in Verilog HDL, and a total of 135,600 transient faults were injected into only one node, where 9,342 (6.9%) of the faults were activated. The results show that the synchronization error is the most widespread error, with an occurrence ratio of about 70.1%. The boundary violation and syntax errors have occurrence ratios of 32.4% and 24.6%, respectively. The results also show that the freeze error, which most frequently resulted in system failures, has an occurrence ratio of about 17.3%.",2009,0, 5825,Denoising fluorescence endoscopy - A motion compensated temporal recursive video filter with an optimal minimum mean square error parameterization,"Fluorescence endoscopy is an emerging technique for the detection of bladder cancer. A marker substance is brought into the patient's bladder, where it accumulates in cancerous tissue. If a suitable narrow band light source is used for illumination, a red fluorescence of the marker substance is observable.
Because of the low fluorescence photon count and because of the narrow band light source, only a small amount of light is detected by the camera's CCD sensor. This, in turn, leads to strong noise in the recorded video sequence. To overcome this problem, we apply a temporal recursive filter to the video sequence. The derivation of a filter function is presented, which leads to an optimal filter in the minimum mean square error sense. The algorithm is implemented as a plug-in for the real-time capable clinical demonstrator platform RealTimeFrame and is capable of processing color videos with a resolution of 768×576 pixels at 50 frames per second.",2009,0, 5826,Simulation of fault injection of microprocessor system using VLSI architecture system,Evaluating and possibly improving the fault tolerance and error detecting mechanisms is becoming a key issue when designing safety-critical electronic systems. The proposed approach is based on simulation-based fault injection and allows the analysis of the system behavior when faults occur. The paper describes how a microprocessor board employed in an automated light-metro control system has been modeled in VHDL and a Fault Injection Environment has been set up using a commercial simulator. Preliminary results about the effectiveness of the hardware fault-detection mechanisms are also reported. Such results will address the activity of experimental evaluation in subsequent phases of the validation process.,2009,0, 5827,A Novel Error Classification Simulation Method for Rayleigh Fading Channel,"Evaluation of the bit error rate (BER) for a digital communication system is usually done via simulations using the Monte Carlo (MC) method. For low BER, the MC method requires huge sample sizes to achieve a certain efficiency. To overcome this limitation, many variance reduction techniques such as importance sampling (IS) have been proposed. In this paper, we put forward a novel simulation method - Monte Carlo simulation with error classification (EC-MC). This method can reduce the estimation variance by dividing the total errors into many sub-categories and optimizing the simulation samples for each sub-category. We apply this method in simulations of a single-path Rayleigh fading channel. The simulation results demonstrate that the EC-MC method can achieve the same accuracy at smaller sample sizes and shorter simulation runtimes, compared with both the conventional MC and IS methods, especially at high signal-to-noise ratios.",2009,0, 5828,A New Fault Tolerance Heuristic for Scientific Workflows in Highly Distributed Environments Based on Resubmission Impact,"Even though highly distributed environments such as Clouds and Grids are increasingly used for e-science high performance applications, they still cannot deliver the robustness and reliability needed for widespread acceptance as ubiquitous scientific tools. To overcome this problem, existing systems resort to fault tolerance mechanisms such as task replication and task resubmission. In this paper we propose a new heuristic called resubmission impact to enhance the fault tolerance support for scientific workflows in highly distributed systems. In contrast to related approaches, our method can be used effectively on systems even in the absence of historic failure trace data.
Simulated experiments of three real scientific workflows in the Austrian Grid environment show that our algorithm drastically reduces the resource waste compared to conservative task replication and resubmission techniques, while having a comparable execution performance and only a slight decrease in the success probability.",2009,0, 5829,On the practicality of using intrinsic reconfiguration for fault recovery,"Evolvable hardware (EHW) combines the powerful search capability of evolutionary algorithms with the flexibility of reprogrammable devices, thereby providing a natural framework for reconfiguration. This framework has generated an interest in using EHW for fault-tolerant systems because reconfiguration can effectively deal with hardware faults whenever it is impossible to provide spares. But systems cannot tolerate faults indefinitely, which means reconfiguration does have a deadline. The focus of previous EHW research relating to fault-tolerance has been primarily restricted to restoring functionality, with no real consideration of time constraints. In this paper, we are concerned with EHW performing reconfiguration under deadline constraints. In particular, we investigate reconfigurable hardware that undergoes intrinsic evolution. We show that fault recovery done by intrinsic reconfiguration has some restrictions, which designers cannot ignore.",2005,0, 5830,Practical considerations for implementing intrinsic fault recovery in embedded systems,Evolvable hardware provides a viable fault recovery technique for embedded systems already deployed into an operational environment. Typically the fitness of each evolved configuration in such systems must be intrinsically determined because imprecise information about faults makes extrinsic methods impractical. Most work on intrinsic circuit evolution is conducted in laboratory environments where sophisticated measurement equipment is readily available and frequency domain analysis poses no real problems. In this paper we argue intrinsic fault recovery for embedded systems has to be done in the time domain. We report the results of several experiments conducted to identify potential problems with determining fitness in the time domain for embedded systems. We also discuss the limitations embedded systems impose on GAs used for evolvable hardware applications and suggest some possible solutions.,2008,0, 5831,An approach to reliability growth planning based on failure mode discovery and correction using AMSAA projection methodology,"Exact expressions for the expected number of surfaced failure modes and system failure intensity as functions of test time are presented under the assumption that the surfaced modes are mitigated through corrective actions. These exact expressions depend on a large number of parameters. Functional forms are derived to approximate these quantities that depend on only a few parameters. Such parsimonious approximations are suitable for developing reliability growth plans and portraying the associated planned growth path. Simulation results indicate that the functional form of the derived parsimonious approximations can adequately represent the expected reliability growth associated with a variety of patterns for the failure mode initial rates of occurrence. 
A sequence of increasing MTBF target values can be constructed from the parsimonious MTBF projection approximation based on the following: (1) planning parameters that determine the parsimonious approximation; (2) corrective action mean lag time with respect to implementation; and (3) a test schedule that gives the number of planned reliability, availability, and maintainability (RAM) test hours per month and specifies corrective action implementation periods",2006,0, 5832,MPICH-V2: a Fault Tolerant MPI for Volatile Nodes based on Pessimistic Sender Based Message Logging,"Execution of MPI applications on clusters and Grid deployments suffering from node and network failures motivates the use of fault tolerant MPI implementations. We present MPICH-V2 (the second protocol of the MPICH-V project), an automatic fault tolerant MPI implementation using an innovative protocol that removes the most limiting factor of the pessimistic message logging approach: reliable logging of in-transit messages. MPICH-V2 relies on uncoordinated checkpointing, sender-based message logging and remote reliable logging of message logical clocks. This paper presents the architecture of MPICH-V2, its theoretical foundation and the performance of the implementation. We compare MPICH-V2 to MPICH-V1 and MPICH-P4 evaluating a) its point-to-point performance, b) the performance for the NAS benchmarks, c) the application performance when many faults occur during the execution. Experimental results demonstrate that MPICH-V2 provides performance close to MPICH-P4 for applications using large messages while reducing dramatically the number of reliable nodes compared to MPICH-V1.",2003,0, 5833,Software fault interactions and implications for software testing,"Exhaustive testing of computer software is intractable, but empirical studies of software failures suggest that testing can in some cases be effectively exhaustive. We show that software failures in a variety of domains were caused by combinations of relatively few conditions. These results have important implications for testing. If all faults in a system can be triggered by a combination of n or fewer parameters, then testing all n-tuples of parameters is effectively equivalent to exhaustive testing, if software behavior is not dependent on complex event sequences and variables have a small set of discrete values.",2004,0, 5834,A trust-based distributed data fault detection algorithm for wireless sensor networks,"Fault detection is a difficult and complex task in WSNs because there are many factors that influence data and could cause faults. Large-scale sensor networks impose energy and communication constraints; thus, it is difficult to collect data from each individual sensor node and process it at the sink to detect faulty sensors. The proposed approach saves energy and improves network lifetime by detecting data faults locally in the cluster head and therefore reducing the number of transmissions required to convey relevant information to the sink. This paper presents a novel approach for detecting sensors which produce faulty data in a distributed way as well as identifying the type of data faults using trust concepts to gain a high degree of confidence.
We validate our method with simulation results.",2008,0, 5835,"A Survey of Fault Detection, Isolation, and Reconfiguration Methods","Fault detection, isolation, and reconfiguration (FDIR) is an important and challenging problem in many engineering applications and continues to be an active area of research in the control community. This paper presents a survey of the various model-based FDIR methods developed in the last decade. In the paper, the FDIR problem is divided into the fault detection and isolation (FDI) step, and the controller reconfiguration step. For FDI, we discuss various model-based techniques to generate residuals that are robust to noise, unknown disturbance, and model uncertainties, as well as various statistical techniques of testing the residuals for abrupt changes (or faults). We then discuss various techniques of implementing a reconfigurable control strategy in response to faults.",2010,0, 5836,"Fault Detection, Diagnostics, and Prognostics: Software Agent Solutions","Fault diagnosis and prognosis are important tools for the reliability, availability, and survivability of navy all-electric ships (AES). Extending the fault detection and diagnosis into predictive maintenance increases the value of this technology. The traditional diagnosis can be viewed as a single diagnostic agent having a model of the component or the whole system to be diagnosed. This becomes inadequate when the components or system become large, complex, and even distributed as on navy electric ships. For such systems, the software multiagents may offer a solution. A key benefit of software agents is their ability to automatically perform complex tasks in place of human operators. After briefly reviewing traditional fault diagnosis and software agent technologies, this paper discusses how these technologies can be used to support the drastic manning reduction requirements for future navy ships. Examples are given on the existing naval applications and research on detection, diagnostic, and prognostic software agents. Current work on a multiagent system for shipboard power systems is presented as an example of system-level application.",2007,0, 5837,A Method of Building the Fault Propagation Model of Distributed Application Systems Based on Bayesian Network,"Fault diagnosis is a key research area in the field of network fault management. In order to make effective fault diagnosis of the increasingly complicated distributed application systems (DAS), which are based on the computer network, building an accurate and practicable fault propagation model (FPM) is generally the necessary prerequisite of subsequent tasks such as probabilistic reasoning, fault recovery and failure prediction. In this paper, a method of constructing the FPM which combines sample data and expert knowledge was put forward based on Bayesian networks. Firstly, an initial tree (T) including all the service nodes on the specific DAS was generated by the maximum weight spanning tree (MWST) algorithm with sample data. Secondly, the initial tree (T) was revised according to expert experience. Finally, the FPM of the DAS was learned using a greedy search structure-learning algorithm with the revised structure (T') as its initial input model. In the end, the FPM learned using the proposed method was evaluated by calculating its BIC score and comparing it to the actual one.
The results show that the proposed method can give an accurate FPM of the distributed application system.",2009,0, 5838,Fault diagnosis based on AGA-LS-SVM for analog circuit,"Fault diagnosis is very important for the development and maintenance of safe and reliable electronic circuits and systems. This paper describes an approach to soft fault diagnosis for analog circuits based on least square support vector machines (LS-SVM) and an adaptive genetic algorithm (AGA), known as AGA-LS-SVM. AGA is applied to optimize the parameters of the LS-SVM, fault features are extracted from the frequency domain response of the circuit under test (CUT), and the LS-SVM trained on the fault features is used to recognize unknown faults. The experimental results demonstrate that the LS-SVM optimized by AGA achieves better forecast accuracy and successfully models analog circuit fault diagnosis.",2009,0, 5839,Toward a Society Oriented Approach for Fault Handling in Multi-Agent Systems,"Fault handling/tolerance is important and remains a challenging problem in multi-agent systems (MAS). The majority of existing distributed fault-tolerant MAS lack a robust solution for incomplete and uncertain situations. This paper proposes a novel society-oriented approach for effectively handling faults in dynamic situations with a high possibility of uncertainty. In this approach, the emergent implicit learning, inspired by stock market concepts, has considerably improved the fault tolerance ability. The experimental results illustrate the effectiveness of the presented approach in comparison to the case of a central controller, the case of using purely redundant components, and helping without using a social value.",2007,0, 5840,A Functional Verification based Fault Injection Environment,"Fault injection is needed for different purposes such as analyzing the reaction of a system in a faulty environment or validating fault-detection and/or fault-correction techniques. In this paper we propose a simulation-based fault injection tool able to work at different abstraction levels and with user-defined fault models. By exploiting the facilities provided by a functional verification environment, it speeds up the entire fault injection process: from the creation of the workload to the analysis of the results of injection campaigns. Moreover, the adoption of techniques to optimize the fault list significantly reduces the simulation time. Since the tool is targeted at the validation of dependable systems, it includes a way to extract information from the Failure Mode and Effect Analysis and to correlate fault injection results with estimates.",2007,0, 5841,Fault injection in mixed-signal environment using behavioral fault modeling in Verilog-A,"Fault injection methods have been used for analyzing dependability characteristics of systems for years. In this paper we propose a practical mixed-signal fault injection flow that is fast as well as accurate. We describe three classes of the most common faults: i) single event transients, ii) electromagnetic interference and iii) power disturbance faults. Fault models are implemented directly into the circuit's devices using behavioral fault descriptions in the Verilog-A language.
As an example of dependability evaluation, some test circuits have been prepared and the results of fault injection on their designs are reported.",2010,0, 5842,A Fault Tolerance Protocol with Fast Fault Recovery,"Fault tolerance is an important issue for large machines with tens or hundreds of thousands of processors. Checkpoint-based methods, currently used on most machines, roll back all processors to previous checkpoints after a crash. This wastes a significant amount of computation as all processors have to redo all the computation from that checkpoint onwards. In addition, recovery time is bound by the time between the last checkpoint and the crash. Protocols based on message logging avoid the problem of rolling back all processors to their earlier state. However, the recovery time of existing message logging protocols is no smaller than the time between the last checkpoint and crash. We present a fault tolerance protocol, in this paper, that provides fast restarts by using the ideas of message logging and object-based processor virtualization. We evaluate our implementation of the protocol in the Charm++/adaptive MPI runtime system. We show that our protocol provides fast restarts and, for many applications, has low fault-free overhead.",2007,0, 5843,Fault-Tolerant Routing Schemes for Wormhole Mesh,"Fault tolerance is an important issue for the design of interconnection networks. In this paper, a new fault tolerant routing algorithm is presented and applied in mesh networks employing wormhole switching. Due to its low routing restrictions, the presented routing algorithm is so highly adaptive that it is connected and deadlock-free in spite of the various fault regions in mesh networks. Due to the minimal virtual channels it uses, the presented routing algorithm employs as few buffers as possible and is suitable for low-cost fault-tolerant interconnection networks. Since it chooses the path around fault regions according to local fault information, the presented routing algorithm makes routing decisions quickly and is applicable in interconnection networks. Moreover, a simulation is conducted for the proposed routing algorithm and the results show that the algorithm exhibits a graceful degradation in performance.",2009,0, 5844,Fault Tolerant Context Aware Mobile Computing,Fault tolerance is the main focus of the design of new paradigms like context-aware mobile computing. Situations where computing must be dependable require new engineering approaches and a better understanding of the paradigm. This paper discusses fault tolerance approaches from the perspective of the layers of a context-aware system architecture. An approach to making the hardware level fault tolerant is also discussed.,2009,0, 5845,Integrating Product-Line Fault Tree Analysis into AADL Models,"Fault tree analysis (FTA) is a safety-analysis technique that has been extended recently to accommodate product-line engineering. This paper describes a tool-supported approach for integrating product-line FTA with the AADL (architecture analysis and design language) models and associated AADL Error Models for a product line. The AADL plug-in we have developed provides some automatic pruning and adaptation of the fault tree for a specific product from the product-line FTA. This work supports consistent reuse of the FTA across the systems in the product line and reduces the effort of maintaining traceability between the safety analysis and the architectural models.
Incorporating the product-line FTA into the AADL models also allows derivation of basic quantitative and cut set analyses for each product-line member to help identify and eliminate design weaknesses. The tool-supported capabilities enable comparisons among candidate new members to assist in design decisions regarding redundancy, safety features, and the evaluation of alternative designs. Results from a small case study illustrate the approach.",2007,0, 5846,A Hardware Accelerated Semi Analytic Approach for Fault Trees with Repairable Components,"Fault tree analysis of complex systems with repairable components can easily be quite complicated and usually requires significant computer time and power despite significant simplifications. Invariably, software-based solutions, particularly those involving Monte Carlo simulation methods, have been used in practice to compute the top event probability. However, these methods require significant computer power and time. In this paper, a hardware-based solution is presented for solving fault trees. The methodology developed uses a new semi-analytic approach embedded in a Field Programmable Gate Array (FPGA) using accelerators. Unlike previous attempts, the methodology developed properly handles repairable components in fault trees. Results from a specially written software-based simulation program confirm the accuracy and validate the efficacy of the hardware-oriented approach.",2009,0, 5847,Reliability and Sensitivity Analysis of Embedded Systems with Modular Dynamic Fault Trees,"Fault tree theories have been used for years because they can easily provide a concise representation of the failure behavior of general non-repairable fault-tolerant systems. But the defect of traditional fault trees is a lack of accuracy when modeling the dynamic failure behavior of certain systems with fault-recovery processes. A solution to this problem is called behavioral decomposition. A system will be divided into several dynamic or static modules, and each module can be further analyzed separately using BDDs or Markov chains. In this paper, we will show a decomposition scheme in which independent subtrees of a dynamic module are detected and solved hierarchically, saving the computation time of solving Markov chains without losing unacceptable accuracy when assessing component sensitivities. In the end, we present our analysis software toolkit that implements our enhanced methodology.",2005,0, 5848,Concurrent error detection schemes for fault-based side-channel cryptanalysis of symmetric block ciphers,"Fault-based side-channel cryptanalysis is very effective against symmetric and asymmetric encryption algorithms. Although straightforward hardware and time redundancy-based concurrent error detection (CED) architectures can be used to thwart such attacks, they entail significant overheads (either area or performance). The authors investigate systematic approaches to low-cost low-latency CED techniques for symmetric encryption algorithms based on inverse relationships that exist between encryption and decryption at algorithm level, round level, and operation level and develop CED architectures that explore tradeoffs among area overhead, performance penalty, and fault detection latency.
The proposed techniques have been validated on FPGA implementations of Advanced Encryption Standard (AES) finalist 128-bit symmetric encryption algorithms.",2002,0, 5849,Structural analysis of explicit fault-tolerant programs,"Explicit fault tolerant programs are characterized by proactive efforts to ensure robustness and the ability to correct faults. A fault tolerant application is usually realized conforming to one of a collection of standard techniques. Graph-based methods can be used to examine existing applications to derive a control flow abstraction with respect to the fault-tolerance architecture. This abstraction, which we call the fault tolerance behavioural type, can be used as a basis for structural analysis of the implemented architecture. This paper outlines the basic ideas and demonstrates their application using CTL (Computation Tree Logic) model checking to verify fault tolerance properties of explicit fault-tolerant programs.",2004,0, 5850,Use Case-Based Modeling and Analysis of Failsafe Fault-Tolerance,"Explicitly addressing fault-tolerance during the requirements analysis phase facilitates the early detection of inconsistencies between functional and fault-tolerance requirements, which could potentially reduce the overall development costs. Most existing approaches use redundancy of services as a means to mask faults, where it is difficult to provide a systematic approach for modeling and analyzing the effect of faults on functional requirements during use case analysis. Moreover, providing masking fault-tolerance could be costly or impractical. This paper overviews a systematic approach for use case-based modeling of faults and failsafe fault-tolerance, where a failsafe fault-tolerant system at least meets its safety requirements when faults occur",2006,0, 5851,Application-Aware diagnosis of runtime hardware faults,"Extreme technology scaling in silicon devices drastically affects reliability, particularly because of runtime failures induced by transistor wearout. Current online testing mechanisms focus on testing all components in a microprocessor, including hardware that has not been exercised, and thus have high performance penalties. We propose a hybrid hardware/software online testing solution where components that are heavily utilized by the software application are tested more thoroughly and frequently. Thus, our online testing approach focuses on the processor units that affect application correctness the most, and it achieves high coverage while incurring minimal performance overhead. We also introduce a new metric, Application-Aware Fault Coverage, measuring a test's capability to detect faults that might have corrupted the state or the output of an application. Test coverage is further improved through the insertion of observation points that augment the coverage of the testing system. By evaluating our technique on a Sun OpenSPARC T1, we show that our solution maintains high Application-Aware Fault Coverage while reducing the performance overhead of online testing by more than a factor of 2 when compared to solutions oblivious to the application's behavior. Specifically, we found that our solution can achieve 95% fault coverage while maintaining a minimal performance overhead (1.3%) and area impact (0.4%).",2010,0, 5852,FADI: a fault tolerant environment for open distributed computing,"FADI (fault tolerant distributed environment) is a complete programming environment for the reliable execution of distributed application programs.
FADI encompasses all aspects of modern fault-tolerant distributed computing. The built-in user-transparent error detection mechanism covers processor node crashes and hardware transient failures. The mechanism also integrates user-assisted error checks into the system failure model. The nucleus non-blocking checkpointing mechanism combined with a novel selective message logging technique delivers an efficient, low-overhead backup and recovery mechanism for distributed processes. FADI also provides a means of remote automatic process allocation on distributed system nodes",2000,0, 5853,Fault-tolerant on-board evolutionary platform for adaptive allocation of hardware and software tasks,"Failures in aerospace missions, caused by broken communication and unexpected situations, have generated a pressing need for intelligent adaptation, repair, and upgrade capabilities in on-board maintenance (OBM) applications. In this paper, we present a fault-tolerant on-board evolutionary platform for aerospace applications which utilizes FPGA technology. To avoid a complex evolution process of circuits and functions, we take tasks as our basic chromosomes and implement relocation of hardware and software tasks in spaceborne computing systems. Tasks can run in software space or be put into hardware task slots according to energy-efficiency or real-time requirements. The allocation of hardware and software tasks adapts to current conditions to meet real-time, energy-efficiency, or environmental requirements. The process is transparent to users, so there is no need to explicitly specify reconfiguration commands. Sleeping tasks, redundant FPGAs, and version switching control are combined to achieve fault tolerance in the on-board evolutionary platform. Design theories and strategies of FOEP are described in detail. System performance is also evaluated using simulated experiments.",2008,0, 5854,Siamese-Twin: A Dynamically Fault-Tolerant Fat-Tree,"Fat-trees are a special case of multistage interconnection networks with quite good static fault tolerance capabilities. They are, however, unable to provide local dynamic fault tolerance in a straightforward way. In this paper we propose a network topology based on the fat-tree using two parallel networks with crossover links between them in an effort to enable dynamic fault tolerance. We evaluate and compare this topology with two other similar fat-tree topologies and show through simulations that the new topology is able to improve slightly upon the ability to tolerate faults statically. More importantly, we show that the new network topology is the only one of the evaluated topologies able to tolerate one fault dynamically, with a superior network performance in the face of dynamically handled faults.",2005,0, 5855,Byzantine Fault Tolerance in MDS of Grid System,"Fault tolerance is a challenging problem in reliable distributed systems. In grids, fault detection and correction techniques are used for the fault tolerance of the MDS system. These techniques are limited in dealing with benign faults on servers and the Internet, but they will not work when malicious faults on servers or software errors occur. In this paper, a security-aware MDS system, which can tolerate malicious faults occurring on servers, is proposed.
By using a new Byzantine-fault-tolerant algorithm, the proposed MDS system guarantees safety and liveness properties under the condition that no more than f replicas are faulty if it consists of 3f+1 tightly coupled servers, and it maintains the seamless interfaces to application programs as the usual formal MDS system does",2006,0, 5856,Fault-tolerance in filter-labeled-stream applications,"Fault tolerance is a desirable feature in distributed high-performance systems, since applications tend to run for long periods of time and faults become more likely as the number of nodes in the system increases. However, most distributed environments lack any fault tolerant features, since they tend to be hard to implement and use, and often hurt performance dramatically. In this paper we discuss how we successfully added fault-tolerance to the Anthill distributed programming environment by using an application-level checkpoint/rollback solution. The programming model offers an abstraction where the programmer can easily identify points during the execution where the communication pattern is well defined, forming a consistent cut where checkpoints may be saved consistently without requiring extra communication, avoiding any domino effect during recovery from faults. We present the new abstractions for fault tolerance, describe how the solution was implemented and present performance results that show the efficiency of the solution with both regular and irregular applications.",2007,0, 5857,ORTEGA: An Efficient and Flexible Online Fault Tolerance Architecture for Real-Time Control Systems,"Fault tolerance is an important aspect in real-time computing. In real-time control systems, tasks could be faulty for various reasons. Faulty tasks may compromise the performance and safety of the whole system and even cause disastrous consequences. In this paper, we describe On-demand real-time guard (ORTEGA), a new software fault tolerance architecture for real-time control systems. ORTEGA has high fault coverage and reliability. Compared with existing real-time fault tolerance architectures, such as Simplex, ORTEGA allows more efficient resource utilization and enhances flexibility. These advantages are achieved through the on-demand detection and recovery of faulty tasks. ORTEGA is applicable to most industrial control applications where both efficient resource usage and high fault coverage are desired.",2008,0, 5858,Enhancing Fault Injection Testbench,Fault injection techniques are widely used in system dependability evaluation. In the paper we deal with the problem of enhancing classical fault injection tools in two aspects: improvement of experiment effectiveness and result analysis capabilities. In particular we discuss the problem of distributing fault injection processes within a computer network and collecting the simulation results in a data warehouse with data mining capabilities. The presented considerations are illustrated with some experimental results performed with simulation tools developed in our institute,2006,0, 5859,Improvement of fault injection techniques based on VHDL code modification,"Fault injection techniques based on the use of VHDL as a design language offer important advantages with regard to other fault injection techniques. First, as they can be applied during the design phase of the system, they help reduce the time-to-market. Second, this type of technique presents high controllability and reachability.
Among the different techniques, those based on the use of saboteurs and mutants are especially attractive due to their high fault modeling capability. However, it is difficult to implement these techniques automatically in a fault injection tool, mainly with regard to the insertion of saboteurs and the generation of mutants. In this paper, we present new models of saboteurs and mutants that are easily applicable in VFIT, a fault injection tool developed by the Fault-Tolerant Systems Research Group (GSTF) of the Technical University of Valencia.",2005,0, 5860,Fault Localization with Non-parametric Program Behavior Model,"Fault localization is a major activity in software debugging. Many existing statistical fault localization techniques compare feature spectra of successful and failed runs. Some approaches, such as SOBER, test the similarity of the feature spectra through parametric self-proposed hypothesis testing models. Our findings show, however, that the assumption that feature spectra form known distributions is not well supported by empirical data. Instead, having a simple, robust, and explanatory model is an essential move toward establishing a debugging theory. This paper proposes a non-parametric approach to measuring the similarity of the feature spectra of successful and failed runs, and picks a general hypothesis testing model, namely the Mann-Whitney test, as the core. The empirical results on the Siemens suite show that our technique can outperform existing predicate-based statistical fault localization techniques in locating faulty statements.",2008,0, 5861,Toward understanding soft faults in high performance cluster networks,"Fault management in high performance cluster networks has been focused on the notion of hard faults (i.e., link or node failures). Network degradations that negatively impact performance but do not result in failures often go unnoticed. In this paper, we classify such degradations as soft faults. In addition, we identify consistent performance as an important requirement in cluster networks. Using this service requirement, we describe a comprehensive strategy for cluster fault management.",2003,0, 5862,Age-related Neural Changes during Memory Conjunction Errors,"Human behavioral studies demonstrate that healthy aging is often accompanied by increases in memory distortions or errors. Here we used event-related fMRI to examine the neural basis of age-related memory distortions. We used the memory conjunction error paradigm, a laboratory procedure known to elicit high levels of memory errors. For older adults, right parahippocampal gyrus showed significantly greater activity during false than during accurate retrieval. We observed no regions in which activity was greater during false than during accurate retrieval for young adults. Young adults, however, showed significantly greater activity than old adults during accurate retrieval in right hippocampus. By contrast, older adults demonstrated greater activity than young adults during accurate retrieval in right inferior and middle prefrontal cortex. These data are consistent with the notion that age-related memory conjunction errors arise from dysfunction of hippocampal system mechanisms, rather than impairments in frontally mediated monitoring processes.",2010,0, 5863,Detection and correction of abnormal pixels in Hyperion images,"Hyperion images are currently processed to level 1a (from level 0 or raw data).
These level 1a images are files of radiometrically corrected data in units of either watts/(srmicronm2)40 for VNIR bands or watts/(srmicronm2)80 for SWIR bands. Each distributed Hyperion level 1a image tape contains a log file, called ""(EO-1 identifier).fix.log"", that reports the bad or corrupted pixels (called known bad pixels) found during the pre-flight checking, and details how they were fixed. All bad pixels should be corrected in a level 1a image. However, bad pixels are still evident. In addition, there are dark vertical stripes in the image that are not reported in the log file. In this paper, we introduce a method to detect and correct the bad pixels and vertical stripes (we will refer to these occurrences as abnormal pixels). Images from the Greater Victoria Watershed and other EVEOSD test sites are used to determine how stationary the locations of the abnormal pixels are. After abnormal pixel correction a Hyperion image is ready for geometric correction, atmospheric correction, and further analysis.",2002,0, 5864,Detection and correction of current transformer saturation effects in secondary current signals,Ideally a current transformer transduces the primary current linearly. In reality the transmission behaviour is non-linear at very high primary currents or at high burden i. e. the current transformer saturates. If saturation takes place the measured secondary current signals are distorted so that protection devices operate delayed or blocked (e. g. differential protection). For post processing issues of the recorded secondary signals one needs to correct the current waveforms. This paper presents a combined and robust method for detection of current transformer saturation effects even for low sampling frequencies. If saturation is present in the analysed current waveform a curve fitting algorithm is used to correct and estimate the unsaturated secondary current. All proposed methods do not require any electrical parameters of the current transformer. The proposed methods are tested on simulated disturbance records and the results are presented in this paper.",2009,0, 5865,Testing methods and error budget analysis of a software defined radio,"Ideally a software defined radio (SDR) is designed to accept a multitude of waveforms at any carrier frequency. This paper will discuss the importance of PHY layer measurements made in both the digital and analog domains including the additive effects impairments can have on BER. The paper will consider interoperability of an SDR with respect to three different modulation formats; OFDM, CDMA, and QAM. The importance of BER budgeting and a multitude of critical measurements including EVM, CCDF, ACP, spectrum mask, constellation displays, noise figure, and phase noise will be discussed.",2008,0, 5866,Fault-Tolerant Indirect Adaptive Neurocontrol for a Static Synchronous Series Compensator in a Power Network With Missing Sensor Measurements,"Identification and control of nonlinear systems depend on the availability and quality of sensor measurements. Measurements can be corrupted or interrupted due to sensor failure, broken or bad connections, bad communication, or malfunction of some hardware or software (referred to as missing sensor measurements in this paper). This paper proposes a novel fault-tolerant indirect adaptive neurocontroller (FTIANC) for controlling a static synchronous series compensator (SSSC), which is connected to a power network.
The FTIANC consists of a sensor evaluation and (missing sensor) restoration scheme (SERS), a radial basis function neuroidentifier (RBFNI), and a radial basis function neurocontroller (RBFNC). The SERS provides a set of fault-tolerant measurements to the RBFNI and RBFNC. The resulting FTIANC is able to provide fault-tolerant effective control to the SSSC when some crucial time-varying sensor measurements are not available. Simulation studies are carried out on a single machine infinite bus (SMIB) as well as on the IEEE 10-machine 39-bus power system, for the SSSC equipped with conventional PI controllers (CONVC) and the FTIANC without any missing sensors, as well as for the FTIANC with multiple missing sensors. Results show that the transient performances of the proposed FTIANC with and without missing sensors are both superior to the CONVC used by the SSSC (without any missing sensors) over a wide range of system operating conditions. The proposed fault-tolerant control is readily applicable to other plant models in power systems.",2008,0, 5867,Biometric enhancements: Template aging error score analysis,"Identity capabilities are being modernized through biometric technology and systems. With maturing biometrics in full, rapid development, a higher accuracy of identity verification is required. Improvements to the security of biometric verification systems are provided through higher accuracy, ultimately reducing fraud, theft, and loss of resources from unauthorized personnel. With trivial biometric systems, a higher acceptance threshold to obtain higher accuracy rates increased false rejection rates and user unacceptability. However, maintaining the higher accuracy rate enhances the security of the system. Through the methods presented in this paper, higher accuracy rates are obtained without lowering the acceptance threshold, therefore improving the security level, false rejection rates, and user acceptability. An area of biometrics with a paucity of research is template aging, specifically in regards to facial aging. This paper presents methods of modeling and predicting facial template aging based on matching score analysis. A novel foundational framework for facial template aging is presented and provides a methodological framework. The groundwork discusses new techniques used in the template aging framework, to include the ""error score matrix"" and ""decay error estimate"" concepts. The matching scores are calculated using commercially available facial matching algorithms/SDKs against publicly available facial databases. Improved performance error rates, while maintaining or improving upon the overall matching and/or rejection levels, are accomplished with the new framework. Using such scores, prediction of a timeframe if or when an individual needs to be re-enrolled with a new template is feasible.",2008,0, 5868,Optimal Object Configurations to Minimize the Positioning Error in Visual Servoing,"Image noise unavoidably affects the available image points that are used in visual-servoing schemes to steer a robot end-effector toward a desired location. As a consequence, letting the image points in the current view converge to those in the desired view does not ensure that the camera converges accurately to the desired location. This paper investigates the selection of object configurations to minimize the worst-case positioning error due to the presence of image noise.
In particular, a strategy based on linear matrix inequalities (LMIs) and barrier functions is proposed to compute upper and lower bounds of this error for a given maximum error of the image points. This strategy can be applied to problems such as selecting an optimal subset of object points or determining an optimal position of an object in the scene. Some examples illustrate the use of the proposed strategy in such problems.",2010,0, 5869,An Algorithm for Correction of Distortion of Laser Marking Systems,"Images printed by laser marking systems will be distorted if linear voltages are given to the galvanometers due to the nonlinear relationship between angles of the galvanometers and coordinates of the image field. This paper presents a novel algorithm for correction of the distortion. A grid that covers the whole range of the image field is printed. The expected positions of the intersected points of the grid are calculated, and linear input voltages of the galvanometers are given to the laser marking system for printing the grid. Without distortion, the grid would be expected to be composed of vertical and horizontal lines. As a matter of fact, the printed lines will be curves. The intersected points of the curves are measured too. As any point to be printed will be in one of the quadrangles of the distorted grid, the corresponding corrected voltages of the galvanometers are calculated by linear interpolation according to the vertices of the quadrangle and their voltage inputs. Using the corrected voltages for printing, the distortion will be corrected. It is proved reliable by simulation and experiments on the laser marking system.",2007,0, 5870,Gamma camera PET with low energy collimators: characterization and correction of scatter,"Imaging of myocardial viability with FDG is possible with PET and with SPECT. The image quality from SPECT is not as good but is reported to provide clinical information comparable to PET. Studies have just begun to appear using gamma camera PET and either axial slat collimators or open frame graded absorbers to image myocardial viability. Alternatively, it may be possible to use standard low energy collimators when detecting coincidences. Although image quality may suffer, it may be possible to devise methods such that no clinical information is lost. Such an approach also paves the way for dual isotope simultaneous imaging of coincidence and single photons. Here we characterize the scatter fraction and scatter distribution of gamma camera PET with low energy collimators, and investigate the improvements possible with a convolution-subtraction type scatter correction scheme. The scatter fraction was found to be almost identical to that obtained with axial slat collimators on a triple head Marconi Irix scanner. Images acquired with low energy collimators were degraded but still of good quality compared to acquisitions using axial collimation. The scatter correction scheme showed a degree of improvement over reconstructions without scatter correction.
This approach not only makes simultaneous imaging possible, but may also save time in a busy clinic that does both SPECT scans and cardiac FDG studies, since collimators would not need to be changed",2001,0, 5871,On the Investigation of Artificial Immune Systems on Imbalanced Data Classification for Power Distribution System Fault Cause Identification,"Imbalanced data are often encountered in real-world applications; they may bias the performance of classification. The immune-based algorithm artificial immune recognition system (AIRS) is applied to Duke Energy distribution systems outage data and we investigate its capability to classify imbalanced data. The performance of AIRS is compared with an artificial neural network (ANN). Two major distribution fault causes, tree and lightning strike, are used as prototypes and a tailor-made measure for imbalanced data, g-mean, is used as the major performance measure. The results indicate that AIRS is able to achieve a more balanced performance on imbalanced data than ANN.",2006,0, 5872,An Optimization-Based Method for Dynamic Multiple Fault Diagnosis Problem,"Imperfect test outcomes, due to factors such as unreliable sensors, electromagnetic interference, and environmental conditions, manifest themselves as missed detections and false alarms. The main objective of our research on on-board diagnostic inference is to develop near-optimal algorithms for dynamic multiple fault diagnosis (DMFD) problems in the presence of imperfect test outcomes. Our problem is to determine the most likely evolution of fault states, the one that best explains the observed test outcomes. Here, we develop a primal-dual algorithm for solving the DMFD problem by combining Lagrangian relaxation and the Viterbi decoding algorithm in an iterative way. A novel feature of our approach is that the approximate duality gap provides a measure of suboptimality of the DMFD solution.",2007,0, 5873,Compensation of Requantization and Interpolation Errors in MPEG-2 to H.264 Transcoding,"Implementing MPEG-2 to H.264 transcoding schemes in the pixel domain introduces a high degree of computational complexity. In the transform domain, this transcoding is more computationally efficient, and several methods have been developed to address that approach. However, incompatibilities between the two standards, such as the mismatches between the MPEG-2 and H.264 motion compensation processes, cause several distortions that may affect the overall picture quality. In this study, we address the main distortions that result from requantization errors: luminance half-pixel and chrominance quarter/three-quarter interpolation errors. Then, we propose algorithms that compensate for these errors. The traditional requantization error compensation algorithm for DCT coefficients is updated so that it can be applied to the H.264 integer transform coefficients. Equations that compensate for the luminance half-pixel and chrominance quarter/three-quarter pixel interpolation errors are derived. To remove the interpolation errors, the previous H.264 frame is needed. Thus, the compensation scheme includes a closed-loop H.264 motion compensation process, which is implemented in the pixel domain. To evaluate the performance of the proposed compensation algorithms in terms of picture quality, our scheme is compared with two different cascaded pixel-domain transcoding structures.
The first structure reuses the MPEG-2 motion vectors, and the other implements ±2 pixels motion vector refinement, but each one has an H.264 deblocking filter. The experimental results show that the proposed compensation algorithms achieve 5-dB quality improvement over the open-loop transform-domain-based transcoding and almost the same picture quality (0.3-0.6 dB) as the cascaded structures. An additional advantage is the reduction in computational complexity that ranges from 13% to 69% compared with the two cascaded methods.",2008,0, 5874,Introspection-Based Fault Tolerance for COTS-Based High-Capability Computation in Space,"Future missions of deep space exploration face the challenge of designing, building, and operating progressively more capable autonomous spacecraft and planetary rovers. Given the communication latencies and bandwidth limitations for such missions, the need for increased autonomy becomes mandatory, along with the requirement for enhanced on-board computational capabilities while in deep space or time-critical situations. This will result in dramatic changes in the way missions will be conducted and supported by on-board computing systems. Specifically, the traditional approach of relying exclusively on radiation-hardened hardware and modular redundancy will not be able to deliver the required computational power. As a consequence, such systems are expected to include high-capability low-power components based on emerging Commercial-Off-The-Shelf (COTS) multi-core technology. This paper describes the design of a generic framework for introspection that supports runtime monitoring and analysis of program execution as well as a feedback-oriented recovery from faults. One of the first applications of this framework will be to provide flexible software fault tolerance matched to the requirements and properties of applications by exploiting knowledge that is either contained in an application knowledge base, provided by users, or automatically derived from specifications. A prototype implementation is currently in progress at the Jet Propulsion Laboratory, California Institute of Technology, targeting a cluster of Cell Broadband Engines.",2008,0, 5875,Implementation of the Gamma (γ) Line System Similar to Non-linear Gamma Curve with 2bit Error (LSB),"Gamma correction is an essential function in every display device such as CRT, plasma TV, and TFT LCD. Gamma (γ) correction systems (Po-Ming Lee, 2005) are developed to reduce the difference between the non-linear gamma curve produced by a typical formula and the result produced by the proposed algorithm. The proposed algorithms and systems are based on a specific gamma value such as 2.2, namely the formula f(x) = x^2.2, and the bit width of input and output data is 10 bits. In order to reduce the difference, the proposed system uses the least-squares polynomial, which calculates the best-fitting polynomial through a set of sampled points. The system consists of several consecutive equations, each with its own overlapping section, to obtain more precision. Based on the verified algorithm, the systems are implemented using Verilog-HDL. This paper compares the proposed system with the existing gamma system. Both the former and the latter systems have a latency of 2 clocks and produce 1 result per clock. The error ranges (LSB) are -1~+1 and -3~+4, respectively.
Under the condition of the SAMSUNG STD90 0.35 m worst case, the gate counts are 2,564 and 2,083 gates and the maximum data arrival times are 17.52 and 15.56 [ns], respectively",2006,0, 5876,Partially-independent component analysis for tissue heterogeneity correction in microarray gene expression analysis,"Gene microarray technologies provide powerful tools for the large scale analysis of gene expression in cancer research. Clinical applications often aim to facilitate a molecular classification of cancers based on discriminatory genes associated with different clinical stages or outcomes. However, gene expression profiles often represent a composite of more than one distinct source due to tissue heterogeneity, and could result in extracting signatures reflecting the proportion of stromal contamination in the sample, rather than underlying tumor biology. We therefore wish to introduce a computational approach, which allows for a blind decomposition of gene expression profiles from mixed cell populations. The algorithm is based on a linear latent variable model, whose parameters are estimated using partially-independent component analysis, supported by a subset of differentially-expressed genes. We demonstrate the principle of the approach on the data sets derived from mixed cell lines of small round blue cell tumors. Because accurate source separation can be achieved blindly and numerically, we anticipate that computational correction of tissue heterogeneity would be useful in a wide variety of gene microarray studies.",2003,0, 5877,A trace-based visual inspection technique to detect errors in simulation models,"Generation of traces from a simulation model and their analysis is a powerful and common means to debug simulation models. In this paper, we define a measure of progress for simulation traces and describe how it can be used to detect certain errors. We devise a visual inspection technique based on that measure and discuss several examples to illustrate how one can distinguish normal behavior from irregular, potentially erroneous behavior documented in a trace of a simulation run. The overall approach is implemented and integrated in Traviando, a trace analyzer for debugging stochastic simulation models.",2007,0, 5878,Ethical Implications of Biases and Errors in Geographic Information Systems,"Geographic information systems combine traditional maps with additional data. The design and use of these systems can introduce a variety of biases and errors. In this paper, we identify and classify biases and errors that arise inevitably from limits on the accuracy of gathered data, from limits on the precision of display technologies, and from the combination of data from different sources. We also propose how designers and users can work with these biases in a professionally responsible way.",2006,0, 5879,Managing fault-induced delayed voltage recovery in Metro Atlanta with the Barrow County SVC,"Georgia Transmission Corporation (GTC) commissioned the Barrow County Static Var Compensator (SVC) with a continuous rating of 0 to +260 Mvar in June of 2008. This paper presents the northern Metro Atlanta Georgia area transmission system, the requirements for voltage and var support, the dynamic performance study used to verify performance of the SVC, and provides an overview of the SVC design and control strategy, including the SVC's response to an actual power system disturbance.
The Barrow County SVC is connected to the 230 kV bus at the Winder Primary Substation to effectively manage the exposure to fault-induced delayed voltage recovery (FIDVR), where the system voltage remains low (<80%) for several seconds following a disturbance, potentially leading to voltage collapse. The SVC configuration includes two thyristor-switched capacitors for rapid insertion of reactive power following a disturbance to decrease voltage recovery time and control the system's dynamic performance.",2009,0, 5880,GIS reliability analysis based on trapezoid fuzzy fault tree,"GIS reliability refers to the ability to complete the requirements under the specified conditions and time. In this paper, five elements, including object, conditions of use, time of use, functions, and capabilities, are combined to evaluate GIS reliability in an engineering GIS. The paper imports the trapezoid fuzzy fault tree into GIS reliability analysis for the first time. The paper discusses how GIS reliability analysis uses the trapezoid fuzzy fault tree method, mainly researching two problems: establishing the GIS trapezoid fuzzy fault tree and the analysis steps of the GIS trapezoid fuzzy fault tree; it uses an example to compute GIS reliability with the trapezoid fuzzy fault tree method; because the technique considers not only GIS random uncertainty but also GIS fuzzy uncertainty, the result is precise, scientific and reasonable; finally it sums up and points out the problems that need solving.",2010,0, 5881,Making defect-finding tools work for you,"Given the high costs of software testing and fixing bugs after release, early detection of bugs using static analysis can result in significant savings. However, despite their many benefits, the recent availability of many such tools, and evidence of a positive return-on-investment, static-analysis tools are not widely used because of various usability and usefulness problems. The usability inhibitors include the lack of features, such as capabilities to merge reports from multiple tools and view warning deltas between two builds of a system. The usefulness problems are related primarily to the accuracy of the tools: identification of false positives (or spurious bugs) and uninteresting bugs among the true positives. In this paper, we present the details of an online portal, developed at IBM Research, to address these problems and promote the adoption of static-analysis tools. We report our experience with the deployment of the portal within the IBM developer community. We also highlight the problems that we have learned are important to address, and present our approach toward solving some of those problems.",2010,0, 5882,Register file exploration for a multi-standard wireless forward error correction ASIP,"Given the increase in the number of wireless standards, software defined radio has emerged as a cost effective way of supporting multiple standards on the same platform architecture. Embedded systems with such platforms need to be power efficient and meet the real-time constraints of these wireless standards. Register files are known to be power and performance bottlenecks in high performance low power embedded processors. Given the strict power constraints and real-time performance constraints of these applications, a comprehensive study of the register file architecture is needed to reach an optimal architecture.
In this paper we perform an in-depth analysis of different register file architectures and their configurations for wireless forward error correction algorithms of different wireless standards like 802.11n and 802.16e. We analyze the traditional clustered register file, hierarchical register file, stream register file as well as asymmetrical register files and show that there are various trade-offs between power and performance across these different architectures.",2009,0, 5883,MPICH-V: Toward a Scalable Fault Tolerant MPI for Volatile Nodes,"Global Computing platforms, large scale clusters and future TeraGRID systems gather thousands of nodes for computing parallel scientific applications. At this scale, node failures or disconnections are frequent events. This volatility reduces the MTBF of the whole system to the range of hours or minutes. We present MPICH-V, an automatic volatility-tolerant MPI environment based on uncoordinated checkpoint/roll-back and distributed message logging. The MPICH-V architecture relies on Channel Memories, Checkpoint servers and theoretically proven protocols to execute existing or new, SPMD and Master-Worker MPI applications on volatile nodes. To evaluate its capabilities, we run MPICH-V within a framework for which the number of nodes, Channel Memories and Checkpoint Servers can be completely configured as well as the node volatility. We present a detailed performance evaluation of every component of MPICH-V and its global performance for non-trivial parallel applications. Experimental results demonstrate good scalability and high tolerance to node volatility.",2002,0, 5884,Error Correcting Graph Matching Application to Software Evolution,"Graph representations and graph algorithms are widely adopted to model and resolve problems in many different areas from telecommunications, to bio-informatics, to civil and software engineering. Many software artefacts such as the class diagram can be thought of as graphs and thus, many software evolution problems can be reformulated as a graph matching problem. In this paper, we investigate the applicability of an error-correcting graph matching algorithm to object-oriented software evolution and report results, obtained on a small system (the Latazza application), supporting the applicability and usefulness of our proposal.",2008,0, 5885,Hard Data on Soft Errors: A Large-Scale Assessment of Real-World Error Rates in GPGPU,"Graphics processing units (GPUs) are gaining widespread use in high-performance computing because of their performance advantages relative to CPUs. However, the reliability of GPUs is largely unproven. In particular, current GPUs lack error checking and correcting (ECC) in their memory subsystems. The impact of this design has not been previously measured at a large enough scale to quantify soft error events. We present MemtestG80, our software for assessing memory error rates on NVIDIA graphics cards. Furthermore, we present a large-scale assessment of GPU error rate, conducted by running MemtestG80 on over 50,000 hosts on the Folding@home distributed computing network. Our control experiments on consumer-grade and dedicated-GPGPU hardware in a controlled environment found no errors. However, our survey on Folding@home finds that, in their installed environments, two-thirds of tested GPUs exhibit a detectable, pattern-sensitive rate of memory soft errors.
We show that these errors persist after controlling for overclocking and environmental proxies for temperature, but depend strongly on board architecture.",2010,0, 5886,Gravity measurement from moving platform by Kalman Filter and position and velocity corrections for earth layer monitoring to earthquake and volcano activity survey,"Gravity measurement responds to changes in subsurface density and characteristics and is a non-invasive and cost effective way to identify and characterize the subsurface. Gravity measurement is an effective tool for earth layer monitoring for earthquake and volcano activity survey. It is particularly important for gravity observation from moving platforms, especially for remote and offshore areas, by aircraft, boat, ship, submarine, vehicle, satellite, etc. The measurement is complicated by the difficulty of discerning gravity from platform accelerations, nonlinearity, drift and dynamic angular movement. The presented solution is a second-order Kalman filter, a recursive estimator that is applied to highly nonlinear measurement problems. The filter optimally combines data from three-axis gyros, accelerometers and platform position and velocity signals to provide accurate attitude and gravity measurement. Extensive simulations verified the accuracy and robustness of the proposed method for measurement from different vehicles in various dynamic environments.",2008,0, 5887,A method of slant correction of vehicle license plate based on watershed algorithm,"In a vehicle license plate recognition system, a slanted vehicle license plate has a bad effect on the character segmentation and recognition. A method of slant correction of the vehicle license plate is proposed in this paper. The method consists of five main stages: (1) the extraction of the boundaries of characters using the watershed algorithm; (2) dividing the boundaries of the vehicle license plate into small segments using the vertical differential method; (3) connecting the fractured characters using dilation and erosion; (4) computing centroids of the left and the right part of the vehicle license plate respectively; (5) finding the slant angle by means of the two centroids. Experimental results show that the error rate of using the method is 6.13%, which is lower than that of principal component analysis. The running time of this method is less than that of the Hough transform. The method improves the accuracy of the slant correction.",2010,0, 5888,Mapping Web-Based Applications Failures to Faults,"In a world where software systems are a daily need, system dependability has been a focus of interest. Nowadays, Web-based systems are more and more used and there is a lack of works about software fault representativeness for this platform. This work presents a field data study to analyze software faults found in real Java Web-based software during the system test phase, performed by an independent software verification & validation team. The preliminary results allow us to conclude that the previous classification partially fits Java Web-based software systems and must be extended to allow specific Web resources like JSP pages.",2009,0, 5889,Research on Fault Diagnosis Method Based Rough Sets Theory and Neural Network,"To address the indeterminate information and high-speed requirements in fault diagnosis systems, on the basis of switch and relay protection information of the substation, and according to the intelligence complementary strategy, a new fault diagnosis method based on rough sets theory-neural network-expert system is presented.
Firstly, based on data acquisition and pretreatment, the original fault diagnosis samples are discretized by using a hybrid clustering method. Then the decision attribute is reduced to delete redundant information and obtain the minimum fault feature subset. In the course of fault identification through the RBF neural network, some output results of the RBF neural network are modified by using the inference capability of the expert system.",2009,0, 5890,Estimation of Nonlinear Errors-in-Variables Models for Computer Vision Applications,"In an errors-in-variables (EIV) model, all the measurements are corrupted by noise. The class of EIV models with constraints separable into the product of two nonlinear functions, one solely in the variables and one solely in the parameters, is general enough to represent most computer vision problems. We show that the estimation of such nonlinear EIV models can be reduced to iteratively estimating a linear model having a point-dependent, i.e., heteroscedastic, noise process. Particular cases of the proposed heteroscedastic errors-in-variables (HEIV) estimator are related to other techniques described in the vision literature: the Sampson method, renormalization, and the fundamental numerical scheme. In a wide variety of tasks, the HEIV estimator exhibits the same, or superior, performance as these techniques and has a weaker dependence on the quality of the initial solution than the Levenberg-Marquardt method, the standard approach toward estimating nonlinear models",2006,0, 5891,Fault diagnosis of electronic systems using intelligent techniques: a review,"In an increasingly competitive marketplace, system complexity continues to grow, but time-to-market and lifecycle are reducing. The purpose of fault diagnosis is the isolation of faults on defective systems, a task requiring a high skill set. This has driven the need for automated diagnostic tools. Over the last two decades, automated diagnosis has been an active research area, but the industrial acceptance of these techniques, particularly in cost-sensitive areas, has not been high. This paper reviews this research, primarily covering rule-based, model-based, and case-based approaches and applications. Future research directions are finally examined, with a concentration on issues which may lead to a greater acceptance of automated diagnosis",2001,0, 5892,Fault Detection and Analysis of Control Software for a Mobile Robot,"In certain circumstances mobile robots are unreachable by human beings, for example the Mars exploration rovers. So robots should detect and handle faults of control software themselves. This paper is intended to detect faults of control software by computers. Support vector machine (SVM) based classification is applied to fault diagnostics of control software for a mobile robot. Both training and testing data are sampled by simulating several fault software strategies and recording the operation parameters of the robot. The correct classification percentages for different situations are discussed",2006,0, 5893,Research and realization of a network fault locating algorithm,"In complex IP networks, faults usually have various uncertain causes. Therefore, it is essential for the fault locating system to realize accurate and quick fault locating. This paper puts forward a new symptom-fault map as the fault propagation model and a fault locating method based on the Bayesian network, and a fault locating system is designed based on this method.
This system, capable of simultaneously utilizing network characterization information and symptom information on whether the service application is normal or abnormal, has good resistance to noise; it can not only locate usability faults at low layers of the protocol stack, but can also locate service application faults at upper layers of the protocol stack, and is capable of accurately and quickly locating faults in a large IP network.",2009,0, 5894,Physical Layer Error Correction Based Cipher,"In conventional communication systems, error correction is carried out at the physical layer while data security is performed at an upper layer. As a result, these are done as separate steps. As opposed to this conventional system, we present a scheme that combines error correction and data security as one unit so that both encryption and encoding could be carried out at the physical layer. Hence, in this paper, we present an Error Correction Based Cipher (ECBC) that combines error correction and encryption/decryption in a single step. Encrypting and encoding or decoding and decrypting in a single step will lead to a faster and more efficient implementation. One of the challenges of using previous joint schemes in a communications channel is that there is a tradeoff between data reliability and security. However in ECBC, there is no tradeoff between reliability and security. Errors introduced at the transmitter for randomization are removed at the receiver. Hence ECBC can utilize its full capacity to correct channel errors. We show the result of randomization test on ECBC and its security against conventional attacks. We also present the result of the FPGA implementation of the ECBC encryption.",2010,0, 5895,A non-invasive system for the measurement of the robustness of microprocessor-type architectures against radiation-induced soft errors,"In critical digital designs such as aerospace or safety equipment, radiation-induced upset events (SEE) can produce adverse effects, so the ability to compare the sensitivity of various proposed solutions is desirable. As custom-hardened microprocessor solutions can be very costly, the reliability of various COTS (Commercial-Off-The-Shelf) processors can be evaluated, to see if there is a commercially available microprocessor or microprocessor-type IP (Intellectual Property) with adequate robustness for the specific application. Most existing approaches for the measurement of this robustness of the microprocessor involve diverting the program flow and timing to introduce the bit flips via interrupts and embedded handlers added to the application program. In this paper the tool FT-UNSHADES-uP is described, which provides an environment and methodology for the evaluation of the sensitivity of microprocessor architectures, using dynamic runtime fault injection. A case study is presented, where the robustness of MicroBlaze and Leon3 uP systems executing a simple signal processing task written in C is evaluated and compared. A hardened version of the program where the key variables are protected has also been tested and its contributions to the system robustness have also been evaluated.",2008,0, 5896,A scatter correction using thickness iteration in dual-energy radiography,"In dual-energy radiography with area detectors, a scattered signal causes dominant error in the separation of different materials. Several methods for scatter correction in dual energy radiography have been suggested, yielding improved results.
Such methods, however, require additional lead blocks or detectors, and additional exposures to estimate the scatter fraction for every correction. In the present study we suggest a scatter correction method that uses a database of fractions and distributions of the scattered radiation. To verify the feasibility of this method we conducted an MCNP simulation for a two-material problem, aluminum and water. The generation of the scatter information for different thicknesses of an aluminum-water phantom has been simulated. Based on the uncorrected signals, the thickness of each material can be calculated by a conventional dual-energy algorithm. The scatter information of the corresponding thickness from the database, a look-up table, is then used to correct the original signals. The iteration of this scatter correction reduced relative-thickness error from 32% to 3.4% in aluminum, and from 41% to 2.8% in water. The proposed scatter correction method can be applied to two-material dual-energy radiography such as mammography, contrast imaging, and industrial inspections.",2006,0, 5897,Hybrid soft error detection by means of infrastructure IP cores [SoC implementation],"High integration levels, coupled with the increased sensitivity to soft errors even at ground level, make the task of guaranteeing adequate dependability levels more difficult than ever. In this paper, we propose to adopt low-cost infrastructure-intellectual-property (I-IP) cores in conjunction with software-based techniques to perform soft error detection. Experimental results are reported that show the effectiveness of the proposed approach.",2004,0, 5898,Remote diagnosis of overhead line insulation defects,"High voltage insulation defects cause partial discharges (PD) which can be detected through the reception of radiated radio frequency (rf) impulses. The paper describes a method of detecting insulation defects on overhead lines using vehicle mounted rf measuring equipment. The equipment is based on a 4 antenna array that is directly sampled using digital equipment with a bandwidth of 1 GHz. The results are analysed by firstly estimating the time delays apparent between the 4 antennas. Secondly, using the time delays, bearing and RMS time delay error information is calculated that allows identification of the PD source. The equipment has been tested in the field on a 132 kV overhead line defect that was initially reported to a radio spectrum management agency. The results show that the equipment has the sensitivity to identify the defective insulator string.",2004,0, 5899,Bit-Error-Rate Estimation for High-Speed Serial Links,"High-performance serial communication systems often require the bit error rate (BER) to be at the level of 10^-12 or lower. The excessive test time for measuring such a low BER is a major hindrance in testing communication systems. In this paper, we show that the jitter spectral information extracted from the transmitted data and some key characteristics of the clock and data recovery (CDR) circuit can be used to estimate the BER effectively without comparing each captured bit for error detection. This analysis is also useful for designing a CDR circuit for systems whose jitter spectral information is known.
Experimental results comparing the estimated and measured BER on a 2.5-Gb/s commercial CDR circuit demonstrate the high accuracy of the proposed technique",2006,0, 5900,Modeling and test validation of a 15kV 24MVA Superconducting Fault Current Limiter,"High-power short-circuit test results and numerical simulations of a 15 kV/24 MVA distribution-class High Temperature Superconductor (HTS) Fault Current Limiter (FCL) are presented and compared in this paper. The FCL design was based on the nonlinear inductance model here described, and the device was tested at 13.1kV line-to-line voltage for prospective fault currents up to 23kArms, prior to its installation in the electric grid. Comparison between numerical simulations and fault test measurements shows good agreement. Some simulations and field testing results are depicted. The FCL was energized in the Southern California Edison grid on March 9, 2009.",2010,0, 5901,Error robustness scheme for perceptually coded audio based on interframe shuffling of samples,"High-quality audio streaming over IP networks is a significant part of multimedia traffic in telecommunications networks in the future. Although there are a number of error concealment and correction methods developed for real-time multimedia streaming over unreliable packet-switched networks, many effective codec dependent error robustness schemes have not been utilized for the state-of-the-art audio compression standards developed mainly to compress audio for storage media. This paper describes how the idea of redistributing adjacent audio samples to different packets can be applied to the perceptual audio codecs, such as MP3 and AAC. The experiences with testing the concept using AAC are explained as well. The proposed approach is especially applicable with semi-reliable transport protocols and future networks providing flexible support for prioritized data traffic.",2002,0, 5902,Fault simulation and modelling of microelectromechanical systems,High-reliability and safety-critical markets for microelectromechanical systems are driving new proposals for the integration of efficient built-in test and monitoring functions. The realisation of this technology will require support tools and validation methodologies including fault simulation and testability analysis and full closed-loop simulation techniques to ensure cost and quality targets. This article proposes methods to extend the capabilities of mixed signal and analogue integrated circuit fault simulation techniques to MEMS by including failure mode and effect analysis data and using behavioural modelling techniques compatible with electrical simulators.,2000,0, 5903,Research on reliability evaluation of high-speed railway train control system based on fault injection,"The high-speed railway train control system is a complex safety system with many functions, which means that deep research on the safety and reliability of the high-speed railway train control system must be carried out. According to the characteristics of high-speed railway train control system simulation and the advantages of fault injection, a method for control system reliability and safety evaluation based on fault injection is carried out. The main structure and detailed functions and structures of the fault injection system are designed and realized.
It can be concluded that the application of fault injection helps to improve the reliability and safety of high-speed railway train control system simulation.",2010,0, 5904,Analysis of error control code use in ultra-low-power wireless sensor networks,"High-speed wireless sensor networks are currently being considered for a variety of communication applications such as environmental, medical, industrial or security scenarios. For increased transmission rates given the limited embedded battery lifetime, ultra-low-power circuitry is needed in the sensors and processors. Much research is being undertaken in these different areas at the device, circuit, system and network levels. Although using error control coding (ECC) potentially reduces the required transmit power for reliable communication, higher decoder complexity increases the required processing energy. The above tradeoff is explored in this paper to find when use of ECC results in more power-efficient systems. Several recently implemented decoders are analyzed, comparing both analog and digital implementations. The four most energy efficient decoders are analog decoders. The best analog decoder becomes energy-efficient at about 1/4 the distance of the best digital implementation",2006,0, 5905,A New Robust Paradigm for Diagnosing Hold-Time Faults in Scan Chains,"Hold-time violation is a common cause of failure at scan chains. A robust new paradigm for diagnosing such failures is presented in this paper. As compared to previous methods, the major advantage of ours is the ability to tolerate non-ideal conditions, e.g., under the presence of certain core logic faults or for those faults that manifest themselves intermittently. We first formulate the diagnosis problem as a delay insertion process. Then, two algorithms including a greedy algorithm and a so-called best-alignment based algorithm are proposed. Experimental results on a number of real designs are presented to demonstrate its effectiveness",2006,0, 5906,Fault Detection In PCB Using Homotopic Morphological Operator,"A homotopic morphological image processing software solution is developed for the detection of hair cracks in PCBs, which cannot be seen by the naked eye. The proposed software solution is implemented using basic morphological operations like dilation, erosion, opening, closing, hit-miss transform, thinning, thickening, skeletonizing and pruning. The software solution also performs image enhancement operations using mathematical morphology. The proposed software solution can be extended to detect thin bone fractures and to perform real-time automatic PCB fault detection",2006,0, 5907,Detecting attacks that exploit application-logic errors through application-level auditing,"Host security is achieved by securing both the operating system kernel and the privileged applications that run on top of it. Application-level bugs are more frequent than kernel-level bugs, and, therefore, applications are often the means to compromise the security of a system. Detecting these attacks can be difficult, especially in the case of attacks that exploit application-logic errors. These attacks seldom exhibit characterizing patterns as in the case of buffer overflows and format string attacks. In addition, the data used by intrusion detection systems is either too low-level, as in the case of system calls, or incomplete, as in the case of syslog entries. This paper presents a technique to enforce nonbypassable, application-level auditing that does not require the recompilation of legacy systems.
The technique is implemented as a kernel-level component, a privileged daemon, and an offline language tool. The technique uses binary rewriting to instrument applications so that meaningful and complete audit information can be extracted. This information is then matched against application-specific signatures to detect attacks that exploit application-logic errors. The technique has been successfully applied to detect attacks against widely-deployed applications, including the Apache Web server and the OpenSSH server.",2004,0, 5908,SPIDER: software for protein identification from sequence tags with de novo sequencing error,"For the identification of novel proteins using MS/MS, de novo sequencing software computes one or several possible amino acid sequences (called sequence tags) for each MS/MS spectrum. Those tags are then used to match, accounting for amino acid mutations, the sequences in a protein database. If the de novo sequencing gives correct tags, the homologs of the proteins can be identified by this approach and software such as MS-BLAST is available for the matching. However, de novo sequencing very often gives only partially correct tags. The most common error is that a segment of amino acids is replaced by another segment with approximately the same masses. We developed a new efficient algorithm to match sequence tags with errors to database sequences for the purpose of protein and peptide identification. A software package, SPIDER, was developed and made available on the Internet for free public use. This work describes the algorithms and features of the SPIDER software.",2004,0, 5909,Aircraft fuel system diagnostic fault detection through expert system,"For the problem of fault diagnosis of the aircraft fuel system, a new method using CLIPS to build an expert system is presented in this paper. By building a knowledge base and an inference engine, and using the CB tool, the relevant software environment is set up. Consequently, the fault diagnosis system for the aircraft fuel system is successfully developed. The emulation results prove that the intelligence of the expert system works well and that faults of the fuel system are diagnosed accurately and quickly; furthermore, effective methods of reconstruction can be brought forward in the expert system. It is entirely practicable in aircraft fuel fault diagnosis systems.",2008,0, 5910,A Novel Robust Video Transmission Scheme for Error Resilient Transcoding,"For video transmission over wireless or highly congested networks, video transcoding is typically used to reduce the rate and change the format of the originally encoded video source to match network conditions and terminal capabilities. Error resilient video transcoding can insert error resilient tools in the compressed video to enhance error resilience of the video over wireless channels by increasing bit rate. This paper proposes a novel error resilient video transcoding and streaming transmission scheme specially designed for Mpeg2 to H.264 transcoded video to portable devices in wireless channels. Based on the rate-distortion models developed in this paper, an optimal transcoding macroblock mode selection and bit allocation scheme is proposed. In order to improve the video quality in the presence of transmission errors, this paper also investigated how to allocate redundant pictures more efficiently according to the content characteristics of the primary pictures.
The experimental results demonstrated that the proposed transcoding scheme can enhance speed, improve the decoded image quality, and achieve better error resilience.",2009,0, 5911,Multi-level fault diagnosis in satellite formations using fuzzy rule-based reasoning,"Formation flying is an emerging area in the Earth and space science domain that utilizes multiple inexpensive spacecraft by distributing the functionalities of a single platform among miniature inexpensive platforms. Traditional spacecraft fault diagnosis and health monitoring practices that involve around-the-clock monitoring, threshold checking, and trend analysis of a large amount of telemetry data by human experts do not scale well for multiple space platforms. In this paper, a multi-level fault diagnosis methodology utilizing fuzzy rule-based reasoning is presented to enhance the level of autonomy in fault diagnosis at the ground stations. Effectiveness of the proposed fault diagnosis methodology is demonstrated by utilizing synthetic formation flying attitude control subsystem data. The proposed scheme has potential to serve as a prognostic tool when designed based on multiple fault severities, and hence can contribute to the overall health management process.",2008,0, 5912,A Statistical Bit Error Generator for Emulation of Complex Forward Error Correction Schemes,"Forward error correction (FEC) schemes are generally used in wireless communication systems to maintain an acceptable quality of service. Various models have been proposed in the literature to predict the end-to-end quality of wireless video systems. However, most of these models utilize simplistic error generators which do not accurately represent any practical wireless channel. A more accurate way is to evaluate the quality of a video system using Monte Carlo techniques. However these necessitate huge computational times, making these methods impractical. This paper proposes an alternative method that can be used in modeling of complex communications systems with minimal computational time. The proposed three random variable method was used to model two FEC schemes adopted by the digital video broadcasting (DVB) standard. Simulation results confirm that this method closely matches the performance of the considered communication systems in both bit error rate (BER) and peak signal-to-noise ratio (PSNR).",2007,0, 5913,An improved template-based method for mining association rules from defect repositories,"Frequent pattern mining from defect repositories has started playing an important role in software defect detection and analysis. In this paper, we present a new improved template-based method to generate association rules from defect repositories. We aim to provide managers and testers with a more efficient method of generating rules. The improved method optimizes the generation using rule templates firstly, then reduces unnecessary rule generation by the theorems described in Section III-A. The advantages of the improved method are validated by an experiment based on 1860 defect reports. The experimental results show the improved template-based method can significantly reduce the total number of the rules, and shorten the run time. The pruned association rule set makes it easier to find the interesting rules.",2010,0, 5914,"Managing the maintenance of ported, outsourced, and legacy software via orthogonal defect classification","From the perspective of maintenance, software systems that include COTS software, legacy, ported or outsourced code pose a major challenge.
The dynamics of enhancing or adapting a product to address evolving customer usage and the inadequate documentation of these changes over a period of time (and several generations) are just two of the factors which may have a debilitating effect on the maintenance effort. While many approaches and solutions have been offered to address the underlying problems, few offer methods which directly affect a team's ability to quickly identify and prioritize actions targeting the product which is already in front of them. The paper describes a method to analyze the information contained in the form of defect data and arrive at technical actions to address explicit product and process weaknesses which can be feasibly addressed in the current effort. The defects are classified using Orthogonal Defect Classification (ODC) and actual case studies are used to illustrate the key points",2001,0, 5915,A cascaded correction method to reduce the contamination of ionospheric frequency modulation for HF skywave radars,"From the presented simulations and others not shown here, we conclude that the proposed cascaded correction method provides an efficient way to reduce the contamination of ionospheric frequency modulation for the HF skywave radar even in a serious case where the contaminated Bragg lines in the Doppler domain overlap. The idea of the cascaded correction also implies the possibility that we can achieve correction gain from integration of the advantages of different correction methods.",2009,0, 5916,Modeling and Fault Diagnosis of a Polymer Electrolyte Fuel Cell Using Electrical Equivalent Analysis,"Fuel cell systems are complex systems and a high degree of competence is needed in different areas of knowledge such as thermodynamics, fluid dynamics, electrochemistry, and others, for their comprehension. This paper is a contribution to global modeling and fault diagnosis of these systems. More precisely, the goal of this paper is twofold. First, an electrical equivalent model, which could be used as a unifying approach to fuel cell systems, will be presented. Second, a methodology to use the electrical model for fuel cell system diagnosis will be introduced, with special emphasis on fuel cell flooding detection. In order to illustrate the relevance of the proposed approach, experimental validations of the model and the diagnosis methodology are proposed.",2010,0, 5917,Fault-Tolerance Schemes for Hierarchical Mesh Networks,"In [3], a hierarchical configuration for mesh was developed. The proposed scheme divides a mesh network into uniform smaller clusters. Each of these clusters contains a leader (or monitor) to communicate with the other members of the group. Leaders are then required to communicate with other leaders to form groups at higher levels. The hierarchical approach has been shown to reduce the communication cost by reducing the overall distance traveled by messages in the network [1, 2, 3]. Experiments were conducted on the hierarchical configuration to simulate different activities that it may be used for. Simulations were designed to test various sizes of the underlying mesh, as well as potential cluster sizes that may be utilized. In an effort to see if additional improvements could be made, a variety of throughputs of data were tested for the system.",2005,0, 5918,Transparent Fault Tolerance of Device Drivers for Virtual Machines,"In a consolidated server system using virtualization, physical device accesses from guest virtual machines (VMs) need to be coordinated.
In this environment, a separate driver VM is usually assigned to this task to enhance reliability and to reuse existing device drivers. This driver VM needs to be highly reliable, since it handles all the I/O requests. This paper describes a mechanism to detect faults in the driver VM and recover it, so as to enhance the reliability of the whole system. The proposed mechanism is transparent in that guest VMs cannot recognize the fault, and the driver VM can recover and continue its I/O operations. Our mechanism provides progress-monitoring-based fault detection that is isolated from fault contamination and has low monitoring overhead. When a fault occurs, the system recovers by switching from the faulted driver VM to another one. The recovery is performed without service disconnection or data loss and with negligible delay by fully exploiting the I/O structure of the virtualized system.",2010,0, 5919,Study on the impedance-frequency characteristic of HVDC converter under asymmetric faults in the AC system,"In an HVDC transmission system, faults occurring in the AC system can result in variation of the impedance-frequency characteristic of the HVDC converter. In this paper, an advanced switching function model of the HVDC converter, which considers the effect of the interactions between the AC and DC systems and the unbalanced operating conditions of the converter, is presented. Based on this model, a method for calculating the impedance-frequency characteristic of the HVDC converter under asymmetric fault conditions is proposed. The method is verified by comparison with the results obtained from calculation and digital simulation using PSCAD/EMTDC.",2009,0, 5920,antenna-pattern correction for near-field-to-far field RCS transformation of 1D linear SAR measurements,"In a previous AMTA paper (B. E. Fischer, et al.), we presented a first-principles algorithm, called wavenumber migration (WM), for estimating a target's far-field RCS and/or far-field images from extreme near-field linear (one-dimensional) or planar (two-dimensional) SAR measurements, such as those collected for flight-line diagnostics of aircraft signatures. However, the algorithm assumes the radar antenna has a uniform, isotropic pattern for both transmitting and receiving. In this paper, we describe a modification to the (one-dimensional) linear SAR wavenumber migration algorithm that compensates for nonuniform antenna-pattern effects. We also introduce two variants of the algorithm that eliminate certain computational steps and lead to more efficient implementations. The effectiveness of the pattern compensation is demonstrated for all three versions of the algorithm in both the RCS and the image domains using simulated data from arrays of simple point scatterers.",2004,0, 5921,Using Defect Reports to Build Requirements Knowledge in Product Lines,"In a recent study of a product line, we found that the defect reports both (1) captured new requirements information and (2) implicated undocumented, tacit requirements information in the occurrence of the defects. We report four types of requirements knowledge revealed by software defect reports from integration and system testing for two products in this high-dependability product line. We argue that store-and-retrieve-based requirements management is insufficient to avoid recurrence of these types of defects on upcoming members of the product line.
We then propose the use of two mechanisms not traditionally associated with requirements management, one formal and one informal, to improve communication of these types of requirements knowledge to developers of future products in the product line. We show how the two proposed mechanisms, namely feature models extended with assumption specifications (formal) and structured anecdotes of paradigmatic product-line defects (informal), can together improve propagation of the requirements knowledge exposed by these defects to future products in the product line.",2009,0, 5922,Fault Tolerant Service Composition in Service Overlay Networks,"In a service overlay network, the services provided by different service providers might span multiple Internet domains. A service provider failure may cause significant performance deterioration. Thus, it is desirable to provide fault tolerant service composition solutions such that the service composition can be switched to the backup service composition solution in case of a service provider failure. To provide 100% protection against a single service provider failure, fault tolerant service composition essentially requires partitioning service providers into two disjoint sets, each of which can provide a service composition solution. We study a generalized fault tolerant service composition problem which aims to find two service composition solutions for each request to minimize the number of shared service providers. Subject to such a primary objective, we also aim to minimize the total service composition cost. We first prove that the problem is NP-complete, and formulate the problem as an integer linear program. We then propose heuristic algorithms to efficiently solve the problem. Simulation results demonstrate the effectiveness of the proposed heuristic algorithms.",2008,0, 5923,Bayesian probability of error under fine quantization,"In a variety of decision systems, processing is performed not on the underlying signal but on a quantized version. Accordingly, assuming fine quantization, Poor observed a quadratic variation in f-divergences with smooth f. In this paper, we derive a quadratic behavior in the Bayesian probability of error, which corresponds to a nonsmooth f, thereby advancing the state of the art. Unlike Poor's purely variational method, we solve a novel cube-slicing problem, and convert a volume integral to a surface integral in the course of our analysis.",2008,0, 5924,Using design based binning to improve defect excursion control for 45nm production,"For advanced devices (45 nm and below), we propose a novel method to monitor systematic and random excursions. By integrating design information and defect inspection results into automated software (DBB), we can identify design/process marginality sites with a defect inspection tool. In this study, we applied the supervised binning function (DBC) and the defect criticality index (DCI) to identify systematic and random excursion problems on 45 nm SRAM wafers. With established SPC charts, we will be able to detect future excursion problems in the manufacturing line early.",2007,0, 5925,A Novel Fault Observer Design and Application in Flight Control System,"For estimating the fault signals of a discrete flight control system with unknown sensor faults and actuator faults occurring simultaneously, a discrete proportional integral observer (PIO) is presented. The proposed PIO uses the augmented states to estimate the sensor faults of the flight control system.
Moreover, this observer uses an additionally introduced integral term of the output error to obtain an estimate of the actuator faults. The convergence of the PIO is proved. The proposed fault observer is applied to a flight control system. Simulation results are given to demonstrate the effectiveness of the proposed fault observer.",2009,0, 5926,On Optimizing XOR-Based Codes for Fault-Tolerant Storage Applications,"For fault-tolerant storage applications, computation complexity is the key concern in choosing XOR-based codes. We observe that there is great benefit in computing common operations first (COF). Based on the COF rule, we describe a generic problem of optimizing XOR-based codes and make a conjecture about its NP-completeness. Two effective greedy algorithms are proposed. Against long odds, we show that XOR-based Reed-Solomon codes with such optimization can in fact be as efficient and sometimes even more efficient than the best known specifically designed XOR-based codes.",2007,0, 5927,Fault diagnosis of train-ground wireless communication unit based on Fuzzy Neural Network,"To make up for the deficiencies of fault diagnosis methods for the train-ground wireless communication unit of communication-based train control (CBTC), a global fault diagnosis method based on model building is introduced. The global fault diagnosis model mainly comprises fault symptoms, fault diagnosis rules and fault types. Fault symptoms are taken as the input space and fault types as the output space. Fault diagnosis rules are of two sorts that can be interconverted. First, single fault diagnosis rules were designed with a fuzzy neural network. Second, regarding numerical value variation as a vector characteristic, global fault diagnosis rules were designed based on a value-range transformation equation, a decision matrix and a logic algorithm. Finally, applying the global fault diagnosis model and the Reworks software development language, a fault diagnosis system for the train-ground wireless communication (TGWC) unit was realized and the corresponding experimental validation was carried out. The experimental results indicate that the global fault diagnosis model can provide high diagnosis precision for different fault types, is better able to analyze cause and effect between fault symptoms and fault types, and avoids the limitations of single fault diagnosis methods, with an overall precision of 93%.",2009,0, 5928,Experimental fault characterization of doubly fed induction machines for wind power generation,"For modern large wind farms, it is interesting to design an efficient diagnosis system oriented to wind turbine generators based on the doubly-fed induction machine (DFIM). Intensive research effort has been focused on signature analysis to predict or detect electrical and mechanical faults in induction machines. Different signals can be used: voltage, current, stray flux. In the case of wind generators, considering that both the machine signals and the control signals are accessible for diagnostic purposes, there are interesting additional possibilities, for example the use of the converter modulating voltages.
In this paper, a complete system is analyzed by suitable simulations and experiments to study fault influence in depth and to identify the best diagnostic procedure for performing predictive maintenance.",2006,0, 5929,Fault management for networks with link state routing protocols,"For network fault management, we present a new technique that is based on on-line monitoring of networks with link state routing protocols, such as OSPF (open shortest path first) and integrated IS-IS. Our approach employs an agent that monitors the on-line information of the network link state database, analyzes the events generated by network faults for event correlation, and detects and localizes the faults. We apply our method to a real network topology with various types of network faults. Experimental results show that our approach can detect and localize the faults in a timely manner, yet without disrupting normal network operations.",2004,0, 5930,Assigning bug reports using a vocabulary-based expertise model of developers,"For popular software systems, the number of daily submitted bug reports is high. Triaging these incoming reports is a time-consuming task. Part of the bug triage is the assignment of a report to a developer with the appropriate expertise. In this paper, we present an approach to automatically suggest developers who have the appropriate expertise for handling a bug report. We model developer expertise using the vocabulary found in their source code contributions and compare this vocabulary to the vocabulary of bug reports. We evaluate our approach by comparing the suggested experts to the persons who eventually worked on the bug. Using eight years of Eclipse development as a case study, we achieve 33.6% top-1 precision and 71.0% top-10 recall.",2009,0, 5931,Research Classification of Printing Fault Based on RSVM,"Considering the characteristics of malfunction diagnosis systems, a model to classify printing faults based on reduced support vector machines (RSVM) is discussed. The printing malfunctions have many classes. There are massive datasets used in fraud detection. Support vector machines have been promising methods for classification because of their solid mathematical foundation. However, they are not favored for large-scale problems because the training complexity of SVM is highly dependent on the size of the data set. This paper uses RSVM with an improved nonlinear kernel to reduce the size of the quadratic program to be solved and to simplify the classification of the nonlinear separating surface. Computational results indicate that RSVM is efficient for adjustable printing faults, and computational times as well as memory usage are much smaller for RSVM than for conventional SVM.",2008,0, 5932,Indirect Training with Error Backpropagation in Gray-Box Neural Model: Application to a Chemical Process,"Gray-box neural models mix differential equations, which act as white boxes, and neural networks, used as black boxes, to complete the phenomenological model. These models have been used in different studies, proving their efficacy. The aim of this work is to show the training of the gray-box model through indirect backpropagation and Levenberg-Marquardt.
The gray-box neural model was tested in the simulation of a chemical process in a continuous stirred tank reactor (CSTR) with 5% noise, responding successfully.",2010,0, 5933,Error Detection in Grid Computing,"Grid computing is a distributed computing paradigm that differs from traditional distributed computing in that it is aimed toward large scale systems that even span organizational boundaries. Grid computing is a partnership between clients and servers. Grid clients have more responsibilities than do traditional clients and must be equipped with powerful mechanisms for dealing with and recovering from failures, whether they occur in the context of remote execution, work management, or data output. When clients fail to perform the task, the failure should be identified. We propose a novel method to detect an error in the client using a voting technique.",2009,0, 5934,Grid service reliability modeling on fault recovery and optimal task scheduling,"Grid computing is a newly developed technology. Although the developmental tools and techniques for the grid have been extensively studied, the reliability of grid service cannot satisfy the real-world application requirements. To resolve this problem, this paper introduces a fault recovery mechanism into grid nodes and gives a constraint on the amount of recovery. Moreover, a multi-objective optimization model is presented to address the task scheduling problem, and an ant colony optimization (ACO) algorithm is developed to solve the problem effectively. A numerical example is given to show the modeling procedures and the efficiency of the ACO.",2009,0, 5935,Whole-frame Error concealment with improved backward motion estimation for H.264 decoders,"The H.264 video compression standard is widely used in video transmission in the telecommunications field, offering a higher compression ratio and excellent network adaptability. However, it also has reduced anti-interference capability. More than one frame of encoded data may be stored in a data packet; once errors occur in the data packet during transmission, the whole frame is completely lost, and the next several frames cannot be decoded well. Therefore, decoder error concealment techniques must be adopted to improve error resilience. This paper presents an error concealment method for whole-frame loss using improved backward motion estimation. More precise quarter-pixel motion estimation, which conceals un-mapped pixels in the current damaged frame from the previous correctly decoded frame, is used to improve the quality of the reconstructed frame, and a spatial concealment method based on distance-weighted averaging is used to reduce the blocking effect. Experimental results show it is effective.",2007,0, 5936,"A fast, high-quality inverse halftoning algorithm for error diffused halftones","Halftones and other binary images are difficult to process without causing severe degradation. Degradation is greatly reduced if the halftone is inverse halftoned (converted to grayscale) before scaling, sharpening, rotating, or other processing. For error diffused halftones, we present (1) a fast inverse halftoning algorithm and (2) a new multiscale gradient estimator. The inverse halftoning algorithm is based on anisotropic diffusion. It uses the new multiscale gradient estimator to vary the tradeoff between spatial resolution and grayscale resolution at each pixel to obtain a sharp image with a low perceived noise level.
Because the algorithm requires fewer than 300 arithmetic operations per pixel and processes 7x7 neighborhoods of halftone pixels, it is well suited for implementation in VLSI and embedded software. We compare the implementation cost, peak signal-to-noise ratio, and visual quality with other inverse halftoning algorithms.",2000,0, 5937,Adaptive Sampling Rate Correction for Acoustic Echo Control in Voice-Over-IP,"Hands-free terminals for speech communication employ adaptive filters to reduce echoes resulting from the acoustic coupling between loudspeaker and microphone. When using a personal computer with commercial audio hardware for teleconferencing, a sampling frequency offset between the loudspeaker output D/A converter and the microphone input A/D converter often occurs. In this case, state-of-the-art echo cancellation algorithms fail to track the correct room impulse response. In this paper, we present a novel least mean square (LMS-type) adaptive algorithm to estimate the frequency offset and resynchronize the signals using arbitrary sampling rate conversion. In conjunction with a normalized LMS-type adaptive filter for room impulse response tracking, the proposed system largely removes the deteriorating effects of a frequency offset of up to several Hz and restores the functionality of echo cancellation.",2010,0, 5938,Research on Case Representation in Printing Machine Fault Diagnosis Expert System Based on Case-Based Reasoning,"Based on an analysis of case content characteristics and FTA (fault tree analysis) characteristics, the paper propounds a method to represent printing machine fault cases, called the four-elements method based on FTA. Finally, the paper takes an example to explain how to represent a fault case with the method. From the example it can be found that this method has a good hierarchy and easy maintenance, and meets the needs of printing machine fault diagnosis.",2008,0, 5939,Requirements specification for health monitoring systems capable of resolving flight control system faults,"Health Management Systems (HMS) may help pilots resolve faults in flight control systems and other failures that affect aircraft stability. For HMS to be effective, they must (1) alert pilots to problems early enough that the pilot can reasonably resolve the fault, and (2) if the aircraft's handling qualities are severely degraded, provide stability augmentation. This study examines these requirements of HMS through fast-time simulations of a pilot model controlling an aircraft experiencing flight control system faults and/or handling qualities degradations.",2001,0, 5940,Determination of the systematic and random measurement error in an LC-FTICR mass spectrometry analysis of a partially characterized complex peptide mixture,"In high-throughput proteomics, a promising approach presently being explored is the use of liquid chromatography coupled to Fourier transform ion cyclotron resonance mass spectrometry (LC-FTICR-MS) to provide measurements of the masses of tryptic peptides in complex mixtures, which can then be used to identify the proteins which gave rise to those peptides. In order to apply this method, it is necessary to account for any systematic measurement error, and it is useful to have an estimate of the random error in measured masses. In this investigation, a complex mixture of peptides derived from a partially characterized sample was analyzed by LC-FTICR-MS.
Through the application of a Bayesian probability model of the data, partial knowledge of the composition of the sample is sufficient both to determine any systematic error and to estimate the random error in measured masses.",2004,0, 5941,Evaluation of attenuation and scatter correction requirements as a function of object size in PET small animal imaging,"In human emission tomography, an additional transmission scan is often required to obtain accurate attenuation maps for attenuation correction (AC) and scatter correction (SC). These methods have been translated to small animal imaging, although the impact of photon interactions on the reconstruction of mouse/rat images is substantially less than that in human imaging. In this study, we evaluate the requirement for these corrections in PET small animal imaging through Monte Carlo simulations of the Inveon PET scanner using MOBY voxelized phantoms scaled to different sizes (2.1-6.4 cm diameters). The 3D data were reconstructed under 6 different conditions depending on the attenuation map used: Accurate AC+SC, Simple AC+SC, Accurate AC only, Simple AC only, SC only and No correction. Mean percentage errors for 8 different ROIs and 6 different MOBY sizes were obtained against the accurate reconstruction (first on the list). In addition, real mouse data obtained from an Inveon PET scanner were analyzed using similar methods. The results from simulations and real mouse data showed that attenuation correction based solely on emission data should be sufficient for imaging animals smaller than 4 cm in diameter.",2010,0, 5942,Application of Bayesian belief networks to fault detection and diagnosis of industrial processes,"In industrial processes, to ensure the success of planned operations, implementing early and accurate methods for recognizing abnormal operating conditions, known as faults, is essential. An effective method for fault detection and diagnosis helps reduce the impact of these faults, enhances the safety of operation, minimizes downtime and reduces manufacturing costs. In this paper, the application of BBNs is studied for a benchmark industrial chemical process, known as Tennessee Eastman, in order to achieve early fault detection and accurate diagnosis of their probable causes. The application of Bayesian belief networks for fault detection and diagnosis of the Tennessee Eastman process in the graphical context description has not been tested yet. The success of this approach confirms its capability and ease of use as a diagnostic system in actual industrial processes.",2010,0, 5943,A Two-Stage Iterative Decoding of LDPC Codes for Lowering Error Floors,"In iterative decoding of LDPC codes, trapping sets often lead to high error floors. In this work, we propose a two-stage iterative decoding scheme to break trapping sets. Simulation results show that the error floor performance can be significantly improved with this decoding scheme.",2008,0, 5944,A novel automated fault identification approach in computer networks based on graph theory,"In large computer networks, isolation of the primary source of failure is a challenging task. In this paper, we present a novel approach to modeling network fault diagnosis. With a model based on reachability theorems, we design an automated fault identification algorithm, named DAFMA, and analyze its performance and validity. To judge the consistency between the fault effects of the given failure sources and the tested one, an efficient algorithm, named FFEAJ, is also proposed.
DAFMA can be carried out automatically by computer because both DAFMA and FFEAJ are based on matrix and Boolean operations. Finally, to illustrate the details of DAFMA, four classical fault effects are classified and the working steps of DAFMA are described.",2003,0, 5945,Determining error rate in error tolerant VLSI chips,"In the near future all die implementing high performance circuitry will contain hundreds of thousands of defects. Most companies will attempt to achieve useful levels of functionally good die using classical and enhanced fault-tolerant and defect-tolerant techniques. We advocate a new notion for yield enhancement called error tolerance that includes marketing chips that occasionally output errors. The quantity and quality of errors produced by a chip can be characterized in several ways, such as by accuracy, error rate, and accumulation (retention). This paper focuses on test techniques for estimating error rate.",2004,0, 5946,Using likely program invariants to detect hardware errors,"In the near future, hardware is expected to become increasingly vulnerable to faults due to continuously decreasing feature size. Software-level symptoms have previously been used to detect permanent hardware faults. However, they cannot detect a small fraction of faults, which may lead to silent data corruptions (SDCs). In this paper, we present a system that uses invariants to improve the coverage and latency of existing detection techniques for permanent faults. The basic idea is to use training inputs to create likely invariants based on value ranges of selected program variables and then use them to identify faults at runtime. Likely invariants, however, can have false positives, which makes them challenging to use for permanent faults. We use our on-line diagnosis framework for detecting false positives at runtime and limit the number of false positives to keep the associated overhead minimal. Experimental results using microarchitecture-level fault injections in full-system simulation show a 28.6% reduction in the number of undetected faults and a 74.2% reduction in the number of SDCs over existing techniques, with reasonable overhead for checking code.",2008,0, 5947,A new field ground fault protection with low frequency voltage injection,"In the paper, a new field ground fault protection with voltage injection is introduced. The injected voltage is a low-frequency square wave, transformed from an AC voltage by a transformer, a rectifier and electronic switches, and its frequency is controllable. The principle, criterion and hardware configuration of the protection are given, and several operational problems are analyzed. The results of static and dynamic simulation tests are shown, and the operation of the protection in the Three Gorges left-bank power plant is also introduced; these test and operation results indicate that the protection performance can meet practical operation requirements, so the application prospects of this protection are good.",2007,0, 5948,An intelligent FFT-analyzer with harmonic interference effect correction and uncertainty evaluation,"In the paper, the problem of the correction of harmonic interference effects on FFT results is discussed.
A procedure for the evaluation and correction of this effect is proposed and implemented in an intelligent FFT-analyzer that is also able to provide the results with their uncertainty.",2003,0, 5949,Exploiting Redundancies to Enhance Schedulability in Fault-Tolerant and Real-Time Distributed Systems,"In the past decades, distributed systems have been widely applied to real-time applications, most of which have fault-tolerance requirements to assure high reliability. Due to the stringent space constraints of real-time systems, the issue of schedulability becomes a major concern in the design of fault-tolerant and real-time distributed systems. Most existing real-time and fault-tolerant scheduling algorithms, which are based on the primary-backup scheme for periodic real-time tasks, introduce unnecessary redundancies by aggressively using active-backup copies. To solve this problem, we propose two novel fault-tolerant techniques, which are seamlessly integrated with fixed-priority-based scheduling algorithms. These techniques leverage redundancies to enhance schedulability in fault-tolerant and real-time distributed systems. Our fault-tolerant techniques make use of the primary-backup scheme to tolerate permanent hardware failures. The first technique (referred to as Tercos) terminates the execution of active-backup copies when the corresponding primary copies are successfully completed. Tercos is designed to reduce schedule lengths in fault-free scenarios to enhance schedulability by virtue of executing portions of active-backup copies in passive forms. The second technique (referred to as Debus) uses a deferred-active-backup scheme to further minimize schedule lengths to improve the schedulability performance. Debus schedules active-backup copies as late as possible, while terminating active-backup copies when their primary copies are completed. Experimental results show that, compared with existing algorithms in the literature, Tercos can significantly improve schedulability by up to 17.0% (with an average of 9.7%). Furthermore, empirical results reveal that Debus can enhance schedulability over Tercos by up to 12% (with an average of 7.8%).",2009,0, 5950,Routability estimation of FPGA-based fault injection,"In the past years, various approaches to hardware-based fault injection using FPGA-based hardware have been presented. Some approaches insert additional functions at the fault location (any location in the circuit, e.g. I/Os of components or their interconnection nets), while others utilize the reconfigurability of FPGAs. A common feature of each of these methods is the execution of hardware-based fault simulation using the stuck-at fault model at gate level. The expansion of a circuit by insertion of additional functions at the fault location constitutes an overhead in FPGA resources. An optimized mapping of the circuit into an FPGA and a routable placement in the FPGA are difficult to achieve due to the generation of additional functions at the fault locations. Therefore, an optimized assignment of the fault locations to the FPGA resources (configurable logic blocks, look-up tables, I/O blocks, etc.) precedes and thereby guarantees the mapping and routability of very large circuits in an acceptable runtime.
In this paper an approach to node assignment is introduced, which achieves a reduction in FPGA overhead as well as routability of the expanded circuit in a minimal runtime.",2003,0, 5951,SpeedGrade: an RTL path delay fault simulator,"In the past, research on delay fault testing has been focused on test generation using various delay fault models on full-scan gate-level netlists. These tests are not very suitable for speed-binning since the confidence that the slowest paths have been covered is low. We have developed a novel methodology with an accompanying tool flow called SpeedGrade that performs path delay fault simulation using an RTL (Register Transfer Level) simulator. This novel method was used to translate the gate-level path excitation conditions into a higher level of abstraction without loss of accuracy. The higher efficiency of the RTL-based solution allowed for fault grading of functional patterns against the top critical paths in commercial microprocessor designs. The RTL-based approach also had the added benefit of being easier to use for debugging critical paths.",2001,0, 5952,Eliminating harmful redundancy for testing-based fault localization using test suite reduction: an experimental study,"In the process of software maintenance, it is usually a time-consuming task to track down bugs. To reduce the cost of debugging, several approaches have been proposed to localize the fault(s) to facilitate debugging. Intuitively, testing-based fault localization (TBFL), such as dicing and TARANTULA, is quite promising as it can take advantage of a large set of execution traces at the same time. However, redundant test cases may bias the distribution of the test suite and harm this kind of approach. Therefore, we suggest that the test suite, which is the input of TBFL, should be reduced before being used in TBFL. To evaluate whether and to what extent TBFL can benefit from test suite reduction, we performed an experimental study on two source programs. The experimental results show that, for test suites containing unevenly distributed redundant test cases, performing test suite reduction before applying TBFL may be more advantageous.",2005,0, 5953,Correlation Error Metrics of Simulated MIMO Channels,"In the simulation of multiple-input multiple-output (MIMO) radio systems, accurate channel models are needed. Channel models have to be approximated to reach reasonable complexity in practical radio channel simulators, but the approximation should not cause too high an error in the simulation. The error of the MIMO correlation matrix can be measured via different metrics such as the correlation matrix distance (CMD) and the mean square error (MSE). This paper compares the different metrics of correlation error, and investigates the impact of different approximations on the MIMO correlation matrix. From the results it was found that a modified MSE (Mod-MSE) presented in this paper and the CMD have quite similar behavior, but the Mod-MSE is independent of the correlation matrix size and the level of the original correlation. Thus, the Mod-MSE shows only the error in correlation. Channel model approximations include, e.g., a limited number of impulse responses, phase error between the channels, and synchronization error in simulator start-up. The impact of the approximations is investigated via the CMD and Mod-MSE analysis.
The results show that the number of impulse responses should be on the order of 100 000 or more, that a phase error of less than 5 degrees is acceptable, and that the synchronization error is critical when highly correlated channels are simulated. Another result of this paper is that the Mod-MSE is a more recommendable metric than the CMD.",2007,0, 5954,A transformer online fault monitoring and diagnosis embedded system based on TCP/IP and Pub/Sub new technology,"In the traditional transformer fault monitoring system, the LPU (Local Process Unit) usually sends the data to the MMC (Main Monitoring Computer), which completes the complicated fault diagnosis and is the processing center. In this paper, a novel transformer monitoring system is presented. The LPUs, which are embedded systems, are designed to possess approximately the same functions as the MMC, such as monitoring, diagnosis and alarming. TCP/IP and Pub/Sub (Publish/Subscribe) technology are used to realize the design of the LPU. The TCP/IP protocol and the network chip give the LPU a high transmission speed. The complicated diagnosis task is dealt with by a computer with strong data processing capacity. The LPU and MMC can subscribe to fault diagnosis subjects.",2003,0, 5955,Spatial error concealment algorithm based on improved SUSAN operator,"In the transmission of real-time compressed video streams, error concealment methods are used to restore damaged or lost data packets. This paper improves the existing spatial error concealment algorithm based on the SUSAN detection operator. On the one hand, when recovering errors, the number of detection pixels is reduced by considering the relationship between nearby pixels. On the other hand, more associated pixels are fully considered. The experimental results show that the proposed algorithm enhances the peak signal-to-noise ratio while reducing computational complexity by 4%-8%, and is more suitable for real-time applications.",2010,0, 5956,A proactive maintenance scheme for hardware fault diagnosis in wireless systems,"In the wireless sector, maintenance costs form a large part of the total network operating cost. This paper presents a possible proactive maintenance scheme for wireless systems. The objective of this paper is to reduce the high operational costs encountered in the wireless industry by decreasing maintenance costs and system downtime. An on-line monitoring system is suggested to identify performance degradation, as well as its possible sources, via symbol frequency distribution analysis. In this paper a single fault mechanism is considered and the methodology to detect and diagnose the fault mechanism is presented. With similar analysis for other fault mechanisms, this system could ensure that maintenance occurs only when necessary and not at routine intervals.",2008,0, 5957,Fault Tree Based Prediction of Software Systems Safety,"In our modern world, software controls much of the hardware (equipment, electronics, and instruments) around us. Sometimes hardware failure can lead to a loss of human life. When software controls, operates, or interacts with such hardware, software safety becomes a vital concern. To assure the safety of software controlling systems, prediction of software safety should be done at the beginning of the system's design. The paper focuses on safety prediction using the key node property of fault trees. This metric uses a parameter ""s"" related to the fault tree to predict the safety of software control systems.
This metric allows designers to measure the safety of software systems early in the design process. An applied example is shown in the paper.",2008,0, 5958,Application of Taguchi technique to reduce positional error in two degree of freedom rotary-rotary planar robotic arm,"In the present work, the positional accuracy of a robotic arm is discussed. The factors considered in the experiment were the length of the links, the mass of both links, the velocity of the end point and the torque on both links. A considerable reduction in performance variation can be obtained by the Taguchi technique. Through simple multifactorial experiments on the manipulator, controlled factors can be isolated to provide centering and variance control for a process variable. The primary objective of the present work is to investigate the effect of process parameters on performance variation in order to improve positional accuracy. An attempt has been made to introduce a small variation on current approaches, broadly called the Taguchi parametric design method. In these methods, there are two broad categories of problems associated with simultaneously minimizing performance variations and bringing the mean on target, viz. Type 1, minimizing variations in performance caused by variations in noise factors (uncontrolled parameters); and Type 2, minimizing variations in performance caused by variations in control factors (design variables).",2007,0, 5959,A Pervasive Temporal Error Concealment Algorithm for H.264/AVC Decoder,"In real-time video transmission over error-prone networks, packet loss cannot be avoided, which causes video quality reduction at the destination. In this paper, we propose a pervasive temporal error concealment (PTEC) algorithm for H.264/AVC inter-frame decoding to eliminate the error effect for the human visual system (HVS). To increase the error concealment (EC) accuracy, the 4x4 block size is used as the basic motion vector (MV) recovery unit, and the MV of the lost macroblock (MB) is recovered by employing the MV information of the neighboring intact MBs. The simulation results show that the proposed algorithm can achieve better EC performance compared with existing TEC methods. Because of its simple composition, it can be used pervasively in real-time multimedia communication systems based on the H.264/AVC video coding standard.",2010,0, 5960,Experimental performance evaluation of error estimation and compensation technique for quadrature modulators and demodulators,"In recent mobile communication systems, impairments in analog circuits cannot be disregarded owing to the need to achieve high performance. Errors in analog circuits consist of distortion in the power amplifier (PA), gain and phase imbalance in the quadrature modulator (QMOD) and quadrature demodulator (QDEMOD), dc offset, frequency offset, and so on. Because of the high accuracy of compensation, many error compensation techniques using digital signal processing have been studied recently. The digital predistorter (DPD), which can eliminate odd-order distortion caused by PA nonlinearity, is one of these compensation techniques. However, in the case that a direct conversion architecture is used in a loop-back path, errors in the QMOD and QDEMOD affect the performance of the DPD. Previously, we proposed an error estimation technique for a QMOD and a QDEMOD using a variable phase shifter and evaluated the performance through computer simulation.
In this paper, we evaluate the performance through an experiment in terms of image rejection ratio (IRR) and error vector magnitude (EVM) in the case that the output of the QMOD is directly connected to the input of the QDEMOD. The results show that the IRR improvement after compensation is 10 dB at the output of the QMOD, and the EVM improvement after compensation is 10 dB at the output of the QMOD and 20 dB at the output of the QDEMOD.",2008,0, 5961,Performance Analysis of Forward Error Correcting Codes in IPTV,"In recent years, forward error correction (FEC) schemes for the binary erasure channel have been researched for many applications including DVB-H and IPTV systems. In most cases the packet-level FEC strategies are implemented with either Reed-Solomon (RS) codes at the link layer or Raptor codes at the application layer. Recently an enhanced decoding method for RS codes was presented. In this paper we compare the performance of the enhanced-decoding RS code at the link layer and the Raptor code at the application layer in a burst transmission environment, to give guidance on FEC schemes for IPTV applications. It is noted that the efficient Raptor decoding algorithm makes it possible to broadcast multimedia data more reliably and faster.",2008,0, 5962,Fault tolerance in a mobile agent based computational grid,"In recent years, grid computing has emerged as a promising alternative to increase the capacity of processing and storage, through integration and sharing of multi-institutional resources. Fault tolerance is an essential characteristic for grid environments. As the grid acts as a massively parallel system, the loss of computation time must be avoided. In fact, the likelihood of errors occurring may be exacerbated by the fact that many grid applications will perform long tasks that may require several days of computation. In this paper, we describe the fault tolerance mechanism of the MAG grid middleware. We describe the fault tolerance components and how they interact with each other. The components were developed as mobile agents, forming a multi-agent society providing fault tolerance for node and application crashes.",2006,0, 5963,Managing Faults in the Service Delivery Process of Service Provider Coalitions,"In recent years, IT Service Management (ITSM) has become one of the most researched areas of IT. Incident Management and Problem Management form the basis of the tooling provided by an Incident Ticket System (ITS). As more compound or interdependent services are collaboratively offered by providers, the delivery of a service becomes the responsibility of more than one provider's organization. In the ITS systems of various providers, seemingly unrelated tickets are created, and the connection between them is not recognized automatically. The introduction of automation will reduce the human involvement and time required for incident resolution. In this paper we consider a collaborative service delivery model that supports both per-request services and continuous high-availability services. In the case of a high-availability service, the information stored in the ITS of the provider often includes information on the outage of a particular service rather than on the failure of a particular request.
In this paper we offer an information model that consolidates and supports inter-organizational incident management, and a probabilistic model for fault discovery.",2009,0, 5964,Fault-tolerant control architecture for an electrical actuator,"In safety-critical applications, fault-tolerant electric drives and motors can be used. To avoid compromising the reliability of the drives and isolated motor windings, the control hardware must also be fault tolerant. In this work, the control hardware is separated into lanes, each with a designated power supply, electric drive, motor phase winding and position feedback sensor, all packaged into a single assembly. For a system to operate with a faulted lane, lanes must be able to cross-compare data and determine whether a lane has failed. A demonstrator controller drive has been produced for a multiple-lane actuator with the ability to detect a failed lane and continue operation. For the first time, current shaping is implemented on a working prototype to produce instantaneous torque control, even in the event of a fault. This permits rated torque at all rotor positions when stationary and when rotating slowly, and rated mean torque at high speeds, when the inertia of the system lets the drive ride through torque dips caused by the faulted phase.",2004,0, 5965,Design of an Experimental System for Digital Circuit Fault Diagnosis Based on Support Vector Machine,"In order to combine the theory and practice of the support vector machine (SVM) in the field of fault diagnosis, a novel universal experimental system for digital circuits' fault diagnosis is designed for the application study of SVM. The circuit is simulated in an FPGA and its input and output pins are assigned to the DO and DI channels automatically by a matrix switch. The faults and test points can be set easily. The algorithms for SVM, test vector generation, etc. can be carried out in different software modules. The overall design and experimentation are discussed in detail. A practical example is also given. The experimental system is very convenient for the study of SVM, and the efficiency is improved noticeably.",2009,0, 5966,Research on fault diagnosis system base on output waveform of high voltage inverter,"In order to economize on electric power, the high voltage inverter is used widely in industrial and agricultural production. Its security and reliability are more and more important. However, the high voltage inverter's controller system can only check basic problems, such as undervoltage protection, open-phase protection, IGBT overcurrent, IGBT overvoltage, IGBT overheating, etc. Therefore, it is very necessary to research a deeper fault diagnosis system. The paper gives a wavelet transform method for the fault diagnosis of the high voltage inverter, and introduces the hardware structure and software flow of the fault diagnosis system.",2008,0, 5967,Dynamic Polling Scheme of Network Fault Management Based on Cloud Model,"In order to enhance the real-time operability and efficiency of polling, which are both crucial for network fault management, the current network polling schemes based on SNMP are discussed, and a novel dynamic polling scheme is put forward. By soft partitioning of the variation amplitude concept and uncertainty reasoning based on the cloud model, this scheme realizes fine dynamic adjustment of polling intervals according to qualitative rules.
Simulation results show that the proposed scheme can display data in detail more accurately, and that distortion in data collection can be effectively minimized while network bandwidth is used rationally.",2009,0, 5968,Isolated singularity points recognition of hydroelectric generators fault signals based on CWT,"In order to extract fault feature information from the low-frequency vibrating transient signal of the main shaft of a hydro-generator, an effective method is put forward. With a bi-orthogonal spline wavelet, the method is designed to locate the isolated singularity points and estimate their singularity degree by applying the WTMM (wavelet transformation maximal module) and MRA (multi-resolving analysis) of the continuous wavelet transform (CWT) and a least-squares algorithm, taking into account the length, intensity and Lipschitz exponent of the WTMM lines. Results show that the method is highly effective.",2004,0, 5969,Application of PCA method and FCM clustering to the fault diagnosis of excavator's hydraulic system,"In order to improve the reliability of the excavator's hydraulic system, a fault diagnosis approach based upon the principal component analysis (PCA) method and fuzzy c-means (FCM) clustering is proposed. PCA is a powerful method for re-expressing multivariate data, which can effectively extract the correlation among process variables. With this approach, samples of target faults were used to develop PCA models in the first step, and the largest eigenvalues extracted from the models were used as the fault feature vector. Secondly, FCM clustering performed as the fault classifier to determine the test fault. Simulated faults were introduced to validate the approach. Simulation results show that the proposed fault diagnosis approach can be effectively applied to the excavator's hydraulic system.",2007,0, 5970,Image distortion correction algorithm based on lattice array coordinate projection,"In order to improve the accuracy of image detection and target recognition, this paper presents an image distortion correction algorithm based on a quadrilateral fractal approach with controlling points. This method uses a standard lattice image as the measuring target and determines the coordinates by combining mathematical morphology and sliding neighborhood operations; it then adopts simple linear regression analysis for the optical center and establishes a coordinate system, and finally proposes a two-step, one-dimensional gray linear interpolation backward mapping algorithm to define the pixel intensity. An image acquisition hardware system containing a TMS320DM6437 is designed, taking a building as the target. Experiments with the above method have shown that this algorithm can correct distortion in a short time without losing edge information.",2010,0, 5971,Study on Method of Game Software Fault Management Based on MSC,"In order to improve the efficiency of game fault management, we provide a method for game software fault management based on MSC and implement editing and storage techniques for MSC; based on TTCN-3 test techniques, we use a model-driven approach to activate MSC. The experimental results show that the method of fault management can reduce the workload and improve the efficiency of testing.",2009,0, 5972,Tree Topology Based Fault Diagnosis in Wireless Sensor Networks,"In order to improve the energy efficiency of fault diagnosis in wireless sensor networks, we propose a tree topology based distributed fault diagnosis algorithm.
The algorithm maintains a high node fault detection rate and a low false alarm rate in wireless sensor networks under low node distribution density. First, the algorithm finds a good node with a multi-layer detection method; then it detects the status of other nodes by means of their status relation with the good node, which is inferred from the parent-child relations in the tree topology, and thus achieves fault detection for the whole network. In a ZigBee tree network, the simulation results show that the algorithm performs better in energy efficiency and identifies the faulty sensors with high accuracy and robustness.",2009,0, 5973,Toward an Energy-based Error Criterion for Adaptive Meshing,"In order to improve the finite element modeling of macroscopic eddy-currents (associated with motion and/or a time-varying electrical excitation), an original error criterion for adaptive meshing, based on local power conservation, is proposed.",2006,0, 5974,Master-Slave TMR Inspired Technique for Fault Tolerance of SRAM-Based FPGA,"In order to increase the reliability and availability of static-RAM-based field programmable gate arrays (SRAM-based FPGAs), several methods of tolerating defects and permanent faults have been developed and applied. These methods are not well adapted to handling high fault rates in SRAM-based FPGAs. In this paper, both single and double faults affecting configurable logic blocks (CLBs) are addressed. We have developed a new fault-tolerance technique that capitalizes on the partial reconfiguration capabilities of SRAM-based FPGAs. The proposed fault-tolerance method is based on triple modular redundancy (TMR) combined with a master-slave technique, and exploits partial reconfiguration to tolerate permanent faults. Simulation results on reliability improvement corroborate the efficiency of the proposed method and prove that it compares favorably to previous methods.",2010,0, 5975,Research on the design of software fault tolerance based on RTEMS,"In order to overcome the shortcoming that RTEMS lacks an effective software fault-tolerance mechanism, this paper proposes an approach that adds a task fault-tolerance module to the application service layer. Based on the task scheduling algorithm of RTEMS, the proposed approach achieves fault tolerance by arranging the rescheduling times of faulty tasks, which can recover the tasks from transient faults and degrade the system when permanent faults happen. The validity of the proposed approach as applied to RTEMS is verified theoretically by Markov analysis and through a case study.",2010,0, 5976,Automated fault tree generation and risk-based testing of networked automation systems,"In the manufacturing automation domain, safety and availability are the most important factors in ensuring productivity. In modern software-intensive networked automation systems it has become quite hard to determine which non-functional requirements are related to these factors, as well as whether they are satisfied or not. This is due to the prevalence of manual effort in several analysis phases, where the complexity of the system often makes it hard to obtain a comprehensive overview and thus difficult to ascertain the presence of certain undesired consequences. Since design, development and subsequent verification and validation activities are largely dependent upon the results of these analyses, the product is strongly affected. To address these problems, automated fault tree generation is presented in this paper.
It uses distinct modeling artifacts and information to automatically compose formal models of the system. Embedding hardware and network failures, it is then ascertained through model checking whether the system satisfies certain safety and availability properties or not. This information is used to compose the fault tree. The proposed approach will improve completeness and correctness in fault trees and will consequently help in improving the quality of the system. Furthermore, it is also shown how the artifacts of this analysis can be used to produce test goals and test cases to validate the software constituents of the system and assure traceability between testing activities and safety requirements.",2010,0, 5977,Increase of fault ride-through capability for the grid-connected wind farms,"In many countries, the high level of penetration of wind energy in power systems requires revisiting the grid connection standards in terms of impact on transient voltage stability. While the presently used common practice is to disconnect the wind turbines from the grid immediately when a fault occurs somewhere in the grid, in the future the wind turbines may be required to stay connected longer and ride through part or all of the fault transient(s). To achieve easier grid integration and reliable voltage control, active control of wind turbines is becoming an area of increasing importance. This paper presents a computer model of a multi-turbine wind energy system that is based on the candidate wind farm site on Vancouver Island, Canada. A new voltage control scheme is proposed and compared to the traditional modes of wind turbine operation. The simulation studies demonstrate the enhancement provided by the proposed controller during a fault ride-through transient.",2006,0, 5978,Design for soft error mitigation,"In nanometric technologies, circuits are increasingly sensitive to various kinds of perturbations. Soft errors, a concern for space applications in the past, have become a reliability issue at ground level. Alpha particles and atmospheric neutrons induce single-event upsets (SEU), affecting memory cells, latches, and flip-flops, and single-event transients (SET), initiated in the combinational logic and captured by the latches and flip-flops associated with the outputs of this logic. To face this challenge, a designer must have at her or his disposal a variety of soft error mitigation schemes adapted to various circuit structures, design architectures, and design constraints. In this paper, we describe various SEU and SET mitigation schemes that could help the designer meet her or his goals.",2005,0, 5979,Improving Bug Assignment with Bug Tossing Graphs and Bug Similarities,"In open-source software development, a bug report is usually assigned to a developer for bug fixing. A large number of bug reports are tossed (reassigned) to other developers, for example because the bugs have been assigned by mistake. These tossing events increase bug-fix time. In order to quickly identify the fixer for a bug report, we present an approach based on the bug tossing history and textual similarities between bug reports. The proposed approach is evaluated on Eclipse and Mozilla.
The results show that our approach can significantly improve the efficiency of bug assignment: the bug resolver is often identified with fewer tossing events.",2010,0, 5980,Adaptive residual current compensation for robust fault-type selection in mho elements,"In order to measure the impedance accurately, the residual current for impedance measurement in a ground distance relay is compensated in a fixed manner. It has been shown that this compensation manner may have a negative impact on phase selection in mho elements in a single-pole trip scheme. A new residual current compensation method for mho elements, based on the magnitude of the current of each phase, is presented in this paper. The performance of this new method is analyzed under all kinds of fault conditions. Simulation studies are carried out to investigate the fault-component impedance relay based on the new compensation method in a typical transmission system. The relay has shown satisfactory performance under various fault conditions, especially for close-in ground faults and cross-country faults. This new compensation method can also be used in phase selectors based on impedance measurement.",2005,0, 5981,Study on large-scale rotating machinery fault intelligent diagnosis multi-agent system,"In order to adapt to the complex task of intelligent equipment diagnosis, and based on the advantages of multi-agent systems, a study on a multi-agent system for intelligent fault diagnosis of large-scale rotating machinery is proposed. In the study, the system breaks through the traditional, unalterable multi-algorithm fusion model and shows the social intelligence of multiple agents through its ability to organize itself, to learn, and to resolve diagnosis problems by dynamically negotiating among multiple agents according to the actual characteristics of the device signal. The main contributions are as follows. Firstly, a rational decomposition of the diagnosis task is proposed; from this part, a distinctive diagnosis task decomposition method is given, showing which intelligent diagnosis methods are suited to completing the diagnosis successfully. Secondly, the design of the large-scale rotating machinery intelligent diagnosis multi-agent system is introduced, covering how to design the system structure and how the system completes the dynamic negotiation and self-learning processes efficiently. Thirdly, an example of the system is presented, which proves that the system can perform its function efficiently.",2008,0, 5982,Fault Data Mining on the Encoders in Numeric Control System Based on the Information Redundancy of Velocity,"In order to raise the safety and reliability of numerical control systems, and focusing on the encoders conventionally used in such systems as the feedback devices in the position loop, the mining of their fault data is analyzed. Benefiting from the redundant velocity information in the normal servo system, principal component analysis is introduced to solve the faults of lost codes and paused codes in the encoder. At the same time, the basic principles and process flow are presented. To validate the effectiveness of the mining approach, simulation based on the Matlab software shows that it can perform a very exact diagnosis at the time a fault occurs.
This idea provides a reliable way of mining fault data in numerical control systems.",2007,0, 5983,Angle Domain Average and Autoregressive Spectrum Analysis Based Gear Faults Diagnosis,"In order to process the non-stationary vibration signals during the run-up of a gearbox, a method based on angle domain averaging and autoregressive spectrum analysis is presented. This new method combines the angle domain average technique with autoregressive spectrum analysis. Firstly, the vibration signal is sampled at constant time increments and software is then used to resample the data at constant angle increments. Secondly, the angle domain signal is preprocessed using the angle domain average technique in order to eliminate unrelated noise. In the end, the averaged signals are processed by autoregressive spectrum analysis. The experimental results show that the proposed method can effectively detect gear crack faults.",2009,0, 5984,Information entropy-based Clustering Algorithm for Rapid Software Fault Diagnosis,"In order to rapidly diagnose and locate faults, we present ICARSFD (information entropy-based clustering algorithm for rapid software fault diagnosis). In this paper, the average entropy and the total entropy are defined to guide the clustering operation over fault modes. The algorithm first stores the related information of existing faults in the form of a fault tree, and deems each fault an initial cluster. By calculating the information entropy between clusters and comparing it with the average entropy and the total entropy, fault clustering is completed. For faults inappropriate to their assigned clusters, we take a retrospective approach to re-cluster them; thereby, the dependence of the clustering result on the order of the faults can be addressed. Secondly, according to the ascending order of information entropy, the fault to be analyzed is matched against each cluster. Lastly, both the fault diagnosis results and the fault paths are output. In addition, if the fault match is not successful, the fault path is identified through the fault tree, and the clustering results are updated later. The experimental results demonstrate that ICARSFD has both good clustering and good detection performance.",2009,0, 5985,Research on Neugebauer equation correction algorithm,"In order to realize accurate colorimetric characterization of an output device, several different correction algorithm models were evaluated. There are three Neugebauer equation correction methods in common use, namely: the dot gain correction, the Yule-Nielsen exponential correction, and the Cellular correction. This article analyzes these correction methods of the Neugebauer equation, and puts forward a new correction method that uses the exponential correction method while considering the impact of dot gain. An identical output device was colorimetrically characterized with these algorithm models utilizing IT8.7/3 as color targets.
Experimental results showed that this new correction method can improve the computational accuracy of the Neugebauer equation. When used to create an output profile, it can significantly improve the accuracy of the profile, and the calculation process is simple and convenient to implement.",2010,0, 5986,Comprehensive fault evaluation on maglev train based on ensemble learning algorithm,"In order to realize comprehensive evaluation of faults occurring in maglev trains, and aiming at the difficulty of establishing the evaluation weight matrix and membership matrix parameters, a fuzzy comprehensive evaluation method based on an ensemble learning algorithm is proposed. First, the structure of the suspension system of the maglev train is analyzed and a fault diagnosis model is built. Then, ensemble learning is introduced to give the model learning ability. At last, this method is applied to fault evaluation of the maglev train suspension system. In comparison to single and integrated classification methods, the simulation results prove that the ensemble method works better on this problem; the advantage of the ensemble learning algorithm is manifest, and practice has proved that this method meets the precision demand.",2009,0, 5987,Content adaptive intra error concealment method,"In order to restore lost information in video communication, a content adaptive intra error concealment method is proposed in this paper. Different images and different packet loss rates of the video sequences are used in the experiment. The experimental results show that the improved method has better subjective quality and higher PSNR than the bilinear interpolation and the semi-adaptive error concealment methods.",2010,0, 5988,The Vibration Parameter Fault Diagnosis Cloud Model for Automobile Engine Based on ANFIS,"In order to solve the vibration-parameter fault diagnosis problem, an Adaptive Neuro-Fuzzy Inference System (ANFIS) was applied to build a fault diagnosis model of an automobile engine and to induce a cloud model whose output results are continuous. Through verification of the built diagnosis model with engine test data, it has been found that the recognition accuracy increases from 88.75% to 99.68%, with the training error falling from 0.001683 to 0.0011526. Simulation results show that the fitting ability, convergence speed, and recognition accuracy of the improved ANFIS model are all superior to those of ANFIS. So a contingent fault of an automobile engine can be identified effectively.",2010,0, 5989,Design of Intelligentized Examination System on Fault Detection of Home Appliances,"In order to overcome the various shortcomings of the traditional approaches, this paper proposes an intelligentized fault detection and examination system for home appliances. The main idea is that, by using RS-485 network technology, simulation and database technology, embedded control technology, etc., and through machine integration, training and examination in home appliance maintenance are achieved. The system, which is characterized by high efficiency, low cost, and fairness in the examination, provides a good platform for teachers as well as students in the training.",2008,0, 5990,Studying Experiment Teaching in Error Theory and Data Processing with MATLAB,"In order to strengthen practice, MATLAB software is introduced into the course of error theory and data processing as a teaching assistance method.
Based on its formidable calculating functions, graph handling ability, and rich toolboxes, computer experiments have been set up. This improves the students' creative capability and stimulates learning activity: because the abstract theory becomes intuitive and vivid, the teaching effect is improved.",2009,0, 5991,Installation Error Dynamic Correction Based on Quasi-Linear Model,"In order to suppress the installation errors of a photoelectric payload and increase the precision of target location using a photoelectric payload mounted on an aerial mobile platform, a new method of dynamically calibrating installation errors is proposed based on a quasi-linear model. The method analyzes the relationship between the measurement data and the true value and establishes a quasi-linear relation between them under certain conditions. The new method is used to dynamically calibrate the installation errors of the photoelectric payload in real time, which effectively improves the precision of target location. Both the simulation and the experiment prove the validity of the method. It is easy to calculate and to put into practice. Furthermore, the new method meets real-time requirements and is of high practical value.",2010,0, 5992,Error Resilient Decoding of JPEG2000,"In this letter, we investigate the error resilience properties of JPEG2000. We identify the dependencies among coding passes of a codeblock codestream, and determine the sections of the codestream that can be salvaged in the presence of errors. In our analysis, we consider the effects of mode variations provided by the standard. The proposed methods are derived using the existing dependency structure of coding passes and do not require a substantial increase in computational complexity of the decoder. Experimental results indicate that the proposed methods can improve the error resilience performance substantially.",2007,0, 5993,Techniques for early stopping and error detection in turbo decoding,"In this letter, we present three new criteria for early stopping and error detection in turbo decoding. The approaches are based on monitoring the mean of the absolute values of the log-likelihood ratio of the decoded bits, which we show to be directly related to the variance of the metachannel. We demonstrate that this mean value increases as the number of errors in a frame decreases, and as a result, propose the simple mean-estimate criterion. We show that the systematic component of a terminated recursive systematic convolutional encoder used in turbo codes provides a built-in cyclic redundancy check (CRC). To further improve the performance, we also propose the mean-sign-change (MSC) criterion and the MSC-CRCeb criterion, in which a short external CRC code and the built-in CRC are concatenated with the MSC criterion.",2003,0, 5994,A Fault Tolerant Control strategy for an unmanned aerial vehicle based on a Sequential Quadratic Programming algorithm,"In this paper a fault tolerant control strategy for the nonlinear model of an unmanned aerial vehicle (UAV) equipped with numerous redundant controls is proposed. Asymmetric actuator failures are considered and, in order to accommodate them, a sequential quadratic programming (SQP) algorithm which takes into account nonlinearities, aerodynamic and gyroscopic couplings, and state and control limitations is implemented. This algorithm computes new trims such that, around the new operating point, the faulty linearized model remains close to the fault-free model.
For the faulty linearized models, linear state feedback controllers based on an eigenstructure assignment method are designed to obtain smooth transients during accommodation. Real-time implementation of the SQP algorithm is also discussed.",2008,0, 5995,High Impedance Fault Detection Using Harmonics Energy Decision Tree Algorithm,"In this paper a new pattern-recognition-based algorithm is presented to detect high impedance faults in distribution networks. In this method the total energy of odd, even, and in-between harmonics up to 400 Hz is calculated and fed to a decision tree as a classifier. The proposed scheme can successfully distinguish HIFs from normal operations in the power system such as harmonic load switching, capacitor switching, and transformer energization. The results show the high accuracy of the proposed method in the detection task.",2006,0, 5996,A decision tree based method for fault classification in transmission lines,"In this paper a novel and accurate method is proposed for fault classification in transmission lines. The method is based on a decision tree and takes the 50 Hz to 950 Hz phasors of voltages and currents from one end of the line as the inputs. The method is applied to a 400 kV transmission line, and the results showed the highest possible accuracy within less than a quarter of a cycle after fault inception.",2008,0, 5997,On effective use of reliability models and defect data in software development,"In software technology today, several development methodologies such as extreme programming and open source development increasingly use feedback from customer testing. This makes customer defect data more readily available. This paper proposes an effective use of reliability models and defect data to help managers make software release decisions by applying a strategy for selecting a suitable reliability model which best fits the customer defect data as testing progresses. We validate the proposed approach in an empirical study using a dataset of defect reports obtained from testing of three releases of a large medical system. The paper describes detailed results of our experiments and concludes with suggested guidelines on the usage of reliability models and defect data.",2006,0, 5998,Investigating effects of neutral wire and grounding in distribution systems with faults,"In some applications, such as fault analysis, fault location, power quality studies, safety analysis, and loss analysis, knowing the neutral wire and ground currents and voltages could be of particular interest. In order to investigate the effects of neutrals and system grounding on the operation of distribution feeders with faults, a hybrid short circuit algorithm is generalized. In this novel use of the technique, the neutral wire and assumed ground conductor are explicitly represented. Results obtained from several case studies using the IEEE 34-node test network are presented and discussed.",2004,0, 5999,On-line detection of control-flow errors in SoCs by means of an infrastructure IP core,"In sub-micron technology circuits, high integration levels coupled with the increased sensitivity to soft errors even at ground level make the task of guaranteeing systems' dependability more difficult than ever. In this paper we present a new approach to detecting control-flow errors by exploiting a low-cost infrastructure intellectual property (I-IP) core that works in cooperation with software-based techniques.
The proposed approach is particularly suited when the system to be hardened is implemented as a system-on-chip (SoC), since the I-IP can be added easily and is independent of the application. Experimental results are reported showing the effectiveness of the proposed approach.",2005,0, 6000,Case study: How analysis of customer found defects can be used by system test to improve quality,"In the context of long-term, large-scale, industrial software development, process improvement and measurement to support process improvement are a necessity. Our software projects face the ongoing challenge, “How can we reduce the number of customer-found defects in a cost-effective manner?” This paper describes how the system test team of a large telecommunications company approached this challenge. Customer-found defects were analyzed in a systematic way to provide information that system testers could turn into actionable activities in a test lab. Our findings indicated that most defects found by customers were associated with existing feature usage in various configurations, while our test focus had primarily been on new feature operation. We categorized the information from our analysis using a technique called minimum conditions. Then these data, together with an exploratory test method, were used to supplement existing test processes. The effort resulted in a $750,000 savings over two product releases.",2009,0, 6001,Error types in the computer-aided translation of tourism texts,"In the European-funded project MIS under the MLIS programme, the authors attempted a computer-driven translation package for tourism texts in 5 languages. It was believed such a package would be possible due to the highly formulaic language of tourism brochures and business communication in this sector, which ought to allow translation equivalences at the phrase or sentence level to be identified and used. While this proved to be broadly true, the multilanguage format produced errors of agreement and ordering normally avoided by human and computer translators working between two languages. Even at the phrase and sentence level, problems of interlanguage equivalence persisted. Awareness of these problems should aid the planning of translation packages of this kind so errors can be avoided",2000,0, 6002,Mean squared error: Love it or leave it? A new look at Signal Fidelity Measures,"In this article, we have reviewed the reasons why we (collectively) want to love or leave the venerable (but perhaps hoary) MSE. We have also reviewed emerging alternative signal fidelity measures and discussed their potential application to a wide variety of problems. The message we are trying to send here is not that one should abandon use of the MSE nor blindly switch to any other particular signal fidelity measure. Rather, we hope to make the point that there are powerful, easy-to-use, and easy-to-understand alternatives that might be deployed depending on the application environment and needs. While we expect (and indeed, hope) that the MSE will continue to be widely used as a signal fidelity measure, it is our greater desire to see more advanced signal fidelity measures being used, especially in applications where perceptual criteria might be relevant. Ideally, the performance of a new signal processing algorithm might be compared to other algorithms using several fidelity criteria.
Lastly, we hope that we have given further motivation to the community to consider recent advanced signal fidelity measures as design criteria for optimizing signal processing algorithms and systems. It is in this direction that we believe the greatest benefit eventually lies.",2009,0, 6003,Outage performance of MRT with unequal-power co-channel interference and channel estimation error,"In this letter we investigate the outage performance of maximal ratio transmission (MRT) with unequal-power co-channel interference (CCI) and channel estimation error. The exact expression for the outage probability is presented. Our results are applicable to MRT systems with arbitrary numbers of transmit and receive antennas.",2007,0, 6004,An election based approach to fault-tolerant group membership in collaborative environments,"In this paper we present a novel approach to fault-tolerant group membership for use predominantly in collaborative computing environments. As an exemplar, we use the Collaborative Computing Transport Layer, which offers reliable atomic multicast capabilities for use in collaborative environments such as the Collaborative Computing Frameworks (CCF). Specific design goals of the approach are the elimination of processing overhead due to heartbeats, support for partial failures, and extensibility. These goals are satisfied in an approach which uses an IP multicast failure detector and two election based algorithms. By basing failure detection on IP multicast, the need for explicit keep-alive packets is removed; thus in the absence of failures the approach imposes no overhead",2001,0, 6005,A reconfigurable routing algorithm for a fault-tolerant 2D-Mesh Network-on-Chip,"In this paper we present a reconfigurable routing algorithm for a 2D-mesh network-on-chip (NoC) dedicated to fault-tolerant, massively parallel multi-processor systems on chip (MP2-SoC). The routing algorithm can be dynamically reconfigured to adapt to the modification of the micro-network topology caused by a faulty router. This algorithm has been implemented in a reconfigurable version of the DSPIN micro-network, and evaluated from the point of view of performance (penalty on the network saturation threshold) and cost (extra silicon area occupied by the reconfigurable version of the router).",2008,0, 6006,Adaptive MAP error concealment for dispersively packetized images,"In this paper we present an adaptive maximum a posteriori (MAP) error concealment algorithm for dispersively packetized wavelet-coded images. We model the subbands of a wavelet-coded image as Markov random fields, and use the characteristics of subband/wavelet samples to adapt the potential functions locally. The resulting MAP estimation gives PSNR advantages of up to 0.7 dB compared to the competing algorithms. Visual quality of the reconstructed images is also improved.",2004,0, 6007,Mapping of Fault-Tolerant Applications with Transparency on Distributed Embedded Systems*,"In this paper we present an approach for the mapping optimization of fault-tolerant embedded systems for safety-critical applications. Processes and messages are statically scheduled. Process re-execution is used for recovering from multiple transient faults. We call process recovery transparent if it does not affect the operation of other processes. Transparent recovery has the advantages of fault containment, improved debuggability, and less memory needed to store the fault-tolerant schedules.
However, it will introduce additional delays that can lead to violations of the timing constraints of the application. We propose an algorithm for the mapping of fault-tolerant applications with transparency. The algorithm decides a mapping of processes on computation nodes such that the application is schedulable and the transparency properties imposed by the designer are satisfied. The mapping algorithm is driven by a heuristic that is able to estimate the worst-case schedule length and indicate whether a certain mapping alternative is schedulable",2006,0, 6008,Scheduling of Fault-Tolerant Embedded Systems with Soft and Hard Timing Constraints,"In this paper we present an approach to the synthesis of fault-tolerant schedules for embedded applications with soft and hard real-time constraints. We are interested in guaranteeing the deadlines for the hard processes even in the case of faults, while maximizing the overall utility. We use time/utility functions to capture the utility of soft processes. Process re-execution is employed to recover from multiple faults. A single static schedule computed off-line is not fault tolerant and is pessimistic in terms of utility, while a purely online approach, which computes a new schedule every time a process fails or completes, incurs an unacceptable overhead. Thus, we use a quasi-static scheduling strategy, where a set of schedules is synthesized off-line and, at run time, the scheduler selects the right schedule based on the occurrence of faults and the actual execution times of processes. The proposed schedule synthesis heuristics have been evaluated using extensive experiments.",2008,0, 6009,Adaptive noise canceller using LMS algorithm with codified error in a DSP,"In this paper we present an implementation of a digital adaptive filter on the TMS320C6713 digital signal processor, using a variant of the LMS algorithm which consists of error codification; thus the speed of convergence is increased and the design complexity of its implementation in digital adaptive filters is reduced, because the resulting codified error is composed of integer values. The LMS algorithm with codified error (ECLMS) was tested in an environmental noise canceller, and the results demonstrate an increase in the convergence speed and a reduction of the processing time.",2009,0, 6010,Fault-tolerant distributed mass storage for LHC computing,"In this paper we present the concept and first prototyping results of a modular fault-tolerant distributed mass storage architecture for large Linux PC clusters as they are deployed by the upcoming particle physics experiments. The device masquerading technique using an Enhanced Network Block Device (ENBD) enables local RAID over remote disks as the key concept of the ClusterRAID system. The block level interface to remote files, partitions or disks provided by the ENBD makes it possible to use the standard Linux software RAID to add fault-tolerance to the system. Preliminary performance measurements indicate that the latency is comparable to that of a local hard drive. With four disks, throughput rates of up to 55 MB/s were achieved with first prototypes for a RAID0 setup, and about 40 MB/s for a RAID5 setup.",2003,0, 6011,Real-Time Detection of Solder-Joint Faults in Operational Field Programmable Gate Arrays,"In this paper we present two sensors for real-time detection of solder-joint faults in programmed, operational field programmable gate arrays (FPGAs), especially those FPGAs in ball grid array (BGA) packages.
The first sensor uses a method in-situ within the FPGA and the second sensor uses a method external to the FPGA. Initial testing indicates the first method is capable of detecting high-resistance faults of 100 Ω or lower that last one-half a clock period or longer. A prototype of the second method detected high-resistance faults of at least 150 Ω that lasted as little as 25 ns.",2007,0, 6012,Fault tolerance through automatic cell isolation using three-dimensional cellular genetic algorithms,"In this paper we propose a new algorithmic approach to achieve fault tolerance based on three-dimensional cellular genetic algorithms (3D-cGAs). Herein, a 3D architecture is targeted due to its amenability to implementation with current advanced custom silicon chip technology. The proposed approach is designed to exploit the inherent features of a cGA, in which the genetic diversity is used as the key factor in identifying and isolating faulty individuals. A new migration scheme is proposed as a mitigation technique. Several configurations concerning migration and selection intensity are considered. The approach is tested using four benchmark test functions and two real-world problems which present different levels of difficulty. The overall results show that the proposed approach is able to cope with up to 40% soft errors (SEUs).",2010,0, 6013,Error concealment for MVC and 3D video coding,"In this paper we propose a novel approach to error concealment that can be applied to MVC and other 3D video coding technologies. The image content that is lost due to errors is recovered with the use of multiple error-concealment techniques. In our work we have used three techniques: well-known temporal- and intra-based techniques and a novel inter-view technique. The proposed inter-view recovery employs Depth Image Based Rendering (DIBR), which requires neighboring views and corresponding depth maps. Those depth maps can be delivered in the bit-stream or estimated in the receiver. In order to obtain the final reconstruction, the best technique is selected locally. For that, an original recovery quality measurement method, based on cross-checking, has been proposed. The idea has been implemented and assessed experimentally, with use of 3D video test sequences. The objective and subjective results show that the proposed approach provides good quality of reconstructed video.",2010,0, 6014,Information theoretic fault detection,"In this paper we propose a novel method of fault detection based on a clustering algorithm developed in the information theoretic framework. A mathematical formulation for a multi-input multi-output (MIMO) system is developed to identify the most informative signals for fault detection using mutual information (MI) as the measure of correlation among various measurements on the system. This is a model-independent approach for fault detection. The effectiveness of the proposed method is successfully demonstrated by employing the MI-based algorithm to isolate various faults in a 16-cylinder diesel engine in the form of distinct clusters.",2005,0, 6015,Design of H-infinity correction filters for cascaded sigma delta modulators,"In this paper we propose a robust correction scheme to tackle the well-known problem of analog mismatch errors encountered in cascaded sigma delta modulators. The problem is cast in an H-infinity setting and solved using standard robust control tools. Guaranteed worst-case performance is obtained and implementation overhead is minimised.
The approach is developed for the 1-1-1 cascaded architecture and results are provided for the slightly simpler 2-1 cascade scheme",2003,0, 6016,Analysis and optimization of fault-tolerant embedded systems with hardened processors,"In this paper we propose an approach to the design optimization of fault-tolerant hard real-time embedded systems, which combines hardware and software fault tolerance techniques. We trade off between selective hardening in hardware and process re-execution in software to provide the required levels of fault tolerance against transient faults with the lowest possible system costs. We propose a system failure probability (SFP) analysis that connects the hardening level with the maximum number of re-executions in software. We present design optimization heuristics to select the fault-tolerant architecture and decide process mapping such that the system cost is minimized, deadlines are satisfied, and the reliability requirements are fulfilled.",2009,0, 6017,Edge-Directed Error Concealment,"In this paper we propose an edge-directed error concealment (EDEC) algorithm to recover lost slices in video sequences encoded by flexible macroblock ordering. First, the strong edges in a corrupted frame are estimated based on the edges in the neighboring frames and the received area of the current frame. Next, the lost regions along these estimated edges are recovered using both spatial and temporal neighboring pixels. Finally, the remaining parts of the lost regions are estimated. Simulation results show that, compared to the existing boundary matching algorithm [1] and the exemplar-based inpainting approach [2], the proposed EDEC algorithm can reconstruct the corrupted frame with both a better visual quality and a higher decoder peak signal-to-noise ratio.",2010,0, 6018,Defect tolerance in hybrid nano/CMOS architecture using tagging mechanism,"In this paper we propose two efficient repair techniques for hybrid nano/CMOS architectures to provide a high level of defect tolerance at a modest cost. We have applied the proposed techniques to a lookup table (LUT) based Boolean logic approach. The proposed repair techniques are efficient in the utilization of spare units and viable for various Boolean logic implementations. We show that the proposed techniques are capable of handling up to 20% defect rates in hybrid nano/CMOS architectures and up to 14% defect rates for large ISCAS'85 benchmark circuits synthesized into smaller sized LUTs.",2009,0, 6019,Analysis of the characteristic about the Honghe active fault zone based on the ETM+ remote sensing images,"In this paper we study the Honghe fault zone from the viewpoint of its large-scale extent and the association between the Honghe fault and the Xiaojiang fault. In our study we put emphasis upon methodological research on the application of Digital Elevation Model (DEM) and GIS data. By utilizing the DEM we perform the slope angle analysis, which yields valuable results. In addition, directional filtering is also proved to be effective",2004,0, 6020,New results for fault detection of untimed continuous Petri nets,"In this paper we study fault diagnosis of systems modeled by untimed continuous Petri nets. In particular, we generalize our previous works in this framework, where we solved this problem only for special classes of continuous Petri nets, namely state machines and backward conflict free nets. We show that the price to pay for this generalization is that only three diagnosis states can be defined, rather than four.
However, this is not a significant restriction because it is in accordance with all the literature on finite state automata.",2009,0, 6021,Toward accurate modeling of the IEEE 802.11e EDCA under finite load and error-prone channel,"In this paper we study the performance of IEEE 802.11e enhanced distributed channel access (EDCA) priority schemes under finite load and an error-prone channel. We introduce a multi-dimensional Markov chain model that includes all the mandatory differentiation mechanisms of the standard: the QoS parameters CWMIN and CWMAX, the arbitration inter-frame space (AIFS), and the virtual collision handler. The model faithfully represents the functionality of the EDCA access mechanisms, including lesser-known details of the standard such as the management of the backoff counter, which is technically different from the one used in the legacy DCF. We study the priority schemes under both finite load and saturation conditions. Our analysis also takes channel conditions into consideration.",2008,0, 6022,A Method for an Accurate Early Prediction of Faults in Modified Classes,"In this paper we suggest and evaluate a method for predicting fault densities in modified classes early in the development process, i.e., before the modifications are implemented. We start by establishing the methods that, according to the literature, are considered the best for predicting fault densities of modified classes. We find that these methods cannot be used until the system is implemented. We suggest our own methods, which are based on the same concept as the methods suggested in the literature, with the difference that our methods are applicable before the coding has started. We evaluate our methods using three large telecommunication systems produced by Ericsson. We find that our methods provide predictions that are of similar quality to the predictions based on metrics available after the code is implemented. Our predictions are, however, available much earlier in the development process. Therefore, they enable better planning of efficient fault prevention and fault detection activities",2006,0, 6023,A geometrical approach for online error compensation of industrial manipulators,"In this paper, a comprehensive online error compensation approach using offline calibration results is proposed for industrial manipulators (with closed control architecture) in order to improve their accuracy. The contents of this paper include a calibration algorithm based on the product-of-exponentials formula, an online error compensation procedure for implementing the calibration results on industrial manipulators, and an experimental study that is conducted on a 6-DOF industrial manipulator from ABB (Model: IRB-4400) with a laser tracker system from Leica (Model: AT901-MR) and its Tracker-Machine control sensor. After implementing the proposed online error compensation approach, the accuracy of the ABB industrial manipulator improved to 0.3 mm, thus verifying the effectiveness of the proposed error compensation approach.",2010,0, 6024,Fault-tolerant five-phase permanent magnet motor drives,"In this paper, a control strategy that provides fault tolerance to five-phase permanent magnet motors is introduced. In this scheme, the five-phase permanent magnet (PM) motor continues operating safely under the loss of up to two phases without any additional hardware connections. This feature is very important in traction and propulsion applications where high reliability is of major importance.
Five-phase PM motors with both sinusoidal and quasi-rectangular back-EMFs have been considered. To obtain the new set of phase currents to be applied to the motor during faults in stator phases or inverter legs, the torque-producing MMF of the stator is kept constant under healthy and faulty conditions for both cases. Simulation and experimental results are provided to verify that the five-phase motor continues to operate steadily under faulty conditions.",2004,0, 6025,A D-shaped defected patch antenna with enhanced bandwidth,"In this paper, a D-shaped defected patch antenna printed on a microwave substrate is proposed for increasing the bandwidth of a microstrip antenna. A second resonant frequency is introduced to achieve a wider impedance bandwidth with the D-shaped defected structure. The effects of the proposed antenna structure parameters are studied in detail. By simulation software, we obtained the dimensions of an antenna operating at around 2.4 GHz. The measured parameters and simulation results show that the proposed antenna can attain an impedance bandwidth that is much broader than that of the traditional rectangular patch antenna.",2009,0, 6026,Dual-mode bandpass filter using defected ground structures for coupling enhancement and harmonic suppression,"In this paper, a dual-mode square-ring bandpass filter with defected ground structures (DGS) is proposed. By employing the DGS, the coupling is enhanced and the insertion loss is reduced effectively. Harmonic suppression is achieved by loading DGS structures under the input/output lines. Good agreement between the experimental and simulation results is observed.",2008,0, 6027,Image steganalysis based on moments of characteristic functions using wavelet decomposition, prediction-error image, and neural network,"In this paper, a general blind image steganalysis system is proposed, in which the statistical moments of the characteristic functions of the prediction-error image, the test image, and their wavelet subbands are selected as features. An artificial neural network is utilized as the classifier. The performance of the proposed steganalysis system is significantly superior to the prior arts.",2005,0, 6028,A Checkpointing Technique for Rollback Error Recovery in Embedded Systems,"In this paper, a general checkpointing technique for rollback error recovery in embedded systems is proposed and evaluated. This technique is independent of the processor used and employs the most important feature of control flow error detection mechanisms to simplify checkpoint selection and to minimize the overall code overhead. In this way, during the implementation of a control flow checking mechanism, the checkpoints are added to the program. To evaluate the checkpointing technique, a pre-processor is implemented that selects and adds the checkpoints to three workload programs running in an 8051 microcontroller-based system. The evaluation is based on 3000 experiments for each checkpoint.",2006,0, 6029,Error and Rate Joint Control for Wireless Video Streaming,"In this paper, a precise error-tracking scheme for robust transmission of real-time video streaming over wireless IP networks is presented. By utilizing negative acknowledgements from the feedback channel, the encoder can precisely calculate and track the propagated errors by examining the backward motion dependency. With this precise tracking, the error-propagation effects can be terminated completely by INTRA refreshing the affected macroblocks.
In addition, because refreshing many INTRA macroblocks entails a large increase in the output bit rate of a video encoder, several bit rate reduction techniques are proposed. They can be used jointly with the INTRA refresh scheme to obtain uniform video quality instead of only changing the quantization scale. The simulations show that both control strategies yield significant video quality improvements in error-prone environments",2006,0, 6030,Reversible Data Hiding Based on Histogram Shifting of Prediction Errors,"In this paper, a reversible data hiding scheme based on histogram-shifting of prediction errors (HSPE) is proposed. Two-stage structures, the prediction stage and the error modification stage, are employed in our scheme. In the prediction stage, the value of each pixel is predicted, and the error of the predicted value is obtained. In the error modification stage, the histogram-shifting technique is used to prepare vacant positions for embedding data. The peak signal-to-noise ratio (PSNR) of the stego image produced by HSPE is guaranteed to be above 48 dB, while the embedding capacity is, on average, 4.74 times higher than that of the well-known technique of Ni et al. with the same PSNR. Besides, the stego image quality produced by HSPE is 7.99 dB higher than that of Ni et al.'s method under the same embedding capacity. Experimental results indicate that the proposed data hiding scheme outperforms the prior works not only in terms of larger payload but also in terms of stego image quality.",2008,0, 6031,A Robust Fault Detection and Isolation Method in Load Frequency Control Loops,"In this paper, a robust sensor and actuator fault detection and isolation (FDI) method based on unknown input observers (UIO) is adopted and applied to the load frequency control loops of interconnected power systems. Changes in load demand are regarded as the unknown disturbances in the system, and the designed UIOs, and thus the proposed method, are robust to these disturbances. Using different selected sets of measured variables in the UIO design, simulations are performed for the dynamical model of a power control system composed of two areas. As we distinguish the cases of successful and unsuccessful sensor and controller FDI, it is shown that the proposed scheme is able to detect and isolate sensor and controller faults for proper selections of measured variables. By using the residuals generated by the UIOs, the designed “fault detection and isolation logic” system shows the operator which sensor or controller is faulty. Hence the faulty sensor or controller can be replaced by a healthy one for more reliable operation.",2008,0, 6032,Analysis and design of SEPIC converter in boundary conduction mode for universal-line power factor correction applications,"In this paper, a SEPIC converter operated in boundary conduction mode for power factor correction applications with arbitrary output voltage is proposed, analyzed, and designed. By developing an equivalent circuit model for the coupled inductor structure, a SEPIC converter with or without coupled inductors (and ripple current steering) can be analyzed and designed in a unified framework. Power factor correction under boundary conduction mode operation can be achieved conveniently using a simple commercially available control IC.
Experimental results are provided to validate the circuit design",2001,0, 6033,A Novel Fault Detection of an Open-Switch Fault in the NPC Inverter System,"In this paper, a simple fault detection scheme is proposed for improving reliability under an open-switch fault of a three-level neutral point clamped (NPC) inverter. The fault of a switching device is detected by checking the change of the pole voltage in the NPC inverter. This method has the advantages of fast detection ability and a simple realization of fault detection, compared with existing methods. Reconfiguration is also performed by a two-phase control method that is used to continuously supply balanced three-phase power to the load. The proposed method minimizes the adverse influence on the load caused by a delay in fault detection. This method can also be embedded into an existing NPC inverter system as a subroutine without excessive computational effort. The proposed scheme has been verified by simulation and experimental results.",2007,0, 6034,Fault-tolerant networked control systems under varying load,"In this paper, a simulation study is made to test the fault-tolerant ability of networked machines. This ability is introduced by reallocating loads in case of controller failure. Also, the increase in machine speed for higher production is considered. The maximum speed of operation of individual machines and fault-tolerant production lines is studied. All machine networks of this study are built on top of switched Gigabit Ethernet in a star topology.",2005,0, 6035,A Software-Based Error Detection Technique Using Encoded Signatures,"In this paper, a software-based control flow checking technique called SWTES (software-based error detection technique using encoded signatures) is presented and evaluated. This technique is processor independent and can be applied to any kind of processor or microcontroller. To implement this technique, the program is partitioned into a set of blocks and the encoded signatures are assigned at compile time. At run time, the signatures are compared with the expected ones by a monitoring routine. The proposed technique is experimentally evaluated on an ATMEL MCS51 microcontroller using software implemented fault injection (SWIFI). The results show that this technique detects about 90% of the injected errors. The memory overhead is about 135% on average, and the performance overhead varies between 11% and 191% depending on the workload used",2006,0, 6036,A ZigBee wireless sensor network for fault diagnosis of pump,"In this paper, a ZigBee wireless sensor network (WSN) is constructed for vibration signal acquisition of a pump, and WiFi is used to connect this network to Ethernet. An intrinsically safe, dual-mode vibration sensor is applied in the intelligent node, which has the functions of signal acquisition, state recognition, and data transmission. If the equipment is in an abnormal state, the original data are sent to the field server for further analysis and diagnosis; otherwise only the symptom parameter data are sent. Thus the load and energy consumption of the wireless sensor network are decreased. A prototype of this system is established for performance testing.
Experimental results show that the system can implement signal acquisition and transmission, and recognize abnormal states of rotating machinery in a timely and effective manner.",2010,0, 6037,An Algorithm Based Fault Tolerant Scheme for Elliptic Curve Public-Key Cryptography,"In this paper, an algorithm-based fault tolerant (ABFT) scheme for the Elliptic Curve Cryptography (ECC) public-key cipher is presented. By adding 2n+1 checksums, the proposed scheme is able to detect and correct up to three errors which may occur during the massive computation process and/or the data transmission process for a total of n^2 groups of data. Other advantages of the proposed fault tolerant scheme include: (1) it maintains almost the same throughput when no error is detected; (2) it does not require additional arithmetic units for checksum creation and error detection; (3) it can be easily implemented in software or hardware.",2009,0, 6038,Torque based selection of ANN for fault diagnosis of wound rotor asynchronous motor-converter association,"In this paper, an automatic diagnosis system was developed to detect and locate in real time the defects of a wound rotor asynchronous machine associated with an electronic converter. For this purpose, we processed the signals of the measured parameters (current and speed) to use them firstly as indicating variables of the machine defects under study and, secondly, as inputs to the Artificial Neural Network (ANN) for their classification in order to detect the type of defect in progress. Once a defect is detected, the information interpretation system gives the type of the defect and its place of appearance.",2010,0, 6039,A new NLOS error mitigation algorithm in location estimation,"In this paper, an effective technique is proposed for locating a mobile station's position when the range measurements are corrupted by non-line-of-sight (NLOS) errors. By linearizing the inequality of the range models incorporating NLOS errors and adding slack variables, the method first acquires a preliminary estimate that relies on an initial value. After addressing the effect of the linearization error and analyzing the preliminary estimate, a noniterative estimator that does not need an initial value is deduced. The method does not require knowledge of NLOS error statistics. Simulation results indicate that the proposed algorithm can restrain the NLOS errors and improve location accuracy.",2005,0, 6040,Characterization of GSM Non-Line-of-Sight Propagation Channels Generated in a Reverberating Chamber by Using Bit Error Rates,"In this paper, it is shown how a large mode-stirred reverberating chamber can be used to physically generate a set of non-line-of-sight propagation channels, which are naturally and objectively classified by means of the bit error rate (BER) norm. The experiments are accomplished in the mode-stirred reverberating chamber of the Universita di Napoli Parthenope (formerly Istituto Universitario Navale), and the electromagnetic input signal is a Global System for Mobile Communications (GSM) one at 1.8 GHz. It is shown that it is possible to change the BER by means of the stirring process and/or the chamber loading. The proposed technique calls for fast measurements, and therefore it is amenable to industrial use.
The methodology is general and suitable for any digital electromagnetic signal, provided no distortion of the modulation occurs.",2007,0, 6041,Specifications correction algorithm by the position of multicoordinate servo drive,"In this paper, mathematical computations for the positioning of a manipulator are presented. Under the real operating conditions of industrial mechanisms, it becomes necessary to correct the velocity of the specifying signals and the reading of their values as a function of the equality of the coordinates of the initial and final points of displacement between the calculated and the required motion trajectories of the mechanism's operating unit, as well as of the dynamic capabilities of the executive drive. The suggested method for forming the specified trajectories of the drive motion is universal and can be used in designing the position systems of different control mechanisms.",2003,0, 6042,A perceptual Sensitivity Based Redundant Slices Coding Scheme for Error-Resilient Transmission H.264/AVC Video,"In this paper, the redundant slices feature of the H.264/AVC codec is evaluated. In order to trade off the compression efficiency and error robustness of an H.264/AVC codec with redundant slice capability, we propose a novel perceptual sensitivity based redundant slices coding scheme. The perceptually sensitive regions are determined by using a simple yet effective perceptual sensitivity analysis technique, which analyzes both the motion and the texture structures in the original video sequence. The experimental results show that our proposed algorithm can remarkably improve the reconstructed video quality in packet-lossy networks",2006,0, 6043,Research on the remote intelligent fault diagnosis based on virtual instrument,"In this paper, some discussions on the architecture of remote intelligent fault diagnosis systems are put forward. The author proposes a system architecture based on VI, defines the system functions, and introduces how to build up the system.",2003,0, 6044,Magnetic equivalent circuit modeling of induction machines under stator and rotor fault conditions,"In this paper, stator and rotor failures in squirrel-cage induction machines are modeled using the magnetic equivalent circuit (MEC) approach. Failures associated with the stator winding and the rotor cage are considered. More specifically, stator inter-turn short circuit and broken rotor bar failures are modeled. When compared to conventional modeling techniques, the MEC modeling approach offers two main advantages: 1) relatively high speed of execution, and 2) high accuracy. The developed MEC model is validated here with respect to experimental tests and time-stepping finite-element simulations for healthy and various faulty conditions.",2009,0, 6045,3H-5 Photoacoustic Tomographic Characterization of Surface and Undersurface Simulated Defects with a Line-focus Laser Beam,"In this paper, the analyses of both surface and undersurface photoacoustic (PA) tomographic imaging with a line-focus laser beam were formulated. Since the PA signal in a solid specimen is dominantly proportional to the local absorption coefficient of the specimen α(x,y) times the laser beam intensity divided by the modulation frequency, illumination of the specimen with a line-focus laser beam in PAT corresponds to the summation of the absorption coefficient along some inclined direction θ measured from some axis (the x-axis).
A second-harmonic green laser beam of an LD-pumped YAG laser was expanded and focused on a specimen by concave and cylindrical lenses, respectively. The CT scanning (rotation and translation) of the laser beam on the specimen surface was achieved by a mechanical rotating stage and a mechanical stepping stage controlled by a computer. The size of the laser beam on a specimen was 25 mm (length) × 1 mm (width). The measured area was 26 mm × 26 mm, while the reconstructed area was 18 mm × 18 mm. Rotation and translation steps were 2π/64 and 26 mm/64, respectively. The experiments for the simulated surface defects with the shapes of a cross and a semicircle with a diameter of 13 mm resulted in reasonable reconstruction of the original images",2006,0, 6046,Diagnosis by parameter estimation of stator and rotor faults occurring in induction machines,"In this paper, the authors give a new model of squirrel-cage induction motors under stator and rotor faults. First, they study an original model that takes into account the effects of interturn faults resulting in the shorting of one or more circuits of the stator-phase winding. They thus introduce additional parameters to explain the fault in the three stator phases. Then, they propose a new faulty model dedicated to broken rotor bar detection. The corresponding diagnosis procedure, based on parameter estimation of the stator and rotor faulty model, is proposed. The estimation technique is performed by taking into account prior information available on the safe system operating in nominal conditions. A special three-phase induction machine has been designed and constructed in order to carry out true fault experiments. Experimental test results show good agreement and demonstrate the possibility of detection and localization of the previous failures.",2006,0, 6047,An error concealment scheme for entire frame losses based on H.264/AVC,"In this paper, an error concealment algorithm is proposed to conceal an entirely lost frame in an H.264/AVC compressed video bitstream. For a lost frame, the algorithm reconstructs the lost motion information of this frame based on its temporal neighbors. The algorithm is block based and thus has low computation and implementation complexity. It is designed for those hard-to-conceal video sequences which have abundant active and chaotic motions. Experimental results show that the algorithm compares favorably to other simple error concealment algorithms",2006,0, 6048,An evaluation of the MCSA method when applied to detect faults in motor driven loads,"In this paper, an evaluation of the MCSA method when applied to detect faults in motor driven loads is presented. The evaluation is based on transfer functions obtained from a small-signal model of the induction motor. The inherent behavior of the induction motor as a torque transducer is explored to explain some of the characteristic features of its application to monitoring the driven load connected to its shaft. The influence of the electrical and mechanical parameters on its performance as a load fault detector is investigated. Mathematical relationships among these parameters, which allow the nature of the frequency response to be established, are obtained. It is shown that, although MCSA (Motor Current Signature Analysis) is a simple and effective method to detect faults on the driven load, care must be taken in its usage. This is due to the non-ideal transducer behavior of the induction motor.
Simulation and experimental results validating the studies are presented.",2010,0, 6049,Minimum Zone Evaluation of Sphericity Error Based on Ant Colony Algorithm,"In this paper, based on the analysis of existing evaluation methods for sphericity errors, an intelligent evaluation method is provided. The evolutional optimum model and the calculation process are introduced in detail. According to characteristics of sphericity error evaluation, the ant colony optimization (ACO) algorithm is proposed to evaluate the minimum zone error. Compared with conventional optimum evaluation methods such as simplex search and the Powell method, it can find the global optimal solution, and the precision of the calculated result is very high. Then, the objective function calculation approaches for using the ACO algorithm to evaluate minimum zone error are formulated. Finally, the control experiment results evaluated by different methods such as the least square, simplex search, Powell optimum methods and GA, indicate that the proposed method can provide better accuracy on sphericity error evaluation, and it has fast convergent speed as well as using computer expediently and popularizing application easily.",2007,0, 6050,Synchronization Probabilities using Conventional and MVDR Beam Forming with DOA Errors,"In this paper, code synchronization probabilities of the direct sequence spread spectrum (DS/SS) system are investigated when the receiver utilizes an adaptive antenna array. Performance studies of three beam forming algorithms in the presence of direction-of-arrival (DOA) errors are presented. The investigated algorithms are the conventional, the minimum variance distortionless response (MVDR), and the MVDR+ADL where the MVDR algorithm is enhanced against DOA error via the adaptive diagonal loading (ADL). The paper includes a large number of analytical and simulation results where the effects of DOA errors are investigated. It can be concluded that the MVDR beam former is more sensitive to DOA errors than the conventional beam former, especially at large DOA errors and high signal-to-noise-ratio (SNR) values. However, this sensitivity can be reduced notably by using the ADL.",2007,0, 6051,A decomposition method for analog fault location,"In this paper, fault location in large analog networks by the decomposition method is generalized to include subnetworks not explicitly testable. Assume that the network topology and nominal values of network components are known and the network-under-test is partitioned into subnetworks once and for all. The decomposition nodes could be either the accessible nodes whose nodal voltages can be measured or the inaccessible nodes whose nodal voltages under faulty condition can be computed by a new method proposed in this paper. The new method reduces the test requirements for the number of accessible nodes and increases the flexibility of decomposition. Location of faulty subnetworks and subsequent location of faulty components are implemented based on checking consistency of the KCL equations for the decomposition nodes and using ambiguity group location techniques. This method can be applied to linear or non-linear networks, and is particularly effective for the large scale analog networks.
An example circuit is provided to illustrate the efficiency of the proposed method.",2002,0, 6052,Efficient residual prediction with error concealment in extended spatial scalability,"In this paper, inter-layer residual prediction's trade-off between concealing lost macroblocks (MB) and removing visual artifacts in extended spatial scalability (ESS) is investigated. In order to reduce visual artifacts, various schemes are proposed in the literature, including higher distortion to make residual prediction less preferable in the ESS enhancement layer for non matching residuals. Whereas, residual prediction is also used to conceal lost MBs in inter-layer error concealment methods. We propose an efficient use of residual prediction to prevent those artifacts as well as to conceal lost MBs, exploiting features in homogenous characteristics in video objects for error resilient coding. Simulation results show that the proposed scheme achieves up to 0.29 dB PSNR gain, with an overall average of 0.1 dB PSNR gain at the decoder for various tested sequences compared to JVT-W123 under testing conditions specified in JVT-V302.",2010,0, 6053,A precise fault locator algorithm with a novel realization for MV distribution feeders,"In this paper, a novel and practical fault locator for radial distribution feeders is presented. The existing numerical relays of the MV panels are universally arranged in order to realize the proposed fault locator. This arrangement has been validated using an experimental set-up. The fault locator algorithm is carefully developed, modified, and intensively tested using the electromagnetic transient program (EMTP) for both phase and earth faults. Error correction of the computed fault distance via either rational or intelligent techniques is outlined. The results reveal a distinguished precision of the proposed fault locator",2006,0, 6054,Defects Detection in Web Inspection Based on DM642 DSP,"In this paper, a novel approach for defects detection in web inspection based on the TMS320DM642 DSP development board is proposed. The whole commercial web inspection system can be divided into three major function units: image capture, real-time processing of line-scanned images and the networking transmission. An innovative line-analysis based threshold detection algorithm is presented. The experimental results corroborate that this real-time method implements efficiently in the limited resources on a single DSP with high detection accuracy and robustness to the linear grayscale change. Through the analysis, we can see the total processing ability of this algorithm finally achieves 22.51 M/s, which is nearly five to ten times as large as the other algorithms such as SOM based on statistics running on a general processor without any obvious decline of the detection accuracy.",2009,0, 6055,Design of compact planar three-way dual-band power divider using defected ground structure,"In this paper, a novel compact planar three-way dual-band power divider using defected ground structure (DGS) is proposed. Based on this method, the power divider is reduced to half of its original size. And it achieves good performance. To verify the validity, a 915/2400 MHz power divider is fabricated, which could be applied in radio frequency identification (RFID) system.
The experimental results show good agreement with the simulated responses.",2010,0, 6056,A design of the novel coupled line bandpass filter using defected ground structure,"In this paper, a novel coupled line bandpass filter with a DGS (Defected Ground Structure) is proposed to realize a compact size with low insertion loss characteristic. The proposed bandpass filter can provide an attenuation pole due to the resonance characteristic of the DGS. The equivalent circuit parameters for the DGS are extracted by using an EM simulation process and the circuit analysis method. The design method for the proposed 3-pole bandpass filter is derived based on coupled line filter theory and the derived equivalent circuit of the DGS. The experimental results show an excellent agreement with theoretical simulation results.",2000,0, 6057,A hybrid method for whole-frame error concealment,"In this paper, a novel hybrid method for whole-frame error concealment is proposed. The proposed method effectively combines results of various EC methods using the Kalman filter algorithm. Experimental results demonstrate that the proposed method improves PSNR performance with acceptable additional bits.",2009,0, 6058,Bayesian Network Based Fault Section Estimation in Power Systems,"In this paper, a novel method for fault section estimation in power systems based on Bayesian network is presented. The main contributions of this paper include the following two aspects. One is that the fault diagnosis models based on Bayesian network are proposed, which are converted from the logic relationship among section fault, protective relay operation and circuit breaker trip. This method is very simple, but can perfectly deal with the uncertain information existing in power system fault diagnosis. Another is that the method is developed for creating every section's diagnosis network automatically, thus the fault diagnosis can be fulfilled in a very short time for large-scale power system and can be implemented online. Diagnostic results of instance show that the proposed method is efficient and correct, and is very suitable for complex fault diagnosis problems, especially for the multiple-section fault cases and for the cases where protective relays and circuit breakers malfunction",2006,0, 6059,A novel soft-switching boost power factor correction converter with an active snubber,"In this paper, a novel soft switching transition technique in a power factor correction (PFC) 1-φ AC-DC converter operating in a continuous conduction mode (CCM) with fixed switching frequency is presented. All of the semiconductor devices of the proposed converter are turned on and off under exact or near zero voltage switching and/or zero current switching. No additional voltage and current stresses on the main switch and main diode occur. The operating modes and analysis for the proposed converter are explained. To evaluate the performance of the suggested converter, simulation results for a 400V, 500W 1-phase AC-DC converter are presented. The simulated results confirm that the converter operates at almost unity power factor with reduced switching losses in the proposed converter. The output voltage is regulated without affecting zero voltage switching even under step change in input voltage.",2010,0, 6060,Safety-reliability of distributed embedded system fault tolerant units,"In this paper we compare the relative performance of two fault tolerant mechanisms dealing with repairable and non-repairable components that have failed.
The relative improvement in the reliability and safety of a system with repairable components is calculated with respect to the corresponding system where the components are not repairable. The fault tolerant systems under study correspond to a flexible arrangement of fault tolerant units (FTU's) suitable for dependable distributed embedded systems. A simple simulation-based methodology to numerically evaluate dependability functions of a wide variety of fault tolerant units is used. The method is based on simulation of stochastic Petri nets. A set of 15 FTU configurations belonging to five groups is analysed. The methodology allows a quick and accurate evaluation of dependability functions of any distributed embedded system design in terms of the type of FTU (i.e., node or application), replicas per group, replicas per FTU, with or without repair functionality, and shared replicas.",2003,0, 6061,Minimum-error quantum state discrimination based on semidefinite programming,"In this paper we consider the general problem of designing the optimal quantum detection for minimum-error quantum state discrimination. Since it is hard to get an explicit solution, we must resort to numerical methods in the general case. We show the problem can be cast into a semidefinite programming (SDP) problem, and we use the theory of SDP to solve it. Based on this idea, we propose a method for minimum-error discrimination by formulating the design of optimal quantum detection as a standard SDP problem; furthermore, based on the optimality conditions for SDP, we derive a set of optimality conditions for quantum detection. It is exemplified that this method is simple to implement and can be used to design optimum detection operators for minimum-error discrimination efficiently.",2008,0, 6062,A Fault Tolerant Optimization Algorithm based on Evolutionary Computation,"In this paper we describe how an evolutionary algorithm is capable of running on a distributed environment with volatile resources. When executing algorithms in a desktop computing or resource harvesting context, resources can be reclaimed by their owners without warning, which may produce data loss and processes to fail. The interest of the algorithm presented in the paper is that although it doesn't keep processes from failing, or data from being lost, it does improve the quality of results because of its design, not employing any special task control, checkpoint/restart or resource redundancies. By means of a series of experiments, we test the performance of the algorithm by studying the number of processes failing and the quality of solutions when compared with the classic flavor of the evolutionary algorithm. The new algorithm, which shows its advantages, therefore improves dependability of distributed systems",2006,0, 6063,Design of compact microstrip lowpass filters using coupled half-circle defected ground structures (DGSs),"In this paper we have proposed new half-circle resonators and investigated different coupling effects in order to realize a compact structure. The use of DGS in the form of a half-circle shape improves the stop band characteristics. The new DGS form allows a compact slot with a double coupling, thus a controlled resonance pole. The new filter has been designed, fabricated and measured.
Good agreement between simulated and measured results has been achieved.",2008,0, 6064,Online Task-Scheduling for Fault-Tolerant Low-Energy Real-Time Systems,"In this paper we investigate fault tolerance and dynamic voltage scaling (DVS) in hard real time systems. We present two low-complexity fault-aware scheduling algorithms that combine feasibility analysis of rate monotonic algorithm (RMA) schedules and DVS-based frequency scaling using exact characterization of the RMA algorithm. These algorithms lay the foundation for highly efficient online schemes that minimize energy consumption by adapting DVS policies to the runtime behavior of tasks and fault occurrences without violating the offline feasibility analysis. Simulation results demonstrate energy savings of up to 60% over low-energy offline scheduling algorithms (Zhang and Chakrabarty, 2004)",2006,0, 6065,A Fault-Tolerant Attitude Determination System Based on COTS Devices,"In this paper we present a low cost fault-tolerant attitude determination system for a scientific satellite using COTS devices. We relate our experience in developing the attitude determination system, where we combine proven fault tolerance techniques to protect the whole system composed only of COTS from the effects produced by transient faults. We detail the failure cases and the detection, reconfiguration and recovery schemes that assure the fault-tolerant condition. A testbed system was used to inject faults, evaluate the recovery capability of the fault-tolerant system and validate the solution proposed.",2008,0, 6066,A fast algorithm to reduce 2-dimensional assignment problems to 1-dimensional assignment problems for FPGA-based fault simulation,"In this paper a polynomial time heuristic algorithm developed for the assignment optimization problem is introduced, which leads to an improved usage of field programmable gate array (FPGA) resources for hardware-based fault injection using an FPGA-based logic emulator. Logic emulation represents a new method of design validation utilizing a reprogrammable prototype of a digital circuit. In the past years various approaches to hardware-based fault injection using a hardware logic emulator have been presented. Some approaches insert additional functions at the fault location, while others utilize the reconfigurability of FPGAs. A common feature of each of these methods is the execution of hardware-based fault simulation using the stuck-at fault model at gate level.",2003,0, 6067,Application of methods of 3D surface reconstruction for characterization of pitting defects,In this paper a possibility of application of two different methods of pitting visualization is discussed.,2009,0, 6068,Experimental studies on faults detection using residual generator,"In this paper we develop a fault detection and localization method using residual vectors, in order to emphasize the noises, disturbances and faults on the outputs L1 and L2 of the level control plant with two coupled tanks Quanser Water Level Control Two Tank Module. The proposed method was theoretically developed and experimentally verified in this plant and allowed detection and localization of two faults created in a real plant. The experiments presented were realized using the Matlab Simulink program.",2010,0, 6069,LabVIEW based implementation of remedial action for DC arcing faults in a spacecraft,"In this paper remedial action for DC arcing faults in spacecraft has been designed and implemented using LabVIEW.
LabVIEW is an innovative graphical programming system designed to facilitate computer controlled data acquisition and analysis. Remedial action for DC arcing faults in spacecraft has been designed and implemented using the experimental data obtained at NASA Glenn Research Center. It is important to keep the continuity of the power supply and at the same time increase the reliability of the spacecraft energy power system. In the frequency domain, fast Fourier transformation (FFT) is used for the feature extraction of the fault signal and odd harmonics frequency components of the phase currents are analyzed.",2003,0, 6070,Architectural plan for constructing fault tolerable workflow engines based on grid service,"In this paper the design and implementation of a fault tolerable architecture for scientific workflow engines is presented. The engines are assumed to be implemented as composite web services. Current architectures for workflow engines do not make any considerations for substituting faulty web services with correct ones at run time. The difficulty is to roll back the execution state of the workflow engine to its state before the invocation of the faulty web service. To achieve this, three components for fault diagnosis, recording the execution state of the workflow and substitution of faulty web services, at run time, are considered in our proposed architecture. The applicability of the proposed architecture is practically evaluated by applying it to design three different scientific workflow engines.",2010,0, 6071,Selective detection of simple grounding faults in medium voltage power networks with resonant earthed neutral system,"In this paper the influence of the medium voltage lines insulation ageing on some network parameters, namely the capacitive zero-sequence impedance of the network, the zero-sequence impedance of the grounding and arc-suppression coils taken together, as well as the equivalent zero-sequence impedance as seen from the grounding fault location, is analyzed. As a result of the analysis useful criteria to selectively detect faulty lines during a single grounding fault are formulated. These criteria are implemented by a digital protection block BYCD - 2. Measurements done in a real medium voltage network, during purposely provoked faults, validate the concepts the block is based on.",2010,0, 6072,Power factor correction and efficiency investigation of AC-DC converters using forced commutation techniques,"In this paper the power factor and the efficiency of a suggested AC-DC converter topology are studied via Matlab/Simulink simulation. This converter topology consists of four MOSFET elements in bridge form and: a) two antiparallel IGBT elements between the bridge and the AC grid, b) one MOSFET element between the bridge and the DC load. These switching elements control the conduction time intervals of the bridge by a hysteresis current controller in order to achieve an AC current waveform in phase with the AC voltage as well as a very low content of higher harmonics. This way the values of the power factor and the efficiency become very high (e.g. 0.98...0.99).",2005,0, 6073,Design of fuzzy feed-forward decoupling controller based on error,"In this paper, a kind of multi-channel fuzzy feed-forward decoupling controller based on error is designed by adopting a kind of fuzzy feed-forward decoupling algorithm. The paper introduces the decoupling algorithm first, then designs the decoupling circuit, and on this basis analyzes the simulation and experimental results.
With the smaller size, it can be embedded in other control systems conveniently, and good effect has been attained in the experiments.",2008,0, 6074,Using RBF Neural Network for Fault Diagnosis in Satellite ADS,"In this paper, a new hybrid learning strategy composed of the K-means clustering algorithm and Kalman filtering is employed to train a radial basis function (RBF) neural network for fault diagnosis in a satellite attitude determination system. Because Kalman filtering and the K-means clustering algorithm both adopt a linear update rule, their combination produces a new hybrid training algorithm that can converge quickly. Simulation results demonstrate that the proposed approach is effective for fault diagnosis in satellite attitude determination systems.",2007,0, 6075,The Application of the Fuzzy Neural Network-Wavelet Singularity Detection in Mechanical Fault Diagnosis of High Voltage Breakers,"In this paper, a new method combining wavelet singularity detection and a fuzzy neural network (NN) to process the mechanical vibration signal of the HV breaker is introduced, called the fuzzy singularity NN. In this new method, the fuzzified wavelet singularity exponent combined with a multilayer feedforward NN is applied to mechanical fault diagnosis of the HV breaker. The experimental results show that this method can achieve a better effect than wavelet singularity detection alone and improves the accuracy and precision of diagnosis",2005,0, 6076,Mobile Filter: Exploring Filter Migration for Error-Bounded Continuous Sensor Data Collection,"In wireless sensor networks, filters that suppress data update reports within predefined error bounds effectively reduce the traffic volume for continuous data collection. All prior filter designs, however, are stationary in the sense that each filter is attached to a specific sensor node and remains stationary over its lifetime. In this paper, we propose a mobile filter, i.e., a novel design that explores the migration of filters to maximize overall traffic reduction. A mobile filter moves upstream along the data-collection path, with its residual size being updated according to the collected data. Intuitively, this migration extracts and relays unused filters, leading to more proactive suppression of update reports. While extra communications are needed to move filters, we show through probabilistic analysis that the overhead is outrun by the gain from suppressing more data updates. We present an optimal filter-migration algorithm for a chain topology. The algorithm is then extended to general multichain and tree topologies. Extensive simulations demonstrate that, for both synthetic and real data traces, the mobile filtering scheme significantly reduces data traffic and extends network lifetime against a state-of-the-art stationary filtering scheme. Such results are also observed from experiments over a Mica-2 sensor network testbed.",2010,0, 6077,Researches on the suitability of switched reluctance machines and permanent magnet machines for specific aerospace applications demanding fault tolerance,"In aeronautical as well as in automotive environments more and more hydraulic auxiliary drives are to be replaced by electrical drives. Applications that are of vital importance for keeping up the operability of the whole system (e.g. movement of the rudders of an aircraft or electrical steering in automobiles) need to be actuated by fault tolerant drives.
As switched reluctance machines (SRM) generally offer a very simple and robust design, they are very suitable for highly reliable and fault tolerant applications. Permanent magnet machines (PM-machines) on the other hand are known to offer a far higher power density and thus, especially in an aircraft environment, offer the possibility to limit the overall weight and size of the system. This paper compares and evaluates the suitability of SRM and PM-machines for a specific aerospace application",2006,0, 6078,Finding concurrency bugs with context-aware communication graphs,"Incorrect thread synchronization often leads to concurrency bugs that manifest nondeterministically and are difficult to detect and fix. Past work on detecting concurrency bugs has addressed the general problem in an ad-hoc fashion, focusing mostly on data races and atomicity violations. Using graphs to represent a multithreaded program execution is very natural: nodes represent static instructions and edges represent communication via shared memory. In this paper we make the fundamental observation that such basic context-oblivious graphs do not encode enough information to enable accurate bug detection. We propose context-aware communication graphs, a new kind of communication graph that encodes global ordering information by embedding communication contexts. We then build Bugaboo, a simple and generic framework that accurately detects complex concurrency bugs. Our framework collects communication graphs from multiple executions and uses invariant-based techniques to detect anomalies in the graphs. We built two versions of Bugaboo: BB-SW, which is fully implemented in software but suffers from significant slowdowns; and BB-HW, which relies on custom architecture support but has negligible performance degradation. BB-HW requires modest extensions to a commodity multicore processor and can be used in deployment settings. We evaluate both versions using applications such as MySQL, Apache, PARSEC, and several others. Our results show that Bugaboo identifies a wide variety of concurrency bugs, including challenging multivariable bugs, with few (often zero) unnecessary code inspections.",2009,0, 6079,Detection of Turn to Turn Faults in Stator Winding with Axial Magnetic Flux in Induction Motors,"Induction motors play a very important part in the safe and efficient running of any industrial plant. Early detection of abnormalities in the motor helps to avoid costly breakdowns. Accordingly, this work presents a technique for the diagnosis of turn to turn faults in the stator winding of induction motors, with use of axial magnetic flux. Axial leakage flux generates voltages in the flux coil sensor and their time based waveforms and frequency presentations are subsequently analyzed. Turn to turn failures in stator windings are studied under load and no-load conditions. Experimental results prove the efficiency of the employed method.",2007,0, 6080,Fault detection in IP-based process control networks using data mining,"Industrial process control IP networks support communications between process control applications and devices. Communication faults in any stage of these control networks can cause delays or even shutdown of the entire manufacturing process. The current process of detecting and diagnosing communication faults is mostly manual, cumbersome, and inefficient. Detecting early symptoms of potential problems is very important but automated solutions do not yet exist.
Our research goal is to automate the process of detecting and diagnosing the communication faults as well as to prevent problems by detecting early symptoms of potential problems. To achieve our goal, we have first investigated real-world fault cases and summarized control network failures. We have also defined network metrics and their alarm conditions to detect early symptoms of communication failures between process control servers and devices. In particular, we leverage data mining techniques to train the system to learn the rules of network faults in control networks, and our testing results show that these rules are very effective. In our earlier work, we presented a design of a process control network monitoring and fault diagnosis system. In this paper, we focus on how the fault detection part of this system can be improved using data mining techniques.",2009,0, 6081,Vibration Fault Diagnosis of Steam Turbine Shafting Based on Probability Neural Networks,"Information entropy is an effective description of the uncertainty of a system, and can be used as the symptom to detect the vibration changes of a steam turbine. Based on the faulty signals collected from a rotor test rig, three information entropies: singular spectrum entropy, power spectrum entropy, and wavelet energy spectrum entropy were calculated as information entropy data. Probability neural networks (PNNs) were explored to fuse the three information entropies. Research shows that with the advantages of the Bayes classifier and neural networks, PNNs have good classification ability for typical vibration faults of a turbine; the classification accuracy is 100% for training data and 80% for unseen data. Compared with the classification accuracy of the minimum distance classifier (MDC) and improved MDC, PNNs have higher classification accuracy. It can be deduced that PNNs are a practical fusion diagnosis method for typical fault identification of turbine rotors.",2008,0, 6082,Robust image-adaptive data hiding using erasure and error correction,"Information-theoretic analyses for data hiding prescribe embedding the hidden data in the choice of quantizer for the host data. We propose practical realizations of this prescription for data hiding in images, with a view to hiding large volumes of data with low perceptual degradation. The hidden data can be recovered reliably under attacks, such as compression and limited amounts of image tampering and image resizing. The three main findings are as follows. 1) In order to limit perceivable distortion while hiding large amounts of data, hiding schemes must use image-adaptive criteria in addition to statistical criteria based on information theory. 2) The use of local criteria to choose where to hide data can potentially cause desynchronization of the encoder and decoder. This synchronization problem is solved by the use of powerful, but simple-to-implement, erasures and errors correcting codes, which also provide robustness against a variety of attacks. 3) For simplicity, scalar quantization-based hiding is employed, even though information-theoretic guidelines prescribe vector quantization-based methods.
However, an information-theoretic analysis for an idealized model is provided to show that scalar quantization-based hiding incurs approximately only a 2-dB penalty in terms of resilience to attack.",2004,0, 6083,Investigating the defect detection effectiveness and cost benefit of nominal inspection teams,"Inspection is an effective but also expensive quality assurance activity to find defects early during software development. The defect detection process, team size, and staff hours invested can have a considerable impact on the defect detection effectiveness and cost-benefit of an inspection. In this paper, we use empirical data and a probabilistic model to estimate this impact for nominal (noncommunicating) inspection teams in an experiment context. Further, the analysis investigates how cutting off the inspection after a certain time frame would influence inspection performance. Main findings of the investigation are: 1) Using combinations of different reading techniques in a team is considerably more effective than using the best single technique only (regardless of the observed level of effort). 2) For optimizing the inspection performance, determining the optimal process mix in a team is more important than adding an inspector (above a certain team size) in our model. 3) A high level of defect detection effectiveness is much more costly to achieve than a moderate level since the average cost for the defects found by the inspector last added to a team increases more than linearly with growing effort investment. The work provides an initial baseline of inspection performance with regard to process diversity and effort in inspection teams. We encourage further studies on the topic of time usage with defect detection techniques and its effect on inspection effectiveness in a variety of inspection contexts to support inspection planning with limited resources.",2003,0, 6084,Dynamic fault-tolerant design for array processors based on immunology,"Inspired by the biological immunology principle and using its properties of autonomy, learning and memory for reference, this paper focuses on a fault-tolerant design method for array processors which has real-time detection and dynamic configuration capabilities, to improve the chip's reliability by ensuring that when a fault occurs in one or more processor elements the chip can still work normally. This paper discusses the heterogeneous features of array processors and the structure of the resource node and communication node. The fault-tolerance strategy and the immune response mechanism of the array processor are researched to achieve the real-time immune process of perception, training, response and feedback. This paper focuses on researching the structure and the algorithm of a router switch unit with 90 nm technology which has the dynamic fault-tolerant function. It has guiding significance for the R & D of array processors in industry.",2010,0, 6085,Multi-Layer Immune Model for Fault Diagnosis,"Inspired by the multi-layer defense mechanism and incorporating the feedback mechanism of the natural immune system, this paper proposes a multi-layer immune model for fault diagnosis. In the multi-layer model, the inherent immune layer directly recognizes known faults that do not influence other nodes; the propagation immune layer adopts the structure of the B-lymphocyte network to construct the fault propagation network for fault localization; the adaptive immune layer learns unknown fault patterns.
Simulation results show that the multi-layer immune diagnosis system has the properties of recognition, learning and memory.",2008,0, 6086,A study of partial discharge pattern analysis using artificially defected epoxy resin bushing in the ring voltage detecting sensor,"In this thesis, three kinds of defects are artificially made on the inside and the surface of the epoxy resin bushing with internal voltage detection sensor, which is practically equipped into SF6 gas insulated switchgear for a 22.9 kV distribution line, so that discrimination of three kinds of PD signals by means of the φ-q-n pattern analysis method was performed",2000,0, 6087,Defect localization using physical design and electrical test information,In this work we describe an approach of using physical design and test failure knowledge to localize defects in random logic. We term this approach computer-aided fault to defect mapping (CAFDM). An integrated tool has been developed on top of an existing commercial ATPG tool. CAFDM was able to correctly identify the defect location and layer in all 9 of the chips that had bridging faults injected via FIB. Preliminary failure analysis results on production defects are promising,2000,0, 6088,An Efficient Secure Shared Storage Service with Fault and Investigative Disruption Tolerance,"In this work we focus on solutions to an emerging threat to cloud-based services namely that of data seizures within a shared multiple customer architecture. We focus on the problem of securing distributed data storage in a cloud computing environment by designing a specialized multi-tenant data-storage architecture. The architecture we present not only provides high degrees of availability and confidentiality of customer data but is also able to offer these properties even after seizures of various parts of the infrastructure have been carried out through a judicial process. Our solution uses a novel way of storing customer data - combining the cryptographic scheme of secret sharing and combinatorial design theory, to ensure that the requirements of the architecture are met. Furthermore, we show that our proposed solution is efficient with respect to the amount of hardware infrastructure required, thus making the implementation and use of our proposed architecture cost-efficient for adoption by IT enterprises.",2010,0, 6089,Constructions of low-degree and error-correcting ε-biased generators,"In this work we give two new constructions of ε-biased generators. Our first construction answers an open question of Dodis and Smith (2005), and our second construction significantly extends a result of Mossel et al. (2003). In particular we obtain the following results: (1) We construct a family of asymptotically good binary codes such that the codes in our family are also ε-biased sets for an exponentially small ε. Our encoding and decoding algorithms run in polynomial time in the block length of the code. This answers an open question of Dodis and Smith (2005). (2) For every k = o(log n) we construct a degree k ε-biased generator G: {0,1}^m → {0,1}^n (namely, every output bit of the generator is a degree k polynomial in the input bits). For k constant we get that n = Ω(m/log(1/ε))^k, which is nearly optimal. Our result also separates degree k generators from generators in NC^0_k, showing that the stretch of the former can be much larger than the stretch of the latter. The problem of constructing degree k generators was introduced by Mossel et al.
(2003) who gave a construction only for the case of degree 2 generators",2006,0, 6090,Performance of watermarking as an error detection mechanism for corrupted H.264/AVC video sequences,"In this work we investigate the performance of watermarking as an error detection method for H.264/AVC encoded videos. The efficiency of a previously proposed forced even watermarking is evaluated in a more realistic error-prone transmission scenario. A less invasive watermarking scheme, the forced odd watermarking, is proposed as an alternative. In order to handle possible decoding desynchronization at the receiver, we implement a syntax check error detection mechanism together with watermarking and evaluate its performance.",2009,0, 6091,Limited view reconstruction of three-dimensional defect distribution for computed radiography system,In this work we present limited angle reconstruction algorithms combined with a procedure for defect detection in three dimensions. A computed radiography system is utilized for radiographic image acquisition. Image quality of the reconstruction was investigated and results obtained for simulated radiographs are provided. Numerical evaluation as well as the images and the profile show improvement in contrast recovery of defects in the whole range of original contrast. This provides better input for defect detection algorithms due to better contrast between background and inclusion.,2009,0, 6092,A performance improvement and error floor avoidance technique for belief propagation decoding of LDPC codes,"In this work, we introduce a unique technique that improves the performance of BP decoding in the waterfall and error-floor regions by reversing the decoder failures. Based on the short cycles existing in the bipartite graph, an importance sampling simulation technique is used to identify the bit and check node combinations that are the dominant sources of error events, called trapping sets. Then, the identified trapping sets are used in the decoding process to avoid the pre-known failures and to converge to the transmitted codeword. With a minimal additional decoding complexity, the proposed technique is able to provide performance improvements for short-length LDPC codes and push or avoid error-floor behaviors of longer codes",2005,0, 6093,Using Hardware Performance Counters for Fault Localization,"In this work, we leverage hardware performance counters-collected data as abstraction mechanisms for program executions and use these abstractions to identify likely causes of failures. Our approach can be summarized as follows: Hardware counters-based data is collected from both successful and failed executions, the data collected from the successful executions is used to create normal behavior models of programs, and deviations from these models observed in failed executions are scored and reported as likely causes of failures. The results of our experiments conducted on three open source projects suggest that the proposed approach can effectively prioritize the space of likely causes of failures, which can in turn improve the turnaround time for defect fixes.",2010,0, 6094,FTDIS: A Fault Tolerant Dynamic Instruction Scheduling,"In this work, we target the robustness of a Tomasulo-type dynamic scheduling controller against the SEU fault model. The proposed fault-tolerant dynamic scheduling unit is named FTDIS, in which the critical control data of the scheduler is protected from being driven to an unwanted state using Triple Modular Redundancy and majority voting approaches.
Moreover, the feedbacks in the voters provide recovery capability for detected faults in the FTDIS, enabling both fault masking and recovery for the system. As the results of analytical evaluations demonstrate, the implemented FTDIS unit has over 99% fault detection coverage when fewer than 4 faults exist in critical bits. Furthermore, based on experiments, the FTDIS has a 200% hardware overhead compared to the primitive dynamic scheduling control unit and about a 50% overhead in comparison to a full CPU core. The proposed unit also has no performance penalty during simulation. In addition, the experiments show that FTDIS consumes 98% more power than the primitive unit.",2010,0, 6095,DuoTracker: Tool Support for Software Defect Data Collection and Analysis,"In today's software industry defect tracking tools either help to improve an organization's software development process or an individual's software development process. No defect tracking tool currently exists that helps both processes. In this paper we present DuoTracker, a tool that makes it possible to track and analyze software defects for organizational and individual software process decision making. To accomplish this, DuoTracker has capabilities to classify defects in a manner that makes analysis at both organizational and individual software processes meaningful. The benefit of this approach is that software engineers are able to see how their personal software process improvement impacts their organization and vice versa. This paper shows why software engineers need to keep track of their program defects, how this is currently done, and how DuoTracker offers a new way of keeping track of software errors. Furthermore, DuoTracker is compared to other tracking tools that enable software developers to record program defects that occur during their individual software processes.",2006,0, 6096,Single-switch power factor correction AC/DC converter with storage capacitor size reduction,"In universal line applications with a hold-up time requirement, the single-stage PFC AC/DC converters may not be more attractive than the conventional two-stage approach if the size and cost of the storage capacitor are too high. Furthermore, computer related applications, in which the hold-up time is a very important requirement, will have to comply with Class D limits of the low frequency harmonic regulation IEC 61000-3-2. Therefore, for these applications, a not very distorted line current will be required. In this paper, a new single-stage AC/DC converter suitable for universal line applications is proposed. The main difference from other solutions is the low voltage swing on the storage capacitor while the line varies within its universal range. This feature allows reducing the size and cost of the storage capacitor. Additional advantages of the proposed converter are topology simplicity (single-switch converter) and IEC 61000-3-2 Class D compliance. The experimental results confirm the above mentioned advantages.",2003,0, 6097,A New Fault-Information Model for Adaptive & Minimal Routing in 3-D Meshes,"In this paper, we rewrite the minimal-connected-component (MCC) model in 2-D meshes in a fully-distributed manner without using global information so that not only can the existence of a Manhattan-distance-path be ensured at the source, but also such a path can be formed by routing-decisions made at intermediate nodes along the path. We propose the MCC model in 3-D meshes, and extend the corresponding routing in 2-D meshes to 3-D meshes.
We consider the positions of the source & destination when the new faulty components are constructed. Specifically, all faulty nodes will be contained in some disjoint fault-components, and a healthy node will be included in a faulty component only if using it in the routing will definitely cause a non-minimal routing-path. A distributed process is provided to collect & distribute MCC information to a limited number of nodes along so-called boundaries. Moreover, a sufficient & necessary condition is provided for the existence of a Manhattan-distance-path in the presence of our faulty components. As a result, only the routing having a Manhattan-distance-path will be activated at the source, and its success can be guaranteed by using the information of boundaries in routing-decisions at the intermediate nodes. The results of our Monte-Carlo-estimate show substantial improvement of the new fault-information model in the percentage of successful Manhattan-routing conducted in 3-D meshes.",2008,0, 6098,Fault tolerant routing and broadcasting in de Bruijn networks,"In this paper, we study fault tolerant routing and broadcasting in interconnection networks based on the de Bruijn graph (dBG) for constructing large scale multiprocessor networks. Our paper presents a new approach to provide fault tolerant routing and broadcasting which have not been investigated. The proposed approach is based on the multi-level discrete set concept in order to find a fault free shortest path among the several paths provided. In the proposed fault tolerant broadcasting, we can achieve k (network diameter) as the maximum time step to finish the broadcast process and there is no overhead in the broadcast message.",2005,0, 6099,Incorporating Training Errors for Large Margin HMMS Under Semi-Definite Programming Framework,"In this paper, we study how to incorporate training errors in large margin estimation (LME) under the semi-definite programming (SDP) framework. Like soft-margin SVM, we propose to optimize a new objective function which linearly combines the minimum margin among positive tokens and an average error function of all negative tokens. The new method is named soft-LME. It is shown that the new soft-LME problem can still be converted into an SDP problem if we properly define the average error function of all negative tokens based on their discriminative functions. Some preliminary results on TIDIGITS show that the soft-LME/SDP method yields modest performance gain when training error rates are significant. Moreover, it is also shown that soft-LME/SDP can achieve much faster convergence for all cases which we have investigated.",2007,0, 6100,The fault coverage estimation for protocol conformance testing,"In this paper, we study the fault coverage of a transition tour of an extended finite state machine (Extended FSM). We consider single transition, output and assignment faults. The fault coverage is calculated based on the equivalent classical FSM.",2004,0, 6101,Nonlinear Amplification Effects on OFDM Error Rate Performance in Fading Environment,"In this paper, we study the impact of nonlinear amplification on the average error rate of OFDM in Nakagami-m fading. We consider the conventional receiver (separate detection of each subcarrier) without any countermeasures against the nonlinearity to assess the pure joint effect of fading and nonlinear amplification.
We present an approximate technique that allows us to evaluate the average error rate in Nakagami-m fading in closed form.",2009,0, 6102,Fault-Tolerant Distributed Computing in Full-Information Networks,"In this paper, we use random-selection protocols in the full-information model to solve classical problems in distributed computing. Our main results are the following: An O(log n)-round randomized Byzantine agreement (BA) protocol in a synchronous full-information network tolerating t < n/(3+ε) faulty players (for any constant ε > 0). As such, our protocol is asymptotically optimal in terms of fault-tolerance. An O(1)-round randomized BA protocol in a synchronous full-information network tolerating t = O(n/((log n)^1.58)) faulty players. A compiler that converts any randomized protocol Π_in designed to tolerate t fail-stop faults, where the source of randomness of Π_in is an SV-source, into a protocol Π_out that tolerates min(t, n/3) Byzantine faults. If the round-complexity of Π_in is r, that of Π_out is O(r log* n). Central to our results is the development of a new tool, ""audited protocols"". Informally ""auditing"" is a transformation that converts any protocol that assumes built-in broadcast channels into one that achieves a slightly weaker guarantee, without assuming broadcast channels. We regard this as a tool of independent interest, which could potentially find applications in the design of simple and modular randomized distributed algorithms",2006,0, 6103,"Towards quantitative SPECT: Error estimation of SPECT OSEM with 3D resolution recovery, attenuation correction and scatter correction","In this study we systematically investigate biases relevant to quantitative SPECT if OSEM with isotropic (3D) depth dependent resolution recovery (OSEM-3D), attenuation and scatter correction is used. We focus on the dependencies of activity estimation errors on the projection operator, structure size, pixel size, count density and reconstruction parameters. We use Tc-99m to establish a baseline. Four Siemens low energy collimators (Low Energy Ultra High Resolution, Low Energy High Resolution, Low Energy All Purpose, Low Energy High Sensitivity) with geometric resolution between 4.4 mm and 13.1 mm at 10 cm distance and sensitivity between 100 cpm/μCi and 1020 cpm/μCi are tested with simulations of spheres with diameters between 9.8 mm and 168 mm in background. Pixel sizes and total counts are varied between 2.4 mm and 9.6 mm and 0.125 and 32 million counts. Images are reconstructed with OSEM-3D (Flash3D) with attenuation and scatter correction. Emission recovery is quantitatively measured for different reconstruction parameter settings. In addition, physical measurements of standard quality control phantoms are performed using an actual SPECT/CT system (Symbia T6). Cross calibration of the imaging system with a well counter and results from simulations are used to quantitatively estimate the true activity concentration in the physical phantoms. Results show variations of emission recovery between 13.8% and 104.5% depending on sphere volume and number of OSEM-3D updates. After correction for the emission recovery errors and cross calibration of the imaging system the errors in absolute quantitation using the physical sphere phantom are between +0.01±0.61% for the largest (16 ml) and 5.87±1.00% for the smallest (0.5 ml) sphere. As a conclusion, the emission recovery varies over a wide range and is highly dependent on imaging parameters when using OSEM-3D reconstruction.
Accurate quantitation in phantoms is possible given that errors at the specific imaging operation point can be estimated. In a clinical setup this is a nontrivial task, and perhaps too cumbersome for routine clinical use.",2008,0, 6104,"A Comparison of Cascading Horizontal and Vertical Menus with Overlapping and Traditional Designs in Terms of Effectiveness, Error Rate and User Satisfaction","In this study, effectiveness, efficiency and user satisfaction of different menu designs were investigated. 24 graduate students voluntarily participated in the study. The results indicate that horizontal menus are more effective than vertical menus in terms of selecting sub menu items, overall task completion time is not related to menu design, and the horizontal overlapping menu design is the most effective one in terms of preventing user errors. Lastly, user satisfaction doesn't vary according to menu designs.",2007,0, 6105,Transformer Fault Diagnosis Utilizing Rough Set and Support Vector Machine,"In this study, we are concerned with fault diagnosis of power transformers. The objective is to explore the use of some advanced techniques such as rough set (RS) and support vector machine (SVM) models and quantify their effectiveness when dealing with dissolved gases extracted from power transformers. In order to increase data quality and decrease scalability of input data, we utilize the strong ability of RS theory in processing large data and eliminating redundant information, and SVM is performed to separate various fault types of power transformers. Simulation results verifying the effectiveness show that the proposed method achieves better classification results than an artificial neural network (ANN).",2009,0, 6106,Probability of error analysis of BPSK OFDM systems with random residual frequency offset,"In this paper, we derive closed form bit error rate (BER) expressions for orthogonal frequency division multiplexing (OFDM) systems with residual carrier frequency offset (CFO). Most of the published work treats CFO as a nonrandom parameter, but in our study we consider it as a random parameter. The BER performance of the binary phase shift keying (BPSK) OFDM system is analyzed in the cases of additive white Gaussian noise (AWGN), frequency-flat and frequency-selective Rayleigh fading channels. We further discuss how these expressions can be related to systems with practical estimators. The simulation results are provided to verify the accuracy of these error rate expressions.",2009,0, 6107,An improved error concealment scheme for entire frame loss of H.264/AVC,"In this paper, we develop an improved error concealment scheme for entire frame loss of H.264/AVC by using variable-size block motion vector extrapolation. At first, the proposed algorithm extrapolates MVs from available neighboring frames onto the lost frame and determines the MV of each 8×8 block; for those blocks with dissimilar candidate MVs, they are divided into four sub-blocks and a re-estimation method is implemented on every sub-block. Then, a more complete motion vector field is obtained by using median filtering. At last, the blocks without extrapolated MVs are concealed by using a linear interpolation algorithm.
Experimental results show that this method can alleviate the shortcomings of the fixed-size method effectively and provide better performance than other concealment methods.",2009,0, 6108,The Application of Evidence Theory in the Field of Equipment Fault Diagnosis,"In this paper, we explain the fusion technology of information briefly, and discuss in detail the general course and the merging rule of equipment fault diagnosis using the D-S evidence theory. We give the relation between the basic probability assignment and the matrix by the location operation of the C language, and obtain the basic probability assignment by the Matlab software, which makes the matrix operation easier. We diagnose the fault of the voltage transformer using the D-S evidence theory",2006,0, 6109,A comparative exploration of FreeBSD bug lifetimes,"In this paper, we explore the viability of mining the basic data provided in bug repositories to predict bug lifetimes. We follow the method of Lucas D. Panjer as described in his paper, Predicting Eclipse Bug Lifetimes. However, in place of Eclipse data, the FreeBSD bug repository is used. We compare the predictive accuracy of five different classification algorithms applied to the two data sets. In addition, we propose future work on whether there is a more informative way of classifying bugs than is considered by current bug tracking systems.",2010,0, 6110,Multiple transient faults in logic: an issue for next generation ICs?,"In this paper, we first evaluate whether or not a multiple transient fault (multiple TF) generated by the hit of a single cosmic ray neutron can give rise to a bidirectional error at the circuit output (that is, an error in which all erroneous bits are 1s rather than 0s, or vice versa, within the same word, but not both). By means of electrical level simulations, we show that this can be the case. Then, we present a software tool that we have developed in order to evaluate the likelihood of occurrence of such bidirectional errors for very deep submicron (VDSM) ICs. The application of this tool to benchmark circuits has proven that such a probability cannot be neglected for several benchmark circuits. Finally, we evaluate the behavior of conventional self-checking circuits (generally designed accounting only for single TFs) with respect to such events. We show that the modifications generally introduced to their functional blocks in order to avoid output bidirectional errors due to single TFs (as required when an AUED code is implemented) can significantly reduce (up to 40%) also the probability of having bidirectional errors because of multiple TFs.",2005,0, 6111,A coupled factorial hidden Markov model (CFHMM) for diagnosing coupled faults,"In this paper, we formulate a coupled factorial hidden Markov model-based framework to diagnose dependent faults occurring over time. In our previous research, the problem of diagnosing dynamic multiple faults (DMFD) is solved by assuming that the faults are independent. Here, we extend this formulation to determine the most likely evolution of dependent fault states (an NP-hard problem), the one that best explains the observed test outcomes over time. An iterative Gauss-Seidel coordinate ascent optimization method along with the coupling assumptions (mixed memory Markov model) is proposed for solving the dynamic coupled fault diagnosis (DCFD) problem. A soft Viterbi algorithm is also implemented within the framework for decoding dependent fault states over time.
We demonstrate the algorithm on small-scale and real-world systems, and the simulation results show that this approach improves the correct isolation rate as compared to the formulation with independent fault states (DMFD).",2010,0,
6112,A Survey on Fault-Tolerance in Distributed Network Systems,"In this paper, we give a survey on fault-tolerance issues in distributed systems. More specifically, we discuss one important and basic component, failure detection, whose purpose is to detect process failures quickly and accurately. A good failure detection method thus avoids further system loss due to process crashes. This survey provides the related research results, explores future directions in failure detection, and is a good reference for researchers on this topic.",2009,0,
6113,Semantic translation error rate for evaluating translation systems,"In this paper, we introduce a new metric which we call the semantic translation error rate, or STER, for evaluating the performance of machine translation systems. STER is based on the previously published translation error rate (TER) (Snover et al., 2006) and METEOR (Banerjee and Lavie, 2005) metrics. Specifically, STER extends TER in two ways: first, by incorporating word equivalence measures (WordNet and Porter stemming) standardly used by METEOR, and second, by disallowing alignments of concept words to non-concept words (aka stop words). We show how these features make STER alignments better suited for human-driven analysis than standard TER. We also present experimental results that show that STER is better correlated with human judgments than TER. Finally, we compare STER to METEOR, and illustrate that METEOR scores computed using the STER alignments have similar statistical properties to METEOR scores computed using METEOR alignments.",2007,0,
6114,Statistical feature representations for automatic wood defects recognition research and applications,"In this paper, we introduce non-negative matrix factorization (NMF) to decompose wood images and structure the feature spaces. The local binary pattern (LBP) is used to extract the original spatial local structure features, such as curly edges, which have better luminance adaptability. Simultaneously, the dual-tree complex wavelet transform (DTCWT) is used to extract energy-based statistical features from different directions and frequencies, which maintain better time-frequency localized characteristics and finite data redundancy. We integrate the features together to choose the proper features to describe the discrepancies between sound wood and defects, and then propose an automatic detection system for wood defect recognition. After many cross experiments, we achieved an identification rate of more than 90%, with good research value and potential applications.",2009,0,
6115,Impact of fault management server and its failure-related parameters on high-availability communication systems,"In this paper, we investigate the impact of a fault management server and its failure-related parameters on high-availability communication systems. The key point is that, to achieve high overall availability of a communication system, the availability of the fault management server itself is not as important as its fail-safe ratio and fault coverage.
In other words, in building fault management servers, more attention should be paid to improving the server's ability to detect faults in functional units and its own isolation from the functional units under failure. Tradeoffs can be made between the availability of the fault management server, the fail-safe ratio and the fault coverage ratio to optimize system availability. A cost-effective design for the fault management server is proposed in this paper.",2002,0,
6116,Main defects of LC-based modulators,"In this paper, the main defects of LC-based modulators due to process variations are presented. The resonance frequency shift of the LC tank circuit, which was discussed in some publications, is shown to be one among many other possible defects that are discussed in this paper. The effect of each defect on the modulator output spectrum is shown and discussed. It is suggested that the information extracted from the output spectrum can be used to calibrate the modulator's main blocks.",2010,0,
6117,Delay-dependent robust fault detection for a class of nonlinear time-delay systems,"In this paper, the robust fault detection filter (RFDF) design problem is investigated for a class of nonlinear time-delay systems with unknown inputs. Firstly, a reference residual model is introduced to formulate the robust fault detection filter design problem as an H∞ optimization control problem. The reference residual model designed according to the performance index is an optimal residual generator, which takes into account the robustness against disturbances and sensitivity to faults simultaneously. Then, applying the robust H∞ optimization control technique, novel criteria for the robust fault detection filter for a class of nonlinear time-delay systems with unknown inputs are presented in terms of a linear matrix inequality (LMI), which depend on the size of the time delays. Finally, a numerical example is used to demonstrate the feasibility of our proposed method.",2008,0,
6118,Analysis and Application of Novel Microstrip Line With Combinatorial Non-Periodic Defected Ground Structure,"In this paper, two different shapes of defected ground structure (DGS) units are presented and their frequency characteristics are analyzed. Then a dual-frequency microstrip antenna integrated with these two different shapes of combinatorial non-periodic DGS is designed. Simulated and measured results, compared with a conventional dual-frequency microstrip antenna, show that high-order harmonics are suppressed and the port isolation is enhanced.",2008,0,
6119,Lazy garbage collection of recovery state for fault-tolerant distributed shared memory,"In this paper, we address the problem of garbage collection in a single-failure fault-tolerant home-based lazy release consistency (HLRC) distributed shared-memory (DSM) system based on independent checkpointing and logging. Our solution uses laziness in garbage collection and exploits consistency constraints of the HLRC memory model for low overhead and scalability. We prove safe bounds on the state that must be retained in the system to guarantee correct recovery after a failure. We devise two algorithms for garbage collection of checkpoints and logs, checkpoint garbage collection (CGC), and lazy log trimming (LLT). The proposed approach targets large-scale distributed shared-memory computing on local-area clusters of computers.
In such systems, using global synchronization or extra communication for garbage collection is inefficient or simply impractical due to system scale and temporary disconnections in communication. The challenge lies in controlling the size of the logs and the number of checkpoints without global synchronization while tolerating transient disruptions in communication. Our garbage collection scheme is completely distributed, does not force processes to synchronize, does not add extra messages to the base DSM protocol, and uses only the available DSM protocol information. Evaluation results for real applications show that it effectively bounds the number of past checkpoints to be retained and the size of the logs in stable storage.",2002,0,
6120,A Linear Programming Approach for Automated Localization of Multiple Faults,"In this paper, we address the problem of localizing faults by analyzing execution traces of successful and unsuccessful invocations of the application when run against a suite of tests. We present a new algorithm, based on a linear programming model, which is designed to be particularly effective for the case where multiple faults are present in the application under investigation. Through an extensive empirical study, we show that in the case of both single and multiple faults, our approach outperforms a host of prominent fault localization methods from the literature.",2009,0,
6121,Exact probability of error of ST-coded OFDM systems with frequency offset in flat Rayleigh fading channels,"In this paper, we analyze the error performance of single-input-single-output (SISO) and space-time coded (ST-coded) orthogonal frequency division multiplexing (OFDM) systems with carrier frequency offset (CFO) in frequency-flat Rayleigh fading channels. Exact analytical expressions are derived for the symbol error probability (SEP) for BPSK and QPSK modulation schemes. The analysis presented has the following key features: it explicitly accounts for (i) the non-Gaussian nature of the intercarrier-interference (ICI) noise when the total number of subcarriers is small, and (ii) the dependency of the post-FFT signal and the ICI noise terms due to subcarrier correlations. Thus, the proposed method provides accurate SEP formulae, overcoming the inaccuracies involved in Gaussian-approximation-based methods reported in the literature. The numerical results demonstrate the sensitivity of the receiver error performance to CFO in SISO-OFDM and ST-coded OFDM systems in flat Rayleigh fading channels. Also, the SEP results presented can be interpreted as the lower bound of error performance in a general frequency-selective Rayleigh fading channel.",2005,0,
6122,Unequal Error Protection Rateless Codes for Scalable Information Delivery in Mobile Networks,"In this paper, we are interested in the design and performance of unequal error protection (UEP) rateless codes. These codes have interesting properties in the deployment of mobile peer-to-peer (MP2P) and mobile broadcast systems. As our main contribution, we propose a new UEP rateless code structure that is composed of a bank of precoders and a common rateless code. Through the bank of precoders, information layers are first expanded into encoded layers with a diminishing coding rate as the priority of the layer increases. The outputs of the precoders are then concatenated and fed into a rateless encoder. The output of the rateless encoder naturally constitutes UEP encoding blocks carrying disproportionately more information on the higher-priority layers.
The decoder at the receiver end performs the inverse operations to recover each layer separately. We evaluate our proposed scheme by comparing it against alternative solutions. Results show that, over a broad class of rateless code structures, the proposed scheme can substantially reduce the time required to recover higher-priority layers while minimally impacting the time required to recover the lower-priority layers.",2007,0,
6123,Fault-tolerant group membership protocols using physical robot messengers,"In this paper, we consider a distributed system that consists of a group of teams of worker robots that rely on physical robot messengers for the communication between the teams. Unlike traditional distributed systems, there is a finite number of messengers in the system, and thus a team can send messages to other teams only when some messenger robot is available locally. It follows that careful management of the messengers is necessary to avoid the starvation of some teams. Concretely, the paper proposes algorithms to provide group membership and view synchrony among robot teams. We look at the problem in the face of failures, in particular when a certain number of messenger robots can possibly crash.",2005,0,
6124,Wiener Filter-Based Error Resilient Time-Domain Lapped Transform,"In this paper, the design of the error resilient time-domain lapped transform is formulated as a linear minimal mean-squared error problem. The optimal Wiener solution and several simplifications with different tradeoffs between complexity and performance are developed. We also prove the persymmetric structure of these Wiener filters. The existing mean reconstruction method is proven to be a special case of the proposed framework. Our method also includes as a special case the linear interpolation method used in DCT-based systems when there is no pre/postfiltering and when the quantization noise is ignored. The design criteria in our previous results are scrutinized and improved solutions are obtained. Various design examples and multiple description image coding experiments are reported to demonstrate the performance of the proposed method.",2007,0,
6125,A method for impact assessment of faults on the performance of field-oriented control drives: A first step to reliability modeling,"In this paper, the effects of certain component faults on the performance of three-phase inverter-fed induction motors are analyzed under indirect field-oriented control. Simulations of faults in the current sensors, speed encoder, three-phase inverter, and motor are presented. Sample hardware faults verifying the simulation results are also presented. Performance requirements are set based on typical performance measures of electric vehicles. These requirements are then used to determine whether or not the system survives a fault. A simple fault detection and isolation scheme using multiple speed encoders is also tested. The construction of a Markov reliability model of the overall system is a direct application of the results, and allows quantifying global reliability measures.",2010,0,
6126,Defect-oriented BIST quality analysis,"In this paper, the efficiency of the built-in self-test pseudorandom patterns generated by an LFSR is investigated and compared for different fault classes. In particular, the need for defect-oriented testing is taken into account.
Experimental research showed that the quality of BIST estimated using the SAF model can be too optimistic, and that extended fault classes for evaluating BIST and determining the acceptable length of BIST sequences are needed.",2010,0,
6127,Fault detection for NARMAX stochastic systems using entropy optimization principle,"In this paper, the fault detection (FD) problem is studied for a class of NARMAX models with non-Gaussian disturbances and faults, as well as a time delay. Since generally (extended) Kalman filtering approaches are insufficient to characterize the non-Gaussian variables, entropy is adopted to describe the uncertainty of the error system. After a filter is constructed to generate the detected error, the FD problem is reduced to an entropy optimization problem. The design objective is to maximize the entropies of the stochastic detection errors when the faults occur, and to minimize the entropies of the stochastic estimation errors resulting from other stochastic noises. To improve the FD performance, a multi-step-ahead predictive nonlinear cumulative cost function is adopted rather than the instantaneous performance index. Following the formulation of the probability density function of the stochastic error in terms of those of both the disturbances and the faults via a constructed mapping, new recursive approaches are established to calculate the entropies of the detection errors. Renyi's entropy has also been used to simplify the cost function. Finally, simulations are given to demonstrate the effectiveness of the proposed control algorithm.",2009,0,
6128,Fault analysis of current-controlled PWM-inverter fed induction-motor drives,"In this paper, the fault-tolerance capability of IM drives is studied. The discussion on the fault-tolerance of IM drives in the literature has mostly been on the conceptual level without any detailed analysis, and most studies have only been carried out experimentally. This paper provides an analytical tool to quickly analyze and predict the performance under fault conditions. Also, most of the presented results were machine-specific and not general enough to be applicable as an evaluation tool. So, this paper will present a generalized method for predicting the post-fault performance of IM drives after identifying the various faults that can occur. The fault analysis for the IM in the motoring mode will be presented in this paper. The paper includes an analysis for different classifications of drive faults. The faults in an IM drive that will be studied can be broadly classified as: machine faults (i.e., one of the stator windings open or shorted, multiple phases open or shorted, bearing faults, and broken rotor bars) and inverter-converter faults (i.e., a phase switch open or shorted, multiple phase faults, and DC-link voltage drop). Briefly, a general-purpose software package for a variety of IM-drive faults is introduced. This package is very important in IM fault diagnosis and detection using artificial intelligence techniques, wavelets and signal processing.",2003,0,
6129,Neural network fault prediction and its application,"In this paper, the forecasting algorithm employs a wavelet function to replace the sigmoid function in the hidden layer of a back-propagation neural network, and a wavelet neural network prediction model is established to predict the anode effect (the most typical fault) through forecasting the change rate of cell resistance. The authors have developed forecasting software based on the Visual Basic 6.0 platform.
The simulation results show that the proposed method not only greatly improves fault prediction precision and real-time performance, but also improves operational efficiency. This means we can increase the energy efficiency and safety of the aluminum production process.",2010,0,
6130,Towards Robust Voxel-Coloring: Handling Camera Calibration Errors and Partial Emptiness of Surface Voxels,"In this paper, we present two new methods to reduce the effects of camera calibration errors and partial emptiness of surface voxels on voxel-coloring. Both of these sources of error introduce outlier pixels in voxel projections in the input images and thus result in over-carving of the reconstructed 3D scene. The existing methods to handle these errors are either insufficient or too complex. Our proposed methods are simple and can be incorporated into existing voxel-coloring algorithms easily. Our experimental results show that the methods proposed in this paper have the ability to improve the results of existing algorithms.",2006,0,
6131,Fault Repair Framework for Mobile Sensor Networks,"In this paper, we propose a framework for fault repair in mobile sensor networks. A hierarchical structure consisting of a replacement module, a management policy module, a knowledge module, a decision-making module, and an evaluation module is adopted. We also propose a solution for the faulty sensor replacement problem. Through the numerical results, we show that our algorithm is more efficient and achieves higher energy savings than the greedy approach to sensor replacement. We believe that the problem of faulty sensor nodes can be solved efficiently through the cooperation and communication across different modules, such as evaluation, decision making, knowledge management, and replacement.",2006,0,
6132,Joint media-channel aware unequal error protection for wireless scalable video streaming,"In this paper, we propose a joint source-channel unequal error protection scheme for scalable video streaming over capacity-constrained high-speed packet access (HSPA) networks. Conventional link adaptation schemes in HSPA networks use the modulation and coding scheme (MCS) that achieves a preset channel frame error rate. Our scheme utilizes video priority information along with channel quality information to set the channel coding rate that maximizes the cumulative coding rate of channel coding and application-layer unequal error protection. Performance evaluations show that under the same constraints our scheme results in an average performance improvement of 0.5 dB in video PSNR for different channel conditions and different video sequences.",2008,0,
6133,Modeling and Analysis of Fault Detection Based on Time Petri Net,"In this paper, we propose a method for modeling and analysis of fault detection in real-time systems. This approach is based on the time Petri net (TPN) model and derives from the timing analysis of TPN models with guarded transitions. With the reachability analysis of each marking of the TPN model, test sequences of finite length are generated by computing the shortest path from the initial marking to each leaf node. Then we apply inputs in the proper time intervals and check whether the system satisfies the real-time requirements and the software functions according to the output results.
This approach is well illustrated by means of modeling and analysis of a safety-critical system: Interlock Logic System.",2010,0,
6134,Geographic routing in the presence of location errors,"In this paper, we propose a new geographic routing algorithm that alleviates the effect of location errors on routing in wireless ad hoc networks. In most previous work, geographic routing has been studied assuming perfect location information. However, in practice there could be significant errors in obtaining location estimates, even when nodes use GPS. Hence, existing geographic routing schemes will need to be appropriately modified. We investigate how such location errors affect the performance of geographic routing strategies. We incorporate location errors into our objective function by considering both transmission failures and backward progress. Each node then forwards packets to the node that maximizes this objective function. We call this strategy maximum expectation within transmission range (MER). Simulation results with MER show that accounting for location errors significantly improves the performance of geographic routing. We also show that MER is robust to the location error model and model parameters. Further, via simulations, we show that in a mobile environment MER performs better than existing approaches.",2005,0,
6135,Effect of the Delay Time in Fixing a Fault on Software Error Models,"In this paper, we propose a new model that incorporates both the fault-detection process and the fault-correction process. In addition, the fault-correction process is modeled as a delayed fault-detection process. Significant improvements over conventional software reliability growth models (SRGMs), to better describe actual software development, have been achieved by eliminating the unrealistic assumption that detected errors are immediately corrected. This can especially be seen when some latent software errors are hard to detect and remain in the software product for a long time even after they are detected. Therefore, the time delayed by the correction process is not negligible. The objective here is to remove this assumption in order to make the SRGMs more realistic and accurate. Finally, experiments on two real data sets have been performed, and the results show that the proposed new model performs much better in estimating the number of initial faults.",2007,0,
6136,Characterization and 3D correction of geometric distortion in low-field open-magnet MRI,"In this paper, we present a method to characterize and correct geometric image distortion in 0.2 T MR images. A large 3D phantom with spherical balls was used to characterize geometric distortion on an AIRIS Mate 0.2 T MR Scanner (Hitachi). MR images of the phantom were acquired in axial, sagittal and coronal planes using a 2D Fast Spin Echo (FSE) sequence, and distortions were measured at each control point. Two piecewise interpolation methods were then applied to correct geometric distortion. Distortion was characterized and corrected in any axial, sagittal or coronal slice within an effective FOV of 330 (LR) × 180 (AP) × 210 (HF) mm³. The distortion was reduced from 16 mm to 1.2 mm at 180 mm from the magnet center.
Fast and accurate correction of geometric distortion was achieved at large distances from the magnet isocenter.",2008,0,
6137,A Simulation Platform for the Study of Soft Errors on Signal Processing Circuits through Software Fault Injection,"In this paper, we present a simulation platform tailored for signal processing circuits that injects bit flips in order to model soft errors. The platform is based on the ESA Data Systems Division's SEE simulation tool upgraded with new functionalities. In order to show the effectiveness of the platform, a digital filter has been tested.",2007,0,
6138,Syntax error repair with dynamic valid length in LR-based parsers,"In this paper, we present a syntax error repair method that decides spurious errors using a dynamic valid length. When the compiler encounters a syntax error, it usually attempts to restart parsing to check the remainder of the input for any further errors. One common method of error repair is to repair the input by insertion, deletion, or substitution. After repair, the repair method should decide whether the error is a spurious error or not. In order to decide this, conventional methods adopt a fixed valid length; however, this is insufficient. In our method, the valid length isn't fixed in advance. Our method tries all candidates for an error and decides that the one with the longest valid length is not a spurious error. To show the effectiveness of our method, benchmark programs were executed with both our method and the conventional method using a fixed valid length. Compared with the conventional method, the proposed method can reduce approximately 90% of non-correcting errors and increase correcting errors by 23.1%.",2008,0,
6139,A Test Pattern Generation Method Based on Fault Injection for Logic Elements of FPGA,"In this paper, we present a test pattern generation method based on fault injection for logic elements of FPGAs (Field Programmable Gate Arrays). This method is able to perform fault diagnosis for stuck-at-0 and stuck-at-1 faults, which can locate logic resource faults in the logic elements of an FPGA. We use the LE (Logic Element) of Altera's EP2C8Q208C8N as the object to generate the test patterns, work out the test circuit, and synthesize them with Quartus II. Finally, the test circuit is injected with stuck-at-0 and stuck-at-1 faults and the test patterns are generated using SPICE.",2010,0,
6140,Global and local consistencies in distributed fault diagnosis for discrete-event systems,"In this paper, we present a unified framework for distributed diagnosis. We first introduce the concepts of global and local consistency in terms of supremal global and local supports, then present two distributed diagnosis problems based on them. After that, we provide algorithms to achieve supremal global and local supports respectively, and discuss in detail the advantages and disadvantages of each. Finally, we present an industrial example to demonstrate our distributed diagnosis approach.",2005,0,
6141,Error concealment using block-based scale-invariant features,"In this paper, we propose a scale-invariant block-based bilateral filter (SI-BBF) for spatial error concealment. The SI-BBF extends the block-based bilateral filter (BBF) and takes advantage of both non-corrupted data and scale-invariant space data in restoring missing pixels, while operating in a block-wise manner. In order to reduce the computational complexity, we introduce a novel weight adaptive selection scheme in our algorithm.
Extensive experiments show favorable results of the proposed algorithm against several competitors.",2010,0,
6142,Adding Autonomic Capabilities to Network Fault Management System,"In this paper, we propose an adaptive framework for adding the most desired aspects of autonomic capabilities into the critical components of a network fault management system. The aspects deemed the most desirable are those that have a significant impact on system dependability, which include self-monitoring, self-healing, self-adjusting, and self-configuring. Self-monitoring oversees the environmental conditions and system behavior, building a consciousness ground to support self-awareness capabilities. It is responsible for monitoring the system states and environmental conditions, analyzing them, and thus detecting and identifying system faults/failures. Upon detection, self-healing operations are enabled to respond (i.e., take proper actions) to the identified faults/failures. These actions are usually accomplished by self-configuring and self-adjusting the corresponding system configurations and operations. Together, all self-* approaches complete an adaptive framework and offer a sound solution towards high system assurance.",2007,0,
6143,Retrieval and small correction system for sailing directions using the Internet,"In this paper, we propose an automatic retrieval and small correction system for Japanese Sailing Directions using the Internet. There are two kinds of retrieval methods: retrieval with a keyword, and retrieval with the ship's sailing route. The system can easily carry out automatic small corrections, for example revisions and additions for new navigational information. The main body of the database is divided into many files, each with a file name and an ID; the ID is a date-and-time number. If an ID differs between the ship's own database and the Maritime Safety Agency's, the system automatically executes small corrections by comparing the file names and IDs over the Internet. The programs of the system are composed of Java applets, which are easy to run on the Internet under httpd operation. These technologies are useful for LAN construction within a future ship. If the Maritime Safety Agencies in the world offer the HTML database and tables on CD-ROM, our proposed system becomes practicable as soon as possible.",2001,0,
6144,Performance evaluation of color correction approaches for automatic multi-view image and video stitching,"Many different automatic color correction approaches have been proposed by different research communities in the past decade. However, these approaches are seldom compared, so their relative performance and applicability are unclear. For multi-view image and video stitching applications, an ideal color correction approach should be effective at transferring the color palette of the source image to the target image, and meanwhile be able to extend the transferred color from the overlapped area to the full target image without creating visual artifacts. In this paper we evaluate the performance of color correction approaches for automatic multi-view image and video stitching. We consider nine color correction algorithms from the literature applied to 40 synthetic image pairs and 30 real mosaic image pairs selected from different applications.
Experimental results show that both parametric and non-parametric approaches have members that are effective at transferring colors, while parametric approaches are generally better than non-parametric approaches in extendability.",2010,0,
6145,Determining and exploiting the distribution function of wind power forecasting error for the economic operation of autonomous power systems,"Many efforts have been presented in the literature for wind power forecasting in power systems, but few of them have been used for autonomous power systems. The impact of knowing the distribution function of the wind power forecasting error on the economic operation of a power system is studied in this paper. The paper proposes that the distribution of the wind power forecasting error of a specific tool can be easily derived if, for that model, an evaluation of its performance is made off-line by comparing the forecasted values of the tool with the actual wind power values over the same horizon. The proposed methodology is applied to the autonomous power system of Crete. It is shown that improving the performance of the wind power forecasting tool has a significant economic impact on the operation of autonomous power systems with increased wind power penetration. The obtained results for various levels of wind power production and load show that using only the mean absolute percentage error (MAPE) leads to a significant change in the estimation of the wind power to be shed to avoid technical limit violations, especially if the wind power forecasting tool underestimates the actual production.",2006,0,
6146,Fabric defect detection by Fourier analysis,"Many fabric defects are very small and indistinguishable, which makes them very difficult to detect by only monitoring the intensity change. Faultless fabric is a repetitive and regular global texture, and Fourier transforms can be applied to monitor the spatial frequency spectrum of a fabric. When a defect occurs in fabric, its regular structure is changed, so that the corresponding intensity at some specific positions of the frequency spectrum would change. However, the three-dimensional frequency spectrum is very difficult to analyze. In this paper, a simulated fabric model is used to understand the relationship between the fabric structure in the image space and in the frequency space. Based on the three-dimensional frequency spectrum, two significant spectral diagrams are defined and used for analyzing the fabric defect. These two diagrams are called the central spatial frequency spectrums. The defects are broadly classified into four classes: (1) double yarn; (2) missing yarn; (3) webs or broken fabric; and (4) yarn density variation. After evaluating these four classes of defects using some simulated models and real samples, seven characteristic parameters of a central spatial frequency spectrum are extracted for defect classification.",2000,0,
6147,Detecting faults in technical indicator computations for financial market analysis,"Many financial trading and charting software packages provide users with technical indicators to analyze and predict price movements in financial markets. Any computation fault in a technical indicator may lead to wrong trading decisions and cause substantial financial losses. Testing is a major software engineering activity to detect computation faults in software. However, there are two problems in testing technical indicators in these software packages.
Firstly, the indicator values are updated with real-time market data that cannot be generated arbitrarily. Secondly, technical indicators are computed based on a large amount of market data. Thus, it is extremely difficult, if not impossible, to derive the expected indicator values to check the correctness of the computed indicator values. In this paper, we address the above problems by proposing a new testing technique to detect faults in the computation of technical indicators. We show that the proposed technique is effective in detecting computation faults in faulty technical indicators on the MetaTrader 4 Client Terminal.",2010,0,
6148,Implementing an Advanced Simulation Tool for Comprehensive Fault Analysis,"Many large-scale system blackouts involve relay misoperations. Traditional relay algorithms and settings need to be evaluated under a variety of fault and no-fault system-wide scenarios to better understand the causes of misoperations. New fault diagnosis algorithms also need to be developed to assure improved relaying performance and then evaluated under various scenarios. This paper introduces advanced fault analysis simulation software based on interactive MATLAB and ATP simulation. The software consists of two major parts: power system simulation and relay algorithm evaluation. The former part can automatically generate thousands of system-wide events at one time and extract the transients for fault studies. The latter part includes the traditional distance relay model and two new advanced fault diagnosis algorithms. The structure of the software enables easy simulation setup for different power system models.",2005,0,
6149,FlowChecker: Detecting Bugs in MPI Libraries via Message Flow Checking,"Many MPI libraries have suffered from software bugs, which severely impact the productivity of a large number of users. This paper presents a new method called FlowChecker for detecting communication-related bugs in MPI libraries. The main idea is to extract program intentions of message passing (MP-intentions), and to check whether these MP-intentions are fulfilled correctly by the underlying MPI libraries, i.e., whether messages are delivered correctly from specified sources to specified destinations. If not, FlowChecker reports the bugs and provides diagnostic information. We have built a FlowChecker prototype on Linux and evaluated it with five real-world bug cases in three widely-used MPI libraries, including Open MPI, MPICH2, and MVAPICH2. Our experimental results show that FlowChecker effectively detects all five evaluated bug cases and provides useful diagnostic information. Additionally, our experiments with HPL and NPB show that FlowChecker incurs low runtime overhead (0.9-9.7% on three MPI libraries).",2010,0,
6150,Bathythermograph error analysis and reduction (BEAR),"Many oceanographic and tactical studies require high-fidelity sonar predictions, which require accurate portrayals of the ocean's temperature structure. The Naval Oceanographic Office (NAVOCEANO) merges various oceanographic data into 'first-guess' temperature fields. The Modular Ocean Data Assimilation System (MODAS) then assimilates more timely bathythermograph (BT) data to create temperature nowcasts and companion uncertainty fields. Cost constraints require minimization of at-sea measurements. The Sensor Placement for Optimal Temperature Sampling (SPOTS) algorithm determines the best placements of limited numbers of BTs to provide the most accurate temperature fields at the lowest possible cost.
SPOTS hypothesizes that placements which minimize uncertainty will also tend to minimize error. The bathythermograph error analysis and reduction (BEAR) algorithm objectively determines covariance distances for use in MODAS assimilation routines and performs quality control (QC) on BTs to validate the SPOTS hypothesis. BEAR models physical errors that shift the temperature uniformly (factory mis-calibration, poor storage conditions, instrument abuse), shift the depth uniformly (starting lag and wave height), and expand or contract the gradients uniformly (inaccurate rate of fall). BEAR simulates the profiles that would result by backing out every possible combination of these physical errors. Deep-water temperature profiles sharply constrain BT QC, because they tend to be spatially and temporally stationary. An error of 0.4 degrees may correspond to fifty standard deviations from the mean. Unsurprisingly, the deeper half of the variances usually accounts for 95-99% of the total. Satellite-derived Sea-Surface Temperature (MCSST) data provide a second constraint, because their uncertainties are low (compared to BTs). BEAR computes the sum of variance over all depths for each simulated error-combination against these surface and climatological constraints. The minimum variance sum is usually chosen to represent the most likely error combination, which is then applied to the BT to correct it, without at any point tampering with the temperature structure. Generally the one with the lowest variance is selected. However, if more than one BT is available from the same local region and within a reasonable time period, cross-correlation can be used to align the structural features (e.g., thermocline or mixed-layer depth) and select the best member of each variance cluster.",2005,0,
6151,Motivating the corrective maintenance maturity model (CM3),"Many process models have been hitherto suggested. Very few of them, however, deal with the most utilised process today: the maintenance process. The article presents and motivates the Corrective Maintenance Maturity Model (CM3), a process model dedicated entirely to software maintenance.",2001,0,
6152,Adaptive and Fault Tolerant Simulation of Relativistic Particle Transport with Data-Level Checkpointing,"Many scientific applications exhibit high demands on memory storage and computing capability. Improvements in commodity processors and networks have provided an opportunity to support such scientific applications within an everyday computing infrastructure. Good applications need the ability to work in constantly changing environments. Adaptability and fault tolerance are essential. Based on simulation of relativistic particle transport, this paper proposes a data-level checkpointing scheme for common scientific applications. This scheme takes advantage of the regular program layout, dominant computing loops, and fine-grained iterations. Without handling stack and heap segments directly, only application data is saved and restored as the computation state. The checkpointing interval can be dynamically adjusted to satisfy sensitivity and efficiency requirements for feasible fault tolerance. With this periodic but fixed-location checkpointing scheme, the MPI-based simulation system can be reconfigured by being shut down first and then restarted on the same or different computer clusters. Application data can be redistributed for the new configuration.
Experimental results have demonstrated this scheme's efficiency and effectiveness.",2008,0,
6153,An Improved Fault Petri Net for Knowledge Representation,"Knowledge representation is a necessary process in building an autonomous fault diagnosis expert system for satellites. In order to enhance the ability to represent time-related rules and r-of-n rules, an improved fault Petri net model for knowledge representation is proposed on the basis of fault Petri net theory. The alarming threshold function and the firing threshold function are introduced; the alarming conditions and result are given in the form of definitions, and similarly, the firing conditions and result are redefined. Because a transition gives an alarm for some time and then may fire, the improved fault Petri net model can represent time-related rules, while the original fault Petri net model couldn't. Furthermore, the improved fault Petri net model can represent r-of-n rules much more conveniently than the fault Petri net model; it needs far fewer transitions to represent the same rule. The improved fault Petri net model upgrades the knowledge representation abilities and provides basic conditions for the knowledge representation of the diagnosis expert system.",2009,0,
6154,"A Discreet, Fault-Tolerant, and Scalable Software Architectural Style for Internet-Sized Networks","Large networks, such as the Internet, pose an ideal medium for solving computationally intensive problems, such as NP-complete problems, yet no well-scaling architecture for Internet-sized systems exists. I propose a software architectural style for large networks, based on a formal mathematical study of crystal growth, that will exhibit properties of (1) discreetness (nodes on the network cannot learn the algorithm or input of the computation), (2) fault-tolerance (malicious, faulty, and unstable nodes cannot break the computation), and (3) scalability (communication among the nodes does not increase with network or problem size). I plan to evaluate the style both theoretically and empirically for these three properties.",2007,0,
6155,Fault-tolerant 2-D Mesh Network-On-Chip for MultiProcessor Systems-on-Chip,"Large system-on-chip (SoC) circuits contain an increasing number of embedded processor cores, while their communication infrastructures are implemented with networks-on-chip (NOC). Due to the increasing transistor and wire densities, these circuits are more difficult to test, which requires that different self-diagnosis and self-test methods be mobilized. Self-diagnosis and self-repair methods usable for invalidating at least minor manufacturing defects of the NOCs may also be needed for improving the chip yield. This paper presents a new fault-tolerant NOC with two-dimensional mesh topology for future multi-processor SoCs (MPSoC). The improved fault-tolerance is implemented with a fault-diagnosis-and-repair (FDAR) system, which makes the NOC more testable and diagnosable. The FDAR can detect static, dynamic, and transient faults and repair the faulty switches. Furthermore, it also makes it possible for the local processors to reconfigure their switch nodes to work correctly. After the reconfigurations, a novel adaptive routing algorithm named fault-tolerant dimension-order-routing (FTDOR) is able to route packets adaptively in seriously faulty networks.
The usage of FTDOR also makes it possible to use all of the ports of the edge switch nodes for connecting processors to the NOC, which improves the utilization of the NOC's resources.",2006,0,
6156,A method for reduction of TDOA measurement error in UWB leading edge detection receiver,"Leading edge detection receivers are the simplest devices intended for the reception of ultra-wideband pulses. However, their application to time difference of arrival (TDOA) measurements can be a source of errors whose values depend on the amplitudes of the received pulses. The paper describes a method for the reduction of these errors. It presents the necessary receiver modifications and an algorithm for TDOA value correction. The method was experimentally tested with a positioning UWB receiver. The paper contains results of measurements made in an indoor environment.",2010,0,
6157,FTA technique addressing fault criticality and interactions in complex consumer communications,"Limitations of conventional fault tree analysis (FTA) application to modern consumer electronics are outlined, and the necessity for an upgraded FTA is justified. Field and manufacturing failure modes, mechanisms and their interactions are arranged into a unified rank system. An advanced failure tree is developed; new application-specific gates are substantiated and introduced. Detailed analysis, formulas, typical examples and recommendations are presented. It is shown that the degree/accuracy of the mathematical description is determined by the demands of practical accuracy. Broader application of the methodology is justified.",2001,0,
6158,On errors-in-variables regression with arbitrary covariance and its application to optical flow estimation,"Linear inverse problems in computer vision, including motion estimation, shape fitting and image reconstruction, give rise to parameter estimation problems with highly correlated errors in variables. Established total least squares methods estimate the most likely corrections Â and b̂ to a given data matrix [A, b] perturbed by additive Gaussian noise, such that there exists a solution y with [A + Â, b + b̂]y = 0. In practice, regression imposes a more restrictive constraint, namely the existence of a solution x with [A + Â]x = [b + b̂]. In addition, more complicated correlations arise canonically from the use of linear filters. We, therefore, propose a maximum likelihood estimator for regression in the general case of arbitrary positive definite covariance matrices. We show that Â, b̂ and x can be found simultaneously by the unconstrained minimization of a multivariate polynomial which can, in principle, be carried out by means of a Gröbner basis. Results for plane fitting and optical flow computation indicate the superiority of the proposed method.",2008,0,
6159,Lightweight Fault Localization with Abstract Dependences,"Locating faults is one of the most time-consuming tasks in today's fast-paced economy. Testing and formal verification techniques like model-checking are usually used for detecting faults but do not attempt to locate the root cause of the detected faulty behavior. This article makes use of abstract dependences between program variables for locating faults in programs. We discuss the basic ideas, the underlying theory, and first experimental results, as well as our model's limitations. Our fault localization model is based on previous work that uses abstract dependences for fault detection.
First case studies indicate our model's practical applicability.",2006,0,
6160,Directional ground-fault indicator for high-resistance grounded systems,"Locating ground faults is a difficult and challenging problem for low-voltage power systems that are ungrounded or have high-impedance grounding. Recent work in pilot signals has renewed efforts in developing fault location methodologies. This paper presents a method for directional ground-fault indication that utilizes the fundamental frequency voltages and currents. Although the ground-fault current is small and usually less than the load currents, the fault has zero-sequence components that distinguish it from the load. Signal processing techniques are used to identify and compare the fault signals to determine the fault direction. The process takes advantage of the currents flowing from the distributed grounding capacitance. An experimental microprocessor-based directional indicator unit is tested in an industrial power distribution system. Directional indication of ground faults is applied near tap-off branch circuit connections. Promising results from field tests conducted in a harmonic-noisy setting are presented. Directional indicator units simplify the search process on large networks, thus reducing the time and effort necessary to locate and remove the fault, and thereby significantly reducing the probability of a second ground fault with its destructive currents.",2003,0,
6161,Efficient Diagnosis of Scan Chains with Single Stuck-at Faults,"Locating the scan chain fault is a critical step for IC manufacturers to analyze failures for yield improvement. In this paper, we propose a diagnosis scheme to locate the single stuck-at fault in scan chains. Our diagnosis scheme is an improved design over a previously proposed scheme which can diagnose the output of each cell flip-flop in the scan chain. With our scheme, not only can the output of each cell flip-flop be diagnosed, but also the inverse output of each cell flip-flop and the serial input of the scan chain. Our proposed diagnosis scheme is efficient and takes (4n+6) clock cycles in the worst case for an n-bit scan chain.",2009,0,
6162,QED: Quick Error Detection tests for effective post-silicon validation,"Long error detection latency, the time elapsed between the occurrence of an error caused by a bug and its manifestation as a system-level failure, is a major challenge in post-silicon validation of robust systems. In this paper, we present a new technique called Quick Error Detection (QED), which transforms existing post-silicon validation tests into new validation tests that significantly reduce error detection latency. QED transformations allow flexible tradeoffs between error detection latency, coverage, and complexity, and can be implemented in software with little or no hardware changes. Results obtained from hardware experiments on quad-core Intel Core™ i7 hardware platforms and from simulations on a multi-core MIPS processor design demonstrate that: 1. QED significantly improves error detection latencies by six orders of magnitude, i.e., from billions of cycles to a few thousand cycles or less. 2. QED transformations do not degrade the coverage of validation tests as estimated empirically by measuring the maximum operating frequencies over a wide range of operating voltage points. 3.
QED tests improve coverage by detecting errors that escape the original non-QED tests.",2010,0,
6163,Locating a ground fault in low-voltage high-resistance grounded systems via the single-processor concept for circuit protection,"Low-voltage resistance-grounded systems may provide significant advantages to the facility in terms of system reliability and safety. However, maintenance of these systems is more complex than for solidly grounded wye systems, and improper first-fault isolation and lack of timely repair may present a risk to the facility. Identifying the location of ground faults within the distribution system is the main problem. We describe a new way to detect which feeder has been faulted in a lineup of low-voltage switchgear protected via the ""single-processor concept for protection and control of circuit breakers in low-voltage switchgear"" [Marcelo, E et al., 2004]. We describe the detection methodology and how the potential sources of error are addressed.",2005,0,
6164,Effectiveness of machine checks for error diagnostics,"Machine Check Architecture (MCA) is a processor-internal architecture subsystem that detects and logs correctable and uncorrectable errors in the data or control paths in each CPU core and the Northbridge. These errors include parity errors associated with caches and TLBs, ECC errors associated with caches and DRAM, and system bus errors. This paper reports on an experimental study on: (i) monitoring a computing cluster for machine checks and using this data to identify patterns that can be employed for error diagnostics, and (ii) introducing faults into the machine to understand the resulting machine checks and correlate this data with relevant performance metrics.",2009,0,
6165,Non-destructive defect detection scheme using Kerr-channel optical surface analyzer,"Magnetic defects on the surface of magnetic disks, unlike normal topological defects, are difficult to detect because they are solely due to magnetic signal loss as seen by the MR element and often do not have any obvious topological features. Defects without any topological features usually render conventional defect finding techniques such as visual inspection, optical microscopy, and SEM useless. In the past, the painstaking way of locating the defect was to index the drive with a strobe light and follow with ferrofluid decoration of the surface to determine the location of the defect. The disadvantage is that the magnetic particles in the decoration technique can often confound the origin of the defect. Furthermore, the old technique does not have high enough resolution to locate a defect with a size on the order of a micrometer. In this work, we describe how a magnetic marking technique (MMT) is utilized to circumscribe the magnetic defects. The markers are written in such a way that both the defect and the magnetic markers are easily detected using a Kerr-channel optical surface analyzer and a magnetic force microscope. Typically, the accuracy of locating the defect using the MMT is within a track-pitch radially and within one data sector circumferentially. The new scheme provides at least one order of magnitude improvement in detection resolution over the old technique.",2001,0,
6166,Mobile agent fault tolerance for information retrieval applications: an exception handling approach,"Maintaining mobile agent availability in the presence of agent server crashes is a challenging issue since developers normally have no control over remote agent servers.
A popular technique is that a mobile agent injects a replica into stable storage upon its arrival at each agent server. However, a server crash leaves the replica unavailable, for an unknown time period, until the agent server is back online. This paper uses exception handling to maintain the availability of mobile agents in the presence of agent server crash failures. Two exception handler designs are proposed. The first exists at the agent server that created the mobile agent. The second operates at the previous agent server visited by the mobile agent. Initial performance results demonstrate that although the second design is slower, it offers the smaller trip time increase in the presence of agent server crashes.",2003,0,
6167,"A robust search-based approach to project management in the presence of abandonment, rework, error and uncertainty","Managing a large software project involves initial estimates that may turn out to be erroneous or that might be expressed with some degree of uncertainty. Furthermore, as the project progresses, it often becomes necessary to rework some of the work packages that make up the overall project. Other work packages might have to be abandoned for a variety of reasons. In the presence of these difficulties, optimal allocation of staff to project teams and teams to work packages is far from trivial. This paper shows how genetic algorithms can be combined with a queuing simulation model to address these problems in a robust manner. A tandem genetic algorithm is used to search for the best sequence in which to process work packages and the best allocation of staff to project teams. The simulation model, which computes the project's estimated completion date, guides the search. The possible impact of rework, abandonment and erroneous or uncertain initial estimates is characterised by separate error distributions. The paper presents results from the application of these techniques to data obtained from a large-scale commercial software maintenance project.",2004,0,
6168,Detecting Design Errors in Composite Events for Event Triggered Real-Time Systems Using Timed Automata,"Many applications need to detect and respond to occurring events and combine these event occurrences into new events with a higher level of abstraction. Specifying how events can be combined is often supported by design tools specific to the current event processing engine. However, the issue of ensuring that the combinations of events provide the system with the correct combination of information is often left to the developer to analyze. We argue that analyzing the correctness of event composition is a complex task that needs tool support. In this paper we present a novel development tool for specifying composition of events with time constraints. One key feature of our tool is to automatically transform composite events for real-time systems into a timed automaton representation. The timed automaton representation allows us to check for design errors, for example, whether the outcome of combining events with different operators in different consumption policies is consistent with the requirement specification.",2006,0,
6169,Fault-tree analysis on computer security system using intuitionistic fuzzy sets,The intuitionistic fuzzy set is a useful tool to express experts' uncertain knowledge and experience information. Most computer security systems have abnormal states and their failure behaviors are always characterized by uncertainty and inaccuracy.
How to calculate the fault intervals of system components is an interesting and important issue in intuitionistic fuzzy fault-tree analysis (IFFTA). Intuitionistic fuzzy fault diagnosis models are proposed in this paper to analyze fuzzy system reliability and to find the most critical system component for decision-making, based on some basic definitions. The proposed method is applied to the failure analysis problem of computer security systems, and comparative studies are conducted to show its flexibility and effectiveness relative to the other fault tree analysis models.,2009,0, 6170,A Proactive Fault Resilient Overlay Multicast for Media Streaming,"IP multicast has been suffering from a lack of large-scale deployment due to various issues such as the upgrading of routers, policy making, negotiation and so on. To overcome the problems of IP multicast, overlay multicast, in which multicast functionalities are implemented at end hosts, has been proposed. However, unlike IP multicast, the non-leaf nodes in the multicast delivery tree are normal end hosts which can join and leave the tree freely. This makes the multicast tree unstable and forces disrupted nodes to rejoin the tree. In this paper, we propose that each node, when joining the multicast tree, sets its leaving time, i.e. how long it will stay in the tree, and sends a join request to a number of nodes which are already on the tree. Using the leaving times of nodes, new nodes are joined to the tree such that a child's leaving time is earlier than its parent's, i.e. the child will leave the tree earlier than its parent. This makes the tree more stable than joining at random nodes in the tree. Furthermore, we propose a proactive recovery mechanism so that even if a parent leaves the tree and its children do not, the children can rejoin at predetermined nodes immediately, so that the recovery time of the disrupted nodes is minimized. We have shown by simulation that there is less overhead when joining the multicast tree and that the recovery time of the disrupted nodes is much less than in previous works.",2009,0, 6171,Network-Processor-Based IPv4/IPv6 Translator: Implementation and Fault Tolerance,"IPv6 is a new protocol version for the next-generation Internet, which has the advantage of supporting scalability, mobility and security better than the current IPv4 Internet. However, IPv4 and IPv6 are not directly compatible. An IPv4/IPv6 translator with high performance and reliability can facilitate the seamless coexistence of IPv4 and IPv6 networks during the transition period. In this paper, we propose a design and implementation of an IPv4/IPv6 translator using network processors. The challenges of this work lie in three areas: (1) effective use of network processor resources, (2) design and implementation of an advanced control plane on a commodity OS, and (3) fault tolerance to improve the reliability of the whole system. We discuss our solutions to these challenges and present analytic models to investigate the fault tolerance issues of our approaches. Our work provides guidelines and insights into the design, implementation and configuration of the IPv4/IPv6 translator.",2008,0, 6172,The Non-Uniformity Correction for IRFPA,"The IRFPA (infrared focal plane array) detector, which has low-cost, small-size and non-refrigeration characteristics compared with traditional infrared detectors, has broad application prospects. However, the non-uniformity issue of the IRFPA detector has always been the fundamental constraint on its application, and is also a current hotspot of infrared imaging studies.
By analyzing the actual noise characteristics of infrared images from the viewpoint of image restoration, this paper points out specific methods for the non-uniformity correction of infrared images, and selects two different image restoration methods to compare their actual restoration effects. Finally, it points out possible research and development directions for non-uniformity correction algorithms for infrared images.",2009,0, 6173,Computation Error Analysis in Digital Signal Processing Systems With Overscaled Supply Voltage,"It has been recently demonstrated that digital signal processing systems may possibly leverage unconventional voltage overscaling (VOS) to reduce energy consumption while maintaining satisfactory signal processing performance. Due to the computation-intensive nature of most signal processing algorithms, the energy saving potential largely depends on the behavior of computer arithmetic units in response to overscaled supply voltage. This paper shows that different hardware implementations of the same computer arithmetic function may respond to VOS very differently and result in different energy saving potentials. Therefore, the selection of appropriate computer arithmetic architecture is an important issue in voltage-overscaled signal processing system design. This paper presents an analytical method to estimate the statistics of computer arithmetic computation errors due to supply voltage overscaling. Compared with computation-intensive circuit simulations, this analytical approach can be several orders of magnitude faster and can achieve a reasonable accuracy. This approach can be used to choose the appropriate computer arithmetic architecture in voltage-overscaled signal processing systems. Finally, we carry out case studies on a coordinate rotation digital computer processor and a finite-impulse-response filter to further demonstrate the importance of choosing proper computer arithmetic implementations.",2010,0, 6174,A fast unblocking scheme for distance protection to identify symmetrical fault occurring during power swings,"It is demonstrated in this paper that the change rates of the sum of the three-phase active powers and of the reactive power are cosine and sinusoidal functions, respectively, of the phase difference between the two power systems during power swings. In this case, they cannot fall below the threshold of 0.7 after they are normalized. However, they level off to 0 when a three-phase fault occurs during power swings. Based on this analysis, a cross-blocking scheme to rapidly identify a symmetrical fault occurring during power swings is therefore proposed. By virtue of the improved two-instantaneous-value-product algorithm, the calculation of the active and reactive power is immune to variations of the system power frequency. As an integration-based criterion, this criterion has high stability. Simulation results show that this scheme is of high reliability and fast time response. The longest time delay is up to 30 ms",2006,0, 6175,Comparing several coverage criteria for detecting faults in logical decisions,"Many testing coverage criteria, including decision coverage and condition coverage, are well-known to be inadequate for software characterised by complex logical decisions, such as those in safety-critical software. In the past decade, more sophisticated testing criteria have been advocated. In particular, compliance with MC/DC has been mandated in the aviation industry for the approval of airborne software.
On the other hand, the MUMCUT criterion has been proved to guarantee the detection of certain faults in logical decisions in irredundant disjunctive normal form. We analyse and empirically evaluate the ability of test sets satisfying these testing criteria to detect faults in logical decisions. Our results show that MC/DC test sets are effective, but they may still miss some faults that can almost always be detected by test sets satisfying the MUMCUT criterion.",2004,0, 6176,Self-adaptive multi-scale weight morphological operator applied to wood products defects testing by using computed tomography,"Mathematical morphology is an emerging theory and method in digital image processing. Based on the basic theory and algorithms of mathematical morphology, a self-adaptive multi-scale weight morphological operator was proposed and used for defect identification in X-ray computed tomography images of wood products. As can be seen in the experimental analysis, compared with traditional edge detection algorithms, the self-adaptive multi-scale weight morphological operator achieved a high degree of detection and identification accuracy in wood product edge detection. Real-time imaging hardware for the non-destructive testing of wood products and a real-time image processing software system were built in the study, and the self-adaptive multi-scale weight morphological operator was used for edge detection in wood product X-ray images. The achievements of the study realized real-time online detection in wood product X-ray computed tomography images and improved the accuracy of defect detection in the images. They have a wide range of applications in the quality identification of wood boards and the detection of defects in logs.",2010,0, 6177,Error analysis on spinal motion measurement using skin mounted sensors,"Measurement errors of skin-mounted sensors in measuring the forward bending movement of the lumbar spine are investigated. In this investigation, radiographic images capturing the entire lumbar spine's positions were acquired and used as a gold standard. Seventeen young male volunteers (21 (SD 1) years old) agreed to participate in the study. Light-weight miniature sensors of the electromagnetic tracking system Fastrak were attached to the skin overlying the spinous processes of the lumbar spine. With the sensors attached, the subjects were requested to take lateral radiographs in two postures: neutral upright and full flexion. The ranges of motion of the lumbar spine were calculated from two sets of digitized data, the bony markers of the vertebral bodies and the sensors, and compared. The differences between the two sets of results were then analyzed. The relative movement between sensor and vertebrae was decomposed into sensor sliding and tilting, from which a sliding error and a tilting error were introduced. The gross motion range of forward bending of the lumbar spine measured from the bony markers of the vertebrae is 67.8 (SD 10.6) and that from the sensors is 62.8 (SD 12.8). The error and absolute error for the gross motion range were 5.0 (SD 7.2) and 7.7 (SD 3.9). The contributions of the sensors placed on S1 and L1 to the absolute error were 3.9 (SD 2.9) and 4.4 (SD 2.8), respectively.",2008,0, 6178,A method to handle CRC errors on the basis of FlexRay,"It is very important to communicate between the many electronic components in automobiles. If any component has an error, it is possible for the error to affect other components.
We propose a method of handling cyclic redundancy check (CRC) errors in a network using FlexRay. The FlexRay vehicle network protocol features fault-tolerance and guaranteed message latency. According to the FlexRay specification, each node in the FlexRay network decodes received messages and checks for various errors. However, if an error occurs at any node, the other nodes on the network do not know that a particular node generated an error. In this case, the node that did not receive the critical message cannot be guaranteed to demonstrate correct behavior. In order to solve this problem, we suggest a solution using dynamic messages supported by the FlexRay protocol.",2009,0, 6179,Jaca: a reflective fault injection tool based on patterns,"Jaca is a software fault injection tool that validates OO applications written in Java. Jaca's major goal is to inject faults using high-level programming features during runtime by corrupting attribute values, method parameters or return values. Jaca's design was based on a set of patterns, the fault injection pattern system. This pattern system describes a generic architecture defined from recurrent design aspects present in most fault injection tools. The objective was to reduce tool development time while enhancing qualities such as portability, extensibility, reusability, efficiency and robustness. The paper presents the pattern set and its use in Jaca's development. An extension of Jaca to consider injection at the assembly level is also presented to show how easy it is to add new features to the tool.",2002,0, 6180,"Fault-aware, utility-based job scheduling on Blue Gene/P systems","Job scheduling on large-scale systems is an increasingly complicated affair, with numerous factors influencing scheduling policy. Addressing these concerns results in sophisticated scheduling policies that can be difficult to reason about. In this paper, we present a general utility-based scheduling framework to balance various scheduling requirements and priorities. It enables system owners to customize scheduling policies under different circumstances without changing the scheduling code. We also develop a fault-aware job allocation strategy for Blue Gene/P systems to address the increasing concern of system failures. We demonstrate the effectiveness of these facilities by means of event-driven simulations with real job traces collected from the production Blue Gene/P system at Argonne National Laboratory.",2009,0, 6181,Design and Implementation of Intelligent Fault Diagnosis System,"A method of analog circuit fault diagnosis based on an ANN (artificial neural network) and an ES (expert system) is proposed in line with recent developments in modern electronic testing technology and artificial intelligence. The structure and organization of the fault diagnosis system for analog circuits and its feasibility are mainly illustrated, and a software platform based on LabVIEW and Matlab is developed. Through the software, the system can manage its two main function modules, the auto measurement module and the fault diagnosis module, so that they work together in an orderly manner. If a test program is loaded into the software platform, the corresponding circuit will be measured automatically.
It is shown that the software platform has a friendly interface and complete functionality, and meets the needs of intelligent fault diagnosis.",2009,0, 6182,Correction for head movements in positron emission tomography using an optical motion tracking system,"Methods capable of correcting for head motion in all six degrees of freedom have been proposed for PET brain imaging but not yet demonstrated in human studies. These methods rely on the accurate measurement of motion in a coordinate frame aligned with the scanner. We present methodology for the direct calibration of an optical motion tracking system to the reconstruction coordinate frame using paired coordinate measurements obtained simultaneously from a PET scanner and tracking system. We also describe the implementation of motion correction, based on the multiple acquisition frame method originally described by Y. Picard and C.J. Thompson (1997), using data provided by the motion tracking system. Effective compensation for multiple six degree-of-freedom movements is demonstrated in dynamic PET scans of the Hoffman brain phantom and a normal volunteer. We conclude that reduced distortion and improved quantitative accuracy can be achieved with this method in PET brain studies degraded by head movements",2000,0, 6183,"Crouching error, hidden markup [Microsoft Word]","Microsoft has such a pervasive commercial presence that people working professionally in computing must expect that the company's behavior will affect their own work and reputation in many ways. It's difficult to ignore Microsoft and only too easy to snipe at the company, but Microsoft's effect on the computing profession, as distinct from its effect on the industry, might be harmful to a degree that would justify severe criticism. Microsoft Word's lack of a versatile and visible markup language can make using the package a nightmare, and it reflects poorly on our profession. In addition, one's struggle to get help from Microsoft technical support can often be fruitless and expensive",2001,0, 6184,Identifying Error in AUV Communication,"Mine countermeasures (MCM) involving autonomous underwater vehicles (AUVs) are especially susceptible to error, given the constraints on underwater acoustic communication and the inconstancy of the underwater communication channel. Little work has been done to systematize error identification and response in AUV communication. We introduce a systematic approach involving design failure mode and effects analysis (DFMEA) that is adapted to the complex character of communication between autonomous agents.",2006,0, 6185,Soft underlayer noise in perpendicular recording and its impact on error rate,"Minimum soft underlayer thickness for adequate flux conduction was established through experiment and calculation. Methods for suppressing stripe domains were discussed. Characteristics of spike noise in laminated soft underlayers were described, and the impact of spike noise on error rate was assessed using a software channel. The correlation between spike noise and errors was weak.",2005,0, 6186,A Fault Hypothesis for Integrated Architectures,"Integrated architectures in the automotive and avionic domain promise improved resource utilization and enable better tactical coordination of application subsystems compared to federated systems. In order to support safety-critical application subsystems, an integrated architecture needs to support fault-tolerant strategies that enable the continued operation of the system in the presence of failures.
The basis for the implementation and validation of fault-tolerant strategies is a fault hypothesis that identifies the fault containment regions, specifies the failure modes and provides realistic failure rate assumptions. This paper describes a fault hypothesis for integrated architectures, which takes into account the collocation of multiple software components on shared node computers. We argue in favor of a differentiation of fault containment regions for hardware and software faults. In addition, the fault hypothesis describes the assumptions concerning the respective frequencies of transient and permanent failures in consideration of recent semiconductor trends",2006,0, 6187,Fault analysis expert system for power system,"The Internet not only brings opportunities to power utilities, but also poses challenges. By virtue of the Internet and a data warehouse, we developed support software that collects and stores data from devices in substations, such as relay protection devices, dynamic process recorders and so on. Based on intelligent agent theory, knowledge discovery theory and rough set theory, we developed a distributed intelligent system (Fault Analysis Expert System based on Intelligent Agent), which can analyze the causes of failures. In the system, the intelligent agent is the distributed intelligent unit providing services for other units; knowledge discovery theory based on rough set theory can acquire and optimize rules from fact information.",2004,0, 6188,Stress test for disturb faults in non-volatile memories,Non-volatile memories are susceptible to a special type of fault known as program disturb faults. Testing for such faults requires the application of stress tests, which have long application times, to distinguish faulty cells from non-faulty cells. In this paper we present a new sensing scheme that can be used with stress tests to allow for efficient detection of faulty cells based on the notion of margin reads. We demonstrate the efficiency of the margin-read approach for distinguishing between faulty and fault-free cells using electrical simulations.,2003,0, 6189,Notice of Retraction
Material inner defect detection by a vibration spectrum analysis,"Notice of Retraction

After careful and considered review of the content of this paper by a duly constituted expert committee, this paper has been found to be in violation of IEEE's Publication Principles.

We hereby retract the content of this paper. Reasonable effort should be made to remove all past references to this paper.

The presenting author of this paper has the option to appeal this decision by contacting TPII@ieee.org.

This paper describes the possibility of non-destructive diagnostics of solid objects by software analysis of the vibration spectrum. Using the MATLAB platform, we can process and evaluate information from an accelerometer placed on the measured object. The analog signal is digitized by a special I/O device, the dSPACE CP-1104, and then processed offline with the FFT (Fast Fourier Transform). The power spectrum is then examined by the developed evaluation procedures, and the individual results are displayed in a bar graph. Looking at the results, there is an evident correlation between the spectrum and the examined object (inner defects).",2010,0, 6190,Notice of Retraction
A unique approach to multi-factor decision making by combining hierarchical analysis with error analysis,"Notice of Retraction

After careful and considered review of the content of this paper by a duly constituted expert committee, this paper has been found to be in violation of IEEE's Publication Principles.

We hereby retract the content of this paper. Reasonable effort should be made to remove all past references to this paper.

The presenting author of this paper has the option to appeal this decision by contacting TPII@ieee.org.

The object of this research is to enhance effectiveness and obtain more logical output data from the multi-criteria decision-making methodology by combining the analytic hierarchy process (AHP) with failure modes and effects analysis (FMEA). In the analytic hierarchy process method, the parameters of interest are ranked through specific matrix vectors after compiling and defining the objectives, the alternatives and the weighting for each parameter by pairwise comparison. All of the many ranking methods that have been designed on the basis of mathematical logic put the emphasis on the weight of each factor. What distinguishes the order analysis method from the others is the use of quantitative rather than qualitative parameters and pairwise comparison, in which the ranking accuracy is confirmed by the analyst. In this research, we succeeded in generating a favorable combination of the two mentioned methods, ranking the criteria of FMEA within the framework of AHP, and observed that the achieved results fully conform to the actual system.",2010,0, 6191,Notice of Retraction
A novel weighted voting algorithm based on neural networks for fault-tolerant systems,"Notice of Retraction

After careful and considered review of the content of this paper by a duly constituted expert committee, this paper has been found to be in violation of IEEE's Publication Principles.

We hereby retract the content of this paper. Reasonable effort should be made to remove all past references to this paper.

The presenting author of this paper has the option to appeal this decision by contacting TPII@ieee.org.

Voting algorithms are used in a wide range of control systems, from real-time and safety-critical control systems to pattern recognition, image processing and human organization systems, in order to arbitrate among the redundant results of processing in redundant hardware modules or software versions. From one point of view, voting algorithms can be categorized into agreement-based voters, like plurality and majority, and voters which produce an output regardless of whether agreement exists among the results of the redundant variants. In some applications it is necessary to use the second type of voter, including the median and weighted average voters. Although both the median and weighted average voters are the voters of choice for highly available applications, weighted average voting is often more trustworthy than the median. While the median voter simply selects the mid-value of the results, the weighted average voter assigns a weight to each input, based on its pre-determined priority or on the differences among the inputs, so that the share of the more trustworthy inputs increases relative to the inputs with low probable correctness. This paper introduces a novel weighted average voting algorithm based on neural networks that is capable of improving system reliability. Our experimental results showed that the neural weighted average voter increases the reliability by 116.63% in general, and by 309.82%, 130.27% and 9.37% respectively for large, medium and small errors, in comparison with the weighted average voter, and by 73.87% in general, and by 160.44%, 83.59% and 7.52% respectively for large, medium and small errors, in comparison with the median voter.",2010,0, 6192,Notice of Retraction
IVC fault diagnosis based on the improved BP neural network,"Notice of Retraction

After careful and considered review of the content of this paper by a duly constituted expert committee, this paper has been found to be in violation of IEEE's Publication Principles.

We hereby retract the content of this paper. Reasonable effort should be made to remove all past references to this paper.

The presenting author of this paper has the option to appeal this decision by contacting TPII@ieee.org.

The improved BP neural network is researched. For the first time, a method of IVC fault diagnosis based on the improved BP neural network is presented. First, the multi-fault integral diagnosis model is used to diagnose, and then the single-fault parallel diagnosis model is introduced according to the fault characteristics of IVC. The results of the research show that the latter method is apparently better than the former, with higher validity.",2010,0, 6193,Notice of Retraction
A channel quality based unequal error protection method for wireless video transmission,"Notice of Retraction

After careful and considered review of the content of this paper by a duly constituted expert committee, this paper has been found to be in violation of IEEE's Publication Principles.

We hereby retract the content of this paper. Reasonable effort should be made to remove all past references to this paper.

The presenting author of this paper has the option to appeal this decision by contacting TPII@ieee.org.

A channel-quality-based unequal error protection (UEP) method for wireless video transmission is proposed in this paper. The importance of the different partitions of the compressed video bitstream is classified by their corresponding impact on the decoding quality, and has been taken into consideration in designing the transmission system. In order to avoid mapping the important parts onto subcarriers in deep fading, a subcarrier mapping rule combined with hierarchical QAM (HQAM) is also proposed. Simulation results demonstrate that the proposed UEP method outperforms the equal error protection (EEP) method and the previously developed UEP methods, with an average improvement of 4.0 dB in the received video.",2010,0, 6194,Notice of Retraction
Diagnosis method for connection-related faults in motion system based on SVM,"Notice of Retraction

After careful and considered review of the content of this paper by a duly constituted expert committee, this paper has been found to be in violation of IEEE's Publication Principles.

We hereby retract the content of this paper. Reasonable effort should be made to remove all past references to this paper.

The presenting author of this paper has the option to appeal this decision by contacting TPII@ieee.org.

In order to improve the safety and reliability of a numerical control system and to monitor its working states, and aiming at the many kinds of potential connection-related faults, the construction and principle of the system were analyzed, and a systematic diagnosis framework was developed. Using the position signal and the torque monitoring signal, the parameters of a support vector machine were trained, with the Gaussian function employed as the nonlinear kernel. The mentioned faults were diagnosed by means of the decision function, whose parameters were taken from the training results. The above method was applied to an X-Y motion platform, where data acquisition, training of the support vector machine and fault diagnosis were carried out. The results validate the feasibility of the SVM method.",2009,0, 6195,Notice of Retraction
The study of expert system in rolling bearing faults diagnosis,"Notice of Retraction

After careful and considered review of the content of this paper by a duly constituted expert committee, this paper has been found to be in violation of IEEE's Publication Principles.

We hereby retract the content of this paper. Reasonable effort should be made to remove all past references to this paper.

The presenting author of this paper has the option to appeal this decision by contacting TPII@ieee.org.

In this paper, rolling bearing faults are diagnosed using an expert system. The failure phenomena and causes of the relevant faults, obtained from experienced maintenance staff by seeing, hearing, touching and measuring, are entered into the knowledge base. Meanwhile, the knowledge base and inference engine have been written based on CLIPS, which provides only a text environment; it has an unfriendly human-computer interface and lacks the ability to develop one. VC++ is adopted to make up for this deficiency by dynamically linking CLIPS to VC++. This combination of software can produce an expert system with efficient operation and friendly human-computer interaction. Rolling element bearing faults can be quickly found and eliminated, and the efficiency of the expert system is greatly enhanced.",2010,0, 6196,Notice of Retraction
Fault diagnosis in cracked rotor based on fractal box counting dimension,"Notice of Retraction

After careful and considered review of the content of this paper by a duly constituted expert committee, this paper has been found to be in violation of IEEE's Publication Principles.

We hereby retract the content of this paper. Reasonable effort should be made to remove all past references to this paper.

The presenting author of this paper has the option to appeal this decision by contacting TPII@ieee.org.

The vibrations of a large rotating machinery rotor system with a crack-free shaft and with cracked shafts were simulated by experiments on a rotor experiment platform. According to the fractal characteristics of the vibration signals of the cracked rotor and the crack-free rotor shaft, the fractal box counting dimensions of the test data were calculated. The orbit of the crack-free shaft appears elliptic, and the corresponding fractal box counting dimension is small. The orbit of a cracked shaft becomes more complex, and its corresponding fractal box counting dimension is larger than that of the crack-free shaft. A vertical crack has less impact on the vibration of the rotor, while horizontal cracks have an obvious impact. Therefore, the fractal box counting dimension can be used for the fault diagnosis of cracked rotors.",2010,0, 6197,Notice of Retraction
A research on atlas of slope-correction derived from DEM with different spatial resolutions,"Notice of Retraction

After careful and considered review of the content of this paper by a duly constituted expert committee, this paper has been found to be in violation of IEEE's Publication Principles.

We hereby retract the content of this paper. Reasonable effort should be made to remove all past references to this paper.

The presenting author of this paper has the option to appeal this decision by contacting TPII@ieee.org.

Three typical landforms, the Liaohe plain, the low mountain hills of central Shandong, and the alpine region of southwest Sichuan, were selected as study areas, and the national 1:50000 DEM was chosen as the data source; a DEM of 100 m resolution was obtained at every area by resampling the 5 m DEM. The slopes extracted from the DEMs with different resolutions were classified and superposed to obtain the atlas of slope correction for the different terrains. Then the accuracy of the atlas of slope correction was evaluated using chosen testing areas. The result shows that there are large differences among the atlases of slope correction of different areas, but for a specific landform there will be a good result when the slope extracted from the low-resolution DEM is transformed by the atlas of slope correction representative of the given landform.",2010,0, 6198,"Notice of Violation of IEEE Publication Principles
Design, analysis and performance evaluation of a new algorithm for developing a fault tolerant distributed system","Notice of Violation of IEEE Publication Principles

""Design, Analysis and Performance Evaluation of a New Algorithm for Developing a Fault Tolerant Distributed System""
by Umasankar Malladi
in the Proceedings of the 12th International Conference on Parallel and Distributed Systems

After careful and considered review of the content and authorship of this paper by a duly constituted expert committee, this paper has been found to be in violation of IEEE's Publication Principles.

This paper contains significant duplication of original text from the paper cited below. The original text was copied without attribution (including appropriate references to the original author(s) and/or paper title) and without permission.

Due to the nature of this violation, reasonable effort should be made to remove all past references to this paper, and future references should be made to the following article:

""Design, Analysis and Performance Evaluation of a New Algorithm for Developing a Fault Tolerant Distributed System""
by Ch.D.V. Subba Rao (Original Author)
in Technical Report CS-SWlab-2004-05/03, Department of Computer Science and
Engineering, Sri Venkateswara University College of Engineering, Tirupati, India

Checkpointing and message logging are among the most popular and general-purpose methods for providing fault tolerance in distributed systems. Several variations of their basic schemes have been reported in the literature. The majority of coordinated checkpointing algorithms have not addressed the treatment of lost messages, and schemes that consider the improvement of several or all performance factors are very rare. We addressed these issues by developing a new and efficient coordinated checkpointing protocol combined with limited sender-based pessimistic message logging. The significant contribution of our scheme is that it never creates lost messages. The term limited message logging implies that ours is a periodic checkpointing strategy where the checkpoints and logging of messages take place only within a specified interval (called the critical interval, CI). Hence it minimizes checkpoint overhead, rollback distance, message logging and even recovery overheads. Output commit latency is also reduced to a considerable extent. Further, while logging the messages, the processes need not be blocked in this scheme. Performance measurement results obtained from our simulations indicate that the proposed strategy outperforms the existing standard techniques: independent checkpointing, pure sender-based pessimistic message logging, and optimistic message logging. Another merit of our protocol is that it is hardware independent and hence it can be implemented in multi-computer systems irrespective of the architecture, interconnection and routing strategy",2006,0, 6199,Notice of Violation of IEEE Publication Principles
Exploring Fault-tolerant Distributed Storage System using GE code,"Notice of Violation of IEEE Publication Principles

""Exploring Fault-tolerant Distributed Storage System using GE code""
by Zheng Chen, Xiaojing Wang, Yili Jin, Honglei Zhou
in the Proceedings of the 2008 International Conference on Embedded Software and Systems, July 2008, pp. 142-148

After careful and considered review of the content and authorship of this paper by a duly constituted expert committee, this paper has been found to be in violation of IEEE's Publication Principles.

This paper contains portions of original text from the paper cited below. The original text was copied without attribution (including appropriate references to the original author(s) and/or paper title) and without permission.

""Exploring High Performance Distributed File Storage using LDPC Codes""
by Benjamin Gaidioz, Birger Koblitz and Nuno Santos
in Parallel Computing, Vol 33, Issue 4-5, Elsevier, May 2007, pp. 264-274

On the basis of introducing the GE code, we explore the feasibility of implementing a reliable distributed storage system with high performance and low cost, which is very suitable for embedded systems. Files are distributed across storage nodes using erasure coding with the GE code, which provides high reliability with small storage and performance overhead. We present measurements performed on a series of prototype systems, and the test results corroborate our expectations.",2008,0, 6200,Notice of Violation of IEEE Publication Principles
Using Current Signature Analysis Technology to Reliably Detect Cage Winding Defects in Squirrel-Cage Induction Motors,"Notice of Violation of IEEE Publication Principles

""Using Current Signature Analysis Technology to Reliably Detect Cage Winding Defects in Squirrel-Cage Induction Motors""
by I.M. Culbert and W. Rhodes
in the IEEE Transactions on Industry Applications, Vol. 43, No. 2, March/April 2007

After careful and considered review of the content and authorship of this paper by a duly constituted expert committee, this paper has been found to be in violation of IEEE's Publication Principles.

This paper contains portions of original text from the papers cited below. The original text was copied without attribution and without permission.

Figure 5:
""Development of a Tool to Detect Faults in Induction Motors via Current Signature Analysis"",
by M. Fenger, B. A. Lloyd, and W. T. Thomson
in the Proceedings of the IEEE-IAS/PCA Cement Industry Conference, Dallas, TX, May 4-9, 2003, pp. 37-46.

Equation 3:
""Case Histories of Rotor Winding fault diagnosis in induction motors"",
by W. T. Thomson, and D. Rankin
in the Proceedings of the International Conference on Condition Monitoring, University College of Swansea, March 31-April 3, 1987, pp. 798-819.

This paper will demonstrate, through industrial case histories, the application of current signature analysis (CSA) technology to reliably diagnose rotor winding problems in squirrel-cage motors. Many traditional CSA methods result in false alarms and/or misdiagnosis of healthy machines due to the presence of current components in the broken cage winding frequency domain, which are not the result of such defects. Such components can result from operating conditions, motor design, and drive components such as mechanical load fluctuations, speed-reducing gearboxes, etc. Due to theoretical advancements, it is now possible to predict many of these current components, thus making CSA testing less error prone and therefore a much more reliable technology. Reliable detection of the inception of broken cage winding problems, or broken rotor bars, prior to failure allows for remedial actions to be taken to avoid significant costs associated with consequential motor component damage and unplanned downtime associated with such in-service failures",2007,0, 6201,Full probe-correction for near-field antenna measurements,"Nowadays antenna measurements are often carried out exploiting the benefits of near-field transformation techniques. The transformation algorithms usually applied require near-fields to be measured in two polarisations in order to calculate an equivalent spherical multipole representation of the antenna under test (AUT). Although it is desirable to use custom probes for various applications, especially in research, custom probes generally do not satisfy the first-order requirement. On the other hand, it is well known that treating custom probes as first-order probes will lead to significant errors when calculating the far field from these insufficiently corrected near-field measurements. In this paper it is shown that this error is even worse when dealing with near-field to near-field transformations as required for antenna diagnostics. However, the impact on the quality of the transformed near-field data is significantly reduced when applying our proposed probe correction algorithm. The results of the near-field transformations therefore provide significantly improved far-fields as well as an adequate basis for calculating the electromagnetic fields in the proximity of the AUT for diagnostic purposes",2006,0, 6202,Intelligent fault detection and diagnostics system on rule-based neural network approach,"Modern industrial systems cannot exist without a fault detection and diagnostics subsystem. The creation of such a subsystem is a challenging task, often more difficult than the creation of the rest of the system. This paper provides an approach for building a fault detection and diagnostics system based on artificial neural networks and an automatic training method for such systems, and investigates different aspects of this method.",2009,0, 6203,Faulted phase selection on double circuit transmission line using wavelet transform and neural network,"Modern numerical relays often incorporate the logic for a combined single- and three-phase auto-reclosing scheme; single-phase-to-earth faults initiate single-phase tripping and reclosure, and all other faults initiate three-phase tripping and reclosure. Accurate faulted phase selection is required for such a scheme. This paper presents a novel scheme for the detection and classification of faults on a double circuit transmission line. The proposed approach uses a combination of the wavelet transform and a neural network to solve the problem.
While the wavelet transform is a powerful mathematical tool which can be employed as a fast and very effective means of analyzing power system transient signals, an artificial neural network has the ability to classify non-linear relationships between measured signals by identifying the different patterns of the associated signals. The proposed algorithm consists of time-frequency analysis of fault-generated transients using the wavelet transform, followed by pattern recognition using an artificial neural network to identify the faulted phase. MATLAB/Simulink software is used to generate fault signals and verify the correctness of the algorithm. The adaptive discrimination scheme is tested by simulating different types of fault, and varying the fault resistance, fault location and fault inception time, on a given power system model. The simulation results show that the proposed phase selector scheme is able to identify the faulted phase on the double circuit transmission line rapidly and correctly.",2009,0, 6204,Context-Sensitive Error Correction: Using Topic Models to Improve OCR,"Modern optical character recognition software relies on human interaction to correct misrecognized characters. Even though the software often reliably identifies low-confidence output, the simple language and vocabulary models employed are insufficient to automatically correct mistakes. This paper demonstrates that topic models, which automatically detect and represent an article's semantic context, reduce error by 7% over a global word distribution in a simulated OCR correction task. Detecting and leveraging context in this manner is an important step towards improving OCR.",2007,0, 6205,Ground fault detection in multiple source solidly grounded systems via the single-processor concept for circuit protection,"Modern power distribution systems often contain multiple power sources integrated within one system. A simple variant may be the common double-ended substation, which can be further complicated by additional emergency or alternate sources. This presents significant complexity to the designer attempting to design a protection system that will not be fooled by circulating neutral currents. The authors describe how the ""single processor concept for circuit breaker protection and control"" provides ways to address this sensing problem. One such method is described in further detail. Handling of resistance-grounded systems will be the subject of a related paper",2005,0, 6206,Ground-fault detection in multiple source solidly grounded systems via the single-processor concept for circuit protection,"Modern power distribution systems often contain multiple power sources integrated within one system. A simple variant may be the common double-ended substation, which can be further complicated by additional emergency or alternate sources. This presents significant complexity to the designer attempting to design a protection system that will not be fooled by circulating neutral currents. The authors describe how the ""single processor concept for circuit breaker protection and control"" (M. E. Valdes et al., Proc. PCIC, 2003, pp. 267-275) provides ways to address this sensing problem. One such method is described in further detail.
Handling of resistance-grounded systems will be the subject of a related paper.",2006,0,6205 6207,Analysis of the influence of processor hidden registers on the accuracy of fault injection techniques,"Modern processors tend to increase the number of registers, some of which are not accessible through the instruction set. Traditionally, the effect of faults in these hidden registers has not been considered during system validation using fault injection. In this paper, a study of the importance of faults in hidden registers is performed. Firstly, we have analysed the sensitivity of hidden registers to faults in combinational logic. In a second phase, we have analysed the impact of faults occurring in hidden registers on system behaviour. A broad set of permanent and transient faults have been injected into the models of two typical commercial microcontrollers, using a VHDL-based fault injection tool developed by our research group. The results obtained indicate that the incidence of hidden registers is not negligible, and in some cases is even notable. This fact suggests that widely used fault injection techniques such as SWIFI may not be enough to perform a full and representative validation of modern processors, and it would be necessary to complement them with other fault injection techniques that have a higher degree of accessibility.",2004,0, 6208,An automatic technique for optimizing Reed-Solomon codes to improve fault tolerance in memories,Modern SoC architectures manufactured at ever-decreasing geometries use multiple embedded memories. Error detection and correction codes are becoming increasingly important to improve the fault tolerance of embedded memories. This article focuses on automatically optimizing classical Reed-Solomon codes by selecting the appropriate code polynomial and set of used symbols.,2005,0, 6209,An Online Model Checking Tool for Safety and Liveness Bugs,"Modern software model checkers are usually used to find safety violations. However, checking liveness properties can offer a more natural and effective way to detect errors, particularly in complex concurrent and distributed e-business systems. Specifying global liveness properties which should always eventually be true proves to be more desirable, but it is hard for existing software model checkers to verify liveness in real code because doing so requires finding an infinite execution. To address this challenge, this paper proposes an online checking tool to verify the safety and liveness properties of complex systems. We adopt linear temporal logic to describe the semantics of the finite model checking, use binary instrumentation to obtain the distributed states, and apply a checking engine to dynamically verify the finite-trace linear temporal logic properties. Finally, we demonstrate the method in a distributed system using the distributed protocol Paxos and achieve good results in experiments.",2008,0, 6210,Stateful tmr for transient faults,"Module redundancy is often used as a method of constructing a reliable system. TMR is used as a method of improving reliability by module redundancy. However, TMR cannot decide the correct result when two of the three modules fail. Therefore, we propose a new voting architecture termed Stateful TMR. It uses the result of TMR and the state history to select the most reliable module.
Through simulation, we evaluate the reliability of a module with TMR and with Stateful TMR; Stateful TMR obtained higher reliability than TMR in both failure cases.",2010,0, 6211,Fault Detection System Using Directional Classified Rule-Based in AHU,"Monitoring systems used at present to operate air handling units (AHU) optimally do not have a function that enables faults, such as failures of operating plant or falling performance, to be detected properly, so they are unable to manage faults rapidly and operate optimally. In this paper, we have developed a directional classified rule-based fault detection system which can be used in AHU systems. In order to test this algorithm, it was applied to an AHU system installed inside an environment chamber (EC); its practical effect was verified, and its applicability to the related field in the future was confirmed.",2007,0, 6212,Supporting software evolution analysis with historical dependencies and defect information,"More than 90% of the cost of software is due to maintenance and evolution. Understanding the evolution of large software systems is a complex problem, which requires the use of various techniques and the support of tools. Several software evolution approaches put the emphasis on structural entities such as packages, classes and structural relationships. However, software evolution is not only about the history of software artifacts; it also includes other types of data such as problem reports, mailing list archives etc. We propose an approach which focuses on historical dependencies and defects. We claim that they play an important role in software evolution and that they are complementary to techniques based on structural information. We use historical dependencies and defect information to learn about a software system and detect potential problems in the source code. Moreover, based on design flaws detected in the source code, we predict the location of future bugs to focus maintenance activities on the buggy parts of the system. We validated our defect prediction by comparing it with the actual defects reported in the bug tracking system.",2008,0, 6213,Evolutionary fault recovery in a Virtex FPGA using a representation that incorporates routing,"Most evolutionary approaches to fault recovery in FPGAs focus on evolving alternative logic configurations as opposed to evolving the intra-cell routing. Since the majority of transistors in a typical FPGA are dedicated to interconnect, nearly 80% according to one estimate, evolutionary fault-recovery systems should benefit by accommodating routing. In this paper, we propose an evolutionary fault-recovery system employing a genetic representation that takes into account both logic and routing configurations. Experiments were run using a software model of the Xilinx Virtex FPGA. We report that using four Virtex combinational logic blocks, we were able to evolve a 100% accurate quadrature decoder finite state machine in the presence of a stuck-at-zero fault. Evolutionary experiments with the hardware in the loop have begun and we discuss the preliminary results.",2003,0, 6214,SWPM: An Incremental Fault Localization Algorithm Based on Sliding Window with Preprocessing Mechanism,"Most fault localization techniques are based on time windows. The sizes of the time windows greatly impact the accuracy of fault localization.
This paper takes a weighted bipartite graph as the fault propagation model and proposes a heuristic fault localization approach based on a sliding window with a preprocessing mechanism (SWPM) to alleviate these shortcomings. First, SWPM defines the concept of the symptom extension ratio and partitions the observed symptoms into three segments: the analyzed segment, the analyzing segment and the preprocessing segment. Then it determines the most probable fault set by incrementally computing the Bayesian suspected degree (BSD) of the three segments and combining their results. Simulations show that the algorithm can reduce the impact of improper window sizes on accuracy. The algorithm, which has polynomial computational complexity, can be applied to large-scale communication networks.",2008,0, 6215,On Signal Tracing for Debugging Speedpath-Related Electrical Errors in Post-Silicon Validation,"One of the most challenging problems in post-silicon validation is to identify those errors that cause prohibitive extra delay on speed paths in the circuit under debug (CUD) and only expose themselves in a certain electrical environment. To address this problem, we propose a trace-based silicon debug solution, which provides real-time visibility into the speed paths in the CUD during normal operation. Since tracing all speedpath-related signals can cause prohibitive design-for-debug (DfD) overhead, we present an automated trace signal selection methodology that maximizes the error detection probability under a given constraint. In addition, we develop a novel trace qualification technique that reduces the storage requirement in trace buffers. The effectiveness of the proposed methodology is verified with large benchmark circuits.",2010,0, 6216,FAIL-MPI: How Fault-Tolerant Is Fault-Tolerant MPI?,"One of the topics of paramount importance in the development of cluster and grid middleware is the impact of faults, since their occurrence in grid infrastructures and in large-scale distributed systems is common. MPI (message passing interface) is a popular abstraction for programming distributed and parallel applications. FAIL (FAult Injection Language) is an abstract language for describing fault occurrences, capable of expressing complex and realistic fault scenarios. In this paper, we investigate the possibility of using FAIL to inject faults into a fault-tolerant MPI implementation. Our middleware, FAIL-MPI, is used to carry out quantitative and qualitative fault and stress testing",2006,0, 6217,Bug busters,"One way to deal with bugs is to avoid them entirely. That approach would be wasteful because we'd be underutilizing the many automated tools and techniques that can catch bugs for us. Most tools for eliminating bugs work by tightening the specifications of what we build. At the program code level, tighter specifications affect the operations allowed on various data types, our program's behavior, and our code's style. Furthermore, we can use many different approaches to verify that our code is on track: the programming language, its compiler, specialized tools, libraries, and embedded tests are our most obvious friends. We can delegate bug busting to code. Many libraries come with hooks or specialized builds that can catch questionable argument values, resource leaks, and wrong ordering of function calls. Bugs may be a fact of life, but they're not inevitable.
We have some powerful tools to find them before they mess with our programs, and the good news is that these tools get better every year.",2006,0, 6218,Tracking the elusive online help topic. Organizing the review process with defect-tracking software,"Online help systems consist of hundreds of help topics. Keeping track of reviewers' comments about each help topic requires a database to do the job effectively. Rather than develop such a database from scratch, it may be possible to adapt the defect-tracking software already in use in the QA department to this task. This paper describes how technical writers can adapt defect-tracking software to organize the online help review process",2001,0, 6219,Online monitoring and fault diagnosis system of STATCOM,"The online monitoring and fault diagnosis system (OMFDS) for electrical devices is one of the main research directions in the electric power field. It is also one of the key technologies for settling the decision problems of reliability, safety and maintenance. STATCOM is a kind of large Flexible AC Transmission System device which is used more and more widely. As its cost and complexity are high, it is especially important to build an OMFDS for STATCOM. However, as it is not a hot research topic, little related research work exists at present. This paper states the basic theory of OMFDS, emphasizes three kinds of key technology, analyzes the particular characteristics of applying OMFDS to STATCOM, and summarizes the common fault types of STATCOM, such as power capacitor faults on the DC side, bridge arm shoot-through faults, power electronics switch device faults and so on. Finally it introduces the basic configuration and construction of the hardware and software of the OMFDS for STATCOM.",2009,0, 6220,Broadband measurements of nanofiber devices: Repeatability and random error analysis,"On-wafer, broadband measurements of two-port nanofiber devices were made in order to test the short-term repeatability of a widely used measurement approach that builds on established on-wafer calibration techniques. The test devices used in this study consist of Pt nanowire and Au microbridge structures incorporated into two-port coplanar waveguides. Based on repeated measurements of these test structures, we computed statistical (Type A) uncertainties. The standard deviation (k=1) of five repeated measurements of a Pt nanowire device was less than 50 µS. The analysis suggests refinements to the measurement process depending on the desired output of the measurements, e.g. the broadband response itself or the extraction of circuit model parameters.",2010,0, 6221,A Framework for Error-Tolerant Scheme in Pervasive Computing,"Open and dynamic pervasive computing environments make fault detection and error recovery challenging. In this paper, we present a policy-based framework for an error-tolerant scheme in pervasive computing. As a policy-based scheme can separate the business logic (rules) from the controls (programming code) of the implementation, it is typically more flexible and adaptable than a non-policy-based approach. Moreover, the policy mechanism is based not only on event-condition-action rules, but also on the separation-of-concerns principle, making it easily extensible to support the error recovery scheme.",2009,0, 6222,Fault tolerance in scalable agent support systems: integrating DARX in the AgentScape framework,"Open multi-agent systems need to cope with the characteristics of the Internet, e.g., dynamic availability of computational resources, latency, and diversity of services.
Large-scale multi-agent systems employed on wide-area distributed systems are susceptible to both hardware and software failures. This paper describes AgentScape, a multi-agent system support environment, DARX, a framework for providing fault tolerance in large scale agent systems, and a design for the integration of the two.",2003,0, 6223,Finding Error Handling Bugs in OpenSSL Using Coccinelle,"OpenSSL is a library providing various functionalities relating to secure network communication. Detecting and fixing bugs in OpenSSL code is thus essential, particularly when such bugs can lead to malicious attacks. In previous work, we have proposed a methodology for finding API usage protocols in Linux kernel code using the program matching and transformation engine Coccinelle. In this work, we report on our experience in applying this methodology to OpenSSL, focusing on API usage protocols related to error handling. We have detected over 30 bugs in a recent OpenSSL snapshot, and in many cases it was possible to correct the bugs automatically. Our patches correcting these bugs have been accepted by the OpenSSL developers. This work furthermore confirms the applicability of our methodology to user-level code.",2010,0, 6224,Simulating Open-Via Defects,Open-via defects are a major systematic failure mechanism in nanoscale manufacturing processes. We present a flow for simulating open-via defects. Electrical parameters are extracted from the layout and technology data and represented in a way which allows efficient simulation on gate level. The simulator takes oscillation caused by open-via defects into account and quantifies its impact on defect coverage. The flow can be employed for manufacturing test as well as for defect diagnosis.,2007,0, 6225,Finite Element Analysis of Thermal and Mechanical Behaviors in Fault Current Limiter Model With QMG Bulk Superconductors,"Operations of superconducting fault current limiters proposed by our group are numerically simulated with commercial finite element software. In order to reproduce experimental results with Y-based QMG bulk superconductors, the distributions of current and temperature inside the limiting devices are numerically solved at each time step for a half cycle of overcurrent. By using the obtained distributions, the analysis of internal stresses is also carried out to validate the structure of the metal reinforcement. Furthermore, the current limiting behavior for a voltage source is evaluated to simulate a realistic operation in actual power transmission lines",2006,0, 6226,Optimized routing for fault management in optical burst-switched WDM networks,"Optical burst switching (OBS) is a promising technique for supporting high-capacity, bursty data traffic over optical wavelength-division-multiplexed (WDM) networks. An optical link failure may result in a huge amount of data (and revenue) loss, and it has been an important survivability concern in optical networks. In this paper, we study the fault-management issues related to a link failure in an OBS network. We propose to use pre-planned global rerouting to balance network load and to reroute bursts after a link fails. We apply optimization techniques to pre-plan explicit backup routes for failure scenarios. Our objective is to achieve optimal load balancing both before and after a failure such that the network state can still remain stable with minimum burst-loss probability when a failure occurs.
We apply the pre-planned normal and backup routing tables to an OBS network, and study the network performance after a failure occurs using illustrative numerical examples. The results show that the average burst-loss probability can be significantly reduced by 60% - from an average of 0.10 to 0.04 (when the normalized link load is less than 0.5) - using globally-rerouted backup routes, when compared with the scheme without global rerouting. We also observe that the burst-loss probability is reduced by 43% - from an average of 0.07 to 0.04 (when the link load is less than 0.5) - if the rerouting is done using optimization techniques, when compared with shortest-path routing.",2007,0, 6227,Adaptive Replication Based Security Aware and Fault Tolerant Job Scheduling for Grids,"Most of the existing job scheduling algorithms for grids have ignored the security problem, with a handful of exceptions. Moreover, existing algorithms using a fixed number of job replications will consume excessive resources when the grid security level changes dynamically. In this paper, a security aware and fault tolerant scheduling (SAFTS) algorithm based on adaptive replication is proposed, which schedules jobs by matching the user security demand and resource trust level, and in which the number of job replications changes adaptively with the dynamics of grid security. In experiments on an RSBSME (remote sensing based soil moisture extraction) workload in a real grid environment, the average job scheduling success rate is 97%, and average grid utilization is 74%. Experiment results show that the performance of SAFTS is better than non-security-aware and fixed-number job replication scheduling algorithms, and that SAFTS is fault-tolerant and scalable.",2007,0, 6228,Detection and Correction Process Modeling Considering the Time Dependency,"Most of the models for software reliability analysis are based on reliability growth models which deal with the fault detection process only. In this paper, some useful approaches to the modeling of both software fault detection and fault correction processes are discussed. Since the estimation of model parameters in software testing is essential to give accurate prediction and help make the right decision about software release, the problem of estimating the parameters is addressed. Taking into account the dependency between the fault correction process and the fault detection process, a new explicit formula for the likelihood function is derived and the maximum likelihood estimates are obtained under various time delay assumptions. An actual set of data from a software development project is used as an illustrative example. A Monte Carlo simulation is carried out to compare the predictive capability of the LSE method and the MLE method",2006,0, 6229,Defect-based reliability analysis for mission-critical software,"Most software reliability methods have been developed to predict the reliability of a program using only data gathered during the testing and validation of a specific program. Hence, the confidence that can be attained in the reliability estimate is limited since practical resource constraints can result in a statistically small sample set. One exception is the Orthogonal Defect Classification (ODC) method, which uses data gathered from several projects to track the reliability of a new program. Combining ODC with root-cause analysis can be useful in many applications where it is important to know the reliability of a program for a specific type of a fault.
By focusing on specific classes of defects, it becomes possible to (a) construct a detailed model of the defect and (b) use data from a large number of programs. In this paper, we develop one such approach and demonstrate its application to modeling Y2K defects",2000,0, 6230,Multiple acquisition frame-based motion correction for awake monkey PET imaging,"Motion correction for PET brain imaging of awake non-human primates (NHP) is necessary if head fixation is not used. Previous studies have used multiple acquisition frame-based motion correction methods for human and small animal studies. Due to rapid motions, this method was extended to awake NHP imaging with the addition of attenuation correction and re-grouping of sub-frames in the algorithm to achieve qualitative and quantitative image quality comparable to an anesthetized scan. Motion data were acquired with the Vicra system with a synchronization technique that improves the time precision between the list-mode data and the motion data. Using the motion data, the raw list-mode data were framed based on an intra-frame motion threshold (IFMT), above which a new sub-frame begins. Also, a minimal-frame duration threshold was set to eliminate undesirable short sub-frames. Each sub-frame was first reconstructed without attenuation correction, transformed to a reference position, and to this animal's MR template. A transmission image from an anesthetized scan of the same subject was resliced to the reference orientation of the current study, via reslicing to the same MR template, and then transformed to the orientation of each sub-frame. Finally, each sub-frame was re-reconstructed with attenuation correction, and re-grouped into standard time frames. Our work shows that this algorithm is able to generate image quality and quantitative results comparable to those of an anesthetized study.",2010,0, 6231,Neural-network-based motor rolling bearing fault diagnosis,"Motor systems are very important in modern society. They convert almost 60% of the electricity produced in the US into other forms of energy to provide power to other equipment. In the performance of all motor systems, bearings play an important role. Many problems arising in motor operations are linked to bearing faults. In many cases, the accuracy of the instruments and devices used to monitor and control the motor system is highly dependent on the dynamic performance of the motor bearings. Thus, fault diagnosis of a motor system is inseparably related to the diagnosis of the bearing assembly. In this paper, bearing vibration frequency features are discussed for motor bearing fault diagnosis. This paper then presents an approach for motor rolling bearing fault diagnosis using neural networks and time/frequency-domain bearing vibration analysis. Vibration simulation is used to assist in the design of various motor rolling bearing fault diagnosis strategies. Both simulation and real-world testing results obtained indicate that neural networks can be effective agents in the diagnosis of various motor bearing faults through the measurement and interpretation of motor bearing vibration signatures",2000,0, 6232,"MPI/FT: architecture and taxonomies for fault-tolerant, message-passing middleware for performance-portable parallel computing","MPI has proven effective for parallel applications in situations with neither QoS nor fault handling. Emerging environments motivate fault-tolerant MPI middleware. Environments include space-based, wide-area/web/meta computing and scalable clusters.
MPI/FT, the system described in the paper, trades off sufficient MPI fault coverage against acceptable parallel performance, based on mission requirements and constraints. MPI codes are evolved to use MPI/FT features. Non-portable code for event handlers and recovery management is isolated. User-coordinated recovery, checkpointing, transparency and event handling, as well as evolvability of legacy MPI codes, form key design criteria. Parallel self-checking threads address four levels of MPI implementation robustness, three of which are portable to any multithreaded MPI. A taxonomy of application types provides six initial fault-relevant models; user-transparent parallel nMR computation is thereby considered. Key concepts from MPI/RT (real-time MPI) are also incorporated into MPI/FT, with further overt support for MPI/RT and MPI/FT in applications possible in the future",2001,0, 6233,Decentralized architecture for fault tolerant multi agent system,"Multi-agent systems (MAS) are expected to be involved in futuristic technologies. Agents require an execution environment in which they can publish their service interfaces and provide services to other agents. Such an execution environment is called an agent platform (AP). From a technical point of view, any abnormal behavior of the platform can distress agents residing on it. That is why it is necessary to provide a suitable architecture for the AP, one which provides not only fault tolerance but also scalability. There also exist some management components within the platform, which provide services to application agents. All the agents within a MAS are managed by the agent management system (AMS), which is the mandatory supervisory authority of any AP. To be more scalable, a single agent platform can be distributed over several machines, which provides not only load balancing but also fault tolerance, depending upon the distributed architecture of the AP. In existing systems, the AMS is centralized, i.e. it exists on one machine. With a centralized AMS, this infrastructure lacks fault tolerance, which is a key feature of high assurance. The absence of fault tolerance is the main reason for the small number of deployments of MAS. Failure of the AMS leads to abnormal behavior in the distributed platform. This paper proposes the virtual agent cluster (VAC) paradigm, which strongly supports a decentralized AMS to achieve fault tolerance in a distributed AP. VAC provides fault tolerance by using separate communication layers among different machines. Experiments show that it improves performance, brings autonomy and supports fault recovery along with load balancing in a distributed AP.",2005,0, 6234,Dependability consequences of fault-tolerant technique integrated in stack processor emulator using information flow approach,"Nowadays, electronic systems are becoming increasingly attractive for many applications. Such systems should be more and more dependable and require the evaluation and improvement of their dependability parameters. In the continuity of the CETIM project [1], the principal objective is to define an integrated design of dependable mechatronic systems. In addition, the presence of programmable electronics imposes the existence of hardware/software interactions for the evaluation of dependability parameters. In this work we apply the information flow approach [2] to evaluate some dependability parameters of a stack processor architecture in order to make adjustments during the co-design step.
VHDL-RTL modeling of the processor instruction set is done in order to carry out the information flow modeling. The probability of being in different functional modes is estimated and discussed for two instructions. The first study concerns the instruction models without taking a fault-tolerant method into account. The second study concerns the same models but with this fault-tolerant method taken into account.",2008,0, 6235,Fault diagnosis using neural-fuzzy technique based on the simulation results of stator faults for a three-phase induction motor drive system,"Nowadays, induction machines are known as the workhorse of industry and play an important role in manufacturing environments, mainly due to their low cost, reasonably small size, ruggedness, low maintenance, and operation with an easily available power supply. Therefore, diagnostic technology for this type of machine has received considerable attention from both industry and academia. Several studies show that approximately 30-40% of induction machine faults are stator faults. The fault diagnosis of electrical machines has progressed in recent years from traditional to artificial intelligence (AI) techniques. This paper first presents a general review of the principles of AI-based diagnostic methods. It covers recent developments and system structures of expert system (ES), artificial neural network (ANN), fuzzy logic system (FLS), and combined (e.g., neural-fuzzy) fault diagnostic strategies. Finally, a neural-fuzzy technique is used in this paper to perform stator fault diagnosis for an induction machine. The simulation results verify the proposed technique",2005,0, 6236,Concurrent and simple digital controller of an AC/DC converter with power factor correction,"Nowadays, most digital controls for power converters are based on DSPs. This paper presents a field programmable gate array (FPGA) based digital control for a power factor correction (PFC) flyback AC/DC converter. The main difference is that FPGAs allow concurrent operation (simultaneous execution of all control procedures), enabling high performance and novel control methods. The control algorithm has been developed using a hardware description language (VHDL), which provides great flexibility and technology independence. The algorithm has been designed as simply as possible while maintaining good accuracy and dynamic response. Simulations and experimental results show the feasibility of the method",2002,0, 6237,Transmission line fault location estimation by Fourier & wavelet transforms using ANN,"Nowadays, power supply has become a business commodity. The quality and reliability of power needs to be maintained in order to obtain optimum performance. Therefore, it is extremely important that transmission line faults from various sources be identified accurately and reliably, and be corrected as soon as possible. In this paper, a comparative study of the performance of Fourier transform and wavelet transform based methods combined with a Neural Network (NN) for location estimation of faults on high voltage transmission lines is presented. A new location method is proposed for decreasing the training time and dimensions of the NN. The proposed algorithms are based on Fourier transform analysis of the fundamental frequency of current and voltage signals in the event of a short circuit on a transmission line.
Similar analysis is performed on transient current and voltage signals using the multi-resolution Daubechies-9 wavelet transform, and comparative characteristics of the two methods are discussed.",2010,0, 6238,Image quality assessment: from error visibility to structural similarity,"Objective methods for assessing perceptual image quality traditionally attempted to quantify the visibility of errors (differences) between a distorted image and a reference image using a variety of known properties of the human visual system. Under the assumption that human visual perception is highly adapted for extracting structural information from a scene, we introduce an alternative complementary framework for quality assessment based on the degradation of structural information. As a specific example of this concept, we develop a structural similarity index and demonstrate its promise through a set of intuitive examples, as well as comparison to both subjective ratings and state-of-the-art objective methods on a database of images compressed with JPEG and JPEG2000. A MATLAB implementation of the proposed algorithm is available online at http://www.cns.nyu.edu/lcv/ssim/.",2004,0, 6239,"Atmospheric correction and oceanic constituents retrieval, with a neuro-variational method","Ocean color sensors on board satellites measure the solar radiation reflected by the ocean and the atmosphere. This information, denoted reflectance, is affected for about 90% by air molecules and aerosols in the atmosphere and for only 10% by water molecules and phytoplankton cells in the ocean. Our method focuses on chlorophyll-a concentration (chl-a) retrieval, which is commonly used as a proxy for phytoplankton concentration. Our algorithm, denoted NeuroVaria, computes relevant atmospheric (Angstrom coefficient, optical thickness, single-scattering albedo) and oceanic parameters (chl-a, oceanic particulate scattering) by minimizing the difference over the whole spectrum (visible + near infrared) between the observed reflectance and the reflectance computed from artificial neural networks that have been trained with a radiative transfer model. NeuroVaria has been applied to SeaWiFS (sea-viewing wide field-of-view sensor) imagery in the Mediterranean Sea. A comparison with in-situ measurements of the water-leaving reflectance shows that NeuroVaria reconstructs this component at 443 nm better than the standard SeaWiFS processing. This leads to an improvement in the retrieval of chl-a for the oligotrophic sea. This result is generalized to the entire Mediterranean Sea through weekly maps of chl-a.",2005,0, 6240,The Dangers of Failure Masking in Fault-Tolerant Software: Aspects of a Recent In-Flight Upset Event,"On 1 August 2005, a Boeing Company 777-200 aircraft, operating on an international passenger flight from Australia to Malaysia, was involved in a significant upset event while flying on autopilot. The Australian Transport Safety Bureau's investigation into the event discovered that ""an anomaly existed in the component software hierarchy that allowed inputs from a known faulty accelerometer to be processed by the air data inertial reference unit (ADIRU) and used by the primary flight computer, autopilot and other aircraft systems."" This anomaly had existed in the original ADIRU software, and had not been detected in the testing and certification process for the unit.
This paper describes the software aspects of the incident in detail, and suggests possible implications concerning complex, safety-critical, fault-tolerant software.",2007,0, 6241,Evaluating the Combined Effect of Vulnerabilities and Faults on Large Distributed Systems,"On large and complex distributed systems, hardware and software faults, as well as vulnerabilities, exhibit significant dependencies and interrelationships. Being able to assess their actual impact on the overall system dependability is especially important. The goal of this paper is to propose a unifying way of describing a complex hardware and software system, in order to assess the impact of both vulnerabilities and faults by means of the same underlying reasoning mechanism, built on a standard Prolog inference engine. Some preliminary experimental results show that a prototype tool based on these techniques is both feasible and able to achieve encouraging performance levels on several synthetic test cases.",2007,0, 6242,Improving Classification Efficiency of Orthogonal Defect Classification via a Bayesian Network Approach,"Orthogonal defect classification (ODC) is a defect analysis method invented by IBM. ODC classifies software defects by eight orthogonal attributes. By analyzing the distribution and trends of these attributes, information about the software process can be obtained. ODC has been used widely in many companies and organizations. In this paper, we focus on ODC records collected in a company and investigate how to use these data to provide guidance in actual defect management and to improve the efficiency of the classification. We study the relationships among these attributes and give a Bayesian network model; then, with the help of the ODC records we obtained, a Bayesian network for ODC is presented. It proves helpful in actual work for both developers and testers.",2009,0, 6243,Extraction of Error Detection Rules without Supervised Information from Log Files Using Automatically Defined Groups,"Our main aim is to extract multiple rules from log files in computer systems, to detect various levels of errors, and to report these errors or configuration mistakes to the system administrators automatically, in order to manage them without expert knowledge. To this end, we performed an extraction experiment on the log files of a system using Automatically Defined Groups (ADG), which is based on Genetic Programming. Moreover, we focused on the ""System State Pattern"" related to the difference between the normal daily state and the abnormal state in which some errors occur in the system. In this experiment, we tried to extract rules without any manually managed or supervised information, by using a simple translation technique: regular expressions. As a result, 50 agents in the best individual were divided into 16 groups from 322 log files. This means that 16 rules were acquired. We confirmed these rules could detect some errors, such as a DNS configuration error. We could also determine the importance of the rules, because rules with more agents tended to be adopted more frequently by the evolutionary computation.
Therefore, we consider that our method using ADG is useful for the diagnosis of computer systems, and helps administrators manage their systems without expert knowledge about them.",2006,0, 6244,Automatic synthesis and fault-tolerant experiments on an evolvable hardware platform,"Outer solar system exploration and missions to comets and planets with severe environmental conditions require long-term survivability of space systems. This challenge has recently been approached with new ideas, such as using mechanisms for hardware adaptation inspired from biology. The application of evolution-inspired formalisms to hardware design and self-configuration led to the concept of evolvable hardware (EHW). EHW refers to self-reconfiguration of electronic hardware by evolutionary/genetic reconfiguration mechanisms. The paper describes a fine-grained Field Programmable Transistor Array (FPTA) architecture for reconfigurable hardware, and its implementation on a VLSI chip. A first experiment illustrates automatic synthesis of electronic circuits through evolutionary design with the chip-in-the-loop. The chip is rapidly reconfigured to evaluate candidate circuit designs. A second, fault-tolerance experiment shows how evolutionary algorithms can recover functionality after being subjected to faults, by finding new circuit configurations that circumvent the faults",2000,0, 6245,Out-of-bounds array access fault model and automatic testing method study,"Out-of-bounds array access (OOB) is one of the fault models commonly employed in object-oriented programming languages. At present, the technology of code insertion and optimization is widely used to detect and fix this kind of fault. Although this method can examine some of the faults in OOB programs, it can neither test programs thoroughly nor locate faults accurately. The code insertion approach also makes the test procedures so inefficient that testing becomes costly and time-consuming. This paper uses a special static test technology to realize fault detection in OOB programs. We first establish the fault models for OOB programs, and then develop an automatic test tool to detect the faults. Several experiments have been performed, and the results show that the method proposed in the paper is efficient and feasible in practical applications.",2007,0, 6246,Supporting fault tolerance in a data-intensive computing middleware,"Over the last 2-3 years, the importance of data-intensive computing has increasingly been recognized, closely coupled with the emergence and popularity of map-reduce for developing this class of applications. Besides programmability and ease of parallelization, fault tolerance is clearly important for data-intensive applications, because of their long running nature, and because of the potential for using a large number of nodes for processing massive amounts of data. Fault-tolerance has been an important attribute of map-reduce as well in its Hadoop implementation, where it is based on replication of data in the file system. Two important goals in supporting fault-tolerance are low overheads and efficient recovery. With these goals, this paper describes a different approach for enabling data-intensive computing with fault-tolerance. Our approach is based on an API for developing data-intensive computations that is a variation of map-reduce, and it involves an explicit programmer-declared reduction object. We show how more efficient fault-tolerance support can be developed using this API.
Particularly, as the reduction object represents the state of the computation on a node, we can periodically cache the reduction object from every node at another location and use it to support failure recovery. We have extensively evaluated our approach using two data-intensive applications. Our results show that the overheads of our scheme are extremely low, and our system outperforms Hadoop both in the absence and in the presence of failures.",2010,0, 6247,Selection of optimal fault location algorithm,"Once a fault event occurs in a power system, different intelligent electronic devices (IEDs) automatically recognize the fault as an abnormality. With technological development, many IEDs available today are capable of recording, executing analysis automatically, and communicating results to different locations. Although recording capabilities have increased drastically, applications that would fully utilize the recorded data are still not available. In this paper, an automated fault location (FL) procedure and the usage of different intelligent algorithms are presented. Data is retrieved from various data sources and processed using an expert system, neural networks, and a genetic algorithm in order to provide data for optimal FL algorithm selection.",2008,0, 6248,Fan Fault Diagnosis System Based on Virtual Prototyping Technology,"A fault diagnosis system is proposed to monitor the fan system condition based on virtual prototyping technology. According to the real fan system structure and its foundation condition, a three-dimensional model is built. In the virtual environment, the components are assembled, and the constraints and driver are added. After validating the model, the typical mechanical faults of fans (unbalance, misalignment, etc.) are simulated. Through virtual sensors, the acquired fan data are used to establish the corresponding fan condition database. The real fan condition can be identified by comparing measured signals with the virtual data. Pattern recognition and early prediction are feasible for dynamic operation of the equipment during the process of an incipient fault. The contribution of this paper is to propose a new approach to equipment maintenance that makes the maintenance process ""low consumption and high efficiency"".",2008,0, 6249,A parallel and fault tolerant file system based on NFS servers,"One important piece of system software for clusters is the parallel file system. Current parallel file systems and parallel I/O libraries for clusters do not use standard servers, so it is very difficult to use these systems in heterogeneous environments. However, why use proprietary or special-purpose servers on the server end of a parallel file system when you have most of the necessary functionality in NFS servers already? This paper describes the fault tolerance implemented in Expand (Expandable Parallel File System), a parallel file system based on NFS servers. Expand allows the transparent use of multiple NFS servers as a single file system, providing a single name space. The different NFS servers are combined to create a distributed partition where files are striped. Expand requires no changes to the NFS server and uses RPC operations to provide parallel access to the same file. Expand is also independent of the clients, because all operations are implemented using RPC and the NFS protocol. Using this system, we can join heterogeneous servers (Linux, Solaris, Windows 2000, etc.) to provide a parallel and distributed partition. Fault tolerance is achieved using RAID techniques applied to parallel files.
The paper describes the design of Expand and the evaluation of a prototype of Expand using the MPI-IO interface. This evaluation has been made on Linux clusters and compares Expand with PVFS.",2003,0, 6250,Adaptive algorithm for error correction from sensor measurement,"One of the essential problems in sensor acquisition systems is the correction of measurements affected by nonsystematic errors. This paper continues earlier work on error correction and focuses on a digital signal-processing method for correcting temperature measurements taken inside a dam. The temperature measurements are made over long periods within the same year. The measurements from inside the dam depend on the ambient temperature of the environment, so the temperature evolution closely follows the ambient temperature variation. The purpose of this article is to interpret the temperature evolution as a signal and to use an adaptive filtering method for its correction. The systematic error is high-frequency noise that affects the sensor measurement. The adaptive filter processes two signals: the useful signal (the sensor measurement) and the reference signal (the environment temperature). The method obtains an estimate of the error and eliminates it from the sensor measurements. The method uses discrete-time signals; the advantage is that the adaptive algorithm can be implemented on digital programmable circuits such as microcontrollers and DSPs. The algorithm can be implemented in a data acquisition system, so the data correction can be done in real time.",2008,0, 6251,Evaluating the Performance of Adaptive Fault-Tolerant Routing Algorithms for Wormhole-Switched Mesh Interconnect Networks,"One of the fundamental problems in parallel computing is how to efficiently perform routing in a faulty network, each component of which fails with some probability. This paper presents a comparative performance study of ten prominent adaptive fault-tolerant routing algorithms in wormhole-switched 2D mesh interconnect networks. These networks carry a routing scheme suggested by Boppana and Chalasani as an instance of a fault-tolerant method. The suggested scheme is widely used in the literature to achieve high adaptivity and support inter-processor communications in parallel computer systems, due to its ability to preserve both communication performance and fault-tolerance demands in these networks. The performance measures studied are throughput, average message latency and average usage of virtual channels per node. Results obtained through simulation suggest two classes of the presented routing schemes as high-performance candidates in most faulty networks.",2007,0, 6252,Distribution system grounding impacts on fault responses,"One of the main concerns of utilities nowadays is grounding of distribution systems. Distribution systems are usually three-phase systems with a return-current neutral wire. Grounding the neutral wire will affect the power quality and characteristics of distribution systems during unbalanced conditions, especially phase-to-ground faults. In this paper, several case studies have been done to investigate the impacts of different types of grounding of distribution systems on fault responses. The fault responses are fault current, voltage swell, substation ground potential rise, and neutral wire voltages and currents. Some sensitivity studies are also performed to see the effects of grounding parameters on fault responses.
The results are presented in graphical and tabular forms.",2008,0, 6253,FOTG: fault-oriented stress testing of IP multicast,"Network simulators provide a useful tool for protocol evaluation. However, the results depend heavily on the simulated scenarios, especially for complex protocols such as multicast. There has been little work on scenario generation. In this work we present a fault-oriented test generation (FOTG) algorithm for automated stress testing of multicast protocols. FOTG processes an extended FSM model and uses a mix of forward and backward search techniques. Unlike traditional verification approaches, instead of starting from initial states, FOTG starts from a fault and uses cause-effect relations for automatic topology synthesis, then uses backward implication to generate tests. Using FOTG we test various mechanisms commonly employed by multicast routing and validate our results through simulation.",2005,0, 6254,Techniques for Reducing the Effect of Measurement Errors in Near-Field Antenna Measurements,"The NIST 18-term error analysis has been used for some time to estimate the uncertainty in the far-field antenna parameters determined from near-field measurements. Each of the error terms is evaluated separately to estimate the uncertainty it produces in parameters such as gain, directivity, side lobe level, cross polarization level and beam pointing angle. This identification and evaluation of uncertainties has led to the development of procedures that can be used to reduce the effect of individual error sources and therefore improve the reliability of the results. Automated, real-time systems have been added to the measurement hardware and electronics that can reduce the effect of such things as probe position errors and cable flexing. Measurement and special computer processing techniques have also been developed to self-calibrate and correct for transmission path differences of dual-mode probes. More recently, a number of techniques have been developed that provide a means to reduce the effect of measurement errors without the need for special hardware or additional measurements. These procedures often involve additional data processing steps to identify and reduce the presence of the error in the measured data, but the processing time is small and the improvement in some parameters can be very significant. In some cases, the error signal level can be reduced by 10 to 20 dB. Such techniques have been developed for errors due to bias error leakage in the receivers, non-ideal rotary joints, spherical rotator misalignment, and room scattering. Further improvements can be realized by making additional measurements to reduce multiple reflection effects, position errors and room scattering in spherical systems. Examples of these techniques will be presented to illustrate the methods and demonstrate typical improvement.",2007,0, 6255,"Analysis of the impact of CT based attenuation correction, scatter compensation and 3D-beam modeling on SPECT image quality","Non-uniform photon attenuation and Compton scatter degrade nuclear medicine SPECT images by removing true counts or adding unwanted counts, respectively. For quantitative SPECT, iterative reconstruction algorithms, such as OSEM, allow 3D collimator modeling and compensation for the effects of scatter and attenuation. In this work, we investigate, through ROI and SPM analysis of phantom and computer simulation studies, how a combination of compensation strategies affects image quality and quantitative accuracy.
Phantoms were imaged with a SPECT and a CT scanner. With these acquisitions, semi-quantitative analysis was performed for the following reconstruction strategies: OSEM-3D without attenuation and scatter compensation; OSEM-3D CTAC (with attenuation correction); and OSEM-3D with attenuation and scatter compensation (CTACSC). For the simulated dataset, the SNR was 50.5 with CTACSC compared to 27.5 without any compensation, and OSEM-3D CTACSC produced reconstructed images with contrast within 0.23% of the true image with a standard error of 21 counts. Without compensation, the error increases to 2382 counts. The implementation and design of the OSEM-3D CTACSC approach proved effective, with improved visual quality and quantitative accuracy of the SPECT images.",2004,0, 6256,Inverse wave field extrapolation: a different NDI approach to imaging defects,"Nondestructive inspection (NDI) based on ultrasound is widely used. A relatively recent development for industrial applications is the use of ultrasonic array technology. Here, ultrasonic beams generated by array transducers are controlled by a computer. This makes the use of arrays more flexible than conventional single-element transducers. However, the inspection techniques have principally remained unchanged. As a consequence, the properties of these techniques, as far as characterization and sizing are concerned, have not improved. For further improvement, in this paper we apply imaging theory developed for seismic exploration of oil and gas fields to the NDI application. Synthetic data obtained from finite difference simulations is used to illustrate the principle of imaging. Measured data is obtained with a 64-element linear array (4 MHz) on a 20-mm thick steel block with a bore hole to illustrate the imaging approach. Furthermore, three examples of real data are presented, representing a lack-of-fusion defect, a surface-breaking crack, and porosity",2007,0, 6257,Kernel Classification via Integrated Squared Error,"Nonparametric kernel methods are widely used and proven to be successful in many statistical learning problems. Well-known examples include the kernel density estimate (KDE) for density estimation and the support vector machine (SVM) for classification. We propose a kernel classifier that optimizes an integrated squared error (ISE) criterion based on a ""difference of densities"" formulation. Our classifier is sparse, like SVMs, and performs comparably to state-of-the-art kernel methods. Furthermore, and unlike SVMs, the ISE criterion does not require the user to set any unknown regularization parameters. As a consequence, classifier training is faster than for support vector methods.",2007,0, 6258,Detecting packet-dropping faults in mobile ad-hoc networks,"Mobile ad-hoc networks are inherently prone to security attacks, with node mobility being the primary cause in allowing security breaches. This makes the network susceptible to Byzantine faults, with packets getting misrouted, corrupted or dropped. In this paper we propose solutions using an unobtrusive monitoring technique, using the ""detection manager"" to locate malicious or faulty nodes that misroute, corrupt or drop packets. The unobtrusive monitoring technique is similar to an intrusion detection system that monitors system activity logs to determine if the system is under attack. This technique uses information from different network layers to detect malicious nodes.
The detection manager we are developing for mobile ad-hoc networks stores several rules for responding to different situations. Any single node in the network can use unobtrusive monitoring without relying on the cooperation of other nodes, which makes unobtrusive monitoring easy to implement and deploy. Simulations of mobile ad-hoc networks that contain malicious nodes indicate that unobtrusive monitoring has high detection effectiveness with a low false-positive rate.",2003,0, 6259,An approach to fault-tolerant mobile agent execution in distributed systems,"Mobile agents are no longer a theoretical issue, since different architectures for their realization have been proposed. With the increasing market of electronic commerce it becomes an interesting aspect to use autonomous mobile agents for electronic business transactions. Being involved in money transactions, supplementary security features for mobile agent systems have to be ensured. Fault-tolerance is fundamental to the further development of mobile agent applications. In the context of mobile agents, fault-tolerance prevents a partial or complete loss of the agent, i.e., ensures that the agent arrives at its destination. Simple approaches such as checkpointing are prone to blocking. Replication can in principle improve solutions based on checkpointing. However, existing solutions in this context either assume a perfect failure detection mechanism (which is not realistic in an environment such as the Internet), or rely on complex solutions based on leader election and distributed transactions, where only a subset of solutions prevents blocking. This paper proposes a novel approach to fault-tolerant mobile agent execution, which is based on modeling agent execution as a sequence of agreement problems. Each agreement problem is one instance of the well-understood consensus problem. Our solution does not require a perfect failure detection mechanism, while preventing blocking and ensuring that the agent is executed exactly once.",2005,0, 6260,"FixD: Fault Detection, Bug Reporting, and Recoverability for Distributed Applications","Model checking, logging, debugging, and checkpointing/recovery are great tools to identify bugs in small sequential programs. The direct application of these techniques to the domain of distributed applications, however, has been less effective (mostly owing to the high degree of concurrency in this context). This paper presents the design of a hybrid tool, FixD, that attempts to address the deficiencies of these tools with respect to their application to distributed systems by using a novel composition of several of these existing techniques. The authors first identify and describe the four abstract components that comprise the FixD tool, then conclude with a proposal for how existing tools can be used to implement these components.",2007,0, 6261,Integrated fault diagnostics on the grid,"Model-based methods are commonly used for fault diagnosis. Many model-based fault diagnosis approaches have been proposed so far. But for modern complex processes, due to the variable nature of faults and model uncertainty, no single approach can diagnose all faults and meet different contradictory criteria. In this paper, the importance of integrating different fault diagnosis schemes in a common framework is emphasised. A service-oriented architecture for the integration is proposed based on grid technologies.
The preliminary implementation of this integration for gas turbine engine fault diagnosis is discussed.",2004,0, 6262,Using Spectrum-Based Fault Localization for Test Case Grouping,"Model-based test case generation allows one to derive almost arbitrary numbers of test cases from models. If the resulting test suites are executed against real implementations, there are often huge numbers of failed test cases. Thus, the analysis of the test execution, i.e. the identification of failures for error reporting, becomes a tedious and time consuming task. In this paper we investigate a technique for grouping test runs that most likely reveal the same failure. This reduces the post-analysis time and enables the generation of small regression test suites. The test case grouping is implemented by means of spectrum-based fault localization at the level of the specification. We calculate the grouping by relating the spectra of the test cases. Besides a brief discussion of our approach, we present results of applying our approach to the Session Initiation Protocol.",2009,0, 6263,Automatic error diagnosis and correction for RTL designs,"Recent improvements in design verification strive to automate the error-detection process and greatly enhance engineers' ability to detect functional errors. However, the process of diagnosing the cause of these errors and fixing them remains difficult and requires significant ad-hoc manual effort. Our work proposes improvements to this aspect of verification by presenting novel constructs and algorithms to automate the error-repair process at the Register-Transfer Level (RTL), where most development occurs. Our contributions include a new RTL error model and scalable error-repair algorithms. Empirical results show that our solution can diagnose and correct errors in just a handful of minutes, even for complex designs of up to several thousand lines of RTL code. This demonstrates the superior scalability and efficiency of our approach compared to previous work.",2007,0, 6264,CP-Miner: finding copy-paste and related bugs in large-scale software code,"Recent studies have shown that large software suites contain significant amounts of replicated code. It is assumed that some of this replication is due to copy-and-paste activity and that a significant proportion of bugs in operating systems are due to copy-paste errors. Existing static code analyzers are either not scalable to large software suites or do not perform robustly where replicated code is modified with insertions and deletions. Furthermore, the existing tools do not detect copy-paste related bugs. In this paper, we propose a tool, CP-Miner, that uses data mining techniques to efficiently identify copy-pasted code in large software suites and detects copy-paste bugs. Specifically, it takes less than 20 minutes for CP-Miner to identify 190,000 copy-pasted segments in Linux and 150,000 in FreeBSD. Moreover, CP-Miner has detected many new bugs in popular operating systems, 49 in Linux and 31 in FreeBSD, most of which have since been confirmed by the corresponding developers and have been rectified in the following releases. In addition, we have found some interesting characteristics of copy-paste in operating system code. Specifically, we analyze the distribution of copy-pasted code by size (number of lines of code), granularity (basic blocks and functions), and modification within copy-pasted code.
We also analyze copy-paste across different modules and various software versions.",2006,0, 6265,Analyzing heap error behavior in embedded JVM environments,"Recent studies have shown that transient hardware errors caused by external factors such as alpha particles and cosmic ray strikes can be responsible for a large percentage of system down-time. Denser processing technologies, increasing clock speeds, and low supply voltages used in embedded systems can worsen this problem. In many embedded environments, one may not want to provision extensive error protection in hardware because of (i) form-factor or power consumption limitations, and/or (ii) to keep costs low. Also, the mismatch between the hardware protection granularity and the field access granularity can lead to false alarms and error cancellations. Consequently, software-based approaches to identify and possibly rectify these errors seem to be promising. Towards this goal, this work specifically looks to enhance the software's ability to detect heap memory errors in a Java-based embedded system. Using several embedded Java applications, this work first studies the tradeoffs between reliability, performance, and memory space overhead for two schemes that perform error checks at object and field granularities. We also study the impact of object characteristics (e.g., lifetime, re-use intervals, access frequency, etc.) on error propagation. Considering the pros and cons of these two schemes, we then investigate two hybrid strategies that attempt to strike a balance between memory space and performance overheads and reliability. Our experimental results clearly show that the granularity of error protection and its frequency can significantly impact static/dynamic overheads and error detection ability.",2004,0, 6266,A DDS-compliant infrastructure for fault-tolerant and scalable data dissemination,"Recent trends in data-centric systems have motivated significant standardization efforts, such as the Data Distribution Service (DDS), to enable data dissemination with guaranteed Quality of Service (QoS). However, clear design guidelines and techniques for the support of reliable and scalable DDS-based deployments, especially for (mobile) data-intensive services, such as Internet-wide information dissemination for breaking news or financial analysis services, are still missing. After an analysis of the main DDS fault-tolerance and scalability deployment issues, this paper proposes a novel solution with two core original contributions: i) a DDS-compliant routing substrate to facilitate reliable data dissemination between mobile devices; ii) a relay-based DDS support infrastructure with limited overhead to enable scalable Internet-wide data dissemination. Our solution, especially tailored for mobile computing scenarios, is lightweight and requires neither persistency nor heavy operations on the mobile device side. Reported experimental results confirm that our proposal can guarantee the desired scalability requirements with limited network, CPU, and memory resource overhead.",2010,0, 6267,Behavioral Fault Modeling for Model-based Safety Analysis,"Recent work in the area of model-based safety analysis has demonstrated key advantages of this methodology over traditional approaches, for example, the capability of automatic generation of safety artifacts. Since safety analysis requires knowledge of the component faults and failure modes, one also needs to formalize and incorporate the system fault behavior into the nominal system model.
Fault behaviors typically tend to be quite varied and complex, and incorporating them directly into the nominal system model can clutter it severely. This manual process is error-prone and also makes model evolution difficult. These issues can be resolved by separating the fault behavior from the nominal system model in the form of a ""fault model"", and providing a mechanism for automatically combining the two for analysis. Towards implementing this approach we identify key requirements for a flexible behavioral fault modeling notation. We formalize it as a domain-specific language based on Lustre, a textual synchronous dataflow language. The fault modeling extensions are designed to be amenable to automatic composition into the nominal system model.",2007,0, 6268,Markov model for dynamic behavior of ranging errors in indoor geolocation systems,"Recently, considerable attention has been devoted to modeling and analysis of the behavior of ranging errors in indoor environments. Ranging error modeling is essential in the design of precise time-of-arrival (TOA) based indoor geolocation systems. In this paper we present a new framework for simulation of the dynamic spatial variations of the ranging error observed by a mobile user, based on an application of a Markov model. The model relegates the behavior of the ranging error into four main categories associated with four states of the Markov process. The parameters of the model are extracted from empirical data collected from a measurement-calibrated ray tracing (RT) algorithm in a typical office environment. Results of simulated errors from the Markov model and actual errors from empirical data show close agreement.",2007,0, 6269,Simple switch open fault detection method of voltage source inverter,"Recently, permanent magnet synchronous motors have been applied to various areas such as electric vehicle, aerospace, medical service, and military applications due to several outstanding characteristics. Because of the importance of highly reliable operation in these areas, much research related to the fault detection and diagnosis of inverter systems has been conducted. In this paper, a new simple fault detection method for the voltage source inverter of a permanent magnet synchronous motor is proposed. The feasibility of the proposed method is proved by simulation and experiment. Through the simulations and experiments, the rapid detection characteristic of the proposed method has been proved without any additional voltage sensor.",2009,0, 6270,Configurative Service Engineering - A Rule-Based Configuration Approach for Versatile Service Processes in Corrective Maintenance,"Recently, service orientation has increasingly been debated both in research and practice. While researchers postulate a paradigm shift towards services as the basic unit of exchange in economies, companies strive to efficiently provide a wide array of business services to their customers. To accomplish this, companies (a) are required to consciously design the services in their portfolio with respect to a structured engineering approach and (b) also have to flexibly adapt the engineered service processes to individual customer needs, wants, and demands. Hence, services shall be supplied efficiently and in consistent quality without sacrificing customization for customers. Supporting this mass-customization strategy for business services, we present a configurative service engineering approach.
After engineering a configurable process model for business services, customized service processes can efficiently be derived from the model by applying configuration mechanisms. The process of configuration is aided by the software tool Adapt(X). We present the concept and tool support by applying them to business services for corrective maintenance in the mechanical engineering sector.",2009,0, 6271,Evaluating Performance and Fault Tolerance in a Virtual Large-Scale Disk,"Recently, the exchange of data has increased with progress in information technology. The capacity for storing data is also increasing. However, even if the capacity of global storage becomes extremely large, the capacity of local storage is always limited. Moreover, storage of files larger than the available local storage is impossible. This paper discusses the use of a network to construct a cheap, highly trusted, PB-class, decentralized storage system using many hundreds of PCs in an educational environment. The performance of this 'large-scale virtual disk' is investigated both during normal operation and in instances of failure.",2008,0, 6272,Minimization of Product Utility Estimation Errors in Recommender Result Set Evaluations,"Recommender systems are widespread web applications which can effectively support users in finding suitable products in a large and/or complex product domain. Although state-of-the-art systems manage to accomplish the task of finding and presenting suitable products, they show big deficits in the applied model of human behavior. Time limitations, cognitive capacities, and willingness to expend cognitive effort bound rational decision making, which can lead to unforeseen side effects and, furthermore, to sub-optimal decisions. Decoy effects are cognitive phenomena which are omnipresent on result pages. State-of-the-art recommender systems are completely unaware of such effects. Because such effects constitute one source of irrational decisions, their identification and, if necessary, the neutralization of their biasing potential is extremely important. This paper introduces an approach for identifying and minimizing decoy effects on recommender result pages. To undergird the presented approach, we present the results of a corresponding user study which clearly proves the concept.",2009,0, 6273,Fault-recovery Non-FPGA-based Adaptable Computing System Design,"Reconfigurability with fault-tolerance is one of the most desirable hardware combinations for space computing systems. This paper introduces an adaptable computing architecture that includes random- and delay-fault recovery capability for avionics and space applications. A micro-architecture level fault handling and recovering scheme that can immunize random/delay errors is presented as a means of overcoming the limitations of gate-level fault tolerance. The fault-recovery flexible architecture was developed based on a pure-ASIC-based retargetable computing system. The retargetable system also offers sufficient flexibility without employing programmable devices. This adaptable system reasserts different signal patterns for random/delay faults by rerouting micro-operations of the operation that caused the faults.
Different sequences of bit patterns generated by the retargetable system avoid the same faulty situation in high-speed VLSI circuits, while continuously supporting seamless modification and migration of the underlying hardware and software after fabrication of retargetable systems.",2007,0, 6274,Dependability evaluation of transient fault effects in reconfigurable compute fabric devices,"Reconfigurable compute fabrics (RCFs) are cellular architectures in which an array of computing elements and a configurable interconnection fabric are combined with a general-purpose processor. RCFs can play an important role in safety- or mission-critical applications, provided that a clear understanding of their dependability is available. In this paper, we report an evaluation of the effects induced by transient faults within the resources of a Motorola MRC6011 RCF; we resorted to extensive fault injection to investigate these effects.",2006,0, 6275,Fault tolerance and reliability in field-programmable gate arrays,"Reduced device-level reliability and increased within-die process variability will become serious issues for future field-programmable gate arrays (FPGAs), and will result in faults developing dynamically during the lifetime of the integrated circuit. Fortunately, FPGAs have the ability to reconfigure in the field and at runtime, thus providing opportunities to overcome such degradation-induced faults. This study provides a comprehensive survey of fault detection methods and fault-tolerance schemes specifically for FPGAs and in the context of device degradation, with the goal of laying a strong foundation for future research in this field. All methods and schemes are quantitatively compared and some particularly promising approaches are highlighted.",2010,0, 6276,Fast and Accurate Automatic Defect Cluster Extraction for Semiconductor Wafers,"Continued reduction in integrated circuit (IC) half-pitch will no longer be sustainable by traditional fault isolation and failure analysis techniques. There is an urgent need for diagnostic software tools with which manufacturing defects (which manifest as clusters) can be traced back to a specific process, equipment or technology. To this end, a novel data mining algorithm extracts defect clusters from test data logs. This algorithm provides accurate detection of 99%.",2010,0, 6277,On the Impact of Atmospheric Correction on Lossy Compression of Multispectral and Hyperspectral Imagery,"Reflectance data are often preferred to radiance data in applications of multispectral and hyperspectral imagery in which subtle spectral features are analyzed. In such applications, atmospheric correction, the process which provides radiance-to-reflectance conversion, plays a prominent role in the data-distribution and archiving pipeline. Lossy compression, often in the form of the JPEG2000 standard, will also likely factor into the distribution and archiving data flow. The relative position of data compression with respect to atmospheric correction is considered and evaluated with experimental results on both multispectral and hyperspectral imagery, and recommendations on an appropriate order for compression in the data-flow chain are made.",2009,0, 6278,A controlled experiment assessing test case prioritization techniques via mutation faults,"Regression testing is an important part of software maintenance, but it can also be very expensive.
To reduce this expense, software testers may prioritize their test cases so that those that are more important are run earlier in the regression testing process. Previous work has shown that prioritization can improve a test suite's rate of fault detection, but the assessment of prioritization techniques has been limited to hand-seeded faults, primarily due to the belief that such faults are more realistic than automatically generated (mutation) faults. A recent empirical study, however, suggests that mutation faults can be representative of real faults. We have therefore designed and performed a controlled experiment to assess the ability of prioritization techniques to improve the rate of fault detection, measured relative to mutation faults. Our results show that prioritization can be effective relative to the faults considered, and they expose ways in which that effectiveness can vary with characteristics of faults and test suites. We also compare our results to those collected earlier with respect to the relationship between hand-seeded faults and mutation faults, and the implications this has for researchers performing empirical studies of prioritization.",2005,0, 6279,Inserting software fault measurement techniques into development efforts,"Over the past several years, techniques for estimating software fault content based on measurements of a system's structural evolution during its implementation have been developed. Proper application of the techniques will yield a detailed map of the faults that have been inserted into the system. This information can be used by development organizations to better control the number of residual faults in the operational system. There are several issues that must be resolved if these techniques are to be successfully inserted into a development effort. These issues are identified, as are possibilities for their resolution.",2000,0, 6280,Design of fault-tolerant digital filters based on redundant residue number arithmetic for over-the-air reconfiguration in software radio communication systems,"Over-the-air reconfiguration is a key characteristic of software defined radio communication systems. It offers great advantages in terms of cost-effective software deployment to a large number of user terminals. It also enables manufacturers and service providers to introduce additional features in hardware components after their shipment. Even though automatic repeat request (ARQ) or forward error correction (FEC) schemes are used for error control in transmission of configuration data (CD), the received configuration data may not be error free; therefore, erroneous CD results in the implementation of faulty circuits. In order to reduce the effects of CD errors, fault-tolerant processing is provided in circuits to be reconfigured over the air. We designed fault-tolerant digital filters based on redundant residue number arithmetic. A computer simulation model is described and simulation results are presented for the filters operating in the presence of simulated hardware failures.",2002,0, 6281,Fault ride-through versus anti-islanding in distributed generation,"Owing to the increasing number and power rating of static power converter (SPC) based systems, the need arises for distributed power generation systems to take part in the control-related issues of power generation. The requirements for large and small power generation systems differ in the aspect of utility interaction.
Measures to avoid and compensate for the appearance of unwanted, avalanche-like spreading of disturbances, and to support stability-increasing behavioral states of the utility, are added to the already existing requirement of having as little impact on utility operation as possible. Two different control approaches are introduced, compared and evaluated with respect to utility interaction requirements.",2010,0, 6282,Design of Bipolar Odd-Even Method for Immediate Correction of Information Errors,"Packet losses and errors are common in current wireless transmission due to signal decay, intervening barriers and other environment-related reasons. Although there are many ways to correct these errors, none can deal with them immediately. To use energy efficiently for the purpose of energy conservation, to monitor more with the smallest amount of energy, and to implement parallel processing rapidly, a division of cooperation enables real-time monitoring without individual inquiries. If data are sent by a hardware method, safer transmission can be achieved by simply modifying the hardware. Adopting the principle of variable packets, each single chip can act as a transceiver at any time, automatically sending or making emergency calls in turn. Searching for the initial symbols and the suspension symbols is used to interpret the way the information and the related content are transmitted. Signals can be sent to the extensions by the monitoring and management host, or transferred cross-regionally to another host, and the transmitted data can carry other messages without wasting time. By checking the connection between any two adjacent bytes, problems like signal decay can be solved. At the same time, the method can correct errors automatically, prevent interference, and fortify the safety and encryption of wireless transmission.",2010,0, 6283,Statistical estimation of ultrasonic propagation path parameters for aberration correction,"Parameters in a linear filter model for ultrasonic propagation are found using statistical estimation. The model uses an inhomogeneous-medium Green's function that is decomposed into a homogeneous-transmission term and a path-dependent aberration term. Power and cross-power spectra of random-medium scattering are estimated over the frequency band of the transmit-receive system by using closely situated scattering volumes. The frequency-domain magnitude of the aberration is obtained from a normalization of the power spectrum. The corresponding phase is reconstructed from cross-power spectra of subaperture signals at adjacent receive positions by a recursion. The subapertures constrain the receive sensitivity pattern to eliminate measurement system phase contributions. The recursion uses a Laplacian-based algorithm to obtain phase from phase differences. Pulse-echo waveforms were acquired from a point reflector and a tissue-like scattering phantom through a tissue-mimicking aberration path from neighboring volumes having essentially the same aberration path. Propagation path aberration parameters calculated from the measurements of random scattering through the aberration phantom agree with corresponding parameters calculated for the same aberrator and array position by using echoes from the point reflector.
The results indicate that the approach describes, in addition to time shifts, waveform amplitude and shape changes produced by propagation through distributed aberration under realistic conditions.",2005,0, 6284,Neural network-based correction and interpolation of encoder signals for precision motion control,"Precision control is the core of many applications in industry, particularly robotics and drive control. To achieve it, precise measurement of the signals generated by incremental encoder sensors is essential. High precision and resolution motion control relies critically on the precision and resolution achievable from the encoders. In this paper, a dynamic neural network-based approach for the correction and interpolation of quadrature encoder signals is developed. In this work, the radial basis function (RBF) neural network is employed to carry out concurrently the correction and interpolation of encoder signals in real time. The effectiveness of the proposed approach is verified in the simulation results provided.",2004,0, 6285,Compensation for Heeling Error of MEMS Acceleration Sensor in Vehicle ABS,"The precision of wheel speed and vehicle speed determines the accuracy and reliability of ABS. When measuring vehicle speed, a MEMS inertial acceleration sensor is usually adopted to carry out indirect measurement. For a MEMS inertial acceleration sensor, the output signal is related not only to the vehicle acceleration, but also to the heeling angle of the sensor. The transient heeling angle can be obtained by separating acceleration and heeling angle from the output signal of the MEMS inertial acceleration sensor. When the MEMS inertial acceleration signal in the braking process is compensated by the separated heeling angle, the false acceleration error produced by the single-axis MEMS inertial acceleration sensor can be basically eliminated. Then, the increased accuracy of the actual acceleration signal will improve the brake control performance of the ABS controller.",2006,0, 6286,Enhanced DO-RE-ME based defect level prediction using defect site aggregation-MPG-D,"Predicting the final value of the defective part level after the application of a set of test vectors is not a simple problem. In order for the defective part level to decrease, both the excitation and observation of defects must occur. This research shows that the probability of exciting an as yet undetected defect does indeed decrease exponentially as the number of observations increases. In addition, a new defective part level model is proposed which accurately predicts the final defective part level (even at high fault coverages) for several benchmark circuits and which continues to provide good predictions even as changes are made in the set of test patterns applied.",2000,0, 6287,Differential Fault Analysis on PRESENT Key Schedule,"PRESENT is a lightweight block cipher designed by A. Bogdanov et al. in 2007 for extremely constrained environments such as RFID tags and sensor networks, for which the AES is not suitable. In this paper, the strength of PRESENT against the differential fault attack on the key schedule is explored. Our attack adopts the nibble-oriented model of random faults and assumes that the attacker can induce a single nibble fault on the round key.
The attack can efficiently recover the secret key with a computational complexity of 2^29, and sixty-four pairs of correct and faulty ciphertexts on average.",2010,0, 6288,An approach to improve the resolution of defect-based diagnosis,"Presents a practical approach to improve the resolution of defect-based diagnosis. To diagnose faulty chips, various techniques are needed, as well as precise modeling of the defects. In this paper, some techniques using layout information, a multi-test diagnosis method, and a testing method for delay faults are discussed, and some experimental results on actual chips are shown. The resolution of the diagnosing test patterns is also discussed.",2001,0, 6289,Modeling range images with bounded error triangular meshes without optimization,"Presents a technique for approximating range images by means of adaptive triangular meshes with a bounded approximation error and without applying optimization. This approach consists of three stages. In the first stage, every pixel of the given range image is mapped to a 3D point defined in a reference frame associated with the range sensor. Then, those 3D points are mapped to a 3D curvature space. In the second stage, the points contained in this curvature space are triangulated through a 3D Delaunay algorithm, giving rise to a tetrahedralization of them. In the last stage, an iterative process starts digging the external surface of the previous tetrahedralization, removing those triangles that do not fulfill the given approximation error. In this way, successive fronts of triangular meshes are obtained in both the range image space and the curvature space. This iterative process is applied until a triangular mesh in the range image space fulfilling the given approximation error is obtained. Experimental results are presented.",2000,0, 6290,Temperature Correction of PSP Measurement for Low-Speed Flow Using Infrared Camera,"A Pressure-Sensitive Paint (PSP) system combined with an infrared (IR) camera has been developed at the 2 m x 2 m low-speed wind tunnel at WINTEC/JAXA. The temperature correction of PSP was conducted using both the temperature image acquired by the IR camera and wind-off images immediately after the wind tunnel shutdown. As a verification test, the pressure distribution on a supersonic transport (SST) model was measured by the combined PSP/IR system. The measurement accuracy was fairly improved compared to the previous method, i.e., the temperature correction of PSP using only wind-off PSP images immediately after wind tunnel shutdown.",2005,0, 6291,Path-based error coverage prediction,"Previous studies have shown that error detection coverage and other dependability measures estimated by fault injection experiments are affected by the workload. The workload is determined by the program executed during the experiments, and the input sequence to the program. In this paper, we present a promising analytical post-injection prediction technique, called path-based error coverage prediction, which reduces the effort of estimating error coverage for different input sequences. It predicts the error coverage for one input sequence based on fault injection results obtained for another input sequence. Although the accuracy of the prediction is low, path-based error coverage prediction manages to correctly rank the input sequences with respect to error detection coverage, provided that the difference in the actual coverage is significant.
This technique may drastically decrease the number of fault injection experiments, and thereby the time, needed to find the input sequence with the worst-case error coverage among a set of input sequences.",2001,0, 6292,Enhancing the Success Rate of Primary Version While Guaranteeing Fault-Tolerant Capability for Real-Time Systems,"The primary/alternate version technique is a cost-effective means which trades the quality of computation results for promptness in order to tolerate software faults. Generally speaking, this method requires that each real-time periodic task has two versions: primary and alternate. The primary version provides a result that is in some sense more desirable, but it may be subject to timing failure due to its complexity. On the contrary, the alternate version simply affords an acceptable service, but it can guarantee timeliness owing to its simplicity. The kernel algorithm proposed in this paper employs the off-line backwards-RM scheme to pre-allocate time intervals to the alternate version and the on-line RM scheme to dispatch the primary version. Simulation results show that the kernel algorithm provides a higher success rate for the primary version.",2009,0, 6293,Call for Papers for Special Section on Fault Diagnosis and Tolerance in Cryptography,"Prospective authors are requested to submit new, unpublished manuscripts for inclusion in the upcoming event described in this call for papers.",2004,0, 6294,QoS-Aware Fault Tolerance in Grid Computing through Topology-Aware Replica Placement,Quality of service (QoS)-aware fault tolerance is defined as the capability of overcoming both hardware and software failures while maintaining communications QoS guarantees. An integrated fault tolerant scheme combining service replication and path restoration has the potential of providing QoS-aware fault tolerance while maximizing the percentage of recovered connections and minimizing the required service replicas. Previous studies focused on evaluating the optimal performance achievable by the integrated fault tolerant schemes through a mixed integer linear programming (MILP) model. This study concentrates on developing heuristics to place service replicas with topology awareness. The aim of this study is to evaluate whether topology-aware heuristics can approximate MILP optimal solutions,2006,0, 6295,Quantification of in vivo Magnetic Resonance Spectroscopy signals with baseline and lineshape corrections,"Quantification of Magnetic Resonance Spectroscopy (MRS) signals is a method to estimate metabolite concentrations of the tissue under investigation. Estimation of these concentrations provides information about the biochemical characteristics of the tissue and is finally used as complementary information in the diagnosis of cancer, epilepsy and metabolic diseases. Obtaining reliable metabolite concentrations is still a challenge due to the experimental conditions affecting the spectral quality. The decay of MRS signals (lineshape of MR spectra), for instance, is affected by inhomogeneities in the magnetic field caused by shimming problems and tissue heterogeneities. To handle this type of distortion, we study a method where the unsuppressed water is used to correct lineshape distortions, an inversion recovery signal is used to account for macromolecules and lipids present in the tissue and splines are used to correct additional baseline distortions.
In this study, we consider rat brain in vivo signals and quantify them taking into account both lineshape distortions and the background signal.",2010,0, 6296,Feasibility study of the quantitative corrections for the brain input function imaging from the carotid artery images by an ultra-high resolution dedicated brain PET,"Quantitative PET imaging usually requires arterial blood sampling, which is an invasive measure and may introduce risks or other complications to patients. Several non-invasive methods are being tried to obtain the quantitative tracer concentrations by measuring the reconstructed intensity of the artery in the PET imaging. However, all these methods have certain limitations for brain studies due to difficulties such as the partial-volume effect (PVE), no artery in the FOV large enough for obtaining the required data as is done in cardiology, etc. Here we carried out a simulation study on the feasibility of the quantitative corrections by the carotid artery with an ultra-high resolution, large axial FOV dedicated brain PET system. This brain PET has a detector ring diameter of 48 cm and an axial length of 25 cm. The large AFOV ensures that the camera can cover both the brain and the carotid artery region at the same time for dynamic studies. The detectors are 1.4 x 1.4 x 11 mm3 LYSO crystals. The conservative estimation of the resolution is 1.7 to 2.0 mm, which is about 1/3 of the human carotid artery inner diameter. To evaluate the PVE on the quantitative results, a head-and-neck phantom with embedded sources of different sizes (5 to 20 mm) and a 6:1 concentration ratio between source and background is studied using Monte Carlo simulations. As a comparison, a whole-body PET (Siemens TruePoint scanner) is also studied. From the reconstructed source intensities we find that with this brain PET, the recovery coefficient could reach 76% to 86% for a typical human carotid artery size source with a diameter between 5 and 7 mm; with the TruePoint scanner the recovery coefficient is only 34% to 54%. The simulation shows that with the help of an ultra-high resolution large axial FOV brain PET camera, the goal of non-invasive quantitative corrections by the carotid artery for brain dynamic studies is feasible, which is not possible with other commercial whole-body scanners currently available.",2010,0, 6297,Improved scatterer property estimates from ultrasound backscatter using gate-edge correction and a Pseudo-Welch technique,"Quantitative ultrasound (QUS) techniques have been widely used to estimate the size, shape and mechanical properties of tissue microstructure for specified regions of interest (ROIs). For conventional methods, an ROI size of 4 to 5 beamwidths laterally and 15 to 20 spatial pulse lengths axially has been suggested for estimate accuracy and precision better than 10% and 5%, respectively. A new method is developed to decrease the standard deviation of the quantitative ultrasound parameter estimate in terms of effective scatterer diameter (ESD) for small ROIs. The new method yielded estimates of the ESD within 10% of actual values at an ROI size of five spatial pulse lengths axially by two beamwidths laterally, and the estimates from all the ROIs had a standard deviation of 15% of the mean value. Such accuracy and precision cannot be achieved using conventional techniques with similar ROI sizes.",2010,0, 6298,Single Electron Fault in QCA Inverter Gate,"Quantum Cellular Automata (QCA) represents an emerging technology at the nanotechnology level.
There are various faults which may occur in QCA cells. One of these faults is the Single Electron Fault (SEF), which can happen during manufacturing or operation of QCA circuits. A detailed simulation-based logic-level modeling of the Single Electron Fault for the QCA inverter gate is presented in this paper.",2009,0, 6299,Transition Faults Testing Based on Functional Delay Tests,"Rapid advances of semiconductor technology lead to higher circuit integration as well as higher operating frequencies. The statistical variations of the parameters during the manufacturing process as well as physical defects in integrated circuits can sometimes degrade circuit performance without altering its logic functionality. These faults are called delay faults. In this paper we consider the quality of the tests generated for two types of delay faults, namely, functional delay and transition faults. We compared the test quality of functional delay tests in regard to transition faults and vice versa. We have performed various comprehensive experiments with combinational benchmark circuits. The experiments show that the test sets, which are generated according to the functional delay fault model, obtain high fault coverages of transition faults. However, the functional delay fault coverages of the test sets targeted for the transition faults are low. It is very likely that the test vectors based on the functional delay fault model can cover other kinds of faults. Another advantage of a test set generated at the functional level is that it is independent of and effective for any implementation and, therefore, can be generated at early stages of the design process.",2007,0, 6300,The induced overvoltage between UHV AC and DC transmission lines built on the same tower under fault conditions,"The rapid increase of transmission capacity and the environmental limits of power transmission corridors have necessarily promoted the development of AC/DC hybrid transmission lines on the same tower, especially for the UHV systems being established in China. Unfortunately, severe electromagnetic coupling effects will inevitably exist between the UHV AC and DC transmission lines built on the same tower. The generating mechanisms, as well as the computational methodology, of both the inductive and capacitive coupling effects within the hybrid transmission lines are elucidated in the paper; then an equivalent circuit model is established to analyze the induced overvoltage within the hybrid transmission lines under different fault conditions. Further, the impacts of key influential factors, such as the grounding fault resistance, the collocation and layout of DC filters and the fault type, on the induced overvoltage on the healthy line are accounted for in detail. Specifically, the phase-mode transform method is adopted to theoretically analyze how the transposition mode of the UHV AC transmission line can reduce the induced overvoltage within the hybrid transmission lines. The proposed method and analyzed results provide a reliable reference for the optimal design of UHV AC/DC hybrid transmission lines.",2009,0, 6301,Using rational filters for digital correction of a spectrometric microtransducer,"Raw spectrometric data are subject to systematic errors of an instrumental type that may be reduced, provided a mathematical model of the spectrometer, or its pseudoinverse, i.e., an operator of reconstruction, is identified.
The idea of identifying this operator directly during calibration of the spectrometer is developed in this paper. The applicability of an operator of reconstruction, having the form of a rational filter, is studied when it is used for correction of the instrumental errors introduced by a low-resolution spectrometric microtransducer (SMT) that is intended for designing a microspectrometer. Several algorithms of correction are developed and systematically studied using real-world spectra and a nonlinear mathematical model of the microtransducer, proposed by the authors in a previous publication.",2000,0, 6302,A survey of error-concealment schemes for real-time audio and video transmissions over the Internet,"Real-time audio and video data streamed over unreliable IP networks, such as the Internet, may encounter losses due to dropped packets or late arrivals. This paper reviews error-concealment schemes developed for streaming real-time audio and video data over the Internet. Based on their interactions with (video or audio) source coders, we classify existing techniques into source coder-independent schemes that treat underlying source coders as black boxes, and source coder-dependent schemes that exploit coder-specific characteristics to perform reconstruction. Last, we identify possible future research directions.",2000,0, 6303,An unequal error protection using Reed-Solomon codes for real-time MPEG video stream,"A real-time MPEG video stream over a packet-switching network is very sensitive to losses and errors. In the same way as the priority encoding transmission (PET) scheme, our purpose is to protect the different contents of the MPEG video stream according to their importance for the video quality played out at the receiver side. However, our approach considers the latency delay for interactive applications to be very important. So we determine an upper bound for the processing time of Reed-Solomon erasure codes. Considering as a criterion the ratio between the network bandwidth and the picture quality, the optimal application policy for unequal error protection is highlighted by experimental results.",2002,0, 6304,FPGA-based Ultra-Low Latency HIL Fault Testing of a Permanent Magnet Motor Drive using RT-LAB-XSG,"Real-time simulation of PMSM drives enables thorough testing of control strategies & software protection routines and therefore allows rapid deployment of vehicular or industrial applications. The proposed PMSM model is a phase domain model with sinusoidal flux induction. A 3-phase IGBT inverter drives the PMSM machine. Both models are implemented on an FPGA chip, without any VHDL coding, with the RT-LAB real-time simulation platform of Opal-RT Technologies using a Simulink blockset called Xilinx System Generator (XSG). The paper explains various aspects of the design of the motor drive models in fixed-point representation in XSG, as well as simulation validation against a standard PMSM drive model built in Simulink. The phase-domain PMSM drive model runs with an equivalent 10 nanosecond time step (100 MHz FPGA card) and has a latency of 300 nanoseconds (PMSM machine and inverter). The motor drive has a resulting total hardware-in-the-loop latency of 1.3 microseconds.",2008,0, 6305,Automated source-level error localization in hardware designs,"Recent achievements in formal verification techniques allow for fault detection even in large real-world designs. Tool support for localizing the faulty statements is critical, because it reduces development time and overall project costs.
Automated source-level debugging and a novel debugging model allow for source-level debugging of large VHDL designs at the granularity of statements and expressions. This technique is fully automated and does not require that an engineer be familiar with formal verification techniques.",2006,0, 6306,An autonomous FPGA-based emulation system for fast fault tolerant evaluation,"Platform FPGAs provide a high degree of reconfigurability and a high density of integration. These features make these devices very suitable for hardware emulation and in particular for fault tolerance evaluation. There are several FPGA-based approaches that notably enhance the fault tolerance evaluation process, achieving an important speedup. However, such methods are limited by the communication between the FPGA and the host computer, which manages the emulation process. In order to minimize this communication and therefore accelerate the overall process, an autonomous emulation system is proposed in this paper. This solution profits from additional hardware resources available in current platform FPGAs, such as embedded RAM. In the proposed system, a complete emulation campaign and its management are embedded in the FPGA, accelerating the emulation process by up to two orders of magnitude without losing flexibility with respect to other hardware solutions.",2005,0, 6307,Acceleration of a model based scatter correction technique for Positron Emission Tomography using high performance computing technique,"Positron Emission Tomography (PET) is a widely used and powerful metabolic imaging technique for functional diagnosis of organs. Compton scattering is a physical effect that results in distortions in the reconstructed image. The model-based, so-called Single Scatter Simulation (SSS) algorithm is an appropriate solution for scatter correction. However, the SSS algorithm is extremely computation intensive. The application of the SSS algorithm in a clinical environment requires the application of high performance computing (HPC) techniques. In this work we give a survey of different high performance computing techniques and introduce the selection process of the optimal HPC platform for the implementation of the Single Scatter Simulation algorithm.",2010,0, 6308,Software for Power Grid Fault Location with Traveling-wave,"A power grid fault location system using voltage traveling waves has been developed successfully by the authors. It includes traveling wave sensors, fault locators, and a fault calculation center. Outputs of the sensors are used to record the arrival time of the fault traveling wave with a global positioning system (GPS) receiver in the fault locator that is installed in every power station. The recorded times are then sent to the fault calculation center, which calculates the fault position. The fault calculation center communicates with the fault locators in a star model. The software for the calculation center and the fault locators is designed in the Delphi language and presented in the paper, including display, human machine interface (HMI), communication, fault calculation, fault reporting, and database functions.
Test results show that the software has good performance and high reliability.",2006,0, 6309,Incipient fault detection in 33/11kV power transformers by using combined Dissolved Gas Analysis technique and acoustic partial discharge measurement and validated through untanking,"A power transformer consists of components which are under constant thermal and electrical stresses. The major component which degrades under these stresses is the paper insulation of the power transformer. An electrical fault can develop into a thermal fault such as localized insulation burning or hot-spots. Any fault in the transformer can be detected by using the Dissolved Gas Analysis technique. In this paper, the detection of electrical and thermal faults in 14 units of 33/11kV, 30 MVA and 15MVA transformers was done by using Dissolved Gas Analysis (DGA). Then, the acoustic partial discharge test was carried out to detect the activity and locate the source of the electrical fault. All the transformers were untanked and an inspection was done. From the inspection, a few incipient faults were detected, such as overheating due to loose connections, sharp edges, insulation burning, choking effect due to moisture and surface tracking in the On-Load Tap Changer (OLTC) compartment. As a conclusion, the combination of the acoustic partial discharge technique and the DGA technique has proved to be a useful tool in detecting and locating incipient faults in power transformers.",2010,0, 6310,Supporting information system for power transformer fault forecasting applications,"Power transformer failures carry great costs for electric companies, since they need resources to recover from them and to perform periodic maintenance. To avoid this problem in four working 40 MVA transformers, the authors have implemented the measurement system of a failure prediction tool, which is the basis of a predictive maintenance infrastructure. The prediction models obtain their inputs from sensors, whose values must be previously conditioned, sampled and filtered, since the forecasting algorithms need clean data to work properly. Applying data warehouse (DW) techniques, the models have been provided with an abstraction of sensors the authors have called virtual cards (VCs). By means of these virtual devices, models have access to clean data, both fresh and historic, from the set of sensors they need. Besides, several characteristics of the data flow coming from the VCs, such as the sample rate or the set of sensors itself, can be dynamically reconfigured. A replication scheme was implemented to allow the distribution of demanding processing tasks and the remote management of the prediction applications. The VCs and the modular architecture proposed make the system scalable, reconfigurable and easy to maintain.",2003,0, 6311,Real-time fault detection and classification for manufacturing etch tools,"Process control in semiconductor manufacturing has sought to improve yield, increase tool productivity and reduce manufacturing costs through the analysis of tool sensor outputs. Statistical process control (SPC) utilizes statistical algorithms to detect excursion events, but here a novel fault detection and classification (FDC) approach based upon a pattern recognition algorithm is presented. This FDC method from Straatum(TM) is real-time, outputting a chamber status metric known as the plasma index.
The system is in place at ProMOS Technologies Inc.'s 200 mm manufacturing facility on various semiconductor tools - this document presents its implementation on a number of Tokyo DRM(TM) oxide etch tools and includes a number of case studies.",2004,0, 6312,A fault-driven lightweight process improvement approach,"Process improvement is highly important and has a crucial impact on the business and prosperity of software developing companies. The requirements on software are that it needs to be produced faster, cheaper and with higher quality. A recent trend in software development is the use of agile methods. The general idea of more lightweight approaches can also be applied to process improvement. The authors describe a fault-driven lightweight process improvement approach to be used between projects. The objective is to decrease the number of faults and hence shorten the project lead-time. The fault-driven process improvement approach focuses on business requirements and relevance for the associated company. We discuss the need for a lightweight approach and introduce a lightweight process improvement method. The paper also reports on some findings from an industrial study and presents some conclusions.",2003,0, 6313,Fault prognosis using dynamic wavelet neural networks,"Prognostic algorithms for condition based maintenance of critical machine components are presenting major challenges to software designers and control engineers. Predicting time-to-failure accurately and reliably is absolutely essential if such maintenance practices are to find their way onto the industrial floor. Moreover, means are required to assess the performance and effectiveness of these algorithms. This paper introduces a prognostic framework based upon concepts from dynamic wavelet neural networks and virtual sensors and demonstrates its feasibility via a bearing failure example. Statistical methods to assess the performance of prognostic routines are suggested that are intended to assist the user in comparing candidate algorithms. The prognostic and assessment methodology proposed here may be combined with diagnostic and maintenance scheduling methods and implemented on a conventional computing platform to serve the needs of industrial and other critical processes.",2001,0, 6314,"Eliminating exception handling errors with dependability cases: a comparative, empirical study","Programs fail mainly for two reasons: logic errors in the code and exception failures. Exception failures can account for up to two-thirds of system crashes, and hence are worthy of serious attention. Traditional approaches to reducing exception failures, such as code reviews, walkthroughs, and formal testing, while very useful, are limited in their ability to address a core problem: the programmer's inadequate coverage of exceptional conditions. The problem of coverage might be rooted in cognitive factors that impede the mental generation (or recollection) of exception cases that would pertain in a particular situation, resulting in insufficient software robustness. This paper describes controlled experiments for testing the hypothesis that robustness for exception failures can be improved through the use of various coverage-enhancing techniques: N-version programming, group collaboration, and dependability cases. N-version programming and collaboration are well known.
Dependability cases, derived from safety cases, comprise a new methodology based on structured taxonomies and memory aids for helping software designers think about and improve exception handling coverage. All three methods showed improvements over control conditions in increasing robustness to exception failures, but dependability cases proved most efficacious in terms of balancing cost and effectiveness.",2000,0, 6315,Evaluation of inspectors' defect estimation accuracy for a requirements document after individual inspection,"Project managers need timely feedback on the quality of development products to monitor and control project progress. Inspection is an effective method to identify defects and to measure product quality. Objective and subjective models can be used to estimate the total number of defects in a product based on defect data from inspection. This paper reports on a controlled experiment to evaluate the accuracy of individual subjective estimates by developers, who had just before inspected the document, of the number of defects in a software requirements specification. In the experiment most inspectors underestimated the total number of defects in the document. The number of defects reported and the number of (major) reference defects found were identified as factors that separated groups of inspectors who over- or underestimated on average.",2000,0, 6316,Modeling and fault simulation of propellant filling system based on Modelica/Dymola,"The propellant filling system is one of the key components of a liquid rocket engine test bed, and improving its reliability and safety is very necessary. However, because of the shortage of failure data and the high expense and risk of test experiments, modeling and fault simulation are urgently needed. Based on modular modeling, a new method using the object-oriented language Modelica in Dymola is proposed in this paper for modeling the filling system. After that, fault simulation is performed by modifying the models or resetting their parameters, and the results demonstrate that the method not only takes less time to model the system, but also yields usable simulation results.",2008,0, 6317,Automatic detection of head refixation errors in fractionated stereotactic radiotherapy (FSR),"Patient surface images are acquired using a novel 3D camera when the patient is at the CT-simulation position and after setup for fractionated stereotactic treatment. The simulation and treatment images are aligned through an initial registration using several feature points, followed by a refined automatic matching process using an iterative-closest-point mapping-align algorithm. All of the video-surface images could be automatically transformed to the machine coordinate system according to the calibration file obtained from a template image. Phantom tests have demonstrated that we can capture surface images of patients in a second with submillimeter spatial resolution. A millimeter shift and a one-degree rotation relative to the treatment machine can be accurately detected. The entire process takes about two minutes. Our preliminary result on patients involved in a clinical trial is very promising. This research is partially supported by NIH Grant 1R43CA91690-01 and NIH CA88843.",2004,0, 6318,Fault tolerant application execution model in computing grid,"Performance and availability of resources cannot be guaranteed in the highly distributed and decentralized grid environment.
For reliable application execution in the grid, mechanisms are needed to minimize or neutralize the effects of resource-related faults and of resources voluntarily leaving or joining the grid. In this paper a fault tolerant application execution model for the grid has been investigated. The proposed model is an efficient solution in terms of resource usage and application execution cost. An analytical study of the reliability of the proposed model is presented. Illustrative examples are also presented.",2010,0, 6319,Average symbol error rate of cooperative spatial multiplexing in composite channels,"Performance of cooperative spatial multiplexing systems over composite channels is presented. In particular, we derive the closed-form average symbol error rate (SER) expression for cooperative spatial multiplexing systems with a linear equalization receiver (e.g., Zero-Forcing) and an M-ary modulation scheme in the composite channel of log-normal shadowing and Rayleigh fading. Using the Gauss-Hermite quadrature integration, the average SER is expressed in the form of the Appell hypergeometric function. Applying the series representation of the Appell hypergeometric function, we derive a very tight approximation for the average SER. We also perform Monte-Carlo simulations to validate our analysis. Subsequently, we apply the obtained analytical SER expression to evaluate some representative cooperative spatial multiplexing scenarios in composite channels.",2008,0, 6320,"Transistor Count, Chip Area and Cost Optimization of Fault Tolerant Active Pixel Sensors (FTAPS) by Modified Sensor Architecture and T-spice Based Verification of Proposed Architecture","Pixel defects are unavoidable in many solid-state image sensors, especially CMOS image sensors. The pixel defects include hot pixels, partially-stuck pixels, fully-stuck pixels, abnormal sensitivity defects, and random telegraph signal defects. Among them the most common and significant is the hot pixel defect. Many approaches have been proposed to counter the effects of hot pixel defects. They include dark frame subtraction and the Fault Tolerant Active Pixel Sensor (FTAPS). This paper focuses on transistor count optimization of the fault tolerant APS to achieve chip area and cost optimization by proposing a new architecture for the FTAPS. It also includes verification of the response of the proposed pixel architecture using T-Spice-based simulations.",2010,0, 6321,A comparison of CT-based attenuation correction strategies for PET data of moving structures,"Respiratory motion can introduce image artefacts not only in 3D PET but also in 4D PET due to incorrect attenuation correction. In this work the influence of different attenuation correction strategies on 3D and 4D PET has been investigated. An extensive phantom study was carried out, using a normal 3D CT (pitch 1.5), a slow 3D CT (pitch 0.5), an ultraslow 3D CT (pitch 0.15), an average CT and a maximum intensity projection calculated from a 4D CT (pitch 0.1) for attenuation correction of both a 3D and 4D PET of a respiratory motion phantom. Additionally, the 4D PET was corrected phase-wise with a 4D CT (phase-correlated attenuation correction). The reconstructed PET images were analyzed concerning the reconstructed volume, motion amplitude (for 4D PET), activity concentration and activity distribution. Moreover, a patient study was carried out investigating the influence of the different attenuation correction strategies for 4D PET on patient data.
Therefore, 4D PET data from six patients with non-small cell lung cancer (NSCLC) were alternatively attenuation corrected with a normal 3D CT, an average CT and with phase-correlated attenuation correction. The tumor volume was analyzed and the motion amplitude of the tumor was obtained from the 4D PET data sets. For the phantom data, the attenuation correction with the slow CT results in the best agreement between expected and measured values of the examined quantities in 3D PET, whereas in 4D PET this was the case for the phase-correlated attenuation correction. In the patient study only small differences between the 4D PET attenuation correction methods were found. This can be explained by the relatively small tumor motion in the patient population investigated (peak to peak amplitude below 5 mm except for one patient).",2008,0, 6322,Respiratory motion correction of PET using motion parameters from MR,"Respiratory motion during PET acquisition from the chest/abdomen leads to significant image degradation. Combined PET/MR scanners open up the opportunity to correct motion using MR data acquired simultaneously with PET. As simultaneous human chest/abdomen PET/MR images are currently unobtainable, in this preliminary study we determined motion parameters from respiratory-gated MR and then used these to correct pseudo-PET images generated from the MR. The gated MR images were segmented to typical organ FDG SUV values, smoothed to mimic PET resolution, forward projected into the GE Advance geometry and reconstructed separately using OSEM. The MR images were registered using a combined affine and non-rigid B-splines algorithm, with mutual information used as the cost function in a multi-resolution approach. Motion corrected images from both post-reconstruction registration and 4D image reconstruction are shown to be superior to those without motion compensation for most organs.",2009,0, 6323,Respiratory-motion errors in quantitative myocardial perfusion with PET/CT,"Respiratory motion is known to cause errors in whole-body oncologic and static cardiac imaging with PET/CT. These errors are caused by the difference in acquisition times of the PET and CT data sets, leading to inconsistencies and hence artifacts when the CT scan is used for attenuation correction (CTAC). The purpose of this study was to use computer simulations to investigate how quantitative imaging of myocardial perfusion with dynamic Rb82 PET/CT may be affected by respiratory motion. The NCAT anthropomorphic computer phantom was used to generate uniform-activity images at each of 10 respiratory phases and 17 dynamic frames. PET projection data for each of these 170 images were generated using the SimSET Monte Carlo simulator. The GE Discovery LS PET/CT was modeled and 400 M photon histories were tracked for each simulation. Images were reconstructed using OSEM and 4 different approaches to CTAC: phase-matched CTAC, correction with a single-phase CT (end-inspiration, end-expiration, and mid-inspiration), correction with a CT averaged over the respiratory cycle, and correction with a CT that was a voxel-by-voxel maximum over the respiratory cycle. The dynamic image sets were then processed using software developed in-house for the kinetic analysis of myocardial perfusion data. Images of K1 (blood-flow) were converted to polar maps and compared point-by-point and by segment using 17-segment regional analysis.
Comparing all results to those of the phase-matched correction, we found that a single-phase correction had mean segmental errors as high as 20% (end-inspiration), with mid-inspiration correction providing the least error at 6%. An average CTAC had errors as high as 12% in the mid-inferior wall. The max-CTAC approach produced errors as high as 21%. Also of note, though, was that the phase-matched polar map was not uniform as expected, with a visible decrease in blood-flow in the inferior wall. This deficit is potentially caused by motion-blurring leading to interference from the activity in the stomach and liver through the model fitting of the spill-over correction term. We conclude that respiratory motion can lead to errors in quantitative estimates of blood-flow obtained from dynamic Rb82 PET perfusion studies. A CT map acquired at the mid-respiratory phase provides an accurate CTAC correction, but is not practical to acquire. An average CTAC provides the most accurate, practical solution to the respiratory-motion problem; however, it still produced segmental errors as large as 12% in this simulation. Errors in the inferior wall of the heart were observed with all corrections and may be related to motion-blurring of activity from extra-cardiac organs into the heart. The source of the inferior-wall deficits requires further investigation.",2007,0, 6324,The Rosetta experiment: atmospheric soft error rate testing in differing technology FPGAs,"Results are presented from real-time experiments that evaluated large field programmable gate arrays (FPGAs) fabricated in different CMOS technologies (0.15 μm, 0.13 μm, and 90 nm) for their sensitivity to radiation-induced single-event upsets (SEUs). These results are compared to circuit simulation (Qcrit) studies as well as to Los Alamos Neutron Science Center (LANSCE) neutron beam results and Crocker Nuclear Laboratory (University of California, Davis) cyclotron proton beam results.",2005,0, 6325,The effect of real-time software reuse in FPGAs and microcontrollers with respect to software faults,"Reuse is considered an important aspect of software design, but certain challenges have to be met if software reuse is applied in embedded systems. In these systems, specific requirements, such as safety or real-time requirements, have to be considered, which typically complicate the reuse of software. Moreover, a large variety of hardware platforms is present in embedded systems. Those hardware platforms have different properties, which might affect the reuse of the corresponding software. In this paper, the different impacts of microcontrollers and FPGAs on software reuse are considered by empirical investigations. In particular, the investigations focus on the effect of this reuse on faults in real-time software. As a result, different benefits and drawbacks of software reuse were identified for microcontrollers and FPGAs.",2008,0, 6326,On error detection and error synchronization of reversible variable-length codes,"Reversible variable-length codes (RVLCs) are not only prefix-free but also suffix-free codes. Due to the additional suffix-free condition, RVLCs are usually nonexhaustive codes. When a bit error occurs in a sentence from a nonexhaustive RVLC, it is possible that the corrupted sentence is not decodable. The error is said to be detected in this case. We present a model for analyzing the error detection and error synchronization characteristics of nonexhaustive VLCs.
Six indices, namely the error detection probability, the mean and the variance of the forward error detection delay length, the error synchronization probability, and the mean and the variance of the forward error synchronization delay length, are formulated based on this model. When applying the proposed model to the case of nonexhaustive RVLCs, these formulations can be further simplified. Since RVLCs can be decoded in the backward direction, the mean and the variance of the backward error detection delay length and the mean and the variance of the backward error synchronization delay length are also introduced as measures to examine the error detection and error synchronization characteristics of RVLCs. In addition, we found that the error synchronization probabilities of RVLCs with minimum block distance greater than 1 are 0.",2005,0, 6327,Rotogravure Printing Press Fault Diagnosis System,"The rotogravure printing press is becoming more and more complicated, integrated, high-speed and intelligent. To keep the rotogravure printing press in good condition, fault diagnosis is becoming more important than before in the repair process. This paper designs and develops a network fault diagnosis system based on PROFIBUS and a B/S (browser/server) framework through an analysis of fault diagnosis needs. Making good use of testing information and diagnosis rules, the system realizes an open and distributed diagnosis process and provides a platform for sharing information, which is the technical basis for combining testing and diagnosis. During design, we used KingView as the development software. It is proved on a LAN that the system can utilize the existing diagnosis rules in the database to diagnose faults in a distributed manner.",2008,0, 6328,Jointly optimized error-feedback and realization for roundoff noise minimization in state-space digital filters,"Roundoff noise (RN) is known to exist in digital filters and systems under finite-precision operations and can become a critical factor for severe performance degradation in infinite impulse response (IIR) filters and systems. In the literature, two classes of methods are available for RN reduction or minimization: one uses state-space coordinate transformation, the other uses error feedback/feed-forward of state variables. In this paper, we propose a method for the joint optimization of error feedback/feed-forward and state-space realization. It is shown that the problem at hand can be solved in an unconstrained optimization setting. With a closed-form formula for gradient evaluation and an efficient quasi-Newton solver, the unconstrained minimization problem can be solved efficiently. With the infinite-precision solution as a reference point, we then move on to derive a semidefinite programming (SDP) relaxation method for an approximate solution of the optimal error-feedback matrix with sum-of-power-of-two entries under a given state-space realization. Simulations are presented to illustrate the proposed algorithms and demonstrate the performance of optimized systems.",2005,0, 6329,Web shop user error detection based on rule based expert system,"Rule based systems (RBS) have been recognized as probably the best solution for knowledge based expert systems. This article tries to provide an overview of the architecture and basic characteristics of RBS, focusing on both their weaknesses and strengths. Based on this theory, a rule based expert system for web shop error detection has been proposed.
The RBS builds on the available application knowledge base and focuses on the problem of detecting possible errors in the shortest possible timeframe. The formalization of the whole process has the potential to significantly reduce the time required to detect possible errors.",2010,0, 6330,Applying run-time monitoring to the Deep-Impact fault protection engine,Run-time monitoring is a lightweight verification method whereby the correctness of a program's execution is verified at run-time using executable specifications. This paper describes the verification of the fault protection engine of the Deep-Impact spacecraft flight software using a temporal-logic-based run-time monitoring tool.,2003,0, 6331,Combining error masking and error detection plus recovery to combat soft errors in static CMOS circuits,"Soft errors are changes in the logic state of a circuit/system resulting from the latching of single-event transients (transient voltage fluctuations at a logic node, or SETs) caused by high-energy particle strikes or electrical noise. Due to technology scaling and reduced supply voltages, they are expected to increase by several orders of magnitude in logic circuits. In this work, we present a very efficient and systematic approach to cope with soft errors in combinational and sequential logic circuits. The features and merits of our approach are: (1) use of error masking in non-critical paths along with error detection and recovery in critical paths, which substantially lowers the overhead for error correction; (2) an average 93% soft-error rate (SER) reduction, as SETs of width approximately half the clock period can be tolerated; (3) area and power overheads can be traded off against SER reduction based on application requirements. We also present two additional techniques to more aggressively utilize slack in circuits and further improve SER reduction by: (1) exploiting the dependence of circuit delay on input vectors and (2) redistributing slack in pipelined circuits.",2005,0, 6332,A Field Analysis of System-level Effects of Soft Errors Occurring in Microprocessors used in Information Systems,"Soft errors due to alpha and cosmic particles are a growing reliability threat to information systems. In this work, a methodology is developed to analyze the effects of single event upsets (SEUs) and obtain FIT rates for commercial microprocessors in live information systems. Our methodology is based on data collected from error logs and error traces of information systems present globally in the field. We also compare the system effects of errors that are suspected to be due to SEUs with those of non-SEU errors. Soft errors are further localized within specific microprocessor resources with the assistance of the machine check architecture. The analyzed field data represent a world-wide population of microprocessors installed in the field. In total, several thousand systems and thirty-six months of field data were analyzed. The methodology used in carrying out this field analysis is discussed in detail and results are presented.",2008,0, 6333,Zero-Hardened SRAM Cells to Improve Soft Error Tolerance in FPGA,"Soft errors due to charged particle strikes at the sensitive cell nodes could modify the functionality of the design by changing the configuration bits of an SRAM-based FPGA. Moreover, with the development of very-deep-sub-micron (VDSM) and even nano-technologies, aggressive device scaling has severely impacted the soft error rate of integrated circuits.
In this paper, three new SRAM cell designs are proposed which mainly aim at reducing the soft error rate in FPGAs. We verify the soft error tolerance and the power dissipation of these three designs using HSPICE simulation with the Berkeley Predictive Technology Model (PTM) of the 65 nm, 1.0 V technology. The simulation results of our three designs are compared with those of the standard 6-transistor SRAM cell and an existing cell with increased soft error tolerance, ASRAM0. The comparison shows that our new cells, especially the 0-hardened SRAM cell, have triple the critical charge of the standard 6-transistor SRAM cell when the cell is storing 0.",2008,0, 6334,Soft errors: past history and recent discoveries,"Soft errors from alpha particles and terrestrial neutrons have been an issue in commercial electronic systems for over three decades. Measurement and mitigation techniques are well developed, but recent work highlights new issues that will need to be addressed for deep sub-micrometer technologies. The contribution of thermal neutrons does not appear to be eliminated with BPSG-free processing. In addition, neutrons in the spectral range of 1-10 MeV appear to be significant for soft error rates. Charge sharing and multi-node effects will negate some of the redundant circuit designs. As low power devices gain in applications, the impact of soft errors in the sub-threshold region of operation will be important.",2010,0, 6335,IBM z990 soft error detection and recovery,"Soft errors in logic are becoming more significant in the design of computer systems due to increased sensitivities of latches and combinatorial logic and the increased number of transistors on a chip. At the same time, users of computer systems continue to expect higher levels of system reliability. Therefore, the investment in hardware and firmware/software mitigation is likely to continue to rise. The IBM eServer z990 system is designed to detect and recover from myriad instances of soft and permanent errors. The error detection and recovery within the z990 processors and the ""nest"" chips are described with respect to system-level protection against soft errors.",2005,0, 6336,Soft Error Rate Estimation in Deep Sub-micron CMOS,"Soft errors resulting from the impact of charged particles are emerging as a major issue in the design of reliable circuits at deep sub-micron dimensions. In this paper, we model the sensitivity of individual circuit classes to single event upsets using predictive technology models over a range of CMOS device sizes from 90 nm down to 32 nm. Modeling the relative position of particle strikes as injected current pulses of varying amplitude and fall time, we find that the critical charge for each technology is a nearly linear function of both the fall time of the injected current and the supply voltage. This simple relationship will simplify the task of estimating the circuit-level soft error rate (SER) and support the development of an efficient SER modeling and optimization tool that might eventually be integrated into a high-level-language design flow.",2007,0, 6337,Fault Injection Campaign for a Fault Tolerant Duplex Framework,"Software-based fault tolerance may allow the use of COTS digital electronics in building highly reliable computing systems for spacecraft. In this work we present the results of a fault injection campaign we conducted on the Duplex Framework (DF).
The DF is software developed by the UCLA group [1], [2] that allows two copies (or replicas) of the same program to run on two different nodes of a commercial off-the-shelf (COTS) computer cluster. By means of a third process (the comparator) running on a different node that constantly monitors the results computed by the two replicas, the DF is able to restart the two replica processes if an inconsistency in their computation is detected. In order to test the reliability of the DF, we wrote a simple fault injector that injects faults into the virtual memory of one of the replica processes to simulate the effects of radiation in space. These faults occasionally cause the process to crash or produce erroneous outputs. For this study we used two different applications: one that computes an encryption of an input file using the RSA algorithm, and another that optimizes the trade-off between time spent and fuel consumption for a low-thrust orbit transfer. However, the DF is generic enough that any application written in C or Fortran could be used with little or no modification of the original source code. Our results show the potential of such an approach in detecting and recovering from radiation-induced random errors. This approach is very cost-efficient compared to hardware-implemented duplex operations and can be adopted to control processes on spacecraft where the fault rate produced by cosmic rays is not very high.",2007,0, 6338,A Quasi-experiment for Effort and Defect Estimation Using Least Square Linear Regression and Function Points,"Software companies are currently investing large amounts of money in software process improvement initiatives in order to enhance their products' quality. These initiatives are based on software quality models, thus achieving products with guaranteed quality levels. In spite of the growing interest in the development of precise prediction models to estimate effort, cost, defects, and other project parameters for developing a given software product, a gap remains between the estimations generated and the corresponding data collected during the project's execution. This paper presents a quasi-experiment reporting the adoption of effort and defect estimation techniques in a large worldwide IT company. Our contributions are the lessons learned during (a) the extraction and preparation of project historical data, (b) the use of estimation techniques on these data, and (c) the analysis of the results obtained. We believe such lessons can contribute to the improvement of the state-of-the-art in prediction models for software development.",2008,0, 6339,A Data Mining Model to Predict Software Bug Complexity Using Bug Estimation and Clustering,"Software defect (bug) repositories are a great source of knowledge. Data mining can be applied to these repositories to explore useful and interesting patterns. The complexity of a bug helps the development team plan future software builds and releases. In this paper a prediction model is proposed to predict a bug's complexity. The proposed technique is a three-step method. In the first step, the fix duration for all the bugs stored in the bug repository is calculated and complexity clusters are created based on the calculated bug fix durations. In the second step, the estimated fix time of the bug whose complexity is required is calculated using bug estimation techniques. In the third step, based on its estimated fix time, the bug is mapped to a complexity cluster, which defines the complexity of the bug.
The proposed model is implemented using open source technologies and is explained with the help of an illustrative example.",2010,0, 6340,Mining Frequent Patterns from Software Defect Repositories for Black-Box Testing,"Software defects are usually detected by inspection, black-box testing or white-box testing. Current software defect mining work focuses on mining frequent patterns without distinguishing these different kinds of defects, and mining with respect to defect type can only give limited guidance on software development due to the overly broad classification of defect types. In this paper, we present four kinds of frequent patterns from defects detected by black-box testing (called black-box defects) based on a detailed classification named ODC-BD (Orthogonal Defect Classification for Blackbox Defect). The frequent patterns include the top 10 conditions (data or operations) which most easily result in defects or severe defects, the top 10 defect phenomena which most frequently occur and have a great impact on users, and association rules between function modules and defect types. We aim to help project managers, black-box testers and developers improve the efficiency of software defect detection and analysis using these frequent patterns. Our study is based on 5023 defect reports from 56 large industrial projects and 2 open source projects.",2010,0, 6341,A method for correcting cosine-law errors in SEU test data,"Single-event upset tests often change the angle of the ion beam relative to the device to mimic a change in ion linear energy transfer, and the data are then converted via an assumed cosine law. The converted data are intended to represent device susceptibility at normal incidence, but the cosine law sometimes contains considerable error. The standard method for correcting this error is based on the rectangular parallelepiped (RPP) model. However, exact analytical expressions derived from this model are not particularly simple, so specialized computer codes are needed unless approximations are used. This paper starts with an alternate physical model, utilizing a charge-collection efficiency function, and derives an exact analytical result (called the alpha law here) that replaces the cosine law but is almost as simple as the cosine law, even when device susceptibility has a strong azimuthal dependence. The same model can be used to calculate (via numerical integrations) rates in a known heavy-ion environment. An alternative is to use model parameters to construct the parameters for an integrated RPP calculation of rates.",2002,0, 6342,Fault Localization Based on Dynamic Slicing and Hitting-Set Computation,"Slicing is an effective method for focusing on relevant parts of a program in case of a detected misbehavior. Its application to fault localization, alone and in combination with other methods, has been reported. In this paper we combine dynamic slicing with model-based diagnosis, a method for fault localization which originates from Artificial Intelligence. In particular, we show how diagnoses, i.e., root causes, can be extracted from the slices for erroneous variables detected when executing a program on a test suite. We use these diagnoses for computing fault probabilities of statements that give additional information to the user.
Moreover, we present an empirical study based on our implementation JSDiagnosis and a set of Java programs of various sizes, from 40 to more than 1,000 lines of code.",2010,0, 6343,Smoothing Algorithm for Tetrahedral Meshes by Error-Based Quality Metric,"Smoothing, or geometrical optimization, is one of the basic procedures for improving mesh quality. This paper first introduces an error-based mesh quality metric based on the concept of optimal Delaunay triangulations, and then examines the smoothing scheme which minimizes the interpolation error among all triangulations with the same number of vertices. To address its deficiency, a modified smoothing scheme and a corresponding optimization model for tetrahedral meshes that avoid illegal elements are proposed. The optimization model is solved by efficiently integrating chaos search and the BFGS (Broyden-Fletcher-Goldfarb-Shanno) algorithm. Quality improvement for tetrahedral meshes is realized by alternately applying the suggested smoothing approach and a topological optimization technique. Testing results show that the proposed approach is effective at improving mesh quality and suitable for combination with topological techniques.",2010,0, 6344,A Fault Detection Mechanism for Service-Oriented Architecture Based on Queueing Theory,"SOA is an ideal solution for application building, since it reuses existing services as much as possible. Fault tolerance is an important capability for ensuring that SOA-based applications are highly reliable and available. However, fault tolerance is such a complex issue for most SOA providers that they hardly provide this capability in their products. This paper provides a queuing-theory-based algorithm for fault detection, which can be used to detect services whose performance becomes unsatisfactory at runtime according to the QoS descriptor. Based on this algorithm, this paper also gives the reference models of the extended service and the architecture of the fault-tolerance control center of the enterprise service bus for SOA-based applications.",2007,0, 6345,FREP: A soft error resilient pipelined RISC architecture,"Soft errors have become one of the major areas of attention with device scaling and large-scale integration. Many variants for superscalar architectures have been proposed, with a focus on program re-execution, thread re-execution, and instruction re-execution. In this paper we propose a fault-tolerant microarchitecture for a pipelined RISC processor. The proposed architecture, Floating Resources Extended Pipeline (FREP), re-executes instructions using extended pipeline stages. The instructions are re-executed by a hybrid architecture with a suitable combination of space and time redundancy.",2010,0, 6346,Delay and Area Efficient First-level Cache Soft Error Detection and Correction,"Soft error rates are an increasing problem in modern VLSI circuits. Commonly used error correcting codes reduce soft error rates in large memories and second-level caches but are not suited to small fast memories such as first-level caches, due to the area and speed penalties they entail. Here, an error detection and correction scheme that is appropriate for use in low-latency first-level caches and other small, fast memories such as register files is presented. The scheme allows fine (e.g., byte) write granularity with acceptable storage overhead.
Analysis demonstrates that the proposed method provides adequate soft error rate reduction with improved latency and area cost.",2006,0, 6347,AVF Stressmark: Towards an Automated Methodology for Bounding the Worst-Case Vulnerability to Soft Errors,"Soft error reliability is increasingly becoming a first-order design concern for microprocessors, as a result of higher transistor counts, shrinking device geometries and lowering of operating voltages. It is important for designers to be able to validate whether the Soft Error Rate (SER) targets of their design have been met, and to help end users select the processor best suited to their reliability goals. Knowledge of the observable worst-case SER allows designers to select their design point, and bound the worst-case vulnerability at that design point. We highlight the lack of a methodology for evaluation of the overall observable worst-case SER. Hence, there is a clear need for a so-called stressmark that can demonstrably approach the observable worst-case SER. The worst case thus obtained can be used to identify reliability bottlenecks, validate safety margins used for reliability design, and identify inadequacies in benchmark suites used to evaluate SER. Starting from a comprehensive study of how microarchitecture-dependent program characteristics affect soft errors, we derive the insights needed to develop an automated and flexible methodology for generating a stressmark that approaches the maximum SER of an out-of-order processor. We demonstrate how our methodology enables architects to quantify the impact of SER-mitigation mechanisms on the worst-case SER of the processor. The stressmark achieves 1.4X higher SER in the core, 2.5X higher SER in the DL1 and DTLB, and 1.5X higher SER in the L2 as compared to the highest SER induced by SPEC CPU2006 and MiBench programs.",2010,0, 6348,A novel soft error sensitivity characterization technique based on simulated fault injection and constrained association analysis,"Soft errors, a concern for space applications in the past, have become a critical issue in deep sub-micron VLSI design due to continuous technology scaling. An automated fault injection technique is employed to characterize the soft error sensitivity of a VHDL-based design, and an association analysis algorithm is introduced into this realm for the first time to explore the fault dependency of the components in the design. The association analysis technique makes the soft error characterization more systematic and methodical. An automated simulation-based fault injector, HSECT (HIT Soft Error Characterization Toolkit), is designed, and a simple RISC processor, the DP32, is selected as the research prototype. Using HSECT, 30,000 soft errors are injected into the DP32 processor with good statistical significance. The soft error sensitivity of the processor and the fault dependency of its components are further investigated to direct the future design of fault-tolerant and dependable circuits.",2008,0, 6349,Efficient Soft Error-Tolerant Adaptive Equalizers,"Soft errors are becoming an increasingly important issue for circuit reliability. Traditional techniques to protect against soft errors, like triple modular redundancy (TMR), have a large cost in terms of area and power. This has motivated the development of specific protection techniques for various types of circuits. In this paper, techniques to protect adaptive filters are presented, which provide reasonable reliability with reduced cost and power consumption.
An adaptive equalizer case study is used to discuss and evaluate the proposed techniques in terms of both protection and cost.",2010,0, 6350,Error Tracking Based Error Concealment Strategy for Scalable Video Coding,"Scalable video coding (SVC) is an emerging video coding standard to be released as an amendment of H.264/AVC. The SVC standard adopts four post-processing error concealment (EC) methods to cope with frame loss caused by transmission errors. In SVC, however, there have been few studies on burst errors, by which several successive frames are corrupted in noisy channels. In this paper, we first propose an error tracking model with which the concealment error and the propagation error can be estimated. Then, in order to cope with burst errors in error-prone networks, we introduce a novel EC strategy based on the proposed error tracking model. For each lost frame, the proposed strategy selects the one of the four EC methods adopted in SVC that minimizes the visual degradation.",2008,0, 6351,Fault-tolerant 2-tier P2P based service discovery in PIRST-ON,"Scalable and fault-tolerant architectures that protect communication flows through diverse paths have a fundamental problem to solve: especially in distributed systems like the Internet, it is not trivial to find the necessary vertex- and edge-disjoint paths. This paper presents a new distributed service discovery architecture that can solve this problem. The key idea of our approach is to route the traffic on the alternate path (in an overlay network) through a single autonomous system that is not part of the original data path. We show that, if this autonomous system is selected in an ""intelligent"" way, most of the hops of the original data path can be bypassed by this diverse path. Our approach builds on a new 2-tier peer-to-peer (P2P) signalling architecture. The lower tier consists of several P2P networks that are each limited to the boundaries of a single autonomous system. The higher tier in our architecture connects selected peers from different autonomous systems. Based on this flat hierarchy, our service discovery architecture is scalable and, due to its P2P nature, inherently fault-tolerant.",2003,0, 6352,Deterministic Diagnostic Pattern Generation (DDPG) for Compound Defects,"Scan chain failure diagnosis has become an important means for silicon debug and yield improvement. Although plenty of prior work discussed how to perform scan chain diagnosis, most of the previously proposed techniques made the assumption that the system logic is fault-free, which could be an impractical assumption leading to incorrect diagnostic results. In this paper, we propose a scan chain deterministic diagnostic pattern generation (DDPG) method that can tolerate faults in the system logic without degradation of chain diagnostic resolution and precision. The entire flow includes three steps. In the first step, patterns are created to propagate the state of a targeted scan cell to as many reliable observation points as possible. In the second step, the load error probability of each targeted scan cell is calculated based on the Hamming distances between the observed responses and the expected good or faulty responses. In the last step, a suspect profile is plotted, which can be used to identify the suspect scan cell(s) based on ranking scores.
Experimental results show that the diagnostic resolution and precision are not degraded even with dozens of faults injected into the system logic.",2008,0, 6353,"ScanSAR Differential Interferometry and Wet Delay Correction: Case Studies in Dangxiong, Tibet","ScanSAR is a synthetic aperture radar (SAR) mode that provides a very wide image swath. This offers a new option for interferometric applications and makes large-scale deformation observation easier to implement. In this paper, we applied ScanSAR differential interferometry in Dangxiong, Tibet to observe the crustal deformation there. However, the accuracy of ScanSAR interferometry is also affected by water vapour in the atmosphere. In our work, we tried to correct the wet delay using synchronous MERIS integrated water vapour products. At the end of this paper, the advantages and problems of ScanSAR interferometry are summarized.",2008,0, 6354,Schedulability Analysis for Fault-Tolerant Hard Real-Time Tasks with Arbitrary Large Deadlines,"Schedulability analysis based on the worst-case response time is an important research issue in real-time systems. Traditional schedulability analysis for fault-tolerant real-time systems is restricted to tasks whose deadlines are no larger than their periods; this paper extends the computation model so that the deadline of each task can be longer than its period. Based on the worst-case response time schedulability analysis for fault-tolerant hard real-time task sets, we present a fault-tolerant priority assignment algorithm for our proposed computation model. This algorithm can be useful, together with schedulability analysis, especially for real-time communication systems and distributed systems.",2008,0, 6355,Comparison between segmented and nonsegmented attenuation correction on the HR+ tomograph,"Segmentation of transmission (TR) images is implemented to improve the signal-to-noise ratio in attenuation correction (AC) and allow shorter TR scans. This study compares (i) measurement of myocardial glucose metabolic rate (MRGlu), (ii) correction for myocardial wall thickness and (iii) measurement of lung density, by using AC derived from transmission images (a) unsegmented (US), (b) segmented (S) and (c) reconstructed with OSEM. All data were acquired on a CTI HR+ tomograph and processed with standard scanner software. Methods: TR scans were carried out on a multi-cylinder phantom containing substances of varying density, and mean μ-values (cm⁻¹) were derived from the TR images. A multi-compartment heart phantom was constructed as a simple simulation of varying myocardial wall thickness and left ventricular volume. MRGlu, using FDG (2D mode) and blood volume (C15O inhalation), was measured in 11 patients with coronary heart disease, following a 20 min TR scan. Results: Good linearity was found between US μ-values and density, but there were significant discrepancies with S over the typical range of lung density, and better agreement with OSEM. Overall mean ratios of MRGlu were within 2% for the three AC methods, but there was systematic individual variation about the mean. The C.V. of regional MRGlu for each patient, using US AC, was not significantly different from S and OSEM. The heart phantom data demonstrated reasonable accuracy of correction for myocardial wall thickness. Conclusions: US AC on the HR+ gives more accurate measurement of the range of density encountered in the chest and more reliable measurement of MRGlu than S and OSEM.
There appears to be no ""statistical penalty"" in the use of US AC, noise being dominated by that in the emission data.",2004,0, 6356,Bit-slice logic interleaving for spatial multi-bit soft-error tolerance,"Semiconductor devices are becoming more susceptible to single event upsets (SEUs) as device dimensions, operating voltages and frequencies are scaled. The majority of architecture-, logic- and circuit-level techniques that have been developed to address SEUs in logic assume a single-point fault model. This will soon be insufficient, as the occurrence of spatial multi-bit errors is becoming prevalent in highly scaled devices. In this paper, we explore this new fault model and evaluate the effectiveness of conventional fault tolerance techniques in mitigating such faults. We also extend the idea of bit interleaving in memory to logic bit slices and explore its utility as an approach to spatial multi-bit error mitigation in logic. We present a comparison of these techniques using a case study of a Brent-Kung adder in a 90-nm process.",2010,0, 6357,Stator winding turn-fault detection for closed-loop induction motor drives,"Sensorless diagnostics for line-connected machines is based on extracting fault signatures from the spectrum of the line currents. However, for closed-loop drives, the power supply is a regulated current source and, hence, the motor voltages must also be monitored for fault information. In this paper, a previously proposed neural network scheme for turn-fault detection in line-connected induction machines is extended to inverter-fed machines, with special emphasis on closed-loop drives. Experimental results are provided to illustrate that the method is impervious to machine and instrumentation nonidealities, and that it has lower data memory and computation requirements than existing schemes, which are based on data lookup tables.",2003,0, 6358,Hardware-based Reliability Tree (HRT) for fault tree analysis,"Reliability analysis of critical systems is performed using fault trees. Fault trees are then converted to their equivalent Binary Decision Diagram, Cut Set, Markov Chain or Bayesian Network representations. These approaches, however, are complex and time-consuming if a continuous-time reliability curve is desired, particularly for large systems. This paper introduces the Hardware-based Reliability Tree (HRT), which can be implemented in hardware in order to decrease the reliability calculation time for a complex system. In this method, from a given fault tree, an equivalent Reliability Tree is generated and equivalent hardware using op-amp, adder, gain, and multiplier circuits is constructed. After obtaining the continuous reliability curve over time, an integrator is utilized to calculate the system's time to failure. To evaluate the model, two systems were considered. The first is a security alarm system; the second consists of two processors which share a single spare, so that if either processor fails, the spare is activated. Evaluation of these benchmarks using the hardware-implemented HRT demonstrates a speedup of up to 3.4E+6.",2010,0, 6359,Fault tolerance as an aspect using JReplica,"Reliability and availability are very important concerns in the development process of distributed systems. In order to improve these features, object replication mechanisms have been introduced.
Programming replication policies for a given application is not an easy task, and this is the reason why transparency for the programmer has been one of the most important properties offered by all replication models. However, this transparency for the programmer is not always desirable. In this paper we present a replication model, JReplica, based on Aspect Oriented Programming (AOP). JReplica allows the replication code to be specified separately from the functional behaviour of objects, providing not only a high degree of transparency, as done by previous models, but also the possibility for programmers to introduce new behaviour to specify different fault tolerance requirements. Moreover, the replication aspect has been introduced at design time; in this way, UML has been extended in order to consider replication issues separately when designing fault-tolerant systems.",2001,0, 6360,Enable efficient stage construction for replication based fault-tolerant execution of mobile agent,"Reliability as well as fault-tolerance is a fundamental issue for the development of robust mobile agent systems. A number of research works have been done in these areas. Some researchers adopt a spatial-replication-based approach, since it can reduce the blocking possibility of mobile agent execution. Unfortunately, this fault-tolerance scheme is not cost-effective because it incurs great costs in time and communication, which leads to increased system load. In this paper, we present a dynamic stage construction protocol for mobile agent execution that maintains high performance without being forced to make trade-offs in reliability. The protocol schedules task execution dynamically according to the execution ability of visited nodes, hence avoiding unnecessary revisits to them at a later stage and cutting down the total execution time. On the other hand, it has a specific node selection strategy to guarantee reliability, decrease the communication overhead between two consecutive stages, and improve efficiency.",2005,0, 6361,A QoS-Aware Middleware for Fault Tolerant Web Services,"Reliability is a key issue of the service-oriented architecture (SOA), which is widely employed in critical domains such as e-commerce and e-government. Redundancy-based fault tolerance strategies are usually employed for building reliable SOA on top of unreliable remote Web services. Based on the idea of user collaboration, this paper proposes a QoS-aware middleware for fault-tolerant Web services. Based on this middleware, service-oriented applications can dynamically adjust their optimal fault tolerance strategy to achieve good service reliability as well as good overall performance. A dynamic fault tolerance replication strategy is designed and evaluated. Experiments are conducted to illustrate the advantages of the proposed middleware as well as the dynamic fault tolerance replication strategy. A comparison of the effectiveness of the proposed dynamic fault tolerance strategy and various traditional fault tolerance strategies is also provided.",2008,0, 6362,Fault-tolerance for stateful application servers in the presence of advanced transactions patterns,"Replication is widely used in application server products to tolerate faults. An important challenge is to correctly coordinate replication and transaction execution for stateful application servers. Many current solutions assume that a single client request generates exactly one transaction at the server.
However, it is quite common that several client requests are encapsulated within one server transaction or that a single client request initiates several server transactions. In this paper, we propose a replication tool that is able to handle these variations in request/transaction association. We have integrated our approach into the J2EE application server JBoss. Our evaluation using the ECPerf benchmark shows that the approach has low overhead.",2005,0, 6363,Detecting VLIW Hard Errors Cost-Effectively through a Software-Based Approach,"Research indicates that as technology scales, hard errors such as wear-out errors are increasingly becoming a critical challenge for microprocessor design. While hard errors in memory structures can be efficiently detected by error correction codes, detecting hard errors in functional units cost-effectively is a challenging problem. In this paper, we propose to exploit the idle cycles of the under-utilized VLIW functional units to run test instructions for detecting wear-out errors without increasing the hardware cost or significantly impacting performance. We also explore the design space of this software-based approach to balance the error detection latency and the performance for VLIW architectures. Our experimental results indicate that such a software-based approach can effectively detect hard errors with minimal impact on performance for VLIW processors, which is particularly useful for reliable embedded applications with cost constraints.",2007,0, 6364,Fault tolerance in autonomic computing environment,"Since current information systems are characterized by dynamic changes in their configurations and scales while providing non-stop services, system management must inevitably rely on autonomic computing. Since fault tolerance is one of the important system management issues, it should also be incorporated in an autonomic computing environment. This paper discusses what should be taken into consideration and what approaches are available to realize fault tolerance in such environments.",2002,0, 6365,Gigabit Ethernet for Stacking LAN's Networks Performance Correction,"Since computer networks have become increasingly disorderly, growing stacking numbers are damaging network performance, and because of overloading and data diversification, applications are becoming more demanding, requiring a solution that offers more than just higher speed. Gigabit Ethernet is a grand unifying technology that enables communication of multiple forms of content: voice, video and data. This work analyzes the necessity of a Gigabit Ethernet solution for LAN traffic; the research evaluates network performance on hardware with Fast Ethernet technology, together with a software simulation of a Gigabit Ethernet network running demanding applications (VoIP, image transfer, video over IP, etc.).",2006,0, 6366,A Fault-Tolerant Scheme for Multicast Communication Protocols,"Since multicast is the best technology for providing one-to-many communication, more and more service providers are using it to deliver the same service to multiple customers. As such, providing fault tolerance to multicast connections is gaining attention in both business and research communities, because a single link or node failure in the multicast delivery tree affects a large number of customers. Several schemes have been proposed for fault recovery in multicast communication.
They either calculate a new tree without using any node from the existing tree or calculate a path from the affected node/tree to the unaffected tree when a fault occurs. In either case, they need a global view of the multicast communication tree. In this paper, we propose a fault-tolerant scheme in which we do not need a global view of the multicast tree. We compute the shortest path from a node to the source of the multicast tree assuming that the node's link to its parent node in the multicast tree is broken. The shortest path information is sent hop-by-hop toward the source and is stored in the routers. When the assumed broken link actually breaks, the recovery message is sent toward the source, and the previously stored fault recovery message at each node is used to construct a multicast recovery tree.",2005,0, 6367,An adaptive fault tolerance for situation-aware ubiquitous computing,"Since ubiquitous applications need situation-aware middleware services and the computing environment (e.g., resources) keeps changing as the applications change, it is challenging to detect errors and recover from them in order to provide seamless services and avoid a single point of failure. This paper proposes an adaptive fault tolerance (AFT) algorithm in a situation-aware middleware framework and presents a simulation model of AFT-based agents.",2005,0, 6368,Monte Carlo Analysis of the Effects of Soft Errors Accumulation in SRAM-Based FPGAs,"Single event effects in SRAM-based FPGAs have been widely studied, and there is a variety of mitigation techniques that can be used in order to achieve complete fault tolerance. In this scenario, multiple faults are becoming a concern, and new methodologies have to be developed in order to evaluate the effects of fault accumulation. In this paper a new Monte Carlo based methodology is used to evaluate soft error accumulation in the configuration memory of triple modular redundancy designs implemented in SRAM-based FPGAs. Analytical predictions are confirmed by means of fault injection experiments.",2008,0, 6369,Compiler-level implementation of single Event Upset errors mitigation algorithms,"Single event upsets are a common source of failure in microprocessor-based systems working in environments with increased radiation levels, especially in places like accelerators and synchrotrons, where sophisticated digital devices operate close to the radiation source. One of the possible solutions to increase the radiation immunity of microprocessor systems is a strict programming approach known as software-implemented hardware fault tolerance (SIHFT). Unfortunately, a manual implementation of SIHFT algorithms is difficult and can introduce additional problems with program functionality caused by human errors. In this paper the author presents a new approach to this problem, based on modifications of the source code of the C language compiler. Protection methods are applied automatically during source code processing at the intermediate representation level of the compiled program.",2009,0, 6370,Cross-Layer Analysis of Error Control in Wireless Sensor Networks,"Severe energy constraints, and hence low-power communication requirements, amplify the significance of energy-efficient and preferably cross-layer error control mechanisms in wireless sensor networks (WSNs). In this paper, a cross-layer methodology for the analysis of error control schemes in WSNs is presented such that the effects of multi-hop routing and the broadcast nature of the wireless channel are investigated.
More specifically, the cross-layer effects of the routing, medium access and physical layers are considered. This analysis enables a comprehensive comparison of forward error correction (FEC) and automatic repeat request (ARQ) in WSNs. FEC schemes improve the error resiliency compared to ARQ. In a multi-hop network, this improvement can be exploited by reducing the transmit power (transmit power control) or by constructing longer hops (hop length extension), which can be achieved through channel-aware routing protocols. The results of our analysis reveal that for certain FEC codes, hop length extension decreases both the energy consumption and the end-to-end latency subject to a target PER compared to ARQ. Thus, FEC codes can be regarded as an important candidate for delay-sensitive traffic in WSNs. On the other hand, transmit power control results in significant savings in energy consumption at the cost of increased latency. Moreover, the cases where ARQ outperforms FEC codes are indicated for various end-to-end distance and target PER values.",2006,0, 6371,Reliability considerations and fault handling strategies for multi-MW modular drive systems,"Shunt-interleaved electrical drive systems consisting of several parallel medium-voltage back-to-back converters enable power ratings of tens of MVA, low current distortions and a very smooth airgap torque. In order to meet stringent reliability and availability goals despite the large parts count, the modularity of the drive system needs to be exploited, and a suitable fault handling strategy that allows the exclusion and isolation of faulted threads is required. This avoids the shutdown of the complete system and enables the drive system to continue operating. If full power capability is also required in degraded-mode operation, redundancy on a thread level needs to be added. Experimental results confirm that thread exclusion allows the isolation of the majority of faults without affecting the mechanical load. As the drive system continues to run, faulted threads can be repaired and then added on-the-fly to the running system by thread inclusion. As a result, the downtime of such a modular drive system is expected not to exceed a few hours per year.",2009,0, 6372,Inter-defect charge exchange in silicon particle detectors at cryogenic temperatures,"Silicon particle detectors in the next generation of experiments at the CERN Large Hadron Collider will be exposed to a very challenging radiation environment. The principal obstacle to long-term operation arises from changes in the detector doping concentration (Neff), which lead to an increase in the required operating voltage. We have previously presented a model of inter-defect charge exchange between closely-spaced centres in the dense terminal clusters formed by hadron irradiation. This manifestly non-Shockley-Read-Hall mechanism leads to a marked increase in carrier generation rate and negative space charge over the SRH prediction. We present here measurements of spectra from 241Am alpha particles and 1064 nm laser pulses as a function of bias over a range of temperatures. Values of Neff and substrate type are extracted from the spectra and compared with the model. The model is implemented in both a commercial finite-element device simulator (ISE-TCAD) and a purpose-built simulation of inter-defect charge exchange.
Deviations from the model are explored, and conclusions are drawn as to the feasibility of operating silicon particle detectors at cryogenic temperatures.",2001,0, 6373,GRAAL: a new fault tolerant design paradigm for mitigating the flaws of deep nanometric technologies,"Silicon-based CMOS technologies are fast approaching their ultimate limits. As these limits are approached, power dissipation, fabrication yield, and reliability worsen steadily, making further nanometric scaling increasingly difficult. These problems would stop further scaling of silicon-based CMOS technologies at channel lengths between 10 and 20 nm. But even before reaching these limits, these problems could become show-stoppers unless new techniques are introduced to maintain acceptable levels of power dissipation, yield and reliability. The paper describes the principles of GRAAL (global reliability architecture approach for logic), a new fault-tolerant architecture for logic designs, aimed at providing a global solution for mitigating the above-mentioned problems.",2007,0, 6374,Study of Fault detection device based on ARM in distribution networks,"Since the power distribution system in China currently adopts a low-current grounding system with an indirectly grounded neutral, single-phase grounding faults occur frequently in the system. With reference to the actual conditions of single-phase grounding faults, this paper designs a new real-time monitoring and fault diagnosis device based on ARM (S3C2410). On the one hand, the device can measure elementary power parameters, such as three-phase voltages, current, power, and frequency; on the other hand, it can monitor the system status in real time. Operational results show that the device computes quickly, has strong anti-jamming capability and stable operation, offers a good performance-to-price ratio, and is very practical for low-current grounding systems and the grid.",2010,0, 6375,A robust and fault-tolerant distributed intrusion detection system,"Since it is impossible to predict and identify all the vulnerabilities of a network, and penetration into a system by malicious intruders cannot always be prevented, intrusion detection systems (IDSs) are essential entities for ensuring the security of a networked system. To be effective in carrying out their functions, the IDSs need to be accurate, adaptive, and extensible. Given these stringent requirements and the high level of vulnerabilities of today's networks, the design of an IDS has become a very challenging task. Although extensive research has been done on intrusion detection in distributed environments, distributed IDSs suffer from a number of drawbacks, e.g., high rates of false positives and low detection efficiency. In this paper, the design of a distributed IDS is proposed that consists of a group of autonomous and cooperating agents. In addition to its ability to detect attacks, the system is capable of identifying and isolating compromised nodes in the network, thereby introducing fault-tolerance in its operations.
The experiments conducted on the system have shown that it has high detection efficiency and low false positives compared to some of the currently existing systems.",2010,0, 6376,Vertical quench furnace Hammerstein fault predicting model based on least squares support vector machine and its application,"Since a large-scale vertical quench furnace is voluminous and its working condition is a typically complex process with distributed parameters, nonlinearity, multiple inputs/outputs, closely coupled variables, etc., a Hammerstein model of the furnace is presented. Firstly, the nonlinear function of the Hammerstein model is constructed by least squares support vector machine regression. A numerical algorithm for subspace systems (singular value decomposition, SVD) is utilized to identify the Hammerstein model. Finally, the model is used to predict the furnace temperature. The simulation research shows that this model provides accurate predictions and has desirable application value.",2009,0, 6377,BDD based analysis of parametric fault trees,"Several extensions of the fault tree (FT) formalism have been proposed in the literature. One of them, called the parametric fault tree (PFT), is oriented to the modeling of redundant systems and provides a compact form to model the redundant parts of the system. Using PFTs instead of FTs to model systems with replicated parts, the model design is simplified, since the analyst can fold subtrees with the same structure into a single parametric subtree, reducing the number of elements in the model. The method based on binary decision diagrams (BDDs) for the quantitative analysis of FTs is adapted in this paper to cope with the parametric form of PFTs: an extension of BDDs called parametric BDDs (pBDDs) is used to analyze PFTs. The solution process is simplified by using pBDDs: comparing the pBDD obtained from a PFT with the ordinary BDD obtained from the unfolded FT, we can observe a reduction in the number of nodes inside the pBDD. This reduction is proportional to the level of redundancy inside the PFT and leads to a consequent reduction in the number of steps necessary to perform the analysis. Concerning the qualitative analysis, we can observe that several minimal cut sets (MCSs) obtained from the FT model of a redundant system involve basic events relative to similar components. A parametric MCS (pMCS) allows such MCSs to be grouped into an equivalence class and, consequently, highlights only the failure pattern, regardless of the identity of the replicated components. A method to derive pMCSs from a PFT is provided in the paper.",2006,0, 6378,Induction motor fault detection and diagnosis using supervised and unsupervised neural networks,"Successful and reliable motor fault detection and diagnosis requires expertise and knowledge. Neural network technologies can be used to provide an inexpensive but effective fault detection mechanism. This paper presents two neural network algorithms, of supervised and unsupervised types, with applications to induction motor fault detection and diagnosis problems. The detection algorithm was simulated and its performance verified on various fault types. Simulation results illustrated that, after training the neural network, the system is able to detect the faulty machine.",2002,0, 6379,Evaluation of a Fault-Tolerance Mechanism for HLA-Based Distributed Simulations,"Successful integration of Modeling and Simulation (M&S) in the future Network-Based Defence (NBD) depends, among other things, on providing fault-tolerant (FT) distributed simulations.
This paper describes a framework, named the Distributed Resource Management System (DRMS), for robust execution of simulations based on the High Level Architecture. More specifically, a mechanism for FT in simulations synchronized according to the time-warp protocol is presented and evaluated. The results show that utilization of the FT mechanism, in a worst-case scenario, increases the total number of generated messages by 68% if one fault occurs. When the FT mechanism is not utilized, the same scenario shows an increase in the total number of generated messages of 90%. Considering the worst-case scenario a plausible requirement on an M&S infrastructure of the NBD, the overhead caused by the FT mechanism is considered acceptable.",2006,0, 6380,Demagnetization Analysis of Permanent Magnet Synchronous Machines under Short Circuit Fault,"A sudden symmetrical short circuit is a serious fault during which entire or partial demagnetization of the permanent magnets can occur. The aim of this paper is to analyze the demagnetization phenomenon of permanent magnet synchronous machines (PMSMs) based on FEM and analytical methods. Firstly, transient FEM is utilized to analyze the demagnetization operating point of the permanent magnets when a symmetrical short circuit occurs, and the computed time evolution of the maximum partial demagnetization operating point is derived. Secondly, the synthesized magnetomotive force (MMF) of the short circuit current is analyzed analytically, and through this analysis some characteristics are obtained. Finally, several factors that affect the demagnetization operating point are summarized, and several measures are put forward to improve the maximum demagnetization operating point of the permanent magnets.",2010,0, 6381,Scheduling algorithms for ultra-reliable execution of tasks under both hardware and software faults,"Summary form only given, as follows. We study the development of integrated fault-tolerant scheduling algorithms. The proposed algorithms ensure ultra-reliable execution of tasks, considering both hardware and software failures, as well as improved system performance. Also, the proposed algorithms have the capability for on-line system-level fault diagnosis.",2003,0, 6382,Acyclic circuit partitioning for path delay fault emulation,"Summary form only given. Acyclic partitioning of VLSI circuits is studied under area/delay, I/O size and communication constraints. In this paper, we define the path-delay-fault emulation problem, which adds a new constraint, viz. the path count constraint, to the partitioning problem. We present two algorithms to solve the problem. The first algorithm decomposes a circuit into entirely-fanout-free cones (EFFCs) and clusters them into partitions. The second one finds an intermediate partitioning solution with the partitioning algorithm ignoring the path count constraint; later, it applies the first algorithm to the partitions which violate the path count constraint. We implemented the first algorithm and measured its efficiency in terms of the number of resulting partitions, cut cost, and time cost for the ISCAS85 benchmarks.",2005,0, 6383,New electron beam proximity effects correction (EBPC) approach for 45nm and 32nm nodes,"Summary form only given. Following the successful results obtained in recent years, the use of EBDW (e-beam direct write) for ASIC manufacturing has now been demonstrated. However, throughput and resolution capabilities need to be improved to increase its appeal for fast-cycle products and advanced R&D applications.
In this context, process development requires good dimensional control of patterns, which means better control of the proximity effects caused by back-scattered electrons and other phenomena. Several methods exist to correct for these effects; the most commonly used is dose adjustment, as implemented by the PDF Solutions PROXECCO software package. However, it has been observed that this correction is not perfect; significantly, it fails to accurately correct the smallest and densest structures encountered in designs with features below 65 nm. To continue reducing feature sizes, a method providing a correction complementary to PROXECCO has been proposed. Based upon detailed characterization of the observed effects, a rule-based correction scheme has been developed, not dissimilar to the rule-based corrections used in optical proximity correction (OPC). This electron beam proximity correction, or EBPC, has been shown to provide good results down to 40 nm, with improvements in CD linearity, isolated-dense bias (IDB), line-end shortening (LES), mask error enhancement factor (MEEF) and energy latitude (EL), all of which lead to an improvement in the overall accuracy of the design and, furthermore, in the process window.",2005,0, 6384,System-level fault-tolerance in large-scale parallel machines with buffered coscheduling,"Summary form only given. As the number of processors for multiteraflop systems grows to tens of thousands, with proposed petaflops systems likely to contain hundreds of thousands of processors, the assumption of fully reliable hardware has been abandoned. Although the mean time between failures for the individual components can be very high, the large total component count will inevitably lead to frequent failures. It is therefore of paramount importance to develop new software solutions to deal with the unavoidable reality of hardware faults. We will first describe the nature of the failures of current large-scale machines, and extrapolate these results to future machines. Based on this preliminary analysis we will present a new technology that we are currently developing, buffered coscheduling, which seeks to implement fault tolerance at the operating system level. Major design goals include dynamic reallocation of resources to allow continuing execution in the presence of hardware failures, very high scalability, high efficiency (low overhead), and transparency (requiring no changes to user applications). Preliminary results show that this is attainable with current hardware.",2004,0, 6385,Missing-sensor-fault-tolerant control for SSSC FACTS device with real-time implementation,"Summary form only given. Control of power systems relies on the availability and quality of sensor measurements. However, measurements are inevitably subjected to faults caused by sensor failure, broken or bad connections, bad communication, or malfunction of some hardware or software. These faults in turn may cause the failure of power system controllers and consequently severe contingencies in the power system. To avoid such contingencies, this paper presents a sensor evaluation and (missing sensor) restoration scheme (SERS) using auto-associative neural networks (auto-encoders) and particle swarm optimization (PSO). Based on the SERS, a missing-sensor-fault-tolerant control (MSFTC) is developed for controlling a static synchronous series compensator (SSSC) connected to a power network.
This MSFTC improves the reliability, maintainability and survivability of the SSSC and the power network. The effectiveness of the MSFTC is demonstrated by a real-time implementation of an SSSC connected to the IEEE 10-machine 39-bus system on the real time digital simulator (RTDS) and TMS320C6701 digital signal processor platform. The proposed fault-tolerant control can be readily applied to many existing controllers in power systems.",2009,0, 6386,Event-based fault detection of manufacturing cell: Data inconsistencies between academic assumptions and industry practice,"Some problems with event-based faults in manufacturing systems cannot be handled by existing fault detection solutions, including finding faults in event-based data for systems for which limited information is known. A new fault detection solution that finds faults in event-based data using model generation is presented here. This solution assumes that some information is known about the system from its design information and data structure. An example application of this solution is presented for a Ford machining cell that has been experiencing a gantry waiting problem. In the course of this example application, five inconsistencies were found between relatively common academic assumptions made by this fault detection solution (as well as others) and the actual cell's set-up and data. These inconsistencies and possible means of addressing them are discussed. Some of these means to resolve the inconsistencies have been implemented, and preliminary results in generating models using the fault detection solution are presented.",2010,0, 6387,Using Logic Criterion Feasibility to Reduce Test Set Size While Guaranteeing Fault Detection,"Some software testing logic coverage criteria demand inputs that guarantee detection of a large set of fault types. One powerful such criterion, MUMCUT, is composed of three criteria, where each constituent criterion ensures the detection of specific fault types. In practice, the criteria may overlap in terms of fault types detected, thereby leading to numerous redundant tests, but due to the unfortunate fact that infeasible test requirements don't result in tests, all the constituent criteria are needed. The key insight of this paper is that analysis of the feasibility of the constituent criteria can be used to reduce test set size without sacrificing fault detection. In other words, expensive criteria can be reserved for use only when they are actually necessary. This paper introduces a new logic criterion, Minimal-MUMCUT, based on this insight. Given a predicate in minimal DNF, a determination is made of which constituent criteria are feasible at the level of individual literals and terms. This in turn determines which criteria are necessary, again at the level of individual literals and terms. This paper presents an empirical study using predicates in avionics software. The study found that Minimal-MUMCUT reduces test set size -- without sacrificing fault detection -- to as little as a few percent of the test set size needed if feasibility is not considered.",2009,0, 6388,Fault-Tolerant 2D Fourier Transform with Checksum Encoding,"Space-based applications increasingly require more computational power to process large volumes of data and alleviate the downlink bottleneck. In addressing these demands, commercial-off-the-shelf (COTS) systems can serve a vital role in achieving performance requirements. 
However, these technologies are susceptible to radiation effects in the harsh environment of space. In order to effectively exploit high-performance COTS systems in future spacecraft, proper care must be taken with hardware and software architectures and algorithms that avoid or overcome the data errors that can lead to erroneous results. One of the more common kernels in space-based applications is the 2D fast Fourier transform (FFT). Many papers have investigated fault-tolerant FFT, but no algorithm has been devised that would allow for error correction without re-computation from original data. In this paper, we present a new method of applying algorithm-based fault tolerance (ABFT) concepts to the 2D-FFT that will not only allow for error detection but also error correction within memory-constrained systems as well as ensure coherence of the data after the computation. To further improve reliability of this ABFT approach, we propose use of a checksum encoding scheme that addresses issues related to numerical precision and overflow. The performance of the fault-tolerant 2D-FFT will be presented and featured as part of a dependable range Doppler processor, which is a subcomponent of synthetic-aperture radar algorithms. This work is supported by the Dependable Multiprocessor project at Honeywell and the University of Florida, one of the experiments in the Space Technology 8 (ST-8) mission of NASA's New Millennium Program.",2007,0, 6389,Isolating Suspiciousness from Spectrum-Based Fault Localization Techniques,"Spectrum-based fault localization (SBFL) is one of the most promising fault localization approaches; it normally uses the failed and passed program spectra to evaluate the risks for all program entities. However, it does not explicitly distinguish the differing degrees of definiteness between the information associated with the failed spectrum and the passed spectrum, which may result in unreliable localization of faults. Thus, in this paper, we propose a refinement method to improve the accuracy of the prediction by SBFL through eliminating the indefinite information. Our method categorizes all statements into two groups according to their different suspiciousness, and then uses different evaluation schemes for these two groups. In this way, we can reduce the use of the unreliable information in the ranking list, and finally provide a more precise result. An experimental study shows that for some SBFL techniques, our method can significantly improve their performance in some situations, while in other cases it still retains the techniques' original performance.",2010,0, 6390,An approach for analyzing and correcting spelling errors for non-native Arabic learners,"Spellcheckers are widely used in many software products for identifying errors in users' writing. However, they are not designed to address spelling errors made by non-native learners of a language. As a matter of fact, spelling errors made by non-native learners are more than just misspellings. Non-native learners' errors require special handling in terms of detection and correction, especially when it comes to morphologically rich languages such as Arabic, which have few related resources. In this paper, we address common error patterns made by non-native Arabic learners and suggest a two-layer spell-checking approach, including spelling error detection and correction.
The proposed error detection mechanism is applied on top of Buckwalter's Arabic morphological analyzer in order to demonstrate the capability of our approach in detecting possible spelling errors. The correction mechanism adopts a rule-based edit distance algorithm. Rules are designed in accordance with common spelling error patterns made by Arabic learners. Error correction uses a multiple filtering mechanism to propose final corrections. The approach utilizes semantic information given in exercise questions in order to achieve highly accurate detection and correction of spelling errors made by non-native Arabic learners. Finally, the proposed approach was evaluated using real test data and promising results were achieved.",2010,0, 6391,Errors in Operational Spreadsheets: A Review of the State of the Art,"Spreadsheets are thought to be highly prone to errors and misuse. In some documented instances, spreadsheet errors have cost organizations millions of dollars. Given the importance of spreadsheets, little research has been done on how they are used in organizations. We review the existing state of understanding of spreadsheet errors, concentrating on two studies. One analyzes errors in 50 operational spreadsheets; the other studies the quantitative impacts of errors in 25 spreadsheets from five organizations. These studies suggest that counts of error cells are not sufficient to understand the problem of errors. Average error cell counts reported in the literature range from 1 percent to 5 percent depending on definitions and methods used. However, some errors are benign while others are fatal. Furthermore, spreadsheets in some organizations appear to be error-free. Several types of new research are needed to understand the spreadsheet error problem more fully.",2009,0, 6392,The challenge of accurate software project status reporting: a two stage model incorporating status errors and reporting bias,"Software project managers perceive and report on the project's status. Recognizing that their status perceptions might be wrong and that they may not faithfully report what they believe leads to a natural question - how different is the true software project status from the reported status? In this paper, we construct a two-stage model which accounts for project manager errors in perception and bias that might be applied before reporting the project's status to executives. We call the combined effect of errors in perception and bias ""project status distortion"". The probabilistic model has its roots in information theory and uses the discrete project status from traffic-light reporting. The true states of projects of varying risk were elicited from a panel of five experts, and these formed the model input. Key findings suggest that executives should be skeptical of favorable status reports, and that, for higher-risk projects, executives should concentrate on reducing bias if they are to improve the accuracy of project reporting.",2001,0, 6393,Study on software reliability design criteria based on defect patterns,"Software reliability design criteria, one of the primary means to improve software reliability, could help to summarize experience and lessons learned. Nowadays, more and more organizations are putting emphasis on the collection and utilization of software defects. Software defects are the root causes of software failures.
Therefore, software reliability design criteria based on the analysis of the defects could avoid the occurrence of similar defects and improve software quality. This paper investigates the idea of studying software reliability design criteria on the basis of defect patterns. Through the analysis of defect data, we advance a defect classification suitable for each software development phase and the definition of software defect patterns. The software defect patterns in the requirement analysis, design and coding phases are listed. A method to solve the problem of how to convert from defect patterns to reliability design criteria is proposed. Subsequently, the reliability design criteria in the requirement analysis, design and coding phases are expounded using the method. The reliability design criteria are shown to be practical and valid in application examples.",2009,0, 6394,Quasi-Renewal Time-Delay Fault-Removal Consideration in Software Reliability Modeling,"Software reliability growth models based on a nonhomogeneous Poisson process (NHPP) have been considered as one of the most effective among various models since they integrate the information regarding testing and debugging activities observed in the testing phase into the software reliability model. Although most of the existing NHPP models have progressed successfully in their estimation/prediction accuracies by modifying the assumptions with regard to the testing process, these models were developed based on the instantaneous fault-removal assumption. In this paper, we develop a generalized NHPP software reliability model considering quasi-renewal time-delay fault removal. The quasi-renewal process is employed to estimate the time delay due to identifying and prioritizing the detected faults before actual code change in the software reliability assessment. Model formulation based on the quasi-renewal time-delay assumption is provided, and the generalized mean value function (MVF) for the proposed model is derived by using the method of steps. The general solution of the MVFs for the proposed model is also obtained for some specific existing models. The numerical examples, based on a software failure data set, show that the consideration of the quasi-renewal time-delay fault-removal assumption improves the descriptive properties of the model, meaning that the length of the time delay decreases as testers and programmers adapt themselves to the working environment while testing and debugging activities are in progress.",2009,0, 6395,Consider of fault propagation in architecture-based software reliability analysis,"Software reliability models are used for the estimation and prediction of software reliability. Existing models use either a black-box approach based on test data from the software test phase or a white-box approach based on software architecture and individual component reliability, the latter being better suited to assessing the reliability of modern software systems. However, most of the architecture-based reliability models assume that a failure occurring within one component will not cause any other component to fail, which is inconsistent with the facts. This paper introduces a reliability model and a reliability analysis technique for architecture-based reliability evaluation. Our approach extends existing reliability models by considering fault propagation.
We believe that this model can be used to effectively improve software quality.",2009,0, 6396,Compiling a benchmark of documented multi-threaded bugs,"Summary form only given. Testing multithreaded, concurrent, or distributed programs is acknowledged to be a very difficult task. We decided to create a benchmark of programs containing documented multithreaded bugs that can be used in the development of testing tools for the domain. In order to augment the benchmark with a sizable number of programs, we assigned students in a software testing class to write buggy multithreaded Java programs and document the bugs. This paper documents this experiment. We explain the task that was given to the students, go over the bugs that they put into the programs both intentionally and unintentionally, and show our findings. We believe this part of the benchmark shows typical programming practices, including bugs, of novice programmers. In grading the assignments, we used our technologies to look for undocumented bugs. In addition to finding many undocumented bugs, which was not surprising given that writing correct multithreaded code is difficult, we also found a number of bugs in our tools. We think this is a good indication of the expected utility of the benchmark for multithreaded testing tool creators.",2004,0, 6397,Effectiveness of Software Solutions in Reducing Errors Due to Multi-Path in Spherical near Field Measurements,"Summary form only given. The presence of multi-path reflections is usually a source of significant error in low frequency spherical near field measurements, as the test antennas typically have low gain and the cost of lining the anechoic chamber with optimal low reflectivity is prohibitive. Some earlier papers have discussed the effects of these errors on far field patterns and the mitigation of some of the errors using hardware solutions and by range optimization. Many of the commercially available software suites for near-field to far field conversion provide algorithms and utilities for reducing the errors due to multi-path. Some examples of such software solutions include the MARS add-on for Near Field Systems Inc. (NSI), the IsoFilter technique from MI Technologies, and spherical mode filtering routines in CASAMS and TICRA SNIFTD software. These techniques generally require either oversampling of measured data or mounting the antenna such that the phase centre is displaced with respect to the centre of measurement, thereby acquiring data on a sphere larger than the minimum sphere originating at the phase centre. The error reduction algorithms then estimate and filter out the contribution due to multi-path. This paper presents the results of a study undertaken to determine the effectiveness of some of these algorithms in reducing the multi-path errors in a low frequency measurement facility. A variety of antennas such as horn antennas, log-periodic, helical and phased arrays were measured in a spherical near field facility housed in a sub-optimally lined anechoic chamber. The far field data with and without the software correction were compared to the expected behaviour of the antenna calculated using numerical techniques. The results of these measurements showing the effectiveness and limitations of the techniques studied will be presented. Practical considerations in reaching optimal solutions will be discussed.",2007,0, 6398,"Architecture of LA-MPI, a network-fault-tolerant MPI","Summary form only given.
We discuss the unique architectural elements of the Los Alamos message passing interface (LA-MPI), a high-performance, network-fault-tolerant, thread-safe MPI library. LA-MPI is designed for use on terascale clusters which are inherently unreliable due to their sheer number of system components and trade-offs between cost and performance. We examine in detail the design concepts used to implement LA-MPI. These include reliability features, such as application-level checksumming, message retransmission, and automatic message rerouting. Other key performance enhancing features, such as concurrent message routing over multiple, diverse network adapters and protocols, and communication-specific optimizations (e.g., shared memory) are examined.",2004,0, 6399,Soft-Errors Phenomenon Impacts on Design for Reliability Technologies,"Summary form only given. We mainly address here the ""alter ego"" of quality, which is reliability, a growing concern for designers using the latest technologies. After the DFM nodes at 90 nm and 65 nm, we are entering the DFR era, or design for reliability, straddling 65 nm to 45 nm and beyond. Because of the random character of reliability - failures can happen anytime, anywhere - executives should mitigate reliability problems in terms of risk, whose costs include the cost of recalls, warranty costs, and loss of goodwill. Taking as an example the soft error phenomenon, we demonstrate how the industry first started to respond to this new technology scaling problem with silicon test to measure and understand the issue, but should quickly move to resolving reliability issues early in the design. In this field, designers can largely benefit from new EDA analysis tools and specific IPs to overcome this new hurdle in a timely and economical manner.",2007,0, 6400,Application of SVM to engine parameter collector fault diagnosis,"The Support Vector Machine (SVM), based on the structural risk minimization principle, is now widely used in pattern recognition, classification and other research fields. It shows better generalization performance than traditional statistical learning theory, especially with small samples. In this paper, some dimensionless parameters are selected as the SVM feature vector, and then the support vector machine is applied to fault diagnosis of the engine parameter collector. Results show that it has good ability in fault pattern classification of the engine parameter collector.",2008,0, 6401,A Hybrid Approach to Detecting Security Defects in Programs,"Static analysis works well at checking defects that clearly map to source code constructs. Model checking can find defects such as deadlocks and routing loops that are not easily detected by static analysis, but faces the problem of state explosion. This paper proposes a hybrid approach to detecting security defects in programs. A fuzzy inference system is used to select between the two detection approaches. A clustering algorithm is developed to divide a large system into several clusters in order to apply model checking. Ontology-based static analysis employs logic reasoning to intelligently detect the defects. We also put forward strategies to improve the performance of the static analysis.
Finally, we perform experiments to evaluate the accuracy and performance of the hybrid approach.",2009,0, 6402,Static Detection of Disassembly Errors,"Static disassembly is a crucial first step in reverse engineering executable files, and there is a considerable body of work in reverse-engineering of binaries, as well as areas such as semantics-based security analysis, that assumes that the input executable has been correctly disassembled. However, disassembly errors, e.g., arising from binary obfuscations, can render this assumption invalid. This work describes a machine-learning-based approach, using decision trees, for statically identifying possible errors in a static disassembly; such potential errors may then be examined more closely, e.g., using dynamic analyses. Experimental results using a variety of input executables indicate that our approach performs well, correctly identifying most disassembly errors with relatively few false positives.",2009,0, 6403,Hygrothermal Failures From Small Defects in Lead-Free Solder Reflowed Electronic Packages,"Steam-driven delamination failure is a main failure mode in electronics packages during solder reflow. Steam pressures built up within interfaces in packages are sensitive functions of the reflow temperature. The switch to lead-free soldering will raise the reflow temperature by more than 20°C and double the equilibrium saturated steam pressure within defects in the package. The effects of saturated-steam-driven interfacial failure were analyzed using finite element analysis in this study. Analyses revealed that packages which are thin and made using high thermal conductivity materials are at higher risk of failure than conventional packages made using standard materials. This suggests that electronics made with thick and inexpensive encapsulants are less prone to failure when switched to lead-free solder. Portable and mobile electronics which have low profiles and are made of highly thermally conductive encapsulants are at higher risk when switched to lead-free solder reflow. Moreover, the study found that the critical temperature for failure is dependent on the defect size in the package. Reduction of the initial defect size can reduce failures in high-risk packages in lead-free solder reflow.",2007,0, 6404,Fault diagnosis for large-scale wind turbine rolling bearing using stress wave and wavelet analysis,"The stress wave (SW) technique has begun to be applied in fault diagnosis as a new dynamic detection method. However, for low-speed machines it is difficult to detect the fault signals due to their faintness and the background noise. The properties of the SW and its transmission law in a large-scale wind turbine were studied in this paper. Firstly, a three-dimensional contact model of the bearing was set up. Defects occurring on the outer race and the inner race were simulated in the model. Based on the model, the stress, strain and contact stress distributions were computed. Then, the stress and strain distribution laws and the contact stress distribution law of the interface were compared between good and faulty bearings. In real-world testing, fault signals were acquired using a stress wave transducer. Fault characteristic parameters were extracted and the background noise was reduced using wavelet analysis.
Both the simulation and real-world testing results obtained indicate that SW and wavelet transform can be effective methods in the fault diagnosis of large-scale wind turbine bearings.",2005,0, 6405,Enhanced reliability of finite-state machines in FPGA through efficient fault detection and correction,"SRAM-based FPGAs are subjected to ion radiation in many operating environments. Following the current trend of shrinking device feature size and increasing die area, newer FPGAs are more susceptible to radiation-induced errors. Single event upsets (SEUs), also known as soft errors, account for a considerable amount of radiation-induced errors. SEUs are difficult to detect and correct when they affect memory elements present in the FPGA, which are used for the implementation of finite state machines (FSMs). Conventional practice to improve FPGA design reliability in the presence of soft errors is through configuration memory scrubbing and through component redundancy. Configuration memory scrubbing, although suitable for combinatorial logic in an FPGA design, does not work for sequential blocks such as FSMs. This is because the state bits stored in flip-flops (FFs) are variable, and change their value after each state transition. Component redundancy, which is also used to mitigate soft errors, comes at the expense of significant area overhead and increased power consumption compared to nonredundant designs. In this paper, we propose an alternative approach that implements the FSM using synchronous embedded memory blocks to enhance runtime reliability without a significant increase in power consumption. Experiments conducted on various benchmark FSMs show that this approach has higher reliability, lower area overhead, and consumes less power compared to a component redundancy technique.",2005,0, 6406,Evaluation of Single Event Upset Mitigation Schemes for SRAM based FPGAs using the FLIPPER Fault Injection Platform,"SRAM-based reprogrammable FPGAs are sensitive to radiation-induced single event upsets (SEUs), not only in their user flip-flops and memory, but also in the configuration memory. Appropriate mitigation has to be applied if they are used in space, for example the XTMR scheme implemented by the Xilinx TMRTool and configuration scrubbing. The FLIPPER fault injection platform, described in this paper, allows testing the efficiency of the SEU mitigation scheme. FLIPPER emulates SEU-like faults by doing partial reconfiguration and then applies stimuli derived from HDL simulation (VHDL/Verilog test-bench), while comparing the outputs with the golden pattern, also derived from simulation. FLIPPER has its device-under-test (DUT) FPGA on a mezzanine board, allowing an easy exchange of the DUT device. Results from a test campaign are presented using a design from a space application and applying various levels of TMR mitigation.",2007,0, 6407,Improving testability and soft-error resilience through retiming,"State elements are increasingly vulnerable to soft errors due to their decreasing size, and the fact that latched errors cannot be completely eliminated by electrical or timing masking. Most prior methods of reducing the soft error rate (SER) involve combinational redesign, which tends to add area and decrease testability, the latter a concern due to the prevalence of manufacturing defects. Our work explores the fundamental relations between the SER of sequential circuits and their testability in scan mode, and appears to be the first to improve both through retiming.
Our retiming methodology relocates registers so that (1) registers become less observable with respect to primary outputs, thereby decreasing overall SER, and (2) combinational nodes become more observable with respect to registers (but not with respect to primary outputs), thereby increasing scan testability. We present experimental results which show an average decrease of 42% in the SER of latches, and an average improvement of 31% in random-pattern testability.",2009,0, 6408,Research on surface defect inspection for small magnetic rings,"The surface defects of small magnetic rings are diverse in character, and the diameter of a surface defect is usually about 0.1mm; magnetic rings with such defects used in industrial production therefore pose great safety risks. However, the manual visual inspection method currently used is costly and inefficient. Therefore, a new method is needed to inspect small magnetic rings automatically. In this paper, the design of the automatic inspection system is illustrated; a modified BHPF filter is used for de-noising, and eventually the defect is located and recognized.",2009,0, 6409,A Lightweight Fault-Tolerant Mechanism for Network-on-Chip,"Survival capability is becoming a crucial factor in designing multicore processors built with on-chip packet networks, or networks on chip (NoCs). In this paper, we propose a lightweight fault-tolerant mechanism for NoCs based on default backup paths (DBPs) designed to maintain, in the presence of failures, network connectivity of both non-faulty routers as well as healthy processor cores which may be connected to faulty routers. The mechanism provides default paths as backup between certain router ports which serve as alternative datapaths to circumvent failed components within a faulty router. Along with a minimal subset of normal network channels, the set of default backup paths internal to faulty routers form - in the worst case - a unidirectional ring topology that provides network-wide connectivity to all processor cores. Routing using the DBP mechanism is proved to be deadlock-free with only two virtual channels even for fault scenarios in which regular networks degrade to irregular (arbitrary) topologies. Evaluation results show that, for a 2-D mesh wormhole NoC, only 12.6% additional hardware resources are needed to implement the proposed DBP mechanism in order to provide graceful performance degradation without chip-wide failure as the number of faults increases to the maximum needed to form a ring.",2008,0, 6410,Search-Based Prediction of Fault Count Data,"Symbolic regression, an application domain of genetic programming (GP), aims to find a function whose output has some desired property, like matching target values of a particular data set. While typical regression involves finding the coefficients of a pre-defined function, symbolic regression finds a general function, with coefficients, fitting the given set of data points. The concepts of symbolic regression using genetic programming can be used to evolve a model for fault count predictions. Such a model has the advantages that the evolution is not dependent on a particular structure of the model and is also independent of any assumptions, which are common in traditional time-domain parametric software reliability growth models.
This research applies experiments targeting fault prediction using genetic programming and compares the results with traditional approaches to assess efficiency gains.",2009,0, 6411,Software Defect Identification Using Machine Learning Techniques,"Software engineering is a tedious job that involves people, tight deadlines and limited budgets. Delivering what the customer wants involves minimizing the defects in the programs. Hence, it is important to establish quality measures early on in the project life cycle. The main objective of this research is to analyze problems in software code and propose a model that will help catch those problems earlier in the project life cycle. Our proposed model uses machine learning methods. Principal component analysis is used for dimensionality reduction, and decision trees, multi-layer perceptrons and radial basis functions are used for defect prediction. The experiments in this research are carried out with different software metric datasets obtained from real-life projects of three big software companies in Turkey. The improved method that we propose yields satisfactory results in terms of defect prediction",2006,0, 6412,Software fault detection for reliability using recurrent neural network modeling,"Software fault detection is an important factor for quantitatively characterizing software quality. One of the proposed methods for software fault detection is neural networks. Fault detection is actually a pattern recognition task: faulty and fault-free data are different patterns which must be recognized. In this paper we propose a new framework for modeling software testing and fault detection in applications. A recurrent neural network architecture is used to improve the performance of the system. Based on experiments performed on software reliability data obtained from middle-sized application software, it is observed that the non-linear RNN can be effective and efficient for software fault detection.",2010,0, 6413,Automated design flaw correction in object-oriented systems,"Software inevitably changes. As a consequence, we observe the phenomenon referred to as ""software entropy"" or ""software decay"": the software design continually degrades, making maintenance and functional extensions overly costly if not impossible. There exist a number of approaches to identify design flaws (problem detection) and to remedy them (refactoring). There is, however, a conceptual gap between these two stages: there is no appropriate support for the automated mapping of design flaws to possible solutions. Here we propose an integrated, quality-driven and tool-supported methodology to support object-oriented software evolution. Our approach is based on the novel concept of ""correction strategies"". Correction strategies serve as reference descriptions that enable a human-assisted tool to plan and perform all necessary steps for the safe removal of detected design flaws, with special concern towards the targeted quality goals of the restructuring process. We briefly sketch our tool chain and illustrate our approach with the help of a medium-sized real-world case-study.",2004,0, 6414,An empirical study on testing and fault tolerance for software reliability engineering,"Software testing and software fault tolerance are two major techniques for developing reliable software systems, yet limited empirical data are available in the literature to evaluate their effectiveness.
We conducted a major experiment engaging 34 programming teams to independently develop multiple software versions for an industry-scale critical flight application, and collected the faults detected in these program versions. To evaluate the effectiveness of software testing and software fault tolerance, mutants were created by injecting real faults that occurred in the development stage. The nature, manifestation, detection, and correlation of these faults were carefully investigated. The results show that coverage testing is generally an effective means of detecting software faults, but the effectiveness of testing coverage is not equivalent to that of mutation coverage, which is a more truthful indicator of testing quality. We also found that identical faults among versions are very limited. This result supports software fault tolerance by design diversity as a creditable approach for software reliability engineering. Finally, we applied a domain analysis approach for test case generation and concluded that it is a promising technique for software testing purposes.",2003,0, 6415,On the Selection of Error Model(s) for OS Robustness Evaluation,"The choice of error model used for robustness evaluation of operating systems (OSs) influences the evaluation run time, implementation complexity, as well as the evaluation precision. In order to find an ""effective"" error model for OS evaluation, this paper systematically compares the relative effectiveness of three prominent error models, namely bit-flips, data type errors and fuzzing errors, using fault injection at the interface between device drivers and the OS. Bit-flips come with higher costs (time) than the other models, but allow for more detailed results. Fuzzing is cheaper to implement but is found to be less precise. A composite error model is presented where the low cost of fuzzing is combined with the higher level of details of bit-flips, resulting in high precision with moderate setup and execution costs.",2007,0, 6416,Methodology of modeling applied to fault injection based on EDA,"Circuit fault injection based on EDA simulation is an efficient method of testability analysis. In order to implement fault simulation, the first step is to establish models that appropriately reflect the function or fault modes. Considering the various components of number systems in a PHM system, although the prevalent software contains parts libraries, the symbols in those libraries and well-known modeling methods cannot meet the needs of simulation. Therefore, the analysis of integrated and practical methods of establishing function/performance models is of great necessity. In this paper, an integrated methodology of modeling is proposed. The methodology is introduced with several cases in avionics systems. It provides a general approach for modeling the number systems and BITE in a PHM system. The analysis of the methodology will be significant for implementing fault injection and testability analysis for PHM systems.",2010,0, 6417,Code construction algorithm for architecture aware LDPC codes with low-error-floor,"The common approach for the design of an error correction system is first to construct a code and then to define the hardware structure of the encoder and decoder. However, in the case of LDPC (low-density parity-check) codes, such a constructed code is generally not well suited to a hardware implementation.
It has been recognized that the code construction and hardware design must be considered jointly to facilitate LDPC decoder and encoder implementation. In this paper, an efficient decoder structure for regular and irregular LDPC codes, based on the TDMP (turbo-decoding message passing) scheme, is designed first. The decoder has been implemented and verified in an FPGA device. Constraints for the parity check matrix of a code to be suitable for the decoder architecture are defined. Then an algorithm for LDPC parity check matrix construction subject to these constraints is presented. The algorithm aims at improving the performance of the code in the low-SNR region by employing irregular codes, as well as in the high-SNR region by reducing the number of small stopping sets and trapping sets in the Tanner graph of the code, making use of a computer search technique.",2008,0, 6418,CORBA Replication Support for Fault-Tolerance in a Partitionable Distributed System,"The common object request broker architecture (CORBA) specification originally did not include any support for fault-tolerance. The fault-tolerant CORBA standard was added to address this issue. One drawback of the standard is that it does not include fault-tolerance in the case of network partitioning faults. The main contribution of this paper is the design of a fault-tolerant CORBA add-on for partitionable environments. In contrast to other solutions, our modular design separates replication and reconciliation policies from the basic replication mechanisms. This modularity allows the replication and reconciliation strategies to be modified easily",2006,0, 6419,A wearable computing based system for the prevention of medical errors committed by registered nurses in the intensive care unit,"The common occurrence of medical errors (MEs) is one of the most serious problems affecting healthcare delivery. Registered nurses (RNs) are at the highest risk of committing MEs due to the extended time they spend with patients. The level of stress and the dynamics of the intensive care unit (ICU) make it the healthcare setting with the highest rate of MEs per patient. Information technology has been suggested as one of the components of an aggregated solution to the problem of preventing MEs. We report on ongoing research toward the development of the registered nurse performance support system (RNPSS), a hardware/software wearable-computing-based system that would reduce and, it is hoped, prevent MEs committed by RNs in the ICU.",2002,0, 6420,Fast fault locating in rural MV distribution networks,Competition in the German energy market demands a significant reduction of costs for the owners of electricity supply systems. The resulting long-term decrease of supply reliability can be partly compensated for by means of modern protection equipment and the use of intelligent software. This paper discusses fast fault location in rural MV distribution networks as an example of this innovative concept,2001,0, 6421,On application of precision servo mode and fault control strategies to actuator models for structural applications,"The complexity of modern high precision servomechanism systems, which involve not only the tracking function of the servomechanism but also the coordinated system-level control of numerous supporting mechanical, electrical and software subsystems, is placing discrete event controller design into the industrial spotlight.
The control of such systems often walks a fine line: autonomy is desirable, because the system is often not conveniently accessible; however, high reliability is also desirable, and the complex software to realize autonomous response is often unacceptable because of the cost and time required for development and verification. Using the framework of a hysteretic actuator employed in the stabilization of a structure under seismic excitation, an approach that is becoming an industry standard for a class of high precision servomechanisms is described, wherein N-squared diagrams are used to model the discrete event portion of the system. The approach practically balances the competing requirements of autonomy and reliability, and has been successfully applied in a timely and cost-effective manner on several complex systems. Additionally, it relates in a straightforward manner to the class of time-varying discrete-time state space systems.",2002,0, 6422,Self-checking and fault tolerance quality assessment using fault sampling,"The computational effort associated with fault simulation (FS) processes in digital systems can become overwhelming, due to circuit complexity, test pattern size or fault list size. The same applies when safety properties (such as fault tolerance or fail-safe behavior) need to be verified in a new product development, in the design environment. If a bridging fault model replaces the simple stuck-at fault model, the fault list size easily becomes very large. If the product needs to comply with safety standards, such as EN298, these properties need to be verified in the presence of double faults, which explodes the fault list dimension. In this paper, a novel method based on fault sampling is proposed to deal with this problem. A model to compute the confidence level that the global fault coverage, FC, is within the interval [FCmin, 100%] is proposed. A case study, an ASIC for a safety-critical gas burner control system, is used to ascertain the usefulness of the proposed methodology.",2002,0, 6423,Fault Injection-based Test Case Generation for SOA-oriented Software,"The concept of service oriented architecture (SOA) implies rapid construction of a software system with published Web services as components. How to effectively and efficiently test and assess available Web services with similar functionalities published by different service providers remains a challenge. In this paper, we present a step-by-step fault injection-based automatic test case generation approach. Preliminary test results are also reported",2006,0, 6424,A fault diagnosis system for the connected home,"The connected home of the future, in which all consumer appliances in a home are networked together, is close to becoming the connected home of today. This article explores the role of fault diagnosis in such an environment and explains how agent technology may be applied. The article outlines the need for future standards that can maximize the benefit of a shared diagnostic system.",2004,0, 6425,Heterogeneous redundancy for fault and defect tolerance with complexity independent area overhead,"The continuous increase in digital system complexity is raising the area cost of redundancy-based fault and defect tolerance. This paper introduces a technique for heterogeneous redundancy in control path and datapath circuitry that provides high reliability with area overhead that is independent of system complexity.
Small amounts of circuit-specific reconfigurable logic are finely integrated with fixed-logic circuitry to provide fine-grained heterogeneous fault and defect tolerance. Results reveal that the technique is effective for a variety of circuits, providing high reliability with a constant-magnitude area overhead that is independent of system complexity.",2003,0, 6426,A novel Shared-Clock scheduling protocol for fault-confinement in CAN-based distributed systems,"The Controller Area Network (CAN) protocol is widely employed in the development of distributed embedded systems. Building on the CAN protocol, previous studies have illustrated how Shared-Clock (S-C) algorithms can be used in conjunction with off-the-shelf microcontrollers to create high-reliability distributed systems at low cost. In such studies, it has generally been assumed that S-C designs will be based on a bus topology: in the present paper, a novel S-C algorithm is introduced which is intended for use with star networks. Through a fault-injection case study and quantitative comparisons, it is demonstrated that the star-based design has a number of advantages when compared with a bus-based equivalent.",2010,0, 6427,Machine learning techniques for ocular errors analysis,"The conventional techniques for refractive error measurement (myopia, hypermetropia, and astigmatism) have been considered inadequate for several optometry studies. In this context, researchers have investigated alternative methodologies for refractive error measurement. A new strategy is the determination of refractive errors from images of the globe of the eye. A process named Hartmann-Shack can obtain these images. The HS images should be analysed in order to extract relevant information for the identification of refractive errors. The present paper investigates a technique based on radial basis functions (RBFs), an artificial neural network (ANN), and on support vector machines (SVMs), which automatically performs analysis of images from the globe of the eye and identifies refractive errors. The most relevant data of these images are extracted using the Gabor wavelet transform, and then these machine learning techniques carry out the image analysis",2004,0, 6428,Optimal software release time incorporating fault correction,"The ""stopping rule"" problem, which involves determining an optimal release time for a software application at which costs justify the stop-test decision, has been addressed by several researchers. However, most of these research efforts assume instantaneous fault correction, an assumption that underlies many software reliability growth models, and hence provide optimistic predictions of both the cost at release and the release time. In this paper, we present an economic cost model which takes into consideration explicit fault correction in order to provide realistic predictions of release time and release cost. We also present a methodology to compute the failure rate of the software in the presence of fault correction, which is necessary in order to apply the cost model. We illustrate the utility of the cost model to provide realistic predictions of release time and cost with a case study.",2003,0, 6429,A Robust GM-Estimator for the Automated Detection of External Defects on Barked Hardwood Logs and Stems,"The ability to detect defects on hardwood trees and logs holds great promise for the hardwood forest products industry.
At every stage of wood processing, there is a potential for improving value and recovery with knowledge of the location, size, shape, and type of log defects. This paper deals with a new method that processes hardwood laser-scanned surface data for defect detection. The detection method is based on robust circle fitting applied to scanned cross-section data sets recorded along the log length. It can be observed that these data sets have missing data and include large outliers induced by loose bark that dangles from the log trunk. Because of that and because of the nonlinearity of the circle model, which presents both additive and nonadditive errors, we developed a new robust generalized M-estimator, for which the residuals are standardized via scale estimates calculated by means of projection statistics and incorporated in the Huber objective function, yielding a bounded-influence method. Our projection statistics are based on the 2-D radial vectors instead of the row vectors of the Jacobian matrix as advocated in the literature dealing with linear regression. These radial distances allow us to develop algorithms aimed at pinpointing large surface rises and depressions from the contour image levels, and thereby locating severe external defects having at least a height of 0.5 in and a diameter of 5 in.",2007,0, 6430,The Optimized Combination of Fault Location Technology Based on Traveling Wave Principle,"The accuracy and the reliability of modern D-type double-ended and A-type single-ended traveling wave fault location principles used for transmission lines are comprehensively evaluated. Based on the evaluation, this paper presents the idea of an optimized combination of fault location based on these two traveling wave principles, and successfully applies the idea in actual fault analysis of transient traveling waves. Compared with traveling wave location schemes based on the D-type or A-type principle alone, this scheme has the great advantage of utilizing the A-type traveling wave principle to verify and correct the location results obtained with the D-type traveling wave principle, so that both the location reliability and accuracy are enhanced. Practical applications showed that the optimized combination of traveling wave location schemes is feasible, and the location precision is improved significantly.",2009,0, 6431,Gazing estimation and correction from elliptical features of one iris,"The accuracy of eye gaze estimation from image information is affected by several objective factors, including the image resolution, the anatomical structure of the eye, posture changes, etc. In particular, irregular head and eye movements are the main problem and a key research topic. We describe an effective way of estimating eye gaze from the elliptical features of one iris without an auxiliary light source, head-fixing equipment or multiple cameras. Firstly, we give preliminary estimates of the gaze direction and then obtain the vectors describing the translation and rotation of eyeball movement using central projection on the cross section passing through the line-of-sight, which avoids the complex computations involved in known methods. We also disambiguate the solution on the basis of the experimental findings. Secondly, error correction is carried out by a BP neural network trained on a sample collection of the translation and rotation vectors. In our simulations, we achieve an accuracy of 0.8 on test images which are different from the training images.
The result is found to be better than that of the existing non-intrusive single-camera method. The performance of the algorithm shows that the proposed method has excellent generalization ability.",2010,0, 6432,"Speech Recognition Standard Procedures, Error Recognition and Repair Strategies","The accuracy of the input speech signal is essential, as speech interfaces are now desired in several domains. This paper proposes a standard procedure for speech recognition. It also proposes an algorithm to outline the causes of recognition errors, based on several related works, and suggests error recognition and repair procedures. The research suggests that it is practically possible to predict misrecognized utterances with a high degree of accuracy from (1) an utterance's sound file, (2) the language model being employed, and (3) recognizer outputs such as confidence. In addition, there are empirical data upon which to base successful repair strategies in relation to these misrecognitions.",2006,0, 6433,Extended Fault Detection Techniques for Systems-on-Chip,"The adoption of systems-on-chip (SoCs) in different types of applications represents an attractive solution. However, the high integration level of SoCs increases the sensitivity to transient faults and consequently introduces some reliability concerns. Several solutions have been proposed to attack this issue, mainly intended to address faults in the processor or in the memory. In this paper, we propose a solution to detect transient faults affecting data transmitted between the microprocessor and the communication peripherals embedded in a SoC. This solution combines modifications of the source code at a high level with the introduction of an Infrastructure IP (I-IP) to increase the dependability of the SoC.",2007,0, 6434,Towards the application of classification techniques to test and identify faults in multimedia systems,"The advances in computer and graphics technologies have led to the popular use of multimedia for information exchange. However, multimedia systems are difficult to test. A major reason is that these systems generally exhibit fuzziness in their temporal behaviors. The fuzziness is caused by the existence of non-deterministic factors in their runtime environments, such as system load and network traffic. It complicates the analysis of test results.
For self-organized networks like cloud computing architectures, which possess highly decentralized and self-organized natures, consensus, which is essential to solving the agreement problem, cannot be achieved in the ways used for traditional fixed networks. To address this, the problem of Consensus with Unknown Participants (CUP), a variant of the traditional consensus problem, was proposed in the literature. Correspondingly, the CUP problem considering process crashes was also introduced, called the Fault-Tolerant Consensus with Unknown Participants (FT-CUP) problem. In this paper, we propose a new knowledge connectivity condition sufficient for solving the FT-CUP problem. Our new condition is weaker and more viable than an existing one, which is hard to implement in practice.",2010,0, 6436,Evolutionary generation of test data for multiple paths coverage with faults detection,"The aim of software testing is to find faults in the program under test. Generating test data which can reveal faults is the core issue. Although existing methods of path-oriented testing can generate test data which traverse target paths, they cannot guarantee that the data find the faults in the program. In this paper, we transform the problem into a multi-objective optimization problem with constraints and propose a method of evolutionary generation of test data for multiple paths coverage with fault detection. First, we establish the mathematical model of this problem, and then a strategy based on multi-objective genetic algorithms is given. Finally, we apply the proposed method to some programs under test, and the experimental results validate that our method can find specified faults effectively. Compared with other methods of test data generation for multiple paths coverage, our method has a greater advantage in fault detection and testing efficiency.",2010,0, 6437,Exploiting Memory Soft Redundancy for Joint Improvement of Error Tolerance and Access Efficiency,"The technology roadmap projects nanoscale multibillion-transistor integration in the coming years. However, on-chip memory becomes increasingly exposed to the dual challenges of device-level reliability degradation and the architecture-level performance gap. In this paper, we propose to exploit the inherent memory soft (transient) redundancy for on-chip memory design. Due to the mismatch between fixed cache line size and runtime variations in memory spatial locality, many irrelevant data are fetched into the memory, thereby wasting memory space. The proposed soft-redundancy allocated memory detects and utilizes these memory spaces for jointly achieving efficient memory access and effective error control.
The use of algorithm-level techniques to detect and correct errors at low cost has been proposed in previous works, using a matrix multiplication algorithm as the case study. In this paper, a new approach to deal with this problem is proposed, in which the time required to recompute the erroneous element when an error is detected is minimized.",2009,0, 6439,PEDS: A Parallel Error Detection Scheme for TCAM Devices,"Ternary content-addressable memory (TCAM) devices are increasingly used for performing high-speed packet classification. A TCAM consists of an associative memory that compares a search key in parallel against all entries. TCAMs may suffer from error events that cause ternary cells to change their value to any symbol in the ternary alphabet ""0"",""1"",""*"". Due to their parallel access feature, standard error detection schemes are not directly applicable to TCAMs; an additional difficulty is posed by the special semantics of the ""*"" symbol. This paper introduces PEDS, a novel parallel error detection scheme that locates the erroneous entries in a TCAM device. PEDS is based on applying an error-detection code to each TCAM entry, and utilizing the parallel capabilities of the TCAM by simultaneously checking the correctness of multiple TCAM entries. A key feature of PEDS is that the number of TCAM lookup operations required to locate all errors depends on the number of symbols per entry rather than the (orders-of-magnitude larger) number of TCAM entries. For large TCAM devices, a specific instance of PEDS requires only 200 lookups for 100-symbol entries, while a naive approach may need hundreds of thousands of lookups. PEDS allows flexible and dynamic selection of trade-off points between robustness, space complexity, and number of lookups.",2009,0, 6440,Fault distinguishing pattern generation,"Test generation for VLSI circuits suffers from two competing goals: to reduce the cost of test by minimizing the number of tests, and to be able to diagnose errors when failures occur. This paper outlines a methodology for generating diagnostic test patterns as they are needed using standard ATPG tools. These diagnostic patterns are guaranteed to provide better diagnostic resolution than traditional manufacturing test patterns, and the use of standard ATPG tools enables generation of diagnostic patterns only when these patterns are needed.",2000,0, 6441,Prioritizing Tests for Software Fault Localization,"Test prioritization techniques select test cases that maximize the confidence in the correctness of the system when the resources for quality assurance (QA) are limited. In the event of a test failing, the fault at the root of the failure has to be localized, adding an extra debugging cost that has to be taken into account as well. However, test suites that are prioritized for failure detection can reduce the amount of useful information for fault localization. This deteriorates the quality of the diagnosis provided, making the subsequent debugging phase more expensive, and defeating the purpose of the test cost minimization. In this paper we introduce a new test case prioritization approach that maximizes the improvement of the diagnostic information per test. Our approach minimizes the loss of diagnostic quality in the prioritized test suite.
When considering QA cost as the combination of testing cost and debugging cost, on the Siemens set, the results of our test case prioritization approach show up to a 53% reduction of the overall QA cost, compared with the next best technique.",2010,0, 6442,Automatic generation of instruction sequences targeting hard-to-detect structural faults in a processor,"Testing a processor in native mode by executing instructions from cache has been shown to be very effective in discovering defective chips. In previous work, we showed an efficient technique for generating instruction sequences targeting specific faults. We generated tests using traditional techniques at the module level and then mapped them to instruction sequences using novel methods. However, in that technique, the propagation of module test responses to primary outputs was not automated. In this paper, we present the algorithm and experimental results for a technique which automates the functional propagation of module-level test responses. This technique models the propagation requirement as a Boolean difference problem and uses a bounded model checking engine to perform the instruction mapping. We use a register transfer level (RT-Level) abstraction which makes it possible to express the Boolean difference as a succinct linear time logic (LTL) formula that can be passed to a bounded model checking engine. This technique fully automates the process of mapping module-level test sequences to instruction sequences.",2006,0, 6443,Evolution and Search Based Metrics to Improve Defects Prediction,"Testing activity is the most widely adopted practice to ensure software quality. Testing effort should be focused on defect-prone and critical resources, i.e., on resources highly coupled with other entities of the software application. In this paper, we used search-based techniques to define software metrics accounting for the role a class plays in the class diagram and for its evolution over time. We applied the Chidamber and Kemerer metrics and the newly defined metrics to Rhino, a Java ECMAScript interpreter, to predict version 1.6R5 defect-prone classes. Preliminary results show that the new metrics compare favorably with traditional object-oriented metrics.",2009,0, 6444,Compact Test Generation for Small-Delay Defects Using Testable-Path Information,"Testing for small-delay defects requires fault-effect propagation along the longest testable paths. However, the selection of the longest testable paths requires high CPU time and leads to large pattern counts. Dynamic test compaction for small-delay defects has remained largely unexplored thus far. We propose a path-selection scheme to accelerate ATPG based on stored testable critical-path information. A new dynamic test-compaction technique based on structural analysis is also introduced. Simulation results are presented for a set of ISCAS'89 benchmark circuits.",2009,0, 6445,Procedure based on mutual information and bayesian networks for the fault diagnosis of industrial systems,"The aim of this paper is to present a new method for process diagnosis using a Bayesian network. The mutual information between each variable of the system and the class variable is computed to identify the important variables. To illustrate the performance of this method, we use the Tennessee Eastman Process.
For this complex process (51 variables), we take into account three kinds of faults with the objective of a minimal recognition error rate.",2007,0, 6446,Predicting and controlling FPGA Device Heat using System monitor and IBERT (internal bit error ratio tester),The aim of this paper is to present a new methodology and the tools used to predict and control FPGA device heat before starting the design. Controlling the FPGA silicon heat is crucial, as all FPGAs have temperature limits above and below which their functionality is no longer guaranteed. The silicon temperature is linked to the different options and strategies used to implement the design. Many tools, such as XPower from Xilinx, allow the user to obtain an estimate of the power consumption. This paper will present a primitive called System Monitor, which is present in every Virtex-5, to monitor the environment around the FPGA. Monitoring the device environment maximises the probability of getting the FPGA to work after implementing the required design.,2009,0, 6447,Fault Detection and Diagnosis in an Induction Machine Drive: A Pattern Recognition Approach Based on Concordia Stator Mean Current Vector,"The aim of this paper is to study the feasibility of fault detection and diagnosis in a three-phase inverter feeding an induction motor. The proposed approach is a sensor-based technique using the mains current measurement. A localization domain made with seven patterns is built with the stator Concordia mean current vector. One is dedicated to the healthy domain and the other six are dedicated to each inverter switch. A probabilistic approach for the definition of the boundaries increases the robustness of the method against the uncertainties due to measurements and to the PWM. In high-power equipment where it is crucial to detect and diagnose the inverter faulty switch, a simple algorithm compares the patterns and generates a Boolean indicating the faulty device. In low-power applications (less than 1 kW) where only fault detection is required, a radial basis function (RBF) evolving architecture neural network is used to build the healthy operation area. Simulated experimental results on 0.3- and 1.5-kW induction motor drives show the feasibility of the proposed approach.",2005,0, 6448,Web Service Testing Method Based on Fault-coverage,"The aim of web service verification is to determine how well the web service conforms to the WSDL specification, and it is key to the popular adoption of web services at present. In web service testing, test adequacy is essential for verifying whether the software satisfies the WSDL specification, given the increasing complexity and applications of web services. This paper presents a systematic approach for web service testing based on fault-coverage which is intended to be used for service testing automation. In this method, HPNs representing operations are produced from the WSDL specification after a parsing process. A graph transformed from the HPN representing a web service is used to generate the adapted UIO sequence; then a test sequence based on the adapted UIO sequence is given to acquire high fault coverage. Constraint-based test data generation for service testing, which obtains sufficient test data to kill mutant programs, is presented in this paper. Constraints are divided into two kinds, user-defined and policy-based, according to the syntactic and semantic analysis of the WSDL specification. At last, we applied a test script language based on XML to effectively describe the test sequence.
The test sequence and constraints for test data are expressed by this language. The test scenario is built on this formal language. The prototype system based on the above method, automating the web service test, has been developed in our lab.",2006,0, 6449,Comparison and analysis research on geometric correction of remote sensing images,"The algorithms for approximate geometric correction of remote sensing images are mainly based on the least squares method (LSM) with linear or nonlinear models. Their disadvantages lie in overfitting, poor generalization ability, and the demand for a sufficient number of samples, due to the principle of empirical risk minimization (ERM). A geometric correction algorithm for remote sensing images making use of support vector machines is put forward, combined with the essential theory of approximate geometric correction. One testing region is selected; the coordinates of the ground control points in the remote image and on the ground are measured. Varying numbers of control points are selected to correct the remote image. Other control points serve as testing points, selected by the clustering algorithm. The approximate geometric correction algorithm, a neural network, and the support vector machine algorithm are applied to geometrically correct the images respectively, and a comparative analysis of the correction accuracy is obtained.",2010,0, 6450,One-Dimensional Variational Retrieval of the Wet Tropospheric Correction for Altimetry in Coastal Regions,"The altimeter range is corrected for tropospheric humidity by means of microwave radiometer measurements (Envisat/MWR, Jason-1/JMR, Jason-2/AMR). Over the open ocean, the altimeter/radiometer combination is satisfactory. However, in coastal areas, radiometer measurements are contaminated by the surrounding land surfaces, and the humidity retrieval method is no longer appropriate. In this paper, a variational assimilation technique is proposed to retrieve the wet tropospheric correction near coasts. The method is first developed on simulations using the data from a meteorological model. A performance assessment is performed, as well as a comparison with a standard algorithm. The method is then applied on actual measurements, thus evaluating its feasibility.",2010,0, 6451,"Application potential, error considerations and post-processing software for ADCP deployments on AUVs","The application field of ocean current measurements made from autonomous underwater vehicles (AUVs) is discussed with reference to different system configurations and scientific scenarios. Factors that affect measurement accuracy are addressed. A novel post-processing software package for Earth-referencing, quality checks and smoothing of data from AUV-borne acoustic Doppler current profilers (ADCPs) is presented. The system provides interactive visualization features intended to facilitate perception of time-space relationships between ADCP vector data, conductivity-temperature-depth (CTD) scalar data and platform/sensor system state variables. The software is named ANCOR (ADCP navigation correction).",2004,0, 6452,Sensor fault-tolerant vector control of induction motors,"The authors propose a multisensor switching strategy for fault-tolerant vector control of induction motors. The proposed strategy combines three current sensors and associated observers that estimate the rotor flux.
The estimates provided by the observers are compared at each sampling time by a switching mechanism which selects the sensor-observer pair with the smallest error between the estimated flux magnitude and a desired flux reference. The estimates provided by the selected pair are used to implement a vector control law. The authors consider both field-oriented control and direct torque and flux control schemes. Pre-checkable conditions are derived that guarantee fault tolerance under an abrupt fault of a current sensor. These conditions are such that the observers that use measurements from the faulty sensor are automatically avoided by the switching mechanism, thus maintaining good performance levels even in the presence of a faulty sensor. Simulation results under realistic conditions illustrate the effectiveness of the scheme.",2010,0, 6453,Automatic fault analysis and user notification for predictive maintenance,"The automatic analysis of different power system events is important for the successful operation of the system and the management of assets in substations or industrial facilities. The paper analyzes the types of system events, available data sources and requirements for the development of hierarchical event analysis systems. The analysis functions available in intelligent electronic devices (IEDs) or at different levels of automatic analysis systems are discussed. The requirements for recording of system events and the hierarchy of such systems are presented. The importance of accurate time-synchronization and methods to achieve it are also described.",2006,0, 6454,Improved Error Control for Real-Time Video Broadcasting Over CDMA2000 Networks,"The broadcast and multicast services (BCMCS) protocol is designed for real-time applications such as MPEG-4 video streaming, which requires successive frames to arrive within a specific time interval. We analyze the execution time of Reed-Solomon (RS) decoding, which is the medium access control (MAC)-layer forward error-correction (FEC) scheme used in CDMA2000 1xEV-DO BCMCS, under various air channel conditions. The results show that the timing constraints of MPEG-4 cannot always be met by RS decoding when the packet loss rate (PLR) is high, due to the limited processing power of current hardware. We therefore propose three error control schemes: First, we have our static scheme, which bypasses RS decoding at the mobile node to satisfy the MPEG-4 timing constraint when the PLR exceeds a given level. Second, we have the dynamic scheme, which corrects as many errors as possible within the timing constraint, instead of giving up altogether when the PLR is high; this improves quality. Third, we have the video-aware dynamic scheme, which fixes errors in a similar way to the dynamic scheme but in a priority-driven manner, yielding a further increment in video quality at mobile terminals. Extensive simulation results show the effectiveness of our schemes compared with the original FEC scheme.",2009,0, 6455,Fault-tolerance properties and self-healing abilities implementation in FPGA-based embryonic hardware systems,"The cell-based structure, which makes up the majority of biological organisms, offers the ability to grow with fault-tolerance abilities and self-repair. By adapting these mechanisms and capabilities from nature, scientific approaches have helped researchers understand related phenomena and the associated principles to engineer complex novel digital systems and improve their capabilities.
Founded on these observations, the paper is focused on computer-aided modeling, simulation and experimental research of the fault-tolerance and self-healing abilities of embryonic systems, with the purpose of implementing VLSI hardware structures able to imitate the operation of cells or artificial organisms, with robustness properties similar to those of their biological equivalents in nature. The presented theoretical and simulation approaches were tested on a laboratory prototype embryonic system (embryonic machine), built with the major purpose of implementing the self-healing properties of living organisms.",2009,0, 6456,A Virtual Instrument for the Rotor Winding Inter-turn Short Circuit fault of Generator,"The characteristics of the stator winding parallel-connected branches circulating current are first analyzed, namely, that the second-harmonic circulating current will increase when a rotor winding inter-turn short circuit fault occurs, and that the size and distribution of the circulating current are associated with the severity of the short circuit. Next, the virtual instrument for the rotor winding inter-turn short circuit fault is designed based on LabVIEW and the data acquisition card PCI-6251, which consists of the data analysis and data processing module, the fault characteristic extraction module, the data text storage module, the data and curve display module, the database management module, the off-line system and help system, and the system configuration module. Finally, the virtual instrument is successfully applied to the SDF-9 fault simulation generator.",2006,0, 6457,Design of a fault-tolerant parallel processor,"The Charles Stark Draper Laboratory, under contract to the NASA Johnson Space Center, has developed a Fault-Tolerant Parallel Processor (FTPP) for use on the NASA X-38 experimental vehicle. Using commercial processor boards and the industry-standard VME backplane, the system is configured as a quadruplet Flight-Critical Processor (FCP) and five simplex Instrumentation Control Processors (ICPs). The FCP is Byzantine resilient for any two non-simultaneous permanent faults, and for any number of non-simultaneous recoverable faults, as long as a maximum of one other fault condition occurs during the recovery process (only two recoveries can be in progress at once). This paper focuses on some of the hardware and software design of the Fault-Tolerant System Services (FTSS) that isolate, as much as possible, the redundancy of the FCP from the application software, such as the guidance, navigation and flight control software, on the X-38 FTPP. FTSS also performs reconfiguration and recovery functions.",2002,0, 6458,The Extended Finite State Machine and Fault Tolerant Mechanism in Distributed Systems,"Synchronization and fault tolerance of processes are emphasized in distributed systems research, but few have investigated the mathematical models used in process synchronization and fault tolerance. This paper treats a distributed system as an event-driven system, classifies the events that cause system state changes into four classes, and proposes an extended finite state machine (EFSM) with synchronization and fault-tolerance messages for the distributed system. Accordingly, a checkpoint setup algorithm based on this EFSM is proposed. While establishing a checkpoint, the consistency of the checkpoint can be determined by calculating the number of sent and received messages.
In the case of a lost message, the sending and receiving processes involved can be found by checking the number of sent and received messages, and the lost messages can be retransmitted and received. Thus, establishing the global state of the distributed system is simplified.",2009,0, 6459,Power factor correction in industrial facilities using adaptive excitation control of synchronous machines,Synchronous machines provide a practical way to control the VA consumption of the plant. One of the main advantages of using synchronous motors in a plant is their ability to generate reactive power for plant loads. In petrochemical plants synchronous motors are often operated with a constant set point of power factor (PF) without considering overall performance and the dynamic changes of the distribution system in the plant. This often results in less than optimum operating conditions. This paper addresses a new application to automate VAr generation and voltage control in a petrochemical facility using the advantages and capabilities of advanced power monitoring devices to optimize VAr and voltage conditions.,2002,0, 6460,Fault Injection Techniques and their Accelerated Simulation in SystemC,"SystemC has been widely accepted for the description of electronic systems. An essential advantage of a SystemC description is the possibility of a built-in compiled-code simulation. Beyond the functional simulation for validation of a hardware design, there are additional requirements for an advanced simulation of faults in order to analyze the system behavior under fault conditions. The paper introduces known and novel methods of SystemC-based simulations with fault injection and provides initial results. Some strategies are shown to accelerate the SystemC simulation by parallel computing. Additionally, we present gate-level and switch-level models for an effective simulation in SystemC.",2007,0, 6461,Towards Autonomic Fault Recovery in System-S,"System-S is a stream processing infrastructure which enables program fragments to be distributed and connected to form complex applications. There may be potentially tens of thousands of interdependent and heterogeneous program fragments running across thousands of nodes. While the scale and interconnection imply the need for automation to manage the program fragments, the need is intensified because the applications operate on live streaming data and thus need to be highly available. System-S has been designed with components that autonomically manage the program fragments, but the system components themselves are also susceptible to failures which can jeopardize the system and its applications. The work we present addresses the self-healing nature of these management components in System-S. In particular, we show how one key component of System-S, the job management orchestrator, can be abruptly terminated and then recover without interrupting any of the running program fragments by reconciling with other autonomous system components. We also describe techniques that we have developed to validate that the system is able to autonomically respond to a wide variety of error conditions including the abrupt termination and recovery of key system components.
Finally, we show the performance of the job management orchestrator recovery for a variety of workloads.",2007,0, 6462,Mapping a Fault-Tolerant Distributed Algorithm to Systems on Chip,"Systems on chip (SoC) have much in common with traditional (networked) distributed systems in that they consist of largely independent components with dedicated communication interfaces. Therefore, the adoption of classic distributed algorithms for SoCs suggests itself. The implementation complexity of these algorithms, however, significantly depends on the underlying failure models. In traditional software-based solutions this is normally not an issue, such that the most unconstrained, namely the Byzantine, failure model is often applied here. Our case study of a hardware-implemented tick synchronization algorithm shows, however, that in an SoC implementation substantial hardware savings can result from restricting the failure model to benign failures (omissions, crashes). On the downside, it turns out that such restricted failure models have a fairly poor coverage with respect to the hardware faults occurring in practice, and that additional measures to enforce these restrictions may entail an implementation overhead that outweighs the gain obtained in the implementation of a simpler algorithm. As a remedy, we investigate the potential of failure transformation in this context and show that this technique may indeed yield an optimized overall solution.",2008,0, 6463,Surface defects and corrosion in electrostatically deposited powder films,"The corrosion resistance of electrostatic powder coatings depends upon obtaining a smooth, nonporous film deposited uniformly over the surface of the entire substrate. The film must be free of voids, pinholes, and other surface defects. Two common defects on powder coated film arise from: (1) the Faraday Cage effect causing uneven film thickness on recessed surfaces and (2) back corona that results in pinholes and craters. The effects of surface defects on corrosion were studied using an industrial grade thermosetting powder electrostatically sprayed onto grounded aluminum alloy substrates using a corona gun. The film thickness was varied from 25 to 75 μm to represent the effect of the Faraday Cage problem in recessed areas. Similarly, the effects of back corona were studied by subjecting the powder layer to corona ion bombardment. For both cases, additional surface defects in the form of scratches and pinholes were introduced on the cured film and tested for corrosion resistance. Corrosion testing of the coated substrates was performed by immersion in neutral salt solution as per the standard method recommended in ASTM G 44-99. The corrosion resistance was estimated by electrochemical impedance spectroscopy (EIS) and Tafel plots. Corrosion resistance decreased from 700 kΩ/cm² to 350 kΩ/cm² when film thickness decreased from 75 μm to 25 μm, but with induced back corona, the resistance for the same film thickness changed from 300 kΩ/cm² to 200 kΩ/cm².",2002,0, 6464,Using Probabilistic Characterization to Reduce Runtime Faults in HPC Systems,"The current trend in high-performance computing is to aggregate ever larger numbers of processing and interconnection elements in order to achieve desired levels of computational power. This, however, also comes with a decrease in the Mean Time To Interrupt because the elements comprising these systems are not becoming significantly more robust.
There is substantial evidence that the Mean Time To Interrupt vs. the number of processor elements involved is quite similar over a large number of platforms. In this paper we present a system that uses hardware-level monitoring coupled with statistical analysis and modeling to select processing system elements based on where they lie in the statistical distribution of similar elements. These characterizations can be used by the scheduler/resource manager to deliver a close to optimal set of processing elements given the available pool and the reliability requirements of the application.",2008,0, 6465,Photo track defect control using multiple masking layer defect data,"The defect monitoring strategy presented here has been developed for defectivity feedback for track and stepper issues typically seen in a high volume multi-device manufacturing facility. It combines data streams from multiple masking layers and product mixes, improving the signal-to-noise ratio (S/N) of the defectivity signal, utilizing an AMD/Spansion-developed statistical control system known as ASPECT. True defect-driven failures at the current layer, faster feedback loops, and a more comprehensive look at potential problems within the photolithography area are the results of this integrated monitor process control strategy.",2007,0, 6466,Reconfigurable architecture of ultrasonic defect detection based on wavelet packet and back propagation artificial neural network,"The defect detection based on discrete wavelet packet transform and back-propagation artificial neural network algorithms is described first. Then the reconfigurable architecture of the defect detection in an embedded reconfigurable system is discussed. Upon the reconfigurable architecture, the algorithms of the discrete wavelet packet transform and back-propagation neural network are rearranged. The reconfigurable architectures of both algorithms are carried out eventually. According to the experiments of ultrasonic signal processing, the reconfigurable architecture of the defect detection can provide a flexible and efficient solution for embedded reconfigurable signal processing systems.",2010,0, 6467,Incorporation of hard-fault-coverage in model-based testing of mixed-signal ICs,"The application of the Linear Error Mechanism Modeling Algorithm (LEMMA) to various DAC and ADC architectures has raised the issue of including hard-fault-coverage as an integral part of the algorithm. In this work, we combine defect-oriented functionality tests and specification-oriented linearity tests of a mixed-signal IC to save test time. The key development is a novel test point selection strategy which not only optimizes the INL-prediction variance of the model, but also satisfies hard-fault-coverage constraints.",2000,0, 6468,Neural network based technique for detecting catastrophic and parametric faults in analog circuits,The approach to transient functional test of analog circuits is considered. An artificial neural network is proposed for the realization of the circuit under test (CUT) response analysis. The coefficients of the wavelet decomposition of CUT transient output responses, reflecting the dynamical behavior of the analog circuit, are used for neural network training. Sensitivity analysis is applied for selecting the test frequencies and test nodes.
The experimental results for analog benchmark circuits are provided.,2005,0, 6469,The minimum worst case error of fuzzy approximators,"The approximation capability of fuzzy systems is an important topic of research when the systems are regarded as input-output maps. By using the notion of information-based complexity (IBC), we derive the minimum worst case error of a fuzzy approximator, which is independent of the detailed construction of the fuzzy rule bases.",2001,0, 6470,Error detection and unit conversion,"The article discusses the accuracy of mathematical modeling languages (MML) for biomedicine, for example in cardiac electrophysiology. It is shown that unit balance checking can be automated. The implemented example is JSim (http://www.physiome.org/jsim/), which is general and can be applied to other systems in which units can be specified and checked. The ODE-based simulator Physiome CellML Environment is also discussed.",2009,0, 6471,The Associative Memory Model with Expecting Fault-Tolerant Field on Multi-Value Information Space,The associative memory model with an expecting fault-tolerant field on a multi-value information space is proposed. The sample fault-tolerant field of the associative memory model has the hoped-for form. The design method of the associative memory model with an expecting fault-tolerant field on a multi-value information space better solves the difficult synthesis problems of associative memory models.,2008,0, 6472,On SINS for Rocket Bomb Trajectory Correction Based on MEMS,"Taking the guided rocket bomb as the research object and aiming at trajectory correction, the kinematics model of the guided rocket bomb is established and a strapdown inertial navigation system (SINS) is designed using MEMS inertial sensors. According to the output data of the guidance system, a differential geometry algorithm is proposed to calculate the curvature of the actual trajectory. Comparing it with that of the designed trajectory, the deviation error is obtained to control the rudder leaning angle and achieve trajectory correction. The simulation analysis shows that the method can greatly improve the accuracy of long-range rocket bombs.",2010,0, 6473,Passive and Active Combined Attacks on AES Combining Fault Attacks and Side Channel Analysis,"Tamper resistance of hardware products is currently a very popular subject for researchers in the security domain. Since the first Kocher side-channel (passive) attack, the Bellcore researchers and Biham and Shamir fault (active) attacks, many other side-channel and fault attacks have been published. The design of efficient countermeasures still remains a difficult task for IC designers and manufacturers as they must also consider the attacks which combine active and passive threats. It has been shown previously that combined attacks can defeat RSA implementations if side-channel countermeasures and fault protections are developed separately instead of being designed together. This paper demonstrates that combined attacks are also effective on symmetric cryptosystems and shows how they may jeopardize a supposedly state-of-the-art secure AES implementation.",2010,0, 6474,A Solution for Fault-Tolerance Based on Adaptive Replication in MonALISA,"The domains of usage of large-scale distributed systems have been extending during the past years from scientific to commercial applications. Together with the extension of the application domains, new requirements have emerged for large-scale distributed systems.
Among these, fault tolerance is needed by more and more modern distributed applications, not only by the critical ones. In this paper we present a solution aiming at fault-tolerant monitoring of distributed systems within the MonALISA framework. Our approach uses replication and guarantees that all processing replicas achieve state consistency, both in the absence of failures and after failure recovery. We achieve consistency in the former case by implementing a module that ensures that the order of monitoring tuples is the same at all the replicas. To achieve consistency after failure recovery, we rely on checkpointing techniques. We address the optimization problem of the replication architecture by dynamically monitoring and estimating inter-replica link throughputs and real-time replica status. We demonstrate the strengths of our solution using the MonALISA monitoring application in a distributed environment. Our tests show that the proposed approach outperforms previous solutions in terms of latency and that it uses system resources efficiently by carefully updating replicas, while keeping overhead very low.",2010,0, 6475,Study for Performance Benchmark of Bank Intermediary Business on High-Performance Fault-Tolerant Computers,"The dominant position of High-Performance Fault-Tolerant (HPFT) computers in security and economics has advanced studies on performance benchmarks for HPFT computers in specific fields, such as bank finance and telecommunications. Although TPC (Transaction Processing Council) has proposed some benchmark models for different complex OLTP (On-Line Transaction Processing) businesses, such as TPC-C and TPC-E, there is still a lack of a performance benchmark model dedicated to the bank intermediary business on HPFT computers. This paper proposes a Bank Intermediary Business performance benchmark (BIBbench), and gives a solution to test and evaluate this benchmark on HPFT computers for bank intermediary business. In this paper, we present the architecture of BIBbench, defining the structures and attributes of the business model, the database model and the transaction/frame model, and illuminating the workload generation mechanism of the intermediary business system as well. The BIBbench testing environment architecture is also discussed in the paper, as well as the testing solutions and tools. Currently, this BIBbench has been partly implemented on the Oracle 10g database system, and some performance testing experience with the BIBbench for HPFT computers has been gained.",2010,0, 6476,A novel improved combined Dynamic Voltage Restorer (DVR) using Fault Current Limiter (FCL) structure,"The dynamic voltage restorer (DVR) as a means of series compensation for mitigating the effect of voltage sags has been established as a preferred approach for improving power quality at sensitive load locations. In this paper a novel structure for voltage sag mitigation and power quality improvement is proposed. Considering that rejection of high-order harmonics requires DVRs with high-speed switches, the increased rating required for the energy storage device and the need for high-speed switches result in a considerable increase in the cost of this equipment. Therefore, a low-power DVR with low-speed switches is proposed in this paper to decrease costs. In the presented structure, the combination of FCL and DVR is proposed to decrease the required power rating and the response time to abnormal variations at DVRs.
The operation of the proposed structure is investigated through computer simulation using PSCAD/EMTDC.",2007,0, 6477,Earth-fault protection of VLT automationdrive FC 301,The earth-fault protection of the VLT® AutomationDrive FC 301 is based on a single current transducer in the high-side DC-link and desaturation protection of the low-side switching elements. Used together with a novel software algorithm, this makes the AC drive earth-fault proof at any operating point.,2007,0, 6478,Bit error rate performance of OFDM in narrowband interference with excision filtering,"The effect of narrowband interference on OFDM systems is considered, with particular regard to the receiver post-detection bit error rate performance. It is shown both by analysis and by computer simulation that the ensemble average bit error rate is severely affected by narrowband interference and that particular values of interferer carrier frequency and phase can produce bit error rates significantly higher than the ensemble average. An interference suppression technique based on excision (notch) filtering is proposed and is shown by computer simulation to improve ensemble average bit error rates to about 0.001 for BPSK-modulated OFDM with signal-to-interference ratios as low as -30 dB.",2006,0, 6479,Research and Development of Thermal Error Compensation Embedded in CNC System,"The effective compensation of the machining error caused by machine tool thermal deformation is an important way of improving machining efficiency in CNC systems. External thermal error compensation methods are mostly adopted because of the closed nature of conventional CNC systems. In order to solve the problem of embedding a real-time thermal error compensation function in the CNC system, a concentrating approach is introduced through integral design. In this concentrating manner, the thermal deformation error of the X-axis screw of a THK6370 horizontal machining centre is modeled and analyzed. Not only is the embedded real-time thermal error compensation function realized in a completely open CNC system, but on-line real-time thermal error compensation can also be executed. Finally, the experiment validates the concentrating mode, and effective compensation of the thermal deformation error can be implemented on the THK6370 horizontal machining centre X-axis screw.",2010,0, 6480,Fault tolerance for an embedded wormhole switched network,"The effectiveness of parallel and distributed systems depends heavily upon the reliability and efficiency of the method used for information transfer. To satisfy these requirements, the communication medium must supply fault tolerance throughout the communication layers, but should minimise operational overheads. The work described relates to a scalable communication system for a distributed-memory parallel processing architecture, which is constructed with message routing switches. The system employs a hardware mechanism that is local to each physical connection, which provides a distributed solution for fault detection and isolation. By isolating faults and the use of adaptive routing algorithms, networks may be designed that will maintain operability in the presence of faults. An explanation of the basic switch and fault isolation mechanism is provided.
The paper concludes with implementation details of the operational hardware and details of the environment in which it has been tested.",2000,0, 6481,Error rates for hybrid SC/MRC systems on Nakagami-m channels,"The efficacy of a hybrid M/L-SC/MRC receiver structure (also known as generalized selection combining) in a variety of fading environments is analyzed by deriving considerably simpler expressions for the statistics (i.e., moment generating function (MGF) and cumulative distribution function (CDF)) of the combiner output signal-to-noise ratio (SNR) on Nakagami-m channels with arbitrary parameters. Different from previous studies, these results hold for arbitrary orders of M and L, as well as for any real value of the fading severity index m⩾0.5. A simple procedure for deriving an exact closed-form expression for the MGF of SNR when the fading index assumes a positive integer m value is also outlined. These MGFs are then used to derive the average symbol error probability (ASEP) for a broad class of binary and M-ary modulations employing a coherent SC/MRC receiver. Analytical expressions for computing the outage rate of error probability and the average combined output SNR are also derived. Finally, computationally efficient but approximate solutions for the MGF of SNR are presented.",2000,0, 6482,An Interferometric Wave Front Sensor for Measuring Post-Coronagraph Errors on Large Optical Telescopes,"The Gemini Planet Imager (GPI) [B. Macintosh et al.], now in the early stages of development, is a ground-based extreme adaptive optics system with an advanced coronagraphic system and integral-field spectrometer. At commissioning in early 2011, it will be deployed on one of the twin eight meter Gemini Telescopes. This powerful instrument, which works at a science wavelength in the near-infrared, will enable the direct detection and characterization of self-luminous Jupiter-class planets from the ground. Semi-static and non-common path wave front errors that are not sensed by the active wave front sensor in the adaptive optics system will lead to a focal plane speckle pattern that will mask exo-planets. The GPI Instrument will incorporate an interferometric wave front sensor, designed and developed at JPL, which will measure these errors. This talk will emphasize this novel sensor and describe how it is used to measure the non-common path amplitude and phase errors in the system that would otherwise limit the achievable contrast. We will describe the system error budget as well as simulations that model the system performance. Finally, we will also discuss the status of our laboratory testbed that is designed to test the fundamental principles of post-coronagraph wave front sensing. This system promises a rich combination of interferometry and large optical systems in support of cutting-edge science research.",2007,0, 6483,Fault-Tolerant Earliest-Deadline-First Scheduling Algorithm,The general approach to fault tolerance in uniprocessor systems is to maintain enough time redundancy in the schedule so that any task instance can be re-executed in the presence of faults during execution. In this paper a scheme is presented to add sufficient and efficient time redundancy to the earliest-deadline-first (EDF) scheduling policy for periodic real-time tasks. This scheme can be used to tolerate transient faults during the execution of tasks.
We describe a recovery scheme which can be used to re-execute tasks in the event of transient faults and discuss conditions that must be met by any such recovery scheme. For performance evaluation of this idea, a tool is developed.,2007,0, 6484,Roundoff errors in fixed-point FFT,"The general assumptions made about roundoff noise are that its samples form a white sequence, and that they are uniformly distributed between ±q/2, where q is the size of the LSB. While this is often true, strange cases may appear, e.g. misleading peaks can occur in the spectrum. This paper investigates the roundoff error of the fixed-point FFT. It reproduces the results of Welch (1969) with modern tools, points at an error in his simulations, and investigates the consequences of the violation of the assumption for almost pure sine waves. The maximum amplitude of spurious peaks is determined and the reduced dynamic range is given.",2009,0, 6485,Model of Reliability of the Software with Coxian Distribution of Length of Intervals between the Moments of Detection of Errors,"The generalized software reliability model based on a nonstationary Markovian service system is proposed. Approximation by the Cox distribution allows investigating software reliability growth for any kind of distribution of the time between error detection moments and exponential distributions of the correction time. The model allows forecasting important characteristics: the number of corrected and uncorrected errors, the required debugging time, etc. The state transition diagram of the generalized model and the system of differential equations are presented. An example calculation using the proposed model is considered, and the influence of the variation coefficient of the Cox distribution of the interval duration between error detection moments on the predicted characteristics is investigated.",2010,0, 6486,On the Error Elimination for Multi-Axis CNC Machining,"The geometrical accuracy of a machined feature on a workpiece during machining processes is mainly affected by the kinematic chain errors of multi-axis CNC machines, the locating precision of fixtures, and datum errors on the workpiece. It is necessary to find a way to minimize the feature errors on the workpiece. In this paper, the kinematic chain errors are transformed into the displacements of the workpiece. The relationship between the kinematic chain errors and the displacements of the position and orientation of the workpiece is developed. A mapping model between the displacements of workpieces and the datum errors and adjustments of fixtures is established. An error elimination (EE) method for the machined feature is formulated. A case study is given to verify the EE method.",2007,0, 6487,"Processors for ALOS Optical Data: Deconvolution, DEM Generation, Orthorectification, and Atmospheric Correction","The German Aerospace Center (DLR) is responsible for the development of prototype processors for PRISM and AVNIR-2 data under a contract of the European Space Agency. The PRISM processor comprises the radiometric correction, an optional deconvolution to improve image quality, the generation of a digital elevation model, and orthorectification. The AVNIR-2 processor comprises radiometric correction, orthorectification, and atmospheric correction over land.
Here, we present the methodologies applied during these processing steps as well as the results achieved using the processors.",2009,0, 6488,Predicting identification errors in a multibiometric system based on ranks and scores,"The goal of a biometric identification system is to determine the identity of the input biometric data. In such a system, the input probe (e.g., a face image) is compared against the labeled gallery data (e.g., face images in a watch-list), resulting in a set of ranked scores pertaining to the different identities in the gallery database. The identity corresponding to the best score is then associated with that of the probe. The aim of this work is to predict identification errors and improve the recognition accuracy of the biometric system. The method utilizes the rank and score information generated by the identification operation in order to validate the output. Further, we demonstrate that the proposed predictor can be effectively applied in multimodal scenarios. Experiments performed on two multimodal databases show the effectiveness of our framework in improving the identification performance of biometric systems.",2010,0, 6489,Error propagation of the robotic system for liver cancer coagulation therapy,"The goal of this paper is to establish the error propagation model of the ultrasound-guided robot for liver cancer coagulation therapy, which consists of an ultrasound machine, an image-guided software subsystem, a position tracking unit and a needle-driven robot. The tumor target is transformed into the robot coordinate frame to let the robot move to the target. The transformation includes three-dimensional ultrasound construction, registration between the pre-operative model and the intra-operative physical body, and coordinate transformation from the position tracking unit to the robot. The factors affecting the system accuracy can be expressed by the sum of the target mapping error and the robot positioning error. Then, the propagation model of the target mapping error on the Euclidean motion group is established. At last, simulations of the propagation model of the target mapping error and an experiment on the system accuracy are carried out, and the results show our proposed error propagation model is efficient and the system accuracy can satisfy the needs of coagulation therapy for liver cancer.",2009,0, 6490,Reliability Estimation of Fault-Tolerant Wireless and Mobile Networks,"The environmental conditions along with user mobility obstruct the reliability analysis of mobile computing systems. The methods to provide reliability rely on a modeling approach that recognizes the effects of node mobility. Consequently, a Monte Carlo simulation method is proposed in this paper that accounts for node mobility and node reliability, and hence predicts the resultant all operating terminal reliability of the network as well as coverage. We have also considered recoverable faults at nodes, as the computing potential of the wireless network is often hampered by failures. This approach also covers imperfect nodes and their handoff rates, and uses tolerance of the disconnection interval to distinguish between disconnection due to node failure and disconnection due to node movement (moving outside the network). The Smooth Random Mobility Model is used to estimate node location in time, showing the effect on overall system reliability. This paper discusses a reliability estimation model to calculate the all operating terminal reliability (AOTR) and network coverage of fault-tolerant wireless networks.
We have shown that the AOTR is a function of time, node mobility, and network infrastructure.",2010,0, 6491,Fast calculation algorithm of the undetected errors probability of CRC codes,"The error detecting functions of linear block codes can be realized via simple software or hardware. Error detection, backed by long-term theoretical research and many good properties, is widely applied in digital communication and data storage. The weight distributions of a linear block code and its dual code are important parameters for calculating the probability Pud of undetected errors. Further, cyclic redundancy check (CRC) codes and Bose, Chaudhuri and Hocquenghem (BCH) cyclic codes are subclasses of linear block codes. This paper proposes a fast algorithm for calculating the weight distribution of the dual code which outperforms those of previous studies in time complexity, and the probability of undetected error of different CRC code standards under various codeword lengths is also simulated efficiently.",2005,0, 6492,On the error performance of 8-VSB TCM decoder for ATSC terrestrial broadcasting of digital television,"The error performance of various 8-VSB TCM decoders for reception of terrestrial digital television is analyzed. In previous work, 8-state TCM decoders were proposed and implemented for terrestrial broadcasting of digital television. In this paper, the performance of a 16-state TCM decoder is analyzed and simulated. It is shown that not only does a 16-state TCM decoder outperform one with 8 states, but it also has much smaller error coefficients.",2000,0, 6493,Mapping a group of jobs in the error recovery of the Grid-based workflow within SLA context,"The error recovery mechanism occupies an important position in systems supporting service level agreements (SLAs) for grid-based workflows. If one sub-job of the workflow is late, a group of directly affected sub-jobs should be re-mapped in a way that does not affect the start time of other sub-jobs in the workflow and is as inexpensive as possible. With the distinguished workload and resource characteristics as well as the goal of the problem, this problem needs a new method to be solved. This paper presents a mapping algorithm which can cope with the problem. Performance measurements deliver good evaluation results on the quality and efficiency of the method.",2007,0, 6494,3D CMM strain-gauge triggering probe error characteristics modeling using fuzzy logic,The error values of CMMs depend on the probing direction; hence their spatial variation is a key part of the probe inaccuracy. This paper presents genetically-generated fuzzy knowledge bases (FKBs) to model the spatial error characteristics of a CMM module-changing probe. Two automatically generated FKBs based on two optimization paradigms are used for the reconstruction of the direction-dependent probe error w. The angles beta and gamma are used as input variables of the FKBs; they describe the spatial direction of probe triggering. The learning algorithm used to generate the FKBs is a real/binary-like coded genetic algorithm developed by the authors. The influence of the optimization criteria on the precision of the genetically-generated FKBs is presented.",2008,0, 6495,Predicting error floors of structured LDPC codes: deterministic bounds and estimates,"The error-correcting performance of low-density parity check (LDPC) codes, when decoded using practical iterative decoding algorithms, is known to be close to Shannon limits for codes with suitably large blocklengths.
A substantial limitation to the use of finite-length LDPC codes is the presence of an error floor in the low frame error rate (FER) region. This paper develops a deterministic method of predicting error floors, based on high signal-to-noise ratio (SNR) asymptotics, applied to absorbing sets within structured LDPC codes. The approach is illustrated using a class of array-based LDPC codes, taken as exemplars of high-performance structured LDPC codes. The results are in very good agreement with a stochastic method based on importance sampling which, in turn, matches the hardware-based experimental results. The importance sampling scheme uses a mean-shifted version of the original Gaussian density, appropriately centered between a codeword and a dominant absorbing set, to produce an unbiased estimator of the FER with substantial computational savings over a standard Monte Carlo estimator. Our deterministic estimates are guaranteed to be a lower bound to the error probability in the high SNR regime, and extend the prediction of the error probability to as low as 10⁻³⁰. By adopting a channel-independent viewpoint, the usefulness of these results is demonstrated for both the standard Gaussian channel and a channel with mixture noise.",2009,0, 6496,Cross-layer fault tolerant data aggregation for improved network delay in healthcare management applications,"The escalation of American health care costs compels a new approach to manage chronic diseases. Wireless sensor networks (WSN) have been applied successfully in remote monitoring in military, aerospace, civil structure, and healthcare applications. However, existing wireless network frameworks cannot provide the required quality of service (QoS) for personalized disease management applications, due to communication device failure and message loss caused by link errors, collisions, and hidden terminals. In this paper, we present a scalable network architecture and an operating mechanism that tolerates network structure changes caused by failure, with the application-level data aggregation algorithm able to heal from the failure. We provide closed-form solutions that can achieve optimized network delay. Performance analysis was done to evaluate the significance of different nodes' failures in both homogeneous and heterogeneous sensor networks and the effects of sensing and communication speed on failure impact in heterogeneous sensor networks.",2009,0, 6497,Using the Number of Faults to Improve Fault-Proneness Prediction of the Probability Models,"The existing fault-proneness prediction methods are based on unsampling, and the training dataset does not contain information on the number of faults in each module or the fault distributions among these modules. In this paper, we propose an oversampling method using the number of faults to improve fault-proneness prediction. Our method uses the information on the number of faults in the training dataset to support better prediction of fault-proneness. Our test illustrates that the difference between the predictions of oversampling and unsampling is statistically significant and that our method can improve the prediction of two probability models, i.e. logistic regression and naive Bayes with kernel estimators.",2009,0, 6498,Dependability of CORBA systems: service characterization by fault injection,"The dependability of CORBA systems is a crucial issue for the development of today's distributed platforms and applications. This paper analyzes various techniques that can be applied to the dependability evaluation of CORBA systems.
Due to the complexity of a middleware platform like CORBA and its various types of software components, experiments using several fault injection techniques are required to obtain comprehensive dependability benchmarks. To illustrate one of these techniques, we have applied fault injection at the communication level, targeting requests to major CORBA services, such as naming and events. Experiments have been carried out on a number of off-the-shelf implementations of CORBA. We present and discuss some of the results that we have obtained. They provide objective insights into the system's behaviour in the presence of faults, and are significant inputs for the selection of a candidate for a given application domain.",2002,0, 6499,The effect of high resistance faults on a distance relay,"The design and operating behavior of a distance relay suitable for the protection of a transmission feeder is described in the paper. The relay uses a Fourier filter to derive the voltage and current phasors and a digital measuring technique that combines symmetrical components and the complex differential equation of the fault loop circuit. The relay was simulated using the model language in PSCAD/EMTDC. The operating behavior of the relay was assessed using two different simulated networks with both solid and resistive faults. The data generated by PSCAD/EMTDC describes the voltage and current signals at the relay location both immediately before and during the fault. The signals include the DC offset and the effects of high frequency traveling waves. The data is applied to the relay simulator, which then evaluates whether the impedance trajectory of the fault enters one or more of the operating zones. The results are presented in graphical form using an R-X diagram.",2003,0, 6500,Validating the dependability of embedded systems through fault injection by means of loadable kernel modules,"The design of complex embedded systems deployed in safety-critical or mission-critical applications mandates the availability of methods for validating the system dependability across the whole design flow. In this paper we introduce a fault-injection approach based on loadable kernel modules which can be adopted as soon as a running prototype of the system is available. Moreover, in order to decouple dependability analysis from hardware availability, we propose to adopt hardware virtualization for building virtual prototypes. Extensive experimental results are reported showing that dependability analyses made using virtual prototypes closely match those performed on physical prototypes.",2007,0, 6501,Fault-robust microcontrollers for automotive applications,"The design space that a system architect should manage when designing a microcontroller for a safety related system is rather large due to the variety of faults that can affect the given equipment under control (EUC), the different failures that these faults can generate and the wide set of techniques that can be used to detect, confine or stop the resulting hazards, each one with its efficiency and cost.
In this paper a systematic platform-based approach is proposed, in which a library of blocks (HW and SW) is used together with a set of tools and methodologies to find the optimum solution in this design space, following the IEC61508 guidelines.",2006,0, 6502,Prony-Based Optimal Bayes Fault Classification of Overcurrent Protection,"The development of deregulation and demand for high-quality electrical energy has led to new requirements in different fields of power systems. In the protection field, this means that high sensitivity and fast operation during the fault are required while maltripping of relay protection is not acceptable. One case that may lead to a maltrip of the highly sensitive overcurrent relay is the starting current of the induction motor or the inrush current of the transformer. This transient current has the potential to affect the correct operation of protection relays close to the component being switched. In the case of switching events, such transients must not lead to overcurrent relay operation; therefore, a reliable and secure relay response becomes a critical matter. Meanwhile, proper techniques must be used to prevent maltripping of such relays, due to transient currents in the network. In this paper, the optimal Bayes classifier is utilized to develop a method for discriminating the fault from nonfault events. The proposed method has been designed based on extracting the modal parameters of the current waveform using the Prony method. By feeding the fundamental frequency damping and the ratio of the 2nd harmonic amplitude over the fundamental harmonic amplitude to the classifier, the fault case is discriminated from the switching case. The suitable performance of this algorithm is demonstrated by simulation of different faults and switching conditions on a power system using PSCAD/EMTDC software.",2007,0, 6503,A Heuristic Approach for Predicting Fault Locations in Distribution Power Systems,"The first step in restoring systems after a fault is detected is determining the fault location. The large number of candidate locations for the fault makes this a complex process. Knowledge-based methods have the capability to accomplish this quickly and reliably. In this paper, a heuristic approach has been used to predict potential fault locations. A software tool implements the heuristic rules and a genetic algorithm based search. The implementation and evaluation results of this tool have been presented.",2006,0, 6504,Overflow Detection and Correction in a Fixed-Point Multiplier,"The fixed-point binary representation, an integer format with an implied binary point, is an alternative to the IEEE floating-point binary layout. Systems that do not support the IEEE floating-point format, e.g., mobile devices, use the fixed-point format because it fits well into integer data paths whereas floating-point requires its own data path. Software developers who port to fixed-point systems often face issues when balancing range and precision. Those issues, overflow and large rounding errors, often arise from arithmetic operations, making debugging more difficult. The proposed solution limits hardware support to a set of fixed-point formats and adjusts the format of the output based on the user-supplied format and overflow. The format of the result is readjusted on overflow in order to return a useful result but with the sacrifice of precision.
In addition to the corrected result, the overflow flag is raised so the software and subsequent logic are aware of the readjustment in the result's format. This work has been implemented in a fixed-point multiplier because multiplication yields the largest overflow among the four basic arithmetic operations. In order to detect overflow early, the fixed-point multiplier adopts preliminary overflow detection. With the idea of taking the burden of fixed-point scaling off of the programmer, the fixed-point multiplier with overflow detection and correction provides a starting point towards mitigating fixed-point errors.",2007,0, 6505,Active Network Fault Response,"The flexibility and power achieved by using active networks come with their own risks - any fault in the active code or the security infrastructure now represents a fault in the network as a whole. Secure containment of active code is necessary in order to ameliorate this risk. The Active Network Fault Response project has developed and implemented innovative approaches to respond to faults in the active code as well as faults in the security infrastructure of an active network. Diverse authentication techniques, which provide fail-over, and compensatory authentication techniques, which provide substitutes, furnish effective responses when some component of the security infrastructure is unavailable. An active code revocation capability provides for secure containment of faulty active code within the active network.",2003,0, 6506,Fault and intrusion tolerance of wireless sensor networks,"The following three questions should be answered when developing a new topology with a more powerful ability to tolerate node failure in wireless sensor networks. First, what is the node-failure tolerance of a topology? Second, how can this tolerance ability be evaluated? Third, which type of topology is more efficient in tolerating node failure? Without giving the answers, the existing work regards a fault-tolerant topology as a multiply connected graph and uses the connectivity of the graph as the standard to evaluate tolerance ability. In this paper, we argue that the fault tolerance of a topology is not equivalent to the connectivity of a multiply connected graph by illustrating two concrete examples. Then the definition of node-failure tolerance is presented. According to fault and intrusion, the two sources of failure nodes, we define fault tolerance and intrusion tolerance as the standards to evaluate the tolerance ability of topologies, and analyze the tolerance performance of the hierarchical structure of wireless sensor networks by using these standards. Finally, the functional relation between a hierarchical topology and its fault and intrusion tolerance abilities is obtained, and an obvious corollary is that fault tolerance increases with the ratio of cluster heads in the hierarchical structure, while intrusion tolerance decreases.",2006,0, 6507,Higher-order corrections to the pi criterion using center manifold theory,"The frequency-dependent pi criterion of Bittanti et al. has been used extensively in applications to predict potential performance improvement under periodic forcing in a nonlinear system. The criterion, however, is local in nature and is limited to periodic forcing functions of small magnitude. The present work develops a method to determine higher-order corrections to the pi criterion, derived from basic results of center manifold theory. The proposed method is based on solving the center manifold PDE via recursive Taylor series.
The advantage of the proposed approach is the improvement of the accuracy of the pi criterion in predicting performance under larger amplitudes. The proposed method is applied to a continuous stirred tank reactor, where the yield of the desired product must be maximized.",2002,0, 6508,The prediction of fault currents in a large multiwinding reactor transformer,"The fault currents occurring in power transformers are determined by the leakage reactances within the windings. Where the transformers are used in a phase-shifting mode, there is additional coupling between phases which influences the fault current. This work describes the modelling of the self and mutual inductances in a 90 MVA autotransformer with a tertiary winding, on the assumption that the airgap formed by the transformer window dictates the reluctance of the leakage flux paths. Recordings made during a short-circuit between two phases of the tertiary winding show a remarkably close comparison with the predicted waveforms.",2003,0, 6509,A novel online diagnosis of brushless generator rotary rectifier fault,"The fault detection of the rotary rectifier based on harmonic analysis has some deficiencies. A new fault diagnosis method is presented using fractal theory and dynamics. Firstly, the quantitative description of the exciter field current's complexity and irregularity is performed by box dimension calculation. Then the exciter field current fluctuation range under noisy environments is obtained by dynamics. Finally, the fault detection and identification of the rotary rectifier is implemented through the fractal dimension-signal fluctuation range trajectory area. The analysis results of testing waveforms show that box dimension and dynamics can analyze the waveform characteristics of the exciter's field current synthetically; the diagnosis effect is remarkable. Because the algorithm has a light computational load, it is more feasible to realize online fault diagnosis.",2008,0, 6510,Fault Tolerant PID Control based on Software Redundancy for Nonlinear Uncertain Processes,"The fault diagnosis and closed-loop tolerant PID control for nonlinear multi-variable systems under multiple sensor failures are investigated in the paper. A complete FDT architecture based on software redundancy is proposed to efficiently handle the fault diagnosis and the accommodation of multiple sensor failures in online situations. The method combines the adaptive threshold technique with the envelope and weighted moving average residual to detect multi-type sensor faults, uses fault propagation, variable structure analysis and neural network techniques to reconstruct sensor signals online, and achieves tolerant PID control through recombining the feedback loop of the PID controller. A three-tank system with multiple concurrent sensor faults is simulated; the simulation results show that the fault detection and tolerant control strategy has strong robustness and fault tolerance capability.",2006,0, 6511,Fault Diagnosis Based on Bond Graph for Feedwater,"The fault tree analysis method based on bond graph for a feed water pump is introduced in this paper. Using a knowledge representation of bond graph modeling, which includes system structural, functional and behavioral information and their relations, fault tree based cause-effect reasoning is created by assigning qualitative values of parameters. Multiple fault hypotheses are employed to simplify branches of the fault tree.
The simulation demonstrates that the bond graph fault diagnosis method is effective, correct and flexible.",2007,0, 6512,Fault Tolerant Control Research for High-Speed Maglev System with Sensor Failure,"Sensor faults occurring in the Maglev train's suspension system reduce the running performance and can even disable the suspension system. Especially in the high-speed Maglev train, the loss is unpredictable. This paper focuses on the fault tolerant control problem of the suspension system. The methods of control strategy reconstruction and state estimation were adopted. Based on the model of the simplified suspension system, the reconstructed controller and state estimator were designed. Furthermore, the simulation analysis for the static and dynamic levitation process was done in Simulink. It is clearly observed that the FTC scheme can keep the static and dynamic performance in conformity with the original system. At last, a new FTC index used to judge the FTC problem of the unstable system is brought forward.",2006,0, 6513,Analysis and suppression for impact fault of hybrid machine tool based on preview control,"The impact fault of the hybrid machine tool was considered as the analysis object in this paper. The kinematics control principle of the hybrid machine tool was given as well as the causes of the impact based on the hybrid structure. The impact suppression method was given, which is based on the preview-control algorithm. It was confirmed by experiment that the impact was reduced, which is accounted for by the application of the preview-control algorithm, and that the performance of the control strategy was improved.",2010,0, 6514,Fault Injection and Simulation for Fault Tolerant Reconfigurable Duplex System,"The implementation and the fault simulation technique for a highly reliable digital design using two FPGAs under processor control are presented. Two FPGAs are used for the duplex system design, each including a combination of totally self-checking blocks based on parity predictors to obtain better dependability parameters. Combinatorial circuit benchmarks have been considered in all our experiments and computations. A Totally Self-Checking analysis of the duplex system is supported by experimental results from our proposed FPGA fault simulator, where SEU-fault resistance is observed. Our proposed hardware fault simulator is also compared with software simulation. The area overhead of individual parts implemented in each FPGA is also discussed.",2007,0, 6515,Implementation and verification of the Amplitude Recovery Method algorithm with the faults diagnostic system on induction motors,"The implementation and verification of the amplitude recovery method algorithm (A.R.M.), which was first presented in ICEMS2008, are presented in this paper. The mathematical deduction, application and test results show that the A.R.M. can directly extract the energy of the harmonics of other orders (including high orders and fractional orders) in the tested original signals of the three-phase stator currents of induction motors even though the harmonic elements are much smaller than the fundamental one.",2009,0, 6516,An Investigation into the Functional Form of the Size-Defect Relationship for Software Modules,"The importance of the relationship between size and defect proneness of software modules is well recognized. Understanding the nature of that relationship can facilitate various development decisions related to prioritization of quality assurance activities.
Overall, the previous research only drew a general conclusion that there was a monotonically increasing relationship between module size and defect proneness. In this study, we analyzed class-level size and defect data in order to increase our understanding of this crucial relationship. In order to obtain validated and more generalizable results, we studied four large-scale object-oriented products, Mozilla, Cn3d, JBoss, and Eclipse. Our results consistently revealed a significant effect of size on defect proneness; however, contrary to common intuition, the size-defect relationship took a logarithmic form, indicating that smaller classes were proportionally more problematic than larger classes. Therefore, practitioners should consider giving higher priority to smaller modules when planning focused quality assurance activities with limited resources. For example, in Mozilla and Eclipse, an inspection strategy investing 80% of available resources on 100-LOC classes and the rest on 1,000-LOC classes would be more than twice as cost effective as the opposite strategy. These results should be immediately useful to guide focused quality assurance activities in large-scale software projects.",2009,0, 6517,Analysis of Transformer Type Superconducting Fault Current Limiters,"The inductive type SFCLs and the transformer type SFCLs need iron cores. In both, the fault current is transformed from the primary winding to the secondary winding. The difference is in the secondary windings. The superconducting materials for every kind of SFCL should have both a high value of resistivity in the resistive state, ρτ, and a high critical current density, Jc. The paper shows that the transformer SFCL can be made using every type of commercial HTS element. The inductive SFCLs need HTS materials with a very large value of the ρτJc parameter.",2007,0, 6518,Congestion controllers for high bandwidth connections with fiber error rates,"The inefficiency of a TCP connection in the presence of high bandwidth links due to the constant multiplicative decrease factor has been well documented in recent literature. In this paper we look at the effect of fiber error rates on the throughput of a TCP connection. We propose a congestion controller that removes the ill-effects of fiber error rates on TCP throughput by lower bounding the marking probability. We show that this congestion controller can achieve extremely high utilizations in high bandwidth links. We also discuss the TCP friendliness of this congestion controller and present simulation results that validate our analysis.",2004,0, 6519,Emulation of Software Faults: A Field Data Study and a Practical Approach,"The injection of faults has been widely used to evaluate fault tolerance mechanisms and to assess the impact of faults in computer systems. However, the injection of software faults is not as well understood as other classes of faults (e.g., hardware faults). In this paper, we analyze how software faults can be injected (emulated) in a source-code independent manner. We specifically address important emulation requirements such as fault representativeness and emulation accuracy. We start with the analysis of an extensive collection of real software faults. We observed that a large percentage of faults falls into well-defined classes and can be characterized in a very precise way, allowing accurate emulation of software faults through a small set of emulation operators.
A new software fault injection technique (G-SWFIT) based on emulation operators derived from the field study is proposed. This technique consists of finding key programming structures at the machine code-level where high-level software faults can be emulated. The fault-emulation accuracy of this technique is shown. This work also includes a study on the key aspects that may impact the technique accuracy. The portability of the technique is also discussed and it is shown that a high degree of portability can be achieved.",2006,0, 6520,Analysis of mechanical stresses developed on T-G shaft during faults,"The instability of an interconnected power system means a condition denoting loss of synchronism or falling out of phase. Stability considerations have been recognized as an essential part of power system planning for a long time. With interconnected systems continually growing in size and extending over vast geographical regions, it is becoming increasingly more difficult to maintain synchronism between various parts of the power system. It is found that, due to a transient disturbance, the generator speeds up, causing the rotor angle to increase and resulting in the development of a varying torque on the rotor of the generator. This varying torque develops varying stresses on the rotor shaft which, when they cross the endurance limit, i.e. 45 × 10^7 N/m2, lead to damage of the shaft. In this paper a strategy is presented which seeks the coordination of generator dropping, fast valving and resynchronising the dropped generator by 120° phase rotation. The implementation of this strategy will enable more effective use of generator dropping without the load shedding which is normally associated with generator dropping.",2007,0, 6521,GPRS-based fault monitoring for distribution grid,"GPRS is a useful means to solve the communication problem in the automation system of the power distribution network. This paper designs a GPRS-based real-time system to monitor the on-off states of the switching station in the power distribution system. The system consists of a low-power MSP430F149 controller, the GR64 module from WAVECOM company and detection circuits for the on-off states. The IEC60870-5-101 protocol is used in the system. Therefore, it can conveniently communicate with other SCADA systems. It runs reliably and stably on site.",2010,0, 6522,A decomposition approach to the inverse problem-based fault diagnosis of liquid rocket propulsion systems,"The health monitoring of propulsion systems has been one of the most challenging issues in space launch vehicles, particularly for manned space missions. The development of an advanced health monitoring system involves many technical aspects, such as failure detection and fault diagnosis as well as the integration of hardware and algorithms, for improving the safety and reliability of propulsion systems. The inverse problem-based strategy provides a new solution to the design of model-based fault diagnosis methods for monitoring the health of propulsion systems. This paper presents a decomposition approach to the inverse problem-based fault diagnosis for a class of liquid rocket propulsion systems.
Simulation results are provided for demonstrating the effectiveness of the proposed approach to the inverse problem-based fault diagnosis.",2004,0, 6523,Fault Detection in Distributed Systems by Representative Subspace Mapping,"The high dimensionality of system observation, together with the frequent changes of system normal behavior resulting from workload variations, makes fault detection very difficult in distributed computing systems. This paper addresses these issues by proposing a novel statistical technique, the principal canonical correlation analysis (PCCA), and applying it to monitor the system in a supervised manner. Given a set of input variables u and system measurements x, PCCA extracts a subspace x̃ from x that is not only highly correlated with the input u, but also a significant representative of the whole distribution of x. Such a property of PCCA, which combines the strengths of both PCA and CCA, is beneficial to the fault detection task. Experimental results from a real e-commerce system based on the multi-tiered J2EE architecture demonstrate the effectiveness of PCCA.",2006,0, 6524,Concurrent monitoring of PCI bus transactions for timely detection of errors initiated by FPGA-based applications,"The integration of FPGAs as co-processing engines in various dependable workstations makes the real-time detection of errors an important issue. Thus, concurrent error detection components should enforce the operation of such systems, in order to prevent the propagation of errors within the system. The functionality of such monitoring modules should not interfere with the performance of the workstation and must ensure high system availability. An embedded monitoring component with concurrent error detection features was implemented to illustrate the benefits of this approach. The monitor checks for PCI protocol and application errors based on the forensic analysis of a workstation with embedded FPGA-based co-processing support.",2007,0, 6525,Advanced Cu CMP defect excursion control for leading edge micro-processor manufacturing,"The introduction of yield sensitive, advanced interconnect technology, coupled with the requirement for accelerating yield ramp in today's state-of-the-art semiconductor manufacturing facilities, is driving tool monitoring requirements for fast and accurate defect excursion control. In the Copper CMP module the challenge is accentuated by the relative immaturity of this process, the dominance of single wafer excursions and a high count of nuisance defect types relative to the critical yield-limiting defect types. A manufacturing-worthy Copper CMP tool monitor methodology is described here that improves excursion control through detection and tracking of critical, yield-limiting defect types, independent of non-yield-critical nuisance defect types. High-resolution automatic defect review and classification, a critical component of the methodology, is limited to wafers with high critical-defect counts, reducing monitoring cost and time-to-results. A new trigger sampling feature and intelligent image sampling reduces monitoring cost and time-to-results through minimizing defect review overhead. Integration of such a solution into the manufacturing environment is presented in detail and contrasted with the existing traditional defect excursion control model. Ease-of-use considerations are highlighted with use case examples.
The paper will approximate the cost savings to manufacturing, such as reducing existing levels of false excursion due to nuisance defects and improving the cycle time in the Cu CMP module. Benefits are achieved by integrating functionality into existing inspection hardware. No additional capital equipment was required.",2002,0, 6526,Visual Assessment for The Quantization Error in Wavelet Based Monochrome Videos,"The investigation of the discrete wavelet transform (DWT) based video coder is still ongoing in the literature. One of the open problems to be solved is the perception of the quantization noise in different subbands in the DWT domain. This is a critical issue for the development of a better motion compensation (MC) scheme. An experiment and relevant results analysis are presented in this paper to address the above issue. Monochrome video sequences of natural scenes are used in the experiment; therefore, the so-called masking effects can be taken into account when deciding the sensitivity to the noise hidden in the DWT domain. The preliminary results show that the most sensitive subbands are those in the lowest three resolution levels under a five-level decomposition scheme. The further analysis proves that the distribution of the sensitivity to each individual subband has been shifted by the context of the video.",2005,0, 6527,Optimizing Issue Queue Reliability to Soft Errors on Simultaneous Multithreaded Architectures,"The issue queue (IQ) is a key microarchitecture structure for exploiting instruction-level and thread-level parallelism in dynamically scheduled simultaneous multithreaded (SMT) processors. However, exploiting more parallelism yields high susceptibility to transient faults on a conventional IQ. With the rapidly increasing soft error rates, the IQ is likely to be a reliability hot-spot on SMT processors fabricated with advanced technology nodes using smaller and denser transistors with lower threshold voltages and tighter noise margins. In this paper, we explore microarchitecture techniques to optimize IQ reliability to soft error on SMT architectures. We propose to use off-line instruction vulnerability profiling to identify reliability critical instructions. The gathered information is then used to guide reliability-aware instruction scheduling and resource allocation in multithreaded execution environments. We evaluate the efficiency of the proposed schemes across various SMT workload mixes. Extensive simulation results show that, on average, our microarchitecture level soft error mitigation techniques can significantly reduce IQ vulnerability by 42% with 1% performance improvement. To maintain runtime IQ reliability for pre-defined thresholds, we propose dynamic vulnerability management (DVM) mechanisms. Experimental results show that our DVM techniques can effectively achieve desired reliability/performance tradeoffs.",2008,0, 6528,Detecting computer-induced errors in remote-sensing JPEG compression algorithms,"The JPEG image compression standard is very sensitive to errors. Even though it contains error resilience features, it cannot easily cope with induced errors from computer soft faults prevalent in remote-sensing applications. Hence, new fault tolerance detection methods are developed to sense the soft errors in major parts of the system while also protecting data across the boundaries where data flow from one subsystem to the other. The design goal is to guarantee no compressed or decompressed data contain computer-induced errors without detection.
Detection methods are expressed at the algorithm level so that a wide range of hardware and software implementation techniques can be covered by the fault tolerance procedures while still maintaining the JPEG output format. The major subsystems to be addressed are the discrete cosine transform, quantizer, entropy coding, and packet assembly. Each error detection method is determined by the data representations within the subsystem or across the boundaries. They vary from real number parities in the DCT to bit-level residue codes in the quantizer, cyclic redundancy check parities for entropy coding, and packet assembly. The simulation results verify detection performances even across boundaries while also examining roundoff noise effects in detecting computer-induced errors in processing steps.",2006,0, 6529,A better way to handle instrument error checking,"The LabWindows/CVI™ ""C"" language compiler has no built-in method of handling errors that occur in functions or drivers. The usual method is to use an ""if"" statement for every function call made, but this is very tedious and generates a lot of code. Often programs simply ignore errors and keep on going and depend on an eventual test failure to prevent acceptance of the item being tested. This leads to the wrong conclusion about what really went wrong. Other languages such as C++ have ""try"", ""catch"" and ""throw"" for exception handling. This paper explores several methods of handling the problem with much less code.",2005,0, 6530,Fault-tolerant communication runtime support for data-centric programming models,"The largest supercomputers in the world today consist of hundreds of thousands of processing cores and many more other hardware components. At such scales, hardware faults are commonplace, necessitating fault-resilient software systems. While different fault-resilient models are available, most focus on allowing the computational processes to survive faults. On the other hand, we have recently started investigating fault resilience techniques for data-centric programming models such as the partitioned global address space (PGAS) models. The primary difference in data-centric models is the decoupling of computation and data locality. That is, data placement is decoupled from the executing processes, allowing us to view process failure (a physical node hosting a process is dead) separately from data failure (a physical node hosting data is dead). In this paper, we take a first step toward data-centric fault resilience by designing and implementing a fault-resilient, one-sided communication runtime framework using Global Arrays and its communication system, ARMCI. The framework consists of a fault-resilient process manager; a low-overhead and network-assisted remote-node fault detection module; non-data-moving collective communication primitives; and failure semantics and error codes for one-sided communication runtime systems. Our performance evaluation indicates that the framework incurs little overhead compared to state-of-the-art designs and provides a fundamental framework of fault resiliency for PGAS models.",2010,0, 6531,Neutron Soft Errors in Xilinx FPGAs at Lawrence Berkeley National Laboratory,The Lawrence Berkeley National Laboratory cyclotron offers broad-spectrum neutrons for single event effects testing.
We discuss results from this beamline for neutron soft upsets in Xilinx Virtex-4 and -5 field-programmable-gate-array (FPGA) devices.",2008,0, 6532,Range Non-linearities Correction in FMCW SAR,"The limiting factor to the use of Frequency Modulated Continuous Wave (FMCW) technology with Synthetic Aperture Radar (SAR) techniques to produce lightweight, cost effective, low power consuming imaging sensors with high resolution, is the well known presence of non-linearities in the transmitted signal. This results in contrast and range resolution degradation, especially when the system use is intended for long range applications, as is the case for SAR. The paper presents a novel processing solution, which completely solves the non-linearity problem. It corrects the non-linearity effects for the whole range profile at once, differently from the algorithms described in the literature so far, which work only for very short range intervals. The proposed method operates directly on the deramped data and it is very computationally efficient.",2006,0, 6533,Ephemeris Type A fault analysis and mitigation for LAAS,"The Local Area Augmentation System (LAAS) has been developed by the FAA to enable precision approach and landing operations using the Global Positioning System (GPS). Each LAAS installation provides services through a LAAS Ground Facility (LGF) which is located at the airport it serves. By monitoring the GPS signals, measurements, and navigation messages, the LGF is able to exclude unhealthy satellites and broadcast real-time range-correction messages for healthy satellites to users via a VHF data link. Airborne users apply these corrections to remove errors that are common between the LGF and the aircraft. The LGF is also responsible for warning the aircraft of any potential integrity threats that cannot easily be resolved by excluding unhealthy satellites. One source of potential errors is the satellite broadcast ephemeris message, which users decode and use to compute GPS satellite positions. In LAAS, potential GPS ephemeris faults are categorized into two types, A and B, based upon whether or not the fault is associated with a satellite maneuver. This work focuses on aviation navigation threats caused by Type A faults. To detect and mitigate these threats, we investigate two LGF monitors based on comparing expected ranges and range rates (based on broadcast ephemeris) with those measured by the LGF. The effectiveness of these monitors is analyzed and verified in this paper.",2010,0, 6534,Error Separation and Compensation of Inductosyn Angle Measuring System,"The long period error and short period error of the inductosyn have been studied. The study result is that the long period error mainly includes the first-order and second-order errors in the period of 360°, and the short period error mainly includes the first, second, third and fifth harmonic errors, and so on. A novel model of error separation and compensation is firstly presented according to the error characteristics of the inductosyn. The Fourier transform is compared with least-squares in many aspects. The implementation method of the error separation based on least-squares is also discussed in detail. One new measuring method is proposed, which can use fewer test positions to attain the long period error and the short period error.
The experiments show that the error compensation method can improve the precision of the inductosyn.",2010,0, 6535,Analytical Modeling Approach to Detect Magnet Defects in Permanent-Magnet Brushless Motors,The paper presents a novel approach to detect magnet faults such as local demagnetization in brushless permanent-magnet motors. We have developed a new form of analytical model that solves the Laplacian/quasi-Poissonian field equations in the machine's air-gap and magnet element regions. We verified the model by using finite-element software in which demagnetization faults were simulated and electromotive force was calculated as a function of rotor position. We then introduced the numerical data of electromotive force into a gradient-based algorithm that uses the analytical model to locate demagnetized regions in the magnet as simulated in the finite-element package. The fast and accurate convergence of the algorithm makes the model useful in magnet fault diagnostics.,2008,0, 6536,Fault treatment with net condition/event systems: a first approach,"The paper presents a preliminary report on modeling parts of a modular production system, their dedicated controllers, and the appropriate methods of fault treatment on the level of net condition/event systems (NCES). To achieve practicability, NCES support a systematic and modular way of modeling more complex systems as well as concurrent and non-deterministic behavior which is highly beneficial for modeling and control of DES in failure situations as studies of existing methods show.",2001,0, 6537,A Hough transform-based method for radial lens distortion correction,"The paper presents an approach for a robust (semi-)automatic correction of radial lens distortion in images and videos. This method, based on the Hough transform, has the characteristics to be applicable also on videos from unknown cameras that, consequently, can not be a priori calibrated. We approximated the lens distortion by considering only the lower-order term of the radial distortion. Thus, the method relies on the assumption that pure radial distortion transforms straight lines into curves. The computation of the best value of the distortion parameter is performed in a multi-resolution way. The method precision depends on the scale of the multi-resolution and on the Hough space's resolution. Experiments are provided for both outdoor, uncalibrated camera and an indoor, calibrated one. The stability of the value found in different frames of the same video demonstrates the reliability of the proposed method.",2003,0, 6538,"Modeling, analysis and detection of rotor field winding faults in synchronous generators","The paper presents an approach to modeling of shorted turns in rotor field winding of synchronous generator using finite element method. It enables detailed analysis of magnetic field at several operating conditions under healthy and faulty states which are difficult or even impossible to carry out by available measurement methods in industrial environment. Modeling of field winding faults are performed for both typical generator designs - turbo and hydro, and analysis reveals some differences, which are significant for practical use in fault detection procedures. It is confirmed that an extensive analysis should be performed to assure accurate healthy/faulty state predictions, since the level of diagnostic signal is considerably influenced by a combination of many machine and operating parameters. 
The scheme of the developed on-line diagnostic system with its hardware and software concept is also presented and discussed.",2010,0, 6539,Frequency error measurement in GMSK signals in a multipath propagation environment,"The paper presents an efficient method for evaluating the carrier frequency in GMSK communication systems. This method operates in a nonintrusive way. It utilizes the learning vector quantisation neural network based demodulator for reconstructing the transmitted phases. From these and the expected phases, the carrier frequency error is estimated. The method is able to operate both in static and multipath propagation cases and it does not require a high frequency sampling rate because the base-band signal is processed. In order to apply the method two procedures, PSP (Procedure for Static Propagation) and PMP (Procedure for Multipath Propagation), are set up. Tests performed on GMSK signals show that the method is quite attractive, fast and more accurate if compared with other approaches.",2001,0, 6540,An experimental study of security vulnerabilities caused by errors,"The paper presents an experimental study which shows that, for the Intel x86 architecture, single-bit control flow errors in the authentication sections of targeted applications can result in significant security vulnerabilities. The experiment targets two well-known Internet server applications: FTP and SSH (secure shell), injecting single-bit control flow errors into user authentication sections of the applications. The injected sections constitute approximately 2-8% of the text segment of the target applications. The results show that out of all activated errors: (a) 1-2% compromised system security (creating a permanent window of vulnerability); (b) 43-62% resulted in crash failures (about 8.5% of these errors create a transient window of vulnerability); and (c) 7-12% resulted in fail silence violations. A key reason for the measured security vulnerabilities is that, in the x86 architecture, conditional branch instructions are a minimum of one Hamming distance apart. The design and evaluation of a new encoding scheme that reduces or eliminates this problem is presented.",2001,0, 6541,Correcting the influence of autocorrelated errors in linear regression models,"The paper presents a case often met in regression models: autocorrelated errors. The first part of the paper summarizes some theoretical issues about the sources of autocorrelated errors and some statistical tests to identify autocorrelation, and presents in more detail three alternatives to the classical methods for estimating parameters that are better suited to the given situation: the Cochrane-Orcutt method (with its variant, the Yule-Walker method), the Durbin method and the Hildreth-Lu method. The second part of the paper presents an example of a regression model with autocorrelated errors and uses a method for correcting the influence of the autocorrelation on the estimated parameters, using the statistical package SAS 9.1.",2010,0, 6542,"An outlook on the dynamic error ""blind"" correction for the time-varying measurement channel","The paper presents a measuring system which allows for correction of dynamic error caused by analogue signal transducers whose dynamic characteristics change at a rate comparable to the rate of change of the measured signal.
Three methods for self-identification of the coefficients of the transducers' dynamics model, using exclusively the measured signal at the transducers' operating locations, are proposed. Analytical justification for the correctness of the proposed methods is presented both for the special case of measuring periodic signals and for the general case when the measured signals are nonperiodic. The self-identification and correction procedures are performed as algorithms processing the data collected from the transducers.",2004,0, 6543,Improving fault handling software techniques,"The paper presents a new software library supporting the development of fault-robust applications. The main goals of the proposed software hardening mechanisms are: usage simplicity for the programmer, independence from the development tool, effectiveness in terms of fault coverage, and low static and dynamic overheads. The paper describes the implemented software mechanisms and discusses their effectiveness verified with fault injection experiments.",2010,0, 6544,"Hardware support for high performance, intrusion- and fault-tolerant systems","The paper proposes a combined hardware/software approach for realizing high performance, intrusion- and fault-tolerant services. The approach is demonstrated for (yet not limited to) an attribute authority server, which provides a compelling application due to its stringent performance and security requirements. The key element of the proposed architecture is an FPGA-based, parallel crypto-engine providing (1) optimally dimensioned RSA Processors for efficient execution of computationally intensive RSA signatures and (2) a KeyStore facility used as tamper-resistant storage for preserving secret keys. To achieve linear speed-up (with the number of RSA Processors) and deadlock-free execution in spite of resource-sharing and scheduling/synchronization issues, we have resorted to a number of performance enhancing techniques (e.g., use of different clock domains, optimal balance between internal and external parallelism) and have formally modeled and mechanically proved our crypto-engine with the Spin model checker. At the software level, the architecture combines active replication and threshold cryptography, but in contrast with previous work, the code of our replicas is multithreaded so it can efficiently use an attached parallel crypto-engine to compute an attribute authority partial signature (as required by threshold cryptography). The resulting replicated systems exhibit nondeterministic behavior, which cannot be handled with conventional replication approaches. Our architecture is based on a preemptive deterministic scheduling algorithm to govern scheduling of replica threads and guarantee strong replica consistency.",2004,0, 6545,Risk Assessment of Human Error in Information Security,"The paper proposes a human error risk assessment model based on probabilistic risk assessment and analyses of human cognitive reliability. Some relevant problems about human reliability in human error risk assessment are also analyzed. Besides, the paper sums up the framework and the analysis approach to risk assessment, and sets up a human error model based on probabilistic risk analysis and human cognitive reliability.",2006,0, 6546,A Java API for advanced faults management,"The paper proposes an alternative for modeling managed resources using Java and telecommunication network management standards.
It emphasizes functions related to fault management, namely: diagnostic testing and performance monitoring. Based on Java management extension (JMX™), specific extensions are proposed to facilitate the implementation of diagnostic testing and performance measurements. The new API, also called Java fault management extension (JFMX), consists of managed objects that model real resources being tested or monitored and support objects defined for the needs of diagnostic testing and performance measurements. The paper discusses four Java implementations of a 3-tier client/server scenario focusing on the SystemUnderTest package of the new API to instrument a minimalist managed system scenario. These implementations are respectively built on top of the following Java-based communication infrastructures: JMX/JFMX, RMI, CORBA/Java, and Voyager™. The paper extends the Voyager implementation with JMX/JFMX and uses their dynamic and advanced features to provide a highly efficient solution. The latter implementation also uses the mobile agent paradigm to overcome well-known limitations of the RPC-based implementations.",2001,0, 6547,Study on fault tolerant switched reluctance machines,"The paper's goal is to compare a usual switched reluctance machine and a fault tolerant variant of it. By coupled Flux 2D and Simulink transient simulations, the behaviour of the fault tolerant drive system under different winding fault conditions was studied. It was proved that, using the proposed machine structure and converter topology, the same torque capability of the machine in faulty states as in healthy conditions can be assured. A short discussion on fault-tolerant converters is included in the paper, too.",2008,0, 6548,High performance error correcting code of the high-dimensional discrete torus knot,"The new high-dimensional torus knot code with respect to its geometrical structure has been studied. The special features of the code are presented. (1) The code block is wound up into a small, compact code ball, so the code passes hardly damaged through the channel of a dense shower of error-making disturbances. (2) The torus knot winding works as block-size interleaving, which distributes the received burst errors randomly in the parity check cycles, so the code exhibits excellent burst error correction capability. (3) Majority logic decoding of each code digit based on the erroneous parity lines can be made up of a high-speed logic circuit thanks to the cyclical properties of the code parity check function. The four-dimensional, size-five 4Dm5-code was burned onto a 50-kilogate, 0.6-micron-order VLSI chip. The code block length and the transmission rate are 625 bits and 0.41, respectively. It was operated at a clock speed of 50 MHz, with a throughput of 6.25 Gbps. Through 100000 block trials, it was proven that the chip can perfectly correct a mean BER of 0.021 for burst and random mixed error situations.",2001,0, 6549,A New System for Frequency Monitoring and Fault Analysis,"The new system studied in this paper is made of a late-model single-chip microcomputer and a personal computer. It not only can monitor the network frequency in real time but also can calculate frequency variation and then distinguish and record the frequency fault course automatically. The whole recording time can come to 30 minutes. It is also equipped with data analysis software for the WINDOWS environment. The new system can be used in power plants, substations and dispatching stations at all levels.
It will play an important role in monitoring and recording the network frequency. The recorded messages will be used in analyzing the system frequency characteristic and the performance of under-frequency load shedding devices. It has been used in power systems and the effect is very good.",2006,0, 6550,NFTAPE: networked fault tolerance and performance evaluator,"NFTAPE is a software-implemented, highly flexible fault injection environment for conducting automated fault/error injection-based dependability characterization. NFTAPE: (1) enables a user: (i) to specify a fault/error injection plan, (ii) to carry out injection experiments, and (iii) to collect the experimental results for analysis; (2) targets assessment of a broad set of dependability metrics, e.g., availability, reliability, coverage; (3) operates in a distributed environment; (4) can be configured to implement a variety of fault/error injection strategies and thus to serve multiple users and target systems; (5) imposes minimal disturbance of target systems.",2002,0, 6551,Analysis and correction of the nonuniformity of light field in the high resolution X-ray digital radiography,"The nonuniformity of the light field in X-ray digital radiography causes the sensitivity and resolution of digital X-ray detection to decline. The causes of the nonuniformity of the X-ray image light field are the nonuniformity of the X-ray source intensity, the nonuniformity of the double proximity focusing X-ray image intensifier response, and the nonuniformity of the lens vignetting, CCD dark current and light response. A correction method is proposed based on this analysis; this method obtains a correction matrix by experiment and brings proper correction parameters into the correction matrix to correct the digital image light field. A correction experiment was done on a step-like aluminum block under the microfocus X-ray source; the outcome indicated that the nonuniformity of the radiograph was apparently improved and the inspection sensitivity and resolution of the image were improved.",2010,0, 6552,The nullspace method - a unifying paradigm to fault detection,"The nullspace method is a powerful framework to solve the synthesis problem of fault detection filters in the most general setting. It is also well suited to address the least order synthesis problem. At the same time, the nullspace method represents a unifying paradigm for several methods, because popular approaches like parity space or observer-based methods can be interpreted as special classes of the nullspace method. The main differences among different methods lie in the numerical properties of the underlying computational algorithms.",2009,0, 6553,Practical Deadlock-Free Fault-Tolerant Routing in Meshes Based on the Planar Network Fault Model,"The number of virtual channels required for deadlock-free routing is important for cost-effective and high-performance system design. The planar adaptive routing scheme is an effective deadlock avoidance technique using only three virtual channels for each physical channel in 3D or higher dimensional mesh networks with a very simple deadlock avoidance scheme. However, there exists one idle virtual channel for all physical channels along the first dimension and two idle virtual channels for channels along the last dimension in a mesh network based on the planar adaptive routing algorithm. A new deadlock avoidance technique is proposed for 3D meshes using only two virtual channels by making full use of the idle channels.
The deadlock-free adaptive routing scheme is then modified to a deadlock-free adaptive fault-tolerant routing scheme based on a planar network (PN) fault model. The proposed deadlock-free adaptive routing scheme is also extended to n-dimensional meshes still using two virtual channels. Sufficient simulation results are presented to demonstrate the effectiveness of the proposed algorithm.",2009,0, 6554,IFRA: Instruction Footprint Recording and Analysis for post-silicon bug localization in processors,"The objective of IFRA, instruction footprint recording and analysis, is to overcome the challenges associated with a very expensive step in post-silicon validation of processors - bug localization in a system setup. IFRA consists of special design and analysis techniques required to bridge a major gap between system-level and circuit-level debug. Special hardware recorders, called footprint recording structures (FRS's), record semantic information about data and control flows of instructions passing through various design blocks of a processor. This information is recorded concurrently during normal operation of a processor in a post-silicon system validation setup. Upon detection of a problem, the recorded information is scanned out and analyzed for bug localization. Special program analysis techniques, together with the binary of the application executed during post-silicon validation, are used for the analysis. IFRA does not require full system-level reproduction of bugs or system-level simulation. Simulation results on a complex super-scalar processor demonstrate that IFRA is effective in accurately localizing bugs with very little impact on overall chip area.",2008,0, 6555,The online prediction of the faults for integrated maintenance and reliability,"The objective of this paper is to realize analytic studies and applications in engineering, especially in the maintenance, reliability and security of big technical installations, particularly in the nuclear domain. Software applications were developed to permit the online automatic computation of the reliability, maintenance, availability and predictive maintenance parameters. This paper presents a task of an integrated application that accomplishes structural and dynamic analysis computation, geometric simulation, and software for monitoring on-line predictive maintenance for the installation meant for tritium elimination, all based on scientific methods.",2008,0, 6556,An enhanced low-power high-speed Adder For Error-Tolerant application,"The occurrence of errors is inevitable in modern VLSI technology, and to overcome all possible errors is an expensive task. It not only consumes a lot of power but degrades the speed performance. By adopting an emerging concept in VLSI design and test, error-tolerance (ET), we managed to develop a novel error-tolerant adder which we named the Type II (ETAII). The circuit to some extent is able to ease the strict restriction on accuracy to achieve tremendous improvements in both the power consumption and speed performance. When compared to its conventional counterparts, the proposed ETAII is able to achieve more than 60% improvement in the power-delay product (PDP). The proposed ETAII is an enhancement of our earlier design, the ETAI, which has problems adding small-number inputs.",2009,0, 6557,Radiation-induced soft errors in advanced semiconductor technologies,"The once-ephemeral radiation-induced soft error has become a key threat to advanced commercial electronic components and systems.
Left unchallenged, soft errors have the potential for inducing a failure rate higher than that of all the other reliability mechanisms combined. This article briefly reviews the types of failure modes for soft errors, the three dominant radiation mechanisms responsible for creating soft errors in terrestrial applications, and how these soft errors are generated by the collection of radiation-induced charge. The soft error sensitivity as a function of technology scaling for various memory and logic components is then presented with a consideration of which applications are most likely to require soft error mitigation.",2005,0, 6558,A novel approximation method for error rate curves in radio communication systems,"The method of choice for investigating radio communication systems on a system or network level is computer simulation. The behaviour of the physical layer is often modelled by the error rate (ER) behaviour, e.g. the bit error rate (BER) or packet error rate (PER), obtained from previous detailed investigations of the physical layer. A number of methods are known to approximate the ER curves, all of which have weaknesses. This paper proposes a novel method which is based on properties of the resulting ER curves rather than those of the physical layer. The approximation equations contain parameters which can be determined by fitting methods. The usefulness and performance of the novel approximation method is shown with the help of selected examples.",2002,0, 6559,Robust detection of incipient faults: an active approach,"The methodology of auxiliary signal design for robust failure detection based on multi-model formulation of normal and failed systems is used to study the problem of incipient fault detection. Here, the fault is modeled as a drift in a system parameter, and an auxiliary signal is to be designed to enhance the detection of variations in this parameter. It is shown that it is possible to consider the model of the system with a drifted parameter as a second model and use the multi-model framework for designing the auxiliary signal by considering the limiting case as the parameter variation goes to zero. The result can be applied very effectively to early detection problems where small parameter variations should be detected",2006,0, 6560,The study of single line to ground fault line selection in non-direct ground power system based on DSP device,"Medium- and low-voltage distribution networks mostly adopt non-effectively grounded neutral points (the so-called small-current grounding system). Fast and accurate location of distribution line faults, particularly single-phase grounding faults, is very important not only for repairing the line and ensuring a dependable power supply, but also for the safe, stable and economical operation of the whole power system. With reference to the actual conditions of single-phase earthing faults, the paper puts forward a principle for single-phase earthing faulty line selection and, with the support of a digital signal processing (DSP) device capable of floating-point computation, designs a new type of low-current single-phase earthing faulty line selection device for power distribution systems.",2010,0, 6561,Probabilistic fault diagnosis for IT services in noisy and dynamic environments,"Modern society has come to rely heavily on IT services.
To improve the quality of IT services it is important to quickly and accurately detect and diagnose their faults, which are usually detected as the disruption of a set of dependent logical services affected by the failed IT resources. The task, depending on observed symptoms and knowledge about IT services, is always disturbed by noise and dynamic changes in the managed environments. We present a tool for the analysis of IT service faults which, given a set of failed end-to-end services, discovers the underlying resources in a faulty state. We demonstrate empirically that it applies in noisy and dynamically changing environments with bounded errors and high efficiency. We compare our algorithm with two prior approaches, Shrink and Max coverage, in two well-known types of network topologies. Experimental results show that our algorithm improves the overall performance.",2009,0, 6562,Generic faultloads based on software faults for dependability benchmarking,"The most critical component of a dependability benchmark is the faultload, as it should represent a repeatable, portable, representative, and generally accepted set of faults. These properties are essential to achieve the desired standardization level required by a dependability benchmark but, unfortunately, are very hard to achieve. This is particularly true for software faults, which surely accounts for the fact that this important class of faults has never been used in known dependability benchmark proposals. This paper proposes a new methodology for the definition of faultloads based on software faults for dependability benchmarking. Faultload properties such as repeatability, portability and scalability are also analyzed and validated through experimentation using a case study of dependability benchmarking of Web-servers. We concluded that software fault-based faultloads generated using our methodology are appropriate and useful for dependability benchmarking. As our methodology is not tied to any specific software vendor or platform, it can be used to generate faultloads for the evaluation of any software product such as OLTP systems.",2004,0, 6563,Two-dimensional channel rate allocation for SVC over error-prone channel,"The motion compensated temporal filtering (MCTF) based scalable video coding (SVC) provides full scalability, including spatial, temporal and signal-to-noise ratio (SNR) scalability with fine granularity, each of which may result in a different visual effect. This paper addresses a novel approach of two-dimensional unequal error protection (2D UEP) for scalable video with combined temporal and quality (SNR) scalability over a packet-erasure channel. The bit-stream is divided into scalable sub-bit-streams based on the structure of MCTF. Each sub-bit-stream is further divided into several quality layers. Unequal quantities of bits are allocated to protect different layers to obtain acceptable quality video with smooth degradation under different transmission error conditions. Experimental results are presented to show the advantage of the proposed 2D UEP scheme over the traditional one-dimensional unequal error protection (1D UEP) scheme",2006,0, 6564,MPEG-2 error concealment based on block-matching principles,"The MPEG-2 compression algorithm is very sensitive to channel disturbances due to the use of variable-length coding.
A single bit error during transmission leads to noticeable degradation of the decoded sequence quality, in that part of a slice or an entire slice of information is lost until the next resynchronization point is reached. Error concealment (EC) methods, implemented at the decoder side, present one way of dealing with this problem. An error-concealment scheme that is based on block-matching principles and spatio-temporal video redundancy is presented in this paper. Spatial information (for the first frame of the sequence or the next scene) or temporal information (for the other frames) is used to reconstruct the corrupted regions. The concealment strategy is embedded in the MPEG-2 decoder model in such a way that error concealment is applied after entire frame decoding. Its performance proves to be satisfactory for packet error rates (PER) ranging from 1% to 10% and for video sequences with different content and motion, and surpasses that of other EC methods under study",2000,0, 6565,Robust Estimation and Fault Diagnostics for Aircraft Engines with Uncertain Model Data,"The paper first presents a reasonably thorough tutorial-type review of the literature in the area of fault diagnostics of dynamical systems. Then it summarizes the specific research being carried out by the author's group in aircraft engine parameter estimation and fault diagnostics. In turbine engines, performance parameters such as thrust, turbine inlet temperature and stall margins cannot be measured directly, and thus there is a need to estimate them from the measured outputs. The engine dynamics are highly nonlinear and the traditional linear models used in Kalman filter based techniques are subject to large uncertainties. In this research, we develop an adaptive estimator that augments the linear Kalman filter with a neural network to compensate for any nonlinearity that is not handled by the linear filter. The neural network is a radial basis function network that is trained off line using a growing and pruning algorithm. Next, a contribution in fault diagnostics in aircraft engines is presented. Specifically, we present a fault detection algorithm which uses a dynamic/adaptive threshold. The algorithm takes the parameter uncertainties into consideration and proposes a dynamic/adaptive threshold that makes use of the bounds on the parameter uncertainties and can thus distinguish an actual fault from the model uncertainties. In the absence of faults, a predetermined constant threshold would lead to more false alarms and missed detections under modeling uncertainties. However, a dynamic/adaptive threshold can accommodate uncertainties in the model and help in reducing false alarms and missed detections. The proposed methodologies are demonstrated by applying them to the simulation model of an aircraft engine available in the literature. The simulation results clearly show the improved effectiveness of the proposed approaches of this research.",2007,0, 6566,Fault tolerance on interleaved inverter with magnetic couplers,"The paper focuses on a new control strategy for improving the availability of power electronic converters based on interleaved structures. By using this strategy, the power electronic converters can continue to work (with reduced output power) in case of power component failure. The paper describes how to adapt the magnetic output filtering structure for this original control strategy. This structure is based on a monolithic coupler or a coupling transformer.
These couplers are usually employed to significantly reduce the mass of the converters. They are normally sized to work with a fixed number of phases. Our control strategy induces new constraints on the magnetic components, especially saturation problems. To reduce these problems, some extra switches are added. Finally, an experimental power electronic converter driven by an FPGA is presented along with experimental results for a six-phase converter working with five or four phases, simulating one or two converter leg breakdowns.",2010,0, 6567,Fault detection methods for frequency converters fed induction machines,"The paper focuses on the experimental investigation of stator fault detection and fault detection methods for electrical drive systems using voltage source inverter (VSI) fed cage rotor induction machines (CRIM). Two experimental investigations (one stator phase unbalance and one stator phase open) have been performed to study the behaviour of the electrical machine. A description of the measurement system, including acquisition and processing of the data, is presented, and stator current signature, the current Park's vector and instantaneous power are considered as diagnostic techniques.",2007,0, 6568,User-behavior based software fault detection for device,"The paper focuses on the pre-detection of device faults to ensure the reliability of the devices. A user-behavior based software fault detection method is proposed, which imports user-behavior analysis to select the high-priority service set for pre-detection of device software faults. The method is user-oriented and can improve the efficiency of detection. The architecture and flow of the user-behavior based software fault detection are introduced. In addition, the user-behavior analysis model and the user-behavior set-selection model, which reflect the dependence degree of the user on each service, are given. Finally, the method is validated by simulation and proved to be valid by comparison with other pre-detection methods.",2010,0, 6569,Research of fault diagnosis system for pickling and cold-rolling electric drives based on BP neural network,"The paper develops a system for the online monitoring and fault diagnosis of pickling and cold-rolling electric drives, making use of configuration technologies, Matlab simulation tools and a BP neural network. Based on the theory of the BP neural network, a fault diagnosis model is designed for the electro-hydraulic servo valve, the key actuator of pickling and cold-rolling electric drives. The model is directed at analyzing the fault possibility and fault type of the electro-hydraulic servo valve. Tests on the well-trained BP-based fault diagnosis system show that it is able to diagnose any type of electro-hydraulic servo valve fault accurately. The experimental results give evidence that this method is very efficient for the fault diagnosis of pickling and cold-rolling electric drives.",2009,0, 6570,A high-level dynamic-error model of a pipelined analog-to-digital converter,"The paper presents a fast and accurate high-level model of a pipelined analog-to-digital converter implemented in MATLAB. Mechanisms causing dynamic errors, such as the settling time of a slew-rate-limited amplifier, are analyzed, and parameters to model them are identified.
All parameters are associated with actual physical properties, and all simulations are validated by comparison to measured data in both the time and frequency domains.",2005,0, 6571,Pre-Processing Correction for Micro Nucleus Image Detection Affected by Contemporaneous Alterations,"The paper presents a method to detect and correct the alterations of (i) exposure, (ii) defocus, and (iii) Gaussian noise contemporaneously affecting the images acquired in flow cytometer measurement devices. These alterations reduce the image quality and interfere with correct micro nucleus detection in lymphocytes. The objective of the proposed correction is (i) to make the image able to be correctly processed by the pattern matching algorithm in order to detect the micro nucleus in human lymphocytes, (ii) to minimize doubtful detections, and (iii) to enhance the confidence that no micro nucleuses are included in the rejected images. Numerical and experimental tests confirm the validity of the proposed correction method and permit evaluation of the upper and lower bounds of the admissible variation range of each alteration",2006,0, 6572,An Approach Based on Neural Networks for Identification of Fault Sections in Radial Distribution Systems,"The main objective of this paper is to present the results obtained from the application of artificial neural networks and statistical tools to the automatic identification and classification of faults in electric power distribution systems. The techniques developed to treat the proposed problem use, in an integrated way, several approaches that can contribute to a successful fault detection process, aiming for it to be carried out in a reliable and safe way. The compiled results from practical experiments performed on a pilot radial distribution feeder demonstrate that the developed techniques provide accurate results, efficiently identifying and classifying the several fault occurrences observed in the feeder.",2006,0, 6573,Real-time position error detecting in nanomanipulation using Kalman filter,"The main roadblock to atomic force microscope (AFM) based nanomanipulation is the lack of real-time visual feedback. Although model-based visual feedback can partly solve this problem, due to the complexity of the nano environment it is difficult to accurately describe the behavior of nano-objects with a model. The modeling error will lead to inaccurate feedback and a failed manipulation. In this paper, a Kalman filter is developed to detect this modeling error in real time. During manipulation, the residual between the estimated behavior and the visually displayed behavior is updated in real time. The residual's Mahalanobis distance is calculated and compared with a threshold to determine whether there is a position error. Once the threshold is exceeded, an alarm signal is triggered to tell the system there is a position error. Furthermore, the position error can be corrected online by a local scan method. With the assistance of the Kalman filter and local scan, the position error not only can be detected in real time, but also can be corrected online. The visual display keeps matching the real manipulation result during the whole manipulation process, which significantly improves the efficiency of AFM based nano-assembly.
Experiments on manipulating nano-particles are presented to verify the effectiveness of the Kalman filter and the local scan method.",2007,0, 6574,A new algorithm for atmospheric correction of the multiangular and hyperspectral data acquired during the DAISEX campaign,"The main scientific objective of DAISEX (Digital Airborne Spectrometer Experiment) was to demonstrate the retrieval of geo/biophysical variables from imaging spectrometer data. Target variables included surface temperature, Leaf Area Index (LAI), canopy biomass, leaf water content, canopy height, canopy structure and soil properties. The imaging spectrometers used for DAISEX were the DAIS-7915, HyMap and POLDER. The campaign took place during the summers of 1998, 1999 and 2000 in Barrax (Spain) and Colmar (France). A new algorithm is under development for the atmospheric correction of the hyperspectral and multiangular data acquired during this campaign. This algorithm is intended to improve the current atmospheric correction by taking into account the coupling between atmosphere and surface (including a non-Lambertian treatment of the latter). Moreover, the hyperspectral data allow the absorption to be characterised in detail, and the multiangular characteristics of the data allow the aerosol scattering to be described accurately. The method consists in identifying some pixels on an image with a priori information about their BRDF and assuming that the atmosphere is the same over the whole image. Applying a radiative transfer code, we can reproduce the reflectance measured by the sensor by modifying the parameters describing the surface and the aerosols through an iterative process. Once the atmosphere is known, the atmosphere-surface system is uncoupled and the reflectance for the whole image can be obtained.",2003,0, 6575,Machine learning techniques for diagnosing and locating faults through the automated monitoring of power electronic components in shipboard power systems,"The management and control of shipboard medium voltage AC (MVAC) and medium voltage DC (MVDC) power system architectures under fault conditions present a number of challenges. The use and resulting interaction of multiple power electronic components in mesh-like power distribution architectures can result in the effects of faults being detectable throughout the system, for example, line-to-hull faults on DC systems with highly resistive grounding.",2009,0, 6576,Reconfigurable control system design for fault diagnosis and accommodation,"The online fault tolerant control problem for dynamic systems under unanticipated failures is investigated from a realistic point of view, without any specific assumption on the type of system dynamical structure or failure scenarios. The necessary and sufficient conditions for system online stability under catastrophic failures have been derived using the discrete-time Lyapunov stability theory. Based upon existing control theory and modern intelligent techniques, an online fault accommodation control strategy is proposed to deal with the desired trajectory-tracking problems for systems suffering from various unknown and unanticipated catastrophic component failures. Through the online estimator, effective control signals to accommodate the dynamic failures can be computed using only the partially available information of the faults.
To investigate the feasibility of using the developed technique for unanticipated fault accommodation in real hardware in a real-time environment, an online fault tolerant control test bed has been constructed to validate the proposed technology. Both the online simulations and the real-time experiment show encouraging results and a promising future for online real-time fault tolerant control based solely upon insufficient information about the system dynamics and the failure modes",2001,0, 6577,Design of coal-mechanical online fault diagnosis based embedded system,"The operating status of coal machinery is directly related to mining production and safety. The embedded system and the principle of fuzzy fault tree diagnosis were introduced into coal machinery diagnosis. At the same time, the embedded system is used to accomplish a variety of protocol conversions between the field bus and the Internet, solving the problem of linking the equipment to the Internet. In addition, we use the ACLinux operating system as the software platform to configure and migrate the Boa Web server. Experiments show this method is effective for the remote monitoring and fault diagnosis of coal machinery.",2010,0, 6578,Fault-Tolerant Policy for Optical Network Based Distributed Computing System,"The optical network based distributed computing system has been regarded as a promising technology to support large-scale data-intensive distributed applications. For such a system, with so many heterogeneous resources and middlewares involved, faults seem to be inevitable. However, for those applications that need to be finished before a given deadline, a fault in the system will lead to the failure of the application. Therefore, a fault-tolerant policy is necessary to improve the performance of the system when faults could happen. In this paper, we address the fault-tolerant problem for the optical network based distributed computing system. We first propose an overlay approach which applies the existing fault-tolerant policies for distributed computing and optical networks. Then we present a joint fault-tolerant policy which takes into account the fault tolerance of the computing resources and the network resources at the same time. We compare the performance of the different policies by simulation. The simulation results show that the joint fault-tolerant policy achieves much better performance compared to the overlay approaches.",2008,0, 6579,Correction of smart antennas receiving channels characteristics for 4G mobile communications,The paper considers a way of correcting the receiving channel characteristics of smart antennas for 4G mobile communications.,2003,0, 6580,"Joint faults detection in LV switchboard and its global diagnosis, through a Temperature Monitoring System","The paper deals with an entire system for the monitoring and diagnosis of LV switchboards based on the measurements of currents, ambient temperatures and local temperatures of electrical joints. This system meets the need to prevent the breakdowns of LV switchboards, which, although rare, can involve huge financial and human losses. The thermal measurements are done by wireless thermal sensors. The measured data are transmitted via the Internet and collected in a server, to be centrally processed. This centralized data processing includes a local detection of failures and a global diagnosis which leads to some maintenance recommendations.
This paper focuses on the local detection by comparison with a healthy model, and on the global diagnosis using a Bayesian network technique. The feasibility of these methods is tested with experimental data and experts' information.",2007,0, 6581,Experiences with Software Implemented Fault Injection,The paper deals with the problem of evaluating system dependability using software-implemented fault injectors (SWIFIs). In particular we describe methods of improving functionality and performance in SWIFI injectors. We discuss problems related to experiment scheduling and simulation result interpretation. The presented considerations are based on our long experience with fault injection tools. They are illustrated with some practical examples.,2007,0, 6582,Fault diagnosis based on timed automata: Diagnoser verification,"The paper deals with the supervisory control problem based on the vector synchronous product (VSP) of automata. A necessary and sufficient condition for the existence of such a controller is given, which is based on the notion of vs-controllability. Furthermore, a more general framework called the vector synchronous product with communication is proposed. In addition, isomorphism and homomorphism of two VSPs are defined. Some simplified traffic examples are used to illustrate the notions and the result",2006,0, 6583,SVM-based approach for instrument fault accomodation in automotive systems,"The paper deals with the use of support vector machines (SVMs) in software-based instrument fault accommodation schemes. A performance comparison between SVMs and artificial neural networks (ANNs) is also reported. As an example, a real case study on an automotive system is presented. The ANN and SVM regression capabilities are employed to accommodate faults that could occur on the main sensors involved in engine operation. The obtained results prove the good behaviour of both tools. Similar performances have been achieved in terms of accuracy.",2005,0, 6584,EPL proximity and Coulomb effect correction by mask bias method,"The mask bias method has proved to be a suitable method for EPL proximity effect correction. However, the linewidth reduction ratio due to the backscattering energy changes if the beam blur of the pattern changes. When the beam blur due to the Coulomb interaction effect in the sub-field is not uniform, the value of the mask bias should be modified. In this paper, we discuss the proximity effect correction method, considering the Coulomb interaction distribution in the sub-field.",2001,0, 6585,Analysis of the ABS Wheel Speed Signal Error and Method of Equal Period Sampling,"The measurement error of the wheel speed signal of an anti-lock braking system (ABS) is analyzed in this paper. It is concluded that trigger error is the main factor limiting wheel speed measurement, and the correlation between trigger error and the wheel speed signal is surveyed by experiments. It is necessary in ABS to transform the wheel speed signal sampled in equiangular form into equal-period form. In low-speed measurement situations lacking wheel speed signal, a prediction algorithm based on a second-order polynomial fit is utilized to estimate wheel speed. Simulation results verified the algorithm.",2006,0, 6586,Defect detection for multithreaded programs with semaphore-based synchronization,"The solution to the problem of automatic defect detection in multithreaded programs is covered in this paper. General approaches to defect detection are considered.
Static analysis is chosen because of its full automation and soundness properties. An overview of papers about the use of static analysis for defect detection in parallel programs is presented. An approach for extending static analysis algorithms to multithreaded programs is suggested. This approach is based on a thread analysis algorithm, which provides analysis of thread creation and thread-executed functions. This algorithm uses the results of the static analysis algorithms, in particular to identify semaphore objects. The thread analysis algorithm and the static analysis algorithms run jointly. The thread analysis algorithm interprets thread control function calls (create, join, etc.) and synchronization function calls (wait, post, etc.). The algorithm determines program blocks which may execute in parallel and interacting pairs of synchronization function calls. This information is taken into consideration to analyze thread cooperation and detect synchronization errors. To analyze thread cooperation, this algorithm uses a join of shared object values in φ-functions. Basic rules of the thread analysis algorithm are considered in the paper. The application of these rules to a multithreaded program example is presented. The suggested approach allows us to detect all single-threaded program defect types and some synchronization errors such as race conditions or deadlocks. This approach gives sound results. It supports the analysis of programs with any number of semaphores and threads. It is possible to analyze dynamically created threads. The approach can be extended to other classes of parallel programs and other types of synchronization objects.",2010,0, 6587,Error reducing techniques for the scattering parameter characterization of differential networks using a two-port network analyzer,"The s-parameter characterization of differential four-port networks using a two-port vector network analyzer (VNA) involves the application of an electrical stimulus to one of the four ports of the network and measuring the reflection of the signal at that port or the transmission through to any of the other three ports, under the condition that the two idle ports are terminated with a chosen load. The single-ended s-parameters obtained in this manner can be converted to the desired differential s-parameters using well-established numerical combinations. However, a consequence of this technique of four-port network characterization is that the return losses of each port (S11, S22, S33 and S44) are measured thrice. Ideally, all three return loss measurements for each port must be identical. Non-ideally, however, the three return losses could be inconsistent. This work provides two methods to reduce these inconsistencies. The first involves the averaging of the return losses at each port, as well as further averaging depending on the symmetry of the device under test (DUT). The second method describes theoretically the removal of the reflections, due to the necessary loads used during the measurements, from the return loss parameters. The DUT used in this work is a pair of symmetrical coupled microstrip lines of characteristic impedance 55 Ω each at 1 GHz",2005,0, 6588,Research on the Gear Fault Diagnosis Using Order Envelope Spectrum,"The speed-up and speed-down of a gearbox are non-stationary processes and the vibration signal cannot be processed by traditional processing methods.
In order to process non-stationary vibration signals such as speed-up or speed-down signals effectively, the order envelope analysis technique is presented. This new method combines the order tracking technique with envelope spectrum analysis. Firstly, the vibration signal is sampled at constant time increments; software is then used to resample the data at constant angle increments. Therefore, the time-domain non-stationary signal is changed into an angle-domain stationary signal. In the end, the resampled signals are processed by envelope spectrum analysis. The experimental results show that order envelope spectrum analysis can effectively diagnose gear crack faults.",2009,0, 6589,"Faults identification, location and characterization in electrical systems using an analytical model-based approach","The start of the electrical energy market has encouraged distributors to make new investments at the distribution level so as to attain higher quality levels. Service continuity is one of the most important aspects in the definition of the quality of electrical energy. For this reason, research in the field of fault diagnostics for distribution systems is spreading ever more. This paper presents a novel methodology to identify, locate and characterize faulty events in electrical distribution systems. The methodology can be extended to all types of faulty events and is applicable to reconfigurable systems. After having described the guidelines of the methodology, the authors describe the architecture of the diagnostic control system implementing the analytic procedure. Finally, the results of some relevant applications are reported.",2005,0, 6590,ERCOT's experience in identifying parameter and topology errors using State Estimator,"The State Estimator is an important tool that ERCOT relies on to monitor the real-time state of the power grid. As parameter and topology errors are critical to the quality of state estimator results, operations engineers in ERCOT are using multiple tools to detect and identify the topology and parameter errors in the ERCOT EMS network model. This paper will present ERCOT's experience in detecting and identifying topology and parameter errors using state estimator monitoring tools and by analyzing SE results.",2010,0, 6591,Investigation of dependence distribution of statistics used for testing hypothesis about means on error's heterogeneity: Case of balanced design,The statistic distributions of criteria used for testing hypotheses about means have been investigated by statistical simulation methods. The case of failure of the normality and homogeneity assumptions has been considered. The errors of evaluating achieved significance levels when using the 'classical' Fisher distribution under these conditions have been given.,2008,0, 6592,Structural method of fault location in a LAN segment,"The structural method of fault location in a LAN segment has been proposed. It combines a method of many-valued fault table analysis using vectors of elementary probes with a method of structural fault location by reachability matrix, where lines of the matrix are used instead of fault table lines. Such an approach allows reduction of the area of suspected faults and fault location time.
Experimental results are valid and correspond to the real behavior of a LAN under defined conditions.",2003,0, 6593,The Nonlinearity Error Analysis of 2D-PSD and Its Application in Medical Engineering,"The structure, characteristics and principle of the position sensitive detector (PSD) are expounded. By thoroughly analyzing the non-linearity of the 2D-PSD, a new method using bridge theory to analyze the non-linearity is introduced. This method is much simpler in comparison with interpolation algorithms, neural network optimization methods and analytic methods. The improved 2D-PSD signal debugging circuit board is designed and debugged by using a front-end amplifier with low temperature drift, high precision and high input impedance, together with a summator, subtracter, divisor and other components. By using the improved 2D-PSD and a continuously emitting laser as the optical source, a high-precision 3D non-contact optical measuring system is designed. This system can be used to measure the parameters of 3D objects with anomalous forms, such as abnormalities of human skin. Furthermore, combined with computer technology, tomography and chromatography, this high-precision photoelectric device can be developed into various medical equipment, which has extremely wide application in the diagnosis of difficult pathological changes, especially in the early diagnosis of parameters of human skin abnormalities.",2007,0, 6594,A new electromagnetic transient simulation method for faults in complex power system,"The study of the digital simulation of electromagnetic transients has been an everlasting issue in power systems. Especially after its improvement by Dommel, the time-domain Bergeron model has been successfully applied in EMTP. But the pre-processing before simulating transients caused by various faults and operations is quite troublesome. This paper presents a new electromagnetic transient simulation method which can be generally used to calculate most electric faults in power systems. Unlike EMTP, the presented method does not need to recalculate the initial values of the system when the structure or parameters of the system change. Meanwhile, this method can simulate faults at arbitrary locations in single- and double-circuit lines, including series compensated lines, bus faults, the operation of breakers and open-circuit faults. In addition, the fault start angle can be set in the method. Because the new method can conveniently simulate not only inrush currents, overvoltages and harmonic components at different fault locations, but also developing faults, it is significant for line protection simulation verification and overvoltage computation. Numerous tests show that the method is both accurate and fast.",2002,0, 6595,A sensor fault tolerant drive for interior permanent-magnet synchronous motors,"The study reported in this paper deals with the problem of developing a controller with tolerance to current sensor faults. To achieve this goal, two control strategies are considered. In the first method, field-oriented control and a developed observer are used in the case of no fault. The second approach is concerned with a fault-tolerant strategy based on an observer for faulty conditions. Current sensor failures are detected and the current is estimated successfully in order to allow continuous operation of the vector control. Based on the motor model, currents can be estimated using a nonlinear observer.
A decoupling current vector control strategy is developed to ensure high-performance operation by incorporating maximum power factor per ampere operation. The simulation of the proposed scheme applied to the interior permanent magnet synchronous motor has been implemented in the Matlab environment. Simulation results illustrate the applicability of the proposed approach.",2008,0, 6596,Forward error correction strategies for media streaming over wireless networks,"The success of next-generation mobile communication systems depends on the ability of service providers to engineer new added-value multimedia-rich services, which impose stringent constraints on the underlying delivery/transport architecture. The reliability of real-time services is essential for the viability of any such service offering. The sporadic packet loss typical of wireless channels can be addressed using appropriate techniques such as the widely used packet-level forward error correction. In designing channel-aware media streaming applications, two interrelated and challenging issues should be tackled: accuracy of characterizing channel fluctuations and effectiveness of application-level adaptation. The first challenge requires thorough insight into channel fluctuations and their manifestations at the application level, while the second concerns the way those fluctuations are interpreted and dealt with by adaptive mechanisms such as FEC. In this article we review the major issues that arise when designing a reliable media streaming system for wireless networks.",2008,0, 6597,The influence of inspection error on single screening procedure under onesided specification,"The screening problem, namely, observing the screening variable rather than directly observing the performance variable, has been studied for a long time. However, inspection error, which frequently occurs in a screening process, has not been investigated in previous research. In this paper, we propose a two-stage single screening procedure under a one-sided specification in order to gain insight into this issue. Stage I is the classic screening procedure, whilst in Stage II we develop an effective index of inspection to calculate the cutoff limit taking into account the influence of inspection error. Additionally, we analyse the influence of ignoring the inspection error, and it is demonstrated that once the true selection ratio (β) is given, the effective index of inspection changes inversely with the ratio σm/σx (a reflection of the precision level). The software simulation results show that the inspection error should not be ignored in the screening problem.",2009,0, 6598,Evaluation of fault-tolerant mobile agents in distributed systems,"The secure execution of a mobile agent is a very important design issue in building a mobile agent system, and many fault-tolerant schemes have been proposed so far. Mobile agents are no longer a theoretical issue since different architectures for their realization have been proposed. In the context of e-commerce, execution atomicity is an important property for mobile agents. A mobile agent executes atomically if either all of its operations succeed or none at all. This requires solving an instance of the atomic commitment problem. However, it is important that failures (e.g., of machines or agents) do not lead to blocking of transactional mobile agents, i.e., agents that execute as a transaction.
In this paper, we give a novel specification of non-blocking atomic commitment in the context of mobile agent execution. Fault tolerance for mobile agent systems is an unsolved topic, to which more importance should be attached. Besides the security problems caused by intentional attacks, it is very important to realize that an agent can simply get lost through errors of the network or the hosts. We then show how transactional mobile agent execution can be built on top of earlier work on fault-tolerant mobile agent execution and give preliminary performance results",2005,0, 6599,Scan-based transition fault testing - implementation and low cost test challenges,"The semiconductor industry as a whole is growing increasingly concerned about the possible presence of delay-inducing defects. There exist structured test generation and application techniques which can detect them, but there are many practical issues associated with their use. These problems are particularly acute when using low-cost test equipment. In this paper, we describe an overall approach for implementing scan-based delay testing with emphasis on low-cost test.",2002,0, 6600,Exact computation of maximally dominating faults and its application to n-detection tests for full-scan circuits,"The size of an n-detection test set increases approximately linearly with n. This increase in size may be too fast when an upper bound on test set size must be satisfied. A test generation method is proposed for obtaining a more gradual increase in the sizes of n-detection test sets, while still ensuring that every additional test would be useful in improving the test set quality. The method is based on the use of fault-dominance relations to identify a small subset of faults (called maximally dominating faults) whose numbers of detections are likely to have a high impact on the defect coverage of the test set. Structural analysis obtains a superset of the maximally dominating fault set. A method is proposed for determining exact sets of maximally dominating faults. New types of n-detection test sets are based on the approximate and exact sets of maximally dominating faults. The test sets are called (n,n2)-detection test sets and (n,n2,n3)-detection test sets. Experimental results demonstrate the usefulness of these test sets in producing high-quality n-detection test sets for the combinational logic of ISCAS-89 benchmark circuits.",2004,0, 6601,Towards an ocean salinity error budget estimation within the SMOS mission,"The SMOS (Soil Moisture and Ocean Salinity) mission will provide, from 2008 onwards, global sea surface salinity estimations over the oceans. This work summarizes several insights gathered in the framework of salinity retrieval studies, aimed at addressing an overall salinity error budget. The paper covers issues ranging from the impact of auxiliary data on SSS error to the potential exploitation of GNSS-R signals as a surface roughness descriptor, and establishes several guidelines to approach a quasi-realistic post-launch retrieval scenario. Having defined a retrieval setup, an error budget scheme has been built, listing the different contributions to the final retrieved SSS error.
On-going activities refer to the resolution of the pending issues of the error budget, which are mostly relevant to residual bias mitigation techniques, and to the characterization of Sun and Faraday rotation effects.",2007,0, 6602,Methodology to support laser-localized soft defects on analog and mixed-mode advanced ICs,"Soft defect localization on analog or mixed-mode ICs is becoming more and more challenging due to their increasing complexity and integration. New techniques based on dynamic laser stimulation are promising for analog and mixed-mode ICs. Unfortunately, the considerable intrinsic sensitivity of this kind of device under laser stimulation makes the defect localization results complex to analyze. As a matter of fact, the laser sensitivity mapping contains not only abnormally sensitive regions but also naturally sensitive ones. In order to overcome this issue by extracting the abnormal spots and thereby localizing the defect, we propose in this paper a methodology that can improve the FA efficiency and accuracy. It consists of combining the mapping results with the electrical simulation of the laser stimulation impact on the device. First, we will present the concept of the methodology. Then, we will show one case study on a mixed-mode IC illustrating soft defect localization by using the laser mapping technique and standard electrical simulations. Furthermore, we will argue the interest of the new methodology and we will show two simple examples from our experiments to validate it.",2009,0, 6603,Memory reliability model for accumulated and clustered soft errors,"The soft error rate of memories is increased by high-energy particles as technology shrinks. Single-error correction (SEC) codes, scrubbing techniques and interleaving schemes are the most common approaches for protecting memories from soft errors. It is essential to employ analytical models to guide the selection of interleaving distance; relying on rough estimates may lead to unreasonable design choices. The analytic model proposed in this paper includes row clustering effects of accumulated upsets and was able to estimate the failure probability with only a 0.41% difference compared to the test data for a 45 nm SRAM design.",2010,0, 6604,Two Mistakes and Error-Free Software: A Confession,"The software development process and the resulting product are so complex that no error-detecting approach will ever be able to produce error-free software. The test coverage analyzer was a wonderful tool for measuring how well-tested a piece of software was. First, the software being tested is instrumented so that the tool captures which of the software's logic segments have been executed. Then a suite of test cases is run against that software to learn which segments have been executed, and how many times. The Test Coverage Analyzer concept, in whatever form it takes today, is still important and useful. And so are all the other error-removal processes we've developed over the years. But it will take a pretty elaborate combination of testing approaches to even let us produce truly reliable software.",2008,0, 6605,Quantifying Software Maintainability Based on a Fault-Detection/Correction Model,"The software fault correction profiles play significant roles in assessing the quality of software testing as well as in keeping good software maintenance activity. In this paper we develop a quantitative method to evaluate software maintainability based on a stochastic model.
The model proposed here is a queueing model with an infinite number of servers, and is related to the software fault-detection/correction profiles. Based on the familiar maximum likelihood estimation, we estimate quantitatively both the software reliability and maintainability with real project data, and discuss their applicability to software maintenance practice.",2007,0, 6606,A novel feature extraction and optimisation method for neural network-based fault classification in TCSC-compensated lines,"The suitability of fault classifiers introduced hitherto to operate correctly under a real TCSC transmission system remains a challenge, since the computations are determined based on a number of postulations. This paper describes an alternative approach to fault classification in TCSC lines using artificial neural networks (ANNs). Special emphasis is placed on illustrating a combined wavelet transform and self-organising map (SOM) methodology to extract, validate and optimise the key characteristics of the fault transient phenomena in a TCSC line such that the input features to the ANNs are near optimal. As a result, it is shown that the proposed fault classification provides the ability to accurately classify the fault type, obviating the need for any predefined assumptions. Extensive simulation studies have been made to verify that the proposed method is both powerful and appropriate for fault classification.",2002,0, 6607,Application of a matched filter approach for finite aperture transducers for the synthetic aperture imaging of defects,"The suitability of the synthetic aperture imaging of defects using a matched filter approach on finite aperture transducers was investigated. The first part of the study involved the use of a finite-difference time-domain (FDTD) algorithm to simulate the phased array ultrasonic wave propagation in an aluminum block and its interaction with side-drilled hole-like defects. B-scans were generated using the FDTD method for three active aperture transducer configurations of the phased array: (a) single element, (b) 16-element linear scan mode, and (c) 16-element steering mode. A matched filter algorithm (MFA) was developed using the delay laws and the spatial impulse response of a finite-size rectangular phased array transducer. The conventional synthetic aperture focusing technique (SAFT) algorithm and the MFA were independently applied to the FDTD signals simulated with the probe operating at a center frequency of 5 MHz, and the processed B-scans were compared. The second part of the study investigated the capability of the MFA approach to improve the SNR. Gaussian white noise was added to the FDTD generated defect signals. The noisy B-scans were then processed using the SAFT and the MFA and the improvements in the SNR were estimated. The third part of the study investigated the application of the MFA to image and size surface-crack-like defects in pipe specimens obtained using a 45° steered beam from a phased array probe. These studies confirm that the MFA is an alternative to SAFT with little additional computational burden. It can also be applied blindly, like SAFT, to effect synthetic focusing with distinct advantages in treating finite transducer effects, and in handling steered beam inspections.
Finally, limitations of the MFA in dealing with larger-sized transducers are discussed.",2010,0, 6608,An analysis of fault detection latency bounds of the SNS scheme incorporated into an Ethernet based middleware system,"The supervisor-based network surveillance (SNS) scheme is a semi-centralized network surveillance scheme for detecting the health status of computing components in a distributed real-time (RT) system. An implementation of the SNS scheme in a middleware architecture, named ROAFTS (real-time object-oriented adaptive fault-tolerance support), has been underway in the authors' lab. ROAFTS is a middleware subsystem which is layered above a COTS (commercial-off-the-shelf) operating system (OS), such as Windows XP or UNIX, and functions as the core of a reliable RT execution engine for fault-tolerant (FT) distributed RT applications. The applications supported by ROAFTS are structured as a network of RT objects, named time-triggered message-triggered objects (TMOs). The structure of the prototype implementation of the SNS scheme is discussed first, then a rigorous analysis of the time bounds for fault detection and recovery is provided.",2002,0, 6609,Comparative study on Switched Reluctance Machine based fault-tolerant electrical drive systems,"The switched reluctance machine (SRM) based electrical drive systems are ideal for critical applications (aerospace, automotive, defense, medical, etc.) where fault tolerance is a basic requirement. The phase independence characteristics of the SRM enable it to operate under partial phase failure conditions even in its classical construction. Its reliability can be improved by applying special fault-tolerant designs, and by monitoring its condition and applying fault detection techniques. The SRMs used in such safe electrical drive systems have to be fed from power converters that also have fault-tolerant capability. In the paper two SRMs are proposed together with their converters. The fault tolerance capacities of the two electrical drive systems are compared by means of simulations. Two advanced simulation platforms were coupled together to simulate the drive system. The results of the comparative study emphasize the usefulness of the proposed fault-tolerant electrical drive systems. The conclusions of the study help users select the best-fitted variant for their specific application.",2009,0, 6610,DSP-Based Automated Error-Reducing Flux-Linkage-Measurement Method for Switched Reluctance Motors,"The switched reluctance motor (SRM) has received considerable attention from researchers for its many inherent advantages, and thus it has become a popular research topic in the field of variable-speed drives as well as servo drives. Research on SRMs mainly includes their design, modeling and performance analysis, control, as well as applications. However, for verification of design, performance prediction, as well as development of a high-performance sensorless control algorithm, accurate measurement of the magnetic characteristics of the SRM is most critical. Hence, one of the most important problems in the field of SRMs is a practical and accurate instrumentation system for the measurement of the SRM magnetic characteristics. This paper first describes an accurate and fully automated digital method for the measurement of the magnetic characteristics of SRMs, which includes online offset-error removal and winding resistance estimation.
In this method, a digital-signal-processor-based virtual instrument for the measurement of flux linkage is developed. Then, the results of the measurement conducted on a four-phase SRM are presented. The accuracy of the measurement system is verified by comparison with that found via a magnetic analyzer. Finally, the various sources of errors and their contributions to the errors are discussed. The scheme can also be used, in general, for transformers or inductors.",2007,0, 6611,Development of the TanDEM-X Calibration Concept: Analysis of Systematic Errors,"The TanDEM-X mission, the result of the partnership between the German Aerospace Center (DLR) and Astrium GmbH, opens a new era in spaceborne radar remote sensing. The first bistatic satellite synthetic aperture radar mission is formed by flying TanDEM-X and TerraSAR-X in a closely controlled helix formation. The primary mission goal is the derivation of a high-precision global digital elevation model (DEM) according to High-Resolution Terrain Information (HRTI) level 3 accuracy. The finite precision of the baseline knowledge and uncompensated radar instrument drifts introduce errors that may compromise the height accuracy requirements. By means of a DEM calibration, which uses absolute height references, and the information provided by adjacent interferogram overlaps, these height errors can be minimized. This paper summarizes the exhaustive studies of the nature of the residual-error sources that have been carried out during the development of the DEM calibration concept. Models for these errors are set up, and simulations of the resulting DEM height error for different scenarios provide the basis for the development of a successful DEM calibration strategy for the TanDEM-X mission.",2010,0, 6612,Multi-level fault injection experiments based on VHDL descriptions: a case study,"The probability of transient faults increases with the evolution of technologies. There is a corresponding increased demand for an early analysis of erroneous behaviors. This paper reports on results obtained with SEU-like fault injections in VHDL descriptions of digital circuits. Several circuit description levels are considered, as well as several fault modeling levels. These results show that an analysis performed at a very early stage in the design process can actually give a helpful insight into the response of a circuit when a fault occurs.",2002,0, 6613,"Bottom-Up Construction of Minimum-Cost AND/OR Trees for Sequential Fault Diagnosis","The problem of generating the sequence of tests required to reach a diagnostic conclusion with minimum average cost, which is also known as a test-sequencing problem, is considered. The traditional test-sequencing problem is generalized here to include asymmetrical tests. In general, the next test to execute depends on the results of previous tests. Hence, the test-sequencing problem can naturally be formulated as an optimal binary AND/OR decision tree construction problem, whose solution is known to be NP-hard. Our approach is based on integrating concepts from one-step look-ahead heuristic algorithms and basic ideas of Huffman coding to construct an AND/OR decision tree bottom-up, as opposed to heuristics proposed in the literature that construct the AND/OR trees top-down.
The performance of the algorithm is demonstrated on numerous test cases, with various properties.",2007,0, 6614,A Mac-error-warning method for SCTP congestion control over high BER wireless network,"The problem of high BER (bit error rate) usually plagues wireless connections, especially for real-time applications such as VoIP (voice over IP) and some military uses. The newly developed transport layer protocol SCTP (stream control transmission protocol) also has to face this problem. Though equipped with many new features, the SCTP congestion control mechanism fails to distinguish wireless loss from congestion loss, so its performance over high-BER wireless networks suffers from unnecessary congestion window decreases. To improve the performance of SCTP in such a scenario, a Mac-error-warning method is proposed in this paper. Simulation experiments conducted through extended ns-2 validated that the proposed method could achieve higher throughput. The throughput improvement reaches 946.22% when the BER is 0.0005.",2005,0, 6615,Detecting type errors and secure coding in C/C++ applications,"Programming languages such as C/C++ suffer from memory management and code security problems, especially when their code is used in critical systems. Therefore, we need an efficient mechanism to detect memory and type errors. Some research has been done and many tools have been developed to detect these errors and to secure C/C++ code. However, these tools have some drawbacks, such as memory management and leaks, and type errors in static and dynamic analysis. Generally speaking, this paper proposes a dynamic analysis mechanism to detect type errors in modules of C/C++ code using aspect-oriented programming. We illustrate problems by examples and discuss their solutions.",2010,0, 6616,"An automated fault analysis system for SP energy networks: Requirements, design and implementation","The proliferation of monitoring equipment on modern electrical power transmission networks is causing an increasing amount of monitoring data to be captured by transmission network operators. Traditional manual data analysis techniques fail to meet the analysis and reporting requirements of the utilities which have chosen to invest in monitoring. The volume of monitoring data, the complexities in analysing multiple related data sources and the preparation of internal reports based on that analysis render timely manual analysis impractical, if not intractable. In 2006, the authors reported on the first online trials of the protection engineering diagnostics agents (PEDA) system, an automated fault diagnosis system which integrated legacy intelligent systems for the analysis of SCADA and digital fault recorder (DFR) data in order to provide automatic post fault assessment of protection system performance. In this paper the authors revisit the requirements of the TNO where PEDA was trialled. Based on a new formal specification of requirements carried out in 2008, the authors discuss the requirements met by the current version of PEDA and how PEDA could be augmented to meet these new requirements highlighted in this latest analysis of the utilities' requirements.",2009,0, 6617,Supporting Composite Smart Home Services with Semantic Fault Management,"The proliferation of smart home technologies providing home users with digital services presents an interoperability problem.
This paper asserts that the value of these services can be greatly extended by enabling technology-neutral compositions of these home services, and furthermore that the reliability of such composite services will be of paramount importance to ensuring widespread deployment. Therefore fault management capabilities must accompany the composition capabilities in order to increase the robustness and reliability of such services. This paper proposes a web services-based abstraction layer for home area network service composition and a semantically informed fault management system for composed services, which can assist in diagnosis and correction of problems with composite services in smart homes of the future. Prototyping work and use cases are also described and initial metrics are presented that investigate the overhead of introducing the technology-neutral abstraction layer.",2010,0, 6618,A software methodology for detecting hardware faults in VLIW data paths,"The proposed methodology aims to achieve processor data paths for VLIW architectures that are able to autonomously detect transient and permanent hardware faults while executing their applications. The approach, carried out on the compiled application software, introduces additional instructions for checking the correctness of the computation with respect to failures in one of the data path functional units. A software approach to hardware fault detection is attractive because it can be applied only to the critical applications executed on the VLIW architecture, thus not delaying the execution of noncritical tasks. Furthermore, by exploiting the intrinsic redundancy of this class of architectures, no hardware modification is required on the data path, so no processor customization is necessary.",2003,0, 6619,A quantitative study of firewall configuration errors,"The protection that firewalls provide is only as good as the policy they are configured to implement. Analysis of real configuration data shows that corporate firewalls are often enforcing rule sets that violate well-established security guidelines. Firewalls are the cornerstone of corporate intranet security. Once a company acquires a firewall, a systems administrator must configure and manage it according to a security policy that meets the company's needs. Configuration is a crucial task, probably the most important factor in the security a firewall provides.",2004,0, 6620,New informative features for fault diagnosis of industrial systems by supervised classification,The purpose of this article is to present a method for industrial process diagnosis. We are interested in fault diagnosis considered as a supervised classification task. The interest of the proposed method is to take into account new features (and thus new information) in the classifier. These new features are probabilities extracted from a Bayesian network comparing the faulty observations to the normal operating conditions. The performance of this method is evaluated on data from a benchmark example: the Tennessee Eastman Process. Three kinds of fault in this complex process are taken into account. We show on this example that the addition of these new features allows the misclassification rate to be decreased.,2010,0, 6621,Fault detection of univariate non-Gaussian data with Bayesian network,"The purpose of this article is to present a new method for fault detection with a Bayesian network.
The interest of this method lies in a new Bayesian network structure that allows a fault to be detected in the case of a non-Gaussian signal. For that, a structure based on a Gaussian mixture model is proposed. This particular structure takes into account the non-normality of the data. The effectiveness of the method is illustrated on a simple process corrupted by different faults.",2010,0, 6622,Modeling and performance considerations for automated fault isolation in complex systems,"The purpose of this paper is to document the modeling considerations and performance metrics that were examined in the development of a large-scale Fault Detection, Isolation and Recovery (FDIR) system. The FDIR system is envisioned to perform health management functions for both a launch vehicle and the ground systems that support the vehicle during checkout and launch countdown by using a suite of complementary software tools that alert operators to anomalies and failures in real-time. The FDIR team members developed a set of operational requirements for the models that would be used for fault isolation and worked closely with the vendor of the software tools selected for fault isolation to ensure that the software was able to meet the requirements. Once the requirements were established, example models of sufficient complexity were used to test the performance of the software. The results of the performance testing demonstrated the need for enhancements to the software in order to meet the demands of the full-scale ground and vehicle FDIR system. The paper highlights the importance of the development of operational requirements and preliminary performance testing as a strategy for identifying deficiencies in highly scalable systems and rectifying those deficiencies before they imperil the success of the project.",2010,0, 6623,Fault Tolerant Control System in Critical Process Based on Ethernet Network,"The purpose of this paper is to study and develop a redundant control system for critical processes. A critical process means a high-priority process which is essential to the global system. When a fault condition occurs in such a process, it affects other processes. In this paper, a gas pressure process is used as a case study. This system consists of a dual-redundant supervisory control system (HMI Station), a dual-redundant Ethernet network, a triple-redundant controller system and triple-redundant field signals. The experimental results indicate several fault-tolerance features. Our system can increase the reliability of the whole system and reduce the risk of faults, and can be applied appropriately to any high-priority process.",2006,0, 6624,A versatile high speed bit error rate testing scheme,"The quality of a digital communication interface can be characterized by its bit error rate (BER) performance. To ensure the quality of the manufactured interface, it is critical to quickly and precisely test its BER behavior. Traditionally, BER is evaluated using software simulations, which are very time-consuming. Though there are some standalone BER test products, they are expensive and none of them includes channel emulators, which are essential to testing BER in the presence of noise. To overcome these problems, we present a versatile scheme for BER testing in FPGAs. This scheme consists of two intellectual property (IP) cores: the BER tester (BERT) core and the additive white Gaussian noise (AWGN) generator core.
We demonstrate through case studies that the proposed solution exhibits advantages in speed and cost over existing solutions.",2004,0, 6625,Robust Error Handling for Video Streaming over Mobile Networks,"The quality of video streaming in an error-prone environment suffers from packet loss. Since the loss of a packet typically affects a rather large picture area, the performance of error concealment is limited, too. In order to avoid the huge overhead caused by using smaller packet sizes, in this paper we propose and analyze a scheme utilizing the residual redundancy of the encoded video stream. This scheme has two components: (i) syntax analysis implemented at the decoder, allowing more exact localization of errors within the packet, and (ii) an entropy-code resynchronization mechanism based on out-of-band signaled length indicators. We show that the proposed approach provides substantial improvement in PSNR for the same rate, compared to the standard packet size reduction.",2007,0, 6626,Thermal Switching Error Versus Delay Tradeoffs in Clocked QCA Circuits,"The quantum-dot cellular automata (QCA) model offers a novel nano-domain computing architecture by mapping the intended logic onto the lowest energy configuration of a collection of QCA cells, each with two possible ground states. A four-phased clocking scheme has been suggested to keep the computations at the ground state throughout the circuit. This clocking scheme, however, induces latency or delay in the transmission of information from input to output. In this paper, we study the interplay of computing error behavior with delay or latency of computation induced by the clocking scheme. Computing errors in QCA circuits can arise due to the failure of the clocking scheme to switch portions of the circuit to the ground state with change in input. Some of these non-ground states will result in output errors and some will not. The larger the size of each clocking zone, i.e., the greater the number of cells in each zone, the higher the probability of computing errors. However, larger clocking zones imply faster propagation of information from input to output, i.e., reduced delay. Current QCA simulators compute just the ground state configuration of a QCA arrangement. In this paper, we offer an efficient method to compute the N-lowest energy modes of a clocked QCA circuit. We model the QCA cell arrangement in each zone using a graph-based probabilistic model, which is then transformed into a Markov tree structure defined over subsets of QCA cells. This tree structure allows us to compute the N-lowest energy configurations in an efficient manner by local message passing. We analyze the complexity of the model and show it to be polynomial in terms of the number of cells, assuming a finite neighborhood of influence for each QCA cell, which is usually the case. The overall low-energy spectrum of multiple clocking zones is constructed by concatenating the low-energy spectra of the individual clocking zones. We demonstrate how the model can be used to study the tradeoff between switching errors and clocking zones.",2008,0, 6627,A Flexible Scheme for Scheduling Fault-Tolerant Real-Time Tasks on Multiprocessors,"The recent introduction of multicore system-on-a-chip architectures for embedded systems opens a new range of possibilities for both increasing the processing power and improving the fault-robustness of real-time embedded applications. Fault-tolerance and performance are often contrasting requirements.
Techniques to improve robustness to hardware faults are based on replication of hardware and/or software. Conversely, techniques to improve performance are based on exploiting the inherent parallelism of multiprocessor architectures. In this paper, we propose a technique that allows the user to trade off parallelism with fault-tolerance in a multicore hardware architecture. Our technique is based on a combination of hardware mechanisms and real-time operating system mechanisms. In particular, we apply hierarchical scheduling techniques to efficiently support fault-tolerant, fault-silent and non-fault-tolerant tasks in the same system.",2007,0, 6628,Reliability of fault tolerant control systems: Part I,"The reliability analysis of fault-tolerant control systems is performed using Markov models. Reliability properties peculiar to fault-tolerant control systems are emphasized. As a consequence, coverage of failures through redundancy management can be severely limited. It is shown that in the early life of a system composed of highly reliable subsystems, the reliability of the overall system is affine with respect to coverage, and inadequate coverage induces dominant single point failures. The utility of some existing software tools for assessing the reliability of fault tolerant control systems is also discussed.",2001,0, 6629,Incorporating fault tolerance in analog-to-digital converters (ADCs),The reliability of ADCs used in highly critical systems can be increased by applying a two-step procedure starting with sensitivity analysis followed by redesign. The sensitivity analysis is used to identify the most sensitive blocks which could then be redesigned for better reliability by incorporating fault tolerance. This paper illustrates the steps involved in incorporating fault tolerance in an ADC. Two redesign techniques to improve the reliability of a circuit are presented. Novel selective node resizing algorithms for increased tolerance against α-particle-induced transients are discussed.,2002,0, 6630,Computation and analysis of output error probability for C17 benchmark circuit using bayesian networks error modeling,"The reliability of digital circuits is in question as new scaled transistor technologies continue to emerge. The major factor deteriorating circuit performance is the random and dynamic nature of errors encountered during operation. Output-error probability is the direct measure of a circuit's reliability. Bayesian-network error modeling is the approach used to compute the error probability of digital circuits. In our paper, we have used this technique to compute and analyze the output error probability of LGSynth's C17 benchmark circuit. The simulations are based on MATLAB and show important relationships among output-error probability, execution time and the number of priors involved in the analysis.",2010,0, 6631,The method study of fault diagnosis for grounding grid adopting the near degree,"The reliability characteristics of the grounding grid influence the safe and steady operation of the power system, but grounding grids are mostly buried underground. It is therefore very difficult to diagnose grounding grid faults without excavating the grid. To solve this problem, a new method adopting the near degree for diagnosing grounding grid faults is put forward. Firstly, the calculation theory of near-degree-based fault diagnosis of the grounding grid is described.
Secondly, the theory is analyzed and computed using the CDEGS and ANSYS software, and a simulated trough test model is established. Grounding grids with various kinds of faults are set up to simulate field conditions, a probe is used to measure the potential, and the near-degree theory is applied to diagnose the faults. Both experiment and theory show that diagnosing grounding grid faults by the fuzzy near degree is feasible and exact.",2010,0, 6632,Diagnostic fault detection & intelligent reconfiguration of fuel delivery systems,"The reliable operation of an engine's fuel delivery system is fundamental. A failure in the fuel system that impacts the ability to deliver fuel to the engine will have an immediate effect on system performance and safety. There are very few diagnostic systems that monitor the health of the fuel system and even fewer that can accommodate for detected faults. Current diagnostic techniques call for careful maintenance of fuel system components. These techniques tend to be backward-looking in that they are based on previous experience, which is not always a good indicator for future systems. This paper describes a technique developed at the Penn State Applied Research Laboratory's Condition Based Maintenance Department for fault detection and reconfiguration of fuel delivery system components. This technique has been applied to a diesel engine test rig. The test rig is fully instrumented with sensors including those for fuel pressure. Even though this technique is being applied on a diesel engine, the approach is fully applicable to any fuel delivery system.",2005,0, 6633,Determination of Geometry and Absorption Effects and Their Impact on the Accuracy of Alpha Particle Soft Error Rate Extrapolations,The results of a physical experiment and extensive simulation runs are presented for the first time demonstrating the significant effects of geometry and air absorption on accelerated alpha particle soft error rate tests. These results show that geometry and absorption must be properly accounted for even when the source is in close proximity to the device to avoid substantial underestimation of product soft failure rates in the terrestrial environment.,2007,0, 6634,Fault tolerant solutions for a MPI compute intensive application,"The running times of large-scale computational science and engineering parallel applications, executed on clusters or grid platforms, are usually longer than the mean-time-between-failures (MTBF). Hardware failures must be tolerated by the parallel applications to ensure that not all completed computation is lost on machine failures. Checkpointing and rollback recovery is a very useful technique to implement fault-tolerant applications. Although extensive research has been carried out in this field, there are few available tools to help parallel programmers enhance their applications with fault-tolerance capability. This work presents two different approaches to endow the MPI version of an air quality simulation with fault tolerance. A segment-level solution has been implemented by means of the extension of a checkpointing library for sequential codes. A variable-level solution has been implemented manually in the code. The main differences between both approaches are portability, transparency level and checkpointing overhead.
Experimental results comparing both strategies on a cluster of PCs are shown in the paper.",2007,0, 6635,Fully Distributed and Fault Tolerant Task Management Based on Diffusions,"Task management is a critical component of computational grids. The aim is to assign tasks to nodes according to a global scheduling policy and a view of the nodes' local resources. A peer-to-peer approach to task management provides better scalability for the grid and higher fault tolerance. However, mechanisms have to be proposed to avoid the computation of replicated tasks, which can reduce efficiency and increase the load on nodes. In the same way, these mechanisms have to limit the number of exchanged messages to avoid overloading the network. In previous work, we proposed two methods for task management, called active and passive. These methods are based on a random walk: they are fully distributed and fault tolerant. Each node owns a local task-state set updated via a random walk, and each node is in charge of local assignment. Here, we propose three methods to improve the efficiency of the active method. These new methods are based on a circulating word. The nodes' local task-state sets are updated through periodic diffusions along trees built from the circulating word. In particular, we show that these methods increase the efficiency of the active method: they produce fewer replicated tasks. These three methods are also fully distributed and fault tolerant. Moreover, the circulating word can be exploited for other applications such as resource management or node synchronization.",2009,0, 6636,"Low-Cost Digital Detection of Parametric Faults in Cascaded Modulators","The test of ΣΔ modulators is cumbersome due to the high performance that they reach. Moreover, technology scaling trends raise serious doubts about the intra-die repeatability of devices. An increase of variability will lead to an increase in parametric faults that are difficult to detect. In this paper, a design-oriented testing approach is proposed to perform a simple and low-cost detection of variations in important design variables of cascaded ΣΔ modulators. The digital tests could be integrated in a production test flow to improve fault coverage and bring data for silicon debug. A study is presented to tailor signature generation, with test-time minimization in mind, as a function of the desired measurement precision. The developments are supported by experimental results that validate the proposal.",2009,0, 6637,Study on Adaptation of Traveling Waves Based on Wavelet Transform for Fault Location in Automatic Blocking and Continuous Power Transmission Lines,"The theory of traveling waves and its wavelet representation are presented. The characteristics of railway automatic blocking and continuous power transmission lines and the state of fault-location research on them are introduced. Considering the influence of the bus bar, the conductors' architecture and the load along the line, the adaptation of traveling waves is analyzed. A method to locate faults by combining the zero mode and the aerial mode is brought forth, i.e., the reflected wave from the fault point is distinguished from that of the bus at the opposite terminal by the zero-mode components, and the fault position is located by the aerial-mode component. As for the mixed line, characteristics can be obtained from the combination of the aerial mode and zero mode of the fault current, so the fault section can be determined.
The modeling and simulation of the proposed method are conducted with the PSCAD/EMTDC software. The simulation results show that the proposed method is feasible.",2005,0, 6638,The fault-tolerant technique in the Rotor Current Controller in Induction Wind Generator,"The thesis expounds the importance of the rotor current controller of the induction wind generator in wind generator sets. It analyses the kinds and sources of faults of the rotor current controller and designs fault-tolerance methods for it. Several fault-tolerance techniques are used in hardware and software, so the system can identify the kind of fault and eliminate it, allowing the system to control itself and reach its best working state again. Mixed redundancy methods are used in the software part to keep the program running reliably and processing data correctly. The system can be controlled and monitored remotely; it communicates with the computer and can send diagnosis results to it in time.",2009,0, 6639,Impacts of TCSC on Switching Transients of HV transmission lines due to fault clearing,"The thyristor controlled series capacitor (TCSC) is one of the most promising components in the flexible AC transmission system (FACTS). It can adjust the power flow and improve the stability of the power system. The advantages of the TCSC can be classified as steady-state and transient ones. During a fault, the TCSC can enhance power quality by limiting the current and helping to keep the voltage as high as possible. In the operation of TCSCs, it is necessary to study the side effects of their integration into the power system. One of these studies concerns the effect of series compensation on the switching transients of the line circuit breaker due to fault clearing; here, the transient recovery voltage (TRV) is especially considered. The amplitude and rate of rise of the TRV are two parameters which affect the CB's capability to interrupt a fault current. It is important to evaluate the impact of series compensation on the operation of the circuit breaker (CB) and on the TRV when clearing a fault, because circuit breakers can fail to interrupt fault currents when power systems have TRV characteristics which exceed their ratings. On the other hand, a large and steep TRV may damage the switch and the capacitor. In this paper, the different factors and conditions which can affect the TRV in the line CB of an HV line including a TCSC are first analyzed; then, the influence of the protective operation of the TCSC on the TRV under different fault conditions is depicted and discussed through time-domain simulations. Digital simulations are carried out with PSCAD/EMTDC to analyze the transient behavior of the power system. In the transient analysis, the TCSC is simulated considering its protection devices.",2009,0, 6640,Multiple Failure Correction in the Time-Triggered Architecture,"The Time-Triggered Architecture (TTA) is an architecture for safety-critical applications. Fault-tolerance mechanisms are therefore of utmost importance to ensure correct system operation in the presence of failures as well as after transient disturbances. Currently the TTA tolerates one faulty component. Multiple transient failures are outside the fault hypothesis of the TTA, and scenarios can arise after multiple transient failures which cannot be corrected by the conventional TTA mechanism.
Therefore, we propose an algorithm for correcting the system after multiple transient failures as an extension to the fault-tolerance mechanisms of the TTA. Furthermore, we discuss variations of this algorithm.",2003,0, 6641,A Parallel Perceptron network for classification with direct calculation of the weights optimizing error and margin,"The Parallel Perceptron (PP) is a simple neural network which has been shown to be a universal approximator, and it can be trained using the Parallel Delta (P-Delta) rule. This rule tries to maximize the distance between the perceptron activations and their decision hyperplanes in order to increase its generalization ability, following the principles of Statistical Learning Theory. In this paper we propose a closed-form analytical expression to calculate, without iterations, the PP weights for classification tasks. The calculated weights globally optimize a cost function which simultaneously takes into account the training error and the perceptron margin, similarly to the P-Delta rule. Our approach, called the Direct Parallel Perceptron (DPP), has a linear computational complexity in the number of inputs, making it very interesting for high-dimensional problems. DPP is competitive with SVM and other approaches (including P-Delta) for two-class classification problems but, as opposed to most of them, the tunable parameters of DPP do not influence the results very much. Besides, the absence of an iterative training stage gives DPP the ability of on-line learning.",2010,0, 6642,A Fault Detection and Reconfigurable Control Architecture for Unmanned Aerial Vehicles,"The past decade has seen the development of several reconfigurable flight control strategies for unmanned aerial vehicles. Although the majority of the research is dedicated to fixed-wing vehicles, simulation results do support the application of reconfigurable flight control to unmanned rotorcraft. This paper develops a fault-tolerant control architecture that couples techniques for fault detection and identification with reconfigurable flight control to augment the reliability and autonomy of an unmanned aerial vehicle. The architecture is applicable to fixed- and rotary-wing aircraft. An adaptive neural network feedback linearization technique is employed to stabilize the vehicle after the detection of a fault. Actual flight test results support the validity of the approach on an unmanned helicopter. The fault-tolerant control architecture recovers aircraft performance after the occurrence of four different faults in the flight control system: three swash-plate actuator faults and a collective actuator fault. All of these faults are catastrophic under nominal conditions.",2005,0, 6643,Double-Talk-Robust Prediction Error Identification Algorithms for Acoustic Echo Cancellation,"The performance of an acoustic echo canceller may be severely degraded by the presence of a near-end signal. In such a double-talk situation, the variance of the echo path estimate typically increases, resulting in slow convergence or even divergence of the adaptive filter. This problem is usually tackled by equipping the echo canceller with a double-talk detector that freezes adaptation during near-end activity. Nevertheless, there is a need for more robust adaptive algorithms since the adaptive filter's convergence may be affected considerably in the time interval needed to detect double-talk.
Moreover, in some applications, near-end noise may be continuously present and then the use of a double-talk detector becomes futile. Robustness to double-talk may be established by taking into account the near-end signal characteristics, which are, however, unknown and time varying. In this paper, we show how concurrent estimation of the echo path and an autoregressive near-end signal model can be performed using prediction error (PE) identification techniques. We develop a general recursive prediction error (RPE) identification algorithm and compare it to three existing algorithms from adaptive feedback cancellation. The potential benefit of the algorithms in a double-talk situation is illustrated by means of computer simulations. It appears that, especially in the stochastic gradient case, a huge improvement in convergence behavior can be obtained.",2007,0, 6644,Error resilient H.264/AVC video over satellite for low packet loss rates,"The performance of video over satellite is simulated. The error resilience tools of intra macroblock refresh and slicing are optimized for live broadcast video over satellite. The improved performance using feedback over the satellite link, using a cross-layer approach, is also simulated. The new Inmarsat BGAN system at 256 kbit/s is used as a test case. This system operates at low loss rates, guaranteeing a packet loss rate of not more than 10^-3. For high-end applications such as 'reporter-in-the-field' live broadcast, it is crucial to obtain high quality without increasing delay.",2007,0, 6645,On the Dimensional Estimate of Rounding-Errors of a typical Computing Process,"The phenomenon of roundoff-error propagation is a well-known problem in computations involving floating point arithmetic. Prominent works in the field of error analysis include (1) error analysis based on a differential error-propagation model for computer algebra systems (CAS), (2) the identification and reformulation of instability in code generated by a CAS, and (3) estimating the bounds on errors in symbolic and numerical environments. The main concern in these attempts is to control error propagation by using numerically stable code. Besides these attempts, only a few efforts have been made toward a theoretical understanding of the underlying process of error propagation. In this paper, we attempt to show that roundoff errors may propagate as a random-fractal process. We apply concepts of nonlinear time-series analysis to a series of successive roundoff errors generated during the computation of Henon-map solutions. We estimate the correlation dimension, which is a measure of the fractal dimension, of the series as 5.5 ± 0.05. This low value of the correlation dimension shows that the error series can be modeled by a low-dimensional dynamical system.",2007,0, 6646,Improving Fault Tolerance by Virtualization and Software Rejuvenation,"The phenomenon that the state of software degrades with time is known as software aging. The primary method to fight aging is software rejuvenation. This paper presents new ways of effective software rejuvenation using virtualization for addressing software aging. This new approach is meant to be as little disruptive as possible for the running service and to achieve zero downtime in most cases. We construct state transition models to describe the behaviors of virtualized and non-virtualized application servers.
We map the rejuvenation actions onto this transition model as a stochastic process and express availability, downtime and downtime costs in terms of the parameters in our models. Our results show that virtualization and software rejuvenation can be used to prolong the availability of the services.",2008,0, 6647,"Work in progress - Designing an elective course on fault tolerant systems, aiming partnership between industry and academia","The present globalization trends and needs drive universities offering engineering degrees to find new avenues in the programs that they are offering, in order to be attractive for their prospective students and competitive on the job market. Universities and departments within universities are concerned about the curricula, and the type and content of offered courses. Under these auspices, elective courses have to be chosen carefully, providing the students with updated topics, research opportunities and the skills needed by today's demanding industry. An elective course on the Design of Fault Tolerant Systems is the main focus of this paper, which tries to relate the topics of the course to the industry's needs as much as possible. What the course tries to bring that is new is a set of integrated laboratories using industrial CAD tools to perform various dependability (reliability, availability, safety) analyses, Fault Tree Analysis and Failure Modes, Effects and Criticality Analysis.",2007,0, 6648,DC-bus voltage control for double star asynchronous fed drive under fault conditions,The present paper analyses the capacitor voltage problem in a double star asynchronous motor fed drive. The study has been conducted with the E464 locomotive as a reference. The proposed control is analysed in simulation and then tested on the real locomotive.,2000,0, 6649,A new fault location approach for overhead HV lines with line equations,"The present paper deals with the problem of fault location; a new approach requiring one-side measurements is proposed. The distributed line equations method was chosen to describe the overhead lines. The proposed method does not require any assumption on the fault type or on the fault resistance value. The numerical tests are conducted on single- and double-circuit lines, for various fault resistance and fault distance values, and fault type conditions, simulated under ATP-EMTP.",2003,0, 6650,Software dependability techniques validated via fault injection experiments,The present paper proposes a C/C++ source-to-source compiler able to increase the dependability properties of a given application. The adopted strategy is based on two main techniques: variable duplication/triplication and control flow checking. The validation of these techniques is based on the emulation of fault appearance by software fault injection. The chosen test case is a client-server application in charge of calculating and drawing a Mandelbrot fractal.,2001,0, 6651,Accurate evaluation of bit-error rates of optical communication systems using the Gram-Charlier series,"The probability densities and cumulative distribution functions of decision statistics of optical communications systems are expanded as a Gram-Charlier (G-C) series, leading to arbitrarily accurate systematic evaluation of bit-error rates (BERs) and optimal decision thresholds of optical communication systems. The method displays negligible computational complexity and is applicable whenever the moment or cumulant generating functions of the decision statistics are analytically available.
We applied the technique to a birth-and-death Markovian model of a direct-detection receiver with optical preamplifier in a two-level amplitude-shift keying system. The modal expansion series rapidly converged, whereas the alternative saddlepoint approximation method predicted a BER which deviated by 7% from the G-C result.",2006,0, 6652,Bandwidth-Efficient Forward-Error-Correction-Coding for High Speed Powerline Communications,"This paper applies erasure correction codes to powerline communications and compares them to turbo product codes (TPC). The TPC offers near Shannon-limit performance for the Gaussian channel. However, for a channel with burst noise, the TPC must be able to detect burst-noise boundaries in order to assign lower reliability to burst errors. If symbol-by-symbol reliability is available, then an erasure correction code, such as the information dispersal algorithm (IDA), may perform as well as TPC in terms of block error rate for large blocks, while the TPC will always have a lower bit error rate. An approximation for the performance of IDA is also given.",2006,0, 6653,Image Defect Recognition Based on Rough Set,"This paper applies rough set theory to a recognition system for image defects, and designs a rough-set decision algorithm suitable for image defect recognition. Firstly, the image is regionalized and a sequential discrete set is proposed; the continuous attributes of the image are discretized. Then the decision table model based on discrete condition attributes and decision attributes is constructed. Further, the condition-attribute significance function and a reduction algorithm are given. A novel approach for decision rule analysis and rough set recognition is proposed. Finally, this paper takes fabric defect recognition as an example to validate these algorithms. The result shows that the rough set algorithm is effective for image defect recognition, with little calculation and fast speed.",2009,0, 6654,Fault Tolerant Production Infrastructures in Practice,"This paper applies the theory of transformation of existing High Availability Standby Systems to fully Fault Tolerant Production Infrastructures in order to increase productivity, effectiveness and availability. Bringing together the concepts of change management and the experience from the IT industry, a case study of a financial institution is presented and analysed, illustrating the increased capabilities and practical limitations of a Fault Tolerant Production Infrastructure.",2007,0, 6655,A New Algorithm for Fabric Defect Detection Based on Image Distance Difference,"This paper brings forward a new method for fabric defect detection, namely the image distance difference algorithm. The system permits the user to set appropriate control parameters for fabric defect detection based on the type of the fabric. It can detect more than 30 kinds of common defects, with the advantages of high identification correctness and fast inspection speed. Finally, image processing technology is used to score the grade of the fabric piece so as to ensure quality and raise the finished-product ratio.",2009,0, 6656,Short-circuit fault mitigation methods for interior PM synchronous machine drives using six-leg inverters,"This paper characterizes six-leg inverters to mitigate short-circuit faults for interior permanent magnet (IPM) synchronous machines. Key differences between bus structures in six-leg inverters are identified.
For six-leg inverters employing two isolated DC links, it is shown that up to 75% of rated output power could be produced following a single-switch short-circuit fault. A magnet flux-nulling control method is proposed as a response to stator-winding short-circuit faults. This control method results in a zero-torque fault response by the motor. The important influence of the zero sequence in both the motor and inverter structure is identified and developed for this class of fault. Simulation and experimental results are presented verifying the proposed magnet flux-nulling control method.",2004,0, 6657,Architecture-Level Soft Error Analysis: Examining the Limits of Common Assumptions,"This paper concerns the validity of a widely used method for estimating the architecture-level mean time to failure (MTTF) due to soft errors. The method first calculates the failure rate for an architecture-level component as the product of its raw error rate and an architecture vulnerability factor (AVF). Next, the method calculates the system failure rate as the sum of the failure rates (SOFR) of all components, and the system MTTF as the reciprocal of this failure rate. Both steps make significant assumptions. We investigate the validity of the AVF+SOFR method across a large design space, using both mathematical and experimental techniques with real program traces from SPEC 2000 benchmarks and synthesized traces to simulate longer real-world workloads. We show that AVF+SOFR is valid for most of the realistic cases under current raw error rates. However, for some realistic combinations of large systems, long-running workloads with large phases, and/or large raw error rates, the MTTF calculated using AVF+SOFR shows significant discrepancies from that calculated using first principles. We also show that SoftArch, a previously proposed alternative method that does not make the AVF+SOFR assumptions, does not exhibit the above discrepancies.",2007,0, 6658,Hierarchical fault tolerance for nanoscale memories,"This paper considers dynamic fault tolerance techniques applicable to ultradense memories based on nanoscale crossbar architectures. It describes how they can be integrated, in a hierarchical fashion, to provide runtime protection against device failures. Simulation is employed to estimate the effectiveness of a number of configurations, and the results show that there are synergistic combinations that allow for substantial reliability improvements over conventional techniques. For example, a memory with a bit-level failure rate of 2×10^-4 FIT and a failure distribution of 10% arrays and 30% each for bits, rows, and columns shows three orders of magnitude reduction in uncorrectable errors at 100 000 hours when a given amount of redundancy is allocated to a combination of error correction coding and spare rows, columns, and arrays versus other configurations.",2006,0, 6659,Construction of secure and fast hash functions using nonbinary error-correcting codes,"This paper considers iterated hash functions. It proposes new constructions of fast and secure compression functions with nl-bit outputs for integers n>1 based on error-correcting codes and secure compression functions with l-bit outputs. This leads to simple and practical hash function constructions based on block ciphers such as the Data Encryption Standard (DES), where the key size is slightly smaller than the block size; IDEA, where the key size is twice the block size; Advanced Encryption Standard (AES), with a variable key size; and to MD4-like hash functions.
Under reasonable assumptions about the underlying compression function and/or block cipher, it is proved that the new hash functions are collision resistant. More precisely, a lower bound is shown on the number of operations to find a collision as a function of the strength of the underlying compression function. Moreover, some new attacks are presented that essentially match the presented lower bounds. The constructions allow for a large degree of internal parallelism. The limits of this approach are studied in relation to bounds derived in coding theory.",2002,0, 6660,Control-oriented errors quantification under measurement disturbance,"This paper considers the problem of control-oriented errors quantification for a known linear discrete-time SISO model under coprime factor perturbations and measurement disturbance. Upper bounds on exogenous disturbance, measurement disturbance and perturbations in output and control are assumed to be unknown to the controller designer. The problem under consideration is to compute data-consistent upper bounds on steady-state tracking errors in the framework of the ℓ1 robust control theory.",2009,0, 6661,Verifying architectural variabilities in software fault tolerance techniques,"This paper considers the representation of different software fault tolerance techniques as a product line architecture (PLA) for promoting the reuse of software artifacts. The proposed PLA enables the specification of a series of closely related architectural applications, obtained by identifying variation points associated with design decisions regarding software fault tolerance. These decisions are used to choose the appropriate technique depending on the features selected, e.g., the number of redundant resources, or the type of adjudicator. The proposed approach also comprises the formalisation of the PLA, using the B-method and CSP, for systematising the verification of fault-tolerant software systems at the architectural level. The properties verified cover two complementary contexts: the selection of the correct architectural variabilities for instantiating the PLA, and also the properties of the chosen fault tolerance techniques.",2009,0, 6662,Analysis of the Dynamic Behavior of a Self-Commutated BTB System During Line Faults,"This paper deals with a 50-MW self-commutated back-to-back (BTB) system intended for power-flow control between two ac transmission networks. It focuses on the dynamic behavior of the BTB system during single-line-to-ground and double-line-to-ground faults. Attention is particularly paid to the dc magnetic flux deviation in the grid and converter transformers, and the circulating current inside the grid transformers, which would produce undesirable effects on the system. Theoretical equations related to the amount of circulating current and the dc magnetic flux deviation are derived. The theoretical results developed in this paper are confirmed by computer simulation.",2010,0, 6663,Influences of Inter-Stream Synchronization Error on Collaborative Work in Haptic and Visual Environments,"This paper deals with a haptic media and video transfer system in which users can touch and move a real object located at a remote place by using haptic interface devices while watching video of the object. Making use of the system, the users can do collaborative work in which they lift and move the object by holding it between the styluses of the haptic interface devices.
In the system, we investigate the influences of the inter-stream synchronization error between haptic media and video on the media synchronization quality and the ease of the collaborative work by subjective assessment. Assessment results show that as the synchronization error increases, the media synchronization quality and the ease of the collaborative work become worse.",2008,0, 6664,Digital terrestrial television broadcasting error rate measurement,"This paper deals with a measurement of bit error rates in the DVB-T transmission system. Both non-hierarchical and hierarchical modulations are examined, as well as basic transmission channel models (Gaussian, Rice and Rayleigh). The measured bit error rates before Viterbi error correction (channel error rate) and after Viterbi error correction, with varying C/N ratio, are graphically expressed and compared to the values mentioned in the DVB-T specification. A broadcast test system ""SFU"" developed by Rohde & Schwarz and a test receiver ""MSK 33"" by Kathrein were used for the measurement. Finally, the obtained results are evaluated and discussed in relation to the theory.",2008,0, 6665,Model Based Fault Detection of Backlash in Mechatronic Test Bench,"This paper deals with model-based fault detection and isolation of the backlash phenomenon. The dynamic model of the electromechanical system is derived by using the bond graph tool. The innovative aspect of this contribution is the use of a single representation language for modelling and monitoring the system in the presence of backlash. Fault indicators are deduced from the analytical model and used in order to detect and isolate possible faults on the physical system, including undesirable backlash. Simulation tests are done on an electromechanical test bench which consists of a DC motor carrying a mechanical load and including a backlash phenomenon.",2006,0, 6666,On the coverage of delay faults in scan designs with multiple scan chains,"The use of multiple scan chains for a scan design reduces the test application time by reducing the number of clock cycles required for a scan-in/scan-out operation. In this work, we show that the use of multiple scan chains also increases the fault coverage achievable for delay faults, requiring two-pattern tests, under the scan-shift test application scheme. Under this scheme, the first pattern of a two-pattern test is scanned in, and the second pattern is obtained by shifting the scan chain once more. We also demonstrate that the specific way in which scan flip-flops are partitioned into scan chains affects the delay fault coverage. This is true even if the order of the flip-flops in the scan chains remains the same. To demonstrate this point, we describe a procedure that partitions scan flip-flops into scan chains so as to maximize the coverage of transition faults.",2002,0, 6667,Correction of Omnidirectional Camera Images using Reconfigurable Hardware,The use of omnidirectional cameras in computer vision algorithms imposes an additional computational load if the processing algorithm needs to extract the rectangular transformed image. There exist different alternatives to deal with this task inside desktop computers but it would be desirable to design a hardware correction unit in order to operate in real-time while using minimal silicon area resources. The proposed design allows the online transformation of a circular grayscale image coming from an omnidirectional camera to obtain the corresponding rectangular image.
The model parameters include the image size and radial center. The processing time is one output pixel per clock cycle (50 MHz) which allows the image to be transformed before the arrival of the next one in most applications. It is possible to insert the developed FPGA core in the digital interface of current omnidirectional cameras.,2006,0, 6668,A novel fixed bit plane error resilient image coding for wireless multimedia transmission,"The variable length code (VLC) is the most popular technique used in DCT-based image compression standards such as JPEG, MPEG and H.26x. Unfortunately, it is highly sensitive to channel noise. For wireless multimedia transmission, any bit error will cause serious error propagation and result in large image quality degradation. Moreover, retransmission is usually unavailable for real-time applications. As a result, a highly error-resilient image coding scheme is important for wireless applications. We propose a novel DCT-based fixed bit plane error resilient image coding (FBP-ERIC) scheme, which can minimize the error propagation effect with low redundancy. The complexity is much less than that of conventional coding schemes. In addition, it has a highly accurate error detection capability for performing error concealment. Even at a 0.1% bit-error-rate, 46.5% of erroneous blocks can be accurately detected. Hence, high image quality (PSNR = 32.06 dB) can be obtained by applying a simple error compensation mechanism.",2002,0, 6669,Improving Fault Tolerance in High-Precision Clock Synchronization,"The very popular Precision Time Protocol (PTP or IEEE 1588) is widely used to synchronize distributed systems with high precision. The underlying principle is a master/slave concept based on the regular exchange of synchronization messages. This paper investigates an approach to enhance PTP with fault tolerance and to overcome the transient deterioration of synchronization accuracy during recovery from a master failure. To this end, a concept is proposed where a group of masters negotiates a fault-tolerant agreement on the system-wide time and transparently synchronizes the associated IEEE 1588 slaves. Experimental verification on the basis of an Ethernet implementation shows that the approach is feasible and indeed improves the overall synchronization accuracy in terms of fault tolerance.",2010,0, 6670,A Labview based rotor fault diagnostics tool for inverter fed induction machines by means of the Vienna monitoring method at variable speed,"The Vienna monitoring method (VMM) is a fault detection technique for squirrel cage induction machines. It is based on the comparison of the calculated torque values of two machine models with different model structures. Till now, only steady-state operation has been investigated. This contribution deals with its exploitation for variable-speed drives under dynamic conditions. The introduced configuration is based on a Labview application running on a portable personal computer system.",2000,0, 6671,"Performance, Fault-Tolerance and Scalability Analysis of Virtual Infrastructure Management System","Virtual infrastructure has become more and more popular in grid and cloud computing. With its growing scale, the management of the resources in the virtual infrastructure faces a great technical challenge. Supporting the upper services effectively raises higher requirements for the performance, fault-tolerance and scalability of virtual infrastructure management systems.
In this paper, we study the performance, fault-tolerance and scalability of virtual infrastructure management systems with three typical structures: centralized, hierarchical and peer-to-peer. We give the mathematical definitions of the evaluation metrics, carry out a detailed quantitative analysis, and then, based on this analysis, draw several useful conclusions for enhancing the performance, fault-tolerance and scalability. We believe that the results of this work will help system architects make informed choices when building virtual infrastructure.",2009,0, 6672,Automated fault localization based on unified Web service and NGN benchmarking,"The wide adoption of web-based distributed systems, and consequently their complexity, leads to an increasing demand for performance and QoS testing tools for these systems. Pushing forward open NGN protocols has resulted in a proliferation of benchmarking, interoperability and prototyping tools, yet there is still no solid strategy for integrating these efforts, collecting test information and processing it at higher levels. We propose a thin management layer able to consume real-time test data from existing tools, using it for statistics collection and fault management based on an extensible policy system. The results of validating the proposed system in connection with a unified web service and NGN performance testing tool are discussed.",2009,0, 6673,Exploiting FPGA for accelerating fault injection experiments,"The widespread adoption of VLSI devices for safety-critical applications asks for effective tools for the evaluation and validation of their reliability. Fault injection is commonly adopted for this task, and the effectiveness of the adopted techniques is therefore a key factor for the reliability of the final products. In this paper we present new techniques for exploiting FPGAs to speed up fault injection in VLSI circuits. Thanks to suitable circuitry added to the original circuit, transient faults affecting memory elements in the circuit can be considered. The proposed approach allows performing fault injection campaigns that are comparable to those performed with hardware-based techniques in terms of speed, but shows a much higher flexibility in terms of supported fault models.",2001,0, 6674,Position Measurement and Wireless Measurement System of Straightness Error Based on Two-Dimensional PSD,"A wireless measurement system for straightness error based on a two-dimensional PSD, the C8051F330 and the PTR4000 was designed. The principle of the PSD, the signal processing circuit, and the wireless straightness-error data transmission system are introduced in detail. An algorithm meeting the minimum condition is proposed, which constructs the characteristic polygon and takes the maximum intercepts of the measuring points in the characteristic polygon as the straightness error. The system has the features of high precision and fast speed. Combining the two types of straightness-error measurement methods, the straightness-error evaluation software was designed. It can be used with other straightness measuring instruments such as level meters and autocollimators.",2010,0, 6675,Optimized Spacecraft Fault Protection for the WISE Mission,"The WISE project is a NASA-funded medium-class Explorer mission to map the entire sky in four infrared bands during the course of a 6-month survey. Because of the mission's limited financial resources, a traditional robustness strategy of full block-redundancy was not feasible.
By leveraging aspects of the mission design that tend to reduce the risk associated with certain failures, the project has been able to adopt a robustness strategy of mitigating high-risk failures, while accepting the risk of low-impact faults or unlikely faults in heritage equipment with proven reliability. The resulting WISE flight system design is primarily single-string with some select functional- and block-redundancy and includes fault tolerance measures targeted at achieving the most cost-effective risk reduction possible for the system design. The fault protection team has been challenged with balancing the risk of faults, cost, and down-time with Ground Segment capabilities, heritage, and effectively designed fault mitigations. Faults were identified via a collection of analyses, and a criticality rating was applied to each fault to assess its impact on the mission. Taking into consideration each fault's impact and time criticality, mitigations to each possible fault were considered in areas such as on-board autonomy, the addition or use of functional and block redundancy, and ground system detection. Through this exercise, the project has realized a robust and reliable system design in line with the project's risk posture and cost constraints.",2008,0, 6676,Experimental study and analysis of soft errors in 90nm Xilinx FPGA and beyond,The Xilinx methodology used for soft error test and measurement in FPGAs is presented. The technology scaling impact on SER from 250 nm down to 65 nm is presented and analyzed by comparing beam and real-time testing. Some trends are presented and analyzed.,2007,0, 6677,Stochastic Analysis and Measurement of Error Vector Magnitude of OFDM Signal in MMIC Nonlinear Power Amplifier,"A theoretical approach to predicting the error vector magnitude (EVM) performance of an orthogonal frequency-division multiplexing (OFDM) system in the presence of nonlinear distortion caused by RF amplifiers is described. It is shown that the EVM model can be described by means of the autocorrelation of an uncorrelated distortion noise term and a common complex gain. For the experimental validation, the EVM value of modulated signals through the fabricated HBT MMIC power amplifier is derived by the proposed method and compared with the measurement results.",2006,0, 6678,Agent-based real-time fault diagnosis,"Theory and applications of model-based fault diagnosis have progressed significantly in the last four decades. In addition, there has been increasing use of model-based design and testing in the automotive industry to reduce design errors, perform real-time simulations for rapid prototyping, and carry out hardware-in-the-loop testing. For vehicle diagnosis, a global diagnosis method, which collects the diagnostic information from all the subsystem electronic control units (ECUs), is not practical because of high communication requirements and time delays induced by centralized diagnosis. Consequently, an agent-based distributed diagnosis architecture is needed. In this architecture, each subsystem resident agent (embedded in the ECU) performs its own fault inference and communicates the diagnostic results to a vehicle expert agent. A vehicle expert agent performs cross-subsystem diagnosis to resolve conflicts among resident agents, and to provide an accurate vehicle-level diagnostic inference. In this paper, we propose a systematic way to design an agent-based diagnosis architecture.
A hybrid model-based technique that seamlessly employs a graph-based dependency model and quantitative models for intelligent diagnosis is applied to each individual ECU. Diagnostic tests for each individual ECU are designed via model-based diagnostic techniques based on a quantitative model. The fault simulation results, in the form of a diagnostic matrix, are extracted into a dependency model for fast fault inference by a resident agent. The global diagnostic inference is performed through a vehicle expert agent that trades off computational complexity and communication load. This architecture is demonstrated on the engine air induction subsystem. The solution is generic and can be applied to a variety of distributed control systems",2005,0, 6679,Study on modern spectrum analysis system for mechanical fault diagnosis,There are few modern spectrum analysis systems available in the literature for mechanical fault diagnosis. In this paper we develop a modern spectrum analysis system to identify fault characteristics from vibration signals of complex mechanical systems. The system is tested on several typical examples in engineering. It is shown that the presented system performs well on actual engineering structures and may also be applied to signal processing in electronic or communication systems.,2010,0, 6680,A case study in root cause defect analysis,"There are three interdependent factors that drive our software development processes: interval, quality and cost. As market pressures continue to demand new features ever more rapidly, the challenge is to meet those demands while increasing, or at least not sacrificing, quality. One advantage of defect prevention as an upstream quality improvement practice is the beneficial effect it can have on interval: higher quality early in the process results in fewer defects to be found and repaired in the later parts of the process, thus causing an indirect interval reduction. We report a retrospective root cause defect analysis study of the defect Modification Requests (MRs) discovered while building, testing, and deploying a release of a transmission network element product. We subsequently introduced this analysis methodology into new development projects as an in-process measurement collection requirement for each major defect MR. We present the experimental design of our case study, discussing the novel approach we have taken to defect and root cause classification and the mechanisms we have used for randomly selecting the MRs to analyze and collecting the analyses via a Web interface. We then present the results of our analyses of the MRs, describe the defects and root causes that we found, and delineate the countermeasures created to either prevent those defects and their root causes or detect them at the earliest possible point in the development process. We conclude with lessons learned from the case study and resulting ongoing improvement activities.",2000,0, 6681,An optimization to automatic Fault Tree Analysis and Failure Mode and Effect Analysis approaches for processes,"There are two issues to be addressed in the automatic Fault Tree Analysis (FTA) and Failure Mode and Effect Analysis (FMEA) approaches: resource faults and channel faults are not considered when generating the artifact flow graph (AFG) from Little-JIL processes. The AFG is thus incomplete, and the fault trees and FMEA reports automatically generated partially from the AFG are incomplete.
In this paper, we put forward an approach that introduces resource instances and channel instances into fault propagation in Little-JIL processes, which makes FTA and FMEA for Little-JIL processes much more effective. That is to say, how resource instances and channel instances might affect fault propagation in Little-JIL processes is taken into account while performing these two automatic safety analysis techniques.",2010,0, 6682,A numerical optimization-based methodology for application robustification: Transforming applications for error tolerance,"There have been several attempts at correcting process variation induced errors by identifying and masking these errors at the circuit and architecture level. These approaches take up valuable die area and power on the chip. As an alternative, we explore the feasibility of an approach that allows these errors to occur freely and handles them in software, at the algorithmic level. In this paper, we present a general approach to converting applications into an error tolerant form by recasting these applications as numerical optimization problems, which can then be solved reliably via stochastic optimization. We evaluate the potential robustness and energy benefits of the proposed approach using an FPGA-based framework that emulates timing errors in the floating point unit (FPU) of a Leon3 processor. We show that stochastic versions of applications have the potential to produce good quality outputs in the face of timing errors under certain assumptions. We also show that good quality results are possible for both intrinsically robust algorithms and fragile applications under these assumptions.",2010,0, 6683,A benchmark study approach to fault diagnosis of industrial process control systems,"There have been several proposals and suggestions of benchmark studies for evaluating the performance of fault detection and isolation (FDI) methods allied to real industrial plant. The main aim of these benchmarks is to provide a training facility for the engineering community (both industry and academia) to gain an understanding and ""feel"" for the way in which various FDI methods can perform in a realistic control engineering application setting. This is deemed an essential step in the process of transferring the technology (often gained in the academic community) into real application. This presentation would provide the description and application of a benchmark scheme based on an intelligent electro-pneumatic valve actuator with in-situ (in the loop) testing and fault signalling in a sugar factory evaporation process. The overall application would also be outlined; this involves on-line FDI and monitoring of the sensors and several actuators of a sugar juice evaporation plant, providing overall monitoring of the plant under closed-loop control. The study was conducted within the Research Training Network ""Development and Application of Methods for Actuator Diagnosis in Industrial Control Systems"" DAMADICS {www.eng.hull.ac.uk/research/control}, funded by the European Commission in the Human Improvement Programme of Framework 5. The FDI benchmark is method-independent and based on an in-depth study of the phenomena that can lead to likely faults in valve actuator systems. The work to be presented uses a detailed consideration of the physical and electro-mechanical properties (and their modelling requirements) of an intelligent industrial actuator.
The presentation would also include the typical engineering requirements of an actuator valve operating under challenging process conditions, together with the setting up of suitable performance indices for evaluating the FDI results. The results to be described correspond to real in-the-loop testing and FDI evaluation with injected fault signals.",2005,0, 6684,Fault detection of Air Intake Systems of SI gasoline engines using mean value and within cycle models,"This paper addresses the detection of faults in Air Intake Systems (AIS) of SI gasoline engines based on real-time measurements. It presents a comparison of two classes of models for fault detection, namely those using a Mean Value Engine Model (MVEM), involving variables averaged over cycles, and those using a Within-Cycle Crank-angle-based Model (WCCM), involving instantaneous values of variables changing with crank angle. Numerical simulation results of intake manifold leak and mass air flow sensor gain faults, obtained using the industry standard software called AMESim™, have been used to demonstrate the fault detection capabilities of the individual approaches. Based on these results it is clear that the method using the WCCM has a higher fault detection sensitivity compared to the one that uses the MVEM, albeit at the expense of increased computational and modeling complexity.",2009,0, 6685,Fault-coverage analysis techniques of crosstalk in chip interconnects,"This paper addresses the problem of evaluating the effectiveness of test sets to detect crosstalk defects in system-level interconnects and buses of deep submicron (DSM) chips. The fast and accurate estimation technique will enable: 1) evaluation of different existing tests, like functional, scan, logic built-in self-test (BIST), and delay tests, for effective testing of crosstalk defects in core-to-core interconnects and 2) development of crosstalk tests if the existing tests are not sufficient, thereby minimizing the cost of interconnect testing. Based on a covering relationship we distinguish between transition tests in detecting crosstalk defects and develop an abstract crosstalk fault model for chip interconnects. With this fault model and the covering relationship, we develop a fast and efficient method to estimate the fault coverage of any general test set. We also develop a simulation-based technique to calculate the probability of occurrence of the defects corresponding to each fault, which enables the fault-coverage analysis technique to produce accurate estimates of the actual crosstalk defect coverage of a given test set. The crosstalk test and fault properties, as well as the accuracy of the proposed crosstalk coverage analysis techniques, have been validated through extensive simulation experiments. The experiments also demonstrate that the proposed crosstalk techniques are orders of magnitude faster than the alternative method of SPICE-level simulation. Finally, we demonstrate the practical applicability of the proposed fault-coverage analysis technique by using it to evaluate the crosstalk fault coverage of logic BIST tests for the system-level interconnects and buses in a digital signal processor core.",2003,0, 6686,Assessing quantum circuits reliability with mutant-based simulated fault injection,"This paper addresses the problem of evaluating the fault tolerance algorithms and methodologies (FTAMs) for quantum circuits, by making use of fault injection techniques. The proposed mutant-based fault injection techniques are inspired by their classical counterparts [T.A.
DeLong et al., 1996] [E. Jenn et al., 1994], and were adapted to the specific features of quantum computation, including the available error models [J. P. Hayes et al., 2004] [E. Knill et al., 1997]. HDLs were employed in order to perform fault injection, due to their capacity for behavioral and structural circuit description, as well as their hierarchical features. Besides providing a much more realistic description, the experimental simulated fault injection campaigns provide quantitative means for quantum fault tolerance assessment.",2007,0, 6687,Path Splicing with Guaranteed Fault Tolerance,"This paper addresses the problem of exploring the fault tolerance potential of the routing primitive called path splicing. This routing mechanism has been recently introduced in order to improve the reliability level of networks. The idea is to provide for each destination node in a network several different routing trees, called slices, by running different routing protocols simultaneously. The possibility for the traffic to switch between different slices at any hop on the way to the destination makes it possible to achieve a level of reliability that is close to the ideal level achieved by the underlying network. In this work we show that there is a method for computing just two slices that achieves fault tolerance against all single-link failures that do not disconnect the underlying network. We present an experimental evaluation of our approach, showing that for a number of realistic topologies our method of computing the slices achieves the same level of fault tolerance that is achieved by a much larger number of slices using the previously proposed method.",2009,0, 6688,A study of adaptive forward error correction for wireless collaborative computing,"This paper addresses the problem of reliably multicasting Web resources across wireless local area networks (WLANs) in support of collaborative computing applications. An adaptive forward error correction (FEC) protocol is described, which adjusts the level of redundancy in the data stream in response to packet loss conditions. The proposed protocol is intended for use on a proxy server that supports mobile users on a WLAN. The software architecture of the proxy service and the operation of the adaptive FEC protocol are described. The performance of the protocol is evaluated using both experimentation on a mobile computing testbed and simulation. The results of the performance study show that the protocol can quickly accommodate worsening channel characteristics in order to reduce delay and increase throughput for reliable multicast channels.",2002,0, 6689,Residual generator function design for actuator fault detection and isolation of a Piper PA30 aircraft,"This paper addresses the problem of the detection and isolation of actuator faults on a general aviation aircraft, characterised by a nonlinear model, in the presence of wind gust disturbances. In particular, this work investigates the design of residual generators in order to realise complete diagnosis schemes when additive faults are present. The use of a canonical input-output polynomial description for the linearised model of the aircraft allows minimal-order residual generators to be computed in a straightforward way. These tools lead to dynamic filters that can guarantee both disturbance signal decoupling and robustness properties with respect to linearisation errors.
The results obtained in the simulation of the faulty behaviour of a Piper PA30 are finally reported.",2004,0, 6690,"High compression rate and error free grayscale images coding method using morphological ""two steps"" skeleton","This paper addresses the representation of grayscale images by means of binary mathematical morphology, a relatively new nonlinear theory for image processing based on set theory. The new image representation described, called the ""two steps"" skeleton representation, is an extension of the morphological binary skeleton. This article presents the theoretical background of the morphological image representation and shows an application to grayscale images",2001,0, 6691,Study on software fault-tolerance of computer-based measuring and controlling system,"This article discusses the application of software fault-tolerance in railway computer-based measuring and controlling systems, according to the features of railway transportation and the theory of software engineering. A further study on the feasibility of this method, based on redundancy techniques, is also presented. This paper will serve as a reference for understanding fault-tolerance in computer-based measuring and controlling systems and its programming",2000,0, 6692,Design and implementation of MW-class wind turbine fault monitoring system based on GSM short message,"This article discusses the development and implementation of a megawatt-class wind turbine fault monitoring system. The system merges GSM communication, DSP control, computer networking, integrated display and information management technologies into one organic whole, and carries out comprehensive wireless remote monitoring and control of the wind turbine. The paper introduces the structure and composition of the fault monitoring system, the design and implementation of the GSM short message module and other technological problems; analyzes the hardware circuit design of the entire GSM module, the short message encoding method and the methods of transmitting and reading short messages; and gives the related implementation procedures.",2010,0, 6693,Optimized error protection of scalable image bit streams [advances in joint source-channel coding for images],"This article focuses on FEC for scalable image coders. For various channel models, we survey recent progress made in system design and discuss efficient source-channel bit allocation techniques, with emphasis on unequal error protection. This article considered JSCC (joint source-channel coding) at the application layer only. Recent research has studied cross-layer optimization, where JSCC is applied to both the application layer and the physical layer. The basic task here is to minimize the average distortion by allocating available power, subcarriers, and bandwidth among users at the physical layer and source-channel symbols at the application layer subject to a total resource constraint. Most of the JSCC systems covered in this article can be readily extended to transmit scalable compressed bit streams of video sequences and 3-D meshes. Due to the stringent delay constraints in video communications and the fact that MPEG is currently exploring a scalable video coding standard, fast JSCC algorithms are expected to play a bigger role and bring more performance gains.
This article is also expected to stimulate further research efforts into JSCC and, more importantly, prompt the industry to adopt some of these JSCC algorithms in their system designs, thus closing the cycle from algorithm development to implementation.",2005,0, 6694,Research on Precise Synchronization for TMR Fault-Tolerant Embedded Computer,"This article presents a precise synchronization algorithm based on a status tracking and locking mechanism. Tracking the execution state of the triple computer and the running state of the time-base counter through a dual state machine not only implements precise synchronization of the TMR computer, bringing both status synchronization precision and time-base synchronization precision below 30 ns, but also saves valuable interconnection resources and reduces implementation cost and other system resource overhead.",2009,0, 6695,"Toward hardware-redundant, fault-tolerant logic for nanoelectronics","This article provides an overview of several logic redundancy schemes, including von Neumann's multiplexing logic, N-tuple modular redundancy, and interwoven redundant logic. We discuss several important concepts for redundant nanoelectronic system designs based on recent results. First, we use Markov chain models to describe the error-correcting and stationary characteristics of multiple-stage multiplexing systems. Second, we show how to obtain the fundamental error bounds by using bifurcation analysis based on probabilistic models of unreliable gates. Third, we describe the notion of random interwoven redundancy. Finally, we compare the reliabilities of quadded and random interwoven structures by using a simulation-based approach. We observe that the deeper a circuit's logical depth, the more fault-tolerant the circuit tends to be for a fixed number of faults. For a constant gate failure rate, a circuit's reliability tends to reach a stationary state as its logical depth increases.",2005,0, 6696,How to measure the impact of specific development practices on fielded defect density,"This author has mathematically correlated specific development practices to defect density and probability of on-time delivery. She summarizes the results of this ongoing study, which has evolved into a software prediction modeling and management technique. She has collected data from 45 organizations developing software primarily for equipment or electronic systems. Of these 45 organizations, complete and unbiased delivered defect data and actual schedule delivery data were available for 17 organizations. She presents the mathematical correlation between the practices employed by these organizations and defect density. This correlation can be, and is, used to predict defect density and to improve software development practices for the best return on investment",2000,0, 6697,Multi-rate receiver design with IF sampling and digital timing correction,This contribution deals with a fully digital multirate radio receiver suitable for vehicular applications. Timing correction and sample rate conversion are performed by a polynomial interpolator. Three different receiver configurations are considered in terms of computational complexity and BER performance. Careful selection of the intermediate frequency turns out to play a crucial role. System parameters are provided yielding good BER performance for all considered symbol rates.
Results are verified by computation of the BER degradation as compared to an analog receiver with synchronized symbol-rate sampling.,2003,0, 6698,GNSS-Derived Path Delay: An Approach to Compute the Wet Tropospheric Correction for Coastal Altimetry,"This letter presents an innovative method for computing the wet tropospheric correction for altimetry measurements in coastal regions, where the measurements from the microwave radiometers (MWRs) onboard altimetric missions become invalid. The method, called Global Navigation Satellite System (GNSS)-derived Path Delay, gives an estimation of the correction, along with the associated mapping error, from the combination of independent zenith wet delay (ZWD) values obtained from the tropospheric delays derived at a network of coastal GNSS stations, from the MWR measurements acquired before land degradation, and from the European Centre for Medium-Range Weather Forecasts Deterministic Atmospheric Model. The wet tropospheric correction is estimated at each altimeter point with an invalid MWR value using a linear space-time objective analysis technique that takes into account the spatial and temporal variability of the ZWD field and the accuracy of each data set used. The method was applied in the South West European region for the whole Envisat data series, and the results are presented here. The uncertainty of the wet-delay estimates is below 1 cm, provided they are obtained for points at distances shorter than ~50 km from a GNSS station, and/or valid MWR measurements are available for the estimation. The method can be implemented globally and foster the use of satellite altimetry in coastal studies.",2010,0, 6699,A Novel Frame Error Concealment Algorithm Based on Dynamic Texture Synthesis,"Traditional frame error concealment methods primarily predict the motion trend for the corrupted pixel or block, which can generate edge fragmentation and object deformation. When the damaged images contain non-linear motion and global illumination changes, these methods perform poorly. In order to improve the efficiency of frame error concealment, we propose an improved frame error concealment method using dynamic texture extrapolation. First, we give an improved solution of the dynamic texture model, which is well suited to video coding.",2010,0, 6700,Fault Diagnosis Way Based on RELAX Algorithms in Frequency Domain for the Squirrel Cage Induction Motors,"With the traditional method of spectrum analysis of the current signal via FFT, it is hard to diagnose broken rotor bar faults. This paper presents a fault diagnosis method using the RELAX algorithm in the frequency domain. It can estimate the amplitude and phase values of various frequency components using coarse and fine estimation according to the criterion of minimum energy. Finally, the fundamental component can be eliminated in the frequency domain after the expressions of the above frequency components are constructed. Compared with the method of eliminating the fundamental component in the time domain, this method has the advantage of computing faster, although it is less accurate. However, it has been proved that the algorithm can highlight the fault characteristic and is of great value for early motor fault diagnosis.",2010,0, 6701,Video error correction using data hiding techniques,"The transmission of any data is always subject to corruption due to errors, but video transmission, because of its real-time nature, must deal with these errors without retransmission of the corrupted data.
The error can be handled by using forward error correction in the encoder or error concealment techniques in the decoder. This MPEG-2 compliant codec uses data hiding to transmit error correction information and several error concealment techniques in the decoder. The decoder resynchronizes more quickly with fewer errors than traditional resynchronization techniques. This provides for a much higher quality picture in an error-prone environment while creating an almost imperceptible degradation of the picture in an error-free environment",2001,0, 6702,The use of steganography to enhance error detection and correction in MPEG-2 video,"The transmission of data is always subject to corruption due to errors; however, video transmission, because of its real-time nature, must often deal with these errors without retransmission of the corrupted data. Our MPEG-2 compliant codec uses data hiding principles to transmit parity checking information for the DCT coefficients and uses side information (as provided in the MPEG-2 standard) to enhance the recovery of lost differentially encoded values. This information allows for improved resynchronization by correctly resynchronizing at least 7 times as often, reducing the number of macroblocks in error by a factor of 2 and improving the PSNR of error regions by at least 8 dB (in I frames), while uncorrupted picture quality decreases by less than 0.5 dB PSNR. Our work also demonstrates the ability to recover lost differential motion vectors by transmitting final values.",2002,0, 6703,A SysML model for code correction and detection systems,"The Unified Modeling Language (UML) is a well known approach for specifying and designing software components. UML for hardware designs of embedded systems is also possible in the simulation process, while the hardware is still in software form. The large number of tools for UML and the general adoption of this technology for heterogeneous system design and verification make UML a very powerful and robust design instrument. Based on UML, the SysML [1] language has been developed in order to support all the details of system designs. SysML extends UML towards the systems engineering domain. As a good example, a SysML model for hardware components that perform error detection and correction, based on polynomial registers mod p(x), will be presented. The approach is justified as efficient and flexible.",2010,0, 6704,New progressive method suitable for the exposure optimization of large and complex defect-free chips direct written by ZBA 21 e-beam tool,The use of a new progressive method suitable for exposure optimization is investigated for large and complex defect-free chips directly written by the ZBA 21 electron beam pattern generator together with all corresponding microprocesses. Well controlled and resolved details (spaces) between individual quasi-square structures of the final large area neural holography chip were achieved in the range of about 50 nm.,2008,0, 6705,An Investigation into Timing Synchronization of π/4-DQPSK Signals using Gardner Symbol Timing Error Detection Algorithm and Polyphase Filterbanks,The use of a polyphase filterbank with the Gardner timing error detection (TED) algorithm for symbol timing synchronization of the π/4-shifted differentially encoded quadrature phase shift keying (π/4-DQPSK) modulation scheme is investigated. The S-curve and normalized power spectral density of the simulated baseband modem incorporating the Gardner TED are presented.
Closed-loop performance indicates that this algorithm can also be used with the π/4-DQPSK modulation scheme to detect the timing error,2006,0, 6706,Fault tolerant multi-layer neural networks with GA training,"This paper addresses a fault tolerant architecture of multi-layer neural networks with a genetic algorithm scheme. For large scale neural networks, implemented in a single chip or silicon wafer, it is necessary to develop self-recovery mechanisms that can automatically recover from faults without a host computer. In this paper, we propose fault tolerant multi-layer neural networks employing both hardware redundancy and weight retraining in order to realise self-recovering neural networks. The main advantages of our architecture are the low hardware cost of adding redundant neurons and fast training by a genetic algorithm implemented in hardware. A prototype system is implemented on a field programmable gate array to show the possibility of self-recovering neural networks.",2003,0, 6707,Systematic and Non-Systematic Position Error Correction of Wheeled Robots while Navigation in a Square,"This paper addresses a modified localization scheme for a wheeled mobile robot. When it navigates in a square given by the operator, the position error of the robot accumulates, and it never ends up at the goal position it initially intended to reach. The objective of localization is to estimate the position of a robot precisely. Many algorithms have been developed and are still being researched for localization of a mobile robot. Among them, a localization algorithm named continuous localization proposed by Schultz (Schultz and Adams, 1998) has merits for real-time navigation and is easy to implement compared to other schemes. Continuous localization (CL) is based on a map-matching algorithm with global and local maps, using only ultrasonic sensors to build grid maps while following a given path (e.g. squares). In addition to CL, we here propose systematic error correction and a fast, powerful map-matching algorithm for localization of a mobile robot, validated by experiments",2006,0, 6708,"Performance assessment of baseband algorithms for direct conversion tactical software defined receivers: I/Q imbalance correction, image rejection, DC removal, and channelization","This paper addresses issues relating to software-defined receivers, namely radio frequency (RF) to baseband architectures and the signal processing algorithms involved. Several direct conversion receiver architectures are introduced and analyzed for their performance. Issues relating to the quadrature imbalances and DC offset impairments are analyzed and detailed. To estimate and compensate for these impairments, several DSP algorithms are proposed and simulated for use with these receivers.",2002,0, 6709,Open Switch Fault Diagnosis for a Doubly-Fed Induction Generator,This paper addresses the analysis and detection of open switch faults in back-to-back PWM converters used in doubly-fed induction generators (DFIG). Several methods have previously been proposed to detect open switch faults in either the machine-side or line-side converter. The operating conditions that can cause possible false alarms with these methods are investigated. The proposed method detects open switch faults more reliably than any of the existing methods and hence improves overall system reliability.
The performance of the existing methods and the proposed method has been verified by both simulation and experiment.,2007,0, 6710,Multiband simultaneous reception front-end with adaptive mismatches correction algorithm,"This paper addresses the architecture of multistandard simultaneous reception receivers and aims at improving the performance-power-complexity trade-off of the front-end. To this end we propose a single front-end architecture offering lower complexity and therefore lower power consumption. In order to obtain the same performance as state-of-the-art receivers, a lightweight adaptive method is designed and implemented. It uses a mix of two digitally implemented algorithms dedicated to the correction of the front-end IQ mismatches. A case study concerning the simultaneous reception of 802.11g and UMTS signals is developed in this article.",2009,0, 6711,A Model-Based Approach to Fault Diagnosis in Service Oriented Architectures,"This paper aims to present a method of creating architectures which allow monitoring the occurrence of failures in Service Oriented Architectures (SOA). The presented approach extends Discrete Event Systems techniques to produce a method for the automated creation of a Diagnoser Service which monitors interaction between the services to identify if a failure has happened and the type of failure. To do so, a formal representation of business processes is introduced, which allows modeling of observable/unobservable events, failure and the type of failure. The paper puts forward a set of algorithms for creating models of the Diagnoser Service. Such models are then transformed into new Services implemented in BPEL, which interact with the existing services to identify if a failure has happened and the type of failure. The approach has been applied to an example of diagnosis of Right-first-time failure in Services used in telecommunications.",2009,0, 6712,Checkpointing Based Fault Tolerance Patterns for Systems with Arbitrary Deadlines,"This paper aims to provide a fault tolerant scheduling algorithm that has fault tolerance patterns for periodic task sets with arbitrary deadlines. The fault tolerance is achieved by checkpointing, where the number of checkpoints is decided on the basis of the proposed lemmas. These patterns provide minimum tolerance to all the releases and an improved tolerance to some releases, depending on the availability of slack time. They may be binary (i.e., either provide maximum or minimum tolerance to a release) or greedy (i.e., provide an improved tolerance whenever it is possible) in nature. Theorems have been proposed to ensure that the task set is schedulable with at least minimum fault tolerance. The effectiveness of the proposed patterns has been measured through extensive examples and simulations.",2007,0, 6713,The influence of fault distribution on stochastic prediction of voltage sags,"This paper analyzes the influence of the modeling of fault distribution along a transmission line on the assessment of the number and characteristics of voltage sags. A generic distribution network was used in all simulations. Different types of transformer winding connections were modeled and different (symmetrical and asymmetrical) types of faults were simulated. A line was selected from the previously identified area of vulnerability for a given bus and different faults having different distributions along the line were simulated.
It was shown that, depending on the fault distribution (uniform, normal, exponential) along the line, different numbers and characteristics of voltage sags could be expected at the selected bus.",2005,0, 6714,A case history of International Space Station requirement faults,"There is never enough time or money to perform verification and validation (V&V) or independent verification and validation (IV&V) on all aspects of a software development project, particularly for complex computer systems. We have only high-level knowledge of how the potential existence of specific requirements faults increases project risks, and of how specific V&V techniques (requirements tracing, code analysis, etc.) contribute to improved software reliability and reduced risk. An approach to this problem, fault-based analysis, is proposed, and a case history of the National Aeronautics and Space Administration's (NASA) International Space Station (ISS) project is presented to illustrate its use. Specifically, a tailored requirement fault taxonomy was used to perform trend analysis of the historical profiles of three ISS computer software configuration items as well as to build a prototype common cause tree. ISS engineers evaluated the results and extracted lessons learned",2006,0, 6715,Fault-Tolerance Verification of the Fluids and Combustion Facility of the International Space Station,"This article describes our experience with fault-tolerance verification of the Fluids and Combustion Facility (FCF) of the International Space Station (ISS). The FCF will be a permanent installation for scientific microgravity experiments in the U.S. Laboratory Module aboard the ISS. The ability to withstand faults is vital for all ISS installations. Currently, the FCF safety specification requires one-component fault-tolerance. In future versions, even greater robustness may be required. Faults encountered by ISS modules vary in nature and extent. Self-stabilization is an adequate approach to tolerance design of the FCF. However, for systems as complex as the FCF, analytical tolerance verification is not feasible. We use automated model-checking. We model the FCF in SPIN and specify stabilization predicates to which the FCF must conform. Our model of the FCF allows us to inject component faults as well as hazardous conditions. We use SPIN to automatically verify the convergence of the FCF model to legitimate states.",2006,0, 6716,A Novel Method for Geometric Correction of Multi-cameras in Panoramic Video System,"This paper describes a novel method based on a self-adaptive subdivision mesh for the geometric correction of multiple cameras in a panoramic video system. The system hardware consists of an array of six board cameras pointing outwardly. By digitally combining synchronized frames from each camera, a wide-field panoramic video can be created. Using sparse initial corresponding points between the ideal target surface and the camera image, the proposed method can subdivide the grid into a dense one to arbitrary precision. Hence, the bijection between the camera image and the projection surface can be established at pixel level. This mapping can be used to warp a camera image to the ideal panorama surface. The method is robust to lens distortion and camera relative position, and can seamlessly integrate with image matting processing. With GPU processing, real-time frame-rate panoramic video becomes available.
The experiment proves the effectiveness and simplicity of the method.",2010,0, 6717,Fault detection and visualization through micron-resolution X-ray imaging,"This paper describes a novel, non-intrusive method for the detection of faults within printed circuit boards (PCBs) and their components using digital imaging and image analysis techniques. High-resolution X-ray imaging systems provide a means to detect and analyze failures and degradations down to micron levels, both within the PCB itself and the components that populate the board. Further, software tools can aid in the analysis of circuit features to determine whether a failure has occurred, and to obtain positive visual confirmation that a failure has occurred. Many PCB and component failures previously undetectable through today's test methodologies are now detectable using this approach.",2008,0, 6718,Investigating no fault found in the aerospace industry,This paper describes a package of work to investigate the root cause of no fault found (NFF) events within the aerospace industry. The project focus is to develop practical guidance for designers and project managers to facilitate a reduction in NFF removal events for both current products and new designs. This investigation forms part of the second phase of the REMM (Reliability Enhancement Methodology and Modelling) project and comprises three diverse investigation activities: (i) examination of NFF issues at a system level that can highlight common areas of concern for all partner companies and across the Aerospace industry; (ii) classification and root cause analysis of service-data collected by partner companies; (iii) system modelling of the 'softer' NFF issues to determine the effects of intervention. This paper describes the formulation of the work package strategy and details the progress made during the first year of this three-year project.,2003,0, 6719,Fault-tolerant technique in the cluster computation of the digital watershed model,"This paper describes a parallel computing platform using existing facilities for the digital watershed model. A distributed multi-layered structure is applied to the computer cluster system, and MPI-2 is adopted as a mature parallel programming standard. An agent is introduced which makes multi-level fault tolerance possible in software development. The communication protocol, based on a checkpointing and rollback recovery mechanism, enables transaction reprocessing. Compared with a conventional platform, the new system is able to make better use of computing resources. Experimental results show the speedup ratio of the platform is almost 4 times that of the conventional one, which demonstrates the high efficiency and good performance of the new approach.",2007,0, 6720,A Probabilistic Method for Aligning and Merging Range Images with Anisotropic Error Distribution,"This paper describes a probabilistic method of aligning and merging range images. We formulate these issues as maximum likelihood estimation problems. By examining the error distribution of a range finder, we model it as a normal distribution along the line of sight. To align range images, our method estimates the parameters based on the expectation maximization (EM) approach. By assuming the error model, the algorithm is implemented as an extension of the iterative closest point (ICP) method. For merging range images, our method computes the signed distances by finding the distances of maximum likelihood.
Since our proposed method uses multiple correspondences for each vertex of the range images, errors after aligning and merging range images are smaller than those of earlier methods that use one-to-one correspondences. Finally, we tested and validated the efficiency of our method by simulation and on real range images.",2006,0, 6721,On-Line Process Monitoring and Fault Isolation Using PCA,"This paper describes a real-time on-line process monitoring and fault isolation approach using PCA (principal component analysis). It also presents the software implementation architecture using an OPC (OLE for process control) compliant framework, which enables seamless integration with the real plant and DCS. The proposed approach and architecture are implemented to monitor a refinery process simulation that produces cyclohexane using benzene and hydrogen. The result shows that both sensor faults and process faults can be detected on-line and the dominating process variables may be isolated",2005,0, 6722,Enhanced spatial error concealment with directional entropy based interpolation switching,"This paper describes a spatial error concealment method that uses edge-related information for concealing missing macroblocks in a way that not only preserves existing edges but also avoids introducing new strong ones. The method relies on a novel switching algorithm which uses the directional entropy of neighboring edges to choose between two interpolation methods: a directional interpolation along detected edges or a bilinear interpolation using the nearest neighboring pixels. Results show that the performance of the proposed method is subjectively and objectively (PSNR-wise) better compared to both 'single interpolation' and edge-strength-based switching methods",2006,0, 6723,Analysis and correction of the influence of probe distortion in near field-far field transformations,"This paper describes a technique for the determination and correction of probe antenna distortion from near field (NF) measurements. The purpose is to correct the near field distortion using a directive probe when measuring antennas at near field ranges, and to correct the calculated far field (FF) pattern when performing NF-FF transformations using inverse integral equation techniques (source reconstruction techniques)",2006,0, 6724,Defect Prevention and Detection in Software for Automated Test Equipment,"This paper describes a test application development tool designed with a high degree of defect prevention and detection built in. While this tool is specific to a particular tester, the PT3800, the approach that it uses may be employed for other ATE. The PT3800 tester is the successor of a more than 20 year-old tester, the PT3300. The development of the PT3800 provided an opportunity to improve the test application development experience. The result was the creation of a test application development tool known as the PT3800 AM creation, revision and archiving tool, or PACRAT (AM refers to automated media, specifically test application source code). This paper details the built-in defect prevention and detection techniques employed by PACRAT.",2008,0, 6725,Advanced Fault-Tolerant Control of Induction-Motor Drives for EV/HEV Traction Applications: From Conventional to Modern and Intelligent Control Techniques,"This paper describes active fault-tolerant control systems for a high-performance induction-motor drive that propels an electric vehicle (EV) or a hybrid one (HEV).
The proposed systems adaptively reorganize themselves in the event of sensor loss or sensor recovery to sustain the best control performance, given the complement of remaining sensors. Moreover, the developed systems take into account the controller-transition smoothness in terms of speed and torque transients. The two proposed fault-tolerant control strategies have been simulated on a 4-kW induction-motor drive, and speed and torque responses have been carried out to evaluate the consistency and the performance of the proposed approaches. Simulation results, in terms of speed and torque responses, show the global effectiveness of the proposed approaches, particularly the one based on modern and intelligent control techniques in terms of speed and torque smoothness",2007,0, 6726,Atmospheric Correction at AERONET Locations: A New Science and Validation Data Set,"This paper describes an Aerosol Robotic Network (AERONET)-based Surface Reflectance Validation Network (ASRVN) and its data set of spectral surface bidirectional reflectance and albedo based on Moderate Resolution Imaging Spectroradiometer (MODIS) TERRA and AQUA data. The ASRVN is an operational data collection and processing system. It receives 50 × 50 km² subsets of MODIS level 1B (L1B) data from the MODIS adaptive processing system and AERONET aerosol and water-vapor information. Then, it performs an atmospheric correction (AC) for about 100 AERONET sites based on accurate radiative-transfer theory with complex quality control of the input data. The ASRVN processing software consists of an L1B data gridding algorithm, a new cloud-mask (CM) algorithm based on a time-series analysis, and an AC algorithm using ancillary AERONET aerosol and water-vapor data. The AC is achieved by fitting the MODIS top-of-atmosphere measurements, accumulated for a 16-day interval, with theoretical reflectance parameterized in terms of the coefficients of the Li Sparse-Ross Thick (LSRT) model of the bidirectional reflectance factor (BRF). The ASRVN takes several steps to ensure high quality of results: 1) the filtering of opaque clouds by a CM algorithm; 2) the development of an aerosol filter to filter residual semitransparent and subpixel clouds, as well as cases with high inhomogeneity of aerosols in the processing area; 3) imposing the requirement of consistency of the new solution with previously retrieved BRF and albedo; 4) rapid adjustment of the 16-day retrieval to surface changes using the last day of measurements; and 5) development of a seasonal backup spectral BRF database to increase data coverage. The ASRVN provides gapless or near-gapless coverage for the processing area. The gaps, caused by clouds, are filled most naturally with the latest solution for a given pixel. The ASRVN products include three parameters of the LSRT model (kL, kG, and kV), surface albedo, normalized BRF (computed for a standard viewing geometry, VZA = 0°, SZA = 45°), and instantaneous BRF (or one-angle BRF value derived from the last day of MODIS measurement for a specific viewing geometry) for the MODIS 500-m bands 1-7. The results are produced daily at a resolution of 1 km in gridded format. We also provide a cloud mask, a quality flag, and a browse bitmap image. The ASRVN data set, including 6 years of MODIS TERRA and 1.5 years of MODIS AQUA data, is available now as a standard MODIS product (MODASRVN) which can be accessed through the Level 1 and Atmosphere Archive and Distribution System website (http://ladsweb.nascom.nasa.gov/data/search.html).
It can be used for a wide range of applications including validation analysis and science research.",2009,0, 6727,A power converter for fault tolerant machine development in aerospace applications,"This paper describes an experimental tool to evaluate and support the development of fault tolerant machines designed for aerospace motor drives. Aerospace applications involve essentially safety-critical systems which should be able to overcome hardware or software faults and therefore need to be fault tolerant. A way of achieving this is to introduce variable degrees of redundancy into the system by duplicating one or all of the operations within the system itself. Looking at motor drives, multiphase machines such as multiphase brushless DC machines are considered to be good candidates in the design of fault tolerant aerospace motor drives. The paper introduces a multi-phase two-level inverter using a flexible and reliable FPGA/DSP controller for data acquisition, motor control and fault monitoring to study the fault tolerance of such systems.",2008,0, 6728,Improvement of frequency resolution for three-phase induction machine fault diagnosis,"This paper deals with the use of the zoom FFT algorithm (ZFFTA) for the electrical fault diagnosis of squirrel-cage three-phase induction machines, with a special interest in the broken rotor bar situation. The machine stator current can be analysed to observe the side-band harmonics around the fundamental frequency. In this case, it is necessary to take a very long data sequence to get high frequency resolution. This is not always possible due to hardware and software limitations. The proposed algorithm can be considered for solving the high frequency resolution problem without increasing the initial data acquisition size. The ZFFTA is applied to detect incipient rotor faults in a three-phase squirrel-cage induction machine by using both stator current and stray flux sensors.",2005,0, 6729,State Estimation and Fault Diagnosis for Nonlinear Analog Circuits,"This paper demonstrates a new approach to fault diagnosis for nonlinear analog circuits, which is based on the computation of the maximum Lyapunov exponent of chaotic time series. We use this method to estimate the operating states of strongly nonlinear analog circuits, and discuss how to select a proper pair of embedding dimension and time delay for phase space reconstruction. This new fault diagnosis method is validated by an example: the output signal of a harmonic oscillator with variable periods in radar is taken as the testing object, the surrogate method is used to generate the fault data, and the maximum Lyapunov exponent is calculated by means of the small data-set approach. The simulation testing results indicate that this method can efficiently detect abnormal changes in nonlinear analog circuits",2006,0, 6730,Estimation of faults in DC electrical power system,"This paper demonstrates a novel optimization-based approach to estimating fault states in a DC power system. The model includes faults changing the circuit topology along with sensor faults. Our approach can be considered as a relaxation of the mixed estimation problem. We develop a linear model of the circuit and pose a convex problem for estimating the faults and other hidden states. A sparse fault vector solution is computed by using l1 regularization. The solution is computed reliably and efficiently, and gives accurate diagnostics on the faults.
We demonstrate a real-time implementation of the approach for an instrumented electrical power system testbed at NASA. Accurate estimates of multiple faults are computed in milliseconds on a PC. The approach performs well despite unmodeled transients and other modeling uncertainties present in the system.",2009,0, 6731,Coordinating fuzzy ART neural networks to improve transmission line fault detection and classification,"This paper demonstrates several uses of an adaptive resonance theory (ART) based neural network (NN) algorithm combined with a fuzzy K-NN decision rule for fault detection and classification on transmission lines. To deal with the large input data set covering system-wide fault scenarios and to improve the overall accuracy, three fuzzy ART neural networks are proposed and coordinated for different tasks. The performance of the improved scheme is compared with the previous development based on simulation using a typical power system model. The speed and accuracy of detecting continuous signals during the fault are also evaluated. Simulation results confirm the improvement benefits when compared with the previous implementation.",2005,0, 6732,An ACS robotic control algorithm with fault tolerant capabilities,"This paper demonstrates that an adaptive computing system (ACS) is a good platform for implementing robotic control algorithms. We show that an ACS can be used to provide both good performance and high dependability. An example of an FPGA-implemented dependable control algorithm is presented. The flexibility of the ACS is exploited by choosing the best precision for our application. This reduces the amount of required hardware and improves performance. Results obtained from a WILDFORCE emulation platform showed that even using 0.35 μm technology, an FPGA-implemented control algorithm has comparable performance with the software-implemented control algorithm in a 0.25 μm microprocessor. Different voting schemes are used in conjunction with multi-threading and combinational redundancy to add fault tolerance to the robotic controller. Error-injection experiments demonstrate that robotic control algorithms with fault tolerance techniques are orders of magnitude less vulnerable to faults compared to algorithms without any fault tolerant features",2000,0, 6733,A two stage defect recognition method for parquet slab grading,This paper demonstrates the use of a simple but effective color-based inspection method in parquet slab grading. The approach is to divide the image into small rectangular regions and calculate color percentile features from these areas. Classification is performed in two stages: defect detection and recognition. The recognition results are further used in determining the final grade for the parquet slabs. Comparative results are also presented,2000,0, 6734,Development of a tool to detect faults in induction motors via current signature analysis,"This paper demonstrates, through industrial case histories, how current signature analysis can reliably diagnose rotor cage problems in induction motor drives. Traditional CSA measurements can result in false alarms and/or misdiagnosis of healthy machines due to the presence of current frequency components in the stator current resulting from non-rotor-related conditions such as mechanical load fluctuations, gearboxes, etc. Theoretical advancements have now made it possible to predict many of these components, thus making CSA testing a much more robust and less error-prone technology.
Based on these theoretical developments, case histories are presented which demonstrate the ability to separate current components resulting from mechanical gearboxes from those resulting from broken rotor bars. From this data, a new handheld instrument for reliable detection of broken rotor bars, air gap eccentricity, shorted turns in LV stator windings and mechanical phenomena/problems in induction motor drives is being developed and is described. Detection of the inception of these problems prior to failure facilitates remedial action to be carried out, thus avoiding the significant costs associated with unexpected down time due to unexpected failures.",2003,0, 6735,Theoretical derivation of bit error rate in MB-OFDM UWB system,"This paper derives the theoretical bit error rate (BER) of the multiband orthogonal frequency division multiplexing (MB-OFDM) system for ultra wideband (UWB) communication. The proposed derivation takes into account the fading correlation caused by time and frequency spreading, which is one of the most significant factors in the MB-OFDM UWB system. Moreover, the proposed formula gives consideration to the frequency characteristics of propagation loss that are not negligible in UWB systems. The theoretical results by the proposed approach are compared with the computer simulation results in order to confirm the validity of the proposed derivation.",2009,0, 6736,Prototype of fault adaptive embedded software for large-scale real-time systems,"This paper describes a comprehensive prototype of large-scale fault adaptive embedded software developed for the proposed Fermilab BTeV high energy physics experiment. Lightweight self-optimizing agents embedded within Level 1 of the prototype are responsible for proactive and reactive monitoring and mitigation based on specified layers of competence. The agents are self-protecting, detecting cascading failures using a distributed approach. Adaptive, reconfigurable, and mobile objects for reliability are designed to be self-configuring to adapt automatically to dynamically changing environments. These objects provide a self-healing layer with the ability to discover, diagnose, and react to discontinuities in real-time processing. A generic modeling environment was developed to facilitate design and implementation of hardware resource specifications, application data flow, and failure mitigation strategies. Level 1 of the planned BTeV trigger system alone will consist of 2500 DSPs, so the number of components and intractable fault scenarios involved make it impossible to design an 'expert system' that applies traditional centralized mitigative strategies based on rules capturing every possible system state. Instead, a distributed reactive approach is implemented using the tools and methodologies developed by the Real-Time Embedded Systems group.",2005,0, 6737,Vision-based end-effector position error compensation,This paper describes a computationally efficient algorithm that provides the ability to accurately place an arm end-effector on a target designated in an image using low speed feedback from a fixed stereo camera. The algorithm is robust to visual occlusion of the end-effector and does not require high fidelity calibration of either the arm or stereo camera. The algorithm works by maintaining an error vector between the locations of a fiducial on the arm's end-effector as predicted by a kinematic model of the arm and detected and triangulated by a stereo camera pair.
It then uses this error vector to compensate for errors in the kinematic model and servo to the target designated in the stereo camera pair,2006,0, 6738,Development of a dynamic routing system for a fault tolerant solid state mass memory,This paper describes a fault tolerant Solid State Mass Memory (SSMM) for satellite applications. Definition of requirements plays an important role in the architectural solutions selected for the data storing system. The interconnection system proposed is based on a crossbar switch with half duplex links. All the connections have a complete flow control and the path of switched packets is dynamically reconfigurable. A controller tests the functionality of each component and handles dynamic data allocation in the memory modules. Memory modules grant data integrity: an original tool developed by the authors derives data coding parameters,2001,0, 6739,FPGA based fault emulation of synchronous sequential circuits,"This paper describes a feasibility study of accelerating fault simulation by emulation on FPGA. Fault simulation is an important subtask in test pattern generation and it is frequently used throughout the test generation process. In order to further speed up simulation, we propose to make use of reconfigurable hardware by emulating the circuit together with fault insertion structures on an FPGA. Experiments showed that it is beneficial to use emulation for circuits/methods that require large numbers of test vectors, e.g., sequential circuits and/or genetic algorithms.",2004,0,7229 6740,A fuzzy error correction control system,"This paper describes a fuzzy error correction control system used to navigate a robot along an easily modifiable path in a well-structured environment. An array of Hall sensors mounted on the bottom of a robot gathers sensory information from a path of ferromagnetic disks placed on the ground. This sensory input is processed by an analog-to-digital converter and the output signals are then input into a fuzzy logic engine. The fuzzy engine outputs commands for the robot wheels. These commands determine the necessary angle of rotation to correct the direction of travel in order for the robot to remain on the path. The fuzzy logic controller stores prior disk information to predict a path trajectory when no path is detected. If the controller then senses a path, it anchors on it and starts following it",2001,0, 6741,Fault Diagnosis on Board for Analog to Digital Converters,"This paper describes a general-purpose, highly reliable data acquisition system which allows A/D converter testing by histogram and two tone tests for fault diagnosis on the same board. A reliability analysis has been carried out in order to optimize the project, the choice of components and the redundancy configuration. The software has been written in Matlab and LabVIEW, with an easy graphical user interface.",2007,0, 6742,Outcomes of overvoltage monitoring and fault location in underground distribution networks,"This paper describes an information-measuring system for surge voltage monitoring and the outcomes of multi-year monitoring in 10 kV underground distribution networks. The statistical analysis of monitoring outcomes has shown a low level of the most frequent overvoltages in networks originating during restriking earth faults (REFs).
For automatic location of REFs, a parametric method based on the frequency properties of the network is offered.",2002,0, 6743,Application of Fault Tolerant Control Using Sliding Modes With On-line Control Allocation on a Large Civil Aircraft,This paper describes an on-line sliding mode control allocation scheme for fault tolerant control of the lateral and longitudinal axes of the non-linear B747 aircraft. The effectiveness level of the actuators is used by the control allocation scheme to redistribute the control signals to the functioning actuators when a fault or failure occurs. The simulation results on the non-linear B747 model show good performance when tested on different fault and even certain total actuator failure scenarios without reconfiguring the controller.,2007,0, 6744,Error Vector Magnitude Measurement On Cascaded Butler Matrices System,"This paper describes error vector magnitude (EVM) measurement on a vehicle communication system that employs low noise amplifiers (LNAs) and cascading Butler Matrices in producing a broad beam high linearity and high gain narrow beam system. The output signals from the first Butler Matrix that have high gain and narrow beam width can be used for long distance communication while the outputs from the second Butler Matrix, which have high linearity and broad beam width, can be used for short range communications.",2007,0, 6745,Emulation of faults and remedial control strategies in a multiphase power converter drive used to analyse fault tolerant drive systems for Aerospace applications,"This paper describes how an experimental test rig, a multiphase power converter drive and its control have been used to emulate failures (of the converter and machine) and control strategies to study a way of achieving fault tolerant drive systems employed especially in Aerospace applications. Experimental results which validate simulation of the emulated faults are presented.",2009,0, 6746,Error analysis in indoors localization using ZigBee wireless networks,"This paper describes indoor radio frequency (RF) localization using radio signal strength indication (RSSI) which is available in wireless communications networks. The main advantage of the described methodology is taking the communications sub-system as localization hardware and developing dedicated software in order to obtain a location of mobile network nodes using a trilateration algorithm. Accuracy is strongly dependent on the quality of the measured RSSI. The need for post-processing of raw RSSI values in order to obtain good results in terms of mobile node location estimation is shown. In order to apply RSSI values from wireless communications hardware to the localization sub-system, filtering is thus an approach to overcome limitations due to RSSI value fluctuations.",2010,0, 6747,Research on Digital Circuit Fault Location Procedure Based on LASAR,"This paper describes LASAR V6 (logic automated stimulus response), software for digital test programs that generates a fault dictionary used as a fault-isolation diagnosis database for a tested circuit based on fault simulation; presents the three core files relating to circuit fault diagnosis which are generated by LASAR, i.e. fault dictionary, node truth table and pin connection table; analyses the content of the fault dictionary, pin connection table and node truth table; finds the necessary information for fault location; and summarizes the procedure of circuit test and fault location.
Finally, the digital circuit diagnosis system, which can locate the fault on the pins of components, is designed. With the help of a probe, the fault location on component pins can be accurately pinpointed.",2008,0, 6748,Efficient Test Pattern Compression Method Using Hard Fault Preferring,"This paper describes a new compression method that is used for test pattern compaction and compression in an algorithm called COMPAS, which utilizes a test data compression method based on pattern overlapping. This algorithm reorders and compresses deterministic test patterns previously generated in an ATPG by overlapping them. The independence of COMPAS from the ATPG used is discussed and verified. The new method improves the compression ratio by preprocessing input data to determine the degree of random test resistance for each fault. This information allows the algorithm to reorder test patterns more efficiently and results in a 10% compression ratio improvement on average. The compressed data sequence is well suited for decompression by the scan chains in the embedded tester cores.",2008,0, 6749,Maintenance data mining and visualization for fault trend analysis,"This paper describes research efforts currently underway to acquire and analyze test data to determine whether trends and other tendencies may exist that may be indicative of future circuit board failures and potential reduced weapon system readiness. We begin by citing that in today's test environment using test program sets (TPSs) hosted on automatic test equipment (ATE), no provisions are made for capturing or analyzing Unit Under Test (UUT) data, on a large scale. The distributed resources used to perform UUT testing further complicate the situation, since no methodology currently exists that can demonstrate whether trends or events exist in the data that may be indicative of supportability, maintainability, or readiness problems. Our approach is based upon fulfilling the need to recognize changes in the tolerance of equipment performance. This can be accomplished through the large-scale recording and analysis of test data that can aid in the performance of remote testing and recognition of tolerance changes and other issues that affect diagnostic ability. This would also facilitate taking appropriate corrective action to predict and/or compensate for such behavior before significant mission impact or failure occurs",2001,0, 6750,Transient stability assessment using artificial neural network considering fault location,"This paper describes the capability of artificial neural networks for predicting the critical clearing time of a power system. It combines the advantages of time domain integration schemes with artificial neural networks for real time transient stability assessment. The training of the ANN is done using selected features as input and critical fault clearing time (CCT) as the desired target. A single contingency was applied and the target CCT was found using time domain simulation. A multi layer feed forward neural network trained with the Levenberg Marquardt (LM) back propagation algorithm is used to provide the estimated CCT. The effectiveness of the ANN method is demonstrated on a single machine infinite bus system (SMIB).
The simulation shows that the ANN can provide fast and accurate mapping which makes it applicable to real-time scenarios.",2010,0, 6751,Multisensor secondary device for detection of low-level arcing faults in metal-clad MCC switchgear panel,"This paper describes the development of a multisensor device, based on four different physical phenomena, for reliable detection of low-level arcing faults in metal-clad switchgear. The proposed device was tested for actual arcing, generated on a low-voltage motor control center panel, feeding a dry type 15 kVA, 230/115 V Y/Δ transformer. Some results showing the performance of the developed device are presented in the paper. The device can also be applied for detection of arcing in power electronic drives, dry type transformers, gas insulated switchgear, generator bus-ducts, and other metal-clad electrical apparatus",2002,0, 6752,The Digital Circuit Fault Diagnosis Interface Design and Realization Based on VXI,"This paper discusses in detail the development process of a general interface adapter in the digital circuit fault diagnosis system based on VXI. After introducing the VXI bus, the paper gives an overall description of the fault diagnosis system, presents a method of solving the problem of load matching and interface matching, and realizes the identification function to read and write the memory of the interface circuit and to control chip selection. The method of identity installation in the interface circuit to be selected is to give an ID number and add a memory to the interface circuit to ensure accuracy and effectiveness. The paper also describes the method of self diagnosis in the interface circuit that is the key to the whole fault diagnosis system.",2008,0, 6753,Gradient Non-Linearity Correction of MR Images for Functional Radiosurgery,"This paper discusses the correction of MR images to the submillimeter accuracy needed for functional radiosurgery. MR images experience non-linear distortion due to the magnetic field, which becomes more of a problem for newer machines with larger bores and stronger magnetic fields. This paper models the distortion correction parameters using a spherical harmonics basis, which avoids the need to invert the function to correct the image. The coefficients appear linearly for the spherical harmonics so they are solved for in each dimension by least squares techniques for an MR image of a phantom and the measurements of the phantom. Practical considerations in the design are also covered",2006,0, 6754,The design of fault tolerant machines for aerospace applications,"This paper discusses the design of a fault tolerant electric motor for an aircraft main engine fuel pump. The motor in question is a four phase fault tolerant motor with separated windings and a six pole permanent magnet rotor. Methods of reducing machine losses in both the rotor and stator are introduced and discussed. The methods used to calculate rotor eddy current losses are examined. 3D finite element, 2D finite element time-stepping and 2D finite element harmonic methods are discussed and the differences between them and the results they produce investigated. Conclusions are drawn about the accuracy of the results produced and how the methods in question help the machine designer",2005,0, 6755,Hardware Building Blocks for High Data-Rate Fault-Tolerant In-vehicle Networking,"This paper discusses the hardware implementation of high speed and fault-tolerant communication systems for in-vehicle networking.
Emerging safety-critical automotive control systems, such as X-by-wire and active safety, need complex distributed algorithms. Large amounts of data have to be exchanged in real-time and with high dependability between electronic control units, sensors and actuators. According to this perspective, the FlexRay protocol, which features data-rates up to 10 Mb/s, time and event triggered transmissions, as well as scalable fault-tolerance support, was developed and it is now expected to become the future standard for in-vehicle communication. However, collision avoidance and driver assistance applications based on vision/radar systems pose requirements on the communication systems that can hardly be covered by current and expected automotive standards. A candidate that will play a significant role in the development of safety systems which need data-rates up to hundreds of Mb/s as well as fault-tolerance seems to be the new SpaceWire protocol, whose effectiveness has already been proved in avionics and aerospace. This paper presents the design of the major hardware building blocks of the FlexRay and SpaceWire protocols.",2007,0, 6756,Using Data Confluences in a Distributed Network with Social Monitoring to Identify Fault Conditions,This paper discusses the potential benefits of socially attentive monitoring in multi-agent systems. A multi-agent system with this feature is shown to detect and identify when an individual within the network fails to operate correctly. The system that has been developed is capable of detecting a range of common faults such as stuck at zero by allowing communication between peers within a software agent network. Further adaptation to the model allows an improvement in system response without the introduction of specific control design algorithms,2006,0, 6757,Design Structural Stability Metrics and Post-Release Defect Density: An Empirical Study,"This paper empirically explores the correlations between a suite of structural stability metrics for object-oriented designs and post-release defect density. The investigated stability metrics measure the extent to which the structure of a design is preserved throughout the evolution of the software from one release to the next. As a case study, thirteen successive releases of Apache Ant were analyzed. The results indicate that some of the stability metrics are significantly correlated with post-release defect density. It was possible to construct statistically significant regression models to estimate post-release defect density from subsets of these metrics. The results reveal the practical significance and usefulness of some of the investigated stability metrics as early indicators of one of the important software quality outcomes, which is post-release defect density.",2006,0, 6758,Analysis of measurement delay errors in an Ethernet based communication infrastructure for power systems,"This paper examines how communication delays in delivering power system measurements across a computer control network (IEEE 802.3-Ethernet) can affect the accuracy of these measurements as viewed by remote hosts on the network. A stochastic system model is developed, which is composed of both the physical infrastructure of the power system as well as the embedded computer network communication infrastructure. An experimental platform has been developed to determine the parameters of the developed model.
This model is the first step in examining how delays in delivering power system measurements, as well as power system dynamics, can impact ""real-time observability"" of the power system.",2002,0, 6759,A multivariate statistical analysis technique for on-line fault prediction,"This paper describes a generalized multivariate statistical analysis technique for prediction of impending failures in electronic and electromechanical equipment. This data-driven prognostic technique is useful in health-monitoring situations where equipment physical models are unavailable or of limited fidelity. Statistical analysis algorithms, integrated into a predictive fault detection (PFD) statistical analysis engine, operate on heterogeneous streams of data from sensors that monitor selected equipment structural and functional parameters. The statistical analysis engine processes input data in two stages - similarity testing followed by trend determination. The input stage algorithm extracts multidimensional feature data samples from the arriving sensor data streams. It then performs statistical comparisons of the feature data samples to corresponding feature patterns from equipment considered to be in nominal operating condition. The nominal-condition data may be obtained from equipment specifications, or it may be derived in situ from the sensor data streams. The algorithm maintains a set of scalar similarity metrics for the equipment being monitored, which it periodically compares with pre-computed thresholds. The thresholds may be adaptive, with values typically functions of the amount and variability of the feature data. The trend determination stage is triggered when a threshold value is exceeded. The trend determination algorithm projects trends of feature data from the similarity analysis onto the future. The automatic data trending computation for a given feature is performed by statistically analyzing a window of the feature data. This analysis addresses a collection of different characteristics of a well-fitting trend, which are then fused into a single trend result per data window. The statistical analysis engine applies the trending results to determine the most probable trend, which in the PFD context is related to the requirements for scheduling of equipment maintenance actions.
Consequently the effectiveness of the proposal was successfully confirmed",2006,0, 6761,Faraday rotation correction in the polarimetric mode of MIRAS,This paper describes a new method to compensate the Faraday rotation in polarimetric radiometric measurements under the assumption that the polarimetric brightness temperature matrix of the target is diagonal. Simulation results of how this method can be used within the Soil Moisture and Ocean Salinity mission and the Microwave Imaging Radiometer with Aperture Synthesis imaging radiometer are presented.,2004,0, 6762,Performance of a self-commutated BTB system under a single-line-to-ground fault condition,"This paper deals with a self-commutated BTB (back-to-back) system for the purpose of power flow control and/or frequency change in transmission systems. Each BTB unit consists of two sets of 16 three-phase voltage-source converters, and their AC terminals are connected in series with each other via 16 three-phase transformers. Hence, the BTB unit uses a total of 192 switching devices capable of achieving gate commutation. This results in a great reduction of voltage and current harmonics without performing PWM control. Simulation results verify the validity of the proposed system configuration and control scheme not only under a normal operating condition but also under a single-line-to-ground fault condition",2002,0, 6763,Information Packets and MPC Enable Fault-tolerance in Network Control,"This paper deals with fault-tolerant control of a network controlled system (NCS) problem, where the sensors, actuator and controller are inter-connected via a communication network. A procedure is proposed for controlling a system over a network using the concept of an NCS-information-packet which is an augmented vector comprising control moves and fault flags. The size of this packet is used to define a completely fault tolerant NCS. The behavior and control of this scheme is illustrated by way of an example, where the plant is being controlled over a network. Implicit in this paper is that appropriate FDI schemes exist within the set up. The software environment used is MATLAB and LabVIEW. The results illustrate that the scheme is tolerant to faults.",2006,0, 6764,Location of faults in partially parallel transmission networks,"This paper deals with the algorithm intended for locating faults in partially parallel transmission networks. The delivered algorithm is categorized as the one-end technique. The algorithm consists of the two subroutines designated for locating faults in the part where the parallel lines are mutually coupled and at the segment of the network, which is not coupled with any other circuit. Detailed derivation of the algorithm and considerations concerning the selection of the valid result are included. The algorithm has been tested with the fault data obtained from versatile ATP-EMTP simulations. Sample examples of fault location are presented and discussed. Results of evaluation of fault location accuracy using ATP-EMTP simulations of faults are reported",2001,0, 6765,Dynamic Error Recovery in the ATLAS TDAQ System,"This paper describes the new dynamic recovery mechanisms in the ATLAS Trigger and Data Acquisition (TDAQ) system. The purpose of the new recovery mechanism is to minimize the impact certain errors and failures have on the system. The new recovery mechanisms are capable of analyzing and recovering from a variety of errors, both software and hardware, without stopping the data-gathering operations.
An expert system is incorporated to perform the analysis of the errors and to decide what measures are needed. Due to the wide array of sub-systems there is also a need to optimize the way similar errors are handled for the different sub-systems. The main focus of the paper is to consider the design and implementation of the new recovery mechanisms and how expert knowledge is gathered from the different sub-systems and implemented in the recovery procedures.",2008,0, 6766,TTIPP3 - A fault-tolerant time-triggered platooning demonstrator,"This paper describes the realization of a model truck which fully autonomously follows another vehicle using stereo vision. In addition to the pair of cameras needed, a set of further sensors for measuring speed, steering angle, and avoiding collisions is used. For communication and task execution/synchronization, a time-triggered architecture is used. All necessary equipment, together with a power supply for thirty minutes of operation, has been installed on a model truck of about 40 cm length.",2008,0, 6767,A Multi-Agent Fault Detection System for Wind Turbine Defect Recognition and Diagnosis,This paper describes the use of a combination of anomaly detection and data-trending techniques encapsulated in a multi-agent framework for the development of a fault detection system for wind turbines. Its purpose is to provide early error or degradation detection and diagnosis for the internal mechanical components of the turbine with the aim of minimising overall maintenance costs for wind farm owners. The software is to be distributed and run partly on an embedded microprocessor mounted physically on the turbine and on a PC offsite. The software will corroborate events detected from the data sources on both platforms and provide information regarding incipient faults to the user through a convenient and easy to use interface.,2007,0, 6768,Diagnostics of bar and end-ring connector breakage faults in polyphase induction motors through a novel dual track of time-series data mining and time-stepping coupled FE-state space modeling,"This paper develops the fundamental foundations of a technique for detection of faults in induction motors that is not based on the traditional Fourier transform frequency domain approach. The technique can extensively and economically characterize and predict faults from the induction machine adjustable speed drive design data. This is done through the development of dual-track proof-of-principle studies of fault simulation and identification. These studies are performed using our proven time stepping coupled finite element-state space method to generate fault case data. Then, the fault cases are classified by their inherent characteristics, so-called signatures or fingerprints. These fault signatures are extracted or mined here from the fault case data using our novel time series data mining technique. The dual-track of generating fault data and mining fault signatures was tested here on 3, 6, and 9 broken bar and broken end ring connectors in a 208-volt, 60-Hz, 4-pole, 1.2-hp, squirrel cage 3-phase induction motor",2001,0,5015 6769,Forecasting error tolerable resource allocation for All-IP networks,"This paper discusses some resource allocation methods that can tolerate forecast errors under the budget-based management infrastructure, BBQ, which is designed to offer end-to-end QoS assurance for All-IP networks.
BBQ takes a pre-planning approach to forecast incoming traffic based on historic statistics and allocates link resources accordingly. Traffic forecasts may not be perfectly accurate due to traffic fluctuations and imperfect forecasting. Forecasting errors may lead to poor resource allocation. We have designed some mechanisms that can compensate for forecasting errors and thus may reduce performance degradation accordingly.",2005,0, 6770,A Novel Approach for Arcing Fault Detection for Medium/Low-Voltage Switchgear,"This paper describes the development of a novel approach for the detection of arcing faults in medium/low-voltage switchgear. The basic concept involves the application of differential protection for the detection of any arcing within the switchgear. The new approach differs from the traditional differential concept in the fact that it employs higher frequency harmonic components of the line current as the input for the differential scheme. Actual arc generating test-benches have been set up in the laboratory to represent both medium and low voltage levels. Hall-effect sensors in conjunction with a data acquisition system are employed to record the line current data before, during and after the arcing phenomenon. The methodology is first put to the test via a simulation approach for medium voltage levels and then corroborated by actual hardware laboratory testing for low voltage levels. The plots provided from the data gathering and simulation process clearly underline the efficiency of this approach to detect switchgear arcing faults. Both magnitude and phase differential concepts seem to provide satisfactory results. Apart from the technical efficiency, the approach is financially feasible considering the fact that differential protection is already being comprehensively employed worldwide",2006,0,4650 6771,A method for computing error vector magnitude in GSM EDGE systems-simulation results,"This paper describes the error vector magnitude (EVM) as it is specified for a GSM EDGE (8-PSK) system and presents a method of its derivation. Simulation results of the algorithm applied to a nonlinear power amplifier are shown, and compared to the data obtained with an alternative commercial implementation. The validity of the current EVM proposal as a system linearity figure of merit is also discussed.",2001,0, 6772,Comparisons of error control techniques for wireless video multicasting,"This paper explores three different methods, employed separately and in combination, to improve the quality of video delivery on wireless local area networks. The approaches are: leader-driven multicast (LDM), monitoring MAC layer unicast (re)transmissions by other receivers; application-level forward error correction (FEC) using block erasure codes; negative feedback from selected receivers in the form of extra parity requests (EPR). The performance of these three methods is evaluated using both experiments on a mobile computing testbed and simulation. The results indicate that, while LDM is helpful in improving the raw packet reception rate, the combination of FEC and EPR is most effective in improving the frame delivery rate",2002,0, 6773,Weighted least-squares design of IIR all-pass filters using a Lyapunov error criterion,"This paper extends a neural network based architecture for the weighted least-squares design of IIR all-pass filters. The error difference between the desired phase response and the phase of the designed all-pass filter is formulated as a Lyapunov error criterion.
The filter coefficients are obtained when the neural network achieves convergence by using the corresponding dynamic function. Furthermore, a weighted updating function is proposed to achieve a good approximation to the minimax solution. Simulation results indicate that the proposed technique is able to achieve good performance in a parallel manner.",2010,0, 6774,CMOS standard cells characterization for defect based testing,"This paper extends the CMOS standard cell characterization methodology for defect based testing. The proposed methodology allows one to find the types of faults which may occur in a real IC, to determine their probabilities, and to find the input test vectors which detect these faults. For shorts at the inputs, two types of cell simulation conditions - ""Wired-AND"" and ""Wired-OR"" - are used. Examples of industrial standard cell characterization indicate that a single logic fault probability table is not sufficient. Separate tables for ""Wired-AND"" and ""Wired-OR"" conditions at the inputs are needed for full characterization and hierarchical test generation",2001,0, 6775,Probabilistic Algebraic Analysis of Fault Trees With Priority Dynamic Gates and Repeated Events,"This paper focuses on a sub-class of Dynamic Fault Trees (DFTs), called Priority Dynamic Fault Trees (PDFTs), containing only static gates, and Priority Dynamic Gates (Priority-AND, and Functional Dependency) for which a priority relation among the input nodes completely determines the output behavior. We define events as temporal variables, and we show that, by adding to the usual Boolean operators new temporal operators denoted BEFORE and SIMULTANEOUS, it is possible to derive the structure function of the Top Event with any cascade of Priority Dynamic Gates, and repetition of basic events. A set of theorems are provided to express the structure function in a sum-of-product canonical form, where each product represents a set of cut sequences for the system. We finally show through some examples that the canonical form can be exploited to determine directly and algebraically the failure probability of the Top Event of the PDFT without resorting to the corresponding Markov model. The advantage of the approach is that it provides a complete qualitative description of the system, and that any failure distribution can be accommodated.",2010,0, 6776,A signature-based approach for diagnosis of dynamic faults in SRAMs,"This paper focuses on diagnosis of dynamic faults in SRAMs. The current techniques for fault diagnosis are mainly based on the signature method. Here, we introduce an extension of the signature scheme by taking into account additional information related to the addressing order during March test execution. A first advantage of the proposed approach is its capability to distinguish between static and dynamic faults. Another main feature is the correct identification of the location of the failure in a given memory component: the core-cell array, write drivers, sense amplifiers, address decoders and pre-charge circuits. Moreover, since this approach does not modify the March test, there is no increase of test complexity, contrary to other existing diagnosis techniques.",2008,0, 6777,A Pattern Recognition System Based on Cluster and Discriminant Analysis for Fault Identification during Production,"This paper focuses on one stage of a research project concerning online surveillance of the knitting process, which intends to detect faults as soon as possible.
The objective of the paper is focused on the pattern recognition stage, i.e., distinguishing faults. For that purpose, discriminant analysis is proposed as the approach to be explored. The general problem is discussed, followed by the prototype developed up to this stage. The techniques used for detecting faults are also briefly presented before moving immediately into the main issue of the paper: pattern recognition using discriminant analysis. Results obtained from experiments on industrial weft knitting machines are presented and discussed and future improvements and approaches are also presented.",2007,0, 6778,Reconfiguration of Carrier-Based Modulation Strategy for Fault Tolerant Multilevel Inverters,"This paper focuses on the fault-tolerance potential of multilevel inverters with redundant switching states such as the cascaded multilevel inverters and capacitor self-voltage balancing inverters. The failure situations of the multilevel inverters are classified into two types according to the relationship between output voltage levels and switching states. The gate signals can be reconfigured according to the failure modes when some of the power devices fail. The reconfiguration method is discussed for phase disposition PWM strategy (PDPWM) and phase shifted PWM strategy (PSPWM) in the paper and it can be extended for other carrier-based PWM strategies easily. Balanced line-to-line voltage will be achieved with the proposed method when device failure occurs. Furthermore, the circuit structures can be the same as the general ones and the voltage stress of the devices does not increase. Simulation and experimental results are included in the paper to verify the proposed method.",2007,0, 6779,A Fault-Tolerant Target Location Detection Algorithm in Sensor Networks,"This paper focuses on fault-tolerant target detection and localization in sensor networks. Typical applications include habitat monitoring and roadway safety warning. We propose an algorithm for target detection and localization to improve the target position accuracy. Our algorithm is purely localized and thus is suitable for large-scale sensor networks. The computational overhead is low since the detection algorithm is based on a simple clustering technique in which only simple numerical operations are involved. Simulation results show that our algorithm can decrease the false alarm rate and improve the target localization accuracy when as many as 30% of the sensors become faulty. Therefore, our algorithm achieves a great improvement over the previous algorithms.",2009,0, 6780,Study on a Closed-Loop Fire Correction Algorithm in Vehicle Close-In Weapon System,"This paper forms the integrated structure for the vehicle close-in weapon system and studies a closed-loop fire correction algorithm. Through the analysis of different kinds of error factors which lead to miss distance, a mathematical model of miss distance is established to solve the problem of miss distance measuring by using the equivalent principle. A feasible closed-loop fire correction model is built and a simulation is made by using the vehicle close-in weapon system.
Both the simulation and experiment prove that this algorithm could improve the firing accuracy of the vehicle close-in weapon system effectively.",2010,0, 6781,Error calculation techniques and their application to the Antenna Measurement Facility Comparison within the European Antenna Centre of Excellence,"This paper gives an overview of the ongoing activities under the Antenna Measurement activity of the Antenna Centre of Excellence (ACE) network within the EU 6th framework research program. In particular, in this work an attempt is made to establish common uncertainty estimation criteria in spherical near field and far field antenna measurement systems. The results from this activity are important instruments to verify the measurement accuracies for antenna measurement ranges as well as to investigate and evaluate possible improvements in measurement set-ups and procedures. These results will be used in the facility comparison campaigns in order to calculate a reference pattern for each of the high accuracy reference antennas (VAST 12, SATIMO SH800 and SATIMO SH2000) measured during the last 4 years by different institutions in Europe and the US.",2007,0, 6782,MATLAB Based Fault Analysis Toolbox for Electrical Power System,"This paper has developed a Matlab based GUI tool for fault analysis for power systems students at the undergraduate level. This is not a technically rich paper; however, this type of toolbox helps students get a better idea while studying the theory. The notations used in the program are mostly compatible with the formats used in electrical power system textbooks. The program is developed under MATLAB 6.5 and is tested to work perfectly in the recent release of MATLAB 7.0. The toolbox uses a user friendly graphical user interface (GUI)",2006,0, 6783,Mechanical fault diagnosis using wireless sensor networks and a two-stage neural network classifier,"This paper has three contributions. First, we develop a low-cost test-bed for simulating bearing faults in a motor. In Aerospace applications, it is important that motor fault signatures are identified before a failure occurs. It is known that 40% of mechanical failures occur due to bearing faults. Bearing faults can be identified from the motor vibration signatures. Second, we develop a wireless sensor module for collection of vibration data from the test-bed. Wireless sensors have been used because of their advantages over wired sensors in remote sensing. Finally, we use a novel two-stage neural network to classify various bearing faults. The first stage neural network estimates the principal components using the generalized Hebbian algorithm (GHA). Principal component analysis is used to reduce the dimensionality of the data and to extract the fault features. The second stage neural network uses a supervised learning vector quantization network (SLVQ) utilizing a self organizing map approach. This stage is used to classify various fault modes. Neural networks have been used because of their flexibility in terms of online adaptive reformulation. At the end, we discuss the performance of the proposed classification method.",2009,0, 6784,Fault location using electron beam current absorbed in LSI interconnects,"This paper introduced the developed apparatus named electron beam absorbed current (EBAC) to locate fault sites in LSI interconnects.
This technique has the potential to become an effective tool and is driven forward to extended applications.",2004,0, 6785,Fault-Adaptive Control for Robust Performance Management of Computing Systems,"This paper presents a fault-adaptive control approach for the robust and reliable performance management of computing systems. Fault adaptation involves the detection and isolation of faults, and then taking appropriate control actions to mitigate the fault effects and maintain control.",2007,0, 6786,A modular control design method for a flexible manufacturing cell including error handling,"This paper introduces a modular design method for a flexible manufacturing cell. First, we provide a definition for a flexible manufacturing cell, and then we propose a design method for its cell controller. We divide the controller into two parts: the resource allocation control and the operation control. Based on this structure, we develop operation blocks integrated with error handling and recovery, and prove some properties about their behaviors. Finally, we introduce a case study and apply the proposed method to this example.",2005,0, 6787,Implement fault diagnosis high speed reasoning expert system with FPGA,"This paper introduces a new method to design expert system reasoning with FPGA for fault diagnosis. Firstly, the reasoning process of a normal expert system is analyzed in detail; Secondly, the experiences and knowledge of experts are transformed to multi-valued or binary reasoning with a fault tree, and the processes are realized with simple gate circuits; Lastly, the new scheme is used in fault diagnosis for HV circuit breakers instead of primary reasoning with software. The experiment indicates that the reasoning speed using this scheme is faster than traditional reasoning modes, and it is applicable to many expert systems based on a single-chip controller or DSP (Digital Signal Processor).",2009,0, 6788,Current-Based Slippage Detection and Odometry Correction for Mobile Robots and Planetary Rovers,"This paper introduces a novel method for wheel-slippage detection and correction based on motor current measurements. Our proposed method estimates wheel slippage from motor current measurements, and adjusts encoder readings affected by wheel slippage accordingly. The correction of wheel slippage based on motor currents works only in the direction of motion, but not laterally, and it requires some knowledge of the terrain. However, this knowledge does not have to be provided ahead of time by human operators. Rather, we propose three tuning techniques for determining relevant terrain parameters automatically, in real time, and during motion over unknown terrain. Two of the tuning techniques require position ground truth (i.e., GPS) to be available either continuously or sporadically. The third technique does not require any position ground truth, but is less accurate than the two other methods. A comprehensive set of experimental results has been included to validate this approach",2006,0, 6789,"Predicting Future States With n-Dimensional Markov Chains for Fault Diagnosis","This paper introduces a novel method of predicting future concentrations of elements in lubrication oil, for the aim of identifying possible anomalies in continued operation aboard a large marine vessel. The research carried out is supported by a discussion of previous work in the field of fault detection in tribological mechanisms, although with a focus upon two stroke marine diesel engines.
The approach taken implements an n-dimensional Markov chain model with a singular weighted connection between layers. The approach leverages the computational simplicity of the Markov chain and combines this with a weighted decision calculated from the correlational coefficients between variables, with the notable assumption that interconnectivity between elements is not constant. The approach is compared to an established method, which is the Kalman filter, with promising results for future work and extension of the method to include expert knowledge in the decision making process.",2009,0, 6790,Fault Tolerant Actuation for Steer-by-Wire Applications,"This paper introduces an R&D project concerned with the development of a fault-tolerant actuation system for steer-by-wire applications. The essential safety and reliability requirements for automotive vehicles are assessed. General redundancy schemes and current practices are examined. The paper then focuses on the use of actuators based on permanent magnet brushless dc motors, and analyses internal fault-tolerant potentials of the actuator technology with possible control schemes evaluated. Finally key innovations that may provide practical and affordable solutions are discussed.",2007,0, 6791,Integrating system health management into the early design of aerospace systems using Functional Fault Analysis,"This paper introduces a systematic design methodology, namely the functional fault analysis (FFA), developed with the goal of integrating SHM into early design of aerospace systems. The basis for the FFA methodology is a high-level, functional model of a system that captures the physical architecture, including the physical connectivity of energy, material, and data flows within the system. The model also contains all sensory information, failure modes associated with each component of the system, the propagation of the effects of these failure modes, and the characteristic timing by which fault effects propagate along the modeled physical paths. Using this integrated model, the designers and system analysts can assess the sensor suite's diagnostic functionality and analyze the ""race"" between the propagation of fault effects and the fault detection isolation and response (FDIR) mechanisms designed to compensate and respond to them. The Ares I Crew Launch Vehicle has been introduced as a case example to illustrate the use of the Functional Fault Analysis (FFA) methodology during system design.",2008,0, 6792,Iterative learning control based on a hybrid tracking and contour error algorithm,This paper presents a learning algorithm which is based on the iterative learning control (ILC) and tracking/contour error formulation. The contour errors of the free form curve are computed and fed back as the correction signal to generate a new command. It is shown that learning using the tracking error can result in different performance for the free form curve. A modified ILC method is proposed to overcome the problem. It is shown that the convergence rate and error performance can be improved by choosing the appropriate weighting during the iteration. Simulations are conducted to validate the proposed ILC scheme. The results show that the modified ILC can perform better than the traditional ILC using the tracking error alone.,2009,0, 6793,A Machine Learning Approach to Fault Diagnosis of Rolling Bearings,"This paper presents a method based on classification techniques for automatic fault diagnosis of rolling element bearings.
Experimental results achieved on vibration signals collected by an accelerometer on an experimental test rig show that the method can automatically detect different types of faults. Furthermore, the method is able, once trained on an appropriate representative set of basic faults, to recognize more serious faults, provided they are of the same type. We also analyzed the trend of correct classification of bearing faults with variation of the signal-to-noise ratio, achieving high levels of robustness.",2008,0, 6794,Multilayer resistivity interpretation and error estimation using electrostatic images,"This paper presents a method for generating groups of electrostatic images for the estimation of soil parameters in the case of multilayered horizontal soil. The method can be utilized for the interpretation of resistivity sounding measurements of stratified soil. The maximum errors of the calculated resistivity values can also be estimated and are used to validate the soil model. Errors and variations in apparent resistivity values used in the interpretation process can originate from slow convergence in calculations or variations during field measurements, such as local fluctuations in soil resistivity at points of measurement and instrument precision. An estimation of the effect of these variations on the calculated soil model parameters can be used to provide a confidence level in the results. This paper demonstrates the necessity of evaluating the sensitivity of the soil parameters and proposes methods of estimating a confidence level in the soil model. Confidence levels are also used to delimit boundaries during geophysical inversion with respect to the information available in the field measurements. Simulation results are presented for three-layer soil",2006,0, 6795,Joint optimization of the power-added efficiency and the error-vector measurement of 20-GHz pHEMT amplifier through a new dynamic bias-control method,"This paper presents a method for the optimization of the power-added efficiency (PAE), as well as the error-vector measurement (EVM) of a 20-GHz power amplifier (PA) applied in this case to the M quadrature and amplitude modulations. A first key point lies in that both input and output biasing voltages of the solid-state power amplifiers (SSPAs) are dynamically controlled according to the RF power level associated with the symbol to be transmitted. The leading idea is that the dynamic biasing control is designed and implemented to keep fixed amplitude (AM/AM) and phase (AM/PM) conversion values, while the RF input power level changes. The power gain of the PAs can then be dynamically tuned to a fixed power gain corresponding to the compression gain behavior for which the PAE is optimum at low-, medium-, and high-input RF power levels. As a main consequence, PAE performances can be drastically improved as compared to classical backoff solutions and optimized while keeping a very good EVM. A Ka-band hybrid amplifier has been realized using an 875 μm power pseudomorphic high electron-mobility transistor. The proposed linearization technique is validated by comparisons between measured PAE and EVM on the SSPA when a fixed and controlled bias are used.
For this category of circuits we develop a fault model and show that a standard test pattern generation algorithm can be used after adopting a 9-valued logic set. Also, we demonstrate that as soon as a delay fault occurs at any stage of the pipeline, the fault is eventually manifested at the output of the circuit as if a stuck-at fault existed in the circuit for that wave.",2007,0, 6797,Specification and Verification of Soft Error Performance in Reliable Internet Core Routers,This paper presents a methodology for developing a specification for soft error performance of an integrated hardware/software system that must achieve highly reliable operation. The methodology enables tradeoffs between reliability and cost to be made during the early silicon design and SW architecture phase. An accelerated measurement technique using neutron beam irradiation is also described that ties the final system performance to the reliability model and specification. The methodology is illustrated for the design of a line card for an internet core router.,2008,0, 6798,Motion vector recovery for error concealment based on angular similarity,"This paper presents a motion vector (MV) recovery method using the similarity of the angles of the corresponding surrounding MVs. The approach is based on the assumption that a group of macroblocks (MBs) which belongs to the same object and resides in the same region is likely to move in the same direction. Hence, corresponding MVs that move in the same direction are likely to have similar angles. As a result, a lost motion vector can be estimated using a set of candidate MVs selected from the neighbouring MVs on the left, top-left, top, top-right, right, bottom-right, bottom and bottom-left. The experimental results for several test video sequences are compared with conventional error concealment methods and higher performance is achieved in both objective peak signal-to-noise ratio (PSNR) measurements and subjective visual quality.",2009,0, 6799,Network Fault Model for Dependability Assessment of Networked Embedded Systems,"This paper presents a network-based fault model for dependability assessment of distributed applications built over networked embedded systems. This fault model represents global failures in terms of wrong behavior of packet-based asynchronous data transmissions. Packets are subject to different faults, i.e., drop, cut, bit errors, and duplication; these events can model either HW/SW failures of the networked embedded systems or problems in the channel among them. The paper describes 1) the proposed fault model in relation to existing ones, 2) its possible application scenarios, and 3) a SystemC tool for the simulation of both fault-free and faulty wireless sensor networks. Experimental results show the validity of the approach in the verification of communication protocols and its support to determine the optimal number of nodes in a wireless sensor network based on the IEEE 802.15.4 standard. Part of the software is available at http://sourceforge.net/projects/scnsl/.",2008,0, 6800,Thermal analysis of a winding turn-to-turn fault in PM synchronous machine,"This paper presents a detailed lumped parameter (LP) thermal model of an armature slot in a permanent magnet synchronous machine for traction applications. The model is used to investigate the temperature distribution in the slot after a turn-to-turn failure occurs. Steady-state analyses are conducted and a good agreement is found with FEM thermal simulations.
The LP model is modified into a transient model and transient thermal analyses are conducted to predict the processing damage in the slot, which eventually might lead to a turn-to-tooth (ground) failure.",2010,0, 6801,Hierarchical fault detection and diagnosis for unmanned ground vehicles,"This paper presents a fault detection and diagnosis (FDD) method for unmanned ground vehicles (UGVs) operating in multi-agent systems. A hierarchical FDD method consisting of three layered software agents is proposed: decentralized FDD (DFDD), centralized FDD (CFDD), and supervisory FDD (SFDD). Whereas the DFDD is based on modular characteristics of sensors, actuators, and controllers connected to or embedded in a single DSP, the CFDD is to analyze the performance of the vehicle control system and/or compare information between different DSPs or connected modules. The SFDD is designed to monitor the performance of the UGV and compare it with the local goal transmitted from a ground station via wireless communications. Then, all software agents for DFDD, CFDD, and SFDD interact with each other to detect a fault and diagnose its characteristics. Finally, the proposed method will be validated experimentally via hardware-in-the-loop simulations.",2009,0, 6802,Robust actuator fault reconstruction for LPV systems using sliding mode observers,"This paper presents a fault reconstruction method for LPV systems which is robust against uncertainty and corrupted measurements. The design is analyzed based on a virtual system resulting from factorizing the input distribution matrix associated with the monitored actuators. The key observer parameter (which is used to define the observer gains) is designed using LMIs to minimize the effect of the uncertainty and measurement corruption on the fault reconstruction. The output error injection signals are used to reconstruct the virtual faults, and the input distribution matrix factorization is used to map back to the actual faults. Simulations using an LPV model of a large transport aircraft are presented.",2010,0, 6803,Fault tolerant XGFT network on chip for multi processor system on chip circuits,"This paper presents a fault-tolerant eXtended Generalized Fat Tree (XGFT) Network-On-Chip (NOC) implemented with a new fault-diagnosis-and-repair (FDAR) system. The FDAR system is able to locate faults and reconfigure switch nodes in such a way that the network can route packets correctly despite the faults. This paper presents how the FDAR finds the faults and reconfigures the switches. Simulation results are used for showing that faulty XGFTs could also achieve good performance, if the FDAR is used. This is possible if deterministic routing is used in faulty parts of the XGFTs and adaptive Turn-Back (TB) routing is used in faultless parts of the network for ensuring good performance and Quality-of-Service (QoS). The XGFT is also equipped with parity bit checks for detecting bit errors from the packets.",2005,0, 6804,Fault-Tolerant Control for SSSC Using Neural Networks and PSO,"This paper presents a fault-tolerant indirect adaptive neuro-controller (FTNC) for controlling a static synchronous series compensator (SSSC), which is connected to a power network. The FTNC consists of a sensor evaluation and restoration scheme (SERS), a radial basis function neuro-identifier (RBFNI) and a radial basis function neuro-controller (RBFNC). The SERS is designed using the auto-associative neural networks (auto-encoder) and the particle swarm optimizer (PSO).
This FTNC is able to provide efficient control to the SSSC when single or multiple crucial sensor measurements are unavailable. The validity of the proposed FTNC model is examined by simulations in the PSCAD/EMTDC environment",2006,0, 6805,A framework for finding minimal test vectors for stuck-at-faults,"This paper presents a framework that utilizes Boolean difference theory to find test vectors for stuck-at-fault detection. The framework reads in structural-style Verilog models, and automatically injects single stuck-at-faults (either stuck-at-zero or stuck-at-one) into the models. The simulations are then performed to find minimal sets of test vectors. Using this setup, we conducted experiments on more than 4000 different circuits. The results show that appreciable savings in test time and effort can be achieved using the method. The same setup can also be used for didactic purposes, specifically for digital design and test courses.",2009,0, 6806,Fault detection and isolation for nonlinear F16 models using a gain-varying UIO approach,"This paper presents a gain-varying UIO (unknown input observer) method for actuator FDI (fault detection and isolation) problems. A novel residual scheme together with a piecewise threshold and a moving horizon threshold is proposed. This design methodology is applied to a nonlinear F16 system with polynomial coefficient models, where the F16 plant and UIOs may use different aerodynamic coefficient models. The simulation results show that a satisfactory FDI performance can be achieved even when the system is subject to model uncertainties, exogenous noise and measurement errors.",2009,0, 6807,Error-tolerant execution of complex robot tasks based on skill primitives,"This paper presents a general approach to specify and execute complex robot tasks considering uncertain environments. Robot tasks are defined by a precise definition of so-called skill primitive nets, which are based on Mason's hybrid force/velocity and position control concept, but are not limited to force/velocity and position control. Two examples are given to illustrate the formally defined skill primitive nets. We evaluated the controller and the trajectory planner by several experiments. Skill primitives suit very well as an interface to robot control systems. The presented hybrid control approach provides a modular, flexible, and robust system; stability is guaranteed, particularly at transitions of two skill primitives. With the interface explained here, the results of compliant motion planning can be examined in real work cells. We have implemented an algorithm to search for mating directions in up to three-dimensional configuration-spaces. Thereby, on one hand we have realized compliant motion control concepts and on the other hand we can provide solutions for fine motion and assembly planning.
This paper shows how these two fields can be combined by the general concept of skill primitive nets introduced here, in order to establish a powerful system which is able to automatically execute previously calculated assembly plans based on CAD data in uncertain environments.",2003,0, 6808,A Modified Error Concealment Algorithm Designed for P Frame of H.264,"This paper introduces an error concealment algorithm for P frames in video images that is based on the error concealment algorithm in the reference software of H.264, which makes it extremely easy and flexible to obtain the best performance of error concealment at the decoder, and describes the theoretical foundation and realization in detail. The external boundary matching algorithm (EBMA), based on calculating the difference of the external nearby pixels between the reference block and the lost block, better adapts to the characteristics of variable blocks in motion estimation and improves the effect of error concealment for lost MBs. The result of the proposed algorithm implemented on JM version 8.6 shows that it has some advantages and practicality.",2008,0, 6809,The offline fault diagnosis of FCV powertrain based on CAN bus,"This paper introduces an offline fault diagnosis method based on CAN bus for fuel cell vehicle powertrains, which synthetically applies CAN bus communication technology, embedded system development technology, hierarchical fault processing technology and computer technology. Meanwhile, a PC was applied as the upper machine while the vehicle management system controller (VMS) based on the MPC555 served as the lower machine. The upper machine and the lower machine realize communication through the CANdela protocol to record, upload and statistically analyse faults. Such a fault diagnosis system, which successfully completes online fault recording and offline fault diagnosis for the powertrain, has been applied to the 'START III' fuel cell car developed by Shanghai FCV Powertrain Co., Ltd.",2009,0, 6810,Scheduling tasks with mixed preemption relations for robustness to timing faults,"This paper introduces and shows how to schedule two novel scheduling abstractions that overcome limitations of existing work on preemption threshold scheduling. The abstractions are task clusters, groups of tasks that are mutually non-preemptible by design, and task barriers, which partition the task set into subsets that must be mapped to different threads. Barriers prevent the preemption threshold logic that runs multiple design-time tasks in the same runtime thread from violating architectural constraints, e.g. by merging an interrupt handler and a user-level thread. We show that the preemption threshold logic for mapping tasks to as few threads as possible can rule out the schedules with the highest critical scaling factors - these schedules are the least likely to miss deadlines under timing faults. We have developed a framework for robust CPU scheduling and three novel algorithms: an optimal algorithm for maximizing the critical scaling factor of a task set under restricted conditions, a more generally applicable heuristic that finds schedules with approximately maximal critical scaling factors, and a heuristic search that jointly maximizes the critical scaling factor of computed schedules and minimizes the number of threads required to run a task set.
We demonstrate that our techniques for robust scheduling are applicable in a wide variety of situations where static priority scheduling is used.",2002,0, 6811,Application of BP neural network fault diagnosis in solar photovoltaic system,"This paper introduces fault diagnosis modes and points out the source of trouble in grid-connected solar photovoltaic systems. It analyses and researches the structure and algorithm of the BP neural network. After that, the paper brings forward a fault diagnosis method based on the BP neural network for the grid-connected solar photovoltaic system. It is shown that this method is effective and practical and attains the expected results; it can be applied to the fault diagnosis of grid-connected solar photovoltaic systems.",2009,0, 6812,SymPLFIED: Symbolic program-level fault injection and error detection framework,"This paper introduces SymPLFIED, a program-level framework that allows specification of arbitrary error detectors and the verification of their efficacy against hardware errors. SymPLFIED comprehensively enumerates all transient hardware errors in registers, memory, and computation (expressed as value errors) that potentially evade detection and cause program failure. The framework uses symbolic execution to abstract the state of erroneous values in the program and model checking to comprehensively find all errors that evade detection. We demonstrate the use of SymPLFIED on a widely deployed aircraft collision avoidance application, tcas. Our results show that the SymPLFIED framework can be used to uncover hard-to-detect corner cases caused by transient errors in programs that may not be exposed by random fault-injection based validation.",2008,0, 6813,Fault Diagnosis Expert System of the Electrical Traction Shearer Type 3LS,"This paper introduces the C language-based expert system development platform CLIPS, discusses the necessity of using the CLIPS expert system as the platform for the shearer 3LS fault diagnosis tool and its concrete realization, and discusses the system structure, construction theory and difficulties of the fault diagnosis expert system, at the same time giving an implementation of the system based on fault trees.",2010,0, 6814,Estimation of Fault Location on Distribution Feeders using PQ Monitoring Data,"This paper investigates the challenges in the extraction of fault data (fault current magnitude and type) for fault location applications based on actual field data and proposes a procedure for practical implementation. The proposed scheme is implemented as a stand-alone software program, and is tested using actual field data collected at distribution substations and the results are compared with results of the state-of-the-art software package currently used in a utility.",2007,0, 6815,LMI-based approach to robust fault detection for uncertain discrete-time piecewise affine slab systems,"This paper investigates the problem of fault detection for a class of uncertain discrete-time piecewise affine systems. The objective is to design an admissible fault detection filter guaranteeing the asymptotic stability of the resulting residual system with prescribed performances. It is assumed that the piecewise affine systems are partitioned based on the state space instead of the measurable output space so that the filter implementations may not be synchronized with the plant state trajectory transitions.
Based on a piecewise quadratic Lyapunov function combined with the S-procedure and some matrix inequality convexifying techniques, the results are formulated in the form of linear matrix inequalities. Finally, a simulation example is provided to illustrate the effectiveness of the proposed approach.",2010,0, 6816,Fault tolerant detection and tracking of multiple sources in WSNs using binary data,"This paper investigates the use of a Wireless Sensor Network for detecting and tracking the location of multiple event sources (targets) using only binary data. Due to the simple nature of the sensor nodes, sensing can be tampered with (accidentally or maliciously), resulting in a significant number of sensor nodes reporting erroneous observations. Therefore, it is essential that any event tracking algorithm used in Wireless Sensor Networks (WSNs) exhibits fault tolerant behavior in order to tolerate misbehaving nodes. The main contribution of this paper is the development of a simple and decentralized algorithm that uses the binary observations of the sensors for tracking multiple targets in a fault-tolerant way. Furthermore, tracking is performed in real-time by the alarmed sensor nodes that are elected as leaders, utilizing only information from their neighbors.",2009,0, 6817,An induction motor drive system with improved fault tolerance,"This paper investigates the utilization of a simplified topology that permits the fault-tolerant operation of a three-phase induction motor drive system. When one of the inverter legs is lost, the machine can operate with only two stator windings by connecting the machine neutral to a fourth converter leg. The structure and the operation principle of the system are presented. The machine model corresponding to the asymmetric two-winding machine is developed and a suitable controller is proposed. Experimental results are presented",2001,0, 6818,Performance Analysis of a Controlled Database Unit Subject to Control Delays and Decision Errors,"This paper is an extension of the work performed by Wu, Metzler, and Linderman on a controlled database unit. The system configuration has been changed from the earlier work to help further enhance the database performance. When a server fails, state variable feedback is used to trigger the process of server restoration. In this paper, control delays and decision errors due to the time required and the uncertainty present in estimating the state are captured simultaneously within a single model of the system, whereas they were studied separately in two different models in the earlier work. The performance of the database unit is evaluated in terms of its mean time to failure, steady state availability, expected response time, and service overhead. These performance measures are examined with respect to the probability of a correct decision to restore a server upon its failure, the length of control action delay, and the rate of server restoration.",2006,0, 6819,Fault tolerant shared-object management system with dynamic replication control strategy,"This paper is based on a dynamic replication control strategy for minimizing communications costs. In dynamic environments where the access pattern to shared resources cannot be predicted statically, it is required to monitor such parameters during the whole lifetime of the system so as to adapt it to new requirements. The shared-object management system is implemented in a centralized manner in which a master processor deals with the serialization of invocations.
On one hand, we attempt to provide fault tolerance as a way to adjust the system parameters to work only with a set of correct processors so as to enhance system functionality. On the other hand, we attempt to furnish availability by masking the failure of the master processor. A new master processor is elected that resumes the master processor's processing. The modularity of our shared-object management system is realized through a meta-level implementation",2000,0, 6820,A Fast and Flexible Platform for Fault Injection and Evaluation in Verilog-Based Simulations,"This paper presents a complete framework for Verilog-based fault injection and evaluation. In contrast to existing approaches, the proposed solution is the first one based on the Verilog programming interface (VPI). Due to the standardization of the VPI, the framework is, in contrast to simulator-command-based techniques, independent of the simulator used. Additionally, it does not require recompilation for different fault injection experiments, unlike techniques that modify the Verilog code for fault injection. The feasibility of the VPI-based approach is shown in a case study.",2009,0, 6821,Overload Alleviation With Preventive-Corrective Static Security Using Fuzzy Logic,"This paper presents a concept overview of an automatic operator of electrical networks (AOEN) for real-time alleviation of component overloads and increase of system static loadability, based on state-estimator data only. The control used for this purpose is real-power generation rescheduling, although any other control input could fit the new framework. The key performance metrics are the vulnerability index of a generation unit (VIGS) and its sensitivity (SVIGS), accurately computed using a realistic ac power flow incorporating the AGC model (AGC-PF). Transmission overloads, vulnerability indices and their sensitivities with respect to generation control are translated into fuzzy-set notations to formulate, transparently, the relationships between incremental line flows and the active power output of each controllable generator. A fuzzy-rule-based system is formed to select the best controllers, their movement and step-size, so as to minimize the overall vulnerability of the generating system while eliminating overflows. The controller performance is illustrated on the IEEE 39-bus (New England) network and the three-area IEEE-RTS96 network subjected to severe line outage contingencies. A key result is that minimizing the proposed vulnerability metric in real-time results in increased substantial loadability (prevention) in addition to overload elimination (correction).",2009,0, 6822,On the integration of software product management with software defect management in distributed environments,"This paper presents a conceptual model for integrating software product management (SPM) and defect management in a distributed environment. Two case studies are carried out to identify SPM and defect management processes and the relation between them. From these case studies and from SPM and defect management theory, domain concepts are deduced that are used to create our conceptual model. An expert interview indicated that SPM practitioners and experts agreed that managing software defects differs from managing requirements.
In addition, 90% of the interviewees indicated that the proposed conceptual model is a good way to handle requirements and defects.",2009,0, 6823,Research on fault detection for satellite attitude control systems based on sliding mode observers,"This paper investigates the design and application of a sliding mode observer (SMO) strategy for the fault detection problem of satellite attitude control systems. A particular design of sliding mode observer is presented for which the parameters can be obtained by a linear change of coordinates under some conditions. Instead of generating residuals as in other observer-based fault detection methods, the sliding mode observer discussed is designed based on the so-called equivalent output estimation error injection concept to reconstruct the faults. Both actuator and sensor fault detection and reconstruction for satellite attitude control systems are taken into account. A mathematical simulation is given to illustrate the effectiveness of the proposed approach.",2009,0, 6824,Optimum Machine Performance in Fault-Tolerant Networked Control Systems,"This paper investigates the effect of failures on the productivity of fault-tolerant networked control systems under varying loads. Higher speeds of operation are sometimes used to increase production and compensate for down time due to component failures. Improved Markov models are developed and used to calculate system probabilities. When these probabilities are combined with the maximum speed of operation in each system state, the average speed of operation is obtained. If machines cannot be run at maximum speed all the time, the Markov models are used again to find the best speed mix that would yield maximum output capacity",2005,0, 6825,Analysis of Error-Agnostic Time- and Frequency-Domain Features Extracted From Measurements of 3-D Accelerometer Sensors,"This paper investigates the expressive power of several time- and frequency-domain features extracted from 3-D accelerometer sensors. The raw data represent movements of humans and cars. The aim is to obtain a quantitative as well as a qualitative expression of the uncertainty associated with random placement of sensors in wireless sensor networks. Random placement causes calibration, location and orientation errors to occur. Different types of movements are considered: slow and fast movements; horizontal, vertical, and lateral movements; smooth and jerky movements, etc. Particular attention is given to the analysis of the existence of correlation between sets of raw data which should represent similar or correlated movements. The investigation demonstrates that while frequency-domain features are generally robust, there are also computationally less intensive time-domain features which have low to moderate uncertainty. Moreover, features extracted from slow movements are generally error prone, regardless of their specific domain.",2010,0, 6826,Towards high-precision lens distortion correction,"This paper points out and attempts to remedy a serious discrepancy in results obtained by global calibration methods: The re-projection error can be rendered very small by these methods, but we show that the optical distortion correction is far less accurate. This discrepancy can only be explained by internal error compensations in the global methods that leave undetected the inadequacy of the distortion model. This fact led us to design a model-free distortion correction method where the distortion can be any image domain diffeomorphism.
The obtained precision compares favorably to the distortion given by state-of-the-art global calibration and reaches an RMSE of 0.08 pixels. Nonetheless, we also show that this accuracy can still be improved.",2010,0, 6827,Asymmetrical operation of a twelve-pulse LCI drive system with power converter faults,"This paper presents a study concerning the operation of a twelve-pulse LCI drive system after the occurrence of an open-switch fault in the line-side power converters. This type of fault introduces harmonics in the currents and in the electromagnetic torque, which normally don't exist, degrading the operation of the drive. In order to improve the faulty operation, a compensated mode using an asymmetrical triggering of the semiconductors in the healthy converter is proposed. This results in a minimization or even elimination of some low order harmonics in the DC voltage and a smoothed DC link current and electromagnetic torque. Experimental results are presented to validate the proposed solution and compared with compensating schemes of the sequential triggering type. The main objective of this study is to introduce fault-tolerance and survivability characteristics to this high-power adjustable speed motor drive, thereby increasing its reliability and availability.",2004,0, 6828,Demagnetization fault detection by means of Hilbert Huang transform of the stator current decomposition in PMSM,"This paper presents a study of the permanent magnet synchronous motor (PMSM) running under demagnetization. The simulation has been carried out by means of two dimensional (2-D) finite element analysis (FEA), and simulation results were compared with experimental results. The demagnetization fault is analyzed by means of decomposition of stator currents obtained at different speeds for torque change.",2008,0, 6829,Wind plant collector system fault protection and coordination,"This paper presents a summary of the most important protection and coordination considerations for wind power plants. Short-circuit characteristics of both the aggregate wind plant and individual wind turbine generators, as well as general interconnection protection requirements are discussed. Many factors such as security, reliability, and safety are considered for proper conservative protection of the wind power plant and individual turbines.",2010,0, 6830,Design for Resilience to Soft Errors and Variations,"This paper presents the adaptive variation-and-error-resilient agent (AVERA), an approach to address the challenge of designing reliable systems in the presence of soft errors and variations. AVERA extends our previous built-in soft error resilience (BISER) approach by adding additional capabilities to support process variation diagnosis, degradation detection, and system adaptation, besides soft error correction. We also discuss open challenges for building variation-and-error-resilient systems.",2007,0, 6831,Algorithm-level recomputing with shifted operands-a register transfer level concurrent error detection technique,"This paper presents Algorithm-level REcomputing with Shifted Operands (ARESO), which is a new register transfer (RT) level time redundancy-based concurrent error detection (CED) technique. In REcomputing with Shifted Operands (RESO), operations (additions, subtractions, etc.) are carried out twice: once on the basic input and once on the shifted input. Results from these two operations are compared to detect an error.
Although using RESO operators in RT-level designs is straightforward, it entails time and area overhead. In contrast, ARESO does not use specialized RESO operators. In ARESO, an algorithm is carried out twice: once on the basic input and once on the shifted input. Results from these two algorithm-level instantiations are compared to detect an error. By operating at the algorithm level, ARESO exploits RT-level scheduling, pipelining, operator chaining, and multicycling to incorporate user-specified error detection latencies. ARESO supports hardware versus performance versus error detection latency tradeoffs. The authors validated ARESO on practical design examples using the Synopsys Behavior Compiler (BC), an industry-standard behavioral synthesis system.",2006,0,4582 6832,An accurate scheme for fault location in combined overhead line with underground power cable,"This paper presents an accurate fault location scheme for transmission systems consisting of an overhead line in combination with an underground power cable. The algorithm requires phasor measurement data from one end of the transmission line and synchronized measurements at the far end of the power cable. Fault location is derived using a distributed line model, modal transformation theory and the discrete Fourier transform. The technique can be used on-line or off-line using the data stored in the digital transient recording apparatus. The proposed scheme has the ability to locate the fault whether it is in the overhead line or in the underground power cable. Extensive simulation studies carried out using MATLAB show that the proposed scheme provides a high accuracy in fault location under various system and fault conditions.",2005,0, 6833,Adaptive Causal Models for Fault Diagnosis and Recovery in Multi-Robot Teams,"This paper presents an adaptive causal model method (adaptive CMM) for fault diagnosis and recovery in complex multi-robot teams. We claim that a causal model approach is effective for anticipating and recovering from many types of robot team errors, presenting extensive experimental results to support this claim. To our knowledge, these results show the first, full implementation of a CMM on a large multi-robot team. However, because of the significant number of possible failure modes in a complex multi-robot application, and the difficulty in anticipating all possible failures in advance, our empirical results show that one cannot guarantee the generation of a complete a priori causal model that identifies and specifies all faults that may occur in the system. Instead, an adaptive method is needed to enable the robot team to use its experience to update and extend its causal model to enable the team, over time, to better recover from faults when they occur. We present our case-based learning approach, called LeaF (for learning-based fault diagnosis), that enables robot team members to adapt their causal models, thereby improving their ability to diagnose and recover from these faults over time",2006,0, 6834,Locating fault using voltage sags profile for underground distribution system,"This paper presents an alternative fault location algorithm to estimate short-circuit fault locations in electrical distribution networks using only voltage sag data. The proposed algorithm uses the voltage sag profile as a means to locate the fault.
The possible fault locations are estimated by incorporating the measured voltage sag magnitude and its corresponding phase angle into an equation of voltage sag as a function of fault distance. A ranking procedure is also introduced to rank possible fault locations that have the same electrical distance. The uncertainty of fault resistance is also considered in this algorithm. The performance of the technique is presented by testing it using an actual underground distribution network. The simulation results indicated a possibility of practical implementation.",2010,0, 6835,"Prediction of error vector magnitude using AM/AM, AM/PM distortion of RF power amplifier for high order modulation OFDM system","This paper presents an analysis of the error vector magnitude (EVM) performance of a power amplifier using the amplitude/phase distortion coefficients derived from the probability density function (PDF) of the OFDM input signal. Amplitude coefficient values are calculated from the amplitude/phase distortion coefficients. The amplitude/phase distortion coefficient can be calculated from the summation of the measured AM/AM (amplitude distortion) and AM/PM (phase distortion) data, considering the probability of the input OFDM signal level. A complete analysis of the EVM model for the output OFDM symbol with variable envelope through the power amplifier can be derived from the amplitude/phase distortion coefficients. To investigate, we use a 1024 (2^10) IFFT-OFDM signal based on the 802.16e mobile system, with a 9.7 dB peak-to-average power ratio (PAPR) at 0.01% CCDF and 10 MHz overall occupied bandwidth, and the fabricated HBT MMIC power amplifier. The error ratio between the predicted EVM and measured EVM is as small as 1%.",2005,0, 6836,Identification of faulted section in TCSC transmission line based on DC component measurement,"This paper presents an analysis of the possibility of detecting the fault position with respect to the compensating bank in a series-compensated transmission line. The algorithm designed for this purpose is based on determining the contents of dc components in the distance relay input currents. Fuzzy logic technique is applied for making the decision whether a fault is in front of the compensating bank or behind it. The delivered algorithm has been tested and evaluated with use of the fault data obtained from versatile ATP-EMTP simulations of faults in the test power network containing the 400 kV, 300 km transmission line, compensated with the aid of a TCSC (Thyristor Controlled Series Capacitor) bank installed at mid-line. The results of the evaluation are reported and discussed.",2009,0, 6837,An Analysis of Fault Effects and Propagations in AVR Microcontroller ATmega103(L),"This paper presents an analysis of the effects and propagations of transient faults by simulation-based fault injection into the AVR microcontroller. This analysis is done by injecting 20000 transient faults into the main components of the AVR microcontroller, which is described in the VHDL language. The sensitivity level of various points of the AVR microcontroller such as the ALU, Instruction-Register, Program-Counter, Register-file and Flag Registers against fault manifestation is considered and evaluated. The behavior of the AVR microcontroller against injected faults is reported, and it is shown that about 41.46% of faults are recovered in simulation time, 53.84% of faults are effective faults and the remaining 4.70% of faults are latent faults; moreover a comparison of the behavior of the AVR microcontroller in fault injection experiments against some common microprocessors is done.
Results of the fault analysis will be used in future research to propose a fault-tolerant AVR microcontroller.",2009,0, 6838,Analysis and modelling of novel band stop and band pass millimeter wave filters using defected microstrip structure (DMS),"This paper presents novel millimeter wave filter structures created by etching slots on the strip. These slots exhibit a series LC resonance at a certain frequency and suppress spurious signals. At high frequencies and in high-density applications, the board area is severely limited, so by using these filters the circuit area is minimized. The proposed filters are compact and very suitable for high density MMIC circuits.",2009,0, 6839,"On line sensor fault detection, isolation and accommodation in tactical aerospace vehicle","This paper presents on-line sensor fault detection and isolation (FDI) and the associated fault tolerant control (FTC) algorithm for a tactical aerospace vehicle. A study on the analytical redundancy and a sensor fault detection scheme (FDI) in a flight control system has been performed for a tactical aerospace vehicle using the longitudinal model. There are various methods available in the academic literature to apply FDI and FTC schemes to control systems and some have already been applied to real applications. Among these, observer-based approaches have arisen as one of the most widespread. The basic ideas behind observer-based FDI schemes are the generation of residuals, and the use of an optimal threshold function to differentiate faults from disturbances. Generally, the residuals, also known as diagnostic signals, are generated from estimates of the system's measurements obtained by a Luenberger observer or a Kalman filter. The threshold function is then used to 'detect' the fault by separating the residuals from false faults and disturbances. The change in the residual signal is used to detect and isolate the fault and the corresponding fault tolerant control action is taken to arrest the failure of the aerospace vehicle. A closed-loop simulation with a nonlinear 6-degree-of-freedom (6-DoF) model shows that the above FDI and FTC scheme will be able to reduce the probability of mission failure due to a fault in one of the sensors.",2004,0, 6840,A co-operative hybrid algorithm for fault diagnosis in power transmission,"This paper presents our co-operative hybrid algorithm for fault diagnosis in power transmission networks. When a fault occurs in a transmission network, it must be identified and eliminated as soon as possible. Since control centers are flooded with hundreds of alarm messages during a fault, fault diagnosis, which involves the analysis of alarm messages, is a time consuming task. Towards the development of a fault diagnostician, model-based, heuristic, and neural network approaches are applied to the domain and the results are presented in this paper. The algorithm is a hierarchical model which combines several reasoning methods such as heuristic, temporal and model-based diagnosis and incorporates a network of neural networks at one of the levels of the hierarchy. The working of this co-operative algorithm is discussed and its results are analysed",2000,0, 6841,Assessing the Dependability of SOAP RPC-Based Web Services by Fault Injection,"This paper presents our research on devising a dependability assessment method for SOAP-based Web Services using network level fault injection.
We compare existing DCE middleware dependability testing research with the requirements of testing SOAP RPC-based applications and derive a new method and fault model for testing web services. From this we have implemented an extendable fault injector framework and undertaken some proof-of-concept experiments with a system based around Apache SOAP and Apache Tomcat. We also present results from our initial experiments, which uncovered a discrepancy within our system. We finally detail future research, including plans to adapt this fault injector framework from the stateless environment of a standard web service to the stateful environment of an OGSA service.",2003,0, 6842,Dynamic Testing of an SRAM-Based FPGA by Time-Resolved Laser Fault Injection,This paper presents principles and results of dynamic testing of an SRAM-based FPGA using time-resolved fault injection with a pulsed laser. The synchronization setup and experimental procedure are detailed. Fault injection results obtained with a DES crypto-core application implemented on a Xilinx Virtex II are discussed.,2008,0, 6843,Reflective fault-tolerant systems: from experience to challenges,"This paper presents research work performed on the development and the verification of dependable reflective systems based on MetaObject Protocols (MOPs). We describe our experience, we draw the lessons learned from both a design and a validation viewpoint, and we discuss some possible future trends on this topic. The main originality of this work relies on the combination of both design and validation issues for the development of reflective systems, which has led to the definition of a reflective framework for the next generation of fault-tolerant systems. This framework includes: 1) the specification of a MetaObject Protocol suited to the implementation of fault-tolerant systems and 2) the definition of a general test strategy to guide its verification. The proposed approach is generic and solves many issues related to the use and evolution of system platforms with dependability requirements. Two different instances of the specified MOP have been implemented in order to study the impact of the MOP architecture in the development of a reflective fault-tolerant system. As far as the test strategy is concerned, a different testing level is associated with each reflective mechanism defined in the MOP. For each testing level, we characterize the test objectives and the required test environments. According to this experience, several new research challenges are finally identified.",2003,0, 6844,Detection of high impedance fault using adaptive non-communication protection technique,"This paper presents the application of the adaptive non-communication protection technique to high impedance fault conditions. In this technique, protection relays make operate or restrain decisions, adapting to system and fault conditions, without the need for communication links. Dispensing with communication links to signal the remote end relay is achieved by detecting the remote breaker operation to determine whether the protected line section is in a balanced operation condition or not. An algorithm based on the symmetric components is proposed to detect and identify the balance condition of the system during the fault.
The paper is focused on the responses of the technique to high impedance fault conditions.",2002,0, 6845,Error probability performance for coherent DS-CDMA systems in correlated Nakagami fading channels,"This paper presents the average error probability performance in coherent DS-CDMA systems using a 2D-RAKE receiver with an arbitrary number of antennas and an arbitrary number of RAKE fingers. In the analysis, we assume a general frequency selective Nakagami-m fading channel with an exponential multipath intensity profile. An analytical expression of the average bit-error-rate (BER) is derived, including the effects of the spatial correlation between antenna elements and non-identical fading for different multipaths. The impacts of the receiver configuration (e.g. the spatial diversity order M, the temporal diversity order L and the antenna separation) and the operating environment parameters (such as fading, maximum number of multipath components, angular spread and MIP power decay factor) on the BER performance and the cell capacity are illustrated",2001,0, 6846,An Integrated Neural Fuzzy Approach for Fault Diagnosis of Transformers,"This paper presents a new and efficient integrated neural fuzzy approach for transformer fault diagnosis using dissolved gas analysis. The proposed approach formulates the modeling problem of higher dimensions into lower dimensions by using input feature selection based on competitive learning and a neural fuzzy model. Then, the fuzzy rule base for the identification of faults is designed by applying the subtractive clustering method, which is very good at handling noisy input data. Verification of the proposed approach has been carried out by testing on standard and practical data. In comparison to the results obtained from the existing conventional and neural fuzzy techniques, the proposed method has been shown to possess superior performance in identifying the transformer fault type.",2008,0, 6847,Assessing application features of protective relays and systems through automated testing using fault transients,"This paper presents a new approach to assessing application features of protective relays. The approach utilizes a test methodology based on the use of transients. Application features of protective relays, such as the response time and trip selectivity, may be extremely important when investigating relay misoperations or when purchasing new relays. These features are not readily known or made available in the relay manuals. The paper provides examples of how the test methodology may be applied and what kind of results may be obtained. It also discusses the requirements for the test tools to be used. A discussion of the test tool features that may be needed, as well as the ways the tools may be applied to perform testing, is also presented. Conclusions about the needs for suitable test tools are given at the end.",2002,0, 6848,A New Diagnostic Model for Identifying Parametric Faults,"This paper presents a new approach to failure detection and isolation (FDI) for systems modeled as an interconnection of subsystems that are each subject to parametric faults. This paper develops the concept of a diagnostic model and the concept of a fault emulator, which are used to model and parameterize subsystem faults. There are two stages to the FDI scheme. In the first stage there is a requirement to identify the diagnostic model. Once identified, the diagnostic model is used in the second stage to generate a residual.
Artifacts within the measured residual are then used as a basis for identifying parametric faults. The scheme is distinct from others as it does not require an online recursive least squares type identifier.",2010,0, 6849,Low voltage fault attacks to AES,"This paper presents a new fault-based attack on the Advanced Encryption Standard (AES) with any key length, together with its practical validation through the use of low voltage induced faults. The CPU running the attacked algorithm is the ARM926EJ-S: a 32-bit processor widely deployed in computer peripherals, telecommunication appliances and low power portable devices. We prove the practical feasibility of this attack through inducing faults in the computation of the AES algorithm running on a full-fledged Linux 2.6 operating system targeted to two implementations of the ARM926EJ-S on commercial development boards.",2010,0, 6850,The Fault Diagnosis of a Class of Nonlinear Stochastic Time-delay systems,"This paper presents a new fault detection algorithm for a class of nonlinear stochastic time-delay systems. Different from the classical fault detection design, a fault detection filter with an output observer and a consensus filter is constructed for fault detection. Simulations are provided to show the efficiency of the proposed approach.",2006,0, 6851,A Fault Analysis and Classifier Framework for Reliability-Aware SRAM-Based FPGA Systems,"This paper presents a new framework for the analysis of SRAM-based FPGA systems with respect to their dependability properties against single, multiple and cumulative upset errors. The aim is to offer an environment for performing fault classification and error propagation analyses for designs featuring fault detection or tolerance techniques against soft errors, where the focus is not only the overall achieved fault coverage, but an understanding of the fault/error relation inside the internal elements of the system. We propose a fault analyzer/classifier lying on top of a classical fault injection engine, used to monitor the evolution of the system after a fault has occurred, with respect to the applied reliability-oriented design technique. The paper introduces the framework and reports some experimental results of its application to a case study, to highlight the benefits of the proposed solution.",2009,0, 6852,The research on a new fault wave recording device in generator-transformer units,"This paper presents a new kind of distributed fault recorder, including the design of the system structure and the hardware and software design of the recorder. The recorder adopts the NI CompactRIO series programmable automation controller (PAC) in the hardware and virtual instrument technology in the software. Network communication based on TCP/IP between the client and the server is adopted in the power plant. Moreover, an improved frequency tracking algorithm is presented for the monitoring of the electric quantity to improve the detection precision and the processing speed. The detection and operation results show that it has improved the performance greatly and realized authenticity, integrity and reliability.",2009,0, 6853,New Method of Locating Faults on Three-terminal Lines Equipped with Current Differential Relays,"This paper presents a new method for locating faults on three-terminal power lines.
Estimation of the distance to fault and indication of the faulted section are performed using three-phase currents from all three terminals and additionally the three-phase voltage from the terminal at which the fault locator is installed. The fault location algorithm consists of three subroutines designated for locating faults within particular line sections and a procedure for indicating the faulted line section. Testing and evaluation of the algorithm have been performed with fault data obtained from comprehensive ATP-EMTP simulations. Sample results of the evaluation are reported and discussed.",2007,0, 6854,Fault diagnosis with Coloured Petri Nets using Latent Nestling Method,"This paper presents a new methodology for permanent and intermittent fault diagnosis, named the Faults Latent Nestling Method (FLNM), using Coloured Petri Nets (CPNs). CPNs and the FLNM allow for an enhanced capability for synthesis and modelling, in contrast to the classical phenomenon of combinatorial state explosion when using Finite State Machine based methods.",2008,0, 6855,DARX - a framework for the fault-tolerant support of agent software,"This paper presents DARX, our framework for building applications that provide adaptive fault tolerance. It relies on the fact that multi-agent platforms constitute a very strong basis for decentralized software that is both flexible and scalable, and makes the assumption that the relative importance of each agent varies during the course of the computation. DARX regroups solutions which facilitate the creation of multi-agent applications in a large-scale context. Its most important feature is adaptive replication: replication strategies are applied on a per-agent basis with respect to transient environment characteristics such as the importance of the agent for the computation, the network load or the mean time between failures. Firstly, the interwoven concerns of multi-agent systems and fault-tolerant solutions are put forward. An overview of the DARX architecture follows, as well as an evaluation of its performance. We conclude, after outlining the promising outcomes, by presenting prospective work.",2003,0, 6856,An on-line UPS system with power factor correction and electric isolation using BIFRED converter,"This paper presents design considerations and performance analysis of an on-line, low-cost, high performance single-phase Uninterruptible Power Supply (UPS) system based on the Boost Integrated Flyback Rectifier/Energy storage DC/DC (BIFRED) converter. The system consists of an isolated AC/DC BIFRED converter, a bi-directional DC/DC converter, and a DC/AC inverter. It has input power factor correction, electric isolation of the input from the output, and control simplicity. Detailed circuit operation and analysis, as well as simulation and experimental results, are presented.",2003,0, 6857,Entropy-driven parity-tree selection for low-overhead concurrent error detection in finite state machines,"This paper discusses the problem of parity-tree selection for performing concurrent error detection (CED) with low overhead in finite state machines (FSMs). We first develop a nonintrusive CED method based on compaction of the state/output bits of an FSM via parity trees and comparison to the correct responses, which are generated through additional on-chip parity prediction hardware. Similar to off-line test-response-compaction practices, this method minimizes the number of parity trees required for performing lossless compaction.
However, while a few parity trees are typically sufficient, the area and the power consumption of the corresponding parity predictor are not always in proportion to the number of implemented functions. Therefore, parity-tree-selection methods that minimize the overhead of the parity predictor, rather than the number of parity trees, are required. Towards this end, we then extend our method into a systematic search that exploits the correlation between the area and the power consumption of a function and its entropy, in order to select parity trees that minimize the incurred overhead. Experimental results on benchmark circuits demonstrate that this solution achieves significant reduction in area and power consumption over the basic method that simply minimizes the number of parity trees.",2006,0, 6858,Electrical equipment fault diagnosis system based on the decomposition products of SF6,"This paper presents an electrical equipment fault diagnosis system based on the decomposition products of SF6, and introduces the hardware and software implementation of the system. The hardware uses the ATmega128 series single-chip platform, and the software uses an advanced wavelet neural network fault diagnosis method. To prove the superiority of this algorithm, we perform simulations and comparisons with other methods. A good synergy of hardware and software is used in SF6 electrical equipment fault diagnosis, and the content of the SF6 decomposition products is analysed to judge whether an electrical equipment fault has occurred and to make a fault prediction. We introduce the hardware design method and the detailed design of the software, since the strong electromagnetic interference environment is very important to the system design, and finally give experimental data to prove the system's reliability and practicality.",2009,0, 6859,Experimental results in evolutionary fault-recovery for field programmable analog devices,"This paper presents experimental results of fast intrinsic evolutionary design and evolutionary fault recovery of a 4-bit digital to analog converter (DAC) using the JPL stand-alone board-level evolvable system (SABLES). SABLES is part of an effort to achieve integrated evolvable systems and provides autonomous, fast (tens to hundreds of seconds), on-chip evolution involving about 100,000 circuit evaluations. Its main components are a JPL field programmable transistor array (FPTA) chip used as transistor-level reconfigurable hardware, and a TI DSP that implements the evolutionary algorithm controlling the FPTA reconfiguration. The paper describes an experiment consisting of the hierarchical evolution of a 4-bit DAC using 20 cells of the FPTA chip. Fault-recovery is demonstrated after applying stuck-at 0 faults to all switches of one particular cell, and using evolution to recover functionality. It is verified that the functionality can be recovered in less than one minute after the fault is detected while the evolutionary design of the 4-bit DAC from scratch took about 3 minutes.",2003,0, 6860,Reliability optimization models for fault-tolerant distributed systems,"This paper presents four models to demonstrate the authors' techniques for optimizing software and hardware reliability for fault-tolerant distributed systems. The models help us find the optimal system structure while considering basic information on reliability and cost of the available software and hardware components.
Each model is suitable for a distinct set of conditions or situations. All four models maximize reliability while meeting cost constraints. The simulated annealing optimization algorithm is selected to demonstrate system reliability optimization techniques for distributed systems because of its flexibility in applying to various problem types with various constraints, as well as its efficiency in computation time. It provides satisfactory reliability results while meeting the constraints",2001,0, 6861,Using consensus for solving conflict situations in fault-tolerant distributed systems,"This paper presents an approach based on using consensus for solving conflict situations in fault-tolerant distributed systems. It is assumed that the processors of a distributed system may have different pictures of the failure situation because the faulty processors may give a wrong answer for a common message or even may impersonate other processors; thus if the numbers of good and bad processors are not estimated then the proper version of the failure situation is not known. We propose to determine the consensus of versions possessed by the processors and treat it as the most reliable version of the situation, which should be used for further analysis. The paper presents a consensus problem for solving this kind of problem, the postulates for consensus choice, their analysis and some algorithms for determining consensus when the structure of versions is known. Using consensus methods is a new approach to solving this kind of problem, and it is useful in the cases when the upper bound of the number of faulty processors is not known",2001,0, 6862,Automatic fault localization based on lightning information,"This paper presents an automated real-time fault localization tool that correlates SCADA and lightning information from the lightning location system (LLS). The aim of this approach is to produce on-line information on possible fault spots for the network operator and/or restoration crew in the field. The results are obtained by temporal and spatial correlation of information from SCADA, LLS and geographical data sources. The correlation is performed by the software package and results are available for on-line observation. Utilization of such a tool may significantly shorten the fault spotting process and improve overall power supply quality. Preliminary results are presented and commented on",2006,0, 6863,DCE-MRI Segmentation and Motion Correction Based on Active Contour Model and Forward Mapping,"This paper presents an automatic method to segment and correct motion artifacts in dynamic contrast enhanced magnetic resonance imaging (DCE-MRI). The breast region is segmented from DCE-MRI using mathematical morphology, region growing, and active contour models. The motion artifact present in the image is then corrected by applying B-spline curve fitting, an active contour model and a forward mapping algorithm. Our segmentation method has been tested on 72 DCE-MRI studies from 33 patients. The average segmentation accuracy was 96.76%, and the confidence interval was [95.55%, 97.97%] with p<0.05. Simulation and validation experiments for motion correction are work in progress. The detailed experimental results were presented at the conference.
The paper represents work-in-progress in our effort to build a CAD system for breast DCE-MRI",2006,0, 6864,An efficient and optimized FPGA Feedback M-PSK Symbol Timing Recovery Architecture based on the Gardner Timing Error Detector,"This paper presents an efficient and optimized FPGA implementation of a complete digital Symbol Timing Recovery (STR) architecture based on a digital PLL loop structure. Matlab modelling and a complete hardware communication system test reveal that the implemented STR circuit offers the best performance compared with the other implementations present in the literature. When implemented on a Xilinx Virtex-2P XC2VP7 FF672 FPGA chip, the proposed STR circuit occupies just 138 slices, uses 2 embedded multipliers and reaches a clock frequency of 106 MHz; a symbol rate of 10 Msymbol/sec can be reached when 10 samples per symbol are employed. The obtained results are promising for its use in software defined radio system applications.",2007,0, 6865,CCDA: Correcting control-flow and data errors automatically,"This paper presents an efficient software technique to detect and correct control-flow errors through the addition of redundant code to a given program. The key innovation of the proposed technique is the detection and correction of control-flow errors using both the control-flow graph and the data-flow graph. Using this technique, most control-flow errors in the program are first detected and then corrected automatically, so both control-flow errors and the data errors caused by them can be corrected. In order to evaluate the proposed technique, a post compiler is used, so that the technique can be applied to any 8086 binary, transparently. Three benchmarks (quick sort, matrix multiplication, and linked list) are used, and a total of 5000 transient faults are injected at several execution points in each program. The experimental results demonstrate that at least 93% of the control-flow errors can be detected and corrected by the proposed technique automatically without any data error generation. Moreover, the performance and memory overheads of the technique are noticeably lower than those of traditional techniques.",2010,0, 6866,Silicon Evaluation of Static Alternative Fault Models,"This paper presents an extensive study on evaluating the effects of static alternative fault models (AFM) on product quality in the face of the latest defect screening techniques. The fault models that are presented in this research are multiple-detect stuck-at, static transition fault, and layout-based deterministic bridges. The results show the quality impact when these new fault models are combined with high-quality stuck-at and TDF test sets",2007,0, 6867,An FPGA Based Travelling-Wave Fault Location System,"This paper presents an FPGA based fault recorder capable of recording fault transient signals on MV distribution systems for both single and double ended fault location schemes. The proposed platform consists of a Xilinx Spartan-3 FPGA device; Texas Instruments 125 MSPS, 14 bit ADCs; Cypress 167 MHz 2 M x 18 bit QDR memory; a high speed USB 2.0 link; and a GPS unit to provide accurate timestamps.
The proposed system is capable of recording three current phases and three voltage phases simultaneously, with the ability to set a pre-gain for both current and voltage measurements independently.",2007,0, 6868,FPGA-Based Fault Injection into Synthesizable Verilog HDL Models,"This paper presents an FPGA-based fault injection tool, called FITO, that supports several synthesizable fault models for dependability analysis of digital systems modeled by Verilog HDL. Using FITO, experiments can be performed in real time with good controllability and observability. As a case study, an OpenRISC 1200 microprocessor was evaluated using an FPGA circuit. About 4000 permanent, transient, and SEU faults were injected into this microprocessor. The results show that the FITO tool is more than 79 times faster than pure simulation-based fault injection, with only 2.5% FPGA area overhead.",2008,0, 6869,On-line diagnosis of interconnect faults in FPGA-based systems,"This paper presents an on-line diagnosis approach for locating interconnect faults in field-programmable gate array (FPGA)-based systems. The proposed diagnosis approach consists of two phases. Phase one is locating the faulty tile through partitioning the FPGA-based system into self-checking tiles. The faulty tile can be detected concurrently with the normal system operation. This operation is performed prior to scheduling and allocating the circuit. The proposed partitioning approach was applied to certain circuits as a case study, and has been implemented using the Xilinx Foundation CAD tool with the XC4010 FPGA chip. The simulation study proved that our partitioning scheme reduces the test complexity and produces lower overheads. Upon locating a faulty tile, and with the aid of a proposed per-tile path-list file created during the routing process, the second phase of the diagnosis approach is applied only to the utilized interconnect of that tile for locating the faulty wires and switches. Therefore, the diagnosis approach is considered to be simplified.",2004,0, 6870,Numerical design of nonlinear observers with approximately linear error dynamics,"This paper presents a numerical approach for the approximate observer error linearization technique of Kazantzis (1998) and Krener (2002). The proposed design procedure is based on the solution of a linear matrix equation that is obtained in explicit form. In addition, an alternative method for computing the change of coordinates and the output injection is developed that can be used to design regionally exponentially stable observers with approximately linear error dynamics. Simple examples demonstrate the results of the paper",2006,0, 6871,Forward error correction for an enhanced SARSAT distress message,"This paper presents a proposed enhancement to the current SARSAT distress message that is backward-compatible with the existing local user terminal (LUT) (i.e., ground station) receivers. Additional coding is appended to the current message to allow for better performance for the new LUTs; soft-decision decoding is used in the evaluation of the current and enhanced formats. Performance results on the AWGN channel are presented. It is shown that near maximum-likelihood performance is achieved for the new code format.",2008,0, 6872,Low-cost embedded system for the IM fault detection using neural networks,"This paper presents a realization of a low-cost, portable measurement system for induction motor fault diagnosis.
The system is composed of a four-channel data acquisition recorder and a laptop computer with specialized software. The diagnostic software takes advantage of the motor current signature analysis (MCSA) method for detection of rotor faults, voltage unbalance and mechanical misalignment. For diagnostic purposes, the neural network technique is applied.",2010,0, 6873,Soft error resilient VLSI architecture for signal processing,"This paper presents a reliability-configurable coarse-grained reconfigurable array for signal processing, which offers flexible reliability against soft errors. The notion of a cluster is introduced as the basic element of the proposed reconfigurable array; each cluster can select one of four operation modes with different levels of spatial redundancy and area efficiency. Evaluation of permanent error rates demonstrates that four different reliability levels can be achieved by a cluster of the reconfigurable array. A fault-tolerance evaluation of a Viterbi decoder mapped onto the proposed reconfigurable array demonstrates that there is a considerable trade-off between reliability and area overhead.",2009,0, 6874,Robust and Efficient Rule Extraction Through Data Summarization and Its Application in Welding Fault Diagnosis,"This paper presents a robust and efficient method to discover knowledge for classification problems through data summarization. It discretizes continuous features and then summarizes the data using a contingency table. The inconsistency rate for different subsets of features can then be easily calculated from the contingency table. Sequential search is then used to find the best feature subset. After the number of features is reduced to a certain extent, easy-to-understand knowledge can be intuitively derived from the data summary. Another desirable feature of the proposed method is its capability to learn incrementally; namely, knowledge can be updated quickly whenever new data are obtained. Moreover, the proposed method is capable of handling missing values when used for prediction. The method is applied to two benchmark data sets, showing its effectiveness in selecting discriminative features. The practical usefulness of this method in manufacturing is demonstrated through an application on welding fault diagnosis.",2008,0, 6875,A novel approach using a FIRANN for fault detection and direction estimation for high-voltage transmission lines,"This paper presents a novel approach to fault detection, faulted phase selection, and direction estimation based on artificial neural networks (ANNs). The suggested approach uses the finite impulse response artificial neural network (FIRANN) with the same structure and parameters in each relaying location. Our main objective in this work is to find a fast relay design with a detection time not dependent on fault conditions (i.e., current transformer saturation, dynamic arcing faults, short-circuit level, and system topology) and that uses only unfiltered voltage and current samples at 2 kHz. The suggested relay, which we have named FIRANN-DSDST, is composed of a FIRANN together with post-processing elements. The FIRANN is trained globally using training patterns from more than one relaying position in order to be as general as possible. The FIRANN is trained using an improved training algorithm, which depends on a new synaptic weight updating method, which we have named the mixed updating technique.
The proposed relay is trained using training patterns created by simulating a real 400-kV network from the Spanish transmission network (REE). Finally, the proposed relay is tested using simulated and real fault data. The results encourage the use of this technology in the protective relaying field.",2002,0, 6876,Diagnostic and protection of inverter faults in IPM motor drives using wavelet transform,"This paper presents a novel fault diagnostic and protection technique for interior permanent magnet (IPM) motor drives using the wavelet transform. The proposed wavelet based diagnostic and protection technique for inverter faults is developed and implemented in real time for a voltage source inverter fed IPM motor. In the proposed technique, the motor currents of different faulted and unfaulted conditions of an IPM motor drive system are preprocessed by the wavelet packet transform. The wavelet packet transformed coefficients of the motor currents are used as inputs of a three-layer wavelet neural network. The performance of the proposed diagnostic and protection technique is investigated in simulations and experiments. The proposed technique is experimentally tested on a laboratory 1-hp IPM motor drive using the ds1102 digital signal processor board. The test results showed satisfactory performance of the proposed diagnostic and protection technique in terms of speed, accuracy and reliability.",2008,0, 6877,An accurate fault classification technique for power system monitoring devices,This paper presents a novel method of classifying transmission line shunt faults. Most algorithms employed for analyzing fault data require that the fault type be classified. The older fault-type classification algorithms are inefficient because they are not effective under certain operating conditions of the power system and may not be able to accurately select the faulted transmission line if the same fault recorder monitors multiple lines. The technique described in this paper has been proven to accurately identify all ten types of shunt faults that may occur in an electric power transmission system. The other advantage of this technique is that it can be used where multiple transmission lines are present. It is able to identify the faulted line even if secondary effects are recorded in the unfaulted lines.,2002,0, 6878,Simultaneous design of controller and fault detector and its application to motor drive control system,This paper presents a simultaneous design strategy of controller and fault detector and its application to motor drive control systems. The influence of a fault in power electronic devices is analyzed and simulated. Experimental results demonstrate the effectiveness of the proposed design method,2001,0, 6879,Design optimization of a boost power factor correction converter using genetic algorithms,"This paper presents a software tool for designing a low-cost boost power factor correction front-end converter with an input electromagnetic interference filter. A genetic algorithm based discrete optimizer is used to obtain the design. A detailed and experimentally validated model of the system, including second order effects, is considered. A graphical user interface for managing the design specifications and system component databases, controlling and monitoring the optimization process, and analyzing the performance of the top designs found by the optimizer is also described.
The results of a design study for a 1.15 kW unit are presented to demonstrate the usefulness of the software tool",2002,0, 6880,Software-based delay fault testing of processor cores,"This paper presents a software-based self-testing methodology for delay fault testing. Delay faults affect the circuit functionality only when they can be activated in functional mode. A systematic approach for the generation of test vectors, which are applicable in functional mode, is presented. A graph theoretic model (represented by an IE-Graph) is developed in order to model the datapath. A finite state machine model is used for the controller. These models are used for constraint extraction so that the generated test can be applied in functional mode.",2003,0, 6881,A Method of simple adaptive control using neural networks with offset error reduction for an SISO magnetic levitation system,"This paper proposes the implementation of the method of SAC using neural networks with offset error reduction to control an SISO magnetic levitation system. In this paper, the control input for the SISO magnetic levitation system is given by the sum of the output of a simple adaptive controller and the output of neural networks. The role of the neural networks is to compensate for the nonlinearities in the magnetic levitation system by constructing a linearized model, so as to minimize the output error. The neural networks use the backpropagation algorithm for the learning process. The role of the simple adaptive controller is to perform model matching from the linear system with unknown structure to a given linear reference model. In this method, only part of the control input is fed to the PFC. Thus, the error will be reduced using this method, and the output of the magnetic levitation system can closely follow the output of the reference model. Finally, the effectiveness of this method is confirmed through experiments on the real SISO magnetic levitation system.",2010,0, 6882,Two Efficient Software Techniques to Detect and Correct Control-Flow Errors,"This paper proposes two efficient software techniques, Control-flow and Data Errors Correction using Data-flow Graph Consideration (CDCC) and Miniaturized Check-Pointing (MCP), to detect and correct control-flow errors. These techniques have been implemented based on the addition of redundant code to a given program. The novelty of these methods for online detection and correction of control-flow errors is the use of the data-flow graph alongside the control-flow graph. These techniques can first detect most of the control-flow errors in the program and then correct them automatically. Therefore, both control-flow errors and the data errors caused by them can be corrected efficiently. In order to evaluate the proposed techniques, a post compiler is used, so that the techniques can be applied to any 8086 binary, transparently. Three benchmarks (quick sort, matrix multiplication, and linked list) are used, and a total of 5000 transient faults are injected at several execution points in each program. The experimental results demonstrate that at least 93% and 89% of the control-flow errors can be detected and corrected without any data error generation by the CDCC and MCP, respectively.
Moreover, the strength of these techniques is a significant reduction in performance and memory overheads compared to traditional methods, together with remarkable correction capabilities.",2010,0, 6883,The combination method for dependent evidence and its application for simultaneous faults diagnosis,"This paper provides a method based on Dezert-Smarandache theory (DSmT) for simultaneous fault diagnosis when evidence is dependent. Firstly, according to the characteristics of simultaneous faults, a frame of discernment is given for both single fault and simultaneous fault diagnosis, and the DSmT combination rule applicable to simultaneous fault diagnosis is introduced. Secondly, the dependence of the original evidence is classified according to three main factors in information acquisition and extraction, and a method for evidence decorrelation is provided. On the other hand, weights for measuring evidence credibility are given to modify independent evidence based on a generalized ambiguity measure. Next, the DSmT combination rule is used to aggregate the modified evidence. Finally, an example of rotor fault diagnosis is given to illustrate the effectiveness of the proposed method.",2009,0, 6884,On the asymptotic performance analysis of subspace DOA estimation in the presence of modeling errors: case of MUSIC,"This paper provides a new analytic expression for the bias and root mean square (RMS) error of the estimated direction of arrival (DOA) in the presence of modeling errors. In earlier work, first-order approximations of the RMS error were derived, which are accurate for small enough perturbations. However, the previously available expressions are not able to capture the behavior of the estimation algorithm in the threshold region. In order to fill this gap, we provide a second-order performance analysis, which is valid in a larger interval of modeling errors. To this end, it is shown that the DOA estimation error for each signal source can be expressed as a ratio of Hermitian forms, with a stochastic vector containing the modeling error. Then, an analytic expression for the moments of such a ratio of Hermitian forms is provided. Finally, a closed-form expression for the performance (bias and RMS error) is derived. Simulation results indicate that the new result is accurate into the region where the algorithm breaks down.",2006,0, 6885,Asymptotic exactness of parameter-dependent lyapunov functions: An error bound and exactness verification,"This paper provides an approximate approach to a robust semidefinite programming problem with a functional variable and shows its asymptotic exactness. This problem covers a variety of control problems including robust stability/performance analysis with a parameter-dependent Lyapunov function. In the proposed approach, an approximate semidefinite programming problem is constructed based on the division of the set of parameter values. This approach is asymptotically exact in the sense that, as the resolution of the division becomes higher, the optimal value of the constructed approximate problem converges to that of the original problem. Our convergence analysis is quantitative. In particular, this paper gives an a priori upper bound on the discrepancy between the optimal values of the two problems.
Moreover, it discusses how to verify that an optimal solution of the approximate problem is actually optimal also for the original problem.",2007,0, 6886,Simulation of Performance in Error,"This paper aims to describe S.PERERE (simulation of performance in error), a human behavior computational simulator whose main objective is to produce, in a random way, human error states. For the generation of error states, S.PERERE has a behaviour disturber mechanism and also a mechanism to start the perturbations. The construction of the simulator is based on the ACT-R (atomic components of thought - rational) cognitive architecture",2005,0, 6887,Fault-oriented software robustness assessment for multicast protocols,"This paper reports a systematic approach for detecting software defects in multicast protocol implementations. We deploy a fault-oriented methodology and an integrated test system targeting software robustness vulnerabilities. The primary method is to assess protocol implementations by non-traditional interface fault injection that simulates network attacks. The test system includes a novel packet driving engine, a PDU generator based on Strengthened BNF notation and a few auxiliary tools. We apply it to two multicast protocols, IGMP and PIM-DM, and investigate their behaviors under active functional attacks. Our study proves its effectiveness for promoting the production of more reliable multicast software.",2003,0, 6888,"Applying multi-agent system technology in practice: automated management and analysis of SCADA and digital fault recorder data","This paper reports on the use of multi-agent system technology to automate the management and analysis of SCADA and digital fault recorder (DFR) data. The multi-agent system, entitled Protection Engineering Diagnostic Agents (PEDA), integrates legacy intelligent systems that analyze SCADA and DFR data to provide data management and online diagnostic information to protection engineers. Since November 2004, PEDA agents have been intelligently interpreting and managing data online at a transmission system operator in the U.K. As the results presented in this paper demonstrate, PEDA supports protection engineers by providing access to interpreted power systems data via the corporate intranet within minutes of the data being received. In this paper, the authors discuss their experience of developing a multi-agent system that is robust enough for continual online use within the power industry. The use of existing agent development toolsets and standards is also discussed.",2006,0, 6889,The effect of SPECT reconstruction corrections on the absolute and relative quantitative accuracy of myocardial perfusion studies,"This paper reports the findings of investigations into the performance of SPECT corrections for photon attenuation, distance-dependent resolution loss and photon scatter on the absolute and relative quantitative accuracy of myocardial perfusion studies. Measurements of myocardial wall thickness and myocardial infarct size were used to assess the absolute and relative quantitative accuracy, respectively. A series of phantom studies were performed and additional information was gathered from a group of 37 normal patients. Each set of data was reconstructed with (1) filtered back-projection (FBP), (2) ordered subset expectation maximization (OSEM), (3) OSEM plus attenuation correction (AC), (4) OSEM plus detector response compensation (DRC), (5) OSEM plus AC and DRC, and (6) OSEM plus AC, DRC and scatter correction (SC).
The image analysis toolbox iQuant was used to perform the analysis. Both patient and phantom data showed SPECT image corrections to have a significant effect on myocardial wall thickness, with reconstructions involving SC providing the most accurate results. Phantom data showed that good estimates of anterior and lateral wall infarct sizes are provided by all reconstruction techniques, whereas good estimates of inferior and septal wall infarct sizes are only provided by reconstructions including AC. Accurate measurement of infarcts in any location and of any size is only possible with SC. This analysis suggests that in order to achieve perfect quantitative accuracy of myocardial perfusion studies, corrections for photon attenuation, distance-dependent resolution loss and photon scatter should be applied. However, for routine clinical analysis involving visual interpretation and an estimate of infarct size, the application of attenuation correction and detector resolution compensation might be considered sufficient.",2004,0, 6890,PLC Control Logic Error Monitoring and Prediction Using Neural Network,"This paper reviews monitoring and error prediction of PLC programs using a neural network. In a manufacturing line controlled by PLC devices, the PLC program is the underlying component: it constitutes the controlling mechanism. The level of automation in the production line relies on the control mechanism practiced. In modern manufacturing, PLC devices can handle the whole production line, given that a structured and smart PLC program is executed. In other words, the PLC program can manage the whole process structure, consisting of a set of procedures. We present a method to monitor the PLC program and predict PLC errors using a neural network. Being predictive in nature, the neural network method can rigorously monitor process signals from sensors, sensed during operation of the PLC devices or execution of the PLC program. Subsequently, a neural network algorithm is applied for the analysis of the signals. In this way, thorough monitoring of the PLC program can find possible errors from temporal parameters (e.g., voltage, bias, etc.). In addition, possible alterations in the program and irregularities can be minimized. The result can easily be used for fault detection, maintenance, and decision support in a manufacturing organization. Similarly, it can lessen machine downtime and prevent possible risks.",2008,0, 6891,Fault Anticipation Software System architecture for aircraft EMA,"This paper shows a Fault Anticipation Software System (FASS) architecture, for diagnosis and prognosis purposes, to be implemented in a FASS unit, which nowadays is a necessary tool to diagnose aircraft Electro Mechanical Actuators, which are being implemented in order to face up to “More Electric Aircraft” (MEA) technologies. This work shows the requirements of a FAS system, and a fault diagnosis and prognosis structure proposed to solve this challenge.",2009,0, 6892,Compensation strategies in the PWM-VSI topology for a fault tolerant induction motor drive system,"This paper shows how to integrate fault detection, fault identification and fault compensation into two different types of induction motor drive systems. The proposed strategies can compensate both open-circuit and short-circuit failures occurring in the converter power devices. The fault compensation is achieved by reconfiguring the power converter topology with the help of isolating and connecting devices, allowing continuous post-fault operation of the system.
Two isolation techniques are investigated and experimental results demonstrate the validity of the proposed fault tolerant strategy.",2003,0, 6893,Formally verified Byzantine agreement in presence of link faults,"This paper shows that deterministic consensus in synchronous distributed systems with link faults is possible, despite the impossibility result of Gray (1978). Instead of using randomization, we overcome this impossibility by moderately restricting the inconsistency that link faults may cause system-wide. Relying upon a novel hybrid fault model that provides different classes of faults for both nodes and links, we provide a formally verified proof that the (m+1)-round Byzantine agreement algorithm OMH (Lincoln and Rushby (1993)) requires n > 2f_ls + f_lr + f_lra + 2(f_a + f_s) + f_o + f_m + m nodes for transparently masking at most f_ls broadcast and f_lr receive link faults (including at most f_lra arbitrary ones) per node in each round, in addition to at most f_a, f_s, f_o, f_m arbitrary, symmetric, omission, and manifest node faults, provided that m ≥ f_a + f_o + 1. Our approach to modeling link faults is justified by a number of theoretical results, which include tight lower bounds for the required number of nodes and an analysis of the assumption coverage in systems where links fail independently with some probability p.",2002,0, 6894,The ATPG Conflict-Driven Scheme for High Transition Fault Coverage and Low Test Cost,"This paper presents two new conflict-driven techniques for improving transition fault coverage using multiple scan chains. These techniques are based on a novel test application scheme designed to break the functional dependency of broadside testing. The two new techniques analyze the ATPG conflicts in broadside test generation and try to control the flip-flops with the most influence on the fault coverage. The conflict-driven method selects some flip-flops that work in the enhanced mode and distributes them into different chains. In the multiple scan chain architecture, to avoid too many scan-in pins, some chains are driven by the same scan-in pin to construct a tree-based architecture. Based on this architecture, the new test application scheme allows some flip-flops to work in the enhanced mode, while most of the others work in the traditional broadside mode. With the efficient conflict-driven selection method, fault coverage is improved greatly, which can also reduce test application time and compress test data volume. Experimental results show that fault coverage based on the proposed method is comparable to the enhanced scan.",2009,0, 6895,Simple and effective techniques for core-region detection and slant correction in offline script recognition,"This paper presents two new preprocessing techniques for cursive script recognition. Enhanced algorithms for core-region detection and effective uniform slant angle estimation are proposed. Reference lines composing the core-region are usually obtained as the ones surrounding the highest density peaks, but they are strongly affected by the presence of long horizontal strokes and erratic characters in the word. This causes confusion with the actual core-region and leads to decisive errors in normalizing the word. To overcome this problem in core-region detection, a quantile is introduced to make the resulting process robust. On the other hand, the research community has introduced computationally heavy approaches to remove slant in cursive script.
Therefore, a simple, formalized and effective method is presented for the detection and removal of the slant angle of offline cursive handwritten words, avoiding heavy experimental efforts. Additionally, words that are already unslanted are not negatively affected by applying this algorithm. The core-region detection is based on statistical features, while the slant angle estimation is based on structural features of the word image. The algorithms are tested on the IAM benchmark database of cursive handwritten words. Promising results for core-region detection and slant angle estimation/removal are reported and compared with the widely applied Bozinovic and Srihari method (BSM).",2009,0, 6896,Wavelet criteria for identification of arc intermittent faults in medium voltage networks,"This paper presents two novel criteria for the identification of arc earth faults in medium voltage (MV) networks. These criteria are based on the discrete wavelet transform (DWT). The product of zero sequence current and voltage details is used to detect and to discriminate the internal faults from the external ones. The new criteria were tested in an MV network. The test results show that the new criteria are very selective, accurate and effective, especially during arc intermittent short-circuits in compensated networks.",2010,0, 6897,Error detection for H.264/AVC coded video based on artifact characteristics,"This paper proposes a novel error detection algorithm for the H.264/AVC coded video stream based on the analysis of artifact characteristics. Firstly, the visual distortions caused by transmission are classified and analyzed. Then, two methods are implemented according to the different kinds of visual artifacts. Compared with conventional approaches, the proposed algorithm considers not only errors occurring on the boundaries of the macroblocks (MBs) but also errors appearing inside the MBs. Simulation results demonstrate that our algorithm shows good performance in helping conceal the erroneous MBs to attain high video quality.",2010,0, 6898,On the outphasing power amplifier nonlinearity analysis and correction using digital predistortion technique,"This paper proposes a comprehensive theoretical and experimental analysis of the source of nonlinearity exhibited by outphasing based power amplifiers. The important load-pulling effect engendered by the outphasing decomposition and the Chireix combiner is first investigated. Its effect on the behavior of the two power amplifiers is then identified as the dominant source of nonlinearity in the LINC system. In addition, this paper suggests the application of a baseband predistortion technique to mitigate this nonlinear behavior.",2008,0, 6899,Magnet flux nulling control of interior PM machine drives for improved response to short-circuit faults,"This paper proposes a control method to null the magnet flux in an interior permanent magnet (IPM) motor following short-circuit type faults in either the inverter drive or the motor stator windings. Phase control is employed to implement the flux nulling control method so that it is possible to take advantage of a zero sequence current in order to minimize the current in the shorted phase. It is shown that phase control results in a smaller induced current than employing a synchronous frame dq0 current regulator. The induced torque is also less than that of employing a purposely commanded symmetrical short-circuit in response to a short-circuit type fault.
In the paper, the complete magnet flux nulling control algorithm is derived with reference to the proposed phase current control method. The impact of controlling the zero sequence on the resulting phase currents is presented. Both simulation and experimental results are presented, verifying the operation of the proposed methods.",2004,0, 6900,Implementation of online air-gap torque monitor for detection of squirrel cage rotor faults using TMS320C31,"This paper proposes a digital signal processor (DSP) based implementation of an online air-gap torque estimator for the detection of squirrel cage rotor faults. Two noninvasive, computationally efficient methods for online air-gap torque monitoring are discussed. In both methods, the air-gap torque is computed from two stator voltages and currents, which is cost-effective compared with the use of a conventional torque transducer. The online air-gap torque monitor has been implemented using a TMS320C31 and successfully tested on faulty cage rotors.",2002,0, 6901,Fault-Tolerant Optimal Neurocontrol for a Static Synchronous Series Compensator Connected to a Power Network,"This paper proposes a fault-tolerant optimal neurocontrol scheme (FTONC) for a static synchronous series compensator (SSSC) connected to a multi-machine benchmark power system. The dual heuristic programming technique and radial basis function neural networks are used to design a nonlinear optimal neurocontroller (NONC) for the external control of the SSSC. Compared to the conventional external linear controller, the NONC improves the damping performance of the SSSC. The internal control of the SSSC is achieved by a conventional linear controller. A sensor evaluation and (missing sensor) restoration scheme (SERS) is designed by using auto-associative neural networks and the particle swarm optimization technique. This SERS provides a set of fault-tolerant measurements to the SSSC controllers and therefore guarantees fault-tolerant control for the SSSC. The proposed FTONC is verified by simulation studies in the PSCAD/EMTDC environment",2006,0,5136 6902,A framework for quantifying error proneness in software,"This paper proposes a framework for assessing quantitatively the error-proneness of computer program modules. The model uses an information theory approach to derive an error proneness index that can be used in a practical way. Debugging and testing take at least 40% of a software project's effort, but do not uncover all defects. While current research looks at identifying problem modules in a program, no attempt is made at a quantitative error-proneness evaluation. By quantitatively assessing a module's susceptibility to error, we are able to identify error prone paths in a program and enhance testing efficiency. The goal is to identify error prone paths in a program using genetic algorithms. This increases software reliability, aids in testing design, and reduces software development cost",2000,0, 6903,A Merge Method for Decentralized Discrete-Event Fault Diagnosis,This paper proposes a generic component model (GCM) to give a functional representation of large scale systems. The GCM rests on the notion of services provided by components and their organization into operating modes. To each operating mode corresponds a behavior model used as a reference to diagnose faults.
Faults are detected by local diagnosers built on subsystems using a merging procedure, without any global model or global diagnoser.",2008,0, 6904,Fault tolerant design for X-by-wire vehicle,"This paper proposes a hardware/software design for a fault tolerant X-by-wire system that consists of two front steering motors and four drive motors. The vehicle lateral motion is controlled by the active front steering system in the normal state, and by the direct yaw-moment control system when the steering systems fail. Numerical simulations demonstrate the fault tolerance ability of the proposed system.",2004,0, 6905,On the use of VHDL simulation and emulation to derive error rates,"This paper proposes a high level technique to inject transient faults in processor-like circuits, and a convenient way to collect and analyze the fault effects in order to cope with them. Faults can be injected in all sensitive parts of the design, such as registers, flip-flops and memory. This approach was implemented and tested on an 8051-like micro-controller VHDL description, emulated on a Virtex FPGA platform. Experimental results of this technique on the standard and on the SEU-hardened cores show a dramatic reduction in the execution time of the experiments, allowing early intervention to protect dedicated cores at low cost.",2001,0, 6906,A Distributed Approach to Autonomous Fault Treatment in Spread,"This paper presents the design and implementation of the distributed autonomous replication management (DARM) framework built on top of the Spread group communication system. The objective of DARM is to improve the dependability characteristics of systems through a fault treatment mechanism. Unlike many existing fault tolerance frameworks, DARM focuses on deployment and operational aspects, where the gain in terms of improved dependability is likely to be the greatest. DARM is novel in that recovery decisions are distributed to each individual group deployed in the system, eliminating the need for a centralized manager with global information about all groups. This scheme allows groups to perform fault treatment on themselves. A group leader in each group is responsible for fault treatment by means of replacing failed group members; the approach also tolerates failure of the group leader. The advantages of the distributed approach are: (i) no need to maintain globally centralized information about all groups, which is costly and limits scalability; (ii) reduced infrastructure complexity; and (iii) less communication overhead. We evaluate the approach experimentally to validate its fault handling capability; the recovery performance of a system deployed in a local area network is evaluated. The results show that applications can recover to their initial system configuration in a very short period of time.",2008,0, 6907,A multi-agent based fault tolerance system for distributed multimedia object oriented environment: MAFTS,"This paper presents the design and implementation of MAFTS (a multi-agent based fault-tolerance system), which runs on a distributed multimedia object-oriented environment. DOORAE (distributed object oriented collaboration environment) is a good example of the foundation technology for computer-based multimedia collaborative work; it allows development of a required application by combining many agents composed of functional module units when a user wishes to develop a new application field. MAFTS has been designed and implemented in the DOORAE environment.
It is a multi-agent system implemented with object-oriented concepts. The main idea is to detect an error by using a polling method. This system detects an error by periodically polling the processes related to sessions. It then classifies the type of error automatically by using learning rules. A characteristic of this system is that it recovers a session using the same method by which it creates one.",2005,0, 6908,Fault-Secure Multidetector Fire Protection System for Trains,"This paper presents the design of a low-cost, robust, and fault-secure fire protection system for trains. This system consists of three temperature detectors and three smoke detectors, whose outputs are connected to a controller. The system produces three warning signals: fire, alarm, and cigarette. The system takes into account the presence or absence of wind and is expected to be extremely useful in cargo trains in developing countries. Also, the probability of false alarms is minimized. Finally, fault-tolerance is introduced into the system, and the increase in reliability is calculated",2007,0, 6909,On Line Fault Detection and an Adaptive Algorithm to Fast Distance Relaying,"This paper presents the design of a hybrid scheme of wavelet transforms and an adaptive Fourier filtering technique for on-line fault detection and phasor estimation for fast distance protection of transmission lines. The wavelet transform is used as a signal processing tool. The sampled voltage and current signals at the relay location are decomposed using wavelet transform multi-resolution analysis (MRA). The decomposed signals are used for fault detection and as input to the phasor estimation algorithm. The phasor estimation algorithm possesses the advantage of recursive computation, and a decaying dc offset component is removed from the fault signals by using an adaptive compensation method. A fault detection index and a variable data window scheme are embedded in the algorithm. The proposed scheme provides the capability for fast tripping decisions while taking accuracy into account. Extensive simulation tests and the comparative evaluation presented prove the efficacy of the proposed scheme in distance protection.",2008,0, 6910,Adaptive single phase fault identification and selection technique for maintaining continued operation of distributed generation,This paper presents the development of an adaptive rule based fault identification and phase selection technique to be used in the implementation of single phase auto-reclosing (SPAR) in power distribution networks with distributed generators (DGs). The proposed method uses only the three line currents measured at the relay point. The waveform patterns of the phase angle and symmetrical components of the three line currents during the transient period of a fault condition are analyzed using IF-THEN conditioning rules in order to determine the type of single phase-to-ground fault and initiate a single-pole auto-reclosing command, or a three-phase reclosing command for other types of faults. The proposed method is implemented and verified in the PSCAD/EMTDC power system software.
The test results show that the proposed method can correctly detect the faulty phase within one cycle in a power distribution network with DGs under various network operating conditions.,2010,0, 6911,Development of customized distribution automation system (DAS) for secure fault isolation in low voltage distribution system,"This paper presents the development of a customized distribution automation system (DAS) for secure fault isolation at the low voltage (LV) downstream, 415/240 V, using the Tenaga Nasional Berhad (TNB) distribution system. It is the first DAS research work done on a customer-side substation for automated operation and control between the consumer-side system and the substation. Most of the work is focused on developing very secure fault isolation whereby the fault is detected, identified, isolated and remedied in a few seconds. Supervisory Control and Data Acquisition (SCADA) techniques have been utilized to build a Human Machine Interface (HMI) that provides graphical operator interface functions to monitor and control the system. Microprocessor based Remote Monitoring Devices have been used so that customized software can be downloaded to the hardware. Power Line Carrier (PLC) has been used as the communication medium between the consumer and the substation. As a result, a complete DAS fault isolation system has been developed, yielding cost reduction, maintenance time savings and less human intervention during faults.",2008,0, 6912,The error concealment feature in the H.26L test model,"This paper presents the error concealment (EC) feature implemented by the authors in the test model of the draft ITU-T video coding standard H.26L. The selected EC algorithms are based on weighted pixel value averaging for INTRA pictures and boundary-matching-based motion vector recovery for INTER pictures. The specific concealment strategy and some special methods, including the handling of B-pictures, multiple reference frames and entire frame losses, are described. Both subjective and objective results are given based on simulations under Internet conditions. The feature was adopted and is now included in the latest H.26L reference software TML-9.0.",2002,0, 6913,Soft error propagations and effects analysis on CAN controller,"This paper presents the evaluation of soft error effects and propagations on a Controller Area Network (CAN) controller using the SINJECT fault injection tool. To do this, three main sub-modules of the core were selected as fault injection targets. The experimental results for the fault injection sites show that the Register module is more fault tolerant than the other parts, since only 0.3% of the injected faults led to a system failure. On the other hand, the Bit Timing module is the most vulnerable part in the CAN controller, since the failure rate for this module is about 93%. The Bit Stream Processor is somewhere in the middle: of all the faults injected into the Bit Stream Processor, 6.2% led to system failure, 70.1% were detected and recovered during simulations, and 23.7% remained latent in the controller.",2010,0, 6914,Adaptive multiple fault detection and alarm processing for loop system with probabilistic network,"This paper presents fault detection and alarm processing in a loop system with a fault detection system (FDS). The FDS consists of an adaptive architecture with a probabilistic neural network (PNN). Training the PNN uses the primary/backup information of protective devices to create the training sets.
However, when the network topology changes, adaptation capability becomes important in neural network applications. The PNN can be retrained and estimated effectively. With a looped system, computer simulations were conducted to show the effectiveness of the proposed system, and the PNN adapts to network topology changes.",2004,0, 6915,Scaling Byzantine Fault-Tolerant Replication to Wide Area Networks,"This paper presents the first hierarchical Byzantine fault-tolerant replication architecture suitable for systems that span multiple wide area sites. The architecture confines the effects of any malicious replica to its local site, reduces the message complexity of wide area communication, and allows read-only queries to be performed locally within a site for the price of additional hardware. A prototype implementation is evaluated over several network topologies and is compared with a flat Byzantine fault-tolerant approach",2006,0, 6916,Error Sector Inventory on Optical Disc for Defect Avoidance during Recording / Playback,"This paper proposes a method to create an on-the-fly inventory of error sectors, so that subsequent recordings / playback are free from audio-video hiccups and retries and still compliant with the DVD-Video / DVD+RW logical standards. This in turn aids in creating a good user experience (no audio-video hiccups) and helps in using processor MIPS efficiently (no retries).",2008,0, 6917,Stator Current and Motor Efficiency as Indicators for Different Types of Bearing Faults in Induction Motors,"This paper proposes a new approach that uses the stator current and efficiency of induction motors as indicators of rolling-bearing faults. After a presentation of the state of the art in condition monitoring of vibration and motor current for the diagnostics of bearings, this paper illustrates the experimental results on four different types of bearing defects: crack in the outer race, hole in the outer race, deformation of the seal, and corrosion. The first and third faults have not been previously considered in the literature, with the latter being analyzed in other research works, even if obtained in a different way. Another novelty introduced by this paper is the analysis of the decrease in efficiency of the motor with a double purpose: as an alarm for incipient faults and as an evaluation of the extent of energy waste resulting from the persistence of the fault condition before the breakdown of the machine.",2010,0, 6918,An Autonomous Robust Fault Tolerant Control System,This paper proposes a new autonomous robust fault tolerant control system. It combines the advantages of passive and active fault tolerant control technologies. The system responds instantly to failures to guarantee stability in an emergency and eventually obtains the best performance for the faulty system. A robust reliable control example is given to show its function in an emergency.,2006,0, 6919,Fault recovery for a distributed SP-based delay constrained multicast routing algorithm,"This paper proposes a new distributed shortest path (SP) based delay constrained multicast routing algorithm which is capable of constructing a delay constrained multicast tree when node failures occur during the tree construction period and recovering from any node failure in a multicast tree during the on-going multicast session without interrupting the running traffic on the unaffected portion of the tree.
The proposed algorithm performs failure recovery efficiently, giving better performance in terms of the number of exchanged messages and the convergence time than the existing distributed SP-based delay constrained multicast routing algorithms in a network where node failures occur.",2002,0, 6920,Induction machines performance evaluator 'torque speed estimation and rotor fault diagnostic',"This paper proposes a new DSP based tool for evaluating the performance of induction motors based on the data extracted from the stator current. In the proposed algorithm, a pattern recognition technique based on the Bayes minimum error classifier is developed to detect incipient rotor faults such as broken rotor bars and static eccentricity in induction motors. Also, part of the algorithm is based on the acceleration method presented in IEEE Std. 112. It helps to calculate the motor's torque using two line currents and voltages. The use of linear and quadratic time-frequency representations is investigated as a viable solution to the task at hand. Speed information is vital in this approach, so an algorithm to track the speed-related saliency-induced harmonics from the machine's line current spectrogram is presented. Capturing the harmonics gives the rotor speed, which can also be used to extract the feature vector for diagnostics. The implementation of the algorithm on the TMS320C6000 family of DSP chips is currently underway. The complete algorithm is then used to obtain the induction motor's performance curves. This is a complete stand-alone panel-mounted induction motor diagnostic tool currently being developed in their lab. This package will be used in conjunction with a drive system (inverter) for online performance monitoring and preventing unwanted shutdown of the induction motor. The difficulties encountered, including a limited dynamic range and the presence of cross terms, are addressed and the suggested solution is provided. Experimental results corroborating the proposed algorithm are presented, and the advantages and disadvantages of such an approach are touched upon",2002,0, 6921,State monitoring and fault diagnosis of the PWM converter using the magnetic field near the inductor components,"This paper proposes a new fault diagnostic method for the PWM converter. A loop magnetic near-field probe is used to detect the magnetic field near the inductor components in the PWM converters, and the measured waveform is utilized as the diagnostic criterion. The features of the waveform are extracted by the Fast Fourier Transform, and the low- and high-order harmonic components of interest are used to classify the states of the converters. The low-order harmonic components are classified by a back-propagation neural network, and the high-order harmonic components are classified by a simple mathematical method.
This algorithm can also find the negative feedback and feedforward loops of control system automatically during the digraph construction. Simulation results show that the fault tree can then be simply constructed from the digraph by using a number of methods in the literatures.",2010,0, 6923,"Framework for modeling software reliability, using various testing-efforts and fault-detection rates","This paper proposes a new scheme for constructing software reliability growth models (SRGM) based on a nonhomogeneous Poisson process (NHPP). The main focus is to provide an efficient parametric decomposition method for software reliability modeling, which considers both testing efforts and fault detection rates (FDR). In general, the software fault detection/removal mechanisms depend on previously detected/removed faults and on how testing efforts are used. From practical field studies, it is likely that we can estimate the testing efforts consumption pattern and predict the trends of FDR. A set of time-variable, testing-effort-based FDR models were developed that have the inherent flexibility of capturing a wide range of possible fault detection trends: increasing, decreasing, and constant. This scheme has a flexible structure and can model a wide spectrum of software development environments, considering various testing efforts. The paper describes the FDR, which can be obtained from historical records of previous releases or other similar software projects, and incorporates the related testing activities into this new modeling approach. The applicability of our model and the related parametric decomposition methods are demonstrated through several real data sets from various software projects. The evaluation results show that the proposed framework to incorporate testing efforts and FDR for SRGM has a fairly accurate prediction capability and it depicts the real-life situation more faithfully. This technique can be applied to wide range of software systems",2001,0, 6924,Locating Phase-to-Ground Short-Circuit Faults on Radial Distribution Lines,"This paper proposes a new single phase-to-ground short-circuit fault location algorithm for overhead three-phase radial distribution lines with single-ended measurements using the sinusoidal steady-state analysis method. By using this approach, two sinusoidal signals with different frequencies are first injected to the faulted line. By measuring the voltages and currents at the sending end and solving some nonlinear distributed-parameter equations, the distances and resistances of all possible fault candidates can be determined. A feature extraction method is derived to distinguish the actual fault from other pseudofault candidates. A fault locator based on the proposed approach is designed and implemented for a real-world problem. Physical model experiments and the field tests on radial distribution lines are presented to validate the proposed fault location approach",2007,0, 6925,A simulation-based soft error estimation methodology for computer systems,This paper proposes a simulation-based soft error estimation methodology for computer systems. Accumulating soft error rates (SERs) of all memories in a computer system results in pessimistic soft error estimation. This is because memory cells are used spatially and temporally and not all soft errors in them make the computer system faulty. 
Our soft-error estimation methodology considers the locations and the timings of soft errors occurring at every level of the memory hierarchy and estimates the soft errors of the whole system using instruction-set simulation. Our experiment demonstrates that the reliability of computer systems depends not only on the SERs of the memories but also on the behavior of the software running on the systems.",2006,0, 6926,Speed-sensorless induction motor control system using a rotor speed compensation with the rotor flux error,"This paper proposes a speed-sensorless induction motor control system using a rotor speed compensation with the rotor flux error. The rotor flux observer using the reduced-dimensional state estimator technique instead of directly measuring the rotor flux indirectly estimates the rotor flux. The estimated rotor speed is obtained directly from the electrical frequency, the slip frequency, and the rotor speed compensation with the estimated f-axis rotor flux. To precisely estimate the rotor flux, the actual value of the stator resistance, whose actual variation is reflected, is derived. An implementation of PWM pulses using the effective space vector modulation (SVM) is briefly mentioned. For fast calculation and improved performance of the proposed algorithm, all control functions are implemented in software using a digital signal processor (DSP) with its environmental circuits. It is shown through experimental results that the proposed system gives good performance for the speed-sensorless induction motor control.",2004,0, 6927,Syntactic Detection and Correction of Misrecognitions in Mathematical OCR,"This paper proposes a syntactic method for detection and correction of misrecognized mathematical formulae for a practical mathematical OCR system. Linear monadic context-free tree grammar (LM-CFTG) is employed as a formal framework to define syntactically acceptable mathematical formulae. For the purpose of practical evaluation, a verification system is developed, and the effectiveness of the method is demonstrated by using the ground-truthed mathematical document database InftyCDB-1 and a misrecognition database newly constructed for this study. A satisfactory number of misrecognitions are detected and delivered to the correction process.",2009,0, 6928,Influence of fault arc characteristics on the accuracy of digital fault locators,"This paper proposes a time domain model of a fault locator with special reference to fault arc nonlinearities by applying the MODELS language of the EMTP. It has been found that an impedance relay type locator is significantly influenced by the fault arc nonlinearities, while the current diversion ratio method is not influenced. This validates the advantage of the current diversion approach over the impedance approach.",2001,0, 6929,Analysis of System-Failure Rate Caused by Soft-Errors using a UML-Based Systematic Methodology in an SoC,"This paper proposes an analytical method to assess the soft-error rate (SER) in the early stages of a System-on-Chip (SoC) platform-based design methodology. The proposed method gets an executable UML (Unified Modeling Language) model of the SoC and the raw soft-error rate of different parts of the platform as its inputs. Soft errors on the design are modeled by disturbances on the value of attributes in the classes of the UML model and disturbances on opcodes of software cores. The dynamic behavior of each core is used to determine the propagation probability of each variable disturbance to the core outputs.
Furthermore, the SER and the execution time of each core in the SoC and a Failure Modes and Effects Analysis (FMEA) that determines the severity of each failure mode in the SoC are used to compute the System-Failure Rate (SFR) of the SoC.",2007,0, 6930,An error messages clustering-based fault diagnosis framework in home networks,"This paper proposes an efficient fault detection and analysis framework based on clustering error messages. Most home network middleware provide simple rule-based fault processing mechanisms optimized only to handle error messages individually. The proposed architecture focuses on detecting and analyzing error messages generated by identical faults as collectively as possible. For this, three relation graphs are used to decide the fault range and to find error messages. Message filters are also used to skip analyzing the found messages.",2010,0, 6931,A Flexible Macroblock Scheme for Unequal Error Protection,"This paper proposes an enhanced error protection scheme using flexible macroblock ordering in H.264/AVC. The algorithm uses a two-phase system. In the first phase, the importance of every macroblock is calculated based on its influence on the current frame and future frames. In the second phase, the macroblocks with the highest impact factor are grouped together in a separate slice group using the flexible macroblock ordering feature of H.264/AVC. By using an unequal error protection scheme, the slice group containing the most important macroblocks can be better protected than the other slice group. The proposed algorithm offers better concealment opportunities than the algorithms which are predefined for flexible macroblock ordering in H.264/AVC.",2006,0, 6932,A fault diagnosis method for power system based on multilayer information fusion structure,"This paper proposes an information fusion method for diagnosis. A multilayer information fusion structure, comprising data preprocessing, feature extraction and decision making, is constructed to deal with objects such as currents, voltages and waveforms. A Petri net and a fault matching method for WAMS data are used in the characteristic fusion. The application of this framework in the simulation resolves the problem of information loss, which opens a way toward advanced applications in the smart grid.",2010,0, 6933,Neutralization of errors and attacks in wireless ad hoc networks,"This paper proposes and evaluates strategies to build reliable and secure wireless ad hoc networks. Our contribution is based on the notion of inner-circle consistency, where local node interaction is used to neutralize errors/attacks at the source, both preventing errors/attacks from propagating in the network and improving the fidelity of the propagated information. We achieve this goal by combining statistical (a proposed fault-tolerant cluster algorithm) and security (threshold cryptography) techniques with application-aware checks to exploit the data/computation that is partially and naturally replicated in wireless applications.
We have prototyped an inner-circle framework with the ns-2 network simulator, and we use it to demonstrate the idea of inner-circle consistency in two significant wireless scenarios: (1) the neutralization of black hole attacks in AODV networks and (2) the neutralization of sensor errors in a target detection/localization application executed over a wireless sensor network.",2005,0, 6934,FISCADE - A Fault Injection Tool for SCADE Models,"This paper presents the FISCADE fault injection tool which has been developed as a plug-in to SCADE (Safety-Critical Application Development Environment). The tool automatically replaces original operators with fault injection nodes (FINs). A FIN is a node that encapsulates the original operator so the operator can be replaced or the operator output can be manipulated. During execution of the generated source code, FISCADE controls the SCADE simulator to execute the model, inject the fault, and log the results. The tool allows the user to inject errors (activated faults) in all signals in the model. Furthermore, FISCADE can simulate specification or design errors by automatically replacing operators with fault injection nodes, as well as simulating transient, intermittent or permanent faults affecting memories and CPU registers. The tool automatically performs a pre-injection analysis to reduce the number of fault injection experiments needed and supports the work of configuring and carrying out automated fault injection campaigns.",2007,0, 6935,Verification of a Byzantine-Fault-Tolerant Self-Stabilizing Protocol for Clock Synchronization,"This paper presents the mechanical verification of a simplified model of a rapid Byzantine-fault-tolerant self-stabilizing protocol for distributed clock synchronization systems. This protocol does not rely on any assumptions about the initial state of the system except for the presence of sufficient good nodes, thus making the weakest possible assumptions and producing the strongest results. This protocol tolerates bursts of transient failures, and deterministically converges within a time bound that is a linear function of the self-stabilization period. A simplified model of the protocol is verified using the symbolic model verifier (SMV). The system under study consists of 4 nodes, where at most one of the nodes is assumed to be Byzantine faulty. The model checking effort is focused on verifying correctness of the simplified model of the protocol in the presence of a permanent Byzantine fault as well as confirmation of claims of determinism and linear convergence with respect to the self-stabilization period. Although model checking results of the simplified model of the protocol confirm the theoretical predictions, these results do not necessarily confirm that the protocol solves the general case of this problem. Modeling challenges of the protocol and the system are addressed. A number of abstractions are utilized in order to reduce the state space.",2008,0, 6936,A prototype of a VHDL-based fault injection tool,"This paper presents the prototype of an automatic and model-independent fault injection tool for use on an IBM-PC (or compatible) platform. The tool has been built around a commercial VHDL simulator. With this tool, both transient and permanent faults, of a wide range of types, can be injected into medium-complexity models.
Another remarkable aspect of the tool is the fact that it is able to analyse the results obtained from the injection campaigns, in order to study the error syndrome of the system model and/or validate its fault-tolerance mechanisms. Some results of a fault injection campaign carried out to validate the dependability of a fault tolerant microcomputer system are shown. We have analysed the pathology of the propagated errors, measured their latencies, and calculated both error detection and recovery latencies and coverages.",2000,0, 6937,Resilient current control of five-phase induction motor under asymmetrical fault conditions,"This paper presents the resilient current control of a five-phase induction motor under asymmetrical fault conditions. This kind of control scheme ensures that the five-phase induction motor operates continuously and steadily without additional hardware connections in case of loss of up to two phases, which is of importance to some specific applications where fault tolerance and high reliability are required. The five-phase induction motor proposed adopts the concentrated winding structure and makes use of the third harmonic currents to generate the nearly rectangular flux distribution in the air gap, resulting in improvement of flux density and increase of output torque. Under asymmetrical fault conditions, the proper third harmonic currents are still superimposed on the fundamental currents of the five-phase induction motor with the remaining phases so as to maintain a nearly rectangular air-gap flux similar to the symmetrical conditions. Consequently, the five-phase induction motor can still produce improved flux density and increased output torque. Simulation analysis and experimental verifications on the resilient current control of the five-phase induction motor under the symmetrical and asymmetrical conditions are included in this paper.",2002,0, 6938,Evaluating Alpha-induced soft errors in embedded microprocessors,"This paper presents the results of alpha single-event-upset tests of an embedded 8051 microprocessor. Cross sections for the different memory resources (i.e., internal registers, code RAM, and user memory) are reported, as well as the error rate for different codes implemented as test benchmarks. Test results are then discussed to find the contribution of each available resource to the overall device error rate.",2009,0, 6939,Statistical analysis of random errors from calibration standards,"This paper presents the results of estimating the errors caused by the VNA calibration standards, using StatistiCAL designed at NIST. A GCPW-stripline-GCPW type transition and a test fixture for transceiver module testing are chosen to do the study over 8 to 12 GHz. Calibration standards, that is, throughs, lines and shorts, are fabricated in-house. The electromagnetic modeling needed in the development of the calibration standards is done through Ansoft HFSS. The results are used to verify HFSS simulations and they are effectively applied to module testing and performance screening.",2005,0, 6940,A New Approach to Ungrounded Fault Location in a Three-Phase Underground Distribution System using Combined Neural Networks & Wavelet Analysis,"This paper presents the results of investigations into a new fault location technique based on a new modified cable model, in the EMTP software.
The simulated data is then analysed using an advanced signal processing technique based on wavelet analysis to extract useful information from the signals, and this is then applied to artificial neural networks (ANNs) for locating ungrounded shunt faults in a practical underground distribution system. The paper concludes by comprehensively evaluating the performance of the technique developed in the case of ungrounded short circuit faults. The results indicate that the fault location technique has an acceptable accuracy under a whole variety of different systems and fault conditions.",2006,0, 6941,Residual generation approaches in navigation sensors fault detection applications,"This paper presents two different approaches for developing the residual generation module of a model-based sensor fault detection (FD) system applied to the robot navigation problem. The first approach is based on the structural analysis. The second one exploits the structure of nonlinear geometric control theory to derive the nonlinear analytical redundancy (NLAR) tests for sensor navigation purposes. The robot sensor suite includes at least a Global Positioning System (GPS) antenna, an Inertial Measurement Unit (IMU), and two incremental optical encoders. Analysis of the residuals generated by these two presented methods shows that the structural analysis is a feasible way to treat the navigation sensor fault detection problem, as confirmed by the developed experimental results.",2007,0, 6942,Defect Prevention Techniques and its Usage in Requirements Gathering - Industry Practices,"This paper presents various techniques used for a defect prevention (DP) strategy that, when introduced at all stages of a software life cycle, can reduce the time and resources necessary to develop high quality systems. Specifically, it shows how implementing a model-based strategy to reduce requirement defects, development rework and manual test development efforts will lead to significant achievements in cost reduction and total productivity. The major focus of this paper is on DP techniques (for the requirements phase) used in the Pakistani software industry. Survey data show which DP techniques are in use and how effective they are.",2005,0, 6943,Performance enhancement defect tolerance in the cell matrix architecture,"This research concentrates on the area of fault tolerant circuit implementation in a field programmable type architecture. In particular, an architecture called the Cell Matrix, presented as a fault tolerant alternative to field programmable gate arrays through its Supercell approach, is studied. Architectural constraints to implement fault tolerant circuit design in this architecture are discussed. Some modifications of its basic structure, such as the integration of circuitry for error correction and scan path, to enhance fault tolerant circuit design are introduced and are compared to the Supercell approach.",2004,0, 6944,Error investigation of models for improved detection of masses in screening mammography,"This study analyzes the performance of a computer aided detection (CAD) scheme for mass detection in mammography. We investigate the trained parameters of the detection scheme before any further testing. We use an extended version of a previously reported mass detection scheme. We analyze the detection parameters by using linear canonical discriminants (LCD) and compare results with logistic regression and multilayer perceptron neural network models.
Preliminary results suggest that the regression and multilayer perceptron neural network models showed the best receiver operating characteristics (ROC). The LCD analysis predictive function showed that the trained CAD scheme can maintain 99.08% sensitivity (108/109) with a false positive rate (FPI) of 8 per image with ROC Az = 0.74 ± 0.01. The regression and multilayer perceptron neural network ROC analysis provided a stronger backbone for the CAD algorithm, showing that the extended CAD scheme can operate at 96% sensitivity with 5.6 FPI per image. These preliminary results suggest that further logic to reduce FPI is needed for the CAD algorithm to be more predictive.",2005,0, 6945,An Image Enhancement Technique in Inspecting Visual Defects of Polarizers in TFT-LCD Industry,"This study develops an image-processing filter to enhance visual defects such as particles, stains, and uneven intensity on polarizers in the TFT-LCD industry. Each pixel in the subimage of a polarizer is initially processed to calculate its standard deviation (SD) of gray level, which is sampled from its neighbors within a window. The gray level of each pixel is re-scaled by the maximal and minimal SD values on the entire subimage to determine its new gray level. Real polarizers with visual defects are tested in this study. Experimental results show that the proposed filter achieves better performance than conventional image enhancement filters do. Moreover, the proposed image enhancement scheme provides more information for potential defect detection and classification alternatives. The proposed filter is simple, straightforward, and requires no high-resolution image. Therefore, it is practically suitable for large-polarizer manufacturers to increase inspection speed.",2009,0, 6946,An Immune Fault Detection System with Automatic Detector Generation by Genetic Algorithms,"This work deals with fault detection of electronic analog circuits. A fault detection system for analog circuits based on cross-correlation and artificial immune systems is proposed. It is capable of detecting faulty components in analog circuits by analyzing the circuit's impulse response. The use of cross-correlation for preprocessing the impulse response drastically reduces the size of the detector used by the real-valued negative selection algorithm (RNSA). The proposed method makes use of genetic algorithms to automatically generate a small number of very efficient detectors. Results have demonstrated that the proposed system is able to detect faults in a Sallen-Key bandpass filter and in a universal filter.",2007,0, 6947,Improving Real-time Fault Analysis and Validating Relay Operations to Prevent or Mitigate Cascading Blackouts,"This paper proposes a new strategy at the local (substation) level, aimed at preventing or mitigating the cascading blackouts that involve relay misoperations or inadequate local diagnostic support. The strategy consists of an advanced real-time tool that combines a neural network based fault detection and classification (NNFDC) algorithm and a synchronized sampling based fault location (SSFL) algorithm with a relay monitoring tool using event tree analysis (ETA). The fault analysis tool provides a reference for the conventional distance relay with its better performance, and the relay monitoring tool provides detailed local information about the disturbances.
The idea of the entire strategy is to meet several NERC recommendations to prevent blackouts using wide area protection and control.",2006,0, 6948,No-reference video quality estimation based on error-concealment effectiveness,"This paper proposes a no-reference video-quality estimation method for monitoring end-user video quality. It is suitable for applications to video transmitted over IP networks. IP network conditions vary for individual users, and assuring end-user video quality is an important issue. To do this, it is necessary to monitor video quality at end-user terminals. With the proposed method, video quality is estimated on the basis of the number of macroblocks containing errors which it has not been possible to conceal. Error concealment effectiveness is evaluated using motion-level information and luminance discontinuity at the boundaries of error regions. Simulation results show a high correlation (0.95) between the actual mean square error and the number of macroblocks in which complete error concealment has not been possible.",2007,0, 6949,A numerical simulation approach for reliability analysis of fault-tolerant repairable system,"This paper proposes a numerical simulation approach for reliability analysis of a fault-tolerant system with repairable components. In the traditional method for the reliability analysis of a fault-tolerant system, the system structure is described by means of binary decision diagrams (BDD) and Markov processes, and then the reliability indexes are calculated. However, as the size of the system grows, the size of the state space will increase exponentially. In addition, the Markov approach requires that the failure and repair times of the components obey exponential distributions. In this paper, by combining dynamic fault tree (DFT) and numerical simulation based on the minimal sequence cut sets (MSCS), we propose a new method to evaluate the reliability of fault-tolerant systems with repairable components. The approach presented does not depend on a Markov model, so it can effectively solve the problem of state-space combination explosion. Moreover, our method does not require the system to have the Markov property, and it is suitable for systems whose failure and repair times obey arbitrary distributions. Therefore, our method is more flexible than the traditional method. Finally, one example is given to verify the method.",2009,0, 6950,Sinusoidal steady-state analysis for fault location in power distribution systems,"This paper studies fault location with single-ended measurement when a line-to-line short-circuit fault occurs in an overhead radial three-phase distribution wire. Sinusoidal steady-state analysis based on the distributed-parameter model of the transmission wires is employed to locate a fault. By injecting two sinusoidal excitations with different frequencies to the faulted phases, measuring the voltage and current phasors at the sending-end of the wire and solving certain nonlinear distributed-parameter equations, the distance and resistance of fault candidates can be determined. It is shown that the candidate with the minimum difference between the calculated fault distances or fault resistances under the two frequencies is the most likely actual fault point. A fault locator based on the proposed scheme is designed and implemented. The parameter measurements of wires as well as the receiving-end transformers in the field are studied.
Simulation on the physical model of a distribution wire shows that this fault location scheme works successfully.",2004,0, 6951,An improved method on the performance of hybrid filter banks of ADC due to realization errors,"This paper studies the error of hybrid filter banks (HFB) due to analog realization errors. It is shown that the HFB architecture is very sensitive even to small analog imperfections. Therefore, the performance of the HFB deteriorates dramatically. In order to improve the performance, a method named central frequency is presented. The results show that the value of the maximum aliasing function, the main factor restricting the resolution of the HFB ADC (analog-to-digital converter), decreases by about 50 dB, and the SNR (signal-to-noise ratio) of the whole system is enhanced by more than 30 dB.",2008,0, 6952,Characterization of probabilistic faults diagnostic models,"This paper suggests a characterization of probability-based search models describing various fault diagnostics processes, which are widely used in the current practice of maintenance and service of complex computer systems and networks. The characterization is performed in terms of tuples including: characteristics of fault models, localization procedures, cost functions, and other factors. Such a characterization (and an induced classification) can be used for a rational choice of search algorithms at early system design stages and for the development of fault localization strategies during target system maintenance and service. Special attention is paid to teaching the related topics in fault diagnostics of computer systems and networks.",2006,0, 6953,A digital correction technique for channel mismatch in TI ADCs,"The time-interleaved sigma-delta analog-to-digital converter seems to be a potential solution for wide-bandwidth analog-to-digital conversion with the lowest hardware complexity compared to other solutions using parallel sigma-delta modulators. Its performance depends on the digital filter and is very sensitive to channel mismatch. This paper summarizes our work on the digital signal processing for this kind of converter, including filtering, decimation and channel mismatch correction, in order to reduce the implementation complexity while minimizing the channel mismatch effect.",2010,0, 6954,Nonrandom quantization errors in timebases,"Timebase distortion causes nonlinear distortion of waveforms measured by sampling instruments. When such instruments are used to measure the rms amplitude of the sampled waveforms, such distortions result in errors in the measured rms values. This paper looks at the nature of the errors that result from nonrandom quantization errors in an instrument's timebase circuit. Simulations and measurements on a sampling voltmeter show that the errors in measured rms amplitude have a non-normal probability distribution, such that the probability of large errors is much greater than would be expected from the usual quantization noise model. A novel timebase compensation method is proposed which makes the measured rms errors normally distributed and reduces their standard deviation by a factor of 25.
This compensation method was applied to a sampling voltmeter and the improved accuracy was realized.",2000,0, 6955,UDP identification and error mitigation in toa-based indoor localization systems using neural network architecture,"Time-of-Arrival (ToA) based localization has attracted considerable attention for solving the very complex and challenging problem of indoor localization, mainly due to its fine range estimation process. However, ToA-based localization systems are very vulnerable to the blockage of the direct path (DP) and the occurrence of undetected direct path (UDP) conditions. Erroneous detection of other multipath components as the DP, which corresponds to the true distance between transmitter and receiver, introduces substantial ranging and localization error into ToA-based systems. Therefore, in order to enable robust and accurate ToA-based indoor localization, it is important to identify and mitigate the occurrence of DP blockage. In this paper we present two methodologies to identify and mitigate the UDP conditions in indoor environments. We first introduce our identification technique, which utilizes the statistics of radio propagation channel metrics along with binary hypothesis testing, and then we introduce our novel identification technique, which integrates the same statistics into a neural network architecture. We analyze each approach and the effects of neural network parameters on the accuracy of the localization system. We also compare the results of the two approaches in a sample indoor environment using both real-time measurement and ray tracing simulation. The identification metrics are extracted from wideband frequency-domain measurements conducted in a typical office building with a system bandwidth of 500 MHz, centered around 1 GHz. Then we show that with the knowledge of the channel condition, it is possible to improve the localization performance by mitigating those UDP-induced ranging errors. Finally, we compare the standard deviation of the localization error of the traditional localization system and the UDP identification-enhanced localization system with their respective lower bounds.",2009,0, 6956,On low-cost error containment and recovery methods for guarded software upgrading,"To assure dependable onboard evolution, we have developed a methodology called guarded software upgrading (GSU). We focus on a low-cost approach to error containment and recovery for GSU. To ensure low development cost, we exploit inherent system resource redundancies as the fault tolerance means. In order to mitigate the effect of residual software faults at low performance cost, we take a crucial step in devising error containment and recovery methods by introducing the confidence-driven notion. This notion complements the message-driven (or communication-induced) approach employed by a number of existing checkpointing protocols for tolerating hardware faults. In particular, we discriminate between the individual software components with respect to our confidence in their reliability and keep track of changes of our confidence (due to knowledge about potential process state contamination) in particular processes. This, in turn, enables the individual processes in the spaceborne distributed system to make decisions locally at run-time, on whether to establish a checkpoint upon message passing and whether to roll back or roll forward during error recovery.
The resulting message-driven, confidence-driven approach enables cost-effective checkpointing and cascading-rollback-free recovery.",2000,0, 6957,A fault-tolerant structure for reliable multi-core systems based on hardware-software co-design,"To cope with soft errors and make full use of multi-core systems, this paper gives an efficient fault-tolerant hardware/software co-designed architecture for multi-core systems. With a modest number of test patterns, it uses less than 33% of the hardware resources of traditional hardware redundancy (TMR) and takes less than 50% of the time of traditional software redundancy (time redundancy). Therefore, it is a good choice for a fault-tolerant architecture for future high-reliability multi-core systems.",2010,0, 6958,Neural network modeling of distribution transformers with internal short circuit winding faults,"To detect and diagnose a transformer internal fault, an efficient transformer model is required to characterize the faults for further research. This paper discusses the application of neural network (NN) techniques in the modeling of a distribution transformer with internal short-circuit winding faults. A transformer model can be viewed as a functional approximator constructing an input-output mapping between some specific variables and the terminal behaviors of the transformer. The complex approximating task was implemented using six small simple neural networks. Each small neural network model takes the fault specification and energized voltage as the inputs and the output voltage or terminal currents as the outputs. Two kinds of neural networks, the back-propagation feedforward network (BPFN) and the radial basis function network (RBFN), were investigated to model the faults in distribution transformers. The NN models were trained offline using training sets generated by finite element analysis (FEA) models and field experiments. The FEA models were implemented using a commercial finite element analysis software package. The comparison between some simulation cases and corresponding experimental results shows that the well-trained neural networks can accurately simulate the terminal behaviors of distribution transformers with internal short circuit faults.",2001,0, 6959,A hardware approach to concurrent error detection capability enhancement in COTS processors,"To enhance the error detection capability in COTS (commercial off-the-shelf)-based design of safety-critical systems, a new hardware-based control flow checking (CFC) technique is presented. This technique, control flow checking by execution tracing (CFCET), employs the internal execution tracing features available in COTS processors and an external watchdog processor (WDP) to monitor the addresses of taken branches in a program. This is done without any modification of application programs; therefore, the program overhead is zero. The external hardware overhead is about 3.5% using an Altera Flex 10K30 FPGA. For different workload programs, the execution time overhead and the error detection coverage of the technique vary between 33.3 and 140.8% and between 79.7 and 84.6%, respectively. The errors are detected with about zero latency.",2005,0, 6960,Multiplication of Fault Tolerance for Unmanned Aerial Vehicle System Using HILS,"To handle the complexity in avionic systems, the application software is large and it is becoming hard to meet the real-time requirements.
The proposed architecture for avionics control systems addresses fault tolerance as a major requirement. The fault tolerance proposed in this paper is based on redundancy; permanent faults are covered by hardware replication, while transient faults and fault detection are handled by software techniques. In this paper, we propose an architecture with flexible function distribution and assured replication that can react in real time to failures in both the hardware and software of a UAV (unmanned aerial vehicle). This reliable architecture can enhance analysis capabilities for critical safety properties and reduce certification costs for such systems using hardware-in-the-loop simulation.",2009,0, 6961,Research of the on-line hydropower units fault diagnosis based on fuzzy theory and multi-Agent,"To remedy existing defects of on-line monitoring of hydropower generating units, the paper designs a MAS-based fault diagnosis system model. With respect to the classic model of a MAS-based fault diagnosis system, it improves the information interaction between the mission-control subsystems and the diagnosis task decomposition subsystem to increase the rate of control signal transmission, and it lets the status-monitoring subsystem obtain abnormal signals directly from the plant to increase the sensitivity of the system. In the fault diagnosis subsystem of the real-time fault diagnosis system, a MAS-based interactive parallel working structure is designed to meet the requirements of high reliability and good real-time performance. Java-based software called JAFMAS is used to build a platform to construct a multi-agent coordination mechanism. Experimental results show the effectiveness of the proposed system.",2010,0, 6962,SWIFT: software implemented fault tolerance,"To improve performance and reduce power, processor designers employ advances that shrink feature sizes, lower voltage levels, reduce noise margins, and increase clock rates. However, these advances make processors more susceptible to transient faults that can affect correctness. While reliable systems typically employ hardware techniques to address soft errors, software techniques can provide a lower-cost and more flexible alternative. This paper presents a novel, software-only, transient-fault-detection technique, called SWIFT. SWIFT efficiently manages redundancy by reclaiming unused instruction-level resources present during the execution of most programs. SWIFT also provides a high level of protection and performance with an enhanced control-flow checking mechanism. We evaluate an implementation of SWIFT on an Itanium 2 which demonstrates exceptional fault coverage with a reasonable performance cost. Compared to the best known single-threaded approach utilizing an ECC memory system, SWIFT demonstrates a 51% average speedup.",2005,0, 6963,A Service-Oriented Alarm Correlation Method for IT Service Fault Localization,"To improve the quality of IT services, it is important to quickly and accurately detect and diagnose their faults. Because associations exist among entities, one fault in a single entity may cause a number of accompanying alarms besides the root alarm during fault propagation. Service-oriented alarm correlation analysis, which isolates the root causes or faults from numerous alarms, is an important method to ensure quality of service. This paper first provides an entity relationship model of the IT infrastructure. Based on the fault propagation map depicted in the model, we then propose a service-oriented alarm correlation method.
We have validated our efforts by developing a prototype and testing it in a real environment.",2006,0, 6964,A Framework for Fault Tolerant Real Time Systems Based on Reconfigurable FPGAs,"To increase the amount of logic available to the users in SRAM-based FPGAs, manufacturers are using nanometric technologies to boost logic density and reduce costs, making their use more attractive. However, these technological improvements also make FPGAs particularly vulnerable to configuration memory bit-flips caused by power fluctuations, strong electromagnetic fields and radiation. This issue is particularly sensitive because of the increasing number of configuration memory cells needed to define their functionality. A short survey of the most recent publications is presented to support the options assumed during the definition of a framework for implementing circuits immune to bit-flip induction mechanisms in memory cells, based on a customized redundant infrastructure and on a detection-and-fix controller.",2006,0, 6965,Insulation fault detection in a PWM controlled induction motor-experimental design and preliminary results,"To investigate feature extraction methods for early detection of insulation degradation in low voltage (under 600 V), 3-phase, PWM controlled induction motors, a series of seeded fault tests was planned on a 50 HP, 440 V motor. In this paper, the background and rationale for the test plan are described. The instrumentation and test plan are then detailed. Finally, preliminary test experiences are related.",2000,0, 6966,Empirical analysis of CK metrics for object-oriented design complexity: implications for software defects,"To produce high quality object-oriented (OO) applications, a strong emphasis on design aspects, especially during the early phases of software development, is necessary. Design metrics play an important role in helping developers understand design aspects of software and, hence, improve software quality and developer productivity. In this paper, we provide empirical evidence supporting the role of OO design complexity metrics, specifically a subset of the Chidamber and Kemerer (1991, 1994) suite (CK metrics), in determining software defects. Our results, based on industry data from software developed in two popular programming languages used in OO development, indicate that, even after controlling for the size of the software, these metrics are significantly associated with defects. In addition, we find that the effects of these metrics on defects vary across the samples from the two programming languages, C++ and Java. We believe that these results have significant implications for designing high-quality software products using the OO approach.",2003,0, 6967,Modeling and minimization of discretization error in one-dimensional PML's using FDTD,"This work presents an accurate physical model of the discretization error in a one-dimensional perfectly matched layer (PML) using the finite-difference time-domain (FDTD) method. The proposed model is based on the concept of the effective wave impedance of the PML. This concept implies that the wave impedance in the discretized space changes, with respect to the continuous value, when absorption occurs. These changes are dependent on the PML absorption per unit length as well as on the cell size. The validity of the proposed model has been checked with numerical simulations in coaxial waveguide geometry.
One of the important consequences of the proposed modeling scheme is the feasibility of a PML with, ideally, no return losses due to discretization error. This prediction has also been corroborated by numerical simulations; the improvement in the PML return losses is about 30 dB.",2007,0, 6968,Impact of statechart implementation techniques on the effectiveness of fault detection mechanisms,"This work presents the analysis of an experiment series aiming at the discovery of the impact of two inherently different statechart implementation methods on the behavior of the resulting executables in the presence of faults. The discussion identifies the key features of implementation techniques influencing the effectiveness of standard fault detection mechanisms (memory protection, assertions, etc.) and an advanced statechart-level watchdog scheme used for detecting deviations from the abstract implementation-independent behavioral specification.",2004,0, 6969,New algorithms for fault classification in Electrical Distribution Systems,"This work presents the basis of a new approach for fault diagnosis in Electrical Distribution Systems (EDS). This approach aims to classify different types of faults by means of a factor named the Negative Sequence Factor (F2). This factor is calculated from post-fault negative sequence and pre-fault positive sequence currents using Thevenin equivalents. Series, shunt and simultaneous faults with cable either on the source or on the load side are considered. A simple test case was used to show the effectiveness of the proposed methodology.",2010,0, 6970,Combined solution for fault location in three-terminal lines based on wavelet transforms,"This work presents the study and development of a combined fault location scheme for three-terminal transmission lines using wavelet transforms (WTs). The methodology is based on the low- and high-frequency components of the transient signals originating from fault situations registered at the terminals of a system. By processing these signals and using the WT, it is possible to determine the arrival times of travelling waves of voltages and/or currents from the fault point to the terminals, as well as estimate the fundamental frequency components. The new approach presents a reliable and accurate fault location scheme combining several different solutions. The main idea is to have a decision routine in order to select which method should be used in each situation presented to the algorithm. The combined algorithm was tested for different fault conditions by simulations using the ATP (Alternative Transients Program) software. The results obtained are promising and demonstrate a highly satisfactory degree of accuracy and reliability of the proposed method.",2010,0, 6971,To measure the cumulate crustal deformation of important faults system on the Western China by PS InSAR technique,"This work puts forward a plan to measure the cumulative long-period slow slip of some active fault zones by applying the PS InSAR technique and introduces some preliminary work. First, several scenes of ERS-1/2 SAR data from strong earthquakes in West China are processed by the D-InSAR technique.
Next, a corner reflector experiment field is established where faults slip intensely and earthquakes happen frequently, and the cumulative long-period crustal deformation and coseismic deformation of important structural regions will be measured at high precision.",2004,0, 6972,Alpha-emitter induced soft-errors in CMOS 130nm SRAM: Real-time underground experiment and Monte-Carlo simulation,"This work reports a long-duration (> 2 years) real-time characterization study of SRAM memories at the underground laboratory of Modane (LSM) to quantify alpha-emitter radioactive impurities present in the circuit materials and responsible for soft errors detected in the absence of atmospheric neutrons. Experimental data have been obtained using ~3.5 Gbit of SRAMs manufactured in CMOS 130 nm technology. In the second part of this work, the underground experiment is simulated using a Monte-Carlo code to extract the contamination level related to the disintegration chain of uranium in silicon at secular equilibrium. Results are finally compared to data obtained from counting experiments using an ultra-low-background alpha-particle gas proportional counter.",2010,0, 6973,Real-time neutron and alpha soft-error rate testing of CMOS 130nm SRAM: Altitude versus underground measurements,"This work reports real-time soft-error rate (SER) testing of semiconductor static memories in both altitude and underground environments to separate the component of the SER induced by cosmic rays (i.e. primarily by atmospheric neutrons) from that caused by on-chip radioactive impurities (alpha-particle emitters). Two dedicated European sites were used to perform long-term real-time measurements with the same setup: the Altitude SEE Test European Platform (ASTEP) at an altitude of 2252 m and the underground laboratory of Modane (LSM, CEA-CNRS) under 1700 m of rock (4800 meters water equivalent). Experimental data obtained using 3.6 Gbit of SRAMs manufactured in CMOS 130 nm technology are reported and analyzed. Comparison with accelerated and simulated SER is also discussed.",2008,0, 6974,Undetected disk errors in RAID arrays,"Though remarkably reliable, disk drives do fail occasionally. Most failures can be detected immediately; moreover, such failures can be modeled and addressed using technologies such as RAID (Redundant Arrays of Independent Disks). Unfortunately, disk drives can experience errors that are undetected by the drive, which we refer to as undetected disk errors (UDEs). These errors can cause silent data corruption that may go completely undetected (until a system or application malfunction) or may be detected by software in the storage I/O stack. Continual increases in disk densities or in storage array sizes and, more significantly, the introduction of desktop-class drives in enterprise storage systems are increasing the likelihood of UDEs in a given system. Therefore, the incorporation of UDE detection (and correction) into storage systems is necessary to prevent increasing numbers of data corruption and data loss events. In this paper, we discuss the causes of UDEs and their effects on data integrity.
We describe some of the basic techniques that have been applied to address this problem at various software layers in the I/O stack and describe a family of solutions that can be integrated into the RAID subsystem.",2008,0, 6975,Three Bio-Inspired Approaches to Telecommunications Defect-Tracking and Reliability-Estimation,"Three alternative bio-inspired approaches are proposed to investigate telecommunication system reliability and defect tracking. They employ a recent model for failure discovery in the associated system software. These are: a half-sibling and a clone (HSAC) genetic algorithm; a recurrent dynamic neural network (RDNN) requiring parametric adjustments and using wavelets as a basis; and another RDNN with parameters adaptive to the incoming stream of input data, such that the error in failure intensity is minimized, subject to the model constraints. Each approach aims to improve speed of convergence, reliability, noise tolerance, and suitability for hardware implementation. Simulation results seem to favor the ARDNN, since it iterates (about 10 times) on the shape of the wavelet basis and provides adequate recovery of the data in piecewise-linear differential form.",2006,0, 6976,Simulating realistic bridging and crosstalk faults in an industrial setting,"Three different techniques for simulating realistic faults generated from IC layout are discussed. Two of them deal with bridging faults, and the third one handles crosstalk faults. The simulation is performed on top of a commercial simulator and thus is very well applicable in an industrial environment. No change of the design database and only minimal changes of the test shell are required. Experimental results are reported for a library cell and a block from a full-custom design.",2002,0, 6977,An Error Prediction Framework for Interferometric SAR Data,"Three of the major error sources in interferometric synthetic aperture radar measurements of terrain elevation and displacement are baseline errors, atmospheric path length errors, and phase unwrapping errors. In many processing schemes, these errors are calibrated out by using ground control points (GCPs) (or an external digital elevation model). In this paper, a simple framework for the prediction of error standard deviation is outlined and investigated. Inputs are GCP position, a priori GCP accuracy, and baseline calibration method, along with a closed-form model for the covariance of atmospheric path length disturbances and a model for phase unwrapping errors. The procedure can be implemented as a stand-alone add-on to standard interferometric processors. It is validated by using a set of single-frame interferograms acquired over Rome, Italy, and a double difference data set over Flevoland, The Netherlands.",2008,0, 6978,Single-phase power-factor-correction AC/DC converters with three PWM control schemes,"Three pulse-width modulation (PWM) control schemes for a single-phase power-factor-correction (PFC) AC/DC converter are presented to improve the power quality. A diode bridge with two power switches is employed as a PFC circuit to achieve a high power factor and low line current harmonic distortion. The control schemes are based on look-up tables with a hysteresis current controller (HCC) to generate two-level or three-level PWM on the DC side of the diode rectifier.
Based on the three proposed control schemes, the line current is driven to follow the sinusoidal current command which is in phase with the supply voltage, and the two capacitor voltages on the DC bus are controlled to be balanced. The simulation and experimental results of a 1 kW converter with load as well as line voltage variation are shown to verify the proposed control schemes. It is shown that unity PFC is achieved using a simple control circuit and the measured line current harmonics satisfy the IEC 1000-3-2 requirements.",2000,0, 6979,Enhancing Motion Picture Lens Performance by Digital Calibration and Correction,"To some degree, all lenses used by the motion picture industry exhibit certain distortions which can detract from the ideal viewing experience. This paper presents a lens calibration and correction system which enables these problems to be resolved digitally in post production. For some time, lenses such as the Cooke 4i and now 5i have provided the necessary metadata on focus and aperture settings to enable digital post corrections to be applied. Cameras such as the Alexa and RED are capable of capturing this metadata. Many other lenses, however, may be adapted to provide the necessary metadata by means of a simple encoder. The paper presents how this is achieved, followed by a presentation of the digital correction system for such scenarios as extreme focus pulls. Using digital high-definition camera systems, the calibration of each individual lens is presented along with the automatic derivation of the required lens database. The full post-production workflow through to final image generation is presented.",2010,0, 6980,Towards a Flexible Fault-Tolerant System-on-Chip,"Today's technology advances still lead to increased integration densities and higher clock rates. However, these advances are more and more accompanied by continuously increasing drawbacks such as intra-die variation, temperature dependencies, and device degradation mechanisms. Besides the classic measure of MIPS-per-watt, the overall system reliability is steadily becoming more important. In this context, device reliability has become a critical aspect of system-on-chip (SoC) design. To cope with this challenge, we divide the SoC itself into different architectural layers. Each layer is tailored individually to the specific SoC needs in terms of fault tolerance. At the same time, we derive a comprehensive method of how to account for all layer dependencies in an efficient manner and yet enable error detection and correction mechanisms at system level. In particular, error detection is predominantly established at lower levels, whereas the required error correction mechanisms are applied at higher system levels.",2009,0, 6981,Forward error correction techniques suitable for the utilization in the PLC technology,"Today's PLC systems are mostly limited in transmission speed to several kilobits per second. Such speeds are certainly not sufficient for running popular services like broadband Internet, teleworking or Video on Demand. This fact reduces the chance of wide deployment of PLC technology. One way to overcome this problem could be the introduction of optimal error-correcting techniques into the PLC technology. In this contribution, different types of coding techniques are discussed, including RS, Turbo or Convolutional codes. Theoretical analyses are focused on their positive and negative features visible in the case of their deployment in the PLC transmission environment.
Special attention is paid to their robustness against noise, which is also analyzed practically in the form of simulations. Analyses and simulations are done for individual codes as well as combinations of codes.",2008,0, 6982,Fault tolerant linear algebra: Recovering from fail-stop failures without checkpointing,"Today's long-running high performance computing applications typically tolerate fail-stop failures by checkpointing. While checkpointing is a very general technique and can be applied in a wide range of applications, it often introduces a considerable overhead, especially when applications modify a large amount of memory between checkpoints. In this research, we will design highly scalable, low overhead fault tolerant schemes according to the specific characteristics of an application. We will focus on linear algebra operations and re-design selected algorithms to tolerate fail-stop failures without checkpointing. We will also incorporate the developed techniques into the widely used numerical linear algebra library package ScaLAPACK.",2010,0, 6983,VGrADS: enabling e-Science workflows on grids and clouds with fault tolerance,"Today's scientific workflows use distributed heterogeneous resources through diverse grid and cloud interfaces that are often hard to program. In addition, especially for time-sensitive critical applications, predictable quality of service is necessary across these distributed resources. VGrADS' virtual grid execution system (vgES) provides a uniform qualitative resource abstraction over grid and cloud systems. We apply vgES for scheduling a set of deadline-sensitive weather forecasting workflows. Specifically, this paper reports on our experiences with (1) virtualized reservations for batch-queue systems, (2) coordinated usage of TeraGrid (batch queue), Amazon EC2 (cloud), our own clusters (batch queue) and Eucalyptus (cloud) resources, and (3) fault tolerance through automated task replication. The combined effect of these techniques was to enable a new workflow planning method to balance performance, reliability and cost considerations. The results point toward improved resource selection and execution management support for a variety of e-Science applications over grids and cloud systems.",2009,0, 6984,An Open-Source Scaled Automobile Platform for Fault-Tolerant Electronic Stability Control,"Today's technology allows the construction of complex experimental apparatus with a reasonable budget. This paper supplies detailed guidelines for constructing a low-cost scaled automobile platform for research and education in vehicle dynamics and control. It summarizes the knowledge gathered when designing, building, and evaluating a model car, which was deployed in a real-world environment for testing an electronic stability control (ESC) algorithm. The model car was built using off-the-shelf hardware and open-source software. Data from a variety of onboard sensors are fused in real time so as to deliver accurate measurements to the ESC algorithm, whereas sensor fault diagnosis is achieved at the same time through an innovative approach based on artificial neural networks (NNs).
The detailed presentation of this case study provides a roadmap on how a researcher can build effective experimental automotive platforms for research and educational purposes.",2010,0, 6985,Full wafer defect analysis with time-of-flight secondary Ion Mass Spectrometry,"ToF-SIMS was used for defect and failure analysis on full wafers using KLA/Tencor maps for addressing selected defects for analysis. In the first case study, analysis of surface contamination is discussed. The analysis was performed in microscan mode for single particle analysis or in macroscan mode for large area analysis. In a second example, ToF-SIMS was used to identify particle type metallic defects from a P-type buried layer of BiCMOS transistors under 200 nm of SiO2. The last case study discusses the detection of unintentionally implanted P in micron-sized polysilicon lines in the active punch-through area of a wafer.",2010,0, 6986,Performance-asymmetry-aware topology virtualization for defect-tolerant NoC-based many-core processors,"Topology virtualization techniques are proposed for NoC-based many-core processors with core-level redundancy to isolate hardware changes caused by on-chip defective cores. Prior work focuses on homogeneous cores with symmetric performance and optimizes on-chip communication only. However, core-to-core performance asymmetry due to manufacturing process variations poses new challenges for constructing virtual topologies. Lower performance cores may scatter over a virtual topology, while operating systems typically allocate tasks to contiguous cores. As a result, parallel applications are probably assigned to a region containing many slower cores that become bottlenecks. To tackle the above problem, in this paper we present a novel performance-asymmetry-aware reconfiguration algorithm, Bubble-Up, based on a new metric called the core fragmentation factor (CFF). Bubble-Up can arrange cores with similar performance closer together, while maintaining reasonable hop distances between virtual neighbors, thus accelerating applications with a higher degree of parallelism, without changing existing allocation strategies for the OS. Experimental results show its effectiveness.",2010,0, 6987,Design and analysis of vector color error diffusion halftoning systems,"Traditional error diffusion halftoning is a high quality method for producing binary images from digital grayscale images. Error diffusion shapes the quantization noise power into the high frequency regions where the human eye is the least sensitive. Error diffusion may be extended to color images by using error filters with matrix-valued coefficients to take into account the correlation among color planes. For vector color error diffusion, we make three contributions. First, we analyze vector color error diffusion based on a new matrix gain model for the quantizer, which linearizes vector error diffusion. The model predicts the key characteristics of color error diffusion, esp. image sharpening and noise shaping. The proposed model includes linear gain models for the quantizer by Ardalan and Paulos (1987) and by Kite et al. (1997) as special cases. Second, based on our model, we optimize the noise shaping behavior of color error diffusion by designing error filters that are optimum with respect to any given linear spatially-invariant model of the human visual system. Our approach allows the error filter to have matrix-valued coefficients and diffuse quantization error across color channels in an opponent color representation.
Thus, the noise is shaped into frequency regions of reduced human color sensitivity. To obtain the optimal filter, we derive a matrix version of the Yule-Walker equations which we solve by using a gradient descent algorithm. Finally, we show that the vector error filter has a parallel implementation as a polyphase filterbank",2001,0, 6988,Constructing Subtle Faults Using Higher Order Mutation Testing,"Traditional mutation testing considers only first order mutants, created by the injection of a single fault. Often these first order mutants denote trivial faults that are easily killed. This paper investigates higher order mutants (HOMs). It introduces the concept of a subsuming HOM; one that is harder to kill than the first order mutants from which it is constructed. By definition, subsuming HOMs denote subtle fault combinations. The paper reports the results of an empirical study into subsuming HOMs, using six benchmark programs. This is the largest study of mutation testing to date. To overcome the exponential explosion in the number of mutants considered, the paper introduces a search based approach to the identification of subsuming HOMs. Results are presented for a greedy algorithm, a genetic algorithm and a hill climbing algorithm.",2008,0, 6989,Reduction of beam dropout related defects in a hybrid scan implanter,"Traditional raster scan systems were able to minimize the effect of beam dropouts by using fast beam scanning speeds combined with long minimum implant times. However, with 200 and 300 mm wafer sizes, most medium current implant systems have been developed using hybrid scanning techniques to minimize beam angles, and these systems are inherently more susceptible to beam dropouts. Additionally, the drive for increased productivity combined with wafer rotation capability has shortened minimum allowable implant times. This paper describes a method implemented on Axcelis medium current systems for actively correcting the effect of beam dropouts. Implant results are presented that demonstrate spec level wafer uniformity and dose repeatability with induced beam dropouts. This improvement is of particular relevance for very short implants, such as a single rotation of a quadrant implant",2000,0, 6990,Cluster fault-tolerance: An experimental evaluation of checkpointing and MapReduce through simulation,"Traditionally, cluster computing has employed checkpointing to address fault tolerance. Recently, new models for parallel applications have grown in popularity, namely MapReduce and Dryad, with runtime systems providing their own re-execute based fault tolerance mechanisms, but with no analysis of their failure characteristics. Another development is the availability of failure data spanning years for systems of significant size at Los Alamos National Labs (LANL), but the time between failures (TBF) for these systems is a poor fit to the exponential distribution assumed by optimization work in checkpointing, bringing these results into question. The work in this paper describes a discrete event simulation driven by the LANL data and by models of parallel checkpointing and MapReduce tasks.
The simulation allows us to then evaluate and assess the fault tolerance characteristics of these tasks with the goal of minimizing the expected running time of a parallel program in a cluster in the presence of faults for both fault tolerance models.",2009,0, 6991,"Development of a low cost, fault tolerant, and highly reliable command and data handling computer (PulseTM)","Traditionally, command and data handling computers have been designed to manage the different and many remote interface units within the satellite bus platform. In this distributed architecture, the command and data handling requires low throughput processors to pass data to other units or for download to ground stations for further processing. The advent of very large radiation hardened ASICs has enabled the application of the powerful processing of the RHPPc (PowerPC 603e) with a simplified IEEE-1394 backplane bus to provide a highly reliable and cost competitive centralized command and data handling subsystem. This robust architecture is tailorable and easily modified to meet the varying needs of satellite and space transportation applications. By using a commercially compliant processor, software and its tools, which are one of the most complex, high risk, and expensive undertakings of the system architecture for a satellite bus controller, become a low risk design issue and much more cost effective. An extensive array of COTS software tools is currently available for the PowerPC processor family, rendering the software development environment associated with the PulseTM a relatively low impact on the overall program, thus reducing the overall program recurring and non-recurring cost. The PulseTM supports most of the COTS operating systems, with the current board support package (both basic and custom) being designed to be VxWorks compliant",2000,0, 6992,Application-Level Correctness and its Impact on Fault Tolerance,"Traditionally, fault tolerance researchers have required architectural state to be numerically perfect for program execution to be correct. However, in many programs, even if execution is not 100% numerically correct, the program can still appear to execute correctly from the user's perspective. Hence, whether a fault is unacceptable or benign may depend on the level of abstraction at which correctness is evaluated, with more faults being benign at higher levels of abstraction, i.e. at the user or application level, compared to lower levels of abstraction, i.e. at the architecture level. The extent to which programs are more fault resilient at higher levels of abstraction is application dependent. Programs that produce inexact and/or approximate outputs can be very resilient at the application level. We call such programs soft computations, and we find they are common in multimedia workloads, as well as artificial intelligence (AI) workloads. Programs that compute exact numerical outputs offer less error resilience at the application level. However, we find all programs studied in this paper exhibit some enhanced fault resilience at the application level, including those that are traditionally considered exact computations - e.g., SPECInt CPU2000. This paper investigates definitions of program correctness that view correctness from the application's standpoint rather than the architecture's standpoint. Under application-level correctness, a program's execution is deemed correct as long as the result it produces is acceptable to the user.
To quantify user satisfaction, we rely on application-level fidelity metrics that capture user-perceived program solution quality. We conduct a detailed fault susceptibility study that measures how much more fault resilient programs are when defining correctness at the application level compared to the architecture level. Our results show for 6 multimedia and AI benchmarks that 45.8% of architecturally incorrect faults are correct at the application level. For 3 SPECInt CPU2000 benchmarks, 17.6% of architecturally incorrect faults are correct at the application level. We also present a lightweight fault recovery mechanism that exploits the relaxed requirements on numerical integrity provided by application-level correctness to reduce checkpoint cost. Our lightweight fault recovery mechanism successfully recovers 66.3% of program crashes in our multimedia and AI workloads, while incurring minimum runtime overhead",2007,0, 6993,Industrial experience with cycle error computation of cycle-accurate transaction level models,"Transaction level modeling is gaining increasing popularity with the increasing design complexity of the system-on-a-chip. Transaction level models are frequently built from existing register transfer level models, which usually cause cycle errors. Measurable indicators of cycle errors are necessary, and their definitions are important. This paper presents the challenges in cycle error computation and our proposed method, although its effectiveness has not been proved formally. The main contribution of our study is to report an industrial experience with cycle error computation.",2007,0, 6994,Fault-tolerant FFT data compression,"Transform coefficients carry important data characteristics but can also be compressed significantly in many remote sensing applications. Failures in the several computing facilities that execute lossy compression algorithms and support the transmission of Fourier transform data can corrupt the values beyond recovery at the final destination. Various methods for including fault tolerance at the data processing level are exemplified by describing a protected system that computes the FFT, truncates small coefficients and compresses the remaining nonzero coefficients using lossless arithmetic coding. Algorithmic checks within the FFT and arithmetic encoding and decoding operations are augmented with additional features between and across several subsystems involved in compressing and transmitting the FFT data. End-to-end error detection is achieved in this manner",2000,0, 6995,Transformer Fault Analysis Using Event Oscillography,"Transformer differential protection operates on Kirchhoff's well-known law that states, ""the sum of currents entering and leaving a point is zero"". Although Kirchhoff's law is well understood, the implementation of the law in transformer differential protection involves many practical considerations such as current transformer (CT) polarity, phase-angle correction, zero-sequence removal, and CT grounding. Still, even correctly implemented transformer differential protection misoperates occasionally, resulting from conditions such as CT saturation during heavy through faults. Whereas electromechanical and electronic relays provide no or very little fault information, numerical relays provide an abundance of information. However, the analyst must still select the correct fault information from this abundance of information to perform useful fault analysis.
This paper demonstrates how to begin analysis of such events by using real-life oscillographic data and going through a step-by-step analysis of the relay algorithm using a mathematical relay model. Relay engineers can use this paper as a reference for analyzing transformer oscillography in a systematic and logical manner",2007,0, 6996,Fault Diagnosis of Partial Discharge in the Transformers Based on the Fuzzy Neural Networks,"The transformer is a very important piece of equipment in the power system. In order to ensure secure and stable operation, there is an urgent demand for fault diagnosis of partial discharge. This paper presents an approach combining wavelet singularity detection theory and a fuzzy neural network to perform fault diagnosis of partial discharge. The experimental results show that this method is effective for fault diagnosis of partial discharge.",2010,0, 6997,Low cost error recovery in Delay-Intolerant Wireless Sensor Networks,"Transmission efficiency of wireless sensor networks (WSN) is lower than that of conventional networks due to frequent propagation errors. In light of specific features and diverse applications of WSN, common assumptions from communication systems may not hold true and efficient application-specific protocols can be formulated. In this paper, we demonstrate this based on an interesting observation related to shortened Reed-Solomon (RS) codes for packet reliability in WSN. We show that multiple instances (gamma) of RS codes defined on a smaller alphabet combined with interleaving result in smaller resource usage while the performance exceeds the benefits of a shortened RS code defined over a larger alphabet. In particular, the proposed scheme can have an error correction capability of up to gamma times larger than that for the conventional RS scheme, without changing the rate of the code, with much lower power, timing and memory requirements. Implementation results on 25 mm motes developed by Tyndall National Institute show that such a scheme is 43% more power efficient compared to the RS scheme with the same code rate. Besides, such an approach results in 44% faster computations and a 53% reduction in memory required.",2007,0, 6998,Adaptive FEC error control scheme for wireless video transmission,"Transmission errors have a detrimental impact on video quality in wireless networks. Hence, a highly efficient error correction scheme is required to significantly improve the quality of the media content. Deploying an error correction technique alone would not eradicate the problem unless some adaptation mechanism is included in order to make efficient decisions about adding redundant information based on the channel condition. Adapting to the channel condition can significantly enhance the network performance and video quality as well. This paper presents an approach using forward error correction and a cross-layer mechanism which dynamically adapts to the channel condition to recover the lost packets in order to enhance the perceived video quality. The scheme has been developed and tested on the NS-2 simulator and shows a dramatic improvement in video quality.",2010,0, 6999,Reliability evaluation of transmission line protective schemes using static fault tree,"Transmission line protective schemes are sometimes very complex, incorporating many different devices. Reliability of such complex systems is a concern to the power system protection engineer and presents a significant analytical problem.
This paper describes the use of fault tree analysis as one method of analyzing the reliability of these complex systems. MATLAB-based software is developed, and the reliability of different protection schemes using various devices is studied.",2004,0, 7000,Hybrid error concealment in digital compressed video streams,"Transmission of a compressed video signal over a lossy communication network exposes the information to losses and errors, which may cause serious and visible degradation to the decoded video stream. We present a new hybrid error concealment algorithm, relying on temporal and spatial concealment. This algorithm includes consideration of the spatial and temporal activity in the surroundings of a lost block, and introduces four thresholds, which lead to choosing the suitable concealment scheme for each block.",2004,0, 7001,Trap Spectroscopy by Charge Injection and Sensing (TSCIS): A quantitative electrical technique for studying defects in dielectric stacks,"Trap spectroscopy by charge injection and sensing (TSCIS) is a new, fast and powerful material analysis technique that provides detailed information on the trap density profile and trap energy level in dielectric materials. We show the measurement principle and explain the data analysis. The technique is applied to a number of example materials: SiO2, Al2O3, and Si3N4. We show that TSCIS has excellent resolution and is capable of distinguishing between different process-variations.",2008,0, 7002,On error floor and free distance of turbo codes,"Turbo codes have excellent performance at low and medium signal-to-noise ratios (SNR), very close to the Shannon limit, and this performance is at the basis of their success. However, a turbo code performance curve can change its slope at high SNR if the code free distance is small. This error floor phenomenon is not acceptable for applications requiring very low values of bit error rates. A knowledge of the free distance and its multiplicity allows one to analytically estimate the error floor. An algorithm for computing the turbo code free distance, based on the notion of constrained subcodes, is described. Some considerations on the free distance distribution of turbo codes with growing interleaver length are also provided",2001,0, 7003,A stochastic model of fault introduction and removal during software development,"Two broad categories of human error occur during software development: (1) development errors made during requirements analysis, design, and coding activities; (2) debugging errors made during attempts to remove faults identified during software inspections and dynamic testing. This paper describes a stochastic model that relates the software failure intensity function to development and debugging error occurrence throughout all software life-cycle phases. Software failure intensity is related to development and debugging errors because data on development and debugging errors are available early in the software life-cycle and can be used to create early predictions of software reliability. Software reliability then becomes a variable which can be controlled up front, viz., as early as possible in the software development life-cycle. The model parameters were derived based on data reported in the open literature. A procedure to account for the impact of influencing factors (e.g., experience, schedule pressure) on the parameters of this stochastic model is suggested. This procedure is based on the success likelihood methodology (SLIM).
The stochastic model is then used to study the introduction and removal of faults and to calculate the consequent failure intensity value of a small software system developed using a waterfall software development process",2001,0, 7004,Conflict driven scan chain configuration for high transition fault coverage and low test power,"Two conflict-driven schemes and a new scan architecture based on them are presented to improve the fault coverage of transition faults. They make full use of the advantages of broadside, skewed-load and enhanced scan testing, and eliminate the disadvantages of them, such as low coverage, fast global scan enable signal and hardware overhead. Test power is also a challenge for delay testing, so our method tries to reduce the test power at the same time. By the analysis of the functional dependency between test vectors in broadside testing and the shift dependency between vectors in the skewed-load testing, some scan cells are selected to operate in the enhanced scan and skewed-load scan mode, while others operate in traditional broadside mode. In the architecture, scan cells with common successors are divided into one chain. With the efficient conflict driven selection methods and partition of scan cells, fault coverage can be improved greatly and test power can be reduced, without sacrificing the test time and test data. Experimental results show that the fault coverage of the proposed method can reach the level of enhanced scan design.",2009,0, 7005,Digital fault location for parallel double-circuit multi-terminal transmission lines,"Two new methods are proposed for fault point location in parallel double-circuit multi-terminal transmission lines by using voltage and current information from CCVTs and CTs at all terminals. These algorithms take advantage of the fact that the sum of currents flowing into a fault section equals the sum of the currents at all terminals. Algorithm 1 employs an impedance calculation and algorithm 2 employs the current diversion ratio method. Computer simulations are carried out and applications of the proposed methods are discussed. Both algorithms can be applied to all types of fault such as phase-to-ground and phase-to-phase faults. As one equation can be used for all types of fault, classification of fault types and selection of faulted phase are not required. Phase components of the line impedance are used directly, so compensation of unbalanced line impedance is not required",2000,0, 7006,A new digital relay for generator protection against asymmetrical faults,"Under unbalanced conditions three phase instantaneous power oscillates at twice the power system frequency. The magnitude of these oscillations can be used as a measure of the system unbalance. This paper introduces a new digital relaying algorithm designed to detect asymmetrical faults by monitoring the sinusoidal oscillations of the three-phase instantaneous power measured at the generator terminal. Once an asymmetrical fault is detected, the algorithm checks the direction of the negative sequence-reactive power flow at the machine terminal to discriminate between internal and external faults. Power system test studies presented show that the new relay provides fast tripping for internal asymmetrical faults and backup protection for external asymmetrical fault conditions",2002,0, 7007,User interface dependability through goal-error prevention,"User interfaces form a critical coupling between humans and computers. When the interface fails, the user fails, and the mission is lost.
For example, in computer security applications, human-made configuration errors can expose entire systems to various forms of attack. To avoid interaction failures, a dependable user interface must facilitate the speedy and accurate completion of user tasks. Defects in the interface cause user errors (e.g., goal, plan, action and perception errors), which impinge on speed and accuracy goals, and can lead to mission failure. One source of user error is poor information representation in the interface. This can cause users to commit a specific class of errors - goal errors. A design principle (anchor-based subgoaling) for mitigating this cause was formulated. The principle was evaluated in the domain of setting Windows file permissions. The native Windows XP file permissions interface, which did not support anchor-based subgoaling, was compared to an alternative, called Salmon, which did. In an experiment with 24 users, Salmon achieved as much as a four-fold increase in accuracy for a representative task and a 94% reduction in the number of goal errors committed, compared to the XP interface.",2005,0, 7008,Segmented attenuation correction using 137Cs single photon transmission,"Using a 137Cs single photon transmission source for transmission scanning allows a higher photon flux and thus, better transmission statistics compared to coincidence transmission scanning. However, 137Cs suffers from a high scatter fraction as well as emission contamination, both leading to an underestimation of the attenuation values. On our NaI- and GSO-systems this is currently compensated by subtracting emission contamination, scatter-scaling and re-mapping. Histogram based segmentation, widely used to shorten the scan time on 68Ge devices, is inherently capable of compensating for a potential bias in the attenuation values. We have investigated segmented attenuation correction for 137Cs transmission scans with NaI(Tl) PET scanners in previous work, and came to the conclusion that our current processing was superior to the formerly used segmentation routine. In this paper we re-investigate segmentation, however, using a more sophisticated algorithm. Our focus was mainly to improve the accuracy of our transmission scans rather than shorten the scan times. However, the potential to reduce the scan duration was investigated as well",2001,0, 7009,Faults Location in High Voltage Transmission System using ICA,"Various methods for fault location in transmission lines have been proposed in the literature. This work presents an alternative method based upon the analysis of independent components (ICA) to localize the distance at which single-phase, two-phase, two-phase grounding and three-phase faults occur in a 500 kV transmission system, starting from different angles of fault incidence at different distances along the line, together with fault signals or travelling waves subject to other perturbations unrelated to the desired fault signal at the fault distance placement. Simulation results of a 500 kV high voltage transmission system obtained with the Alternative Transients Program (ATP) software show that the proposed methodology implemented with Matlab is a rather efficient tool for locating various fault types.",2007,0, 7010,On the Prevalence of Sensor Faults in Real-World Deployments,"Various sensor network measurement studies have reported instances of transient faults in sensor readings. In this work, we seek to answer a simple question: How often are such faults observed in real deployments?
To do this, we first explore and characterize three qualitatively different classes of fault detection methods. Rule-based methods leverage domain knowledge to develop heuristic rules for detecting and identifying faults. Estimation methods predict ""normal"" sensor behavior by leveraging sensor correlations, flagging anomalous sensor readings as faults. Finally, learning-based methods are trained to statistically identify classes of faults. We find that these three classes of methods sit at different points on the accuracy/robustness spectrum. Rule-based methods can be highly accurate, but their accuracy depends critically on the choice of parameters. Learning methods can be cumbersome, but can accurately detect and classify faults. Estimation methods are accurate, but cannot classify faults. We apply these techniques to four real-world sensor data sets and find that the prevalence of faults as well as their type varies with data sets. All three methods are qualitatively consistent in identifying sensor faults in real world data sets, lending credence to our observations. Our work is a first step towards automated on-line fault detection and classification.",2007,0, 7011,Threshold calculation for segmented attenuation correction in PET with histogram fitting,"Various techniques for segmented attenuation correction (SAC) have been shown to be capable of reducing transmission scan time significantly and performing accurate image quantification. The majority of well established methods are based on analyzing attenuation histograms to classify the main tissue components, which are lung and soft tissue. Methods using statistical approaches, i.e. class variances, to separate two clusters of a measured attenuation map have been shown to perform accurate attenuation correction at a scan time within a range of 2-3 min, but may fail due to peak deformations, which occur when the transmission scan time is further reduced. The authors implemented a new method for segmented attenuation correction with the aim of minimizing the transmission scan time and increasing the robustness for extremely short scan times using a coincidence transmission device. The implemented histogram fitting segmentation (HFS) allows accurate threshold calculation without assuming normally distributed peaks in the histogram, by adapting a suitable function to the soft tissue peak. The algorithm uses an estimated lung position (ELP) for patient contour finding and lung segmentation. Iterative reconstruction is used to generate the transmission images",2001,0, 7012,Content-based image retrieval of Web surface defects with PicSOM,"This work describes the application of PicSOM, a content-based image retrieval (CBIR) system based on self-organizing maps, on a defect image database containing 2004 images from a Web inspection system. Six feature descriptors from the MPEG-7 standard and an additional shape descriptor developed for surface defect images are used in the experiments. The classification performance of the descriptors is evaluated using K-nearest neighbor (KNN) leave-one-out cross-validation and PicSOM's built-in CBIR analysis system. The KNN results show good performance from three MPEG-7 descriptors and our shape descriptor.
The CBIR results using these descriptors show that PicSOM's SOM-based indexing engine yields efficient and accurate retrieval of similar defect images from our database.",2004,0, 7013,Optimizing the error recovery capabilities of LDPC-staircase codes featuring a Gaussian elimination decoding scheme,"This work focuses on LDPC codes for the packet erasure channel, also called AL-FEC (application-level forward error correction codes). Previous work has shown that the erasure recovery capabilities of LDPC-triangle and LDPC-staircase AL-FEC codes can be greatly improved by means of a Gaussian elimination (GE) decoding scheme, possibly coupled to a preliminary Zyablov iterative decoding (ID) scheme. Thanks to the GE decoding, the LDPC-triangle codes were very close to an ideal code. Although the LDPC-staircase performance was also improved, these codes were not as close to an ideal code as the LDPC-triangle codes were. The first goal of this work is to reduce the gap between the LDPC-staircase codes and the theoretical limit. We show that a simple modification of the parity check matrix can significantly improve their recovery capabilities when using GE decoding. Unfortunately, the performance of the same codes featuring an ID is negatively impacted, as is the decoding complexity. The second goal of this work is therefore to find an appropriate balance between all these aspects.",2008,0, 7014,Multi-Network-Feedback-Error-Learning in pelletizing plant control,"This work presents a control application in an industrial process of iron pellet cooking in an important mining company in Brazil. This work uses adaptive control in order to improve the performance of the conventional controller already installed in the plant. The main strategy applied here is known as Multi-Network-Feedback-Error-Learning (MNFEL); it uses multiple neural networks in the Feedback-Error-Learning (FEL) strategy. FEL is a control strategy in which a neural network (NN) learns to improve the control actuation of a Conventional Feedback Controller (CFC), in this case a Proportional-Integral-Derivative (PID) controller. The advantage of the FEL strategy is to provide cooperation between the adaptive controller and the conventional controller. The NN learns not only the actuation necessary for the control, but new actions can be acquired as a consequence of changes in the process. The approach of MNFEL is to add a new neural network whenever the network's error stops decreasing, thus avoiding the conventional approach of restarting the learning of the NN. It is emphasized that MNFEL can be used when one wishes to improve the results obtained with the FEL strategy, simply by adding more neural networks to the system. That is a good option because FEL improves the performance of the CFC, and MNFEL, as an improvement of FEL, improves the performance of the control system even further. In this work, because the mathematical model of the plant is unknown, a neural model of the plant is also presented in order to simulate the control of the process. In a simulation environment, PID, FEL and MNFEL strategies are compared and the results are discussed.",2010,0, 7015,Multiple Description Coding of 3D Geometry with Forward Error Correction Codes,"This work presents a multiple description coding (MDC) scheme for compressed three dimensional (3D) meshes based on forward error correction (FEC). It allows flexible allocation of coding redundancy for reliable transmission over error-prone channels.
The proposed scheme is based on progressive geometry compression, which is performed by using a wavelet transform and a modified SPIHT algorithm. The proposed algorithm is optimized for varying packet loss rates (PLR) and channel bandwidth. Modeling the distortion-rate function considerably decreases the computational complexity of bit allocation.",2007,0, 7016,Processing of Ultrasonic Array Signals for Characterizing Defects. Part I: Signal Synthesis,"This work presents a novel procedure to characterize damage using an array of ultrasonic measurements in a generalized model-based inversion scheme, which integrates the complete information recorded from the measurements. In the past, we proposed some idealized nondestructive evaluation test methods with emphasis on the numerical results, but it is necessary to develop the techniques in greater detail in order to apply the techniques to real conditions. Our detection principle is based on the measurement and inversion of frequency-domain data combined with a reduced set of output parameters. The approach is developed and tested for the case of an aluminum specimen with a synthetic array of point contact ultrasonic transmitters and receivers. The first part of this two-part paper is focused on numerical synthesis of the experimental measurements using the boundary element method for a general ultrasonic propagation model. This part also deals with the deconvolution by comparing the data measured from the damaged and undamaged specimens. The deconvolution technique allows us to calibrate the data by taking into account the uncertainties due to mechanical properties, input signal, and other coherent noise. The second part of the paper presents the inversion of the measurements to obtain the parameters and ultimately to predict the position and size of the real defect.",2007,0, 7017,Fractal-ANN Tool for Classification of Impulse Faults in Transformers,"Transformers are impulse tested in the laboratory to assess their insulation strength against atmospheric lightning strikes. Inadequate insulation may cause the transformer winding to fail during such tests. Detection of such faults is an important issue for repair and maintenance of such transformers. This paper describes the application of the concept of fractal geometry to obtain the features inherent in the impulse response of transformers subjected to impulse testing. Fractal features such as fractal dimension (calculated by the Higuchi, Katz, Petrosian and box-counting methods), lacunarity and entropy have been used for the collection of proper features from the current waveforms. An Artificial Neural Network (ANN) has been used to classify the patterns inherent in the features extracted from the fractal analysis. The complex nature of the transformer winding and its impulse response gives rise to a complex non-linear pattern of fractal features. In this regard, the application of the ANN for pattern classification has greatly reduced the complexity and at the same time increased the accuracy of the fault localization and identification system. The proposed tool has been tested to identify the type and location of faults by analyzing experimental impulse responses of analog and digital transformer models.",2005,0, 7018,A fuzzy logic tool for transformer fault diagnosis,"Transformers have complicated winding structures and are subject to electrical, thermal and mechanical stresses. During the last few years, there has been a trend of continuous increase in transformer failures.
It is therefore vital to correctly diagnose their incipient faults for the safety and reliability of an electrical network. Various faults could occur in a transformer, such as overheating, partial discharge and arcing, which can generate various fault-related gases. From dissolved gas analysis, the faults may be determined. The conventional interpretation methods such as IEC codes cannot determine the fault in many cases, especially if there is more than one type of fault existing in a transformer. This paper presents a fuzzy logic tool that can be used to diagnose multiple faults in a transformer and monitor the trend. It has been proved to be a very useful tool for transformer diagnosis and maintenance planning",2000,0, 7019,Robust system design with built-in soft-error resilience,"Transient errors caused by terrestrial radiation pose a major barrier to robust system design. A system's susceptibility to such errors increases in advanced technologies, making the incorporation of effective protection mechanisms into chip designs essential. A new design paradigm reuses design-for-testability and debug resources to eliminate such errors.",2005,0, 7020,PLR: A Software Approach to Transient Fault Tolerance for Multicore Architectures,"Transient faults are emerging as a critical concern in the reliability of general-purpose microprocessors. As architectural trends point toward multicore designs, there is substantial interest in adapting such parallel hardware resources for transient fault tolerance. This paper presents process-level redundancy (PLR), a software technique for transient fault tolerance, which leverages multiple cores for low overhead. PLR creates a set of redundant processes per application process and systematically compares the processes to guarantee correct execution. Redundancy at the process level allows the operating system to freely schedule the processes across all available hardware resources. PLR uses a software-centric approach to transient fault tolerance, which shifts the focus from ensuring correct hardware execution to ensuring correct software execution. As a result, many benign faults that do not propagate to affect program correctness can be safely ignored. A real prototype is presented that is designed to be transparent to the application and can run on general-purpose single-threaded programs without modifications to the program, operating system, or underlying hardware. The system is evaluated for fault coverage and performance on a four-way SMP machine and provides improved performance over existing software transient fault tolerance techniques with a 16.9 percent overhead for fault detection on a set of optimized SPEC2000 binaries.",2009,0, 7021,EGNOS test bed ionospheric corrections under the October and November 2003 storms,"Two severe geomagnetic storms were experienced on October 29-31 and November 20, 2003, significantly degrading the European Geostationary Navigation Overlay Service (EGNOS) Test Bed (ESTB) performance in Europe. Such storms reached extreme values of Kp=9 during the most severe periods. The analysis of the ESTB ionospheric corrections and their effect on the ESTB integrity and accuracy is presented in this work. The ESTB performance during those storms was monitored from a network of global positioning system (GPS) receivers widely distributed over Europe, including the ESTB reference stations, and the geographical degradation of the accuracy is analyzed in this paper.
The correlation between the Kp index and the misleading information (MI) events is also shown. During the most severe stormy periods, the errors in the ESTB ionospheric corrections and their integrity bounds are analyzed to explain the peaks in the navigation system error, which produce MIs. This analysis has been carried out by comparing with direct dual-frequency GPS measurements and global ionospheric maps.",2005,0, 7022,Safety assessment for safety-critical systems including physical faults and design faults,"Two types of faults, design faults and physical faults, are discussed in this paper. Since they are two mutually exclusive and complete fault types on the fault space, the safety assessment of safety-critical computer systems in this paper considers the hazard contribution from both types. A three-state Markov model is introduced to model safety-critical systems. Steady state safety and mean time to unsafe failure (MTTUF) are the two most important metrics for safety assessment. Two homogeneous Markov models are derived from the three-state Markov model to estimate the steady state safety and the MTTUF. The estimation results are generalized given that the fault space is divided into M mutually exclusive and complete types of faults",2006,0, 7023,Assisting Students with Typical Programming Errors During a Coding Session,"Typical programming errors that introductory students make are known to stall and frustrate them during their first few lab coding sessions. Traditionally, lab instructors help with the process of correcting these errors; however, with medium or large size computer labs, the degree of interaction between students and the instructor tends to decline, leaving some students feeling unattended and subsequently perhaps uninterested. To help automate the process of finding and correcting these errors we have devised a solution that collects periodical or on-compile time code snapshots which we analyze upon unsuccessful compilation or whenever a lengthy stall is detected. The analysis is then used to provide feedback to students directly into their Integrated Development Environment (IDE), and generate useful reports that can be perused by both instructors and students later on. This article describes the design and implementation of our solution in the context of BlueJ, a common first year IDE.",2010,0, 7024,Some optimal object-based architectural features for corrective maintenance,"We investigate the relationship between some characteristics of software architecture present at the top-level design stage and the resulting corrective maintainability of five Ada systems. Measures are developed for both internal and external complexity for the subset of packages, within our five projects, that were changed to correct a software fault. These measures involve the context coupling of packages and the number of visible declarations that can be exported by a package. A relationship establishing the optimal number of object couples as a function of the mean number of visible declarations is empirically estimated based on the faults contained within our projects. We find that the optimal number of object couples varies inversely with the mean number of visible declarations.
When initially designing a system, or when making modifications to an existing system, this relationship can be used to provide guidance for choosing the most maintainable design among alternative designs.",2003,0, 7025,Memory fault tolerance software mechanisms: design and configuration support through SWN models,"We present a case study of a software fault tolerance mechanism, the distributed memory, designed and implemented within the European projects TIRAN and DEPAUDE, and currently under study within the Italian project ISIDE. The studied mechanisms are part of a complete framework of general purpose software fault tolerance mechanisms. We show a method for the compositional construction of models of the DM and of the environment in which it operates, expressed in the stochastic well formed nets (SWN) formalism. Different versions of the submodels, at different detail levels, are presented and compared using some behaviour inheritance notions taken from the literature.",2003,0, 7026,Precision and error analysis of MATLAB applications during automated hardware synthesis for FPGAs,"We present a compiler that takes high level signal and image processing algorithms described in MATLAB and generates optimized hardware for an FPGA with external memory. We propose a precision analysis algorithm to determine the minimum number of bits required by an integer variable and a combined precision and error analysis algorithm to infer the minimum number of bits required by a floating point variable. Our results show that on average, our algorithms generate hardware requiring a factor of 5 less FPGA resources in terms of the configurable logic blocks (CLBs) consumed as compared to the hardware generated without these optimizations. We show that our analysis results in a reduction in the size of lookup tables for functions like sin, cos, sqrt, exp, etc. Our precision analysis also enables us to pack various array elements into a single memory location to reduce the number of external memory accesses. We show that such a technique improves the performance of the generated hardware by an average of 35%",2001,0, 7027,"Reliable 3D surface acquisition, registration and validation using statistical error models","We present a complete data acquisition and processing chain for the reliable inspection of industrial parts considering anisotropic noise. Data acquisition is performed with a stripe projection system that was modeled and calibrated using photogrammetric techniques. Covariance matrices are attached individually to points during 3D coordinate computation. Different datasets are registered using a new multi-view registration technique. In the validation step, the registered datasets are compared with the CAD model to verify that the measured part meets its specification. While previous methods have only considered the geometrical discrepancies between the sensed part and its CAD model, we also consider statistical information to decide whether the differences are significant",2001,0, 7028,Defect Tolerance Based on Coding and Series Replication in Transistor-Logic Demultiplexer Circuits,"We present a family of defect tolerant transistor-logic demultiplexer circuits that can defend against both stuck-ON (short defect) and stuck-OFF (open defect) transistors. Short defects are handled by having two or more transistors in series in the circuit, controlled by the same signal.
Open defects are handled by having two or more parallel branches in the circuit, controlled by the same signals, or more efficiently, by using a transistor-replication method based on coding theory. These circuits are evaluated, in comparison with an unprotected demultiplexer circuit, by: 1) modeling each circuit's ability to tolerate defects and 2) calculating the cost of the defect tolerance as each circuit's redundancy factor R, which is the relative number of transistors required by the circuit. The defect-tolerance model takes the form of a function giving the failure probability of the entire demultiplexer circuit as a function of the defect probabilities of its component transistors, for both defect types. With the advent of defect tolerance as a new design goal for the circuit designer, this new form of performance analysis has become necessary.",2007,0, 7029,Hardware-based Error Rate Testing of Digital Baseband Communication Systems,"We present a flexible architecture for evaluating the bit-error-rate (BER) performance of prototype digital baseband communication systems. Using an efficient elastic buffer interface, an arbitrary baseband module can be added to the cascaded architecture of a digital baseband communication system, independent of the module's operating rate, its position in the cascade structure, and its latency. The proposed BER tester uses an accurate fading channel model and a Gaussian noise generator to provide a realistic and repeatable test environment in the laboratory. This evaluation environment should reduce the need for time-consuming field tests, hence reducing the time-to-market and increasing productivity.",2008,0, 7030,Designing Run-Time Fault-Tolerance Using Dynamic Updates,"We present a framework for designing run-time fault-tolerance using dynamic program updates triggered by faults. This is an important problem in the design of autonomous systems as it is often the case that a running program needs to be upgraded to its fault-tolerant version once faults occur. We formally state fault-triggered program updates as a design problem. We then present a sound and complete algorithm that automates the design of fault-triggered updates for replacing a program that does not tolerate faults with a fault-tolerant version thereof at run-time. We also define three classes of fault-triggered dynamic updates that tolerate faults during the update. We demonstrate our approach in the context of a fault-triggered update for the gate controller of a parking lot.",2007,0, 7031,System-level hardware-based protection of memories against soft-errors,"We present a hardware-based approach to improve the resilience of a computer system against errors occurring in the main memory with the help of error detecting and correcting (EDAC) codes. Checksums are placed in the same type of memory locations and addressed in the same way as normal data. Consequently, the checksums are accessible from the exterior of the main memory just as normal data and this enables implicit fault-tolerance for interconnection and solid-state secondary storage sub-systems. A small hardware module is used to manage the sequential retrieval of checksums each time the integrity of the data accessed by the processor sub-system needs to be verified.
The proposed approach has the following properties: (a) it is cost efficient since it can be used with simple storage and interconnection sub-systems that do not possess any inherent EDAC mechanism, (b) it allows on-line modifications of the memory protection levels, and (c) no modification of the application software is required.",2009,0, 7032,Collecting broken frames: Error statistics in IEEE 802.11b/g links,"We present a measurement method that allows us to capture the complete set of all PSDU (PLCP Service Data Unit) transmissions and receptions in live IEEE 802.11b/g links with very high timing resolution. This tool provides an in-depth view of the statistics of frame-losses as it makes it possible to distinguish between different loss types such as complete miss, partial corruption and physical-layer capture. Getting access to these low-level statistics on nodes that actively participate in transmissions themselves is a challenging task since the software interface provided to the network layer needs to remain untouched and cannot be used for tracing. In this contribution we describe in detail how to non-intrusively circumvent these restrictions and also present initial results.",2008,0, 7033,Fault localization with nearest neighbor queries,"We present a method for performing fault localization using similar program spectra. Our method assumes the existence of a faulty run and a larger number of correct runs. It then selects according to a distance criterion the correct run that most resembles the faulty run, compares the spectra corresponding to these two runs, and produces a report of ""suspicious"" parts of the program. Our method is widely applicable because it does not require any knowledge of the program input and no more information from the user than a classification of the runs as either ""correct"" or ""faulty"". To experimentally validate the viability of the method, we implemented it in a tool, Whither, using basic block profiling spectra. We experimented with two different similarity measures and the Siemens suite of 132 programs with injected bugs. To measure the success of the tool, we developed a generic method for establishing the quality of a report. The method is based on the way an ""ideal user"" would navigate the program using the report to save effort during debugging. The best results obtained were, on average, above 50%, meaning that our ideal user would avoid looking at half of the program.",2003,0, 7034,Asynchronous stochastic convex optimization over random networks: Error bounds,"We consider a distributed multi-agent network system where the goal is to minimize the sum of convex functions, each of which is known (with stochastic errors) to a specific network agent. We are interested in asynchronous algorithms for solving the problem over a connected network where the communications among the agents are random. At each time, a random set of agents communicate and update their information. When updating, an agent uses the (sub)gradient of its individual objective function and its own stepsize value. The algorithm is completely asynchronous as it neither requires the coordination of agent actions nor the coordination of the stepsize values. We investigate the asymptotic error bounds of the algorithm with a constant stepsize for strongly convex and just convex functions. Our error bounds capture the effects of agent stepsize choices and the structure of the agent connectivity graph.
The error bound scales at best as m in the number m of agents when the agent objective functions are strongly convex.",2010,0, 7035,Fault management in IP-over-WDM networks: WDM protection versus IP restoration,"We consider an IP-over-WDM network in which network nodes employ optical crossconnects and IP routers. Nodes are connected by fibers to form a mesh topology. Any two IP routers in this network can be connected together by an all-optical wavelength-division multiplexing (WDM) channel, called a lightpath, and the collection of lightpaths that are set up form a virtual topology. In this paper, we concentrate on single fiber failures, since they are the predominant form of failures in optical networks. Since each lightpath is expected to operate at a rate of a few gigabits per second, a fiber failure can cause a significant loss of bandwidth and revenue. Thus, the network designer must provide a fault-management technique that combats fiber failures. We consider two fault-management techniques in an IP-over-WDM network: (1) provide protection at the WDM layer (i.e., set up a backup lightpath for every primary lightpath) or (2) provide restoration at the IP layer (i.e., overprovision the network so that after a fiber failure, the network should still be able to carry all the traffic it was carrying before the fiber failure). We formulate these fault-management problems mathematically, develop heuristics to find efficient solutions in typical networks, and analyze their characteristics (e.g., maximum guaranteed network capacity in the event of a fiber failure and the recovery time) relative to each other",2002,0, 7036,Collaborative control of robot motion: robustness to error,"We consider collaborative control systems, where multiple sources share control of a single robot. These sources could come from multiple sensors (sensor fusion), multiple control processes (subsumption), or multiple human operators. Reports suggest that such systems are highly fault tolerant, even with large numbers of sources. We develop a formal model, modeling sources with finite automata. A collaborative ensemble of sources generates a single stream of incremental steps to control the motion of a point robot moving in the plane. We first analyze system performance with a uniform ensemble of well-behaved deterministic sources. We then model malfunctioning sources that go silent or generate inverted control signals. We discover that performance initially improves in the presence of malfunctioning sources and remains robust even when a sizeable fraction of sources malfunction. Initial tests suggest similar results with non-deterministic (random) sources. The formal model may also provide insight into how humans can share control of an online robot",2001,0, 7037,Label-based path switching and error-free forwarding in a prototype optical burst switching node using a fast 4×4 optical switch and shared wavelength conversion,"We demonstrate for the first time optical burst switching using a fast and scalable EO switch and a shared wavelength converter for contention resolution. 3.5 m label routing of variable-length bursts and error-free operation was achieved at 10 Gbps payload.",2006,0, 7038,Zero-error information hiding capacity of digital images,"We derive a theoretical capacity for digital image watermarking with zero transmission errors. We present three different discrete memoryless channel models to represent the watermarking process.
Given the magnitude bound of noise set by applications and the acceptable watermark magnitude determined by the just-noticeable distortion, we estimate the zero-error capacity by applying Shannon's (1948) adjacency-reducing mapping technique. The capacity we estimate here corresponds to a deterministic guarantee of zero error, different from the traditional theorem approaching zero error asymptotically",2001,0, 7039,Error-resilient transcoding for video over wireless channels,"We describe a method to maintain quality for video transported over wireless channels. The method is built on three fundamental blocks. First, we use a transcoder that injects spatial and temporal resilience into an encoded bitstream. The amount of resilience is tailored to the content of the video and the prevailing error conditions, as characterized by bit error rate. Second, we derive analytical models that characterize how corruption propagates in a video that is compressed using motion-compensated encoding and subjected to bit errors. Third, we use rate distortion theory to compute the optimal allocation of bit rate among spatial resilience, temporal resilience, and source rate. Furthermore, we use the analytical models to generate the resilience rate distortion functions that are used to compute the optimal resilience. The transcoder then injects this optimal resilience into the bitstream. Simulation results show that using a transcoder to optimally adjust the resilience improves video quality in the presence of errors while maintaining the same input bit rate.",2000,0, 7040,The recovery language approach for software-implemented fault tolerance,"We describe a novel approach for software-implemented fault tolerance that separates error detection from error recovery and offers a distinct programming and processing context for the latter. This allows the application developer to address separately the non-functional aspects of error recovery from those pertaining to the functional behaviour that the user application is supposed to have in the absence of faults. We conjecture that this way only a limited amount of non-functional code intrusion affects the user application, while the bulk of the strategy to cope with errors is to be expressed by the user in a recovery script, conceptually as well as physically distinct from the functional application layer. Such a script is to be written in what we call a recovery language, i.e. a specialised linguistic framework devoted to the management of the fault tolerance strategies that allows one to express scenarios of isolation, reconfiguration, and recovery. These are to be executed on meta-entities of the application with physical or logical counterparts (processing nodes, tasks, or user-defined groups of tasks). The developer is therefore able to modify the fault tolerance strategy with only a few or no modifications in the application part, or vice-versa, tackling more easily and effectively either of these two fronts. This can result in a better maintainability of the target fault-tolerant application and in support for reaching portability of the service while moving the application to different unfavourable environments.
The paper positions and discusses the recovery language approach and a prototypal implementation for embedded applications developed within project TIRAN on a number of distributed platforms",2001,0, 7041,Defect Identification in Large Area Electronic Backplanes,We describe a rapid testing system for active matrix thin-film transistor (TFT) backplanes which enables the identification of many common processing defects. The technique spatially maps the charge feedthrough from TFTs in the pixel and is suited for pixels with switched-capacitor architecture.,2009,0, 7042,Algorithm-based fault tolerance for spaceborne computing: basis and implementations,"We describe and test the mathematical background for using checksum methods to validate results returned by a numerical subroutine operating in a fault-prone environment that causes unpredictable errors in data. We can treat subroutines whose results satisfy a necessary condition of a linear form; the checksum tests compliance with this necessary condition. These checksum schemes are called algorithm-based fault tolerance (ABFT). We discuss the theory and practice of setting numerical tolerances to separate errors caused by a fault from those inherent in finite-precision numerical calculations. Two series of tests are described. The first tests the general effectiveness of the linear ABFT schemes we propose, and the second verifies the correct behavior of our parallel implementation of them. We find that under simulated fault conditions, it is possible to choose a fault detection scheme that for average case matrices can detect 99% of faults with no false alarms, and that for a worst-case matrix population can detect 80% of faults with no false alarms",2000,0, 7043,"Transparent, Incremental Checkpointing at Kernel Level: a Foundation for Fault Tolerance for Parallel Computers","We describe the software architecture, technical features, and performance of TICK (Transparent Incremental Checkpointer at Kernel level), a system-level checkpointer implemented as a kernel thread, specifically designed to provide fault tolerance in Linux clusters. This implementation, based on the 2.6.11 Linux kernel, provides the essential functionality for transparent, highly responsive, and efficient fault tolerance based on full or incremental checkpointing at system level. TICK is completely user-transparent and does not require any changes to user code or system libraries; it is highly responsive: an interrupt, such as a timer interrupt, can trigger a checkpoint in as little as 2.5μs; and it supports incremental and full checkpoints with minimal overhead (less than 6%) with full checkpointing to disk performed as frequently as once per minute.",2005,0, 7044,Aspects for improvement of performance in fault-tolerant software,"We describe the use of aspect-oriented programming to improve performance of fault-tolerant (FT) servers built with middleware support. Its contribution is to shift method call logging from middleware to application level in primary-backup replication. The novelty consists in no burden being placed on application writers, except for a simple component description aiding automatic generation of aspect code. The approach is illustrated by describing how synchronization aspects are woven into an application, and modifications of an FT-CORBA platform to avoid middleware level logging. Evaluation is performed using a telecom application enriched with aspects, running on top of the aspect-supporting platform.
We compare overheads with earlier results from runs on the baseline platform. Experiments show a drop of around 40% of original overheads. This is due to methods starting execution before previous ones end, in contrast to ordering enforced at middleware level where methods are executed sequentially, not adapting to application knowledge.",2004,0, 7045,"Insect Population Inspired Wireless Sensor Networks: A Unified Architecture with Survival Analysis, Evolutionary Game Theory, and Hybrid Fault Models","We envision a wireless sensor network (WSN) as an entity analogous to a biological population with individual nodes mapping to individual organisms and the network architecture mapping to the biological population. In particular, a mobile wireless sensor network consisting of hundreds or more of microchip-driven nodes is analogous to a flying insect population in several important aspects. The notions of lifetime, space distribution and environment apply to both domains. The interactions between individuals, either insects or WSN sensors, can be captured with evolutionary game theory models, in which individuals are the players and reliability is the fitness (payoff) of the players. The evolutionarily stable strategy (ESS) forms the basis of network survivability. Furthermore, we introduce hybrid fault models into the proposed architecture with the notion of ""Byzantine generals playing evolutionary games"" [13]. This unified architecture makes it possible to dynamically (in real time) predict reliability, survivability, and fault tolerance capability of a WSN. On the node level, we introduce survival analysis to model lifetime, reliability and survivability of WSN nodes. By adopting survival analysis, rather than standard engineering reliability theory, we try to address four critical issues: (i) replacing unrealistic constant failure rates with general stochastic models, (ii) replacing unrealistic independent failures with shared frailty models that effectively address common risks or events-related dependence, and with multi-state modeling for common events dependence, (iii) harnessing the censoring-modeling mechanisms to assess the influences of unpredictable malicious events on reliability and survivability, and (iv) addressing spatial aspects of failures with spatial survival analysis, spatial frailty.",2008,0, 7046,Applicability of IEEE maintenance process for corrective maintenance outsourcing-an empirical study,"We establish the context of maintenance outsourcing of mission critical applications by Fortune 500 organizations. We present the results of empirical studies that were conducted at Syntel, a NASDAQ listed application management and e-business solutions company, on 46 software maintenance projects that belonged to various lines of business on the IBM mainframe platform using an automated data collection tool, EQUIP. After establishing that corrective maintenance activities form a significant component of the overall maintenance efforts, we examine the applicability of the IEEE standard maintenance process for corrective maintenance by measuring the efforts spent on the various activities.
We conclude that (a) the processes for each type of maintenance need to be fine-tuned, especially in the context of outsourcing; (b) analysis and testing form a significant part of the corrective maintenance effort; and (c) teams need to carry out other activities such as database reorganization and configuration management that are not defined by the IEEE maintenance process.",2002,0, 7047,Defect oriented fault diagnosis for semiconductor memories using charge analysis: theory and experiments,"We evaluated a diagnostic technique based on the charge delivered to the IC during a transition. Charge computed from the transient supply current is related to the circuit internal activity. A specific activity can be forced into the circuit using appropriate test vectors to highlight possible defect locations. Experimental results from a small test circuit and a 256 K SRAM demonstrate the experimental viability of the technique. The theoretical foundation is also discussed",2001,0, 7048,Attenuation correction for the NIH ATLAS small animal PET scanner,"We evaluated two methods of attenuation correction for the NIH ATLAS small animal PET scanner: 1) a CT-based method that derives 511 keV attenuation coefficients (μ) by extrapolation from spatially registered CT images; and 2) an analytic method based on the body outline of emission images and an empirical μ. A specially fabricated attenuation calibration phantom with cylindrical inserts that mimic different body tissues was used to derive the relationship to convert CT values to μ for PET. The methods were applied to three test data sets: 1) a uniform cylinder phantom, 2) the attenuation calibration phantom, and 3) a mouse injected with [18F] FDG. The CT-based attenuation correction factors were larger in nonuniform regions of the imaging subject, e.g. mouse head, than the analytic method. The two methods had similar correction factors for regions with uniform density and detectable emission source distributions.",2003,0, 7049,Exploring errors in a medication process: an analysis of information delivery,"We examine the prescribing process in a hospital to see whether information systems failure contributes to the occurrence of prescribing errors. J.T. Reason's (1990) model of organisational failure suggests that this may be the case. Reason identified circumstances in work systems and processes where errors may occur. While Reason's model has been applied in a medical context, it has not previously been linked to errors which result from information systems failure. It is held, however, that the model can be used as a predictive tool, suggesting that prescribing errors have an increased likelihood of occurring if one or more of the types of failure identified in Reason's model are present in the existing information delivery process in a hospital. In this paper, we examine the application of Reason's model in predicting prescribing errors and then calculate the extent to which these errors are evident in the hospital ward under examination.",2002,0, 7050,Compiler-directed instruction duplication for soft error detection,"We experiment with compiler-directed instruction duplication to detect soft errors in VLIW datapaths. In the proposed approach, the compiler determines the instruction schedule by balancing the permissible performance degradation with the required degree of duplication.
Our experimental results show that our algorithms allow the designer to perform tradeoff analysis between performance and reliability.",2005,0, 7051,Error-Resilient Performance of Dirac Video Codec Over Packet-Erasure Channel,"Video transmission over the wireless or wired network requires error-resilient mechanisms since compressed video bitstreams are sensitive to transmission errors because of the use of predictive coding and variable length coding. This paper investigates the performance of a simple and low complexity error-resilient coding scheme which combines source and channel coding to protect the compressed bitstream of the wavelet-based Dirac video codec in the packet-erasure channel. By partitioning the wavelet transform coefficients of the motion-compensated residual frame into groups and independently processing each group using arithmetic and forward error correction (FEC) coding, Dirac can achieve robustness to transmission errors, giving video quality that degrades gracefully over a range of packet loss rates up to 30% when compared with conventional FEC-only methods. Simulation results also show that the proposed scheme using multiple partitions can achieve up to 10 dB PSNR gain over its existing un-partitioned format. This paper also investigates the error-resilient performance of the proposed scheme in comparison with H.264 over packet-erasure channel.",2007,0, 7052,HACKER: human and computer knowledge discovered event rules for telecommunications fault management,"Visualization integrated with data mining can offer 'human-assisted computer discovery' and 'computer-assisted human discovery'. Such a visual environment reduces the time to understand complex data, thus enabling practical solutions to many real world problems to be developed far more rapidly than either humans or computers operating independently. In doing so, the remarkable perceptual abilities that humans possess can be utilized, such as the capacity to recognize patterns quickly, and detect the subtlest changes in size, color, shape, movement or texture. One such complex real world problem is fault management in global telecommunication systems. These systems have a large amount of built-in redundancy to ensure robustness and quality of service. Unfortunately, this means that when a fault does occur, it can trigger a cascade of alarm events as individual parts of the system discover and report failures, making it difficult to locate the origin of the fault. This alarm behavior has been described as appearing to an operator as non-deterministic, yet it does result in a large data mountain that is ideal for data mining. The paper presents a visualization data mining prototype that incorporates the principles of human and computer discovery, the combination of computer-assisted human discovery with human-assisted computer discovery through a three-tier framework. The prototype is specifically designed to assist in the semi-automatic discovery of previously unknown alarm rules that can then be utilized in commercial rule-based component solutions, ""business rules"", which are at the heart of many of today's fault management systems.",2002,0, 7053,Adopting SCTP and MPLS-TE mechanism in VoIP architecture for fault recovery and resource allocation,"VoIP applications allow us to transmit voice data over the Internet.
In this paper, a VoIP architecture with fault recovery and resource allocation, adopting the Multi-homed Stream Control Transmission Protocol (SCTP) and the Multi-Protocol Label Switching Traffic Engineering (MPLS-TE) mechanism, is proposed. In the proposed architecture, SCTP is employed to transmit SIP messages while the MPLS-TE mechanism is applied to set up the voice data transmitting path. With the multi-homing capability of SCTP, the voice call failure rate can be reduced and network resources can be utilized efficiently. Through the MPLS-TE mechanism, traffic engineering functions such as network resources optimization, strict quality of service (QoS) voice data delivery, and fast recovery upon link or node failures can be ensured. We simulate three architectures in the network simulator (NS2) software and compare them under different network conditions. The simulation results reveal the applicability of the proposed architecture.",2009,0, 7054,A Fault Diagnosis and Security Framework for Water Systems,"Water resources management is a key challenge that will become even more crucial in the years ahead. From a system-theoretic viewpoint, there is a need to develop rigorous design and analysis tools for control, fault diagnosis and security of water distribution networks. This work develops a mathematical framework suitable for fault diagnosis and security in water systems; in addition it investigates the problem of determining a suitable set of locations for sensor placement in large-scale drinking water distribution networks such that contaminant detection is optimized. This work contributes to the research by presenting a problem formulation where the state-space representation of the propagation and reaction dynamics is coupled with the impact dynamics describing the damage caused by a contamination of the water distribution network. We propose a solution methodology for the sensor-placement problem by considering several risk-objectives, and by utilizing various optimization and evolutionary computation techniques. To illustrate the methodology, we present results of a simplified and a real water distribution network.",2010,0, 7055,Sensors Fault Diagnosis of Hydraulic Automatic Gauge Control System Based on Wavelet Neural Network,"A wavelet neural network model is established, and an online sensor fault diagnosis method based on this model is proposed; fault diagnosis of multiple sensors is realized through the establishment of a separate neural network prediction model for every sensor. Simulation experiments are carried out with a large amount of sensor data from the HAGC (Hydraulic Automatic Gauge Control) system of a modern strip mill, and the feasibility of the method is verified.",2010,0, 7056,Mining the coherence of GNOME bug reports with statistical topic models,"We adapt latent Dirichlet allocation to the problem of mining bug reports in order to define a new information-theoretic measure of coherence. We then apply our technique to a snapshot of the GNOME Bugzilla database consisting of 431,863 bug reports for multiple software projects. In addition to providing an unsupervised means for modeling report content, our results indicate substantial promise in applying statistical text mining algorithms for estimating bug report quality.
Complete results are available from our supplementary materials Web site at http://sourcerer.ics.uci.edu/msr2009/gnome_coherence.html.",2009,0, 7057,Forward Error Correction for Multipath Media Streaming,"We address the problem of joint optimal rate allocation and scheduling between media source rate and error protection rate in scalable streaming applications over lossy multipath networks. Starting from a distortion representation of the received media information at the client, we propose a novel optimization framework in which we analyze the performance of the most relevant forward error correction and scheduling techniques. We describe both optimal and heuristic algorithms that find solutions to the rate allocation and scheduling problem, and emphasize the main characteristics of the compared techniques. Our results show that efficient unequal error protection schemes improve the quality of the streaming process. At the same time we emphasize the importance of priority scheduling of the information over the best available network paths, which outperforms traditional first-in-first-out models or network flooding mechanisms.",2009,0, 7058,Object-oriented executives and components for fault tolerance,"We have created two kinds of reusable, object-oriented software components to facilitate building fault tolerant applications. Executive components orchestrate familiar software fault tolerance techniques in a data type independent manner. Building block components provide fault tolerance utilities and application-specific functions. We use a three-level class framework (or design pattern) to create data type and application-independent classes at the highest level, define data type-dependent base classes in the middle level, and organize application and data type-specific derived classes at the lowest level. This approach employs polymorphism, pointer conversions and Run-Time Type Information. These techniques have successfully handled applications with dissimilar data types. Reusing these components greatly speeds the development of applications that exploit software fault tolerance techniques",2001,0, 7059,Prediction models for software fault correction effort,"We have developed a model to explain and predict the effort associated with changes made to software to correct faults while it is undergoing development. Since the effort data available for this study is ordinal in nature, ordinal response models are used to explain the effort in terms of measures of fault locality and the characteristics of the software components being changed. The calibrated ordinal response model is then applied to two projects not used in the calibration to examine predictive validity",2001,0, 7060,A Case Study of Defect-Density and Change-Density and their Progress over Time,"We have performed an empirical case study, investigating defect-density and change-density of a reusable framework compared with one application reusing it over time at a large Oil and Gas company in Norway, Statoil ASA. The framework, called JEF, consists of seven components grouped together, and the application, called DCF, reuses the framework, without modifications to the framework. We analyzed all trouble reports and change requests from three releases of both. Change requests in our study covered any changes (not correcting defects) in the requirements, while trouble reports covered any reported defects. 
Additionally, we have investigated the relation between defect-density and change-density both for the reusable JEF framework and the application. The results revealed that the defect-density of the reusable framework was lower than that of the application. The JEF framework had higher change-density in the first release, but lower change-density than the DCF application over the successive releases. For the DCF application, on the other hand, a slow increase in change-density appeared. On the relation between change-density and defect-density for the JEF framework, we found a decreasing defect-density and change-density. The DCF application here showed a decreasing defect-density, with an increasing change-density. The results show that the quality of the reusable framework improves and it becomes more stable over several releases, which is important for the reliability of the framework and for assigning resources",2007,0, 7061,Novel Agent-Based Management for Fault-Tolerance in Network-on-Chip,"We introduce a novel agent-based reconfiguring concept for future network-on-chip (NoC) systems. The necessary properties to increase architecture level fault tolerance are introduced. The system control is modeled as a multi-level agent hierarchy that is able to increase application fault-tolerance and performance with autonomous reactions of agents. The agent technology adds a system-level intelligence layer to the traditional NoC system design. The architecture and functions of this system are described at the conceptual level. Communication and reconfiguring data flows are presented as study cases. Principles of reconfiguration of a NoC in a faulty environment are demonstrated and simulated. Probability of reconfiguration success is measured with different latency requirements and amounts of redundancy by Monte Carlo simulations. The effect of network topology on reconfiguration of a faulty mesh was also investigated in the simulations.",2007,0, 7062,Towards Optimal Resource Allocation in Partial-Fault Tolerant Applications,"We introduce Zen, a new resource allocation framework that assigns application components to node clusters to achieve high availability for partial-fault tolerant (PFT) applications. These applications have the characteristic that under partial failures, they can still produce useful output though the output quality may be reduced. Thus, the primary goal of resource allocation for PFT applications is to prevent, delay, or minimize the impact of failures on the application output quality. This paper is the first to approach this resource allocation problem from a theoretical perspective, and obtains a series of results regarding component assignments that provide the highest service availability under the constraints imposed by the application data flow graph and the hosting clusters. We show that (1) even simple versions of this resource allocation problem are NP-Hard, (2) a 2-approximate polynomial-time algorithm works for tree topologies, and (3) a simple greedy component placement performs well in practice for general application topologies. We implement a system prototype to study the application availability achieved by Zen compared to failure-oblivious placement, replication, and Zen+replication.
Our experimental results show that three PFT applications achieve significant data output quality and availability benefits using Zen.",2008,0, 7063,Design of Timing Error Detectors for Orthogonal Space-Time Block Codes,"We present a method for the design of low complexity timing error detectors in orthogonal space-time block coding (OSTBC) receivers. A general expression for the S-curve of timing error detectors is derived. Based on this result, we obtain sufficient conditions for a difference of threshold crossings timing estimate that is robust to channel fading. A number of timing error detectors for 3- and 4-transmit antenna codes are presented. The performance is evaluated by examining their tracking capabilities within a timing loop of an OSTBC receiver. Symbol-error-rate results are presented showing negligible loss due to timing synchronization. In addition, we study the performance as a function of the timing drift and show that the receiver is able to track up to the normalized timing drift bandwidth of 0.001",2006,0, 7064,A Model of Bug Dynamics for Open Source Software,"We present a model to describe open source software (OSS) bug dynamics. We validated the model using real world data and performed simulation experiments. The results show that the model has the ability to predict bug occurrences and failure rates. The results also reveal that there exists an optimal release cycle for effectively managing OSS quality.",2008,0, 7065,Transient-fault tolerant VHDL descriptions: a case-study for area overhead analysis,"We present a new approach to design reliable complex circuits with respect to transient faults in memory elements. These circuits are intended to be used in harsh environments, such as those exposed to radiation. During the design flow this methodology is also used to perform an early estimation of the obtained reliability level. Usually, this reliability estimation step is performed in the laboratory, by means of radiation facilities (particle accelerators). By doing so, the early-estimated reliability level is used to balance the design process into a trade-off between maximum area overhead due to the insertion of redundancy and the minimum reliability required for a given application. This approach is being automated through the development of a CAD tool (FT-PRO). Finally, we also present a case study of a simple microprocessor used to analyze the FT-PRO performance in terms of the area overhead required to implement the fault-tolerant circuit.",2000,0, 7066,Safety optimization: a combination of fault tree analysis and optimization techniques,"We present a new form of quantitative safety analysis: safety optimization. This method is a combination of fault tree analysis (FTA) and mathematical optimization techniques. With the use of the results of FTA, statistics, and a quantification of the costs of hazards, it allows one to find the optimal configuration of a given system with respect to opposing safety requirements. Furthermore, the system may not only be examined for safety, but also for usability. We illustrate this method on a real-world case study: the height control system of the Elbtunnel in Hamburg.
Safety optimization showed some significant problems in the trustworthiness of the system, yielded optimal values for the configuration of free parameters, and showed possible modifications to improve the system.",2004,0, 7067,A partition-based approach for identifying failing scan cells in scan-BIST with applications to system-on-chip fault diagnosis,"We present a new partition-based fault diagnosis technique for identifying failing scan cells in a scan-BIST environment. This approach relies on a two-step scan chain partitioning scheme. In the first step, an interval-based partitioning scheme is used to generate a small number of partitions, where each element of a partition consists of a set of scan cells. In the second step, additional partitions are created using an earlier-proposed random-selection partitioning method. Two-step partitioning leads to higher diagnostic resolution than a scheme that relies only on random-selection partitioning, with only a small amount of additional hardware. The proposed scheme is especially suitable for a system-on-chip (SOC) composed of multiple embedded cores, where test access is provided by means of a TestRail that is threaded through the internal scan chains of the embedded cores. We present experimental results for the six largest ISCAS-89 benchmark circuits and for two SOCs crafted from some of the ISCAS-89 circuits.",2003,0, 7068,Modeling cross-sensory and sensorimotor correlations to detect and localize faults in mobile robots,"We present a novel framework for learning cross-sensory and sensorimotor correlations in order to detect and localize faults in mobile robots. Unlike traditional fault detection and identification schemes, we do not use a priori models of fault states or system dynamics. Instead, we utilize additional information and a possible source of redundancy that mobile robots have available to them, namely a hierarchical graph representing stages of sensory processing at multiple levels of abstraction and their outputs. We learn statistical models of correlations between elements in the hierarchy, in addition to the control signals, and use this to detect and identify changes in the capabilities of the robot. The framework is instantiated using Self-Organizing Maps, a simple unsupervised learning algorithm. Results indicate that the system can detect sensory and motor faults in a mobile robot and identify their cause, without using a priori models of the robot or its fault states.",2007,0, 7069,Optimizing a highly fault tolerant software RAID for many core systems,"We present a parallel software driver for a RAID architecture to detect and correct corrupted disk blocks in addition to tolerating disk failures. The necessary computations demand parallel execution to avoid the processor being the bottleneck for a RAID with high bandwidth. The driver employs the processing power of multicore and manycore systems. We report on the performance of a prototype implementation on a quadcore processor that indicates linear speedup and promises good scalability on larger machines. We use reordering of I/O orders to ensure balance between CPU and disk load.",2009,0, 7070,A persistent diagnostic technique for unstable defects,"We present a persistent diagnostic technique for unstable defects, such as open defects or delay defects. A new ""segment model"" diagnosis for the completely open defects is discussed.
Here, we not only focus on the behavior of the principal offender, but also on the behavior of the accomplices that cause the unstable behavior of the defect. In this paper, a technique that uses layout information for open fault diagnosis and a testing method for delay faults are discussed. Some experimental results of actual chips are shown.",2002,0, 7071,Automatic Generation of Instructions to Robustly Test Delay Defects in Processors,"We present a technique for generating instruction sequences to test a processor functionally. We target delay defects with this technique using an ATPG engine to generate delay tests locally, a verification engine to map the tests globally, and a feedback mechanism that makes the entire procedure faster. We demonstrate nearly 96% coverage of delay faults with the instruction sequences generated. These instruction sequences can be loaded into the cache to test the processor functionally.",2007,0, 7072,EPIC: profiling the propagation and effect of data errors in software,"We present an approach for analyzing the propagation and effect of data errors in modular software enabling the profiling of the vulnerabilities of software to find 1) the modules and signals most likely exposed to propagating errors and 2) the modules and signals which, when subjected to error, tend to cause more damage than others from a systems operation point-of-view. We discuss how to use the obtained profiles to identify where dependability structures and mechanisms will likely be the most effective, i.e., how to perform a cost-benefit analysis for dependability. A fault-injection-based method for estimation of the various measures is described and the software of a real embedded control system is profiled to show the type of results obtainable by the analysis framework.",2004,0, 7073,Value-based scheduling of distributed fault-tolerant real-time systems with soft and hard timing constraints,"We present an approach for scheduling of fault-tolerant embedded applications composed of soft and hard real-time processes running on distributed embedded systems. The hard processes are critical and must always complete on time. A soft process can complete after its deadline and its completion time is associated with a value function that characterizes its contribution to the quality-of-service of the application. We propose a quasi-static scheduling algorithm to generate a tree of fault-tolerant distributed schedules that maximize the application's quality value and guarantee hard deadlines.",2010,0, 7074,Fault Management based on peer-to-peer paradigms; A case study report from the CELTIC project Madeira,"We present an approach to fault management based on an architecture for distributed and collaborative network management as developed in the CELTIC project Madeira. It uses peer-to-peer communication facilities and a logical overlay network facilitating decentralized and iterative alarm processing and correlation. We argue that such an approach might help to overcome key challenges that are posed by NGN scenarios to traditional centralized network management systems. Its feasibility is demonstrated by means of a case study from the area of wireless mesh networks, where an application prototype has been developed.",2007,0, 7075,Techniques for fast transient fault grading based on autonomous emulation [IC fault tolerance evaluation],"Very deep submicron and nanometer technologies have notably increased integrated circuit (IC) sensitivity to radiation.
As a result, errors are now appearing in ICs working at the Earth's surface. Hardened circuits are currently required in many applications where fault tolerance (FT) was not a requirement until very recently. The use of platform FPGAs for the emulation of single-event upset (SEU) effects is gaining attention in order to speed up the FT evaluation. In this work, a new emulation system for FT evaluation with respect to SEU effects is proposed, providing shorter evaluation times by performing the entire evaluation process in the FPGA and avoiding emulator-host communication bottlenecks.",2005,0, 7076,Error resilience video transcoding for wireless communications,"Video communication through wireless channels is still a challenging problem due to the limitations in bandwidth and the presence of channel errors. Since many video sources are originally coded at a high rate and without considering the different channel conditions that may be encountered later, a means to repurpose this content for delivery over a dynamic wireless channel is needed. Transcoding is typically used to reduce the rate and change the format of the originally encoded video source to match network conditions and terminal capabilities. Given the existence of channel errors that can easily corrupt video quality, there is also the need to make the bitstream more resilient to transmission errors. In this article we provide an overview of the error resilience tools found in today's video coding standards and describe a variety of techniques that may be used to achieve error-resilient video transcoding.",2005,0, 7077,An Experimental Study of Packet Loss and Forward Error Correction in Video Multicast over IEEE 802.11b Network,"Video multicast over wireless local area networks (WLANs) faces many challenges due to varying channel conditions and limited bandwidth. A promising solution to this problem is the use of packet level forward error correction (FEC) mechanisms. However, the adjustment of the FEC rate is not a trivial issue due to the dynamic wireless environment. This decision becomes more complicated if we consider the multi-rate capability of the existing wireless LAN technology that adjusts the transmission rates based on the channel conditions and the coverage range. In order to explore the above issues we conducted an experimental study of the packet loss behavior of the IEEE 802.11b protocol. In our experiments we considered different transmission rates under the broadcast mode in indoor and outdoor environments. We further explored the effectiveness of packet level FEC for video multicast over wireless networks with multi-rate capability. In order to evaluate the system quantitatively, we implemented a prototype using open source drivers and socket programming. Based on the experimental results, we provide guidelines on how to efficiently use FEC for wireless video multicast in order to improve the overall system performance. We show that the Packet Error Rate (PER) increases exponentially with distance and using a higher transmission rate together with stronger FEC is more efficient than using a lower transmission rate with weaker FEC for video multicast.",2009,0, 7078,Composing Emergent Behaviour in a Fault Intolerant System,"We adopt a layered approach to autonomic systems to simplify the specification, implementation and evaluation of self-* behaviours.
Our approach is to develop a suite of tools that interoperate in a layered manner to permit generic autonomic behaviours to be incorporated into a wide range of applications. The layered architecture has been designed such that autonomy is achieved in two dimensions. Vertical autonomy exists between the independently functioning layers whilst horizontal autonomy is achieved between nodes, which remain independent while having specific roles in each layer. This paper exemplifies our approach by presenting an emergent solution to the management of air-space. Simulation and visualisation models have been developed for both the cluster and application layers. We analyze performance using these simulations",2005,0, 7079,Text mining for a clear picture of defect reports: a praxis report,"We applied the text mining categorization technology in the publicly available IBM Enterprise Information Portal V8.1 to more than 15,000 customer-reported product problem records. We used a proven software quality category set to categorize these problem records into different areas of interest. Our intent was to develop a clear picture of potential areas for quality improvement in each of the software products reviewed, and to provide this information to development's management. We present the benefits that can be gained from categorizing problem records, as well as the limitations.",2003,0, 7080,Geometric robust watermarking based on a new mesh model correction approach,"While geometric attacks are one of the most challenging problems in watermarking, random bending is probably the most difficult to handle among all geometric attacks. We present a watermarking scheme based on a new deformable mesh model to combat such attacks. The distortion is corrected using the distortion field (DF) estimated by minimizing the matching error between the meshes of the original and the attacked image. A CDMA watermarking method is used for testing the proposed method, which embeds a multi-bit signature in the DCT domain and uses mesh model correction to achieve robustness. Experiments show that the proposed scheme can survive a wide range of random bending attacks.",2002,0, 7081,A learning-based approach for fault tolerance on grid resources scheduling,"While the Grid environment has developed rapidly, the importance of fault tolerance has unfortunately received little attention in Grid resource management. On the other hand, the cost of computing by grid is important because the grid is an economy-based system. Most organizations intend to spend little on their own computations on the grid. Therefore, using a better approach to resource scheduling to avoid faults is necessary. This paper presents a new approach to fault tolerance mechanisms for resource scheduling on the grid, using the Case-Based Reasoning technique in a local fashion. This approach applies a specific structure in order to provide fault tolerance between executor nodes and keep the system in a safe state with minimal data transfer. This algorithm increases fault-tolerance confidence; therefore, the performance of the grid will be high.",2009,0, 7082,DMTracker: finding bugs in large-scale parallel programs by detecting anomaly in data movements,"While software reliability in large-scale systems becomes increasingly important, debugging in large-scale parallel systems remains a daunting task.
This paper proposes an innovative technique to automatically find hard-to-detect software bugs that can cause severe problems such as data corruptions and deadlocks in parallel programs by detecting their abnormal behaviors in data movements. Based on the observation that data movements in parallel programs typically follow certain patterns, our idea is to extract data movement (DM)-based invariants at program runtime and check for violations of these invariants. These violations indicate potential bugs such as data races and memory corruption bugs that manifest themselves in data movements. We have built a tool, called DMTracker, based on the above idea: automatically extract DM-based invariants and detect violations of them. Our experiments with two real-world bug cases in MVAPICH/MVAPICH2, a popular MPI library, have shown that DMTracker can effectively detect them and report abnormal data movements to help programmers quickly diagnose the root causes of bugs. In addition, DMTracker incurs very low runtime overhead, from 0.9% to 6.0%, in our experiments with High Performance Linpack (HPL) and NAS Parallel Benchmarks (NPB), which indicates that DMTracker can be deployed in production runs.",2007,0, 7083,Fisheye lens distortion correction on multicore and hardware accelerator platforms,"Wide-angle (fisheye) lenses are often used in virtual reality and computer vision applications to widen the field of view of conventional cameras. Those lenses, however, distort images. For most real-world applications the video stream needs to be transformed, at real-time (20 frames/sec or better), back to the natural-looking, central perspective space. This paper presents the implementation, optimization and characterization of a fisheye lens distortion correction application on three platforms: a conventional, homogeneous multicore processor by Intel, a heterogeneous multicore (Cell BE), and an FPGA implementing an automatically generated streaming accelerator. We evaluate the interaction of the application with those architectures using both high- and low-level performance metrics. In macroscopic terms, we find that today's mainstream conventional multicores are not effective in supporting real-time distortion correction, at least not with the currently commercially available core counts. Architectures, such as the Cell BE and FPGAs, offer the necessary computational power and scalability, at the expense of significantly higher development effort. Among these three platforms, only the FPGA and a fully optimized version of the code running on the Cell processor can provide real-time processing speed. In general, FPGAs meet the expectations of performance, flexibility, and low overhead. General-purpose multicores are, on the other hand, much easier to program.",2010,0, 7084,The solution of the ground fault using GOOSE message in the wind farm system,"Wind farm systems present some unique challenges for protection. Ground faults on feeders will result in unfaulted phase voltages rising to line levels, and transient overvoltages can be produced, which can degrade insulation, resulting in eventual equipment failure. This paper focuses on the particular problem of feeder ground faults. A novel, yet simple solution is presented that makes use of a peer-to-peer fast message, the GOOSE message. It designs and implements the information and communication model of the Wind-Turbine-Generator (WTG) IED, and introduces the timing sequence for a feeder fault using the transfer trip solution.
Finally, the GOOSE message is designed to transfer trip information. Through simulation tests of the GOOSE message and analysis of the sequence diagram of the IEDs' interaction for the transfer trip, we show that damage to wind turbine equipment can be avoided using the GOOSE message, which is a good solution to the ground fault.",2009,0, 7085,Sentomist: Unveiling Transient Sensor Network Bugs via Symptom Mining,"Wireless Sensor Network (WSN) applications are typically event-driven. While the source codes of these applications may look simple, they are executed with a complicated concurrency model, which frequently introduces software bugs, in particular, transient bugs. Such buggy logic may only be triggered by some occasionally interleaved events that bear implicit dependency, but can lead to fatal system failures. Unfortunately, these deeply-hidden bugs or even their symptoms can hardly be identified by state-of-the-art debugging tools, and manual identification from massive running traces can be prohibitively expensive. In this paper, we present Sentomist (Sensor application anatomist), a novel tool for identifying potential transient bugs in WSN applications. The Sentomist design is based on a key observation that transient bugs make the behaviors of a WSN system deviate from the normal, and thus outliers (i.e., abnormal behaviors) are good indicators of potential bugs. Sentomist introduces the notion of event-handling interval to systematically anatomize the long-term execution history of an event-driven WSN system into groups of intervals. It then applies a customized outlier detection algorithm to quickly identify and rank abnormal intervals. This dramatically reduces the human effort of inspection (otherwise, we would have to manually check a tremendous number of data samples, typically by brute-force inspection) and thus greatly speeds up debugging. We have implemented Sentomist based on the concurrency model of TinyOS. We apply Sentomist to test a series of representative real-life WSN applications that contain transient bugs. These bugs, though caused by complicated interactions that can hardly be predicted during the programming stage, are successfully confined by Sentomist.",2010,0, 7086,A Causal Model Method for Fault Diagnosis in Wireless Sensor Networks,"Wireless sensor networks are composed of many wireless sensing devices, namely sensor nodes, which are small in size, limited in resources, and randomly deployed in harsh environments. Therefore, it is not uncommon for sensor networks to have malfunction behaviour, power shortage or network failure. To address this issue, we propose a new scheme based on the Causal Model Method (CMM), which applies a fault source analyzer for component-level fault diagnosis in wireless sensor networks. Our new method consists of three phases to define the node failure sources: collect, classify, and correct. Once the fault source has been classified, the CMM mechanism will enable a reconfiguration process to compensate for the impact of the erroneous sensor nodes. We have conducted a simulation study of our work using the Georgia Tech Network Simulator (GTNetS). Our simulation results show that CMM can improve wireless sensor network performance and reliability.",2010,0, 7087,Low Cost Differential GPS Receivers (LCD-GPS): The Differential Correction Function,"Wireless Sensor Networks (WSN) are used in many applications such as environmental data collection, smart homes, smart care and intelligent transportation systems.
Sensor nodes composing the WSN cooperate to monitor physical entities such as temperature, humidity, sound, atmospheric pressure, motion or pollutants at different locations. To have location information, it is possible to configure nodes with their locations in small deployments, but in large-scale deployments or when the nodes are mobile, the use of GPS is very interesting. However, the current accuracy of standard civil GPS is not sufficient for all WSN applications. Indeed, GPS measurements suffer from many errors, especially in cities. To improve GPS accuracy, the differential mode (DGPS) has been introduced. In this paper, we present a WSN used to provide a DGPS solution. It consists of a set of low-cost standard civil GPS communicating receivers. We present the design, implementation and some experimental results of this solution.",2008,0, 7088,Cluster-Based Error Messages Detecting and Processing for Wireless Sensor Networks,"Wireless sensor networks (WSNs) have emerged as a new technology for acquiring and processing messages for a variety of applications. Faults in sensor nodes are common due to lack of power or environmental interference. In order to guarantee the network reliability of service, it is necessary for the WSN to be able to detect and process the faults and take appropriate actions. In this paper, we propose a novel approach to distinguish and filter the error messages for cluster-based WSNs. The simulation results show that the proposed method not only can avoid frequent re-clustering but also can save the energy of sensor nodes, thus prolonging the lifetime of the sensor network.",2008,0, 7089,Neural fault isolator for Wireless Sensor Networks,"Wireless sensor networks are emerging as an innovative technology that can help to improve business processes. In such environments malfunctions and break-down states must be efficiently diagnosed to minimize economic losses. In this paper we present a fault isolation approach based on neural networks, which utilizes only a minimum set of information such as the sensor value, node ID and timestamp as inputs. We believe that this information set could be provided by any WSN regardless of its specific implementation. This abstraction makes the fault isolator generically applicable in enterprise business systems. The neural fault isolator was evaluated in a trial with 36 nodes and has proved to be highly efficient in the isolation of failed components.",2008,0, 7090,Decentralized Fault Detection and Management for Wireless Sensor Networks,"Wireless Sensor Networks are increasingly being deployed in long-lived, challenging application scenarios which demand a high level of availability and reliability. To achieve these characteristics in inherently unreliable and resource constrained sensor network environments, fault tolerance is required. This paper presents a generic and efficient fault tolerance algorithm for Wireless Sensor Networks. In contrast to existing approaches, the algorithm presented in this paper is entirely decentralized and can thus be used to support fully autonomic fault tolerance in sensor network environments.",2010,0, 7091,Combinational circuit fault diagnosis using logic emulation,"We propose an emulation-based diagnosis technique for combinational circuits in this paper. To verify our approach, a hardware emulator is implemented using the Altera MAX+Plus II CPLD Development System.
Our approach significantly reduces the CPU time required by a software-based diagnosis technique, and greatly reduces the hardware requirements with circuit partitioning techniques and novel fault injection elements (FIEs). Moreover, our diagnosis algorithm also decreases the number of simulation runs when performing diagnosis. Experimental results for ISCAS-85 benchmark circuits show that our emulation system is 45 times faster than Kokan's (1999) on average.",2003,0, 7092,Symbol recognition by error-tolerant subgraph matching between region adjacency graphs,"We propose an error-tolerant subgraph isomorphism algorithm formulated in terms of region adjacency graphs (RAG). A set of edit operations to transform one RAG into another is defined; regions are represented by polylines, and string matching techniques are used to measure their similarity. The algorithm follows a branch and bound approach driven by the RAG edit operations. This formulation allows matching to be computed under distorted inputs and a solution to be reached in near polynomial time. The algorithm has been used for recognizing symbols in hand-drawn diagrams",2001,0, 7093,An Intelligent Error Detection Model for Reliable QoS Constraints Running on Pervasive Computing,"We propose an intelligent predictive model for reliable QoS constraints running on pervasive computing. FTA is a system suitable for detecting and recovering from software errors in a pervasive computing environment such as RCSM (Reconfigurable Context-Sensitive Middleware) by using software techniques. One of the methods to detect errors for session recovery inspects the process database periodically, but this method has the weakness of inspecting all processes without regard to the session. Therefore, we propose FTA. This method detects errors by inspection using a hooking method. If an error is found, FTA informs GSM of the error. GSM informs Daemon or SA-SMA of the error. Daemon creates SA-SMA and so on. SA-SMA creates Video Service Provide Instance and so on.",2006,0, 7094,FlexiMAC: A flexible TDMA-based MAC protocol for fault-tolerant and energy-efficient wireless sensor networks,"We propose FlexiMAC, a novel TDMA-based protocol for efficient data gathering in wireless sensor networks that provides end-to-end guarantees on data delivery: throughput, fair access, and robust self-healing, whilst also respecting the severe energy and memory constraints of wireless sensor networks. FlexiMAC achieves this balance through a synchronized and flexible slot structure in which nodes in the network can build, modify, or extend their scheduled number of slots during execution, based on their local information. This scheme allows FlexiMAC to be strongly fault tolerant and highly energy efficient. FlexiMAC further minimizes energy by selecting optimum node transmission power for a given topology. FlexiMAC is scalable for large numbers of nodes because it allows communication slots to be reused by nodes outside each other's interference range, and its depth-first-search schedule minimizes buffering. Simulations show that FlexiMAC ensures energy efficiency and is robust to network dynamics (faults such as dropped packets, nodes joining or leaving the network) under various network configurations.",2006,0, 7095,Design and implementation of error control algorithms for Bluetooth system: open-loop and closed-loop algorithms,"We propose open-loop and closed-loop link quality control (LQC) algorithms using the correlation output of the access code for a short-range radio network.
The new schemes can decrease the number of retransmissions and the data overhead and thereby improve the throughput of the Bluetooth system without extra hardware burden.",2000,0, 7096,Mutation-based diagnostic test generation for hardware design error diagnosis,We propose the use of mutation-based error injection to guide the generation of high-quality diagnostic test patterns. A software-based fault localization technique is employed to derive a ranked candidate list of suspect statements. Experimental results for a set of Verilog designs demonstrate that a finer diagnostic resolution can be achieved by patterns generated by the proposed method.,2010,0, 7097,MI-based correction of intensity inhomogeneity using singularity function analysis,"We propose a new approach for correcting intensity nonuniformity (intensity inhomogeneity, bias field, or shading), which hampers the use of automatic image processing techniques. The intensity nonuniformity is perceived as a smooth intensity variation across the image. The proposed approach is based on singularity function analysis, and mutual information is the criterion for stopping the iterative process. The rationale is that the low-frequency component containing the bias field is removed, followed by reconstruction from the high-frequency component not containing the bias field. Then the estimated bias field and the corrected image can be obtained.",2006,0, 7098,"Fault diameter and fault tolerance of HCN(n,n)","We provide a way to construct n+1 node-disjoint parallel paths between any two nodes of HCN(n,n), which has a better network cost than the hypercube, and as a result prove that the fault diameter of HCN(n,n) is dia(HCN(n,n))+4. These parallel paths can reduce the time of transmitting messages between nodes, and they mean that even if some nodes of HCN(n,n) fail, there is still no communication delay. Also, by analyzing the fault tolerance of the interconnection network HCN(n,n), we prove that it has maximal fault tolerance.",2001,0, 7099,Data-driven fault diagnosis of oil rig motor pumps applying automatic definition and selection of features,"We report on fault diagnosis experiments to improve the maintenance quality of motor pumps installed on oil rigs. We rely on the data-driven approach to the learning of the fault classes, i.e. supervised learning in pattern recognition. Features are extracted from the vibration signals to detect and diagnose misalignment and mechanical looseness problems. We show the results of automatic pattern recognition methods to define and select features that describe the faults of the provided examples. The support vector machine is chosen as the classification architecture.",2009,0, 7100,A simple and efficient burst error correcting code based on an array code,"We show that a widely used array code, known as the even-odd code, which is targeted at phased burst errors, may also be useful for non-phased burst errors. A new decoder is proposed for this code which effectively converts it into a more general burst error correcting code. The proposed scheme is shown to be capable of correcting almost all bursts up to a certain length, such that its performance is attractive for many communication applications. Since the failure rate is sufficiently low, the code can be practically classified as a burst error correcting code. The redundancy in this code is equal to twice the maximal burst length, which is the same redundancy as the lower bound of conventional burst error correcting codes (the Reiger bound).
Both the encoder and the decoder have very low complexity, both in terms of number of operations and in terms of computer code size. We analyze the probability of failure, provide tight upper and lower bounds, and show that asymptotically this probability approaches zero for large blocks.",2004,0, 7101,Fault-Tolerant Sensor Coverage for Achieving Wanted Coverage Lifetime with Minimum Cost,"We study how to select and arrange multiple types of wireless sensors to build a star network that meets the coverage, the lifetime, the fault-tolerance, and the minimum-cost requirements, where the network lifetime, the acceptable failure probability of the network, and the failure rate of each type of sensors are given as parameters. This problem is NP-hard. We model this problem as an integer linear programming minimization problem. We then present an efficient approximation algorithm to find a feasible solution to the problem, which provides a sensor arrangement and a scheduling. We show, through numerical experiments, that our approximation provides solutions with approximation ratios less than 1.4.",2007,0, 7102,Study on eliminating the quality defects of vulcanized products with large size rubber-belt vulcanizing machine,"With ANSYS software, the hot-plate thermal fields are analyzed; based on MATLAB software, the functional relationship between hot-plate temperature and length is plotted, and the fitting curve is given. The vulcanization intensity of the belt is calculated, and a three-dimensional figure of vulcanization intensity over time is drawn. The vulcanization effect of the belt is calculated, and the curve of curing effect versus hot-plate length is drawn to address the quality deficiencies of intermittent curing of the belt; the equivalent range of the hot plate is summarized.",2010,0, 7103,Adaptive online testing for efficient hard fault detection,"With growing semiconductor integration, the reliability of individual transistors is expected to rapidly decline in future technology generations. In such a scenario, processors would need to be equipped with fault tolerance mechanisms to tolerate in-field silicon defects. Periodic online testing is a popular technique to detect such failures; however, it tends to impose a heavy testing penalty. In this paper, we propose an adaptive online testing framework to significantly reduce the testing overhead. The proposed approach is unique in its ability to assess the hardware health and apply suitably detailed tests. Thus, a significant chunk of the testing time can be saved for the healthy components. We further extend the framework to work with the StageNet CMP fabric, which provides the flexibility to group together pipeline stages with similar health conditions, thereby reducing the overall testing burden. For a modest 2.6% sensor area overhead, the proposed scheme was able to achieve an 80% reduction in software test instructions over the lifetime of a 16-core CMP.",2009,0, 7104,Fault-tolerant task scheduling in multiprocessor systems based on primary-backup scheme,"With multiprocessor systems, redundant scheduling is a technique that trades processing power for increased reliability through redundancy. One novel approach, called primary-backup task scheduling, is widely used in hard real-time multiprocessor systems to guarantee the deadlines of tasks in the presence of faults. In this paper, we analyze and compare the scheduling algorithms based on primary-backup replication in current distributed systems.
Finally, the important features of fault-tolerant task scheduling in multiprocessor systems based on the primary-backup scheme are summarized, and future research strategies and trends are given.",2010,0, 7105,Temporal and Spatial Requirements for Optimized Fault Location,"With technological advancements, data availability in power systems has drastically increased. Intelligent electronic devices are capable of communicating recorded data. Data can be stored and easily interfaced from different access points, and intelligent techniques can be used for automated fault analysis. After summarizing obstacles in the current framework of fault location analysis, this paper will explore temporal and spatial aspects of available data. This leads to introducing an implementation framework of automated optimized fault location that is capable of taking advantage of both the time and space aspects of data.",2008,0, 7106,Timing-Error-Tolerant Network-on-Chip Design Methodology,"With technology scaling, the wire delay as a fraction of the total delay is increasing, and the communication architecture is becoming a major bottleneck for system performance in systems on chip (SoCs). A communication-centric design paradigm, networks on chip (NoCs), has been proposed recently to address the communication issues of SoCs. As the geometries of devices approach the physical limits of operation, NoCs will be susceptible to various noise sources such as crosstalk, coupling noise, process variations, etc. Designing systems under such uncertain conditions becomes a challenge, as it is harder to predict the timing behavior of the system. The use of conservative design methodologies that consider all possible delay variations due to the noise sources, targeting safe system operation under all conditions, will result in poor system performance. An aggressive design approach that provides resilience against such timing errors is required for maximizing system performance. In this paper, we present T-error, which is a timing-error-tolerant aggressive design method to design the individual components of the NoC (such as switches, links, and network interfaces), so that the communication subsystem can be clocked at a much higher frequency than a traditional conservative design (up to 1.5x increase in frequency). The NoC is designed to tolerate timing errors that arise from overclocking without substantially affecting the latency for communication. We also present a way to dynamically configure the NoC between the overclocked mode and the normal mode, where the frequency of operation is lower than or equal to the traditional design's frequency, so that the error recovery penalty is completely hidden under normal operation. Experiments on several benchmark applications show large performance improvement (up to 33% reduction in average packet latency) for the proposed system when compared to traditional systems.",2007,0, 7107,A Multi-core Approach to Providing Fault Tolerance for Non-deterministic Services,"With the advent of multi- and many-core architectures, new opportunities in fault-tolerant computing have become available. In this paper we propose a novel process replication method that provides transparent failover of non-deterministic TCP services by utilizing spare CPU cores. Our method does not require any changes to the TCP protocol, does not require any changes to the client software, and unlike existing solutions, it does not require any changes to the server applications either.
We measure the performance overhead on two real-world applications, a multimedia streaming service and an Internet Relay Chat daemon, and show that the imposed overhead is a minimal price for seamless failover. Our prototype implementation consists of a kernel module for Linux 2.6 without any changes to the existing kernel code.",2010,0, 7108,"An empirical study of modifying the Fagan inspection process and the resulting main effects and interaction effects among defects found, effort required, rate of preparation and inspection, number of team members and product 1st pass quality","We present findings from a six sigma black belt project. Every black belt project has a charter that defines the customer focus and the goals of the project. This project is designed to identify the key factors that impact the effectiveness of software inspections and to compare Fagan inspections and modified Fagan inspections used at Motorola. Empirical data is collected and simulation models of the generic processes are created. The models that are created abstract away unnecessary details of the process and provide a test-bed to evaluate the methodologies relative to their effectiveness, cost in effort, time required (duration), and complexity of the activity.",2002,0, 7109,Optical Proximity Correction for 0.13 micrometer SiGe:C BiCMOS,"We present results for a rule-based optical proximity correction (RB-OPC) and a model-based optical proximity correction (MB-OPC) for 0.13 micrometer SiGe:C BiCMOS technology. The technology provides integrated high performance heterojunction bipolar transistors (HBTs) with cut-off frequencies up to 300 GHz. This requires an optical proximity correction of critical layers with an excellent mask quality. This paper provides results of the MB-OPC and RB-OPC using the Mentor Calibre software in comparison to uncorrected structures (NO-OPC). We show RB- and MB-OPC methods for the shallow trench and gate layer, and the RB-OPC for the emitter window-, contact- and metal layers. We will discuss the impact of the RB- and MB-OPC rules on the process margin and yield in the 0.13 micrometer SiGe:C BiCMOS technology, based on CD-SEM data obtained from the evaluation of the RB- and MB-OPC corrected SRAM cells.",2008,0, 7110,Testing ThumbPod: Softcore bugs are hard to find,"We present the debug and test strategies used in the ThumbPod system for Embedded Fingerprint Authentication. ThumbPod uses multiple levels of programming (Java, C and hardware) with a hierarchy of programmable architectures (KVM on top of a SPARC core on top of an FPGA). The ThumbPod project teamed up seven graduate students in the concurrent development and verification of all these programming layers. We pay special attention to the strengths and weaknesses of our bottom-up testing approach.",2003,0, 7111,RedCAN™: simulations of two fault recovery algorithms for CAN,"We present the RedCAN concept to achieve fault tolerance against node and link failures in a CAN-bus system by means of configurable switches. The basic idea in RedCAN is to isolate faulty nodes or bus segments by configuring switches that will evade a faulty node or segment and exclude it from bus access. We propose changes to the original centralized protocol, which is vulnerable to single point failures, and show that with a new distributed algorithm considerably more efficiency can be achieved, also when the network size is growing. The distributed algorithm introduces redundancy and thereby increases the robustness of the system.
Furthermore, the new algorithm has logarithmic complexity, as opposed to the centralized algorithm's linear complexity, as the number of nodes increases. The results were gathered through a new simulator, the ""RedCAN Simulation Manager"", which is also presented. Simulations allow assessing the break-even point between the centralized and distributed algorithms' reconfiguration latencies, as well as giving ideas for further research.",2004,0, 7112,Combinational Logic Soft Error Correction,"We present two techniques for correcting radiation-induced soft errors in combinational logic - error correction using duplication, and error correction using time-shifted outputs. Simulation results show that both techniques reduce combinational logic soft error rate by more than an order of magnitude. Soft errors affecting sequential elements (latches and flip-flops) at combinational logic outputs are automatically corrected using these techniques.",2006,0, 7113,A Classification-Based Fault Detection and Isolation Scheme for the Ion Implanter,"We propose a classification-based fault detection and isolation scheme for the ion implanter. The proposed scheme consists of two parts: 1) the classification part and 2) the fault detection and isolation part. In the classification part, we propose a hybrid classification tree (HCT) with learning capability to classify the recipe of a working wafer in the ion implanter, and a k-fold cross-validation error is treated as the accuracy of the classification result. In the fault detection and isolation part, we propose a warning signal generation criterion based on the classification accuracy to detect faults, and a fault isolation scheme based on the HCT to isolate the actual fault of an ion implanter. We have compared the proposed classifier with the existing classification software and tested the validity of the proposed fault detection and isolation scheme for real cases to obtain successful results.",2006,0, 7114,Fault Management in Functionally Distributed Transport Networking for Large Scale Networks,"We propose a fault management method in functionally distributed transport networking that separates the control-plane processing part (control element, CE) from the forwarding-plane processing part (forwarding element, FE) of the router. In this architecture, one path-control process in the CE consolidates and processes the path computations and the path settings for multiple routers. This leads to reduction in the path-control complexity and efficient operation of large scale networks. On the other hand, if faults occur in a CE and the CE becomes unable to serve a routing function, all of the FEs controlled by the CE will be affected. Therefore, it is absolutely critical to ensure the high reliability of the CE in this architecture. The proposed method takes the redundant configuration of N+m CEs and switches from a faulty CE to a standby CE. Additionally, we describe the operation of each component in the proposed method and evaluate its feasibility by using a software implementation.",2009,0, 7115,"An integrated decision, control and fault detection scheme for cooperating unmanned vehicle formations","We propose a hierarchical and decentralized scheme for integrated decision, control and fault detection in cooperating unmanned aerial systems flying in formations and operating in adversarial environments.
To handle, in a cooperative fashion, events that may adversely affect the outcome of a multi-vehicle mission, such as actuator faults, body damage, network interruption/delays, and vehicle loss, we present a decision-control system whose architecture comprises three main components: formation control and trajectory generation, abrupt and nonabrupt fault detection, and decision-making relying on optimization under uncertainty. The scheme seeks to provide the most effective team adaptation to contingencies despite partially known environments and limited available information. The integrated decision, control and fault detection scheme is demonstrated numerically by means of high-fidelity, nonlinear 6-DOF simulations of multiple formation flying airships. For a rendezvous mission, the paper shows that concurrent nonabrupt and abrupt faults can be detected and effectively compensated for both at the formation control and at the decision-making levels, despite network mishaps, which represents a novelty in itself.",2008,0, 7116,HAFT: A hybrid FPGA with amorphous and fault-tolerant architecture,"We propose a hybrid FPGA architecture with a dense and defective nano-crossbar serving as its configuration memory. An amorphous routing architecture is adopted to optimally allocate logic and routing resource on a per-mapping basis and to achieve high logic density. This hybrid FPGA is designed to be efficient in using nano-crosspoints, highly tolerant to memory defects, and versatile to provide features such as variable-granularity logic blocks and variable-length bypassing interconnects. A new placement algorithm and a modified delay-based routing procedure are designed to match with many unconventional architectural features of the proposed FPGA. Assuming zero defect-rate in the nano-crossbar, an FPGA with the proposed architecture can achieve a 30% improvement in logic density, 12% improvement in average net delay, and 8% improvement in the critical-path delay for the largest 20 MCNC benchmark circuits over an island-style baseline with the same nano-scale memory. As the rate of defects in the memory increases from 0% to 50%, this hybrid FPGA remains fully functional and its improvement in logic density and delay performance only drops by approximately 23%.",2008,0, 7117,BIST based fault diagnosis using ambiguous test set,"We propose a method for diagnosing single stuck-at faults under a built-in self-test (BIST) environment. Under the BIST environment, it is difficult to determine which BIST vectors produced errors due to the high degree of test response compaction. Therefore the detecting test set that is determined in a BIST session includes non-detecting tests. We call the detecting test set determined after a BIST session an ""ambiguous diagnostic test set"". Firstly, we propose a method for identifying candidate faults based on the ambiguous diagnostic test set. Moreover we propose a method for identifying candidate non-detecting tests that belong to the ambiguous diagnostic test set. Diagnosis using more accurate diagnostic test sets is able to reduce the diagnostic ambiguity.",2003,0, 7118,A rectilinear-monotone polygonal fault block model for fault-tolerant minimal routing in mesh,"We propose a new fault block model, minimal-connected-component (MCC), for fault-tolerant adaptive routing in mesh-connected multiprocessor systems. This model refines the widely used rectangular model by including fewer nonfaulty nodes in fault blocks.
The positions of source/destination nodes relative to faulty nodes are taken into consideration when constructing fault blocks. The main idea behind it is that a node will be included in a fault block only if using it in a routing will definitely make the route nonminimal. The resulting fault blocks are of rectilinear-monotone polygonal shapes. A sufficient and necessary condition is proposed for the existence of minimal ""Manhattan"" routes in the presence of such fault blocks. Based on the condition, an algorithm is proposed to determine the existence of Manhattan routes. Since MCC is designed to facilitate minimal route finding, if there exists no minimal route under the MCC fault model, then there will be absolutely no minimal route whatsoever. We also present two adaptive routing algorithms that construct a Manhattan route avoiding all fault blocks, should such routes exist.",2003,0, 7119,Servo Performance Enhancement of Motion System via a Quantization Error Estimation Method: Introduction to Nanoscale Servo Control,"When compared to the accuracy of nanoscale control, the resolution of current positioning sensors is relatively low. Because of this, the output from low-precision sensors normally includes quantization errors that could degrade control performance. As a result, in this paper, a method of quantization error estimation based on the least squares method is examined. In the proposed method, estimation accuracy is improved by taking into account the effect of input disturbances. Furthermore, a bias adjustment method is proposed that is expected to satisfy the constraints on quantization error. The effectiveness of the proposed method is demonstrated by simulations and experiments.",2009,0, 7120,Accurate Rank Ordering of Error Candidates for Efficient HDL Design Debugging,"When hardware description languages (HDLs) are used to describe the behavior of a digital circuit, design errors (or bugs) almost inevitably appear in the HDL code of the circuit. Existing approaches attempt to reduce the effort involved in this debugging process by extracting a reduced set of error candidates. However, the derived set can still contain many error candidates, and finding true design errors among the candidates in the set may still consume much valuable time. A debugging priority method was proposed to speed up the error-searching process in the derived error candidate set. The idea is to display error candidates in an order that corresponds to an individual's degree of suspicion. With this method, error candidates are placed in a rank order based on their probability of being an error. The more likely an error candidate is a design error (or a bug), the higher the rank order it has. With the displayed rank order, circuit designers should find design errors quicker than with blind searching among all the derived candidates. However, the currently used confidence score (CS) for deriving the debugging priority has some flaws in estimating the likelihood of correctness of error candidates due to the masking error situation. This reduces the degree of accuracy in establishing a debugging priority. Therefore, the objective of this work is to develop a new probabilistic confidence score (PCS) that takes the masking error situation into consideration in order to provide a more reliable and accurate debugging priority.
The experimental results show that our proposed PCS achieves better results in estimating the likelihood of correctness and can indeed suggest a debugging priority with better accuracy, as compared to the CS.",2009,0, 7121,Parameter calculation based on perturbation theory for fault conditions of induction motors,"When motor faults (broken rotor bars, stator turn-to-turn shorts, etc.) occur in a motor, its internal current distribution and electromagnetic field will change, which will further cause some variations in the stator and rotor parameters. In this paper, a method combining the Finite Element Method and the Energy Perturbation Method is used to calculate the motor's inductance parameters: the finite-element method is used to calculate the gross energy of the internal magnetic field, and perturbation theory is used to calculate the motor's inductance parameters. The variation of the inductance parameters under fault conditions is analyzed to generalize principles about their variation before and after the fault. From the results we can conclude that the magnetic energy will grow with the increasing number of broken rotor bars, and will become smaller with the increasing degree of turn-to-turn short. With an increasing number of broken rotor bars, the stator winding self-inductance will be influenced little, but the self-inductance will become smaller and smaller with an increasing degree of stator turn-to-turn short. The parameters calculated in this paper can be used for the analysis of motor faults.",2007,0, 7122,Robust TCP connections for fault tolerant computing,"When processes on two different machines communicate, they most often do so using the TCP protocol. While TCP is appropriate for a wide range of applications, it has shortcomings in other application areas. One of these areas is fault tolerant distributed computing. For some of those applications, TCP does not address link failures adequately: TCP breaks the connection if connectivity is lost for some duration (typically minutes). This is sometimes undesirable. The paper proposes robust TCP connections, a solution to the problem of broken TCP connections. The paper presents a session layer protocol on top of TCP that ensures reconnection, and provides exactly-once delivery for all transmitted data. A prototype has been implemented as a Java library. The prototype has less than 10% overhead on TCP sockets with respect to the most important performance figures.",2002,0, 7123,Memory-based context-sensitive spelling correction at web scale,"We study the problem of correcting spelling mistakes in text using memory-based learning techniques and a very large database of token n-gram occurrences in web text as training data. Our approach uses the context in which an error appears to select the most likely candidate from words which might have been intended in its place. Using a novel correction algorithm and a massive database of training data, we demonstrate higher accuracy on correcting real-word errors than previous work, and very high accuracy at a new task of ranking corrections to non-word errors given by a standard spelling correction package.",2007,0, 7124,Impact of Spacecraft Shielding on Direct Ionization Soft Error Rates for Sub-130 nm Technologies,"We use ray tracing software to model various levels of spacecraft shielding complexity and energy deposition pulse height analysis to study how it affects the direct ionization soft error rate of microelectronic components in space.
The analysis incorporates the galactic cosmic ray background, trapped proton, and solar heavy ion environments as well as the October 1989 and July 2000 solar particle events.",2010,0, 7125,Location-known-exactly human-observer ROC studies of attenuation and other corrections for SPECT lung imaging,"We use receiver operating characteristic (ROC) analysis of a location-known-exactly (LKE) lesion detection task to compare the image quality of SPECT reconstruction with and without various combinations of attenuation correction (AC), scatter correction (SC) and resolution compensation (RC). Hybrid images were generated from Tc-99m labelled NeoTect clinical backgrounds into which Monte Carlo simulated solitary pulmonary nodule (SPN) lung lesions were added, then reconstructed using several strategies. Results from a human-observer study show that attenuation correction degrades SPN detection, while resolution correction improves SPN detection, even when the lesion location is known. This agrees with the results of a previous localization-response operating characteristic (LROC) study using the same images, indicating that location uncertainty is not the sole source of the changes in detection accuracy.",2006,0, 7126,Test pattern generation for timing-induced functional errors in hardware-software systems,"We present an ATPG algorithm for the covalidation of hardware-software systems. Specifically, we target the detection of timing-induced functional errors in the design by using a design fault model which we propose. The computational time required by the test generation process is sufficiently low that the ATPG tool can be used by a designer to achieve a significant reduction in validation cost.",2001,0, 7127,Fault-tolerant Ethernet for IP-based process control: A demonstration,"We present an efficient middleware-based fault-tolerant Ethernet (FTE) prototype developed for process control networks. This unique approach requires no change of commercial-off-the-shelf (COTS) hardware (switch, hub, Ethernet physical link and network interface card (NIC)) and software (Ethernet driver and protocol), yet it is transparent to application software. The FTE performs failure detection and recovery for handling multiple points of network failures and supports communications with non-FTE-native devices. In this demonstration, we focus on presenting the failure detection and recovery behavior under various failure modes and scenarios. Further, multiple failure handling, node departure, and non-FTE-native node and FTE node communication scenarios will be presented. The FTE protocol status will be displayed using an FTE user interface on a COTS-based network system.",2000,0, 7128,Supporting component and architectural re-usage by detection and tolerance of integration faults,"We present an extended interface description language supporting the avoidance and the automatic detection and tolerance of inconsistency classes likely to occur when integrating pre-developed components. In particular, the approach developed allows the automatic generation of component wrapping mechanisms aimed at handling the occurrence of local and global inconsistencies during runtime.
On the whole, the application of the suggested procedure supports re-usage of components and of architectural patterns by allowing their easy adaptation to the specific needs of the application considered.",2005,0, 7129,Measurement and analysis of physical defects for dynamic supply current testing,We present an iDDT fault analysis study based on physical measurements of circuits with built-in defects. A variety of defects were inserted into basic circuit components. The measured results were utilized to better model the effect of defects on iDDT and improve simulated fault models.,2004,0, 7130,Automated Support for Propagating Bug Fixes,"We present empirical results indicating that when programmers fix bugs, they often fail to propagate the fixes to all of the locations in a code base where they are applicable, thereby leaving instances of the bugs in the code. We propose a practical approach to help programmers to propagate many bug fixes completely. This entails first extracting a programming rule from a bug fix, in the form of a graph minor of an enhanced procedure dependence graph. Our approach assists the programmer in specifying rules by automatically matching simple rule templates; the programmer may also edit rules or compose them from scratch. A graph matching algorithm for detecting rule violations is then used to locate the places in the code base where the bug fix is applicable. Our approach does not require that rules occur repeatedly in the code base. We present empirical results indicating that the approach nevertheless exhibits good precision.",2008,0, 7131,Efficient diagnosis for multiple intermittent scan chain hold-time faults,"When VLSI design and process enter the stage of ultra deep submicron (UDSM), process variations, signal integrity (SI) and design integrity (DI) issues can no longer be ignored. These factors introduce some new problems in VLSI design, test and diagnosis, which increase time-to-market, time-to-volume and cost for silicon debug. The intermittent scan chain hold-time fault is one such problem we encountered in practice. The fault sites have to be located to speed up silicon debug and improve yield. A recent study of the problem proposed a statistical algorithm to diagnose the faulty scan chains, assuming only one fault per chain. Based on the previous work, in this paper, an efficient diagnosis algorithm is proposed to diagnose faulty scan chains with multiple faults per chain. The presented experimental results on industrial designs show that the proposed algorithm achieves good diagnosis resolution in reasonable time.",2003,0, 7132,Self-Calibration Algorithm for the Amplitude and Phase Error of the Multiple Beam Antenna,"When we use the multiple beam antenna to estimate the direction of arrival (DOA) with the multiple signal classification (MUSIC) algorithm, the uncertainty in the elements' amplitude and phase will affect the DOA estimation performance of the beam antenna. Therefore, a universal technique for self-calibration based on minimizing a cost function is introduced. By minimizing the cost function, it can obtain the array covariance matrix and compensate for the array uncertainty without estimation. Therefore, the proposed method has an advantage over the source calibration algorithm that resorts to computing the sample cost function.
The performance of the proposed method is demonstrated using the actual parameters of the multiple beam antenna, and the computer simulations show that the proposed method provides comparable performance: it can not only self-calibrate against the array uncertainty with reduced computational complexity and resources, but also further increase its precision.",2009,0, 7133,Project Data Incorporating Qualitative Factors for Improved Software Defect Prediction,"To make accurate predictions of attributes like defects found in complex software projects, we need a rich set of process factors. We have developed a causal model that includes such process factors, both quantitative and qualitative. The factors in the model were identified as part of a major collaborative project. A challenge for such a model is getting the data needed to validate it. We present a dataset, elicited from 31 completed software projects in the consumer electronics industry, which we used for validation. The data were gathered using a questionnaire distributed to managers of recent projects. The dataset will be of interest to other researchers evaluating models with similar aims. We make both the dataset and causal model available for research use.",2007,1, 7134,A study on fault-proneness detection of object-oriented systems,"Fault-proneness detection in object-oriented systems is an interesting area for software companies and researchers. Several hundred metrics have been defined with the aim of measuring the different aspects of object-oriented systems. Only a few of them have been validated for fault detection, and several interesting works with this view have been considered. This paper reports a research study starting from the analysis of more than 200 different object-oriented metrics extracted from the literature with the aim of identifying suitable models for the detection of the fault-proneness of classes. Such a large number of metrics allows the extraction of a subset of them in order to obtain models that can be adopted for fault-proneness detection. To this end, the whole set of metrics has been classified on the basis of the measured aspect in order to reduce them to a manageable number; then, statistical techniques were employed to produce a hybrid model composed of 12 metrics. The work has focused on identifying models that can detect as many faulty classes as possible and, at the same time, that are based on a manageably small set of metrics. A compromise between these aspects and the classification correctness of faulty and non-faulty classes was the main challenge of the research. As a result, two models for fault-proneness class detection have been obtained and validated.",2001,1, 7135,An investigation of the relationships between lines of code and defects,"It is always desirable to understand the quality of a software system based on static code metrics. In this paper, we analyze the relationships between lines of code (LOC) and defects (including both pre-release and post-release defects). We confirm the ranking ability of LOC discovered by Fenton and Ohlsson. Furthermore, we find that the ranking ability of LOC can be formally described using Weibull functions. We can use defect density values calculated from a small percentage of the largest modules to predict the number of total defects accurately. We also find that, given LOC, we can predict the number of defective components reasonably well using typical classification techniques.
We perform an extensive experiment using the public Eclipse dataset, and replicate the study using the NASA dataset. Our results confirm that simple static code attributes such as LOC can be useful predictors of software quality.",2009,1, 7136,A defect prediction model for software based on service oriented architecture using EXPERT COCOMO,"Software that adopts service oriented architecture has specific features in terms of project scope, lifecycle and technical methodology, and therefore demands a specific defect management process to ensure quality. By leveraging recent best practices in projects that adopt service oriented architecture, this paper presents a defect prediction model for software based on service oriented architecture, and discusses the defect management process based on the presented model.",2009,1, 7137,Application of Random Forest in Predicting Fault-Prone Classes,"There are metrics available for predicting fault-prone classes, which may help software organizations in planning and performing testing activities. This may be possible due to proper allocation of resources on fault-prone parts of the design and code of the software. Hence, the importance and usefulness of such metrics are understandable, but empirical validation of these metrics is always a great challenge. The random forest (RF) algorithm has been successfully applied for solving regression and classification problems in many applications. This paper evaluates the capability of the RF algorithm in predicting fault-prone software classes using open source software. The results indicate that the prediction performance of random forest is good. However, similar types of studies are required to be carried out in order to establish the acceptability of the RF model.",2008,1, 7138,Early Software Fault Prediction Using Real Time Defect Data,"The quality of a software component can be measured in terms of fault proneness of data. Quality estimations are made using fault proneness data available from previously developed projects of a similar type and the training data consisting of software measurements. To predict faulty modules in software data, different techniques have been proposed, which include statistical methods, machine learning methods, neural network techniques and clustering techniques. The aim of the proposed approach is to investigate whether metrics available in the early lifecycle (i.e. requirement metrics), metrics available in the late lifecycle (i.e. code metrics) and metrics available in the early lifecycle (i.e. requirement metrics) combined with metrics available in the late lifecycle (i.e. code metrics) can be used to identify fault-prone modules by using clustering techniques. This approach has been tested with three real time defect datasets of NASA software projects, JM1, PC1 and CM1. Predicting faults early in the software life cycle can be used to improve software process control and achieve high software reliability. The results show that when all the prediction techniques are evaluated, the best prediction model is found to be the fusion of the requirement and code metric models.",2009,1, 7139,Tree-based software quality estimation models for fault prediction,"Complex high-assurance software systems depend highly on the reliability of their underlying software applications. Early identification of high-risk modules can assist in directing quality enhancement efforts to modules that are likely to have a high number of faults.
Regression tree models are simple and effective as software quality prediction models, and timely predictions from such models can be used to achieve high software reliability. This paper presents a case study from our comprehensive evaluation (with several large case studies) of currently available regression tree algorithms for software fault prediction. These are CART-LS (least squares), S-PLUS, and CART-LAD (least absolute deviation). The case study presented comprises software design metrics collected from a large network telecommunications system consisting of almost 13 million lines of code. Tree models using design metrics are built to predict the number of faults in modules. The algorithms are also compared based on the structure and complexity of their tree models. Performance metrics, average absolute and average relative errors, are used to evaluate fault prediction accuracy.",2002,1, 7140,Building effective defect-prediction models in practice,"Defective software modules cause software failures, increase development and maintenance costs, and decrease customer satisfaction. Effective defect prediction models can help developers focus quality assurance activities on defect-prone modules and thus improve software quality by using resources more efficiently. These models often use static measures obtained from source code, mainly size, coupling, cohesion, inheritance, and complexity measures, which have been associated with risk factors, such as defects and changes.",2005,1, 7141,A practical method for the software fault-prediction,"In the paper, a novel machine learning method, SimBoost, is proposed to handle the software fault-prediction problem when highly skewed datasets are used. Although the method, as proved by empirical results, can make the datasets much more balanced, the accuracy of the prediction is still not satisfactory. Therefore, a fuzzy-based representation of the software module fault state has been presented instead of the original faulty/non-faulty one. Several experiments were conducted using datasets from the NASA Metrics Data Program. The discussion of the results of the experiments is provided.",2007,1, 7142,Effort-Aware Defect Prediction Models,"Defect Prediction Models aim at identifying error-prone modules of a software system to guide quality assurance activities such as tests or code reviews. Such models have been actively researched for more than a decade, with more than 100 published research papers. However, most of the models proposed so far have assumed that the cost of applying quality assurance activities is the same for each module. In a recent paper, we have shown that this fact can be exploited by a trivial classifier ordering files just by their size: such a classifier performs surprisingly well, at least when effort is ignored during the evaluation. When effort is considered, many classifiers do not perform significantly better than a random selection of modules. In this paper, we compare two different strategies to include treatment effort into the prediction process, and evaluate the predictive power of such models. Both models perform significantly better when the evaluation measure takes the effort into account.",2010,1, 7143,Developing fault predictors for evolving software systems,"Over the past several years, we have been developing methods of predicting the fault content of software systems based on measured characteristics of their structural evolution.
In previous work, we have shown there is a significant linear relationship between code churn, a synthesized metric, and the rate at which faults are inserted into the system in terms of number of faults per unit change in code churn. We have begun a new investigation of this relationship with a flight software technology development effort at the Jet Propulsion Laboratory (JPL) and have progressed in resolving the limitations of the earlier work in two distinct steps. First, we have developed a standard for the enumeration of faults. Second, we have developed a practical framework for automating the measurement of these faults. We analyze the measurements of structural evolution and fault counts obtained from the JPL flight software technology development effort. Our results indicate that the measures of structural attributes of the evolving software system are suitable for forming predictors of the number of faults inserted into software modules during their development. The new fault standard also ensures that the model so developed has greater predictive validity.",2003,1, 7144,The effects of fault counting methods on fault model quality,"Over the past few years, we have been developing software fault predictors based on a system's measured structural evolution. We have previously shown there is a significant linear relationship between code churn, a set of synthesized metrics, and the rate at which faults are inserted into the system in terms of number of faults per unit change in code churn. A limiting factor in this and other investigations of a similar nature has been the absence of a quantitative, consistent, and repeatable definition of what constitutes a fault. The rules for fault definition were not sufficiently rigorous to provide unambiguous, repeatable fault counts. Within the framework of a space mission software development effort at the Jet Propulsion Laboratory (JPL) we have developed a standard for the precise enumeration of faults. This new standard permits software faults to be measured directly from configuration control documents. Our results indicate that reasonable predictors of the number of faults inserted into a software system can be developed from measures of the system's structural evolution. We compared the new method of counting faults with two existing techniques to determine whether the fault counting technique has an effect on the quality of the fault models constructed from those counts. The new fault definition provides higher quality fault models than those obtained using the other definitions of fault.",2004,1, 7145,Locating where faults will be [software testing],"The goal of this research is to allow software developers and testers to become aware of which files in the next release of a large software system are likely to contain the largest numbers of faults or the highest fault densities, thereby allowing testers to focus their efforts on the most fault-prone files. This is done by developing a negative binomial regression model to help predict characteristics of new releases of a software system, based on information collected about prior releases and the new release under development. The same prediction model was also used to allow a tester to select the files of a new release that collectively contain any desired percentage of the faults.
The benefit of being able to make these sorts of predictions accurately should be clear: if we know where to look for bugs, we should be able to target our testing efforts there and, as a result, find problems more quickly and therefore more economically. Two case studies using large industrial software systems are summarized. The first study used seventeen consecutive releases of a large inventory system, representing more than four years of field exposure. The second study used nine releases of a service provisioning system with two years of field experience.",2005,1, 7146,Using machine learning for estimating the defect content after an inspection,"We view the problem of estimating the defect content of a document after an inspection as a machine learning problem: The goal is to learn from empirical data the relationship between certain observable features of an inspection (such as the total number of different defects detected) and the number of defects actually contained in the document. We show that some features can carry significant nonlinear information about the defect content. Therefore, we use a nonlinear regression technique, neural networks, to solve the learning problem. To select the best among all neural networks trained on a given data set, one usually reserves part of the data set for later cross-validation; in contrast, we use a technique which leaves the full data set for training. This is an advantage when the data set is small. We validate our approach on a known empirical inspection data set. For that benchmark, our novel approach clearly outperforms both linear regression and the current standard methods in software engineering for estimating the defect content, such as capture-recapture. The validation also shows that our machine learning approach can be successful even when the empirical inspection data set is small.",2004,1, 7147,Test effort optimization by prediction and ranking of fault-prone software modules,"Identification of fault-prone or not fault-prone modules is essential to improve the reliability and quality of a software system. Once modules are categorized as fault-prone or not fault-prone, test efforts are allocated accordingly. Testing effort and efficiency are primary concerns and can be optimized by prediction and ranking of fault-prone modules. This paper discusses a new model for prediction and ranking of fault-prone software modules for test effort optimization. The model utilizes the classification capability of data mining techniques and knowledge stored in software metrics to classify a software module as fault-prone or not fault-prone. A decision tree is constructed using the ID3 algorithm for the existing project data. Rules are derived from the decision tree and integrated with a fuzzy inference system to classify the modules as either fault-prone or not fault-prone for the target data. The model is also able to rank the fault-prone modules on the basis of their degree of fault-proneness. The model's accuracy is validated and compared with some other models by using the NASA projects data set of the PROMISE repository.",2010,1, 7148,Applying Novel Resampling Strategies To Software Defect Prediction,"Due to the tremendous complexity and sophistication of software, improving software reliability is an enormously difficult task. We study the software defect prediction problem, which focuses on predicting which modules will experience a failure during operation.
Numerous studies have applied machine learning to software defect prediction; however, skewness in defect-prediction datasets usually undermines the learning algorithms. The resulting classifiers will often never predict the faulty minority class. This problem is well known in machine learning and is often referred to as learning from unbalanced datasets. We examine stratification, a widely used technique for learning unbalanced data that has received little attention in software defect prediction. Our experiments are focused on the SMOTE technique, which is a method of over-sampling minority-class examples. Our goal is to determine if SMOTE can improve recognition of defect-prone modules, and at what cost. Our experiments demonstrate that after SMOTE resampling, we have a more balanced classification. We found an improvement of at least 23% in the average geometric mean classification accuracy on four benchmark datasets.",2007,1, 7149,Detecting Fault Modules Applying Feature Selection to Classifiers,"At present, automated data collection tools allow us to collect large amounts of information, not without associated problems. In this paper, we apply feature selection to several software engineering databases, selecting attributes with the final aim that project managers can have a better global vision of the data they manage. We make use of attribute selection techniques in different publicly available datasets (PROMISE repository), and different data mining algorithms for classification to detect faulty modules. The results show that, in general, smaller datasets with fewer attributes maintain or improve the prediction capability of the original datasets.",2007,1, 7150,Analyzing software quality with limited fault-proneness defect data,"Assuring whether the desired software quality and reliability are met for a project is as important as delivering it within scheduled budget and time. This is especially vital for high-assurance software systems where software failures can have severe consequences. To achieve the desired software quality, practitioners utilize software quality models to identify high-risk program modules: e.g., software quality classification models are built using training data consisting of software measurements and fault-proneness data from previous development experiences similar to the project currently under development. However, various practical issues can limit the availability of fault-proneness data for all modules in the training data, leading to the data consisting of many modules with no fault-proneness data, i.e., unlabeled data. To address this problem, we propose a novel semi-supervised clustering scheme for software quality analysis with limited fault-proneness data. It is a constraint-based semi-supervised clustering scheme based on the k-means algorithm. The proposed approach is investigated with software measurement data of two NASA software projects, JM1 and KC2. Empirical results validate the promise of our semi-supervised clustering technique for software quality modeling and analysis in the presence of limited defect data. Additionally, the approach provides some valuable insight into the characteristics of certain program modules that remain unlabeled subsequent to our semi-supervised clustering analysis.",2005,1, 7151,Built in Defect Prognosis for Embedded Memories,"With the shrinking technology and increasing statistical defects, multiple design respins are required based on yield learning.
Hence, a solution is required to efficiently diagnose the failure types of memory during production in the shortest time frame possible. This paper introduces a novel method of fault classification through image-based prognosis using a predefined fail signature dictionary. In contrast to the existing bitmap diagnosis methodologies, this method predicts the compressed failure map without generating and transferring the complete bitmap to the tester. The proposed methodology supports testing through a very low cost ATE. This architecture is partitioned to achieve sharing among various memories and at-speed testing.",2007,0, 7152,"Yield Improvement, Fault-Tolerance to the Rescue?","With the technology entering the nano dimension, manufacturing processes are less and less reliable, thus drastically impacting the yield. A possible solution to alleviate this problem in the future could consist in using fault tolerant architectures to tolerate manufacturing defects. In this paper, we analyze the conditions that make the use of a classical triple modular redundancy (TMR) architecture interesting for a yield improvement purpose.",2008,0, 7153,Design of Fault Diagnosis System Based on B/S Structure,"With the wide application of modern electrical techniques to weapon systems, weapons become more and more complicated, integrated, high-speed and intelligent. To ensure weapons remain in good condition, the function of fault diagnosis becomes more important than before in the repair process. Fault diagnosis can no longer be performed quickly and correctly by conventional means alone, so a proper system needs to be developed. This paper designs and develops a network fault diagnosis system based on the B/S (Browser/Server) framework through an analysis of the needs of fault diagnosis. Making good use of testing information and diagnosis rules, the system realizes an open and distributed diagnosis process and provides a platform for sharing information, which is the technical basis for combining testing and diagnosis. During design, we use Macromedia Dreamweaver MX 2004 as the development software and JavaScript and VBScript in ASP as the scripting languages. It is proved on a LAN that the system can utilize the existing diagnosis rules in the database to diagnose faults in a distributed manner. In addition, the system is stable, expandable and secure.",2007,0, 7154,A New Fault Ride-through Strategy for Doubly Fed Wind-Power Induction Generator,"Withstanding grid faults becomes an obligation for the bulk wind generation units connected to the transmission network and it is highly desired for distribution wind generators. In this paper, a proposed scheme is implemented for the DFIG to keep it operating during transient grid faults. Challenges imposed on the generator configuration and the control during the fault and recovery periods are presented. A comprehensive time domain model for the DFIG with the decoupled dq controller is implemented using Matlab/Simulink software. Intensive simulation results are discussed to ensure the validity and feasibility of the proposed fault ride-through technique. The scheme protects the DFIG components, fulfills the grid code requirements, and optimizes the hardware added to the generator.",2007,0, 7155,Error compensation of workpiece localization,"Workpiece localization has direct relations with many manufacturing automation applications. In order to gain accurate workpiece measurement by coordinate measuring machines (CMM) or an on-machine measurement system, the touch trigger probe is widely adopted.
In spite of the high repeatability of the touch trigger probe, there are still error sources associated with the probe. In this paper, we focus on probe radius compensation. Several compensation methods in related papers are reviewed. In addition, a new radius compensation method is proposed. Simulation and experimental results of probe radius compensation by different methods are given. It is shown that our proposed method has the best performance in terms of both compensation accuracy and computational time. The method is also implemented in a computer aided setup (CAS) system.",2001,0, 7156,Software fault tolerance of distributed programs using computation slicing,"Writing correct distributed programs is hard. In spite of extensive testing and debugging, software faults persist even in commercial grade software. Many distributed systems, especially those employed in safety-critical environments, should be able to operate properly even in the presence of software faults. Monitoring the execution of a distributed system, and, on detecting a fault, initiating the appropriate corrective action is an important way to tolerate such faults. This gives rise to the predicate detection problem, which involves finding a consistent cut of a distributed computation, if it exists, that satisfies the given global predicate. Detecting a predicate in a computation is, however, an NP-complete problem. To ameliorate the associated combinatorial explosion problem, we introduced the notion of computation slice in our earlier papers [5, 10]. Intuitively, a slice is a concise representation of those consistent cuts that satisfy a certain condition. To detect a predicate, rather than searching the state-space of the computation, it is much more efficient to search the state-space of the slice. In this paper we provide efficient algorithms to compute the slice for several classes of predicates. Our experimental results demonstrate that slicing can lead to an exponential improvement over existing techniques in terms of time and space.",2003,0, 7157,"Problems with Precision: A Response to ""Comments on 'Data Mining Static Code Attributes to Learn Defect Predictors'""","Zhang and Zhang argue that predictors are useless unless they have high precision and recall. We have a different view, for two reasons. First, for SE data sets with large neg/pos ratios, it is often required to lower precision to achieve higher recall. Second, there are many domains where low precision detectors are useful.",2007,0, 7158,Fault tolerant Web service,"Zwass (1996) suggested that middleware and message service is one of the five fundamental technologies used to realize electronic commerce (EC). The simple object access protocol (SOAP) is recognized as a more promising middleware for EC applications among other leading candidates such as CORBA. Many recent polls reveal, however, that security and reliability issues are major concerns that discourage people from engaging in EC transactions. We notice that the fault-tolerance issue is somewhat neglected in the current standard, i.e., SOAP 1.1. We therefore propose a fault tolerant Web service called fault-tolerant SOAP or FT-SOAP through which Web services can be built with higher resilience to failure. FT-SOAP is based on our previous experience with an object fault tolerant service (OFS) [Liang, D. et al., (1999)] and OMG's fault tolerant CORBA (FT-CORBA). There are many architectural differences between SOAP and CORBA.
One of the major contributions of this work is to discuss the impact of these architectural differences on the FT-SOAP design. Our experience shows that Web services built on a SOAP framework enjoy higher flexibility than those built on CORBA. We also point out the limitations of the current feature set of SOAP 1.1. We believe our experience is valuable not only to the fault-tolerance community but also to other communities, in particular those who are familiar with the CORBA platform.",2003,0, 7159,A redundant nested invocation suppression mechanism for active replication fault-tolerant Web service,"Zwass suggested that middleware and message service is one of the five fundamental technologies used to realize electronic commerce (EC) [Zwass, V. (1996)]. The simple object access protocol (SOAP) is recognized as a more promising middleware for EC applications among other leading candidates such as CORBA. We notice that the fault-tolerance issue is somewhat neglected in the current standard, i.e., SOAP 1.1. We therefore proposed a fault tolerant Web service called fault-tolerant SOAP or FT-SOAP through which Web services can be built with higher resilience to failure. Active replication is a common approach to building highly available and reliable distributed software applications. The redundant nested invocation (RNI) problem arises when servers in a replicated group issue nested invocations to other server groups in response to a client invocation. In this work, we propose a mechanism to perform auto-suppression of redundant nested invocations in an active replication FT-SOAP system. Our approach ensures the portability requirement of a middleware, especially for FT-SOAP.",2004,0, 7160,Prediction of fault count data using genetic programming,"Software reliability growth modeling helps in deciding project release time and managing project resources. A large number of such models have been presented in the past. Due to the existence of many models, the models' inherent complexity, and their accompanying assumptions, the selection of suitable models becomes a challenging task. This paper presents empirical results of using genetic programming (GP) for modeling software reliability growth based on weekly fault count data of three different industrial projects. The goodness of fit (adaptability) and predictive accuracy of the evolved model are measured using five different measures in an attempt to present a fair evaluation. The results show that the GP-evolved model has statistically significant goodness of fit and predictive accuracy.",2008,1, 7161,"Merging PMU, Operational, and Non-Operational Data for Interpreting Alarms, Locating Faults and Preventing Cascades","With the development of synchronized sampling techniques and other advanced measurement approaches, the merging of various substation data for use in new EMS applications has not yet been adequately explored. This paper deals with the integration of time-correlated information from Phasor Measurement Units, SCADA and non-operational data captured by other intelligent electronic devices such as protective relays and digital fault recorders, as well as their applications in alarm processing, fault location and cascading event analysis.
A set of new control center visualization tools shows that the merging of PMU, operational and non-operational data could improve the effectiveness of alarm processing, the accuracy of fault location and the ability to detect cascades.",2010,0, 7162,Assessing inter-modular error propagation in distributed software,"With the functionality of most embedded systems based on software (SW), interactions amongst SW modules arise, resulting in error propagation across them. During SW development, it would be helpful to have a framework that clearly demonstrates the error propagation and containment capabilities of the different SW components. In this paper, we assess the impact of inter-modular error propagation. Adopting a white-box SW approach, we make the following contributions: (a) we study and characterize the error propagation process and derive a set of metrics that quantitatively represent the inter-modular SW interactions, (b) we perform fault-injection experiments on a real embedded target system used in an aircraft arrestment system to obtain experimental values for the proposed metrics, (c) we show how the set of metrics can be used to obtain the required analytical framework for error propagation analysis. We find that the derived analytical framework establishes a very close correlation between the analytical and experimental values obtained. The intent is to use this framework to systematically develop SW such that inter-modular error propagation is reduced by design.",2001,0, 7163,Verification and Validation of (Real Time) COTS Products using Fault Injection Techniques,"With the goal of reducing time to market and project costs, the current trend in real time business and mission critical systems is evolving from the development of custom made applications to the use of commercial off the shelf (COTS) products. Obviously, the same confidence and quality as custom made software components is expected from the commercial applications. In most cases, such COTS products are not designed with stringent timing and/or safety requirements as priorities. Thus, to decrease the gap between the use of custom made components and COTS components, this paper presents a methodology for evaluating COTS products in the scope of dependable, real time systems, through the application of fault injection techniques at key points of the software engineering process. By combining the use of robustness testing (fault injection at the interface level) with software fault injection (using educated fault injection operators), a COTS component can be assessed in the context of the system it will belong to, with special emphasis given to the timing and safety constraints that are usually imposed by the target real time dependable environment. In the course of this work, three case studies have been performed to assess the methodology using realistic scenarios that used common COTS products. Results for one case study are presented.",2007,0, 7164,Guided Reasoning of Complex E-Business Process with Business Bug Patterns,"With the growing complexity of e-business applications and the urgent need for ensuring their reliability, much effort has been made to advocate the application of model checking in probing hidden flaws in these applications. This work is devoted to enhancing the performance of reasoning about e-business processes with model checking.
Our major contributions lie in: (1) a set of business bug patterns is extracted from workflow patterns to exploit existing business knowledge in probing undesired violations in e-business processes; (2) the semantics of business bug patterns are formally captured with the IEEE PSL standard; (3) guided verification algorithms are developed based on the above findings to accelerate the reasoning of complex e-business applications. Their efficiency is demonstrated on three concrete business cases in the banking and manufacturing domains using our business process verification toolkit OPAL.",2006,0, 7165,Implementing research for IEC 61850-based fault analysis system,"With the maturing of IEC 61850, utilities are beginning to implement substation automation systems (SAS) that are based on this new international standard. This paper describes such an implementation study of a power fault analysis system at the East China Electric Power Group in China. In particular, it presents the idea of applying object-oriented methodology to the architecture design and providing an open IEC 61850 interface in the substation layer. These ideas and techniques bring several benefits.",2004,0, 7166,Fuzzy Logic Thermal Error Compensation for Computer Numerical Control Noncircular Turning System,"With the new emerging technologies of high performance machining and the increasing demand for improved machining accuracy in recent years, the problem of thermal deformation of machine tool structures is becoming more critical than ever. The computer numerical control (CNC) turning system for noncircular section pistons is designed with a giant magnetostrictive actuator (GMA) as the turning module. When the temperature of the cooler system of the GMA varies by about 6 °C, the dimension error of the surface contour varies by about 20 μm and cannot meet the precision requirement of the piston's contour dimension. In this paper, a method using fuzzy logic control for the compensation of the thermally induced error is developed. The fuzzy rule is used to compensate directly for the nonlinearity and uncertainty of the cooler system. The rule development strategy for the compensation system is to change the feed quantity of the GMA so as to control the tool and hence the piston's dimension. The fuzzy logic control developed here is a two-input single-output controller. The two inputs are the temperature deviation from the setpoint (the error) and the error rate. The output is the compensated value of the feed system. Triangular membership functions are used to define the input and output linguistic variables. The fuzzy logic approach incorporates many advantages such as the incorporation of heuristic knowledge, ease of implementation and the lack of a need for an accurate mathematical model. The experimental results are presented and the effectiveness of the fuzzy thermal error compensation control technique is discussed. Accuracy is greatly improved using the developed error compensation system.",2006,0, 7167,Research of the Middleware Based Fault Tolerance for the Complex Distributed Simulation Applications,"With the rapid development of computer simulation technology, radar simulation applications are scaling up increasingly. More and more radar simulation applications adopt a distributed structure to improve system performance and availability. Hence, how to enhance the robustness and efficiency of these complex distributed simulation systems is a hot topic.
At the same time, fault tolerance middleware makes applications more robust, available and reliable. Therefore, we strengthen the functionalities of existing fault-tolerant middleware and integrate our middleware with complex distributed simulation systems to provide efficient fault tolerance with balanced workload allocation among the different replicas of distributed simulation applications.",2009,0, 7168,A new method of fault-tolerant TCP,"With the rapid development of the Internet, the need for high availability of data services on the Internet has become more urgent. However, as one of the most widely used protocols on the Internet, TCP software alone cannot provide high availability in the face of hardware or software failures on the server or client. In this paper, we propose a new method to implement fault-tolerant TCP and improve the high availability of data transmission. First, we analyze some existing methods of fault-tolerant TCP; then, based on the characteristics of present server architectures, we put forward our new method of fault-tolerant TCP; and lastly, we describe in detail how to implement and test our method. Experimental results show that our fault-tolerant TCP can offer highly available and highly effective communication support for reliable data services on the Internet.",2003,0, 7169,An error robust macro-block mode decision for H.26L stream,"With the rapid development of the Internet, more and more attention is focused on IP video streaming. We introduce an RD optimal macro-block mode decision scheme for the new H.26L video stream. Based on the statistical error propagation model of the Internet and the unequal NAL packets of H.26L, our new scheme can be more error robust than those currently adopted in the H.26L test model.",2002,0, 7170,Software Defect Prediction Using Call Graph Based Ranking (CGBR) Framework,"Recent research on static code attribute (SCA) based defect prediction suggests that a performance ceiling has been achieved and this barrier can be exceeded by increasing the information content in data. In this research we propose the static call graph based ranking (CGBR) framework, which can be applied to any defect prediction model based on SCA. In this framework, we model both intra-module properties and inter-module relations. Our results show that defect predictors using the CGBR framework can detect the same number of defective modules, while yielding significantly lower false alarm rates. On industrial public data, we also show that using the CGBR framework can improve testing efforts by 23%.",2008,1, 7171,Naive Bayes Software Defect Prediction Model,"Although the value of using static code attributes to learn defect predictors has been widely debated, there is no doubt that software defect prediction can effectively improve software quality and testing efficiency. Many data mining methods have already been introduced into defect prediction. We note that there are several versions of defect predictors based on Naive Bayes theory, and analyze their differences in estimation method and algorithm complexity. Through prediction performance evaluation, we found the best one to be Multi-variant Gauss Naive Bayes (MvGNB), and we compared this model with the decision tree learner J48.
Experimental results on the MDP benchmark data sets lead us to believe that MvGNB would be useful for defect prediction.",2010,1, 7172,A Novel Evaluation Method for Defect Prediction in Software Systems,"In this paper, we propose a novel evaluation method for defect prediction in object-oriented software systems. For each metric to evaluate, we start by applying it to the dependency graph extracted from the target software system, and obtain a list of classes ordered by their predicted degree of defect under that metric. By utilizing the actual defect data mined from the Subversion database, we evaluate the quality of each metric by means of a weighted reciprocal ranking mechanism. Our method can tell not only the overall quality of each evaluated metric, but also the quality of the prediction result for each class, especially the costly ones. Evaluation results and analysis show the efficiency and rationality of our method.",2010,1, 7173,Prediction of software faults using fuzzy nonlinear regression modeling,"Software quality models can predict the risk of faults in modules early enough for cost-effective prevention of problems. This paper introduces the fuzzy nonlinear regression (FNR) modeling technique as a method for predicting fault ranges in software modules. FNR modeling differs from classical linear regression in that the output of an FNR model is a fuzzy number. Predicting the exact number of faults in each program module is often not necessary. The FNR model can predict the interval that the number of faults of each module falls into with a certain probability. A case study of a full-scale industrial software system was used to illustrate the usefulness of FNR modeling. This case study included four historical software releases. The first release's data were used to build the FNR model, while the remaining three releases' data were used to evaluate the model. We found that FNR modeling gives useful results.",2000,1, 7174,Timing error detector design and analysis for orthogonal space-time block code receivers,"A general framework for the design of low complexity timing error detectors (TEDs) for orthogonal space-time block code (OSTBC) receivers is proposed. Specifically, we derive sufficient conditions for a difference-of-threshold-crossings timing error estimate to be robust to channel fading. General expressions for the S-curve, estimation error variance and the signal-to-noise ratio are also obtained. As the designed detectors inherently depend on the properties of the OSTBC under consideration, we derive and evaluate the properties of TEDs for a number of known codes. Simulations are used to assess the system performance with the proposed timing detectors incorporated into the receiver timing loop operating in tracking mode. While the theoretical derivations assume a receiver with perfect channel state information and symbol decisions, simulation results include performance for pilot-symbol-based channel estimation and data symbol detection errors. For the case of frequency-flat Rayleigh fading and QPSK modulation, symbol-error-rate results show a timing synchronization loss of less than 0.3 dB for practical timing offsets.
In addition, it is shown that the receiver is able to track timing drift with a normalized bandwidth of up to 0.001.",2008,0, 7175,Defect-oriented fault simulation and test generation in digital circuits,"A generalized approach to fault simulation and test generation is presented, based on a uniform functional fault model for different system representation levels. The fault model allows one to represent defects in components and defects in the communication network of components by the same technique. Physical defects are modeled as parameters in generalized differential equations. Solutions of these equations give the conditions at which defects are locally activated. The defect activation conditions are used as functional fault models for higher level fault simulation purposes. In such a way, the functional fault model can be regarded as an interface for mapping faults from one system level to another, helping to carry out hierarchical fault simulation and test generation in digital systems. A methodology is proposed which allows one to find the types of faults that may occur in a real circuit, to determine their probabilities, and to find the input test patterns that detect these faults. Experimental data of the hierarchical defect-oriented simulation for ISCAS'85 benchmarks are presented, which show that classical stuck-at fault based simulation and test coverage calculation based on counting defects without considering defect probabilities may lead to a considerable overestimation of the result.",2001,0, 7176,An intelligent microcontroller-based configuration for sensor validation and error compensation,"A general-purpose sensor interface is presented in this work. The interface supports certain intelligent features for single-sensor systems and aims to improve the accuracy and reliability of gas sensing systems. A number of algorithms are used to test and evaluate the performance of the whole measuring system in order to minimize errors and evaluate sensor status. The functionality of self-calibration, self-validation and Built-In Self-Test is described. An example of the technique is presented using an AT90S8515 microcontroller for commercial gas sensors.",2003,0, 7177,SPHINX: A Fault-Tolerant System for Scheduling in Dynamic Grid Environments,"A grid consists of high-end computational, storage, and network resources that, while known a priori, are dynamic with respect to activity and availability. Efficient scheduling of requests to use grid resources must adapt to this dynamic environment while meeting administrative policies. In this paper, we describe a framework called SPHINX that can administer grid policies and schedule complex and data-intensive scientific applications. We present experimental results for several scheduling strategies that effectively utilize the monitoring and job-tracking information provided by SPHINX. These results demonstrate that SPHINX can effectively schedule work across a large number of distributed clusters that are owned by multiple units in a virtual organization in a fault-tolerant way in spite of the highly dynamic nature of the grid and complex policy issues.
The novelty lies in the use of effective resource monitoring and job execution tracking in making scheduling and fault-tolerance decisions, something that is missing in today's grid environments.",2005,0, 7178,A fault-tolerant scheduling algorithm for real-time periodic tasks with possible software faults,"A hard real-time system is usually subject to stringent reliability and timing constraints. One way to avoid missing deadlines is to trade the quality of computation results for timeliness, and software fault tolerance is often achieved with the use of redundant programs. A deadline mechanism which combines these two methods is proposed to provide software fault tolerance in hard real-time periodic task systems. We consider the problem of scheduling a set of real-time periodic tasks, each of which has two versions: primary and alternate. The primary version contains more functions and produces good quality results, but its correctness is more difficult to verify. The alternate version contains only the minimum required functions and produces less precise results, but its correctness is easy to verify. We propose a scheduling algorithm which 1) guarantees either the primary or alternate version of each critical task to be completed in time and 2) attempts to complete as many primaries as possible. Our basic algorithm uses a fixed priority-driven preemptive scheduling scheme to preallocate time intervals to the alternates and, at runtime, attempts to execute primaries first. An alternate will be executed only if necessary because of time or bugs.",2003,0, 7179,Fault Diagnosis Expert System for Modem Circuit Board Based on VXI Bus,"A high-tech information electronic equipment of a given type is designed in order to perform automatic fault detection and improve the efficiency and accuracy of diagnosis. This thesis, which is part of the program, introduces the research on the fault diagnosis expert system algorithm for the modem of the equipment, its realization, and example verification on the hardware platform. It is quicker and more convenient to locate faults on the circuit boards with this equipment. It is proved that this expert system can solve the problems of high cost and long maintenance intervals and keep the equipment in a stable status.",2006,0, 7180,Implementing fault hierarchy to trace failures in home gateways,"A home gateway (HG) acts as a border router by selecting the correct route to different appliances using device addresses and supervising resource assignment between the local and access networks. From the quality of service standpoint, minimizing the number of failures of this equipment is a way to gain customers' confidence. To reach this goal, when a service terminates abnormally, the overall failure cause and impact need to be analyzed and quantified. This paper describes fault hierarchy modeling for HGs: starting from test and field failure data, we analyzed the gravity of the different failures, the correlation of their origins and their impact on the overall service architectural elements. With the hierarchy interpreting the transition from faults to failures, a designer or tester can thus predict the consequences of a particular fault from this model.
Our study yielded some test sets that help to trace potential failures and to enhance the design specifications.",2008,0, 7181,Fault detection and isolation based on hybrid modelling in an AC motor,"A hybrid neural network-first principles modelling scheme is used in this paper to model an induction motor and to develop a fault detection and isolation (FDI) scheme. The hybrid model combines a partial first principles model, which incorporates the available prior knowledge about the process being modelled, with a neural network which serves as an estimator of unmeasured and unknown process parameters that are difficult to model from first principles. A fault detection and isolation scheme has been defined based on this hybrid model. This suitable model enables system faults to be simulated and the change in corresponding parameters to be predicted without physical experimentation. The detection scheme is based on the calculation of residuals as the difference between the real system and the hybrid model. The isolation scheme is based on neural networks. A three-phase induction motor was simulated under normal operating conditions using the hybrid methodology. Faults in some internal parameters and voltage imbalance between supply phases have been simulated and detected with the FDI scheme, with quite good results.",2004,0, 7182,Fast error resilient H.263 to H.264 video transcoding using RD optimized intra fresh,"A key issue of error resilient transcoding using the ""intra fresh"" algorithm is how to decide the intra coding rate and adjust the corresponding bits for error resilience, so that joint optimization between video source and error resilience bits can be achieved. To provide a practical solution for this issue, we propose to use a rate distortion (RD) optimization criterion to decide the application of intra coding mode at the MB level. To reduce the implementation complexity of the RD optimized intra fresh method, we also present a fast mode decision algorithm that uses error propagation distortion estimation to decide if an early termination of existing coding mode choices is necessary. The algorithm is implemented to realize fast error resilient H.263 to H.264 video transcoding. Compared with the error resilient transcoding method using random intra update, the proposed algorithm achieves a speed-up of up to 7.2 times while maintaining better picture quality.",2008,0, 7183,Restoration of Directional Overcurrent Relay Coordination in Distributed Generation Systems Utilizing Fault Current Limiter,"A new approach is proposed to solve the directional overcurrent relay coordination problem, which arises from installing distributed generation (DG) in looped power delivery systems (PDS). This approach involves the implementation of a fault current limiter (FCL) to locally limit the DG fault current, and thus restore the original relay coordination. The proposed restoration approach is carried out without altering the original relay settings or disconnecting DGs from PDSs during a fault. Therefore, it is applicable to both the current practice of disconnecting DGs from PDSs, and the emerging trend of keeping DGs in PDSs during a fault. The process of selecting the FCL impedance type (inductive or resistive) and its minimum value is illustrated. Three scenarios are discussed: no DG, and the implementation of DG with and without an FCL. Various simulations are carried out for both single- and multi-DG existence, and different DG and fault locations.
The obtained results are reported and discussed.",2008,0, 7184,A network distribution power system fault location based on neural eigenvalue algorithm,"A new approach to fault location for distribution network power systems is presented. This approach uses eigenvalues and an artificial neural network based learning algorithm. The neural network is trained to map the nonlinear relationship existing between fault location and characteristic eigenvalues. The proposed approach is able to identify, classify and locate different types of faults such as single-line-to-ground, double-line-to-ground, double-line and three-phase faults. Using the eigenvalues as neural network inputs, the proposed algorithm is able to locate the fault distance. The results presented show the effectiveness of the proposed algorithm for correct fault diagnosis and fault location on distribution power system networks.",2003,0, 7185,Classification of faults in double circuit lines using Wavelet transforms,"A new approach using digital relays for double circuit transmission line protection is presented in this paper. The proposed technique consists of a preprocessing module based on time-frequency transforms in combination with an artificial neural network (ANN) for detecting and classifying fault events. The pre-processing module extracts distinctive features in the input signals at the relay location and simplifies the information contained in the input vector of the ANN, improving its speed and accuracy. The Daubechies 6 wavelet transform has been used as the pre-processor.",2008,0, 7186,Mobile Aerosol Lidar for Earth Observation Atmospheric Correction,"A new atmospheric correction method for earth observation images based on the combination of satellite data and lidar data is proposed in this paper. A mobile scanning Mie lidar was developed to detect the aerosols' spatial and temporal distribution for this purpose. To obtain more accurate data, a future development plan for a multi-wavelength, multi-channel Raman lidar is discussed. Earth observation images processed by the radiative transfer model and this new method are presented. Issues in approaching the final goal of this new atmospheric correction method are also discussed.",2006,0, 7187,A new Bug-type navigation algorithm considering practical implementation issues for mobile robots,"A new Bug-type algorithm is proposed in this paper for the navigation of mobile robots. Unlike related works which mainly focus on theoretical analysis while ignoring implementation issues, the new method presents not only an abstract concept which shortens the path length compared with some previous works while guaranteeing convergence through its new switching conditions, but also a lower layer approach to realize this concept by keeping a safe distance from obstacles for collision avoidance. Simulation studies show that the new method generates shorter paths than the classical Bug2 algorithm and a recent work termed Bug2+. Experimental results on a real Pioneer3-AT robot further verify its practicability.",2010,0, 7188,Triggered-vacuum switch-based fault-current limiter,"A new fault current limiter (FCL) is proposed based on a triggered vacuum switch (TVS). The TVS-based FCL (TFCL) is mainly composed of a capacitor, a current-limiting reactor connected in series with the capacitor, and a TVS connected in parallel with the capacitor.
With the TVS in the off or on state, the whole TFCL behaves as conventional series compensation or as a fault current limiter, respectively. Compared with other types of FCL, such as superconducting and thyristor/GTO-based FCLs, the TFCL demonstrates distinguished characteristics such as high capacity, no loss, and low price. Digital simulation and a prototype experiment based on an LC resonant test circuit show that it is feasible to develop the TFCL.",2000,0, 7189,Diagnosing arbitrary defects in logic designs using single location at a time (SLAT),"A new form of logic diagnosis is described that is suitable for diagnosing fails in combinational logic. It can diagnose defects that can affect arbitrarily many elements in the integrated circuit. It operates by first identifying patterns during which only one element is affected by the defect, and then diagnosing the fails observed during the application of such patterns, one pattern at a time. Single stuck-at faults are used for this purpose, and the aggregate of stuck-at fault locations thus identified is then further analyzed to obtain the most accurate estimate of the identities of those elements that can be affected by the defect. This approach to logic diagnosis is as effective as that of classical stuck-at fault-based diagnosis, when the latter applies, but is far more general. In particular, it can diagnose fails caused by bridges and opens as well as fails caused by regular stuck-at faults.",2004,0, 7190,A new FPGA-based postprocessor architecture for channel mismatch correction of time interleaved ADCs,"A new hardware-efficient, low-power postprocessor architecture is presented in this paper to correct the mismatch errors in time interleaved ADCs. The Least Mean Squares (LMS) algorithm is utilized as the correction algorithm to identify the offset and gain mismatches. The proposed architecture uses one processing core for calibrating the output codes of all parallel channels against a reference channel. Increasing the number of parallel channels in the time interleaved ADC does not considerably affect the hardware required for the proposed postprocessor. FPGA synthesis results of the designed postprocessor for a 4-channel 10-bit ADC show that, at the same throughput, reductions of 55% in resource usage and 25% in power consumption are achievable over the conventional architecture.",2009,0, 7191,Mechanical Fault Identification Method Based on Vector Power Spectrum Coupled with Radial Basis Probabilistic Networks,"A new mechanical fault identification method coupling the vector power spectrum with radial basis probabilistic neural networks (RBPNN) is proposed in this paper. The vector power spectrum is used as the feature vector, and the RBPNN is used as the classifier in the new method. The method is used to identify typical mechanical faults. The results show that the new method is very effective for fault identification in rotating machinery, with a higher correct identification rate and faster training speed.",2010,0, 7192,Induction Machine Broken Bar and Stator Short-Circuit Fault Diagnostics Based on Three-Phase Stator Current Envelopes,"A new method for the fault diagnosis of a broken rotor bar and interturn short circuits in induction machines (IMs) is presented. The method is based on the analysis of the three-phase stator current envelopes of IMs using reconstructed phase space transforms. The signatures of each type of fault are created from the three-phase current envelope of each fault.
The resulting fault signatures for the new, so-called ""unseen signals"" are classified using Gaussian mixture models and a Bayesian maximum likelihood classifier. The presented method yields a high degree of accuracy in fault identification as evidenced by the given experimental results, which validate this method.",2008,0, 7193,A New Method for Dynamic Fault Diagnosis of Electric Appliance,"A new method of fault diagnosis by dynamic modeling is put forward to obtain the diagnosis parameters of intelligent appliances in different running stages. The model identification approach using support vector regression (SVR) and the immune clone selection algorithm (ICSA) is presented in this paper. The relation between process status and the temperature change rate is analyzed. For appliance faults with uncertainty, fuzzy inference is applied to realize the inference engine of fault diagnosis. Experimental results prove that the fault diagnosis method for intelligent appliances is credible in its accuracy.",2009,0, 7194,Real-Time Fault Detection and Diagnostics Using FPGA-based Architectures,"A new methodology for radiation-induced real-time fault detection and diagnosis, utilizing FPGA-based architectures, was developed. The methodology includes a full test platform to evaluate a circuit while under radiation and an algorithm to detect and diagnose fault locations within a circuit using Triple Design Triple Modular Redundancy (TDTMR). An analysis of the system was established using fault injection. Additionally, a functional gamma irradiation analysis was performed to assess the effectiveness of the method. The detection and diagnosis algorithm was capable of detecting errors by switching dynamically during the analysis of an FPGA. However, only the injected fault test was able to properly diagnose the location of the fault. The results indicate that FPGA radiation-induced fault production is dependent upon radiation dose rate. A fully interchangeable and operational testing platform has been established along with an algorithm that detects and diagnoses errors in real time.",2010,0, 7195,A direct duty cycle calculation algorithm for digital power factor correction (PFC) implementation,"A new PFC control method based on direct duty cycle calculation is proposed. The duty cycle required to achieve unity power factor is calculated directly based on the reference current, the sensed inductor current, the input voltage and the output voltage. For both digital and analog implementation, the proposed PFC control method is simpler than commonly used average current mode control. Test results for a digital implementation show that the proposed method can achieve unity power factor under both steady and transient states. Sinusoidal input current can be achieved under nonsinusoidal input voltage conditions. The proposed digital PFC control method can achieve good dynamic performance for load and input voltage changes.",2004,0, 7196,Generating Compact Robust and Non-Robust Tests for Complete Coverage of Path Delay Faults Based on Stuck-at Tests,"A new test generation method for fully scanned or combinational circuits is proposed for complete coverage of path delay faults based on single stuck-at tests. The proposed method adds the target path into the original circuit, where all off-path inputs of the path are connected with the corresponding nodes in the original circuit.
Test generation for the path delay fault is reduced to that of the single stuck-at fault at the fanout branch where the additional path connects with its source node in the original circuit. A disjoint dynamic test compaction scheme is proposed to reduce the size of the test set in the process of test generation. A conjoint test compaction scheme is proposed based on the fanout counts of the paths. The proposed method produces a very compact test set for complete coverage of robustly and non-robustly testable path delay faults.",2006,0, 7197,Single-stage power factor correction converter with parallel power processing for wide line and load changes,"A new single-phase single-stage power factor correction converter with a simple auxiliary circuit is proposed. Using parallel power processing, this converter can be operated over wide line and load changes while limiting the link voltage below 400 V. Experimental results show that the measured power factor and efficiency are about 0.98 and 81%, respectively, at the rated condition, and that the auxiliary circuit to reduce the link voltage is effective.",2002,0, 7198,Detection of Bugs by Compiler Optimizer Using Macro Expansion of Functions,"A new static analysis based approach is proposed to detect interface bugs in software. Unlike existing static analyses, which suggest new tools, this approach does not suggest a new tool, but leverages the optimizer which is part of the compiler already used by programmers. To facilitate the optimizer in detecting the interface bugs of a function, a macro is created which encodes the conditions to be checked for the function arguments. The approach is found to be effective when applied on two already well-tested commercial software systems, where it detected more than 50 bugs.",2007,0, 7199,Inferring specifications to detect errors in code,"A new static program analysis method for checking structural properties of code is proposed. The user need only provide a property to check; no further annotations are required. An initial abstraction of the code is computed that over-approximates the effect of function calls. This abstraction is then iteratively refined in response to spurious counterexamples. The refinement involves inferring a context-dependent specification for each function call, so that only as much information about a function is used as is necessary to analyze its caller. When the algorithm terminates, the remaining counterexample is guaranteed not to be spurious, but because the program and its heap are finitized, absence of a counterexample does not constitute proof.",2004,0, 7200,A New System for Computer-Aided Preoperative Planning and Intraoperative Navigation During Corrective Jaw Surgery,"A new system for computer-aided corrective surgery of the jaws has been developed and introduced clinically. It combines three-dimensional (3-D) surgical planning with conventional dental occlusion planning. The developed software allows simulation of the surgical correction on virtual 3-D models of the facial skeleton generated from computed tomography (CT) scans. Surgery planning and simulation include dynamic cephalometry, semi-automatic mirroring, interactive cutting of bone and segment repositioning. By coupling the software with a tracking system and with the help of a special registration procedure, we are able to acquire dental occlusion plans from plaster model mounts. Upon completion of the surgical plan, the setup is used to manufacture positioning splints for intraoperative guidance.
The system provides further intraoperative assistance with the help of a display showing jaw positions and 3-D positioning guides updated in real time during the surgical procedure. The proposed approach offers the advantages of 3-D visualization and tracking technology without sacrificing long-proven cast-based techniques for dental occlusion evaluation. The system has been applied to one patient. Throughout this procedure, we have experienced improved assessment of pathology, increased precision, and augmented control.",2007,0, 7201,High defect coverage with low-power test sequences in a BIST environment,"A new technique, random single-input change (RSIC) test generation, generates low-power test patterns that provide a high level of defect coverage during low-power BIST of digital circuits. The authors propose a parallel BIST implementation of the RSIC generator and analyze its area-overhead impact.",2002,0, 7202,Fisher Discriminance of Fault Predict for Decision-Making Systems,"A new fault prediction (FP) technology based on neural networks and Fisher discriminance in statistics is presented. First, sufficient characteristics of the running situation of decision-making are extracted from the real-time observation data. Secondly, the FP software system is designed and the FP algorithm for decision-making systems is presented. Finally, a simple example indicates that the algorithm is effective.",2009,0, 7203,Fast software MPEG-2 video transcoder with optimization of requantization error compensation,"A low-complexity software video transcoder with motion compensation (MC) of requantization errors has been developed. To reduce the processing time of the developed transcoder, two novel methods are proposed. One is a sparse block transcoding method that bypasses the transcoding process according to the distribution of the transform coefficients in an input block. This method is valid for about half of all blocks of P pictures. The other is a fast MC method that alternates the rounding directions of MC per picture to decrease conditional judgments. The processing time of the fast MC is reduced to three fifths. The transcoder with the proposed methods has been running in real time on PCs.",2002,0, 7204,Turbo Coded Modulation for Unequal Error Protection,"A major concern in designing communication systems is to maintain quality of service for a wide range of channel conditions. This is an important issue particularly for applications where precise characterization of the channel is impossible. For such applications, the source data can be classified into several classes and Unequal Error Protection (UEP) can be used to effectively protect the more important classes even in poor receiving conditions. This paper is focused on the study, design, and performance evaluation of unequal error protecting turbo coded modulation schemes. We first propose several schemes for unequal error protection using turbo coded modulation. All these schemes provide high performance gains for more important classes that can hardly be achieved using conventional coded modulation schemes. We then study unequal error protecting turbo coded modulation schemes by deriving channel capacity and cutoff rates for different protection levels. We show that for more important classes more room is available for improvement.",2008,0, 7205,Detection of wood defects from X-ray image by ANN,"A method for the detection of wood defects based on an ANN was studied in this paper.
Because the intensity of the X-ray that crosses the object changes, defects in wood were detected by the difference in the X-ray absorption parameter, and a computer was then used to process and analyze the image. On the basis of nondestructive testing image processing and characteristic construction, a mathematical model of defects was established using characteristic parameters. According to the signal characteristics of nondestructive testing, artificial neural networks were set up. A BP network model was adopted to recognize all characteristic parameters, which reflect the characteristics of wood defects. The BP network used the coefficient matrix of each unit, including the input layer, intermediate (hidden) layer and output layer, to model the input vector and complete recognition through network learning. The test results show that the method is very successful for the detection and classification of wood defects.",2008,0, 7206,Detection and Correction of Lip-Sync Errors Using Audio and Video Fingerprints,"A method for measuring and maintaining time synchronization between the audio and video of an A/V stream is described. A/V fingerprints are used to create a combined A/V synchronization signature (A/V Sync Signature) at a known reference point. This signature is used at later points to measure A/V timing synchronization relative to the reference point. This method may be used, for example, to automatically detect and correct A/V synchronization (i.e., lip-sync) errors in broadcast systems and other applications. Advantages of the method described over other existing methods include that it does not require modification of the audio or video signals, it can respond to dynamically changing synchronization errors, and it is designed to be robust to modifications of the A/V signals. Although the system requires data to be conveyed to the detection point, this data does not need to be synchronized with, or directly attached to, the audio or video streams. This method uses fingerprints; it also enables other fingerprinting applications within systems, such as content identification and verification. In addition, it may be used to maintain synchronization of other metadata associated with A/V streams.",2010,0, 7207,Statistical Fault Injection,"A method for statistical fault injection (SFI) into arbitrary latches within a full system hardware-emulated model is validated against particle-beam-accelerated SER testing for a modern microprocessor. As performed on the IBM POWER6 microprocessor, SFI is capable of distinguishing between error handling states associated with the injected bit flip. Methodologies to perform random and targeted fault injection are presented.",2008,0, 7208,Detection and Visualization of Defects in 3D Unstructured Models of Nematic Liquid Crystals,"A method for the semi-automatic detection and visualization of defects in models of nematic liquid crystals (NLCs) is introduced; this method is suitable for unstructured models, a previously unsolved problem. The detected defects, also known as disclinations, are regions where the alignment of the liquid crystal rapidly changes over space; these defects play a large role in the physical behavior of the NLC substrate. Defect detection is based upon a measure of the total angular change of crystal orientation (the director) over a node neighborhood via the use of a nearest neighbor path.
Visualizations based upon the detection algorithm clearly identify complete defect regions as opposed to the incomplete visual descriptions provided by cutting-plane and isosurface approaches. The introduced techniques are currently in use by scientists studying the dynamics of defect change.",2006,0, 7209,Faults Coverage Improvement Based on Fault Simulation and Partial Duplication,"A method for improving the coverage of single faults in combinational circuits is proposed. The method is based on Concurrent Error Detection, but uses fault simulation to find critical points, the places where faults are difficult to detect. Partial duplication of the design with regard to these critical points is able to increase the fault coverage with a low area overhead cost. Due to the higher fault coverage, we can improve the dependability parameters. The proposed modification is tested on railway station safety device designs implemented in an FPGA.",2010,0, 7210,Modeling mutually exclusive events in fault trees,"A method is given for constructing fault tree gates to model mutually exclusive events. The gates are constructed from stochastically independent events, AND gates and NOT gates. Examples are presented to illustrate the technique. If the gate construction must be performed manually, the method adds complexity to the fault tree model that may not be justified. Approximating mutually exclusive events by independent events may have little effect on computed gate probabilities. The method could easily be automated in a standard fault tree solver so that this gate construction goes on behind the scenes. This would permit users to specify disjoint events directly. The authors conjecture that the additional computational cost would be small, since the number of basic events in the tree does not increase and the new NOT gates are inserted at the bottom of the tree.",2000,0, 7211,An accurate method for correction of head movement in PET,"A method is presented to correct positron emission tomography (PET) data for head motion during data acquisition. The method is based on simultaneous acquisition of PET data in list mode and monitoring of the patient's head movements with a motion tracking system. According to the measured head motion, the line of response (LOR) of each single detected PET event is spatially transformed, resulting in a spatially fully corrected data set. The basic algorithm for spatial transformation of LORs is based on a number of assumptions which can lead to spatial artifacts and quantitative inaccuracies in the resulting images. These deficiencies are discussed, demonstrated and methods for improvement are presented. Using different kinds of phantoms, the validity and accuracy of the correction method is tested and its applicability to human studies is demonstrated as well.",2004,0, 7212,Implementation of LCD Driver Using Asymmetric Truncation Error Compensation for Mobile DMB Phone Display,"16-bit color per pixel has generally been used for mobile DMB phone displays to reduce power consumption. This accounts for the lack of smoothness, such as blocking, caused by 24-bit to 16-bit asymmetric pixel data truncation. This paper proposes and implements a truncation error compensation algorithm using data expansion from correlated color information for gray scale CCT consistency and for color constancy.",2007,0, 7213,3D Visualization of Stratum with Faults Based on VTK,"A 3D stratum model can become a useful auxiliary geological tool.
Based on this tool, geologists can analyze strata more accurately and scientifically. VTK is an object-oriented visualization class library. The 3D stratum is interpolated with the Kriging method. Faults are a common geological phenomenon. In order to visualize strata with faults, a new interpolation method is proposed.",2009,0, 7214,Cross-Layer Error Control Optimization in 3G LTE,"3G long-term evolution (LTE) is a recent effort by the cellular industry to step into the wireless broadband market. The key enhancements target the introduction of a new all-IP architecture, an enhanced link layer, and radio access with OFDM modulation and multiple antenna techniques. In this study, we focus on the overhead deriving from the multilayer ARQ employed at the link and transport layers. With the aim of reducing the unnecessary burden on the wireless link, we propose a cross-layer ARQ approach, called ARQ Proxy, which substitutes the transmission of a TCP ACK packet with a short MAC layer request on the radio link. Packet identification is achieved by associating a hash function with the raw packet data. Performance of the ARQ Proxy is evaluated using the EURANE extensions for the ns-2 simulator. Results demonstrate significant improvements in terms of system capacity, TCP throughput performance, and higher tolerance to transmission errors.",2007,0, 7215,A Pipelined 12-bit Analog-to-Digital Converter with Continuous On-Chip Digital Correction,"A 12-bit 43 MHz switched capacitor pipelined analog-to-digital converter was implemented in a 0.5 μm 2P3M CMOS process based on a 1.5-bit/stage architecture. The design features on-chip continuous digital correction and fully differential signal path circuitry that minimizes noise and relaxes comparator offset requirements. The fully monolithic design achieved an integral non-linearity of ±0.7 LSB and a differential non-linearity of ±0.8 LSB with a power dissipation of 94 mW.",2007,0, 7216,Error estimators in 3D linear magnetostatics,"A 3D error estimator based on the hypercircle theorem is presented for the case of the F.E.M. scalar potential formulation in magnetostatics. This estimator requires the construction of an admissible magnetic flux density, which may be discretised in the facet element space. Two ways to calculate this field are presented. The proposed estimator is tested on several examples and compared to the one based on two complementary finite element solutions.",2000,0, 7217,Calculation Studies of Operational Faults of Multielement Disk MCG,"A calculation simulation of multielement disk magnetocumulative systems is carried out. The effect of the failure of one or several detonators in the high-explosive (HE) charge initiation system, and of an axial shift of the detonators relative to the symmetry plane of the HE charges, on the operating efficiency of the multielement disk magnetocumulative generator is considered. The considered models make it possible to explain the experimental results in similar situations.",2010,0, 7218,Predicting Thermal Neutron-Induced Soft Errors in Static Memories Using TCAD and Physics-Based Monte Carlo Simulation Tools,"A combination of commercial simulation tools and custom applications utilizing Geant4 physics libraries is used to analyze thermal neutron induced soft error rates in a commercial bulk CMOS SRAM.
Detailed descriptions of the sensitive regions based upon technology computer-aided design (TCAD) calibration are used in conjunction with a physics-based Monte Carlo simulator to predict neutron soft error cross sections that are in good agreement with experimental results,2007,0, 7219,An Automated On-line Monitoring and Fault Diagnosis System for Power Transformers,"A combined artificial neural network and expert system tool (ANNEPS) was developed over the years as an off-line diagnosis tool for power transformers based on dissolved gas analysis (DGA). ANNEPS takes advantage of the inherent positive features of each method and offers a further refinement of present techniques. This tool has been confirmed to have high performance in diagnosing multiple fault types in power transformers. An automated diagnosis system, ANNEPS v4.0, is presented in this paper. The new system extends the existing expert system and artificial neural network diagnostic engine with a new interface, automated database interactions, and alarm notification functions. The system receives DGA data from an on-line DGA monitor and then stores all information into a database. The combined neural network and expert system engine validates data, detects the faults, and recommends appropriate action. When the result indicates an ""abnormal"" condition, a notification of the diagnosis results is sent to transformer maintenance personnel through email. It also performs an automatic daily backup of both the input database and the output file. The developed ANNEPS system offers users automatic processing of on-line dissolved-gas-in-oil data with a comprehensive diagnosis algorithm",2006,0, 7220,Elimination of scan blindness with compact defected ground structures in microstrip phased array,"A compact H-shaped defected ground structure (DGS) is applied to reduce the mutual coupling between array elements and eliminate the scan blindness in a microstrip phased array design. The proposed DGS is inserted between the adjacent E-plane coupled elements in the array to suppress the pronounced surface waves. A two-element array is measured and the results show that a reduction in mutual coupling of 12 dB is obtained between elements at the operation frequency of the array. The scan properties of microstrip phased arrays with and without DGS are studied by the waveguide simulator method. The analysis indicates that the scan blindness of the microstrip phased array can be well eliminated because of the effect of the proposed DGS. Meanwhile, the active patterns of the array centre element in 7×3 element arrays with and without the H-shaped DGS are simulated, and the results agree with those obtained by the waveguide simulator method.",2009,0, 7221,Miniaturized microstrip lowpass filter with wide stopband using double equilateral U-shaped defected ground structure,"A compact double equilateral U-shaped defected ground structure (DGS) unit is proposed. In contrast to a single finite attenuation pole characteristic offered by the conventional dumbbell DGS, the proposed DGS unit provides dual finite attenuation poles that can be independently controlled by the DGS lengths. A 2.4-GHz microstrip lowpass filter using five cascaded double U-shaped DGS units is designed and compared with conventional DGS lowpass filters.
This lowpass filter achieves a wide stopband with overall 30-dB attenuation up to 10 GHz and more than 42% size reduction.",2006,0, 7222,Fault diagnosis approach of protective switching device based on support vector machines,"A fault diagnosis approach for the protective switching device (PSD) is presented in this paper. The support vector machine (SVM), which is a new kind of fault diagnosis method, is used to classify the faults, including short circuit, overload, ..., etc. By training and learning, the fault model is built, the faults are diagnosed quickly and accurately, and accordingly the protection feature of the PSD is improved.",2008,0, 7223,Analog IC Fault Diagnosis based on Wavelet Neural Network Ensemble,"A fault diagnosis method for analog ICs, based on a wavelet neural network ensemble (WNNE) and the AdaBoost algorithm, is proposed in this paper. This enhances the validity of the fault diagnosis. Wavelet decomposition is used as a tool for feature extraction. Then, after training the WNNE with faulty feature vectors, the fault diagnosis model of the circuit is built. Simulation results show that this method is more effective than a single wavelet neural network (WNN).",2009,0, 7224,A tool for injecting SEU-like faults into the configuration control mechanism of Xilinx Virtex FPGAs,"A fault injection tool for Virtex FPGAs based on the fault emulation technique is presented. It allows injection of faults in the configuration control mechanism differently from the tools developed so far, which address only configuration memory cells and user registers. This permits a more realistic and complete study of device behaviour, especially in those applications in which the system operating in a harsh environment undergoes frequent reconfigurations. Injection is performed by modifying the configuration bit stream while this is loaded into the device, without using standard synthesis tools or available commercial software, such as Jbits or similar. This makes our tool independent of the system used for design development and allows a quick fault injection. Moreover, any register of the configuration state machine can be accessed and the effect of SEUs on them analyzed. This analysis is fundamental before performing radiation ground testing of this kind of device.",2003,0, 7225,A new algorithm of improving fault location based on SVM,"A fault location algorithm using estimated line parameters is provided in this paper. The characteristic of this algorithm is its use of estimated line parameters, so the influence of the line parameters is eliminated. Support vector machine theory is used to estimate the transmission line parameters, which is a nonlinear black box modeling problem. Historical data are used as training samples. EMTP simulations show that this method notably improves the accuracy of fault location.",2004,0, 7226,A Fault Recovery Approach in Fault-Tolerant Processor,"A fault recovery scheme of a fault-tolerant processor for embedded systems is introduced in this paper. The microarchitecture of the fault-tolerant processor, called RSED, is modified from a superscalar processor architecture. The fault-tolerant mechanism of RSED is implemented mainly using a temporal redundancy technique. The fault recovery scheme is an important part of the fault-tolerant mechanism. In order to resolve the problem of possible single points of failure, a novel TMR approach is adopted to generate the re-execution instruction address.
Compared with similar works, the proposed fault recovery scheme can recover processor execution more reliably.",2009,0, 7227,Fault tolerance design on onboard computer using COTS components,"A fault tolerance design for an onboard computer (OBC) is proposed that allows commercial-off-the-shelf (COTS) devices to be incorporated into the dual processing modules of the onboard computer. The processing module is composed of a 32-bit ARM RISC processor and other COTS devices. In addition, a set of fault handling mechanisms is implemented in the computer system. The onboard software was organized around a set of processes that communicate with each other through a routing process. The fault tolerant onboard computer has excellent data processing capability and is sufficient to meet the demands of the extremely tight constraints on mass, volume, power consumption and space environmental conditions",2006,0, 7228,Fault-tolerant router with built-in self-test/self-diagnosis and fault-isolation circuits for 2D-mesh based chip multiprocessor systems,"A fault-tolerant router design (20-path router) is proposed to reduce the impacts of faulty routers in 2D-mesh based chip multiprocessor systems. In our experiments, the OCNs using 20PRs can reduce unreachable packets by 75.65% ~ 85.01% and latency by 7.78% ~ 26.59% in comparison with the OCNs using generic XY routers.",2009,0, 7229,FPGA-based fault emulation of synchronous sequential circuits,"A feasibility study of accelerating fault simulation by emulation on field programmable gate arrays (FPGAs) is described. Fault simulation is an important subtask in test pattern generation and it is frequently used throughout the test generation process. The problems associated with fault simulation of sequential circuits are explained. Alternatives that can be considered as trade-offs in terms of the required FPGA resources and accuracy of test quality assessment are discussed. In addition, an extension to the existing environment for re-configurable hardware emulation of fault simulation is presented. It incorporates hardware support for fault dropping. The proposed approach allows simulation speed-up of 40-500 times as compared to the state-of-the-art in software-based fault simulation. On the basis of the experiments, it can be concluded that it is beneficial to use emulation for circuits/methods that require large numbers of test vectors while using simple but flexible algorithmic test vector generating circuits, for example built-in self-test",2007,0, 7230,Influence of Vector Control Algorithms on Stator Current Harmonics in Three-phase Squirrel-cage Induction Motors under Mixed Eccentricity Faults,"A fully analytical method is presented to study the performance of an induction motor under eccentricity fault conditions using winding function theory. The analytical nature of this method makes it possible to use system oriented simulation environments such as Simulink for analysis of the application of different control algorithms. The influence of different control algorithms upon the stator current harmonics due to the eccentricity fault will be studied in this paper.
It is also shown that after applying Park's transformation to the three-phase stator currents, detection of eccentricity harmonics around the fundamental frequency is easier than the detection of these harmonics within the stator three-phase currents.",2007,0, 7231,A Unified Procedure for Fault Detection of Analog and Mixed-Mode Circuits Using Magnitude and Phase Components of the Power Supply Current Spectrum,"A method using the power supply current for fault detection in analog and mixed-mode circuits is presented. A new unified fault detection procedure is introduced, and its fault detection capability is estimated. The procedure combines the rms value and the magnitude and phase components of the power supply current. Certain required discrimination factors are defined, and their values are adjusted to include circuit parameter deviations and matching dependencies for the efficient application of the method. Representative results from the simulation of an operational amplifier (opamp) and a flash analog-to-digital converter circuit are given, showing the effectiveness of the proposed method.",2008,0, 7232,A Novel Real-Time Error Compensation Methodology for IMU-based Digital Writing Instrument,"A micro inertial measurement unit (IMU) which is based on Micro-Electro-Mechanical Systems (MEMS) accelerometers and gyroscope sensors is developed for real-time recognition of human hand motion. By using appropriate filtering, transformation and sensor fusion algorithms, a ubiquitous digital writing instrument is produced for recording handwriting on any surface. In this paper, we propose a method for deriving an error feedback to a Kalman filter based on the assumption that writing occurs only in two dimensions, i.e., the writing surface is flat. By imposing this constraint, error feedback to the Kalman filter can be derived. Details of the feedback algorithm will be discussed and experimental results of its implementation are compared with the simple Kalman filter without feedback information.",2006,0, 7233,Realization of Fault-Tolerant Home Network Management Middleware with the TMO Structuring Approach and an Integration of Fault Detection and Reconfiguration Mechanisms,"A middleware model named ROAFTS (Real-time Object-oriented Adaptive Fault Tolerance Support) has been evolving in the UCI DREAM Lab. over the past decade as the core of a reliable execution engine model for fault-tolerant (FT) real-time (RT) distributed computing (DC) applications. It is meant to be an integration of various mechanisms for fault detection and recovery in a form that meshes well with high-level RT DC object-/component-based programming, in particular, TMO (Time-triggered Message-triggered Object) programming. Using ROAFTS as a backbone and low-layer middleware, we developed a model and a skeleton implementation for FT DC middleware providing efficient FT execution services for component-based home network applications. Capabilities for management of home information processing devices, including health monitoring of home devices, reconfiguration of device connections, and servicing queries on device status, were added to ROAFTS. Those additions were first designed as a network of high-level RT DC components, i.e., TMOs. Then the TMO network was extended into an FT TMO network by applying the replication scheme called the PSTR (Primary-Shadow TMO Replication) scheme and incorporating a component responsible for reconfiguring TMO replicas.
This extension of ROAFTS is called ROAFTS-HNE (Home Network Extension) and its architecture is presented here. In addition, during the development of the ROAFTS-HNE model, we formulated a new approach for applying the PSTR scheme to RT DC components supported by ROAFTS. Finally, evaluations of the recovery times of a prototype implementation have been conducted.",2009,0, 7234,Software life cycle-based defects prediction and diagnosis technique research,"A model based on a Bayesian network is put forward to predict and diagnose software defects before a project starts. In the model, cause-and-effect inference is used to predict defects, and the Bayesian formula is introduced to analyze the prediction results, which helps to find the root causes of the defects. The model considers every phase of the software life cycle, such as requirements, design, development and testing, and maintenance. The model computes the prediction result through variable weights of affect-factors. The results computed by the model for specific affect-factors are compared with actual defects under the same conditions, which indicates that the model is valid. The model can predict and find defects early and effectively, so it helps control software quality and development cost.",2010,0, 7235,A generic real-time computer Simulation model for Superconducting fault current limiters and its application in system protection studies,"A model for the SCFCL suitable for use in real time computer simulation is presented. The model accounts for the highly nonlinear quench behavior of BSCCO and includes the thermal aspects of the transient phenomena when the SCFCL is activated. Implemented in the RTDS real-time simulation tool, the model has been validated against published BSCCO characteristics. As an example of an application in protection system studies, the effect of an SCFCL on a utility type impedance relay has been investigated using a real time hardware-in-the-loop (RT-HIL) experiment. The test setup is described and initial results are presented. They illustrate how the relay misinterprets the dynamically changing SCFCL impedance as an apparently more distant fault location. It is expected that the new real-time SCFCL model will provide a valuable tool not only for further protection system studies but for a wide range of RT-HIL experiments of power systems.",2005,0, 7236,Fault management for Internet Services: Modeling and Algorithms,"A modeling approach is proposed in this paper to build the bipartite fault propagation model (FPM) for Internet services. The FPM is layered as Internet services involve multiple layers. Two fault localization algorithms, MCA (Max-Covering Algorithm) and MCA+, are designed for the bipartite FPM. MCA+ is an extension of MCA, taking lost and spurious symptoms into account. Simulation results show that MCA+ achieves a high detection rate and a low false positive rate and has polynomial computational complexity even in the presence of lost and spurious symptoms.",2006,0, 7237,Fast Enhancement of Validation Test Sets for Improving the Stuck-at Fault Coverage of RTL Circuits,"A digital circuit usually comprises a controller and a datapath. The time spent for determining a valid controller behavior to detect a fault usually dominates test generation time. A validation test set is used to verify controller behavior and, hence, it activates various controller behaviors.
In this paper, we present a novel methodology wherein the controller behaviors exercised by test sequences in a validation test set are reused for detecting faults in the datapath. A heuristic is used to identify controller behaviors that can justify/propagate pre-computed test vectors/responses of datapath register-transfer level (RTL) modules. Such controller behaviors are said to be compatible with the corresponding precomputed test vectors/responses. The heuristic is fairly accurate, resulting in the detection of a majority of stuck-at faults in the datapath RTL modules. Also, since test generation is performed at the RTL and the controller behavior is predetermined, test generation time is reduced. For microprocessors, if the validation test set consists of instruction sequences then the proposed methodology also generates instruction-level test sequences.",2009,0, 7238,Detecting Atomicity Errors in Message Passing Programs,"A distributed application can be viewed as a collection of processes that execute a number of atomic actions. Atomicity is the basis for reasoning about the correctness of a program. Atomicity errors in a run typically indicate the presence of program errors. This paper formalizes the notion of atomicity of an action in a message passing program based on a weak-order relation among atoms. An atom can be a single statement or a sequence of statements in a program. Knowing the atoms, the atomicity of a run can be monitored and checked. Serialization of conflicting atoms is another generic correctness requirement. When atoms affect a common property, such as in sharing resources or maintaining a common constraint, they must be serialized in a run. This paper presents two efficient algorithms for dynamically detecting atomicity and serialization errors, accompanied with their proof of correctness.",2007,0, 7239,Self-optimization of MPSoCs Targeting Resource Efficiency and Fault Tolerance,"A dynamically reconfigurable on-chip multiprocessor architecture is presented, which can be adapted to changing application demands and to faults detected at run-time. The scalable architecture comprises lightweight embedded RISC processors that are interconnected by a hierarchical network-on-chip (NoC). Reconfigurability is integrated into the processors as well as into the NoC with minimal area and performance overhead. Adaptability of the architecture relies on a self-optimizing reconfiguration of the MPSoC at run-time. The resource-efficiency of the proposed architecture is analyzed based on FPGA and ASIC prototypes.",2009,0, 7240,Evaluating Throughput of a Wormhole-Switched Routing Algorithm in NoC with Faults,"A famous wormhole-switched routing algorithm for mesh interconnection networks, called f-cube3, uses three virtual channels to pass faulty blocks, while only one virtual channel is used when a message does not encounter a fault. Routing with faults usually uses virtual channels to conquer faulty regions. One of the key issues in the design of Networks-on-Chip (NoC) is the development of a well-organized communication system to provide a high-throughput interconnection network. We have evaluated if-cube3 - a fault-tolerant routing algorithm based on f-cube3 - for increasing the throughput of the network. Moreover, simulations of both the f-cube3 and if-cube3 algorithms under the same conditions are presented. Modifications to the use of virtual channels per physical link, without adding extra virtual channels, are illustrated by the results obtained from simulation.
As the simulation results show, if-cube3 has a higher performance than f-cube3. The results also show that if-cube3 leaves fewer packets in the network and offers improved performance at high traffic loads in the Network-on-Chip.",2009,0, 7241,An energy flow approach to fault propagation analysis,"A complex system, such as an aircraft engine, is composed of several components or subsystems which interact with each other in several ways. These constituent components/subsystems and their interactions make up the whole system. When a fault condition arises in one of the components, not only does that component's behavior change, but the interaction of that component with the other system constituents may also change. This might result in spreading the effect of that fault to other components in a domino-like effect, until the overall system fails. This paper presents a modular methodology to analyze the propagation of faults from one subsystem to the other subsystems. The application domain focuses on an aero propulsion system of the turbofan type.",2009,0, 7242,Corrosion Model of a Rotor-Bar-Under-Fault Progress in Induction Motors,"A corrosion model of a rotor bar under fault progress in induction motors is presented for simulations of induction machines with a rotor-bar fault. A rotor-bar model is derived from electromagnetic theory. The leakage inductance of the corrosion model of a rotor bar is calculated from the relations of magnetic energy, inductance, current, and magnetic-field intensity by Ampere's law. The leakage inductance and resistance of a rotor bar vary when the rotor bar rusts. In addition, the skin effect is considered to establish the practical model of a rotor bar. Consequently, the variation of resistance and leakage inductance has an effect on the results of motor dynamic simulations and experiments, since a corrosive rotor bar is one model of a rotor bar in fault progress. The results of simulations and experiments are shown to be in good agreement with the spectral analysis of stator-current harmonics. From the proposed corrosion model, motor current signature analysis can detect the fault of a corrosive rotor bar as the progress of a rotor-bar fault. Computer simulations were performed using MATLAB Simulink with an electrical model of a 3.7-kW, three-phase, and squirrel-cage induction motor. Also, experimental results were obtained from real induction motors, which had the same specification as the electrical model used in the simulation",2006,0, 7243,CPW Band-Stop Filter with Tapered-Shaped Defected Ground Structure,"A CPW band-stop filter that has a tapered-shaped defected ground structure (DGS) is demonstrated. The proposed band-stop filter includes symmetrical tapered-shaped DGS and a CPW-fed excitation to obtain two resonance frequency nulls and a wider -10 dB operating bandwidth. By modifying the tapered-shaped DGS structure and centre strip width (w), the operating bandwidth can be easily adjusted. The simulated and measured results show acceptable S-parameters. In other designs, multisection filters using the tapered-shaped DGS structure and a CPW filter with dual tapered-shaped DGS are designed and tested experimentally.",2007,0, 7244,Fault diagnostics in power electronics based brake-by-wire system,"A DC motor based brake-by-wire system is studied for the purpose of fault diagnostics of its power electronic switches.
The voltages and currents generated in the switching circuit under normal and faulted conditions are fed to a fuzzy algorithm to detect the particular solid state power switch which is faulty, and the moment of the occurrence of the fault. The algorithm can be easily implemented using an inexpensive processor for on-board diagnostics applications.",2005,0, 7245,Multichamber Tunable Liquid Microlenses with Active Aberration Correction,"A design approach and a new manufacturing technique for a novel type of stacked fluidic multi-chamber tunable lenses are presented. The design offers flexibility and extensibility, leading to fully functional miniature tunable optical lens systems with the ability for low-order aberration control.",2009,0, 7246,Corrections for nonlinear vector network analyzer measurements using a stochastic multi-line/reflect method,"A new 16-term statistical calibration has been developed for the correction of vector network analyzer (VNA) data. The method uses multiple measurements of generic transmission line and reflection standards. Using a functional model of the system and transmission line standards, we apply a nonlinear least-squares estimator to simultaneously optimize the correction terms in the measurement model and the propagation constant. The method provides estimates of the uncertainty on each of the parameters using the final Jacobian. This paper shows for the first time an application of the new calibration to a commercial nonlinear VNA, plus quantitative statements regarding the quality of the parameters.",2004,0, 7247,Correction of bias field in MR images using singularity function analysis,"A new approach for correcting bias field in magnetic resonance (MR) images is proposed using the mathematical model of singularity function analysis (SFA), which represents a discrete signal or its spectrum as a weighted sum of singularity functions. Through this model, an MR image's low spatial frequency components corrupted by a smoothly varying bias field are first removed, and then reconstructed from its higher spatial frequency components not polluted by bias field. The thus reconstructed image is then used to estimate bias field for final image correction. The approach does not rely on the assumption that anatomical information in MR images occurs at higher spatial frequencies than bias field. The performance of this approach is evaluated using both simulated and real clinical MR images.",2005,0, 7248,Fault Diagnosis of a SCARA Robot,"A new approach for fault detection of a robot manipulator is introduced in this paper. Unlike existing methods, which mount various sensors at every joint, the new technique applies only one accelerometer mounted at the tip of the robot. Based on dynamic analysis of the robot, the theoretical acceleration profile of the programmed motion was obtained, which was compared with the acceleration signals measured during operation for fault diagnosis. An advanced detrending algorithm was adopted to extract the programmed acceleration data from the measured noisy acceleration signals. Typical FFT-based spectrum analysis was applied for analyzing both the trended and detrended data for error detection.
The effectiveness of the proposed approach was verified with experiments conducted on a four-joint SCARA manipulator.",2008,0, 7249,Placing Functionality in Fault-Tolerant Hardware/Software Reconfigurable Networks,"A novel framework shows the potential of FPGA-based systems for increasing fault-tolerance and flexibility by placing functionality onto free hardware (HW) or software (SW) resources at runtime. Based on field-programmable gate arrays (FPGAs) in combination with CPUs, tasks implemented in HW or SW will migrate between computational nodes in a network. Moreover, if not enough HW/SW resources are available, the migration of functionality from HW to SW or vice versa is provided. By integrating this resource management, also called online HW/SW partitioning, in a distributed operating system for networked systems, a substantial step towards self-adaptive systems has been taken. Besides a short motivation for this project and the presentation of related work, this paper gives an idea about a novel approach and a powerful experimental setup.",2006,0, 7250,Error-resilient video transcoding for robust internetwork communications using GPRS,"A novel fully comprehensive mobile video communications system is proposed. The system exploits the useful rate management features of video transcoders and combines them with error resilience for the transmission of coded video streams over general packet radio service (GPRS) mobile-access networks. The error-resilient video transcoding operation takes place at a centralized point, referred to as a video proxy, which provides the necessary output transmission rates with the required amount of robustness. With the use of this proposed algorithm, error resilience can be added to an already compressed video stream at an intermediate stage at the edge of two or more different networks through two resilience schemes, namely the adaptive intra refresh (AIR) and feedback control signaling (FCS) methods. Both resilience tools impose an output rate increase which can also be prevented with the proposed novel technique. Thus, the presented scheme gives robust video outputs at near target transmission rates that only require the same number of GPRS timeslots as non-resilient schemes. Moreover, ultimate robustness is also accomplished with the combination of the two resilience algorithms at the video proxy. Extensive computer simulations demonstrate the effectiveness of the proposed system",2002,0, 7251,"Scalable, error-resilient, and high-performance video communications in mobile wireless environments","A novel mobile communications system is proposed, which provides not only effective but also efficient video access for mobile users when communicating over low-bandwidth error-prone wireless links. The middleware implemented by a mobile proxy server at the mobile support station is designed for the seamless integration of mobile users with video servers, so the specific details of the underlying protocols and source/channel coding techniques are hidden from both the video server and mobile client. Based on the concept of application-level framing, the application (video codec in our case) plays a significant role in network communications, in that most of the functionalities of the system are implemented as part of the application. As such, at the application layer, adaptive source- and channel-coding techniques are developed to jointly provide the user with the highest possible video quality.
For efficient source coding, our high-performance low-complexity video-coding algorithm called 3-D significance-linked connected component analysis (3D-SLCCA) is chosen. Due to its high robustness against channel-error propagation, 3D-SLCCA is well suited for wireless environments. For error-resilient channel coding, a multilayer transmission error-control mechanism is developed. Since there is no additional requirement imposed on either the mobile client or the video server, mobile users interact with the server in exactly the same way as stationary users. Extensive computer experiments demonstrate the effectiveness of the proposed system",2001,0, 7252,Fuzzy Reasoning Based Temporal Error Concealment,"A novel temporal error concealment based on fuzzy reasoning is proposed in this paper. In temporal error concealment, the motion vector (MV) of the lost block can be selected from candidate MVs. Generally, side match distortion (SMD), which reflects the continuity across the boundary, and sum of absolute difference (SAD), which shows the degree of similarity of the matching area, are two widely used criteria for making the decision. Each criterion describes the situation only partially. Combining these two measures, a refined measure based on fuzzy reasoning is calculated to balance the effects of SMD and SAD. According to the experimental results, our method performs better than the others.",2009,0, 7253,Using orthogonal visual servoing errors for classifying terrain,"A novel, centimeter-scale crawling robot has been developed to address applications in surveillance, search-and-rescue, and planetary exploration. This places constraints on size and durability that minimize the mechanism. As a result, a dual-use design employing two arms for both manipulation and locomotion was conceived. In a complementary fashion, this paper investigates the dual-use of visual servoing error. Visual servoing can be used by a mobile robot for homing and tracking. But because ground-based mobile robots are inherently planar, the control methodology (steering) is one-dimensional. The two-dimensional nature of image-based servoing leaves additional information content to be used in other contexts. We explore this information in the context of classifying terrain conditions. An outline for gait adaptation based on this is suggested for future work",2001,0, 7254,Error-Pooling Empirical Bayes Model for Enhanced Statistical Discovery of Differential Expression in Microarray Data,"A number of statistical approaches have been proposed for evaluating the statistical significance of a differential expression in microarray data. The error estimation of these approaches is inaccurate when the number of replicated arrays is small. Consequently, their resulting statistics are often underpowered to detect important differential expression patterns in the microarray data with limited replication. In this paper, we propose an empirical Bayes (EB) heterogeneous error model (HEM) with error-pooling prior specifications for varying technical and biological errors in the microarray data. The error estimation of HEM is thus strengthened by and shrunk toward the EB priors that are obtained by the error-pooling estimation at each local intensity range. By using simulated and real data sets, we compared HEM with two widely used statistical approaches, significance analysis of microarray (SAM) and analysis of variance (ANOVA), to identify differential expression patterns across multiple conditions.
The comparison showed that HEM is statistically more powerful than SAM and ANOVA, particularly when the sample size is smaller than five. We also suggest a resampling-based estimation of Bayesian false discovery rate to provide a biologically relevant cutoff criterion of HEM statistics.",2008,0, 7255,Optical network packet error rate due to physical layer coding,"A physical layer coding scheme is designed to make optimal use of the available physical link, providing functionality to higher components in the network stack. This paper presents results of an exploration of the errors observed when an optical gigabit Ethernet link is subject to attenuation. The results show that some data symbols suffer from a far higher probability of error than others. This effect is caused by an interaction between the physical layer and the 8B/10B block coding scheme. The authors illustrate how the application of a scrambler, performing data whitening, restores content-independent uniformity of packet loss. They also note the implications of their work for other (N, K) block-coded systems and discuss how this effect will manifest itself in a scrambler-based system. A conjecture is made that there is a need to build converged systems with the combinations of physical, data link, and network layers optimized to interact correctly. In the meantime, what will become increasingly necessary is both an identification of the potential for failure and the need to plan around it.",2005,0, 7256,"Reducing the ""No Fault Found"" problem: Contributions from expert-system methods","A positive sign that a problem is important is when it has many different names. The No Fault Found (NFF), No Defect Found (NDF), No Problem Found (NPF), Retest-OK (RETOK) or Could Not Duplicate (CND) problem certainly meets this criterion. When service personnel need to resolve a problem in the aircraft, they remove Field-Replaceable Units (FRUs) from the serviced aircraft and replace them with spare parts. The removed FRUs are then sent to the depot level for repair. If depot-level testing cannot discover any failure in the FRU's functionality, they report a finding of NFF, NDF, NPF or CND, depending on their organization's type and regulations, and return the FRU to the spares pool.",2002,0, 7257,Robust adaptive sliding-mode fault-tolerant control with L2-gain performance for flexible spacecraft using redundant reaction wheels,"A robust adaptive fault-tolerant control approach for attitude tracking of flexible spacecraft is proposed for use in situations when there are reaction wheel/actuator failures, external disturbances and time-varying inertia-parameter uncertainties. More specifically, a robust controller based on a sliding-mode control scheme is first designed to ensure that the equilibrium point in the closed-loop system exhibits uniform ultimate bounded stability, incorporating constraints on actuator failures, whose failure time instants, patterns and values are unknown, as motivated from a practical spacecraft control application. Then, this controller is redesigned such that an assumption on a bound required of the unknown and time-varying inertia matrix is released by using online estimation for this bound. The prescribed robust performance is also evaluated by an L2-gain less than a given small level for the penalty output signal.
Complete stability and performance analysis are presented, and illustrative simulation results of an application to flexible spacecraft show that highly precise attitude control and vibration suppression are successfully achieved using various scenarios of control effect failures.",2010,0, 7258,Adaptive backstepping fault-tolerant control for flexible spacecraft with bounded unknown disturbances,"A robust adaptive fault-tolerant control approach to attitude tracking of flexible spacecraft is proposed for use in situations when there are actuator (reaction wheel) failures, external disturbances and unknown inertia-parameter uncertainties. The controller is designed based on a backstepping sliding mode control scheme. It ensures that the equilibrium points in the closed-loop system exhibit uniform ultimate bounded stability in the presence of unknown uncertainties and bounded disturbances, incorporating constraints on actuator failures, whose failure time instants, patterns and values are unknown, as motivated from a practical spacecraft control application. It is proved to be effective also in the presence of disturbances due to the flexibility, provided that appropriate robustness conditions on the controller gains are satisfied. Complete stability and performance analysis are presented and illustrative simulation results of an application to flexible spacecraft show that highly precise attitude control and vibration suppression are successfully achieved when considering various scenarios of control effect failures.",2009,0, 7259,Organic Fault-Tolerant Controller for the Walking Robot OSCAR,"A robust, fault-tolerant autonomous walking robot OSCAR and its control architecture are presented. Organic Computing methods, self-organization in particular, are used to make the robot tolerant to faults of parts of itself (defective motors, e.g.) or unforeseen situations it might be faced with in an unstructured environment. Inspired by nature, our robot generates its gaits autonomously. In case of an injury or even the loss of one leg, OSCAR will adapt its gait to only use the remaining legs and walk ahead in a self-organizing way. We describe modifications we applied to a second prototype that allow the robot to optimize its gaits further to be able to walk also in difficult terrain.",2007,0, 7260,An improved technique for reducing false alarms due to soft errors,"A significant fraction of soft errors in modern microprocessors has been reported to never lead to a system failure. Any concurrent error detection scheme that raises an alarm every time a soft error is detected is not well heeded because most of these alarms are false and responding to them will affect system performance negatively. This paper improves the state of the art in detecting and preventing false alarms. Existing techniques are enhanced by a methodology to handle soft errors on address bits. Furthermore, we demonstrate the benefit of false alarm identification in implementing a roll-back recovery system by first calculating the optimum checkpointing interval for a roll-back recovery system and then showing that the optimal number of check-points decreases by orders of magnitude when exclusion techniques are used even if the implementation of the exclusion technique is not perfect",2006,0, 7261,Fault detection in Flexible Assembly Systems using Petri net,"A significant part of the activities in a manufacturing system involves assembly tasks.
Nowadays, these tasks are the object of automation due to the market's increasing demand for quality, productivity and variety of products. Consequently, the automation of assembly systems should consider flexibility to face product diversification, functionalities, delivery times, and volumes involved. However, these systems are vulnerable to faults due to the characteristics of their mechanisms and the complex interaction among their control devices. In this context, the present work is focused on the modeling and design of flexible assembly system control, including the occurrence of faults. The proposed method structures a sequence of steps for the construction of models of assembly processes and their fault detection, based on the theory of discrete event systems and Petri nets. This work uses, in particular, the production flow schema/mark flow graph (PFS/MFG) technique to describe and model flexible assembly system control through a rational and systematic procedure, as well as process data recording based on quantitative techniques for fault detection. This approach is applied to a flexible assembly system installed and in operation to verify the effectiveness of the developed procedure.",2008,0, 7262,A fault section detection method using ZCT when a single phase to ground fault in ungrounded distribution system,"Detection of a single line to ground fault (SLG) in an ungrounded network is very difficult, because the fault current magnitude is very small. It is generated by a charging current between the distribution line and ground. As it is very small, it is not used for fault detection in the case of an SLG. So, an SLG has normally been detected by a switching sequence method, which makes customers experience blackouts. A new fault detection algorithm based on comparison of zero-sequence current and line-to-line voltage phases is proposed. The algorithm uses ZCTs installed in the ungrounded distribution network. The algorithm proposed in this paper has the advantage that it can detect the faulted phase and distinguish the faulted section as well. The simulation tests of the proposed algorithm were performed using Matlab Simulink and the results are presented in the paper.",2010,0, 7263,Using Maintainability Based Risk Assessment and Severity Analysis in Prioritizing Corrective Maintenance Tasks,"A software product spends more than 65% of its lifecycle in maintenance. Software systems with good maintainability can be easily modified to fix faults. We define maintainability-based risk as a product of two factors: the probability of performing maintenance tasks and the impact of performing these tasks. In this paper, we present a methodology for assessing maintainability-based risk in the context of corrective maintenance. The proposed methodology depends on the architectural artifacts and their evolution through the life cycle of the system. In order to prioritize corrective maintenance tasks, we combine components' maintainability-based risk with the severity of a failure that may happen as a result of an unfixed fault. We illustrate the methodology on a case study using UML models.",2007,0, 7264,Power Factor Correction and Compensation of Unbalanced Loads by PWM DC/AC Converters,"A solution developed for power quality conditioning (PQC) applied in a distributed generation system is presented. The method exploits the capabilities of the DC/AC converter of a system that has been developed to generate electric power by utilizing alternative, renewable and waste energies.
The power quality conditioning is based on the application of space vector theory, using an algorithm that is easy to implement. The paper describes a control structure that is capable of both power factor correction and elimination of negative sequence components in the mains current. The theoretical analysis is confirmed by computer simulation results",2006,0, 7265,Study for the Scheme to Shut Valves and Topology Recomposition under Fault Conditions to Heat-Supply Network,"A valve-shutting scheme for heat-supply network failures was brought forward, and a network topology recomposition system was suggested. Based on the normal-condition topology structure, this system can automatically recompose the fault-condition topology structure according to the valve-shutting scheme. The validity and effectiveness of this method were verified by a real case, and the method provides a feasible way to perform hydraulic analysis of ring-shaped and asymmetrical heat-supply networks under fault conditions.",2010,0, 7266,Laser vision correction with an all-solid-state UV laser,"A stable all-solid-state laser system emitting at 210 nm is used for refractive surgery of human eyes. We present data that imply smooth ablation profiles, exceptionally fast wound healing and the potential for customized ablation.",2003,0, 7267,On structural vs. functional testing for delay faults,"A structurally testable delay fault might become untestable in the functional mode of the circuit due to logic or timing constraints or both. Experimental data suggest that there could be a large difference in the number of structurally and functionally testable delay faults. However, this difference is usually calculated based only on logic constraints. It is unclear how this difference would change if timing constraints were taken into consideration, especially when using statistical timing models. In this paper, our goal is to better understand how structural and functional test strategies might affect the delay test quality and, consequently, change our perception of the delay test results.",2003,0, 7268,Fault prediction technique based on hybrid verification,"Fault prediction is a new area of safety technique research in the petrochemical industry, and a new fault prediction technique based on MPT software is proposed. This method, which makes use of the verification idea of hybrid systems, converts fault prediction into a new formulation through the MPT software, and a maintenance plan is then worked out. It can ensure that the devices run at maximum efficiency and that cost waste is reduced if it is integrated with the traditional maintenance plan. This approach is simple and reliable, and it is a new method of fault prediction.",2010,0, 7269,Noise Cancellation Signal Processing Method and Computer System for Improved Real-Time Electrocardiogram Artifact Correction During MRI Data Acquisition,"A system was developed for real-time electrocardiogram (ECG) analysis and artifact correction during magnetic resonance (MR) scanning, to improve patient monitoring and triggering of MR data acquisitions. Based on the assumption that artifact production by magnetic field gradient switching represents a linear time invariant process, a noise cancellation (NC) method is applied to ECG artifact linear prediction. This linear prediction is performed using a digital finite impulse response (FIR) matrix that is computed employing ECG and gradient waveforms recorded during a training scan.
The FIR filters are used during further scanning to predict artifacts by convolution of the gradient waveforms. Subtracting the artifacts from the raw ECG signal produces the correction with minimal delay. Validation of the system was performed both off-line, using prerecorded signals, and under actual examination conditions. The method is implemented using a specially designed Signal Analyzer and Event Controller (SAEC) computer and electronics. Real-time operation was demonstrated at 1 kHz with a delay of only 1 ms introduced by the processing. The system opens the possibility of automatic monitoring algorithms for electrophysiological signals in the MR environment",2007,0, 7270,An Optimized Simulation-Based Fault Injection and Test Vector Generation Using VHDL to Calculate Fault Coverage,"A technique is described for the automatic insertion of fault models into VHDL gate models, using a specific algorithm to calculate fault coverage. This procedure does not require any modification to the structural description of a circuit using these models. Additional optimized algorithms are added to provide a better calculation of the fault coverage of a VHDL-based combinational logic circuit.",2009,0, 7271,An improved high performance three phase AC-DC boost converter with input power factor correction,"A three-phase 3-level unidirectional AC/DC converter is proposed to achieve almost unity power factor and reduction of harmonic distortion. A power factor corrector using the hysteresis current control technique is presented to improve the power quality at the rectifier side. A high-power-factor rectifier based on a neutral point switch clamped scheme is presented. A control scheme for the proposed rectifier is propounded to draw a sinusoidal line current with nearly unity power factor, achieve a balanced neutral point voltage and regulate the DC bus voltage. A hysteresis current control scheme is used to track the line current in phase with the mains voltage. The line current command is derived from a voltage controller and a phase-locked loop circuit. A capacitor voltage compensator is employed in the proposed control algorithm to achieve the balanced neutral point voltage. The effectiveness and validity of the proposed control strategy are verified through computer simulation results. The simulation result reveals that the proposed control technique offers considerable improvement in power factor and reduction in total harmonic distortion.",2007,0, 7272,Errors in Attacks on Authentication Protocols,A tool for automated validation of attacks on authentication protocols has been used to find several flaws and ambiguities in the list of attacks described in the well-known report by Clark and Jacob. In this paper the errors are presented and classified. Corrected descriptions of the incorrect attacks are given for the attacks that can be easily repaired,2007,0, 7273,Design and implementation of fault-tolerant transactional agents for manipulating distributed objects,"A transactional agent is a mobile agent which manipulates objects distributed in computers. A transactional agent is composed of routing and manipulation subagents. The way to move among computers is decided by the routing agent. Objects in each computer are manipulated by a manipulation agent. After visiting computers, a routing agent makes a decision on commitment by using the commitment condition.
In addition, objects obtained from a computer by the manipulation agent have to be delivered to other computers where the transactional agent is performed. A schedule to visit computers is made from the input-output relation of manipulation agents. We discuss a model of the transactional agent and its implementation on database servers, and evaluate the transactional agents. We evaluate the transactional agent model in terms of access time compared with the traditional client-server model.",2005,0, 7274,A novel fault-dependent-time-settings algorithm for overcurrent relays,"A typical structure of electric power distribution networks is radial or a normally open-loop structure with a single supply path between the high-voltage supply substation and the end-consumers. The selectivity of a protection scheme for the radial network is generally achieved through time-coordination of different protection devices (relays, reclosers and fuses). However, future distribution networks may become more meshed (normally closed-loop) in order to integrate distributed generators (DGs) and more active demand-side response. This will, in turn, require novel protection schemes since the current time-coordinated protection, which assumes a radial network structure, may no longer be effective. Furthermore, it is also recognized that today's practice of automatically disconnecting DGs when a fault occurs in their vicinity may cause unnecessary generation shortages. It is, instead, desired to adjust protection so that a DG unit remains connected to the system when nearby faults occur as long as there are no related safety problems. This paper introduces a novel fault-dependent time settings algorithm for overcurrent relays which is capable of overcoming these problems. The implementation of the proposed algorithm on the existing protection scheme is enabled by using communications among the relays. The proposed algorithm ensures a reduced relay tripping time in the primary protection zone by enhancing the selectivity of the existing protection scheme. Consequently, it becomes possible to operate distributed generators during times when nearby faults occur. While the use of the proposed algorithm is illustrated in the distribution network, the same algorithm is also applicable to transmission networks.",2009,0, 7275,Using neural networks for fault diagnosis,"A universal fault instance model, which aims to solve problems existing in the present technology of fault diagnosis, such as the lack of universality, the difficulty in the use of real time systems and the dilemma of stability and plasticity, is proposed. An experiment demonstrates that the FANNC used can successfully settle the problems mentioned above by its effective incremental ability and processing new input patterns via one-round learning",2000,0, 7276,N4ITK: Improved N3 Bias Correction,"A variant of the popular nonparametric nonuniform intensity normalization (N3) algorithm is proposed for bias field correction. Given the superb performance of N3 and its public availability, it has been the subject of several evaluation studies. These studies have demonstrated the importance of certain parameters associated with the B-spline least-squares fitting. We propose the substitution of a recently developed fast and robust B-spline approximation routine and a modified hierarchical optimization scheme for improved bias field correction over the original N3 algorithm.
Similar to the N3 algorithm, we also make the source code, testing, and technical documentation of our contribution, which we denote as ""N4ITK,"" available to the public through the Insight Toolkit of the National Institutes of Health. Performance assessment is demonstrated using simulated data from the publicly available Brainweb database, hyperpolarized 3He lung image data, and 9.4T postmortem hippocampus data.",2010,0, 7277,Virtual sensor for fault detection and isolation in flight control systems - fuzzy modeling approach,"A virtual sensor for normal acceleration has been developed and implemented in the flight control system of a small commercial aircraft. The inputs of the virtual sensor are the consolidated outputs of dissimilar sensor signals. The virtual sensor is a fuzzy model of the Takagi-Sugeno type and it has been identified from simulated data, using a detailed, realistic Matlab/Simulink™ model used by the aircraft manufacturer. This virtual sensor can be applied to identify a failed sensor in the case that only two real sensors are available and even to detect a failure of the last available sensor",2000,0, 7278,An approach to regulating the DC-link voltage of a voltage-source BTB system during power line faults,"A voltage-source BTB (back-to-back) system for the purpose of achieving power flow control and/or line-frequency change in transmission systems has attractive features of reliable and continuous operation even during power line faults. However, an overvoltage appearing across the DC link during the faults should be limited to as low as possible because it does affect the power device ratings. This paper proposes a DC-link voltage controller for effectively suppressing the overvoltage during power line faults. This controller is characterized by compensating for a time delay inherent in each current controller, and for a power flow imbalance occurring during power line faults. The validity of the proposed controller is confirmed by theory and computer simulation.",2003,0, 7279,Repetitive waveform correction technique for CVCF-SPWM inverters,"A way to synthesize a repetitive controller directly with SPWM inverters is presented. It combines the advantages of repetitive control (low THD with nonlinear loads) and the SPWM technique (easy implementation, low cost, reliable operation, etc). To cancel out the high resonant peak of the SPWM inverter, the repetitive controller incorporates a notch filter. Design of the controller is discussed in detail, and a case design is presented. Low THD (1.4%) is achieved with a nonlinear load, which proves the method to be a cost-effective upgrading technique for SPWM inverter products",2000,0, 7280,Fast Fault-Tolerant Time Synchronization for Wireless Sensor Networks,"A wireless sensor network (WSN) typically consists of a large number of small-sized devices that have very low computational capability, small amounts of memory and the need to conserve energy as much as possible (most commonly by entering suspended mode for extended periods of time). Previous approaches for WSN time synchronization do not satisfactorily address all of the requirements of WSN environments.
Thus, this paper proposes a new fault-tolerant WSN time synchronization algorithm that is extremely fast (when compared to previous algorithms), achieves a guaranteed level of time synchronization for all non-faulty nodes, can accommodate nodes that enter suspended mode and then wake up, utilizes very little communication and computation resources (thereby leaving those resources available for use by other applications), operates in a completely decentralized manner and tolerates up to f faulty nodes. The efficacy of the proposed algorithm is shown using analysis and experimental results.",2008,0, 7281,Prediction Error Prioritizing Strategy for Fast Normalized Partial Distortion Motion Estimation Algorithm,"A prediction error prioritizing-based normalized partial distortion search algorithm for fast motion estimation is proposed in this letter. The distortion behavior of each pixel in a macroblock is first analyzed to point out the priority/order of the sum of absolute difference calculation. Afterward, the normalized partial distortion search algorithm is applied for half-stop of the distortion calculation. In addition, a dynamic search range decision algorithm is adopted for automatically changing the size of the search range to further increase the motion estimation speed. The computational complexity can be reduced significantly through the proposed algorithm, at the cost of a negligible PSNR degradation.",2010,0, 7282,Quasi-Static Analysis of Defected Ground Structure,"A quasi-static equivalent circuit model of a dumbbell-shaped defected ground structure (DGS) is developed. The equivalent circuit model is derived from the equivalent inductance and capacitance developed due to the perturbed return current path on the ground and the narrow gap, respectively. The theory is validated against the commercial full-wave solver CST Microwave Studio. Finally, the calculated results are compared with the measured results. Good agreement between the theory, the commercially available numerical analyses and the experimental results validates the developed theoretical model.",2005,0, 7283,A real-time hardware fault detector using an artificial neural network for distance protection,"A real-time fault detector for the distance protection application, based on artificial neural networks, is described. Previous researchers in this field report use of complex filters and artificial neural networks with large structures or long training times. An optimum neural network structure with a short training time is presented. Hardware implementation of the neural network is addressed with a view to improving the performance in terms of speed of operation. With a smaller network structure, the hardware complexity of implementation reduces considerably. Two preprocessors are described for the distance protection application which enhance the training performance of the artificial neural network many fold. The preprocessors also enable real-time functioning of the artificial neural network for the distance protection application. Design of an object-oriented software simulator, which was developed to identify the hardware complexity of implementation, and the results of the analysis are discussed. The hardware implementation aspects of the preprocessors and of the neural network are briefly discussed",2001,0, 7284,Redundancy Design Strategy of DCS Based on Component's Fault Model,"A redundancy design strategy based on the component fault model of a device is proposed for distributed control systems (DCS).
This paper analyzes the component fault models of typical devices based on the characteristics of DCS, sorts component faults according to whether they are individual and diagnosable online between the faults of redundant devices, and puts forward the invalidation-rate equation of the redundant system. It then proposes a redundancy design method and a reliability verification method based on the analysis of component fault models in redundant devices. Finally, experimental data from field applications were collected, and these data verify the conclusions above.",2009,0, 7285,RFID-based information system for preventing medical errors,"A report by the Institute of Medicine of the National Academy of Sciences estimates that as many as 98,000 people die in U.S. hospitals each year because of medical errors. In this project, we propose an innovative IT-based approach to prevent errors in various medical processes by utilizing advances in radio frequency identification (RFID) and wireless communications. The goal of the study is to perform an in-depth study of existing RFID technologies for patient care in medical facilities. In the paper, a new system architecture that integrates various wireless technologies such as RFID and Wi-Fi was proposed. In the pilot study, we primarily focused on the limitations and shortcomings of passive RFID technologies in medical settings. Our experimental results show the reliability challenge in the current passive EPC Gen 2 RFID systems for use in a dynamic medical environment.",2009,0, 7286,Use of fault tree analysis to improve residential gateway testing,"A residential gateway, the heart of the strategy of most Telcos, is a centralized intelligent device between the operator's access network and the home's network. It terminates all external access networks and enables residential services to be delivered to the consumer. Besides offering a plethora of useful services, growth in the market depends upon the gateway's reputation for resilience (availability, reliability and security). This calls for a near-zero-fault design, and efficient testing should be carried out before its launch into the market. This paper deals with the analysis of failures, both from test and field data, aiming to increase the efficiency of laboratory testing. Using fault tree analysis, we study the faults that have passed through the testing phase and created failures in the customer premises. With the help of defined specifications, we have identified the zones in which testing in the laboratory needs to be improved.",2007,0, 7287,Noise identification and fault diagnosis for the new products of the automobile gearbox,"A noise identification and fault diagnosis system for new automobile gearbox products is introduced. The framework of the developed software is described, which includes function modules such as data acquisition, feature extraction, time-frequency transform, order analysis, learning and training, and so on. The prototype system has been partially put into practice in an automobile gearbox manufacturing company.",2009,0, 7288,Robust Counting Via Counter Braids: An Error-Resilient Network Measurement Architecture,"A novel counter architecture, called counter braids, has recently been proposed for accurate per-flow measurement on high-speed links. Inspired by sparse random graph codes, counter braids solves two central problems of per-flow measurement: one-to-one flow-to-counter association and large amount of unused counter space.
It eliminates the one-to-one association by randomly hashing a flow label to multiple counters and minimizes counter space by incrementally compressing counts as they accumulate. The random hash values are reproduced offline from a list of flow labels, with which flow sizes are decoded using a fast message passing algorithm. The decoding of counter braids introduces the problem of collecting flow labels active in a measurement epoch. An exact solution to this problem is expensive. This paper complements the previous proposal with an approximate flow label collection scheme and a novel error-resilient decoder that decodes despite missing flow labels. The approximate flow label collection detects new flows with variable-length signature counting Bloom filters in SRAM, and stores flow labels in high-density DRAM. It provides a good trade-off between space and accuracy: more than 99 percent of the flows are captured with very little SRAM space. The decoding challenge posed by missing flow labels calls for a new algorithm as the original message passing decoder becomes error-prone. In terms of sparse random graph codes, the problem is equivalent to decoding with graph deficiency, a scenario beyond coding theory. The error-resilient decoder employs a new message passing algorithm that recovers most flow sizes exactly despite graph deficiency. Together, our solution achieves a 10-fold reduction in SRAM space compared to hash-table based implementations, as demonstrated with Internet trace evaluations.",2009,0, 7289,A novel fault finding and identification strategy using pseudorandom binary sequences for multicore power cable troubleshooting,"A novel fault finding correlation methodology, using pseudorandom binary sequences (PRBS), is presented as an alternative to time domain reflectometry (TDR) for multi-core power cable fault location and identification. The fulcrum of this method is the cross correlation (CCR) of the fault echo response with the input pseudonoise (pN) test sequence which results in a unique signature for identification of the fault type, if any, or load termination present as well as its distance from the point of test stimulus insertion. This troubleshooting procedure can be used in a number of key industrial scenarios embracing overhead power lines and underground cables in inaccessible locations. A key feature is the potential usage of pseudonoise sequences for long distance fault finding over several cycles at low amplitude levels online to reject normal mains voltage, communications signal traffic and extraneous noise pickup for the purpose of multiple fault coverage, resolution and identification. In this paper a single phase transmission line model is presented with PRBS stimulus injection under known load terminations to mimic fault conditions encountered in practice for proof of concept. Simulation results, for known resistive fault terminations, with measured CCR response demonstrate the effectiveness of the PRBS test method in fault type identification and location. Key experimental test results are also presented for pN fault finding using a four core SWA copper power cable, under laboratory controlled conditions, which substantiates the accuracy of the PRBS diagnostic CCR method of fault recognition and location using a range of resistive fault terminations.
The accuracy of the method is further validated through theoretical calculation via estimated fault reflection coefficients, voltage standing wave ratios and comparison with fault resistance terminations known a priori, and link distances in power line experimental testing.",2009,0, 7290,Error Analysis of L1-Regularized Support Vector Machine for Beta-Mixing Sequence,"In this paper, the extension of the performance of the l1-regularized support vector machine (l1-svm) from the classical independent and identically distributed input sequence to the stationary β-mixing input sequence is considered. We establish the bound of the generalization error of l1-svm for the stationary β-mixing sequence. It is interesting that our result is available even when the size of the dictionary considered is infinite, which is different from most previous results for l1-regularized methods. From the established bound of the generalization error of l1-svm, we develop a sparsity oracle inequality of l1-svm for the β-mixing input sequence. Following the sparsity oracle inequality, the sufficient condition for the consistency of l1-svm with a stationary β-mixing input sequence can be obtained.",2010,0, 7291,Avoiding Defects,Accepting some of the testing team's responsibility by writing your own tests lets you trade the time you spend fixing defects for less time spent avoiding them in the first place.,2007,0, 7292,Statistical analysis of time series data on the number of faults detected by software testing,"With the progress of software process improvement, time series data on the number of faults detected by software testing are collected extensively. In this paper, we perform statistical analyses of relationships between the time series data and the field quality of software products. At first, we apply the rank correlation coefficient to the time series data collected from actual software testing in a certain company, and classify these data into four types of trends: strict increasing, almost increasing, almost decreasing, and strict decreasing. We then investigate, for each type of trend, the field quality of software products developed by the corresponding software projects. As a result of the statistical analyses, we showed that software projects having a trend of almost or strictly decreasing numbers of faults detected by software testing could produce software products with high quality.",2002,0, 7293,Analysis - The terrors and the errors [IT Change Management],"According to last month's Sophos 'Security Threat Report', concern is increasing that computer applications running critical national infrastructures are vulnerable to malevolent hacks. Such hacks could in theory switch control of power and gas supplies, say, to the keyboards of hostile entities, enabling them to wreak damage and disruption. Similar threats face crucial financial computer platforms that underpin national economies, and even emergency services communication channels.",2010,0, 7294,A Combined Method Based on Expert System and BP Neural Network for UAV Systems Fault Diagnosis,"According to the complexity, variety and nonlinear mode of UAV system faults, a combined method based on an expert system and a BP neural network is proposed for the diagnosis. Besides the BP neural network, the design of the expert system based on the BP neural network and its software implementation are depicted respectively.
This method can overcome the insufficiency of traditional expert systems, such as the lack of an effective self-learning and self-adaptation mechanism. Through a diagnosis example of the telemetry & telecontrol system of a UAV, the results show that the new expert system can diagnose UAV systems effectively and has a good application prospect in the field of fault diagnosis.",2010,0, 7295,Distributed Application Service Fault Management Using Bayesian Network,"According to the features of distributed application service fault management, we propose a hybrid fault propagation model for fault detection, which includes a multi-layer FPM model and a two-layer FPM model. And the diagnosis process is divided into two procedures: application service fault diagnosis and network service fault diagnosis. Because the observation of faults is uncertain, we map the fault diagnosis model to a Bayesian network to carry out uncertainty reasoning. To improve the inference speed, we augment the bucket elimination algorithm with minimum-deficiency ordering. In addition, according to the sparse nature of the multi-layer FPM model graph, we use the ancestral set to simplify the graph to improve the inference algorithm. As experiments show, the optimized bucket elimination is considerably faster.",2009,0, 7296,Bad Words: Finding Faults in Spirit's Syslogs,"Accurate fault detection is a key element of resilient computing. Syslogs provide key information regarding faults, and are found on nearly all computing systems. Discovering new fault types requires expert human effort, however, as no previous algorithm has been shown to localize faults in time and space with an operationally acceptable false positive rate. We present experiments on three weeks of syslogs from Sandia's 512-node ""Spirit"" Linux cluster, showing one algorithm that localizes 50% of faults with 75% precision, corresponding to an excellent false positive rate of 0.05%. The salient characteristics of this algorithm are (1) calculation of nodewise information entropy, and (2) encoding of word position. The key observation is that similar computers correctly executing similar work should produce similar logs.",2008,0, 7297,Fault-tolerant vibration control in a networked and embedded rocket fairing system,"Active vibration control using piezoelectric actuators in a networked and embedded environment has been widely applied to solve the rocket fairing vibration problem. However, actuator failures may lead to performance deterioration or system dysfunction. To guarantee the desired system performance, the remaining actuators should be able to coordinate with each other to compensate for the damaging effects caused by the failed actuator in a timely manner. Further, in the networked control environment, timing issues such as sampling jitter and network-induced delay should be considered in the controller design. In this study, a timing compensation approach is implemented in an adaptive actuator failure compensation controller to maintain the fairing system performance by also considering the detrimental effects from real-time constraints. In addition, time-delay compensation in the networked control system is discussed, which is able to reduce damaging effects of network-induced delays.",2004,0, 7298,A Fault Detection Service for Cluster-Based Ad Hoc Network,"Ad hoc networks have become widely used in many specific fields.
In the case of frequent node failures and message losses, and in order to expedite the development of applications, we propose a middleware service for failure detection to support these applications. Based on hierarchical and gossip-based detection techniques, a fault detection service for cluster-based ad hoc networks integrating the proposed adaptive fault detector is proposed. Simulations show that the proposed detection service achieves favorable system performance.",2010,0, 7299,The improvement on the measuring precision of detecting fault composite insulators by using electric field mapping,"According to the statistics of fault composite insulators, most of the defects take place at the high voltage (HV) end of the insulators. At present, the minimum defect length detected possibly at the HV end of the insulators is about 7 cm by using an electric field mapping device, which could not indicate the defects located between the last shed and the HV electrode. Therefore, it is important to improve the measuring precision so that it is suitable for indicating defects less than 7 cm when inspecting fault composite insulators based on an electric field mapping device. In order to enhance the measuring precision of the device, we analyzed the electric field distribution along an insulator by using the commercial software ANSYS. We found that a 5 cm defect can be found if we collect two to three electric field data points between the two sheds. Therefore, we added a photoelectric cell array to trigger the device for collecting more data between the two sheds. The tests were conducted in our laboratory by using our new device. The results from our experiments show that the sensitivity of detecting the defects is increased and our new device can indicate defects less than 5 cm at the HV end without grading rings.",2005,0, 7300,Identification of parameter errors,"Accuracy of the network parameters has significant impact on the performance of almost all energy management system applications. The network model is built based on these parameters, which are stored in the company's data base and are not typically inspected unless they are flagged by the user or a software package. Hence, errors associated with network parameters are not easily and commonly detected, leading to varying degrees of inaccuracies in the results of network applications that rely on the network data base. This paper reviews some recent methods that are proposed for automatic detection, identification and correction of parameter errors in network data bases.",2010,0, 7301,Correction of Attitude Fluctuation of Terra Spacecraft Using ASTER/SWIR Imagery With Parallax Observation,"Accurate attitude estimation of spacecraft is the main requirement to provide good geometric performance of remote-sensing imagery. The Advanced Spaceborne Thermal Emission and Reflection Radiometer/short-wave-infrared subsystem has six linear-array sensors arranged in parallel, and each line scans the same ground target with a time interval of 356.238 ms between neighboring bands. The registration performance between bands becomes worse when attitude fluctuation occurs during a time lag between observations. Since the time resolution of the line scan is higher than that of the attitude information provided from the satellite, attitude data are estimated with a high frequency. We succeeded in correcting the image-registration error using the revised attitude information.
As a result, the image distortion of 0.2 pixels caused by spacecraft-attitude jitter is reduced to less than 0.08 pixels, showing that band-to-band registration errors of a sensor with parallax observation can be used to improve the image distortion caused by attitude fluctuation.",2008,0, 7302,Fast and efficient phase conflict detection and correction in standard-cell layouts,"Alternating-aperture phase shift masking (AAPSM), a form of strong resolution enhancement technology (RET), is used to image critical features on the polysilicon layer at smaller technology nodes. This technology imposes additional constraints on the layouts beyond traditional design rules. Of particular note is the requirement that all critical features be flanked by opposite-phase shifters, while the shifters obey minimum width and spacing requirements. A layout is called phase-assignable if it satisfies this requirement. Phase conflicts between shifters have to be removed to enable the use of AAPSM for layouts that are not phase-assignable. Previous work has sought to detect a suitable set of phase conflicts to be removed, as well as correct them. This paper has two key contributions: (1) a new computationally efficient approach to detect a minimal set of phase conflicts, which when corrected produces a phase-assignable layout; (2) a novel layout modification scheme for correcting these phase conflicts in standard-cell blocks. Unlike previous formulations of this problem, the proposed solution for the conflict detection problem does not frame it as a graph bipartization problem. Instead, a simpler and more computationally efficient reduction is proposed. This simplification greatly improves the runtime, while maintaining the same improvements in the quality of results. An average runtime speedup of 5.9 is achieved using the new flow. A new layout modification scheme for correcting phase conflicts in large standard-cell blocks is also proposed. The proposed layout modification scheme can handle all phase conflicts in large standard-cell blocks with small increases in area. Our experiments show that the percentage area increase for making typical standard-cell blocks phase-assignable ranges from 3.4-9.1%.",2005,0, 7303,Simulation-based Assessment of the Impact of Contrast Medium on CT-based Attenuation Correction in PET,"Although diagnostic quality CT relies on the administration of contrast agents to allow improved lesion detectability, the misclassification of contrast medium as high density bone during the CT-based attenuation correction (CTAC) procedure is known to overestimate the attenuation coefficients, thus biasing activity concentration estimates of PET data. In this study, the influence of contrast medium on CTAC was investigated through Monte Carlo (MC) simulations and experimental phantom studies. Our recently developed MC x-ray CT simulator and the Eidolon 3D PET MC simulator were used to generate realigned PET/CT data sets. The influence of contrast medium was studied by simulation of a cylindrical phantom containing different concentrations of contrast medium. Moreover, an experimental study using an anthropomorphic striatal phantom was conducted for quantitative evaluation of errors arising from the presence of contrast medium by calculating the apparent recovery coefficient (ARC).
The ARC was 190.7% for a cylindrical volume of interest located in the main chamber of the striatal phantom containing contrast medium corresponding to 2000 Hounsfield units, whereas the ARC was overestimated by less than 5% for the main chamber and 2% for the left/right putamen and caudate nucleus compared to the absence of contrast medium. It was concluded that contrast-enhanced CT images may create considerable artifacts during CTAC in regions containing high concentrations of contrast medium.",2006,0, 7304,Accurate Measurement of Bone Mineral Density Using Clinical CT Imaging With Single Energy Beam Spectral Intensity Correction,"Although dual-energy X-ray absorptiometry (DXA) offers an effective measurement of bone mineral density, it only provides a 2-D projected measurement of the bone mineral density. Clinical computed tomography (CT) imaging will have to be employed for measurement of 3-D bone mineral density. The typical dual energy process requires precise measurement of the beam spectral intensity at the 80 kVp and 120 kVp settings. However, this is not used clinically because of the extra radiation dosage and sophisticated hardware setup. We propose an accurate and fast approach to measure bone material properties with single energy scans. Beam hardening artifacts are eliminated by incorporating the polychromatic characteristics of the X-ray beam into the reconstruction process. Bone mineral measurement from single energy CT correction is compared with that of dual energy correction and the commonly used DXA. Experimental results show that single energy correction is comparable with dual energy CT correction in eliminating beam hardening artifacts and producing an accurate measurement of bone mineral density. We can then estimate Young's modulus, yield stress, yield strain and ultimate tensile stress of the bone, which are important data for patient-specific therapy planning.",2010,0, 7305,RI2N/DRV: Multi-link ethernet for high-bandwidth and fault-tolerant network on PC clusters,"Although recent high-end interconnection network devices and switches provide a high performance-to-cost ratio, most small to medium sized PC clusters are still built on the commodity network, Ethernet. To enhance performance on commonly used Gigabit Ethernet networks, link aggregation or binding technology is used. Currently, Linux kernels are equipped with software named Linux Channel Bonding (LCB), which is based on IEEE 802.3ad link aggregation technology. However, standard LCB has the disadvantage of a mismatch with the TCP protocol; consequently, both large latency and bandwidth instability can occur. A fault-tolerance feature is supported by LCB, but its usability is not sufficient. We developed a new implementation similar to LCB named Redundant Interconnection with Inexpensive Network with Driver (RI2N/DRV) for use on Gigabit Ethernet. RI2N/DRV has a complete software stack that is very suitable for TCP, an upper layer protocol. Our algorithm suppresses unnecessary ACK packets and retransmission of packets, even under imbalanced network traffic and link failures on multiple links. It provides both high-bandwidth and fault-tolerant communication on multi-link Gigabit Ethernet.
We confirmed that this system improves the performance and reliability of the network, and our system can be applied to ordinary UNIX services such as the network file system (NFS), without any modification of other modules.",2009,0, 7306,Automatic fault location and isolation system for the electric traction overhead lines,"Amtrak is utilizing the electrification of the New Haven to Boston section at 25 kV, 60 Hz AC for all electric service between Washington DC and Boston. Efficient train operation requires uninterrupted availability of power supply to the trains. This, in turn, necessitates quick isolation of the faulted section of the overhead contact system and restoration of power in the healthy sections. An overview of the principles of the automatic fault location and isolation system, which is installed, though not yet placed in service, to automatically isolate the faulted sections of the overhead contact system with minimal time delay, is presented.",2002,0, 7307,Genome-Wide Search for Splicing Defects Associated with Amyotrophic Lateral Sclerosis (ALS),"Amyotrophic lateral sclerosis (ALS) is a progressive neurodegenerative disease caused by the degeneration of motor neurons. Although the cause of ALS is unknown, mutations in the gene that produces the SOD1 enzyme are associated with some cases of familial ALS. SOD1 is a powerful antioxidant that protects the body from damage caused by superoxide, a toxic free radical. It has been proposed that defects in splicing of some mRNAs, induced by oxidative stress, can play a role in ALS pathogenesis. Alterations of splicing patterns have also been observed in ALS patients and in ALS murine models, suggesting that alterations in splicing events can contribute to ALS progression. Using Exon 1.0 ST GeneChips, which allow the definition of alternative splicing events (ASEs), the SH-SY5Y neuroblastoma cell line has been profiled after treatment with paraquat, which by inducing oxidative stress alters the patterns of alternative splicing. Furthermore, the same cell line stably transfected with wt and ALS mutant SOD has also been profiled. The integration of the two ALS models efficiently moderates the ASE false discovery rate, one of the most critical issues in high-throughput ASE detection. This approach allowed the identification of a total of 14 splicing events affecting respectively both internal coding exons and the 5' UTR of known gene isoforms.",2009,0, 7308,Adaptive unequal error protection for subband image coding,"An adaptive subband image coding system is proposed to investigate the performance offered by implementing unequal error protection among the subbands and within the subbands. The proposed system uses DPCM and PCM codecs for source encoding the individual subbands, and a family of variable rate channel codes for forward error correction. A low resolution family of trellis coded modulation codes and a high resolution family of punctured convolutional codes are considered. Under the constraints of a fixed information rate, and a fixed transmission bandwidth, for any given image, the proposed system adaptively selects the best combination of channel source coding rates according to the current channel condition. Simulations are performed on the AWGN channel, and comparisons are made with corresponding systems where the source coder is optimized for a noiseless transmission (classical optimization) and a single channel code is selected.
Our proposed joint source-channel systems greatly outperform any of the nonadaptive conventional nonjoint systems that use only a single channel code at all channel SNRs, extending the useful channel SNR range by an amount that depends on the code family. A nonjoint adaptive equal error protection system is considered which uses the classically optimized source codec, but chooses the best single channel code for the whole transmission according to the channel SNR. Our systems outperform the corresponding adaptive equal error protection system by at most 2 dB in PSNR; and more importantly, show a greater robustness to channel mismatch. It is found that most of the performance gain of the proposed systems is obtained from implementation of unequal error protection among the subbands, with at most 0.7 dB in PSNR additional gain achieved by also applying unequal error protection within the subbands. We use and improve a known modeling technique which enables the system to configure itself optimally for the transmission of an arbitrary image, by only measuring the mean of the lowest frequency subband and the variances of all the subbands",2000,0, 7309,Protection of DVR against Short Circuit Faults at the Load Side,"An additional control scheme has been proposed in this paper for a dynamic voltage restorer (DVR), to protect it against load short circuit conditions. When overcurrents occur in the distribution system, under the proposed scheme the DVR reverses its injected voltage polarity so as to minimise the current flow. The detection method is based on impedance measurement feedback. The advantage of the scheme is that no additional overcurrent device or protection is required for the DVR and it is easy to implement. The proposed control scheme has been validated through simulation.",2006,0, 7310,Fault tolerance with real-time Java,"After having drawn up a state of the art on the theoretical feasibility of a system of periodic tasks scheduled by a preemptive algorithm at fixed priorities, we show in this article that temporal faults can nevertheless occur within a theoretically feasible system, that these faults can lead to a failure of the system, and that we can use the data calculated during admission control to install fault detectors and to define a tolerance factor. We then show the results obtained on a system of periodic tasks coded with real-time Java and executed with the virtual machine jRate. These results show that the installation of the detectors and the tolerance of the faults improves the behavior of the system in the presence of faults",2006,0, 7311,Cassini spacecraft's in-flight Fault Protection redesign for unexpected regulator malfunction,"After the launch of the Cassini ""Mission-to-Saturn"" Spacecraft, the volume of subsequent mission design modifications was expected to be minimal due to the rigorous testing and verification of the Flight Hardware and Flight Software. For known areas of risk where faults could potentially occur, component redundancy and/or autonomous Fault Protection (FP) routines were implemented to ensure that the integrity of the mission was maintained. The goal of Cassini's FP strategy is to ensure that no credible Single Point Failure (SPF) prevents attainment of mission objectives or results in a significantly degraded mission, with the exception of the class of faults which are exempted due to low probability of occurrence.
In the case of Cassini's Propulsion Module Subsystem (PMS) design, a waiver was approved prior to launch for failure of the prime regulator to properly close; a potentially mission-catastrophic single point failure. However, one month after Cassini's launch, when the fuel & oxidizer tanks were pressurized for the first time, the prime regulator was determined to be leaking at a rate significant enough to require a considerable change in Main Engine (ME) burn strategy for the remainder of the mission. Crucial mission events such as the Saturn Orbit Insertion (SOI) burn task, which required a characterization exercise for the PMS system 30 days before the maneuver, were now impossible to achieve. This paper details the steps that were necessary to support the unexpected malfunction of the prime regulator, the introduction of new failure modes which required new FP design changes consisting of new/modified under-pressure & over-pressure algorithms; all of which had to be accomplished during the operation phase of the spacecraft, as a result of a presumed low probability, waived failure which occurred after launch.",2010,0, 7312,Research on distributed system of measuring error bearing online based on CAN,"Aiming at the disadvantages of error measurement in bearing corporations, a distributed system for measuring bearing error online is achieved. It contains one superior controller and many sub-level computers connected by a controller area network (CAN for short). The sub-level computers are used to collect and display error data. The superior controller is used for error data analysis and database management. Finally, the superior controller connects with the sub-level computers by using network communication technology and serial communication technology, so an available design of a monitoring and controlling system is formed. It is mainly used to examine quality during the process of producing bearing parts. Moreover, it also has a quality management function that is based on the statistics and analysis of the examination results. The actual application has shown the usefulness and the practical value of this system",2006,0, 7313,A fault detection and diagnosis system based on input and output residual generation scheme for a CSTR benchmark process,"The aim of this study is to propose a fault detection and diagnosis (FDD) algorithm based on input and output residuals that considers both sensor and actuator faults separately. The existing methods which have the capability of fault diagnosis and fault magnitude estimation suffer from great computational complexity, so they would not be suitable for real-time applications. The method proposed in this paper has the advantage of simple structure and straightforward computations, but at the cost of losing some precision. The introduced approach incorporates an auxiliary PI controller in a feedback configuration with an extended Kalman filter (EKF) algorithm to constitute an actuator input-output residual generator (AIORG) unit. Similarly, a sensor output residual generator (SORG) unit is realized with an EKF-based algorithm to cover possible simultaneous sensor faults. The generated residuals are then fed to an FDD unit to extract diagnostic and fault estimation results using a threshold-based inference mechanism.
A set of test scenarios is conducted to demonstrate the performance capabilities of the proposed FDD methodology on a simulated continuous stirred tank reactor (CSTR) benchmark against sensor and actuator faults.",2010,0, 7314,Research on Real-time Monitor and Fault Diagnosis Specialist System for Micro-plus Air Pressure Conveying Fly Ash,"Aimed at micro-plus air pressure conveying of fly ash, which is widely applied in coal-fired power plants, a suite of real-time monitoring and fault diagnosis expert systems was designed. The real-time monitoring system allows either local or upper-level operation and can display the current operating condition of the micro-plus air pressure fly ash conveying system on a computer display and a simulation board in a timely and convenient manner. The fault diagnosis expert system uses a modular structure and a layered design, so the interface is friendly and easy to use; moreover, it can judge faults of the micro-plus air pressure fly ash conveying system and prepare the processing measures exactly. The suite of real-time monitoring and fault diagnosis expert systems is used not only as a visual aid in experiments, but also as an important monitoring system in the master control room of a coal-fired power plant.",2007,0, 7315,Fault Diagnosis Technology of Software-Intensive Equipment Based on Information Fusion,"Aiming at the limitations of current fault diagnosis of software-intensive equipment (SIE), this paper presents a method of fault diagnosis for software-intensive equipment based on information fusion technology. The multi-resource fusion diagnosis structure and the fusion algorithm are detailed in this paper. The method establishes the system structure of fault diagnosis, realizes the feature-level data fusion algorithm based on the fuzzy integral, establishes the fuzzy phenomenon subset and membership degree function corresponding to different fault types, carries out a fuzzy synthesized judgment of fault type, and then fuses the fuzzy reasoning diagnosis results again using the D-S fusion model and method. The method is proved to be effective for fault location by an example. It makes diagnostic information more definite and improves the accuracy of diagnosis.",2008,0, 7316,Wavelet Codes for Algorithm-Based Fault Tolerance Applications,"Algorithm-based fault tolerance (ABFT) methods, which use real number parity values computed in two separate comparable ways to detect computer-induced errors in numerical processing operations, can employ wavelet codes for establishing the necessary redundancy. Wavelet codes, one form of real number convolutional codes, determine the required parity values in a continuous fashion and can be intertwined naturally with normal data processing. Such codes are the transform coefficients associated with an analysis uniform filter bank which employs downsampling, while parity-checking operations are performed by a syndrome synthesis filter bank that includes upsampling. The data processing operations are merged effectively with the parity generating function to provide one set of parity values. Good wavelet codes can be designed starting from standard convolutional codes over finite fields by relating the field elements with the integers in the real number space. ABFT techniques are most efficient when employing a systematic form, and methods for developing systematic codes are detailed. Bounds on the ABFT overhead computations are given and ABFT protection methods for processing that contains feedback are outlined.
Analyzing the variances of the syndromes guides the selection of thresholds for syndrome comparisons. Simulations demonstrate the detection and miss probabilities for some high-rate wavelet codes.",2010,0, 7317,Byzantine Fault Tolerance for Nondeterministic Applications,"All practical applications contain some degree of nondeterminism. When such applications are replicated to achieve Byzantine fault tolerance (BFT), their nondeterministic operations must be controlled to ensure replica consistency. To the best of our knowledge, only the most simplistic types of replica nondeterminism have been dealt with. Furthermore, there lacks a systematic approach to handling common types of nondeterminism. In this paper, we propose a classification of common types of replica nondeterminism with respect to the requirement of achieving Byzantine fault tolerance, and describe the design and implementation of the core mechanisms necessary to handle such nondeterminism within a Byzantine fault tolerance framework.",2007,0, 7318,Accelerating learning from experience: avoiding defects faster,"All programmers learn from experience. A few are rather fast at it and learn to avoid repeating mistakes after once or twice. Others are slower and repeat mistakes hundreds of times. Most programmers' behavior falls somewhere in between: They reliably learn from their mistakes, but the process is slow and tedious. The probability of making a structurally similar mistake again decreases slightly during each of some dozen repetitions. Because of this, a programmer often takes years to learn a certain rule, positive or negative, about his or her behavior. As a result, programmers might turn to the personal software process (PSP) to help decrease mistakes. We show how to accelerate this process of learning from mistakes for an individual programmer, no matter whether learning is currently fast, slow, or very slow, through defect logging and defect data analysis (DLDA) techniques",2001,0, 7319,Ultrasonic fault detection by processing of signals from fixed transceiver system,"An ultrasonic nonscanning method of flaw detection employing coded excitation with a new approach to the signal processing is investigated. An ultrasonic M-sequence signal is transmitted through the medium under investigation and reflections from faults are received by plural receivers at different positions. The cross correlation function between the original M-sequence and the reflected signal is used to detect any reflection, and based on it, the fault is located. To suppress the self-noise of the system, a new approach to the design of an inverse filter is described. The concept of coincident peak enhancement can be very useful in some cases of ultrasonic NDT to remove false peaks from the correlator's output",2001,0, 7320,Behavioural modelling of analogue faults in VHDL-AMS - a case study,"Analogue fault simulation is needed to evaluate the quality of tests, but is very computationally intensive. Behavioural simulation is more abstract and thus faster than fault simulation. Using a phase-locked loop as a case study, we show how behavioural fault models can be derived from transistor-level fault simulations and that faulty behaviour can be accurately modeled.",2004,0, 7321,Terrorist risk evaluation using a posteriori fault trees,"Analysis of a terrorist attack involves an event that has already occurred, so an a posteriori fault tree can be utilized. This differs from a conventional a priori fault tree used for prediction.
Since the top event has a known probability of unity, only ratios of the lower event probabilities are needed for the solution, and these are easier to estimate. If one assumes that probability ratios are the same for both types of fault trees, then a posteriori analysis aids a priori analysis. Use of a sequence diagram helps direct the work of first responders",2006,0, 7322,A Software Fault Tree Metric,"Analysis of software fault trees exposes hardware and software failure events that lead to unsafe system states, and provides insight on improving safety throughout each phase of the software lifecycle. Software product lines have emerged as an effort to achieve reuse, enhance quality, and reduce development costs of safety-critical systems. Safety-critical product lines amplify the need for improved analysis techniques and metrics for evaluating safety-critical systems since design flaws can be carried forward through product line generations. This paper presents a key node safety metric for measuring the inherent safety modeled by software fault trees. Definitions related to fault tree structure that impact the metric's composition are provided, and the mathematical basis for the metric is examined. The metric is applied to an embedded control system as well as to a collection of software fault tree product lines that include mutations expected to improve or degrade the safety of the system. The effectiveness of the metric is analyzed, and observations made during the experiments are discussed",2006,0, 7323,Analytic track solutions with range-dependent errors,Analytic solutions are presented for track covariance matrices for exoatmospheric (disturbance-free) tracking with measurement errors proportional to a power of range. Also discussed is the use of segmented trajectories for cases in which the range power and other tracking parameters are piecewise constant,2000,0, 7324,A Dynamic Probability Fault Localization Algorithm Using Digraph,"Analyzed here is a probability learning fault localization algorithm based on a directed graph and set-covering. The digraph is constituted as follows: get the deployment graph of the managed business from the topology of the network and software environment; generate the adjacency matrix (Ma); compute the transitive matrix (Ma^2) and transitive closure (Mt); and obtain the dependency matrix (R). When faults occur, the possible symptoms will be reflected in R with high probability for the fault itself, less probability in Ma, much less in Ma^2 and least in Mt. MCA+ is a probabilistic max-covering algorithm taking lost and spurious symptoms into account. DMCA+ is a dynamic probability updating algorithm that learns from run-time fault localization experience. When it fails to localize the faults, the probabilities of real faults will be updated with an increment. The simulation results show the validity and efficiency of DMCA+ under a complex network. In order to improve the detection rate, a multi-recommendation strategy is also investigated in MCA+ and DMCA+.",2009,0, 7325,A Flexible SoPC-based Fault Injection Environment,"Analyzing the behavior of ICs in the face of soft errors is now mandatory, even for applications running at sea level, to prevent malfunctions in critical applications such as automotive. This paper introduces a novel prototyping-based fault injection environment that makes it possible to perform several types of dependability analyses in a common optimized framework.
The approach takes advantage of the hardware speed and the software flexibility offered by embedded processors to achieve optimized trade-offs between experiment duration and processing complexity. The distribution of tasks between hardware and embedded software is defined with respect to the type of circuit to analyze",2006,0, 7326,API Fuzz Testing for Security of Libraries in Windows Systems: From Faults To Vulnerabilites,"Application programming interface (API) fuzz testing is used to insert unexpected data into the parameters of functions and to monitor for resulting program errors or exceptions in order to test the security of APIs. However, vulnerabilities through which a user cannot insert data into API parameters are not security threats, because attackers cannot exploit such vulnerabilities. In this paper, we propose a methodology that can automatically find paths between inputs of programs and faulty APIs. Where such paths exist, faults in APIs represent security threats. We call our methodology Automated Windows API Fuzz Testing II (AWAFTII). This method extends our previous research for performing API fuzz testing into the AWAFTII process. The AWAFTII process consists of finding faults using API fuzz testing, analyzing those faults, and searching for input data related to parameters of APIs with faults. We implemented a practical tool for AWAFTII and applied it to programs in the system folder of Windows XP SP2. Experimental results show that AWAFTII can detect paths between inputs of programs and APIs with faults.",2008,0, 7327,The design of an electrical fault-waveform regenerator,"An electrical fault-waveform regenerator (EFWG) is originally proposed in this paper, including its hardware structure and software flow chart. The EFWG mainly relies on a power amplifier (PA) to regenerate the electrical fault waveforms recorded by fault recorders (FR), so a digital closed-loop modification technique (DCLMT), which is distinct from the predistortion technique (PDT) widely used nowadays, is conceived to counteract the inherent nonlinear distortions of the PA. The EFWG is actually designed to serve as a novel and more effective alternative to the commonly used protection-relay testing apparatuses (PRTA).",2008,0, 7328,Electrical approach to defect depth estimation by stepped infrared thermography,"An electrical modelling and analysis is presented for thermal nondestructive evaluation of materials by stepped infrared (IR) thermography. A one-dimensional (1D) electrical analysis based on the Laplace transform technique of network analysis and the time delay in an RC ladder network is given for defect depth estimation. Defect depth is evaluated based on the time instants at which the surface temperature evolution over the defect and nondefect regions of the material deviates from its constant initial slope, corresponding to the response of a semi-infinite material under similar conditions of step heating. The method is applicable even for three-dimensional geometries of materials having unknown thermal properties and variable surface emissivity.
Experimental and simulated results validate the proposed method and give good estimation of defect depth, even under noisy conditions for a thermally anisotropic material and a non-flat-bottom type of defect.",2004,0, 7329,Novel method for selective detection of earth faults in high impedance grounded distribution networks,"An elementary and reliable detection of earth faults in impedance grounded networks results in considerable benefits for the utility both in terms of outage duration and personal safety. This report describes an entirely new method that measures only current; a method which fulfils the standards of cost efficiency and reliability. Despite the seeming simplicity of the approach, it is also demonstrated that it is an excellent method to detect arcing cable earth faults.",2005,0, 7330,On the Error Resilience of Rate Smoothing using Explicit Slice-Based Mode Selection,"An encoder-based rate smoothing scheme which uses explicit slice-based mode selection has been proposed. The algorithm provides a significantly smoother bitstream from a variable bit rate (VBR) encoder and reduces the network queuing delay while maintaining quality for the end-user. In this paper we investigate the error robustness of the proposed scheme based on the H.264/AVC codec. The results show that the proposed scheme maintains error resilience properties without extra delay. Compared to the standard frame-based scheme, the algorithm provides almost the same average distortion while reducing the distortion variance, which helps improve the subjective video quality. We investigate the performance in random and bursty packet loss environments, and it shows that the proposed scheme is applicable for error-prone networks, for example, wireless networks.",2007,0, 7331,Active error recovery for reliable multicast,"An error recovery scheme is essential for large-scale reliable multicast. We design, implement, and evaluate an improved active error recovery scheme for reliable multicast (AERM). The AERM uses soft-state storage to facilitate fast error recovery. It has the following features: a simple NAK suppression and aggregation mechanism, an efficient hierarchical RTT measurement mechanism, an effective local recovery and scoped retransmission mechanism, and a periodic ACK mechanism. We implement the AERM and study its characteristics in NS2. We also compare performance with ARM and AER/NCA, both of which are representative active reliable multicast protocols. The results indicate that AERM can achieve considerable performance improvement with limited support from routers. Our work also confirms that active networks can benefit some applications and become a promising network computing platform in the future",2001,0, 7332,Error resilient video transmission over wireless networks,"An error resilient architecture for video transmission over mobile wireless networks is presented. The radio link layer, transport layer, and application layer are combined to deal with the high error rate in wireless environments. The algorithms for both sender and receiver are given. An adaptive algorithm is presented to automatically adjust the parity data length in error control.
The performance of the proposed algorithm is analyzed through experimental studies.",2003,0, 7333,Evolutionary design of lifting scheme wavelet-packet adaptive filters for elevator fault detection,"An evolutionary-based procedure for designing adaptive filters based on second-generation wavelet (lifting scheme) packet decomposition for industrial fault detection is presented. The proposed procedure is validated by an experimental case study for induction motor fault diagnosis in an elevator system. Preliminary results on two types of faults, broken rotor bars and static air gap eccentricity, are discussed, showing encouraging performance.",2010,0, 7334,Error Rate Analysis of Band-Limited BPSK With Nakagami/Nakagami ACI Considering Nonlinear Amplifier and Diversity,"An exact expression of the average error rate is developed for a band-limited binary phase-shift keying (BPSK) system in the presence of adjacent channel interference (ACI) considering a Nakagami fading channel. The system employs a root-raised cosine (RRC) filter both in the transmitter and receiver sides and a transmitter power amplifier (PA) that may have nonlinear input/output characteristics. These practical considerations are accurately taken into account. By utilizing the characteristic function (CF) method of error-rate analysis, we interestingly blend the concepts of time and frequency domains and develop an error-rate expression that is simple and general for several system parameters, including adjacent channel separation, RRC filter roll-off factor, and PA output back-off (OBO). Hence, the error rate in the presence of cochannel interference (CCI) is obtained as a special case. The developed results for a single-branch receiver are then extended for diversity receivers, and several interesting observations are made.",2010,0, 7335,Tolerance to unbounded Byzantine faults,"An ideal approach to dealing with faults in large-scale distributed systems is to contain the effects of faults as locally as possible and, additionally, to ensure some type of tolerance within each fault-affected locality. Existing results using this approach accommodate only limited faults (such as crashes) or assume that fault occurrence is bounded in space and/or time. In this paper, we define and explore possibility/impossibility of local tolerance with respect to arbitrary faults (such as Byzantine faults) whose occurrence may be unbounded in space and in time. Our positive results include programs for graph coloring and dining philosophers, with proofs that the size of their tolerance locality is optimal. The type of tolerance achieved within fault-affected localities is self-stabilization. That is, starting from an arbitrary state of the distributed system, each non-faulty process eventually reaches a state from where it behaves correctly as long as the only faults that occur henceforth (regardless of their number) are outside the locality of this process.",2002,0, 7336,Toward a quantifiable definition of software faults,"An important aspect of developing models relating the number and type of faults in a software system to a set of structural measurements is defining what constitutes a fault. By definition, a fault is a structural imperfection in a software system that may lead to the system's eventually failing.
A measurable and precise definition of what faults are makes it possible to accurately identify and count them, which in turn allows the formulation of models relating fault counts and types to other measurable attributes of a software system. Unfortunately, the most widely-used definitions are not measurable; there is no guarantee that two different individuals looking at the same set of failure reports and the same set of fault definitions will count the same number of underlying faults. The incomplete and ambiguous nature of current fault definitions adds a noise component to the inputs used in modeling fault content. If this noise component is sufficiently large, any attempt to develop a fault model will produce invalid results. As part of our ongoing work in modeling software faults, we have developed a method of unambiguously identifying and counting faults. Specifically, we base our recognition and enumeration of software faults on the grammar of the language of the software system. By tokenizing the differences between a version of the system exhibiting a particular failure behavior, and the version in which changes were made to eliminate that behavior, we are able to unambiguously count the number of faults associated with that failure. With modern configuration management tools, the identification and counting of software faults can be automated.",2002,0, 7337,"""ITRS test challenges need defect based test: fact or fiction?""","An important distinction between traditional ""logical fault model"" based testing and defect based testing is the potential for the latter to better handle emerging defect types and changing circuit sensitivities in VDSM circuits. ITRS gives specific examples of emerging defects, including the potential for more particle-related blocked-etch resistive opens that result from the change from a subtractive aluminum process to damascene Cu. A second example derives from aggressive scaling into the nanometer domain, which increases the probability of incomplete etch and the occurrence of resistive vias.",2004,0, 7338,Evidence-Based Analysis and Inferring Preconditions for Bug Detection,"An important part of software maintenance is fixing software errors and bugs. Static analysis based tools can tremendously help and ease software maintenance. In order to gain user acceptance, a static analysis tool for detecting bugs has to minimize the incidence of false alarms. A common cause of false alarms is the uncertainty over which inputs into a program are considered legal. In this paper we introduce evidence-based analysis to address this problem. Evidence-based analysis allows one to infer legal preconditions over inputs without requiring users to explicitly specify those preconditions. We have found that the approach drastically improves the usability of such static analysis tools. In this paper we report our experience with the analysis in an industrial deployment.",2007,0, 7339,Estimation of software diversity by fault simulation and failure searching,"An important problem for computer-based systems is providing fault tolerance for unknown (at the time of commencement of service) systematic design errors. Such design errors can have a long latency in normal operation and only become apparent under specific conditions associated with particular combinations of input and internal system states. The use of 'diverse' software versions remains a possible approach to prevent coincidental failure, but its potential value has never been quantified.
This paper presents the application of data-flow and constant perturbation to simulate the introduction of faults or errors into programs and explores methods to establish the magnitudes and locations of the associated input space failure regions. Used together, these two techniques enable failure behaviour to be described in a quantitative way and provide a method to estimate the diversity of multi-version software. A simple case and an industrial software system are studied to illustrate the applications of the approach.",2001,0, 7340,A binary spelling interface with random errors,"An algorithm for the design of a spelling interface based on a modified Huffman's algorithm is presented. This algorithm builds a full binary tree that maximizes the average probability of reaching a leaf where a required character is located when the choice at each node is made with possible errors. A means to correct errors (a delete-function) and an optimization method to build this delete-function into the binary tree are also discussed. Such a spelling interface could be successfully applied to any menu-orientated alternative communication system when a user (typically, a patient with a devastating neuromuscular handicap) is not able to express an intended single binary response with absolute reliability, either through motor responses or by using brain-computer interfaces.",2000,0, 7341,A software fault tolerance method for safety-critical systems: effectiveness and drawbacks,"An automatic software technique suitable for on-line detection of transient errors due to the effects of the environment (radiation, EMC,...) is presented. The proposed approach, particularly well suited for low-cost safety-critical microprocessor-based applications, has been validated through fault injection experiments and radiation testing campaigns. The experimental results demonstrate the effectiveness of the approach in terms of fault detection capabilities. Undetected faults have been analyzed to point out the limitations of the method.",2002,0, 7342,Autonomous fault tolerant multi-robot coordination for object transportation based on Artificial Immune System,"An efficient coordination strategy is required in order to realize an efficient and autonomous multi-robot cooperative system. This paper presents an approach for coordination among robots prior to cooperative object transportation. This paper does not address the coordination that is required during cooperation, which has been researched by others. In the present paper, fault tolerant coordination is achieved using the methodology of an artificial immune system. The approach developed here is based on the binding affinity between an antibody and an antigen, and on the structure of antibodies in the human immune system. The developed methodology is verified through physical experiments.",2009,0, 7343,"Providing efficient, robust error recovery through randomization","An efficient error recovery algorithm is essential for reliable multicast in large groups. This paper presents RRMP, a randomized reliable multicast protocol which improves the robustness of traditional tree-based protocols by diffusing the responsibility of error recovery among all members in a group.
Both simulation and experimental results show that the protocol achieves good performance.",2001,0, 7344,"Argus: Low-Cost, Comprehensive Error Detection in Simple Cores","Argus, a novel approach for detecting errors in simple processor cores, dynamically verifies the correctness of the four tasks performed by a von Neumann core: control flow, data flow, computation, and memory access. Argus detects transient and permanent errors, with far lower impact on performance and chip area than previous techniques.",2008,0, 7345,Unequal error protected transmission with dynamic classification in H.264/AVC,"As an efficient error resilient tool in H.264, FMO (Flexible Macroblock Ordering) still has 2 disadvantages: (1) unacceptable bitrate overhead, and (2) unsuitability for widely used UEP (unequal error protected) transmission. In this paper, to overcome these 2 disadvantages, a dynamic FMO classification (DFMOC) is proposed. For disadvantage (1), since in DFMOC many MBs in the same slice are placed together, the bitrate overhead is smaller. For disadvantage (2), DFMOC generates 2 slices, each of which takes a different priority in transmission based on the extraction of large and small motion areas. After employing LDPC coding for the UEP transmission strategy, experiments show that DFMOC has better error robustness while still keeping a lower bitrate overhead compared with the traditional FMO mode: the PSNR is 1 to 2 dB higher and the bitrate overhead stays below 5%, which is about half of the traditional FMO overhead.",2007,0, 7346,Sequential Bayesian bit error rate measurement,"As bit error rates decrease, the time required to measure a bit error rate (BER) or perform a BER test (i.e., to determine that a particular communications device's BER is less than some acceptable limit) increases dramatically. One cause of long measurement times is the difficulty of deciding a priori how many bits to measure to establish the BER to within a predetermined confidence interval width. This paper explores a new approach to deciding how many bits to measure, namely a sequential Bayesian approach. As measurement proceeds, the posterior distribution of BER is checked to see if the conclusion can be made that the BER is known to be within the desired range with high enough probability. Desired properties of the posterior distribution such as the maximum a posteriori estimate and confidence limits can be computed quickly using off-the-shelf numerical software. Examples are given of using this method on bit error data measured with an Agilent 81250 parallel BER tester.",2004,0, 7347,Fault Emulation for Dependability Evaluation of VLSI Systems,"Advances in semiconductor technologies are greatly increasing the likelihood of fault occurrence in deep-submicrometer manufactured VLSI systems. The dependability assessment of VLSI critical systems is a hot topic that requires further research. Field-programmable gate arrays (FPGAs) have recently been proposed as a means for speeding up the fault injection process in VLSI system models (fault emulation) and for reducing the cost of fixing any error, due to their applicability in the first steps of the development cycle. However, only a reduced set of fault models, mainly stuck-at and bit-flip, have been considered in fault emulation approaches.
This paper describes the procedures to inject a wide set of faults representative of deep-submicrometer technology, like stuck-at, bit-flip, pulse, indetermination, stuck-open, delay, short, open-line, and bridging, using the most suitable FPGA-based technique. This paper also sets some basic guidelines for comparing VLSI systems in terms of their availability and safety, which is mandatory in mission- and safety-critical application contexts. This represents a step forward in the dependability benchmarking of VLSI systems and towards the definition of a framework for their evaluation and comparison in terms of performance, power consumption, and dependability.",2008,0, 7348,Annealing of Irradiated-Induced defects in power MOSFETs,"An innovative method of device characterization is employed to qualify the annealing of Gamma-ray damage in power MOSFETs. The degradation of the structural parameters of the body-drain junction for a dose rate of 103.8 rad.min-1 is presented. Temperature annealing effects, at 100°C, are discussed and analyzed against the evolution of the density of trapped oxide and interface charges.",2009,0, 7349,Induction motor mechanical fault online diagnosis with the application of artificial neural network,"An online fault diagnostic algorithm for induction motor mechanical faults is presented based on the application of artificial neural networks. Two mechanical faults, rotor bar breakage and air gap eccentricity, are considered. New feature coefficients obtained by wavelet packet decomposition of the stator current are used together with the slip speed as the input of a multi-layer neural network. The proposed algorithm is proved to be able to distinguish healthy and faulty conditions with high accuracy.",2001,0, 7350,An Evaluation of Similarity Coefficients for Software Fault Localization,"Automated diagnosis of software faults can improve the efficiency of the debugging process, and is therefore an important technique for the development of dependable software. In this paper we study different similarity coefficients that are applied in the context of a program spectral approach to software fault localization (single programming mistakes). The coefficients studied are taken from the systems diagnosis/automated debugging tools Pinpoint, Tarantula, and AMPLE, and from the molecular biology domain (the Ochiai coefficient). We evaluate these coefficients on the Siemens Suite of benchmark faults, and assess their effectiveness in terms of the position of the actual fault in the probability ranking of fault candidates produced by the diagnosis technique. Our experiments indicate that the Ochiai coefficient consistently outperforms the coefficients currently used by the tools mentioned. In terms of the amount of code that needs to be inspected, this coefficient improves 5% on average over the next best technique, and up to 30% in specific cases.",2006,0, 7351,Automated Fault Diagnosis in Embedded Systems,"Automated fault diagnosis is emerging as an important factor in achieving an acceptable and competitive cost/dependability ratio for embedded systems. In this paper, we survey model-based diagnosis and spectrum-based fault localization, two state-of-the-art approaches to fault diagnosis that jointly cover the combination of hardware and control software typically found in embedded systems. We present an introduction to the field, discuss our recent research results, and report on their application to industrial test cases.
In addition, we propose to combine the two techniques into a novel, dynamic modeling approach to software fault localization.",2008,0, 7352,Automatic fault detection and diagnosis implementation based on intelligent approaches,"Automatic fault detection and diagnosis has always been a challenge when monitoring rotating machinery. Specifically, bearing diagnostics have seen extensive research in the field of fault detection and diagnosis. In this paper we present two automatic diagnosis procedures-a fuzzy classifier and a neural network-which deal with different implementation questions: the use of a priori knowledge, the computation cost, and the decision making process. The challenge is not only to be capable of diagnosing automatically but also to generalize the process regardless of the measured signals. Two actions are taken in order to achieve some kind of generalization of the application target: the use of normalized signals and the study of the Basis Pursuit feature extraction procedure.",2005,0, 7353,Update on distribution system fault location technologies and effectiveness,"Automatic fault location is an area of significant interest and research in the industry. This paper provides an update on the work performed to date with various utilities and their fault location systems. Basic information on the techniques used to locate faults is provided, as well as several examples of where these techniques have been deployed.",2009,0, 7354,Correcting asr outputs: Specific solutions to specific errors in French,"Automatic speech recognition (ASR) systems are used in a large number of applications, in spite of the inevitable recognition errors. In this study we propose a pragmatic approach to automatically repair ASR outputs by taking into account linguistic and acoustic information, using formal rules or stochastic methods. The proposed strategy consists in developing a specific correction solution for each specific kind of error. In this paper, we apply this strategy to two case studies specific to the French language. We show that it is possible, on automatic transcriptions of French broadcast news, to decrease the error rate of a specific error by 11.4% in one of the two case studies, and by 86.4% in the other one. These results are encouraging and show the interest of developing more specific solutions to cover a wider set of errors in future work.",2008,0, 7355,Model-based diagnosis of an automotive engine using several types of fault models,"Automotive engines are an important application for model-based diagnosis because of legislative regulations. A diagnosis system for the air-intake system of a turbo-charged engine is constructed. The design is made in a systematic way and follows a framework of hypothesis testing. Different types of sensor faults and leakages are considered. It is shown how many different types of fault models, e.g., additive and multiplicative faults, can be used within one common diagnosis system, using the same underlying design principle. The diagnosis system is experimentally validated on a real engine using industry-standard dynamic test cycles.",2002,0, 7356,Fault-Tolerant Ethernet-Based Vehicle On-Board Networks,"The automotive industry is in continuous evolution. Luxury as well as comfort is important. Safety measures are the most important in designing a moving vehicle. Safety can be studied from a mechanical point of view as well as from an electrical point of view.
This paper presents a simulation study of fault-tolerant sensor networks for cars' on-board control. On-board communication and control networks are built using gigabit Ethernet. OPNET simulations showed the feasibility and success of the proposed model with mixed traffic for real-time and non-real-time applications.",2006,0, 7357,A Fault Tolerance Optimal Neighbor Load Balancing Algorithm for Grid Environment,"Availability of grid resources is dynamic, which raises the need to develop applications that are robust and effective against changing circumstances. A major challenge in a distributed dynamic grid is fault tolerance. The more resources and components involved, the more complicated and error-prone the system becomes. In a grid with potentially thousands of machines connected to each other, the reliability of individual resources cannot be guaranteed. Hence, the probability of failure is higher in grid computing, and the failure of resources affects task execution fatally. Therefore, a fault tolerance model is essential in a grid. Also, grid services are often expected to meet some minimum levels of Quality of Service (QoS) for desirable operation. Common fault tolerance techniques in computational grids are normally achieved with checkpoint-recovery and task replication on alternative resources in case of a system outage. However, the fault-tolerant load balancing strategies applied to a grid suffer from several deficiencies: some fault-tolerant load balancing models use checkpoint-recovery techniques to tolerate failures, which leads to an increase in average wait time and thereby increases the mean response time, while other models depend on task replication to tolerate failures, which reduces grid efficiency in terms of optimal resource utilization under varying load. To address these deficiencies, an efficient fault tolerant load balancing model named the Optimal Neighbour (OP) model has been proposed. The fault tolerant load balancing model is dynamic, decentralized and symmetrically initiated. The simulation results show that the Optimal Neighbour (OP) fault tolerant load balancing model yields better results when compared with the novel fault tolerant load balancing model.",2010,0, 7358,The TIRAN approach to reusing software implemented fault tolerance,"Available solutions for fault tolerance in embedded automation are often based on strong customisation, have impacts on the whole life-cycle, and require highly specialised design teams, thus making dependable embedded systems costly and difficult to develop and maintain. The TIRAN project develops a framework which provides fault tolerance capabilities to automation systems, with the goal of allowing portable, reusable and cost-effective solutions.
Application developers are allowed to select, configure and integrate in their own environment a variety of software-based functions for error detection, confinement and recovery provided by the framework.",2000,0, 7359,Fault-Tolerant CORBA: From Specification to Reality,"Based on 10 years' personal experience implementing academic prototypes of FT-CORBA, helping to establish the FT-CORBA standard, and subsequently developing and selling its commercial implementations, this critique provides an overview of the FT-CORBA standard's specifications from the viewpoint of realizing its concrete implementation for real-world applications.",2007,0, 7360,Application of compensation method in calculating symmetrical short circuit fault,"Based on the compensation method, formulas for the symmetrical short-circuit fault current and the nodal voltage are derived in this paper. In the derivation, the need to recompute a matrix inversion when a fault occurs is eliminated, because the node-admittance matrix is not modified; instead, the triangular matrix method is applied at the program entrance to calculate the nodal impedance matrix from the original network nodal admittance matrix, so the solution of the electrical network state variables is sped up by preparing the data for fault calculation in advance. After further assumptions and simplification, the short-circuit current formula becomes more efficient for estimating the size of the short-circuit current, and it is very suitable for real-time online applications. At the same time, a current coefficient of power contribution, which is different from the power current distribution coefficient, is put forward. An optimal node is found for a new power supply to limit the short-circuit current by judging the size of the current coefficient of power contribution.",2010,0, 7361,An Advanced Study on Fault Location System for China Railway Automatic Blocking and Continuous Transmission Line,"Based on an analysis of the applicability of various fault location methods in railway automatic blocking and continuous transmission lines (RABCTL), the S-injection method is considered to be feasible. An advanced S-injection fault location system is put forward. Building on the traditional S-injection method, wireless sensor network technology is brought in, which allows wireless detectors to form a network automatically within a certain range and improves the flexibility of the communication system. The theory of the fault location system is presented, some key problems in the practical process of fault location are researched, and the design of the wireless detector is briefly introduced in the paper.",2008,0, 7362,A data flow fault coverage metric for validation of behavioral HDL descriptions,"Behavioral HDL descriptions are commonly used to capture the high-level functionality of a hardware circuit for simulation and synthesis. The manual process of creating a behavioral description is error prone, so significant effort must be made to verify the correctness of behavioral descriptions. Simulation-based validation and formal verification are both techniques used to verify correctness. We investigate validation because formal verification techniques are frequently intractable for large designs. The first step toward a behavioral validation technique is the development of a validation fault coverage metric which can be used to evaluate the likelihood of design defect detection with a given test sequence.
We propose a validation fault coverage metric which is based on an analysis of the control data flow description associated with the behavior. The proposed metric identifies a subset of paths through the data flow which must be traversed during testing to detect faults. The proposed metric is a tractable compromise between the statement coverage metric, which requires only that each statement be executed, and the path coverage metric, which requires that all data flow paths be executed. Data flow paths are identified based on the relative code locations of definitions and uses of variables which may be assigned incorrectly due to a design error. We propose an efficient method to compute all data flow paths which must be traversed, and we generate coverage results for several benchmark VHDL circuits for comparison to other approaches.",2000,0, 7363,A study of transfer function estimation error and a practical multibeam antenna system with precoding interference cancellation,"Beyond IMT-2000, systems must use highly capable interference canceling technology to achieve high speed and high capacity transmission. This paper proposes a kind of precoding technology that, in a practical manner, can completely cancel downlink interference in the same space and the same frequency band. The concept of this technology is to multiply the transmitted signal by the inverse of the matrix of spatial transfer functions sent by each mobile station. Its high interference cancellation performance is evaluated by considering the error in estimating the transfer function. A new scheme is proposed that is extremely practical because it eliminates the need to measure the delay. A fully implemented system is described.",2001,0, 7364,Linear prediction error method for blind identification of periodically time-varying channels,"Blind channel estimation for single-input multiple-output (SIMO) periodically time-varying channels is considered using only the second-order statistics of the data. The time-varying channel is assumed to be described by a complex exponential basis expansion model (CE-BEM). The linear prediction error method for blind identification of time-invariant channels is extended to time-varying channels represented by a CE-BEM. Sufficient conditions for identifiability are investigated. The cyclostationary nature of the received signal is exploited to consistently estimate the time-varying correlation function of the data from a single observation record. The proposed method requires the knowledge of the active basis functions but not the channel length (an upper bound suffices). Several existing methods require precise knowledge of the channel length. Equalization of the time-varying channel, given the estimated channel, is investigated. Computer simulation examples are presented to illustrate the approach and to compare it with two existing approaches.",2002,0, 7365,Investigation and Performance Evaluation of different Bluetooth voice packets against ambient error conditions,"Bluetooth allows three high quality voice (HV) packet types to carry voice information over the Industrial, Scientific and Medical (ISM) band. Bit error rate (BER) measurements have been used in this report to evaluate the performance of voice packets in the presence of noise and interference. Starting from the problem definition and classifying the voice packets into the HV1, HV2 and HV3 types, the Bluetooth system has been examined.
The paper also discusses Bluetooth-related wireless standards operating in the same ISM band, such as IEEE 802.11b, whose presence in close proximity causes interference to Bluetooth operation. Finally, a MATLAB-based Simulink model has been used to analyze the BER. A maximum tolerable BER of 5×10^-2 is required to recover a perceptible quality of voice information. With respect to BER, the HV2 type outperforms HV1 and HV3 due to the low SNR requirement for the same BER.",2006,0, 7366,Modeling and Error Analysis of Dynamically Tuned Gyroscope Based on Bond Graph,"The bond graph method is a unified graphic method for modelling engineering systems across several different energy domains. The open loop system of a Dynamically Tuned Gyroscope (DTG) is modelled using a bond graph according to its structure. Using the bond graph method, the drift error caused by 2N angular vibration of the driving shaft is analysed, which shows the validity and intuitiveness of this method for analysing the drift error, and is helpful for modelling the DTG more precisely and completely, and for realizing the integrated design and simulation of structure and control.",2010,0, 7367,JDF: detecting duplicate bug reports in Jazz,"Both developers and users submit bug reports to a bug repository. These reports can help reveal defects and improve software quality. As the number of bug reports in a bug repository increases, the number of potential duplicate bug reports increases. Detecting duplicate bug reports helps reduce development efforts in fixing defects. However, it is challenging to manually detect all potential duplicates because of the large number of existing bug reports. This paper presents JDF (representing Jazz Duplicate Finder), a tool that helps users to find potential duplicates of bug reports on Jazz, which is a team collaboration platform for software development and process management. JDF finds potential duplicates for a given bug report using natural language and execution information.",2010,0, 7368,Boundary Value Testing Using Integrated Circuit Fault Detection Rule,"Boundary value testing is a widely used functional testing approach. This paper presents a new boundary value selection approach by applying fault detection rules for integrated circuits. Empirical studies based on 34 program versions and 426 mutants of a Redundant Strapped-Down Inertial Measurement Unit compare the new approach to the current boundary value testing methods. The results show that the approach proposed in this paper is remarkably effective in overcoming test blindness, reducing test cost and improving fault coverage.",2009,0, 7369,Nonlinear Errors Correction of Pressure Sensor Based on BP Neural Network,"The BP neural network and its improved algorithms are applied to compensate the sensor's performance. The defects of BP, such as slow convergence and a tendency to converge to local minima, are mitigated efficiently. Training programs are implemented. Results show that the performance of the sensor is greatly improved. The network has a high convergence speed and good precision. The correction precision increases with the increasing number of nodes in the hidden layer. When the number of nodes in the hidden layer is 18 and the neural network model converges in an average of 28 iterations, the Error Index is less than 10^-3.",2009,0, 7370,Sensitivity of volumetric brain analysis to systematic and random errors,"Brain structures are subject to changes caused by genetic and environmental factors, disease, and gender- or age-related effects.
Different methods have been developed to capture these changes, but their agreement on the same data may vary significantly. These variations are attributed to the methodological differences of the employed methods. Popular tools for measuring brain atrophy are SIENA and SIENAX, which have been compared in previous studies, but no work has carefully weeded out the possible bias and confounding that can easily be introduced into such analyses. The present work tackles the problem of bias, confounding and random variation factors that are introduced into any volumetric analysis method through the selection of the input data. Evidence showed that normalized brain volume and brain volume change can be used to characterize group differences between healthy and non-healthy subjects in cases where it is not possible to circumvent the studied confounds, which can be introduced in any similar case-control study.",2010,0, 7371,Motion correction in PET brain studies,"Brain studies recorded with positron emission tomography (PET) may last between a few minutes and some hours. Head motion during long scans leads not only to blurred images, but may also seriously disturb the kinetic analysis of the metabolic information contained in the PET data. With the increase of scanner resolution this problem becomes more and more important. In recent years some methods to correct for motion have been proposed. Here we describe these approaches, especially those examined and established in our PET laboratory. We also report our first experiences regarding the accuracy of motion correction and the consequences related to the linearized calculation of metabolic images.",2005,0, 7372,Realize a self-recovering function during phase-sequence fault of a digital servo system,"For a servo control system of a permanent magnet synchronous motor (PMSM), this paper analyzes the causes of motor start-up faults due to mistaken connection of the motor phase sequence, and the kinds of faults are classified. In addition, a method that realizes motor rotor oriented control using the TMS320F240 digital signal processor (DSP) is proposed. Since the motor rotor position can be acquired by an incremental photoelectric encoder, the connected sequence of the motor phases can be judged using stator voltage vector oriented control technology, and a self-recovering function can be realized by adjusting the output phase sequence of the controller in software. Experimental results verify that the motor can self-recover quickly when phase-sequence faults happen and can start rapidly, and that the proposed method can be used in servo control systems.",2006,0, 7373,ISOMAP Algorithm-Based Feature Extraction for Electromechanical Equipment Fault Prediction,"To address the difficult problem of sensitive feature extraction during fault prediction for nonlinear electromechanical equipment, the nonlinear dimensionality reduction ISOMAP (isometric feature mapping) algorithm is introduced, based on a comprehensive analysis of the operation state data, to reduce the dimensionality of high-dimensional operation data and acquire sensitive fault features; furthermore, the ISOMAP dimensionality reduction result is verified by implementing the algorithm on the MATLAB platform. The suitability of applying the ISOMAP algorithm to dimensionality reduction of high-dimensional data is discussed in terms of the algorithm's principle, and the calculation steps of the algorithm are provided.
According to MATLAB tests, the ISOMAP algorithm is able to reduce dimensionality greatly and find the essential data structure, so as to provide a new method for extracting sensitive features of electromechanical equipment faults from another point of view.",2009,0, 7374,An evaluation of a slice fault aware tool chain,"As FPGA sizes and densities grow, their manufacturing yields decrease. This work looks toward reclaiming some of this lost yield. Several previous works have suggested fault aware CAD tools for intelligently routing around faults. In this work we evaluate such an approach quantitatively with respect to some standard benchmarks. We also quantify the trade-offs between performance and fault tolerance in such a method. Leveraging existing CAD tools, we show that up to 30% of slices being faulty can be tolerated. Such approaches could potentially allow manufacturers to sell larger chips with manufacturing faults as smaller chips, using a nomenclature that appropriately captures the reduction in logic resources.",2010,0, 7375,Efficient data broadcast scheme on wireless link errors,"As portable wireless computers become popular, mechanisms to transmit data to such users are of significant interest. Data broadcast is effective in dissemination-based applications to transfer the data to a large number of users in the asymmetric environment where the downstream communication capacity is relatively much greater than the upstream communication capacity. Index based organization of data transmitted over wireless channels is very important to reduce power consumption. We consider an efficient (1:m) indexing scheme for data broadcast on unreliable wireless networks. We model the data broadcast mechanism on error prone wireless networks using a Markov model. We analyze the average access time to obtain the desired data item and find that the optimal index redundancy (m) is SQRT[Data/{Index*(1-p)^K}], where p is the failure rate of the wireless link, Data is the size of the data in a broadcast cycle, Index is the size of the index, and K is the index level. We also measure the performance of data broadcast schemes by parametric analysis.",2000,0, 7376,BulletProof: a defect-tolerant CMP switch architecture,"As silicon technologies move into the nanometer regime, transistor reliability is expected to wane as devices become subject to extreme process variation, particle-induced transient errors, and transistor wear-out. Unless these challenges are addressed, computer vendors can expect low yields and short mean-times-to-failure. In this paper, we examine the challenges of designing complex computing systems in the presence of transient and permanent faults. We select one small aspect of a typical chip multiprocessor (CMP) system to study in detail, a single CMP router switch. To start, we develop a unified model of faults, based on the time-tested bathtub curve. Using this convenient abstraction, we analyze the reliability versus area tradeoff across a wide spectrum of CMP switch designs, ranging from unprotected designs to fully protected designs with online repair and recovery capabilities. Protection is considered at multiple levels, from the entire system down through arbitrary partitions of the design. To better understand the impact of these faults, we evaluate our CMP switch designs using circuit-level timing on detailed physical layouts. Our experimental results are quite illuminating.
We find that designs are attainable that can tolerate a larger number of defects with less overhead than naive triple-modular redundancy, using domain-specific techniques such as end-to-end error detection, resource sparing, automatic circuit decomposition, and iterative diagnosis and reconfiguration.",2006,0, 7377,Study of the impact of hardware fault on software reliability,"As software plays increasingly important roles in modern society, reliable software becomes desirable for all stakeholders. One of the root causes of software failure is the failure of the computer hardware platform on which the software resides. Traditionally, fault injection has been utilized to study the impact of these hardware failures. One issue raised with respect to the use of fault injection is the lack of prior knowledge of the faults injected, and the fact that, as a consequence, the failures observed may not represent actual operational failures. This paper proposes a simulation-based approach to explore the distribution of hardware failures caused by three primary failure mechanisms intrinsic to semiconductor devices. A dynamic failure probability for each hardware unit is calculated. This method is applied to an example Z80 system and two software segments. The results lead to the conclusion that the hardware failure profile is location related, time dependent, and software-specific.",2005,0, 7378,A Systematic Approach for Integrating Fault Trees into System Statecharts,"As software systems encompass a wide range of fields and applications, software reliability becomes crucial. Safety analysis and test cases that have a high probability of uncovering plausible faults are necessities in proving software quality. System models that represent only the operational behavior of a system are incomplete sources for deriving test cases and performing safety analysis before the implementation process. Therefore, a system model that encompasses faults is required. This paper presents a technique that formalizes a safety model through the incorporation of faults with system specifications. The technique focuses on introducing semantic faults through the integration of fault trees with system specifications or statecharts. The method uses a set of systematic transformation rules that tries to maintain the semantics of both fault tree and statechart representations during the transformation of fault trees into statechart notations.",2008,0, 7379,Adapting to Intermittent Faults in Future Multicore Systems,"As technology continues to scale, future multicore processors become more susceptible to a variety of hardware failures. In particular, intermittent faults are expected to become especially problematic (S. Borkar et al., 2003), (C. Constantinescu, 2007). A circuit is susceptible to intermittent faults when manufacturing process variation or in-progress wear-out causes the parameters (e.g., resistance, threshold voltage, etc.) of devices within the circuit to vary beyond design expectations (C. Constantinescu, 2007). This susceptibility, combined with certain operating conditions, such as thermal hot-spots and voltage fluctuations, can result in timing errors - even if these temperatures and voltages, for example, are well within the specified ""acceptable"" margins. Unlike transient faults, which disappear quickly, or permanent faults, which persist indefinitely, the occurrence of intermittent faults is bursty in nature.
Depending on the cause, these bursts of frequent faults can last from several cycles to several seconds or more, effectively rendering a core useless during this time.",2007,0, 7380,Integration of service-level monitoring with fault management for end-to-end multi-provider ethernet services,"Assuring end-to-end service quality in a multi-provider Ethernet environment is a challenging task. Operation and maintenance issues have become more and more complex due to the gradual extension of the Ethernet technology from local- to wide-area networks and the increasingly frequent use of layer-2 virtual private networks. End-to-end Ethernet network management is currently under standardization, with a focus on connectivity fault management and performance management. However, none of the tools and research prototypes available to date integrate service-level monitoring with fault management functions such as event correlation or root cause analysis for interconnected Ethernet networks. In this paper, we address the issue by proposing an integrated service-level monitoring and fault management framework. Our event processing module can handle various events generated by network nodes or pollers. We also describe service-level monitoring and fault management methods that are fine-tuned for managing end-to-end multi-provider Ethernet services.",2007,0, 7381,Fault Tolerant Delay Insensitive Inter-chip Communication,"Asynchronous interconnect is a promising technology for communication systems. Delay Insensitive (DI) interconnect eliminates relative timing assumptions, offering a robust and flexible approach to on- and inter-chip communication. In the SpiNNaker system - a massively parallel computation platform - a DI system-wide communication infrastructure is employed which uses a 4-phase 3-of-6 code for on-chip communication and a 2-phase 2-of-7 code for inter-chip communication. Fault-tolerance has been evaluated by randomly injecting transient glitches into the off-chip wires. Fault simulation reveals that deadlock may occur in either the transmitter or the receiver as handshake protocols are disrupted. Various methods have been tested for reducing or eliminating deadlock, including a novel phase-insensitive 2-phase to 4-phase converter, a priority arbiter for reliable code conversion and a scheme that allows independent resetting of the transmitter and receiver to clear deadlocks. Simulation results confirm that these methods enhance the fault tolerance of the DI communication link, in particular making it significantly more resistant to deadlock.",2009,0, 7382,The Application of Travelling Wave Fault Locators in China,"At present there are more than 1500 TWFL units in operation in China. The fault location accuracy obtained with TWFL equipment in practice has been better than 500 m. The TWFL systems and the methods of detecting the travelling waves used in China are described. Some typical fault location results from high voltage AC and DC transmission lines are presented. The ""seamless recording"" of transient signals to ensure the capture of fault transients during severe thunderstorm activity is introduced. The application of TWFL to distribution lines is also described.",2008,0, 7383,Research on automatic detection for defect on bearing cylindrical surface,"At present, defects on micro bearing surfaces are still detected manually. The method is laborious and time consuming. Moreover, it has low efficiency and a high miss-detection rate.
To address this, an on-line automatic detection system is developed using a linear CCD. The proposed system is composed of three subsystems: the detection environment setting subsystem, the automatic detection subsystem, and the data management subsystem. In order to make the above subsystems cooperate with each other, control software is developed with LabVIEW 8.5 as a platform. Experimental results indicate that the system realizes the predefined functions and meets the requirements of stability, real-time performance, and accuracy. Thus it can be applied in actual production.",2010,0, 7384,Temporal envelope correction for attack restoration in low bit-rate audio coding,"At reduced bit rates, audio compression affects transient parts of signals, which results in pre-echo and loss of attack character. We propose in this paper an attack restoration method based on the correction of the temporal envelope of the decoded signal, using a small set of coefficients transmitted through an auxiliary channel. The proposed approach is evaluated for single and multiple coding-decoding, using objective perceptual measures. The experimental results for MP3 and AAC coding exhibit an efficient restoration of the attacks and a significant improvement of the audio quality.",2009,0, 7385,Discovering rules for fault management,"At the heart of the Internet revolution are global telecommunication systems. These systems, initially designed for voice traffic, provide the vast backbone bandwidth capabilities necessary for Internet traffic. They have built-in redundancy and complexity to ensure robustness and quality of service. Facilitating this requires complex fault identification and management systems. Fault identification and management is generally handled by reducing the amount of alarm events (symptoms) presented to the operating engineer through monitoring, filtering and masking. The ultimate goal is to determine and present the actual underlying fault. While en route to automated fault identification, it is useful to derive rules and techniques to attempt to present fewer symptoms with greater diagnostic assistance. With these objectives in mind, computer-assisted human discovery and human-assisted computer discovery techniques are discussed.",2001,0, 7386,Integration of InSAR Time-Series Analysis and Water-Vapor Correction for Mapping Postseismic Motion After the 2003 Bam (Iran) Earthquake,"Atmospheric water-vapor effects represent a major limitation of interferometric synthetic aperture radar (InSAR) techniques, including InSAR time-series (TS) approaches (e.g., persistent or permanent scatterers and small-baseline subset). For the first time, this paper demonstrates the use of InSAR TS with a precipitable water-vapor (InSAR TS + PWV) correction model for deformation mapping. We use MEdium Resolution Imaging Spectrometer (MERIS) near-infrared (NIR) water-vapor data for InSAR atmospheric correction when they are available. For the dates when the NIR data are blocked by clouds, an atmospheric phase screen (APS) model has been developed to estimate atmospheric effects using partially water-vapor-corrected interferograms. Cross validation reveals that the estimated APS agreed with MERIS-derived line-of-sight path delays with a small standard deviation (0.3-0.5 cm) and a high correlation coefficient (0.84-0.98).
This paper shows that a better TS of postseismic motion after the 2003 Bam (Iran) earthquake is achievable after reduction of water-vapor effects using the InSAR TS + PWV technique with coincident MERIS NIR water-vapor data.",2009,0, 7387,Misalignment between emission scan and X-ray CT derived attenuation maps causes artificial defects in attenuation corrected myocardial perfusion SPECT,"Attenuation correction increases the specificity of myocardial perfusion SPECT. However, we noticed an unusually high number of scans with defects only visible in the attenuation-corrected images. We suspected these to be false positive readings. Visual inspection using the supplied software suggested mismatch in the ventrodorsal (Y-) direction between SPECT images and transmission maps as an explanation. As the fusion tool only allows for a coarse grading, we wrote software to quantify the mismatch. A phantom study was done to verify that the observed mismatch can cause the defect patterns visible in the attenuation-corrected images. 25 patients who showed the most pronounced artifact were chosen for re-alignment and re-evaluation. Overall, the defects in the attenuation-corrected images got less intense (15/25) or even vanished (6/25) after re-aligning emission images and transmission maps. In response to our complaints, the vendor replaced the support rollers, which should prevent the bed from deflecting, with a re-engineered, more robust version. No more clinically relevant artefacts were observed after this modification. Evaluation of another 28 probably-normal patients showed the mean mismatch between emission and transmission scan to be significantly reduced. We conclude that great care has to be taken to ensure correct alignment of the scans even in a dual-modality imaging device. Bed deflection can be a major source of misalignment and artifacts.",2004,0, 7388,The adaptive agent architecture: achieving fault-tolerance using persistent broker teams,"Brokered multi-agent systems can be incapacitated and rendered non-functional when the brokers become inaccessible due to failures that can occur in any distributed software system. We propose that the theory of teamwork can be used to specify robust brokered architectures that can recover from broker failures, and we present the adaptive agent architecture (AAA) to show the feasibility of this approach. The previous teamwork theory based on joint intentions assumes that team members remain in a team as long as the team exists. We extend this theory to allow dynamic broker teams whose members can change with time. We also introduce a theory of restorative maintenance goals that enables the brokers in an AAA broker team to start new brokers and recruit them to the broker team. As a result, an AAA-based multi-agent system can maintain a specified number of functional brokers in the system despite broker failures, thus effectively becoming a self-healing system.",2000,0, 7389,Detecting heap smashing attacks through fault containment wrappers,"Buffer overflow attacks are a major cause of security breaches in modern operating systems. Not only are overflows of buffers on the stack a security threat, overflows of buffers kept on the heap can be too. A malicious user might be able to hijack the control flow of a root-privileged program if the user can initiate an overflow of a buffer on the heap when this overflow overwrites a function pointer stored on the heap.
The paper presents a fault-containment wrapper which provides effective and efficient protection against heap buffer overflows caused by C library functions. The wrapper intercepts every function call to the C library that can write to the heap and performs careful boundary checks before it calls the original function. This method is transparent to existing programs and does not require source code modification or recompilation. Experimental results on Linux machines indicate that the performance overhead is small.",2001,0, 7390,An empirical study on bug assignment automation using Chinese bug data,"Bug assignment is an important step in bug life-cycle management. In large projects, this task would consume a substantial amount of human effort. To compare with the previous studies on automatic bug assignment in FOSS (free/open source software) projects, we conduct a case study on a proprietary software project in China. Our study consists of two experiments of automatic bug assignment, using the Chinese text and the other, non-text information of the bug data, respectively. Based on the text data of the bug repository, the first experiment uses SVM to predict bug assignments and achieves accuracy close to that of human triagers. The second one explores the usefulness of non-text data in making such predictions. The main results from our study include that text data are the most useful data in the bug tracking system for triaging bugs, and that automation based on text data can effectively reduce the manual effort.",2009,0, 7391,A Framework of Bug Reporting System Based on Keywords Extraction and Auction Algorithm,"Bug reporting is becoming a main approach to improving and perfecting a complicated piece of software, but analyzing bug reports is a tedious and time-consuming job, especially when the number of bug reports is huge and many duplicate reports are mixed within the incoming bug reports. In this paper, we present a framework to automatically triage and detect duplicate bug reports by keyword extraction, combine the existing related reports to form a more integrated and complete bug report, and then assign the report to an appropriate developer based on auction rules and the developer's experience. After having fixed the bug, the developer can submit some new keywords related to the fixed bug to the system, with which we can complement the keyword repository.",2010,0, 7392,Perspectives on bugs in the Debian bug tracking system,"Bugs in Debian differ from regular software bugs. They are usually associated with packages, instead of software modules. They are caused and fixed by source package uploads instead of code commits. The majority are reported by individuals who appear in the bug database once, and only once. There also exists a small group of bug reporters with over 1,000 bug reports each to their name. We also explore our idea that a high bug-frequency for an individual package might be an indicator of popularity instead of poor quality.",2010,0, 7393,"Fault-tolerant hard-real-time communication of dynamically reconfigurable, distributed embedded systems","Building up hard-real-time networks for distributed systems with dynamic structures poses severe design challenges. As the set of nodes varies, the use of protocols that require a complete setup with every modification becomes prohibitive. This paper presents a new concept (called TrailCable) that aims at increasing the flexibility in building up hard-real-time communication systems.
The concept consists of an extendable network made up of point-to-point connections, with each node acting as a router with an integrated communication scheduler. The protocol offers the possibility of using unallocated bandwidth for periodic data transmissions. Moreover, functionalities that guarantee that the communication channels are fault-tolerant are also included. Through the collection of these properties, the protocol described in this paper is well adapted to highly dynamic, distributed real-time systems. As a case study, this new communication protocol is applied to an innovative railway system, the so-called RailCab.",2005,0, 7394,The application of embedded network communication to remote fault diagnosis system,"By contrasting and analyzing the existing network communication methods in remote fault diagnosis systems, this paper introduces a low-cost and reliable system plan including embedded network communication and data acquisition. In the system, simply by adding an embedded network communication interface to the existing electromechanical equipment, together with serial communication, network communication and some other technologies, users can collect data from on-site equipment into the public platform for remote fault diagnosis and let the public diagnosis platform carry out remote inspection and fault diagnosis of multiple users' electromechanical equipment.",2002,0, 7395,Analysis and Design for a Novel Single-Stage High Power Factor Correction Diagonal Half-Bridge Forward AC–DC Converter,"By means of component placement, the buck-boost and diagonal half-bridge forward converters are combined to create a novel single-stage high power factor correction (HPFC) diagonal half-bridge forward converter. When both the PFC cell and the dc-dc cell operate in DCM, the proposed converter can achieve HPFC and lower voltage stress on the bulk capacitor. The circuit analysis of the proposed converter operating in DCM+DCM mode is presented. In order to design controllers for the output voltage regulation, the ac small-signal model of the proposed converter is derived by the averaging method. Based on the derived model, the proportional integral (PI) controller and minor-loop controller are then designed. The simulation and experimental results show that the proposed converter with the minor-loop controller has faster output voltage regulation than that with the PI controller despite the variations of line voltage and load. Finally, a 100-W prototype of the proposed ac-dc converter is implemented and the theoretical result is experimentally verified.",2006,0, 7396,Application of Computations with Calculation Error Exclusion for Computation Geometry Algorithms,"Calculation errors in floating-point computation are considered as a source of incorrect results in computational geometry algorithms. Methods of decreasing the influence of such errors on the algorithms are discussed. Computations with calculation error exclusion are proposed as a way of eliminating such errors. Computations with calculation error exclusion are defined as a kind of computation where the results of certain arithmetic operations can be represented exactly (computations in rational numbers, for example). An algorithm for finding cases where floating-point computation causes incorrect results is proposed.",2009,0, 7397,Soft error propagation in floating-point programs,"As technology scales, VLSI performance has experienced exponential growth.
As feature sizes shrink, however, we will face new challenges such as soft errors (single-event upsets) in maintaining the reliability of circuits. Recent studies have tried to address soft errors with error detection and correction techniques such as error-correcting codes and redundant execution. However, these techniques come at the cost of additional storage or lower performance. In this paper, we present a different approach to addressing soft errors. We start by building a quantitative understanding of error propagation in software and propose a systematic evaluation of the impact of bit flips caused by soft errors on floating-point operations. Furthermore, we introduce a novel model to deal with soft errors. More specifically, we assume soft errors have occurred in memory and study how the errors will manifest in the results of programs. Therefore, some soft errors can be tolerated if the error in the results is smaller than the intrinsic inaccuracy of floating-point representations or within a predefined range. We focus on analyzing error propagation for floating-point arithmetic operations. Our approach is motivated by interval analysis. We model the rounding effect of floating-point numbers, which enables us to simulate and predict the error propagation for single floating-point arithmetic operations for specific soft errors. In other words, we model and simulate the relation between the bit flip rate, which is determined by soft errors in hardware, and the error of floating-point arithmetic operations. The simulation results enable us to tolerate certain types of soft errors without expensive error detection and correction processing.",2010,0, 7398,Dynamic tracing screening system's research of the electronic key's surface defects,"As machine vision finds increasingly wide application in industry, this paper proposes the dynamic tracing automatic screening (DTAS) algorithm. Aimed at inspecting the surface of electronic buttons, the DTAS algorithm can judge whether a key is damaged or not. The detection process uses an adaptive image-rotation algorithm, a threshold method, a variance method, and a region-boundary method. The experimental results show that DTAS is feasible and simple, its localization is precise, its noise immunity is very strong, its computation is very fast, and its real-time performance is good when detecting electronic buttons.",2010,0, 7399,Enhancing application robustness through adaptive fault tolerance,"As the scale of high performance computing (HPC) continues to grow, application fault resilience becomes crucial. To address this problem, we are working on the design of an adaptive fault tolerance system for HPC applications. It aims to enable parallel applications to avoid anticipated failures via preventive migration and, in the case of unforeseeable failures, to minimize their impact through selective checkpointing. Both prior and ongoing work are summarized in this paper.",2008,0, 7400,Dependable Network-on-Chip Router Able to Simultaneously Tolerate Soft Errors and Crosstalk,"As technology scales down into the deep sub-micron domain, more IP cores are integrated on the same die and new communication architectures are used to meet performance and power constraints. However, the same technological advances make devices and interconnects more sensitive to new types of malfunctions and failures, such as crosstalk and transient faults.
This paper proposes fault-tolerant techniques to protect NoC routers against the occurrence of soft errors and crosstalk at the same time, with minimum area and performance overhead. Experimental results show that a cost-effective protection alternative can be achieved by the combination of error correction codes and time redundancy techniques.",2006,0, 7401,Nonstationary Motor Fault Detection Using Recent Quadratic Time-Frequency Representations,"As the use of electric motors increases in the aerospace and transportation industries, where operating conditions continuously change with time, fault detection in electric motors has been gaining importance. Motor diagnostics in a nonstationary environment is difficult and often needs sophisticated signal processing techniques. In recent times, a plethora of new time-frequency distributions has appeared, which are inherently suited to the analysis of nonstationary signals while offering superior frequency resolution characteristics. The Zhao-Atlas-Marks distribution is one such distribution. This paper proposes the use of these new time-frequency distributions to enhance nonstationary fault diagnostics in electric motors. One common myth has been that quadratic time-frequency distributions are not suitable for commercial implementation. This paper also addresses this issue in detail. Optimal discrete-time implementations of some of these quadratic time-frequency distributions are explained. These time-frequency representations have been implemented on a digital signal processing platform to demonstrate that the proposed methods can be implemented commercially.",2008,0, 7402,Using Redundant Threads for Fault Tolerance of OpenMP Programs,"With the wide application of multi-core processor architectures in high performance computing, fault tolerance for shared memory parallel programs has become a hot research topic. For years, checkpointing has been the dominant fault tolerance technology in this field, and recently much research has been devoted to it. However, for programs that deal with large amounts of data, checkpointing may induce massive I/O transfers, which adversely affect scalability. To deal with this problem, this paper proposes a fault tolerance approach for shared memory parallel programs that makes use of redundancy. Our scheme avoids saving and restoring computational state during the program's execution and hence involves no I/O operations, so it presents an explicit scalability advantage over checkpointing. In this paper, we introduce our approach and the related compiler tool in detail, and give experimental evaluation results.",2010,0, 7403,Design and analysis of fault tolerant architectures by model weaving,"Aspect-oriented modeling is proposed to design the architecture of fault tolerant systems. Notations are introduced that support the separate and modularized design of functional and dependability aspects in UML class diagrams. This notation designates sensitive parts of the architecture and selected architecture patterns that implement common redundancy techniques. A model weaver is presented that constructs both the integrated model of the system and the dependability model on the basis of the analysis sub-models attached to the architecture patterns.
In this way fault tolerance mechanisms can be systematically analyzed when they are integrated into the system.",2005,0, 7404,Error detection by selective procedure call duplication for low energy consumption,"As commercial off-the-shelf (COTS) components are used in the system-on-chip (SoC) design technique that is widely applied, from cellular phones to personal computers, it is difficult to modify the hardware design to implement hardware fault-tolerant techniques and improve system reliability. Two major concerns of this paper are to: (a) improve system reliability by detecting transient errors in hardware, and (b) reduce energy consumption by minimizing error-detection overhead. The objective of this new technique, selective procedure call duplication (SPCD), is to keep the system fault-secured (preserve data integrity) in the presence of transient errors, with minimum additional energy consumption. The basic approach is to duplicate computations and then to compare their results to detect errors. There are 3 choices for duplicate computation: (1) duplicating every statement in the program and comparing results, (2) re-executing procedures through duplicated procedure calls, and comparing results, and (3) re-executing the whole program, and comparing the final results. SPCD combines choices (1) and (2). For a given program, SPCD analyzes the procedure-call behavior of the program, and then determines which procedures can have duplicated statements [choice (1)] and which procedure calls can be duplicated [choice (2)] to minimize energy consumption with reasonable error-detection latency. Then, SPCD transforms the original program into a new program that can detect errors with minimum additional energy consumption by re-executing the statements or procedures. SPCD was simulated with benchmark programs; it requires less than 25% additional energy for error detection compared with previous techniques that do not consider energy consumption.",2002,0, 7405,Effective Fault Injection Model for Variant Network Traffic,"As cyber attacks by malicious users exploiting software vulnerabilities increase, fuzz testing is emerging as an effective way to find security bugs. Fuzz testing is mainly used to verify the robustness of software by injecting random or semi-valid data into areas such as network ports, APIs, and user interfaces. In fuzz testing of network software, repeated packet transmission is necessary, and all network fuzz tools depend on a packet recording scheme for it. This characteristic causes a large overhead when network traffic varies while performing the same task. This paper identifies four disadvantages of general network fuzzers based on the packet recording and replaying scheme. Their most expensive cost is coding a routine to handle the variant traffic of each recurring communication. By proposing a fuzz model that injects faults into packets in real time, we address the weaknesses of existing network fuzz tools. Finally, we evaluate the implemented tool, named RINF, against a Windows RPC-based service and show that it works effectively compared with existing tools.",2007,0, 7406,Modeling soft-error susceptibility for IP blocks,"As device geometries continue to shrink, single event upsets are becoming of concern to a wider spectrum of system designers. These ""soft errors"" can be a nuisance or catastrophic, depending on the application, but they must be understood and their effects budgeted for.
Ultimately, experimental measurement is needed to quantify soft error rates, but after-the-fact measurement is too late to make changes. This paper shows a methodology that can be used to estimate the soft error properties of individual IP blocks by using a combination of critical charge calculations and experimental data.",2005,0, 7407,Power-efficient and fault-tolerant circuits and systems,"As devices become smaller, circuits and systems are more vulnerable to soft errors caused by radiation and other environmental upsets. Fault tolerance measured by mean time to failure (MTTF) is desired, especially if no extra area, power, or delay is introduced and little change to the existing design flow is required. Using FPGA as a testbed, this paper first presents fault tolerance techniques applying (1) logic don't cares and path re-convergence (ROSE) and (2) in-place logic re-writing (IPR). Both increase MTTF by 2X with little or no overhead. In particular, IPR does not change circuit placement and routing, and can be readily used with the existing industrial design flow. It also leads to a self-evolution method to enhance fault tolerance for FPGA-based circuits and systems. The ideas presented in the paper can be extended to handle regular logic fabrics, which are natural to nano-technologies and are also preferred by design for manufacturability (DFM) in scaled CMOS technologies.",2009,0, 7408,The research on fault diagnosis of distribution network based on rough set theory,"Based on rough set theory, a new distribution network fault diagnosis approach is proposed to deal with imperfect alarm signals caused by malfunction or failed operation of protection relays and circuit breakers, or by errors in the communication equipment. Because rough set theory can effectively handle imprecise problems without any prior information except the data set itself, a decision table including all kinds of fault cases is established by considering the signals of protection relays and circuit breakers, and the approach can extract diagnosis rules directly from the set of fault samples. The inherent redundancy in the alarm information is exposed. Finally, a practical fault diagnosis program for a typical distribution network is developed using the Visual C# language on the Visual Studio .NET development platform. The result shows the validity of the proposed method.",2009,0, 7409,Integration of Multivariate Control Charts and Neural Networks to Determine the Faults of Quality Characteristic(s) in a Multivariate Process,"Because of advanced technology, there are many aspects of quality characteristics in a product. Typical univariate statistical process control (SPC) charts may not be suitable for monitoring processes that have multiple quality characteristics. As a consequence, multivariate control charts have been developed to simultaneously monitor multiple quality characteristics of a process. As with a univariate SPC chart, the process is hypothesized to be out of control when a signal is triggered by a multivariate SPC chart. The problem is that it is difficult to interpret the signal of a multivariate SPC chart due to the multiple quality characteristics of a process; that is, to determine which quality characteristic(s) contribute(s) to this out-of-control signal. If the characteristic(s) at fault can be quickly and correctly determined, the corresponding remedial actions can be taken to tune the process in time.
Therefore, this identification is a very important issue for industrial processes.",2007,0, 7410,Bayesian fault diagnosis in large-scale measurement systems,"Because of both increasing complexity and increasing geographic distribution, faults in measurement systems are becoming more troublesome to diagnose in all life-cycle phases: manufacturing, deployment, and operation. This paper considers which features are necessary in an automatic diagnosis system for large-scale measurement systems. The paper describes MonteJade, a diagnosis system designed with these essential features in mind. Examples are given of MonteJade diagnosing faults and determining the next debugging actions to perform, in a measurement system constructed from more than one hundred components.",2004,0, 7411,Pattern Recognition System of Optical Fiber Fusion Defect Based on Fuzzy Neural Network in EPON,"Because of its many advantages, the optical fiber network is widely applied in high-tech fields. However, the existence of optical fiber fusion defects degrades the quality of message transmission. In this paper, a defect recognition system is established based on a compensatory fuzzy neural network using wavelets and a fast algorithm. The 'energy-defect' method is first used to extract eigenvalues; then the defect class is recognized by the fuzzy neural network. The simulation results show that the model established using this algorithm has higher efficiency and a smaller probability of becoming trapped in a local minimum during training, so it steadily approaches the attainable precision and recognizes defect classes accurately.",2009,0, 7412,Application of fuzzy neural network in optical fiber fusion defect recognition system by AS1773 protocol,"Because of its many advantages, the optical fiber network is widely applied in high-tech fields. However, the existence of optical fiber fusion defects degrades the quality of message transmission. In this paper, a defect recognition system is established based on a compensatory fuzzy neural network using wavelets and a fast algorithm. The 'energy-defect' method is first used to extract eigenvalues; then the defect class is recognized by the fuzzy neural network. The simulation results show that the model established using this algorithm has higher efficiency and a smaller probability of becoming trapped in a local minimum during training, so it steadily approaches the attainable precision and recognizes defect classes accurately. The experimental results verify that this system has a very high accuracy rate in forecasting fusion defects and satisfies the demands of engineering applications based on the AS1773 protocol, providing a technical guarantee for further realizing the optical fiber fusion quality monitoring system.",2010,0, 7413,Modeling and Simulation of Multi-operation Microcode-Based Built-In Self Test for Memory Fault Detection and Repair,"As on-chip embedded memory area increases and memory density grows, the problem of faults is growing exponentially. Newer test algorithms have been developed for detecting these new faults. These new March algorithms have a much larger number of operations than earlier March algorithms. An architecture implementing these new algorithms is presented here. This is illustrated by implementing the newly defined March SS algorithm.
Along with the fault diagnosis, a word-oriented memory Built-In Self Repair methodology, which supports on-the-fly memory repair, is employed to repair the faulty locations indicated by the presented MBIST controller.",2010,0, 7414,Spread programming using orthogonal code for alleviating bit errors of NAND flash memory,"As flash memory moves to higher densities, the bit error rate becomes more important. In this paper, we propose Direct Sequence Spread Spectrum (DSSS) based spread programming that makes flash-based storage tolerant to errors.",2010,0, 7415,Adaptive Interaction Fault Location Based on Combinatorial Testing,"Combinatorial testing aims to detect interaction faults, which are triggered by interactions among parameters in a system, by covering specific combinations of parameter values. Most work on combinatorial testing focuses on detecting such interaction faults rather than locating them. Based on the model of the interaction fault schema, in which an interaction fault is described as a minimum fault schema and several corresponding parent-schemas, we propose an iterative adaptive interaction fault location technique for combinatorial testing. In order to locate interaction faults detected in combinatorial testing, the technique utilizes a delta debugging strategy to filter suspicious schemas by generating and running additional test cases iteratively. The properties of adaptive interaction fault location techniques, including both recall and precision, are also analyzed in this paper. Analytical results suggest that high scores in both recall and precision are guaranteed, meaning that the technique can provide efficient guidance for applications of combinatorial testing.",2010,0, 7416,"Development of a low cost, fault tolerant, and highly reliable command and data handling computer (PulseTM)","Command and data handling computers have been designed to manage the many different remote interface units within the satellite bus platform. In this distributed architecture, command and data handling requires low-throughput processors (1-4 MIPS) to pass data to other units or for download to ground stations for further processing. The advent of very large radiation-hardened ASICs has enabled the application of the powerful processing of the RHPPc (based on the Motorola-licensed PowerPC 603e) with a simplified IEEE-1394 backplane bus to provide a highly reliable and cost-competitive centralized command and data handling sub-system as described. This robust architecture is tailorable and easily modified to meet the varying needs of satellite and space transportation applications. By using a commercially compliant processor (the RHPPc is fully compliant with the instruction set of the PowerPC 603e processor), software and its tools, which are one of the most complex, high-risk, and expensive undertakings of the system architecture for a satellite bus controller, become a low-risk design issue and much more cost effective. An extensive array of Commercial Off The Shelf (COTS) software tools is currently available for the PowerPC processor family, rendering the software development environment associated with PulseTM a relatively low impact on the overall program, thus reducing the overall program recurring and non-recurring cost.
PulseTM supports most COTS operating systems, with the current Board Support Package (both basic and custom) designed to be VxWorks compliant.",2000,0, 7417,Hypervisor-Based Virtual Hardware for Fault Tolerance in COTS Processors Targeting Space Applications,"Commercial off-the-shelf processors are becoming mandatory in space applications to satisfy the ever-growing demand for on-board computing power. As a result, architectures able to withstand the harshness of the space environment are needed to cope with the errors that may affect such processors, which are not specifically designed for use in space. Besides design and implementation costs, validation of the obtained architecture is a very cost- and time-consuming operation. In this paper we propose an architecture for quickly developing dependable embedded systems using time redundancy. The main novelty of the approach lies in the use of a hypervisor for seamlessly implementing the time redundancy, consistency checking, and temporal and spatial segregation of programs that are needed to guarantee safe execution of the application software. The proposed architecture needs to be validated only once; then, provided that the same hypervisor is available for different hardware platforms, it can be deployed without the need for re-validation. We describe a prototypical implementation of the approach and provide experimental data that assess its effectiveness.",2010,0, 7418,Error probability analysis of IP Time To Live covert channels,"Communication is not necessarily made secure by the use of encryption alone. The mere existence of communication is often enough to raise suspicion and trigger investigative actions. Covert channels aim to hide the very existence of the communication. The huge amount of data and vast number of different protocols in the Internet make it ideal as a high-bandwidth vehicle for covert communications. A number of researchers have proposed different techniques to encode covert information into the IP time to live (TTL) field. This is a noisy covert channel since the TTL field is modified between covert sender and receiver. For computing the channel capacity it is necessary to know the probability of channel errors. In this paper we derive analytical solutions for the error probabilities of the different encoding schemes. We simulate the different encoding schemes and compare the simulation results with the analytical error probabilities. Finally, we compare the performance of the different encoding schemes for an idealised error distribution and an empirical TTL error distribution obtained from real Internet traffic.",2007,0, 7419,Fault-adaptive control: a CBS application,"Complex control applications require a capability for accommodating faults in the controlled plant. Fault accommodation involves the detection and isolation of faults, and taking an appropriate control action that mitigates the effect of the faults and maintains control. This requires the integration of fault diagnostics with control, in a feedback loop. The paper discusses how a generic framework for building fault-adaptive control systems can be created using a model-based approach. Instances of the framework are examples of complex CBSs (computer based systems) that have novel capabilities.",2001,0, 7420,Scalable fault tolerant architecture for complex event processing systems,"Complex Event Processing (CEP) is one of the fastest emerging fields in computer science.
Modern CEP systems, which are critical for organizations to maintain their business processes and to achieve excellence through operational responsiveness, should be scalable and fault tolerant. The research project epZilla aims to build a highly scalable, highly available, real-time, fault-tolerant distributed architecture for CEP systems.",2010,0, 7421,Runtime Diversity against Quasirandom Faults,"Complex software-based systems that have to be highly reliable are increasingly confronted with fault types whose corresponding failures appear to be random, although they have a systematic cause. This paper introduces and defines these ""quasirandom"" faults. They have certain inconvenient common properties, such as being difficult to reproduce, their strong state dependence, and their likelihood of being found in operational systems after testing. However, these faults are also likely to be detected or tolerated with the help of diversity in software, and even low-level diversity, which can be achieved during runtime, is a promising means against them. The results suggest that runtime diversity can improve software reliability in complex systems.",2009,0, 7422,Investigating the accuracy of defect estimation models for individuals and teams based on inspection data,"Defect content estimation approaches, based on data from inspection, estimate the total number of defects in a document to evaluate the quality of the product and the development process. Objective estimation approaches require a high-quality measurement process, potentially suffer from overfitting, and may underestimate the number of defects for inspections that yield few data points. Reading techniques for inspection, which focus the attention of the inspectors on particular parts of the inspected document, may influence their subjective estimates. In this paper we consider approaches to aggregating the subjective estimates of individual inspectors in a team to alleviate individual bias. We evaluate these approaches with data from an experiment in a university environment where 177 inspectors in 30 teams inspected a software requirements document. The main findings of the experiment were that reading techniques considerably influenced the accuracy of inspector estimates. Further, team estimates improved both estimation accuracy and variation compared to individual estimates and to one of the best empirically evaluated objective estimation approaches.",2003,0, 7423,Computer-aided fault to defect mapping (CAFDM) for defect diagnosis,"Defect diagnosis in random logic is currently done using the stuck-at fault model, while most defects seen in manufacturing result in bridging faults. In this work we use physical design and test failure information combined with bridging and stuck-at fault models to localize defects in random logic. We term this approach computer-aided fault to defect mapping (CAFDM). We build on top of the existing mature stuck-at diagnosis infrastructure. The performance of the CAFDM software was tested by injecting bridging faults into samples of a streaming audio controller chip and comparing the predicted defect locations and layers with the actual values. The correct defect location and layer was predicted in all 9 samples for which scan-based diagnosis could be performed.
The experiment was repeated on production samples that failed scan test, with promising results.",2000,0, 7424,Silicon Wafer Defect Extraction Based on Morphological Filter and Watershed Algorithm,"Defect extraction techniques for silicon wafer surface defects are studied. We design a new filter based on multiple structuring elements and suggest an improved marker-based and region-merging watershed. To begin with, a filter that generalizes the close-opening and open-closing filters, based on the morphological filter with multiple structuring elements, is introduced to eliminate noise and to simplify the image and the morphological gradient image while preserving details. Then, in order to reduce the over-segmentation of the watershed algorithm, this paper suggests an improved marker-based and region-merging method, in which a criterion based on region average gray value and edge strength is used in the merging operation and has a good effect on segmentation. Finally, the improved watershed algorithm is applied to the filtered gradient image to obtain the defect contours. The experiments show that this method can eliminate noise and accurately extract locations and closed region contours, which lays a good foundation for defect feature extraction and selection.",2008,0, 7425,Using occurrence properties of defect report data to improve requirements,"Defect reports generated for faults found during testing provide a rich source of information regarding problematic phrases used in requirements documents. These reports indicate that faults often derive from instances of ambiguous, incorrect or otherwise deficient language. In this paper, we report on a method combining elements of linguistic theory and information retrieval to guide the discovery of problematic phrases throughout a requirements specification, using defect reports and correction requests generated during testing to seed our detection process. We found that phrases known from these materials to be problematic have occurrence properties in requirements documents that both allow the direction of resources to prioritize their correction, and generate insights characterizing more general locations of difficulty within the requirements. Our findings lead to some recommendations for more efficiently and effectively managing certain natural language issues in the creation and maintenance of requirements specifications.",2005,0, 7426,Testing and defect tolerance: a Rent's rule based analysis and implications on nanoelectronics,"Defect-tolerant architectures will be essential for building economical gigascale nanoelectronic computing systems that remain functional in the presence of a significant number of defects. The central idea underlying a defect-tolerant configurable system is to build the system out of partially perfect components, detect the defects, and configure the available good resources using software. In this paper we discuss the implications of defect tolerance on power, area, delay, and other relevant parameters for computing architectures. We present a Rent's rule based abstraction of testing for VLSI systems and evaluate the redundancy requirements for observability. It is shown that for a very high interconnect defect density, a prohibitively large number of redundant components is necessary for observability, and this has an adverse effect on system performance.
Through a unified framework based on a priori wire length estimation and Rent's rule, we illustrate the hidden cost of supporting such an architecture.",2004,0, 7427,Minimising the Context Prediction Error,"Context prediction mechanisms proactively provide information on future contexts. Due to this knowledge, novel applications become possible that provide services with proactive knowledge to users. The most serious problem of context prediction mechanisms lies in a basic property of prediction itself: a prediction is always a guess. Since erroneous predictions may cause the application to behave inadequately, prediction errors have to be minimised. The accuracy of prediction is seriously affected by the reliability of the context data that is utilised by the method. We study two paradigms for context prediction and compare their potential prediction accuracy. We show that the designer of context prediction architectures has to choose wisely which prediction paradigm to follow in order to maximise the accuracy of the whole architecture. We also introduce a simulation environment and present simulation results that support the gained insights regarding context prediction.",2007,0, 7428,Algorithms for efficient symbolic detection of faults in context-aware applications,"Context-aware and adaptive applications running on mobile devices pose new challenges for the verification community. Current verification techniques are tailored for different domains (mostly hardware), and the kinds of faults that are typical of applications running on mobile devices are difficult (or impossible) to encode using the patterns of ""traditional"" verification domains. In this paper we present how techniques similar to the ones used in symbolic model checking can be applied to the verification of context-aware and adaptive applications. In more detail, we show how a model of a context-aware application can be encoded by means of ordered binary decision diagrams, and we introduce symbolic algorithms for the verification of a number of properties.",2008,0, 7429,Computing Maximum Error and Reduced Threshold of Mining Frequent Patterns in Data Stream,"Controlling the space consumption and improving the precision of mining results are two challenges of frequent pattern mining in data streams. The parameter A, which denotes the maximum error, is widely used to reduce the space consumption. In this paper, we first propose a computational strategy for identifying the maximum error, consisting of resource awareness and polynomial approximation, and then propose a reduced threshold for improving mining accuracy.",2009,0, 7430,Coverage of a microarchitecture-level fault check regimen in a superscalar processor,"Conventional processor fault tolerance based on time/space redundancy is robust but prohibitively expensive for commodity processors. This paper explores an unconventional approach to designing a cost-effective fault-tolerant superscalar processor. The idea is to engage a regimen of microarchitecture-level fault checks. A few simple microarchitecture-level fault checks can detect many arbitrary faults in large units, by observing microarchitecture-level behavior and anomalies in this behavior. Previously, we separately proposed checks for the fetch and decode stages, rename stage, and issue stage of a contemporary superscalar processor. While each piece hinted at the possibility of a complete regimen - for an overall fault-tolerant superscalar processor - this totality was not explored.
This paper provides the culmination by building a full regimen into a superscalar processor. We show for the first time that the regimen-based approach provides substantial coverage of an entire superscalar processor. Analysis reveals vulnerable areas which should be the focus for regimen additions.",2008,0, 7431,ConvexFit: an optimal minimum-error convex fitting and smoothing algorithm with application to gate-sizing,"Convex optimization has gained popularity due to its capability to reach the global optimum in a reasonable amount of time. Convexity is often ensured by fitting the table data into analytically convex forms such as posynomials. However, fitting the look-up tables into posynomial forms with minimum error is itself not necessarily a convex optimization problem, and hence excessive fitting errors may be introduced. In this paper, we propose to directly adjust the look-up table values into a numerically convex look-up table without an explicit analytical form. We show that numerically ""convexifying"" the table data with minimum perturbation can be formulated as a convex semidefinite optimization problem and hence optimality can be reached in polynomial time. Without an explicit form limitation, we find that the fitting error is significantly reduced while the convexity is still ensured. As a result, convex optimization algorithms can still be applied. Furthermore, we also develop a ""smoothing"" algorithm to make the table data smooth and convex to facilitate the optimization process. Results from extensive experiments on industrial cell libraries demonstrate that our method reduces fitting error by 30X over a well-developed posynomial fitting algorithm. Its application to circuit tuning is also presented.",2005,0, 7432,Reaching efficient fault-tolerance for cooperative applications,"Cooperative applications are widely used, e.g. as parallel calculations or distributed information processing systems. While such applications meet user demands and offer a performance improvement, the susceptibility to faults of every computer node used is raised. Often a single fault may cause a complete application failure. On the other hand, the redundancy in distributed systems can be utilized for fast fault detection and recovery. Therefore, we followed an approach that is based on duplication of each application process to detect crashes and faulty functions of single computer nodes. We concentrate on two aspects of efficient fault tolerance: fast fault detection and recovery without delaying the application progress significantly. The contribution of this work is, first, a new fault detection protocol for duplicated processes. Secondly, we enhance a roll-forward recovery scheme so that it is applicable to a set of cooperative processes in conformity with the protocol.",2000,0, 7433,Correction for desynchronization of EEG and fMRI clocks through data interpolation optimizes artifact reduction,"Co-registration of EEG (electroencephalogram) and fMRI (functional magnetic resonance imaging) remains a challenge due to the large artifacts induced on the EEG by the MR (magnetic resonance) sequence gradient and RF pulses. We present an algorithm, based on the average-subtraction method, which is able to correct EEG data for gradient and RF pulse artifacts. We optimized artifact reduction by correcting the misalignment of EEG and fMRI data samples, resulting from the asynchronous sampling of EEG and fMRI data, through interpolation of EEG data.
A clustering algorithm is proposed to account for the variability of the pulse artifact. Results show that the algorithm was able to preserve the spontaneous brain activity while removing gradient and pulse artifacts with only a subtraction of selectively averaged data. Pulse artifact clustering showed that most of the variability was due to the time jitter of the pulse artifact markers. We show that artifact reduction by average-subtraction is optimized by interpolating the EEG data to correct for asynchronously sampled EEG and fMRI data.",2007,0, 7434,An effective fault-tolerant routing methodology for direct networks,"Current massively parallel computing systems are being built with thousands of nodes, which significantly increases the probability of failure. M. E. Gomez proposed a methodology to design fault-tolerant routing algorithms for direct interconnection networks. The methodology uses a simple mechanism: for some source-destination pairs, packets are first forwarded to an intermediate node, and later, from this node to the destination node. Minimal adaptive routing is used along both subpaths. For those cases where the methodology cannot find a suitable intermediate node, it combines the use of intermediate nodes with two additional mechanisms: disabling adaptive routing and using misrouting on a per-packet basis. While the combination of these three mechanisms tolerates a large number of faults, each one requires adding some hardware support in the network and also introduces some overhead. In this paper, we perform an in-depth detailed analysis of the impact of these mechanisms on network behaviour. We analyze the impact of the three mechanisms separately and combined. The ultimate goal of this paper is to obtain a suitable combination of mechanisms that is able to meet the trade-off between fault-tolerance degree, routing complexity, and performance.",2004,0, 7435,Hierarchical Analysis of Short Defects between Metal Lines in CMOS IC,"This paper proposes a new hierarchical approach to defect-oriented testing of CMOS circuits. The method is based on critical area extraction for identifying the possible shorted pairs of nets on the basis of the chip layout information, combined with logic-level test pattern generation. The novel contributions of the paper are a new bridging fault simulator and a test pattern generator, which are able to handle defects creating feedback in the circuit. As a preprocessing step, a combined stuck-at test set from two different test pattern generators implementing alternative strategies (pseudorandom and deterministic) was created. Nevertheless, many short defects were not covered by this extended stuck-at approach. Analyses carried out in this paper show that the stuck-at tests fail to cover up to 4% of the shorts (both testable and untestable). The test coverage (fault efficiency) can be increased by the new generator by up to 0.4% in comparison to the full stuck-at test. Layout analysis for a set of benchmarks has been performed. The experiments indicate how the number of bridging faults of non-zero probability depends on the circuit size.",2008,0, 7436,Software-Based Non-invasive Geometric Correction of Projector,"Curved screens are often used in virtual reality vision systems, but distortion occurs when projecting onto a curved surface. Some special projectors and equipment have been invented to solve this problem.
Instead of using such expensive hardware equipment, we propose in this paper a software-based, non-invasive geometric correction method, which can interactively rectify the distorted image projected on any curved screen without modifying source code. For applications based on OpenGL, we make a new opengl32.dll to intercept the OpenGL API functions (especially SwapBuffer) in the Windows operating system's library search path. Most of our rectification is implemented in the function we wrote to substitute for the original SwapBuffer. Considering the different demands of different users, we leave the picture quality and the response time of the system controllable.",2007,0, 7437,Fault Region Localization: Product and Process Improvement Based on Field Performance and Manufacturing Measurements,"Customer feedback in the form of warranty/field performance is an important direct indicator of the quality and robustness of a product. Linking warranty information to manufacturing measurements can identify key design and process variables that are related to warranty failures. Warranty data have traditionally been used in reliability studies to determine failure distributions and warranty cost. This paper proposes a novel fault region localization methodology to link warranty failures to manufacturing measurements (hence, to design and process parameters) for diagnosing warranty failures and performing tolerance revaluation. The methodology consists of identifying relations between warranty failures and design/process variables using rough sets-based analysis on training data consisting of warranty information and manufacturing measurements. The methodology expands the rough set-based analysis by introducing parameters for the inclusion of noise and uncertainty of warranty data classes. Based on the identified parameters related to the failure, a revaluation of the original tolerances can be performed to improve product robustness. The proposed methodology is illustrated using case studies of two warranty failures from the electronics industry. Note to Practitioners: Warranty failures are indicative of the performance and robustness of the product. Warranty failures, especially those that occur early (e.g., within six months after sale), can be caused by interactions between various design and process characteristics of the individual components of the product. Due to the large number of components and the interactions between them, it is difficult to identify all of these relations during design. Furthermore, it is difficult to replicate actual product usage in the field during the design stage. The methodology proposed in this paper integrates a product's warranty failure information with measurement data collected during manufacturing, to identify relevant design and process variables related to the failures. It also identifies the warranty fault region within the original design tolerance window for the parameters. This can help in avoiding warranty failure(s) through design changes and/or tolerance revaluation. The methodology was applied in the electronics and semiconductor industries.",2006,0, 7438,Comparison of Data Mining and Neural Network Methods on Aero-engine Vibration Fault Diagnosis,"Data mining and artificial neural networks (ANN) have been extensively applied to machinery fault diagnosis. The aero-engine, as a kind of rotating machine with a complex structure and high rotating speed, has complicated vibration faults.
ANNs are a good tool for aero-engine fault diagnosis, since they have a strong ability to learn complex nonlinear functions. Data mining has the advantages of discovering knowledge from mountains of data, providing a simple way to interpret complex decision problems, and automatically extracting diagnostic rules to replace the expert's advice. This paper presents the application of the two methods to aero-engine vibration fault diagnosis and then makes a comparison between them. From this study, both methods are effective for aero-engine vibration fault diagnosis, while each of them has its individual strengths.",2007,0, 7439,Case-based reasoning: diagnosis of faults in complex systems through reuse of experience,"Case Based Reasoning (CBR) is a technique for solving problems based on experience. The technique's intuitive approach is finding increasing use in complex system diagnostics. This paper describes the principles of CBR, its benefits and some of the problems associated with the initial capture of experience data. One approach aimed at improving initial data capture using Model Based Reasoning (MBR) techniques is also introduced. A real-life implementation of CBR is discussed, describing how the technique has been applied in the diagnosis of faults on diesel-electric freight locomotives.",2000,0, 7440,A case-based reasoning approach for fault detection state in bridges assessment,"Case-based reasoning (CBR) systems use previous knowledge stored in databases to solve current issues. Nowadays, the CBR approach is widely used in different areas with good results. One such area is system diagnosis and fault detection. In this paper, we suggest a new approach, based on CBR, for fault detection in reinforced concrete bridges in polluted areas. Nowadays, bridge inspection and diagnosis are usually performed by civil engineers mainly through visual inspections, an activity involving subjective judgments and uncertainties. So far, intelligent diagnosis of bridges has been based on model approaches, like fuzzy models and neural network models. This paper aims to present a new approach based on the CBR paradigm to solve the above-mentioned issues. We suggest and implement the architecture for data gathering and database storage. Also, the CBR paradigm is tested using the jCOLIBRI platform.",2008,0, 7441,Definition and Extraction of Causal Relations for QA on Fault Diagnosis of Devices,"Causal relations in an ontology should be defined based on the inference types necessary to solve the tasks specific to the application, as well as the domain. In this paper, we present a model to define and extract causal relations for an application ontology, which is targeted, as a case study, to serve a question-answering (QA) system on fault diagnosis of electronic devices. In the first phase, causal categories are defined by identifying the generic inference patterns of QA on fault diagnosis. In the second, the semantic relations between concepts in the corpus denoting the causal categories are defined as causal relations. In the third, instances of causal relations are extracted using lexical patterns from the definitional statements of terms in the domain, and extended with information from a thesaurus.
In the evaluation by domain experts, our model shows a precision of 92.3% in classifying relations in the definition phase and a precision of 80.7% in identifying causal relations in the extraction phase.",2008,0, 7442,A Novel Red-Eye Correction Method Based on AdaBoost Algorithm for Mobile Phone and Digital Camera,"Caused by light reflected off the subject's retina, red-eye is a troublesome problem in consumer photography. Correction of red eyes without any human intervention is an important task. Some algorithms exist for red-eye detection, but almost all of them have low accuracy; in addition, they cannot support both high-pixel-count images and single red-eye cases. In this paper, a novel approach is proposed to eliminate red-eyes in digital images automatically with satisfactory results. This method first obtains the face region by the AdaBoost algorithm, then detects the red-eye in the top part of the face region, and finally corrects the red-eye in the eye region to recover the image's original color. Experiments on mobile phone and digital camera platforms show that this method can eliminate red-eye with a high accuracy of 87%, which is 7% higher than the best-known complexion-based face detection technology, and it can also support 8-megapixel images; moreover, the method has the advantages of robustness and real-time computability.",2009,0, 7443,CCD camera calibrations and projection error analysis,"CCD camera calibration experiments show different results depending on the position of the camera taking the picture of the calibration model. In this paper, four different positions of the Sony XC-55 CCD camera are taken for calibrations and their calibration results are compared. For the camera calibration, the Tsai technique is used. Our experimental results of the CCD camera calibrations show that the calibration obtained from the camera position farthest from the calibration model makes the perspective projection of 3D points closer to their measured computer image points.",2000,0, 7444,Signal processing and fault detection with application to CH-46 helicopter data,"Central to Qualtech Systems' mission is its testability and maintenance software (TEAMS) and derivatives. Many systems comprise components equipped with self-testing capability; but if the system is complex (and involves feedback, and if the self-testing itself may occasionally be faulty), tracing faults to a single or multiple causes is difficult. However, even for systems involving many thousands of components, the PC-based TEAMS provides essentially real-time system-state diagnosis. Until recently TEAMS operation was passive: its diagnoses were based on whatever data sensors could provide. Now, however, a signal-processing (SP) frontend matched to inference needs is available. Many standard signal processing primitives, such as filtering, spectrum analysis and multi-resolution decomposition are available; the SP toolbox is also equipped with a (supervised) classification capability based on a number of decision-making paradigms. This paper is about the SP toolbox. We show its capabilities, and demonstrate its performance on the CH-46 Westland data set.",2000,0, 7445,Layering Model and Fault Diagnosis Algorithm for Internet Services,"The challenges of Internet service fault management are analyzed in this paper, and a layering model is recommended. A bipartite graph is chosen as the fault propagation model (FPM) for each layer.
A window-based fault diagnosis algorithm, MAlg (multi-window algorithm), is proposed for the bipartite FPM. MAlg takes into account the correlation of adjacent time windows. As a result, it can reduce the impact of improper time window settings. Simulation results prove the validity and efficiency of MAlg.",2006,0, 7446,Critical path selection for delay fault testing based upon a statistical timing model,"Critical path selection is an indispensable step for testing of small-size delay defects. Historically, this step relies on the construction of a set of worst-case paths, where the timing lengths of the paths are calculated based upon discrete-valued timing models. The assumption of discrete-valued timing models may become invalid for modeling delay effects in the deep submicron domain, where the effects of timing defects and process variations are often statistical in nature. This paper studies the problem of critical path selection for testing small-size delay defects, assuming that circuit delays are statistical. We provide theoretical analysis to demonstrate that the new path-selection problem consists of two computationally intractable subproblems. Then, we discuss practical heuristics and their performance with respect to each subproblem. Using a statistical defect injection and timing-simulation framework, we present experimental results to support our theoretical analysis.",2004,0, 7447,An end-to-end approach for the automatic derivation of application-aware error detectors,"Critical Variable Recomputation (CVR) based error detection provides high coverage for data critical to an application while reducing the performance overhead associated with detecting benign errors. However, when implemented exclusively in software, the performance penalty associated with CVR based detection is unsuitably high. This paper addresses this limitation by providing a hybrid hardware/software tool chain which allows for the design of efficient error detectors while minimizing additional hardware. Detection mechanisms are automatically derived during compilation and mapped onto hardware where they are executed in parallel with the original task at runtime. When tested using an FPGA platform, results show that our approach incurs an area overhead of 53% while increasing execution time by 27% on average.",2009,0, 7448,Workshop on fault diagnosis and tolerance in cryptography,"Cryptographic devices are becoming increasingly ubiquitous and complex, making reliability an important design objective. Moreover, the diffusion of mobile, low-price consumer electronic equipment containing cryptographic components makes them more vulnerable to attack procedures, in particular to those based on the injection of faults. This workshop aims at providing researchers in both the dependability and cryptography communities an opportunity to start bridging the gap between fault diagnosis and tolerance techniques, and cryptography.",2004,0, 7449,Software Self-Testing of a Symmetric Cipher with Error Detection Capability,"Cryptographic devices are now implemented with various countermeasures against side-channel attacks and fault analysis. Moreover, some usual testing techniques, such as scan chains, are not allowed or are restricted due to security requirements. In this paper, we analyze the impact that error detecting schemes have on the testability of an implementation of the Advanced Encryption Standard, in particular when software-based self-test techniques are envisioned.
We show that protection schemes can improve concurrent error detection, but make initial testing more difficult.",2008,0, 7450,Error-resilient coding of 3-D graphic models via adaptive mesh segmentation,"Current coding techniques for 3-D graphic models mainly focus on coding efficiency, which makes them extremely sensitive to channel errors due to the irregular mesh structure. We introduce a new approach for error-resilient coding of arbitrary 3-D graphic models by extending the error-free constructive traversal compression scheme proposed by Li and Kuo (see MPEG-4 Tokyo Meeting, 1998, Contribution Doc. M3324, and Proc. IEEE, vol.86, p.1052-63, 1998). A 3-D mesh of an arbitrary structure is partitioned into pieces of a smaller uniform size with joint boundaries. The size of a piece is determined adaptively based on the channel error rate. The topology and geometry information of each joint boundary and each piece of a connected component is coded independently. The coded topology and the first several important bit-planes of the joint-boundary data are protected against channel errors by using the Bose-Chaudhuri-Hocquenghem error-correcting code. At the decoder, each piece is decoded and checked for channel errors. The decoded joint-boundary information is used to perform data recovery and error concealment on the corrupted piece data. All decoded pieces are combined together according to their configuration to reconstruct all connected components of the complete 3-D model. Our experiments demonstrate that the proposed approach has excellent error resiliency at a reasonable bit-rate overhead. The technique is also capable of incrementally rendering one connected component of the 3-D model at a time.",2001,0, 7451,Efficient fault-tolerant scheme based on the RSA system,"Data security on the Internet is an area of considerable concern. A recently proposed fault-tolerant scheme for data encryption that was based on the RSA system is shown to be flawed. The flaw occurs if a malicious receiver keeps a sender's message and corresponding signature, then changes the message whilst retaining the existing signature. Under these circumstances the sender cannot deny having written the forged message since it carries his/her correct signature. An improvement to enhance the security and performance of the existing RSA-based scheme is presented.",2003,0, 7452,Fault-based testing of database application programs with conceptual data model,"Database application programs typically contain program units that use SQL statements to manipulate records in database instances. Testing the correctness of data manipulation by these programs is challenging. When a tester provides a database instance to test such a program, the program unit may output faulty SQL statements and, hence, manipulate inappropriate database records. Nonetheless, these failures may only be revealed in very specific database instances. This paper proposes to integrate SQL statements and the conceptual data models of an application for fault-based testing. It proposes a set of mutation operators based on the standard types of constraint used in the enhanced entity-relationship model. These operators are semantic in nature. This semantic information guides the construction of affected attributes and join conditions of mutants.
The usefulness of our proposal is illustrated by an example in which a missing-record fault is revealed.",2005,0, 7453,Customizing Virtual Machine with Fault Injector by Integrating with SpecC Device Model for a Software Testing Environment D-Cloud,"D-Cloud is a software testing environment for dependable parallel and distributed systems using cloud computing technology. We use Eucalyptus as cloud management software to manage virtual machines designed based on QEMU, called FaultVM, which have a fault injection mechanism. D-Cloud enables the test procedures to be automated using a large amount of computing resources in the cloud by interpreting the system configuration and the test scenario written in XML in the D-Cloud front end, and enables tests including hardware faults by flexibly emulating hardware faults with FaultVM. In the present paper, we describe the customization facility of FaultVM used to add new device models. We use SpecC, which is a system description language, to describe the behavior of devices, and a simulator generated from the SpecC description is linked and integrated into FaultVM. This also makes the definition and injection of faults flexible without modification of the original QEMU source code. This facility allows D-Cloud to be used to test distributed systems with customized devices.",2010,0, 7454,Enhance Fault Localization Using a 3D Surface Representation,"Debugging is a difficult and time-consuming task in software engineering. To locate faults in programs, a statistical fault localization technique makes use of program execution statistics and employs a suspiciousness function to assess the relation between program elements and faults. In this paper, we develop a novel localization technique by using a 3D surface to visualize previous suspiciousness functions and using fault patterns to enhance such a 3D surface. By clustering realistic faults, we determine various fault patterns and use 3D points to represent them. We employ the spline method to construct a 3D surface from those 3D points and build our suspiciousness function. Empirical evaluation on a common data set, the Siemens suite, shows that our technique is more effective than four existing representative techniques.",2010,0, 7455,Fault adaptive kinematic control using multi processor system,"Decentralized autonomous control architecture and self-organizing control architecture have several advantages in space robots, the most fascinating of which is adaptation to partial faults. The Ministry of Posts and Telecommunications and the Communications Research Laboratory have proposed an ""Orbital Maintenance System"" (OMS) that maintains a space system. We have developed a modular-type manipulator, which can be controlled by distributed processors in each module and can overcome partial failures. In this paper, we introduce a decentralized control algorithm for module-type manipulators and discuss its performance in computer simulations and experiments. The algorithm proved useful for inspection in module-type manipulators, and robust to partial faults",2000,0, 7456,Development of a virtual environment for fault diagnosis in rotary machinery,"Component fault analysis is a very widely researched area and requires a great deal of knowledge and expertise to establish a consistent and accurate tool for analysis. This paper will discuss a virtual diagnostic tool for fault detection of rotary machinery.
The diagnostic tool has been developed using FMCELL software, which provides a 3D graphical visualization environment for modeling rotary machinery with virtual data acquisition capabilities. The developed diagnostic tool provides a virtual testbed with suitable graphical user interfaces for rapid diagnostic fault analysis of machinery. In this paper, we will discuss details of this newly developed virtual diagnostic model using FMCELL software and present our approach for diagnostics of a mechanical bearing test bed (TSU-BTB). Furthermore, we will provide some examples of how the virtual diagnostic environment can be used for performing machinery fault diagnostics. Using a frequency pattern matching superimposing technique, the model is proven to be able to detect primary faults in machines with fair accuracy and reliability",2001,0, 7457,2D-BERT: Two dimensional burst error recovery technique,"Compressed images, such as JPEG, when transmitted over communication channels are susceptible to channel noise and/or erasure, which causes losses of blocks of pixels or macroblocks. Recovery of a large block in an image is a difficult task. In this paper we propose a novel method that addresses this problem and can exactly recover a lost block. Moreover, the proposed algorithm is fast enough to be implemented in real time. Simulation results show that our method is faster than previous methods proposed in the literature, with less distortion.",2007,0, 7458,A new approach for real time fault diagnosis in induction motors based on vibration measurement,"Condition monitoring is used to increase machinery availability and machinery performance, reducing consequential damage, increasing machine life, reducing spare parts inventories, and reducing breakdown maintenance. An efficient real-time vibration measurement and analysis instrument is capable of providing warnings and predicting faults at early stages. In this paper, a new methodology for the implementation of vibration measurement and analysis instruments in real time, based on a circuit architecture mapped from a MATLAB/Simulink model, is presented. In this study, signal processing applications such as FIR filters and the fast Fourier transform are treated as systems, which are implemented in hardware using a system generator toolbox that translates a Simulink model into a hardware description language (HDL) for FPGA implementation.",2010,0, 7459,Classification of surface defects on hot rolled steel using adaptive learning methods,"Classification of local area surface defects on hot rolled steel is a problematic task due to the variability in manifestations of the defects grouped under the same defect label. The paper discusses the use of two adaptive computing techniques, based on supervised and unsupervised learning, with a view to establishing a basis for building reliable decision support systems for classification",2000,0, 7460,Reliable online water quality monitoring as basis for fault tolerant control,"Clean data are essential for any kind of alarm or control system. To achieve the required level of data quality in online water quality monitoring, a system for fault tolerant control was developed. A modular approach was used, in which a sensor and station management module is combined with a data validation and an event detection module. The station management module assures that all relevant data, including operational data, is available and the state of the monitoring devices is fully documented.
The data validation module assures that unreliable data is detected, marked as such, and that the need for sensor maintenance is indicated in a timely manner. Finally, the event detection module marks unusual system states and triggers measures and notifications. All these modules were combined into a new software package to be used on water quality monitoring stations.",2010,0, 7461,Extending IEEE 1588 to fault tolerant clock synchronization,"Clock synchronization over packet-oriented networks is an enabling technology for many distributed applications, especially in automation. To this end it is of great interest to obtain a worst-case bound on the deviation between the clocks of any two nodes, known as clock precision. This article describes the steps involved in order to achieve a higher precision over Ethernet-based LANs without degrading fault tolerance and determinism aspects. We explain how the statistical time synchronization in IEEE 1588 could be extended by a deterministic algorithm to support those features in an orthogonal fashion.",2004,0, 7462,A Predictive Fault Tolerance Agent based on Ubiquitous Computing for A Home Study System,"DOORAE (Distance Object Oriented Collaboration Environment) is a framework for supporting development of multimedia collaborative environments. It provides functions for developing multimedia distance education systems for students as well as teachers. It includes session management, access control, concurrency control and handling of latecomers. There are two approaches to software architecture on which applications for a multimedia distance education environment in situation-aware middleware are based. This paper proposes a new model of a fault tolerance agent based on situation-aware ubiquitous computing for a multimedia home study system which is based on CARV.",2007,0, 7463,Inter-frame error concealment using graph cut technique for video transmission,"Due to channel noise and congestion, video data packets can be lost during transmission in error-prone networks, which severely affects the quality of received video sequences. The conventional inter-frame error concealment (EC) methods estimate a motion vector (MV) for a corrupted block or reconstruct the corrupted pixel values using spatial and temporal weighted interpolation, which may result in boundary discontinuity and blurring artifacts in the reconstructed region. In this paper, we reconstruct the corrupted macroblock (MB) by predicting sub-partitions and synthesizing the corrupted MB to reduce boundary discontinuity and avoid blurring artifacts. First, we select the optimal MV for each neighboring boundary using minimum side match distortion from a candidate MV set, and then we calculate the optimal cut path between the overlapping regions to synthesize the corrupted MB. The simulation results show that our proposed method is able to achieve significantly higher PSNR as well as better visual quality than using the H.264/AVC reference software.",2010,0, 7464,"Fault tolerance adaptation requirements vs. quality-of-service, realtime and security in dynamic distributed systems","Due to deregulation of the electricity market and the trend towards distributed electricity generation based on renewable energy (e.g. wind energy), electric power infrastructure relies increasingly on communication infrastructure. Communication infrastructure has to fulfil requirements such as survivability and dependability to secure a high availability and reliability of electric power infrastructure.
To meet these requirements, the communication infrastructure must be able to adapt to changes (e.g. due to a dynamic environment) and to allow adaptation of quality-of-service (e.g. graceful degradation), and not least to support adaptive fault tolerance. Given the criticality of the information transferred (e.g. control data and messages), the communication infrastructure requires support for security (e.g. confidentiality, integrity etc.) as well. Because the communication infrastructure handles control data, part of the traffic (or communication) also has requirements for real-time support. In this paper, relations between the requirements for adaptive fault tolerance and real-time, quality-of-service and/or security are analyzed. It can be noticed that sometimes tradeoffs are required (e.g. real-time vs. fault tolerance), while in other cases the two characteristics can use the same mechanisms to improve their support (e.g. fault tolerance vs. quality-of-service). Also, they can depend on each other for dependability support (e.g. fault tolerance vs. security). More detailed analysis depends on the specific cases (e.g. underlying distributed system, target applications such as the communication infrastructure of the electric power infrastructure). However, the relationships (i.e. dependencies) between the requirements for fault tolerance, real-time, quality-of-service and/or security should not be ignored, nor considered too late in the design decisions if dependability is an objective",2006,0, 7465,A Novel Clock-Fault Detection and Self-Recovery Circuit for Reliable Nanoelectronics System,"Due to discrepancies in the manufacturing process and the probabilistic nature of quantum mechanical phenomena, fault-tolerant architectures are a prerequisite for building reliable nanoelectronic systems from unreliable nanoelectronic devices. Various defects and interference, such as doping discrepancies, supply noise and cross-talk, could lead to clock irregularity and malformed clock signals in nanoelectronic systems, thus resulting in faulty operation of sequential circuits. As a result, fault-tolerant clock distribution is more and more important in nanoelectronic systems. In this paper, a novel architecture for clock recovery is presented. A very simple circuit is designed for the time-to-voltage converter, transforming the timing error into a voltage error. In order to illustrate the fault-tolerance capability provided by the detection and recovery circuitry, a prototype CMOS design of the proposed circuit is presented. Simulation shows that the proposed architecture is well suited for integration into nanoelectronic circuit design to realize clock recovery.",2009,0, 7466,"A unified model for timing speculation: Evaluating the impact of technology scaling, CMOS design style, and fault recovery mechanism","Due to fundamental device properties, energy efficiency from CMOS scaling is showing diminishing improvements. To overcome the energy efficiency challenges, timing speculation has been proposed to optimize for common-case timing conditions, with errors occurring under worst-case conditions detected and corrected in hardware. Although various timing speculation techniques have been proposed, no general framework exists for reasoning about the trade-offs and high-level design considerations of timing speculation.
This paper develops two models to study the end-to-end behavior of timing speculation: a hardware-level efficiency model that considers the effects of process variations on path delays, and a complementary system-level recovery model. When combined, the models are used to assess the impact of technology scaling, CMOS design style, and fault recovery mechanism on the efficiency of timing speculation. Our results show that (1) efficiency gains from timing speculation do not improve as technology scales, (2) ultra-low power (sub-threshold) CMOS designs benefit most from timing speculation - we report a 47% potential energy-delay reduction, and (3) fine-grained fault recovery is key to significant energy improvements. The combined model uses only high-level inputs to derive quantitative energy efficiency benefits without any need for detailed simulation, making it a potentially useful tool for hardware developers.",2010,0, 7467,State of art of fault current limiters and their impact on overcurrent protection,"Due to increasing fault current levels at the transmission and distribution levels, considerable attention has been given to developing fault current limiter (FCL) devices in recent years. Most of the FCL technologies are in their initial stages of deployment and are being assessed before their use becomes more widespread. As the objective of an FCL is to limit the fault current, it will have an impact on the protection system. This paper gives an overview of the different FCL technologies and improvements made in FCL devices, and assesses their impacts on overcurrent relay protection. An analysis of overcurrent relay operating time, and therefore coordination, is presented based on the results obtained from a network splitting model developed using PSCAD/EMTDC.",2009,0, 7468,Supporting fault-tolerant on-board evolution in aerospace reconfigurable platform,"Due to the long distances and harsh environment, it is difficult to maintain spacecraft once they are launched. Traditional on-board software maintenance (OBSM) often fails because of broken communication and unexpected situations. This has generated a pressing need for self-adaptive, self-repairing and self-upgrading capabilities in On-Board Maintenance (OBM) applications. In this paper, we propose a fault-tolerant onboard evolutionary platform which implements relocation of hardware and software tasks in spaceborne computing systems. Tasks can either run in software space or be put into hardware task slots according to energy-efficiency or real-time requirements. The allocation of hardware and software tasks will evolve according to current conditions to meet real-time, energy-efficiency or environmental requirements. Sleeping tasks, redundant FPGAs and version switching control are combined together to achieve fault-tolerance in the on-board evolutionary platform. The design theories and strategies of the proposed prototype are described in detail. Simulation and experiment results are discussed as well.",2007,0, 7469,Power module with solid state circuit breakers for fault-tolerant applications,"Due to safety reasons in some applications, and especially in aerospace applications, fail-safe operability is crucial. One solution is redundancy, but size, weight and costs are doubled. Another approach is to design a fault-tolerant system which ensures fail-safe operability. This paper describes the design of a fault-tolerant system for a drive application.
The fault-tolerant inverter consists of four phase legs: three phase legs for the conventional inverter topology and a redundant one. Additionally, there are solid state circuit breakers in the system in order to minimize the conduction losses during operation. In order to minimize the switching losses during normal operation, the newest IGBTs and silicon carbide diodes are used. Two phase legs with circuit breakers are integrated in one power module. Due to the enhanced functionality, the modules are called Enhanced Intelligent Power Modules (EIPM). Two of these power modules are used for the fault-tolerant inverter topology. The results, simulations and first measurements are shown in order to demonstrate the functionality of the system. This work was done in the European funded project MOET together with Airbus.",2010,0, 7470,Fault localization mechanism using integrated resource information model in next generation network,"Due to the appearance of the next generation network, the network capabilities have changed from a network-centric to a service-centric architecture, which is able to provide converged services such as voice and video services on the same platform. However, since the IP-based converged network is a complicated infrastructure to manage and operate using existing management systems, one of the major issues is how to ease the operational complexity to contribute to reduction of operational expenditure (OPEX) and improvement of customer satisfaction. To address this issue, this paper proposes an extension of an integrated resource information model that is capable of end-to-end resource management from network resource information to application resources and service/customer resources, based on the information framework (SID) defined by the TeleManagement Forum. In addition, we propose a next generation fault localization mechanism, which enables easy identification of not only the root cause of a failure, but also the extent of the impact of a failure due to both network failure and performance degradation, by utilizing an interworking mechanism and a scanning algorithm. Based on this concept, we have successfully demonstrated the proposed fault localization mechanism based on the integrated resource information model using a network testbed, through cooperation between the resource management system and the fault management system.",2010,0, 7471,A hierarchical fault tolerant architecture for component-based service robots,"Due to the benefits of reusability and productivity, the component-based approach has become the primary technology in service robot system development. However, because component developers cannot foresee the integration and operating conditions of the components, they cannot provide appropriate fault tolerance functions, which is crucial for commercial success of service robots. The recently proposed robot software frameworks such as MSRDS (Microsoft Robotics Developer Studio), RTC (Robot Technology Component), and OPRoS (Open Platform for Robotic Services) are very limited in fault tolerance support. In this paper, we present a hierarchically-structured fault tolerant architecture for component-based robot systems. The framework integrates widely-used, representative fault tolerance measures for fault detection, isolation, and recovery.
System integrators can construct fault-tolerant applications from non-fault-aware components, by declaring fault handling rules in configuration descriptors and/or adding simple helper components, considering the constraints of components and the operating environment. To demonstrate the feasibility and benefits, a fault tolerant framework engine and test robot systems are implemented for OPRoS. The experimental results with various simulated fault scenarios validate the feasibility, effectiveness and real-time performance of the proposed approach.",2010,0, 7472,Spatial Optical Distortion Correction in an FPGA,"Due to the complexities of the image processing algorithms, correcting spatial distortion of optical images quickly and efficiently is a major challenge. This paper describes an efficient pipelined parallel architecture for optical distortion correction in imaging systems using a low cost FPGA device. The proposed architecture produces a fast, almost real-time solution for the correction of image distortion, implemented using VHDL with a single Xilinx XCS31000-4 FPGA device. The experimental results show that barrel and pincushion distortion can be corrected with a very low residual error. The system architecture can be applied to other image processing algorithms in optical systems",2006,0, 7473,Fault detection in dynamic systems via decision fusion,"Due to the growing demands for system reliability and availability of large amounts of data, efficient fault detection techniques for dynamic systems are desired. In this paper, we consider fault detection in dynamic systems monitored by multiple sensors. Normal and faulty behaviors can be modeled as two hypotheses. Due to communication constraints, it is assumed that sensors can only send binary data to the fusion center. Under the assumption of independent and identically distributed (IID) observations, we propose a distributed fault detection algorithm, including local detector design and decision fusion rule design, based on state estimation via particle filtering. Illustrative examples are presented to demonstrate the effectiveness of our approach.",2008,0, 7474,Combined Reconstruction and Motion Correction in SPECT Imaging,"Due to the long imaging times in SPECT, patient motion is inevitable and constitutes a serious problem for any reconstruction algorithm. The measured inconsistent projection data lead to reconstruction artifacts which can significantly affect the diagnostic accuracy of SPECT if not corrected. To address this problem, a new approach for motion correction is introduced. It is purely based on the measured SPECT data and therefore belongs to the data-driven motion correction algorithm class. However, it does overcome some of the shortcomings of conventional methods. This is mainly due to the innovative idea of combining reconstruction and motion correction in one optimization problem. The scheme allows for the correction of abrupt and gradual patient motion. To demonstrate the performance of the proposed scheme, extensive 3D tests with numerical phantoms for 3D rigid motion are presented. In addition, a test with real patient data is shown. Each test shows an impressive improvement of the quality of the reconstructed image. In this note, only rigid movements are considered.
The extension to non-linear motion, as for example breathing or cardiac motion, is straightforward and will be investigated in a forthcoming paper.",2009,0, 7475,Asymmetries in soft-error rates in a large cluster system,"Early in the deployment of the ASC Q cluster supercomputer system, an unexpectedly high rate of soft errors was observed in the board-level cache subsystems of the constituent AlphaServer ES45 systems that make up the compute component of this large cluster. A series of tests and experiments was undertaken to validate the hypothesis that this frequency was consistent with the high level of terrestrial secondary cosmic-ray neutron flux resulting from the high elevation of its installation site. The overall success of this effort is reported elsewhere in this issue. This paper reports on three secondary phenomena that were observed during these tests and experiments: Error logs were collected from all servers during a representative period and examined for nonrandom event rates, which would indicate a systematic cause. The only significant result of this exploration was the discovery of a latent soft-error discovery effect, and a self-shielding effect, whereby the servers positioned physically higher in their racks suffered disproportionately higher soft-error rates. This excess was examined and found to be consistent with the established shielding effect of the high-Z composition of the constituents of the overlying systems. Experiments with individual ES45 systems in an artificial neutron beam at the Los Alamos Neutron Science Center facility have established that the soft-error rates observed in the SRAM parts are significantly dependent on the incident direction of the neutrons in the beam. These asymmetries could be exploited as part of a strategy for mitigating the frequency of soft errors in future computer systems.",2005,0, 7476,Correction algorithm for the proximity effect in e-beam lithography,"e-beam lithography is a technique capable of fabricating sub-micrometer planar structures. The ultimate resolution in this technique is limited mainly by the proximity effect, where the dose accumulated at one spatial point is affected by the irradiated dose in its neighborhood. The relevance of this effect in one particular pattern strongly depends on its geometry, the sensitivity of the resist and the physical characteristics of the substrate. In this work we present a numerical algorithm to calculate the nominal dose to be applied at each point of the geometry that results in an optimal net dose for an efficient pattern transfer.",2008,0, 7477,Fault management in ECLIPSE,"ECLIPSE is a next-generation virtual telecommunication network that provides multimedia services that integrate voice, video, text and images. ECLIPSE facilitates the modular decomposition of new telecommunication services. In this paper, we sketch the challenges we face in making ECLIPSE highly available when running on top of a heterogeneous and widely distributed system. We describe how we approach these challenges using the modular decomposition of services provided by ECLIPSE, together with its novel failure detection and recovery strategies",2001,0, 7478,A case study on the impact of customer communication on defects in agile software development,"Effective communication and feedback are crucial in agile development. Extreme programming (XP) embraces both communication and feedback as interdependent process values which are essential for projects to achieve successful results.
Our research presents the empirical results from four different case studies. Three case studies had partially onsite customers and one had an onsite customer. The case studies used face-to-face communication to different extents, along with email and telephone, to manage customer-developer communication inside the development iterations. Our results indicate that an increased reliance on less informative communication channels results in higher defect rates. These results suggest that the selection of communication methods, to be used inside development iterations, should be a factor of considerable importance to agile organizations working with partially available customers. This paper also proposes some guidelines for selecting proper communication methods",2006,0, 7479,Experimental evaluation of error-detection mechanisms,"Effective error-detection is paramount for building highly dependable computing systems. A new methodology, based on physical and simulated fault injection, has been developed for assessing the effectiveness of error-detection mechanisms. This approach has 2 steps: (1) transient faults are physically injected at the IC pin level of a prototype, in order to derive the error-detection coverage. Experiments are carried out in a 3-dimensional space of events. Fault location, time of occurrence, and duration of the injected fault are the dimensions of this space. (2) Simulated fault-injection is performed to assess the effectiveness of new error-detection mechanisms, designed to improve the detection coverage. Complex circuitry, based on checking for protocol violations, is considered. A temporal model of the protocol checker is used, and transient faults are injected in signal traces captured from the prototype system. These traces are used as inputs of the simulation engine. s-confidence intervals of the error-detection coverage are derived, both for the initial design and the new detection mechanism. Physical fault-injection, carried out on a prototype server, proved that several signals were sensitive to transient faults and error-detection coverage was unacceptably low. Simulated fault injection shows that an error-detection mechanism, based on checking for protocol violations, can appreciably increase the detection coverage, especially for transient faults longer than 200 nanoseconds. Additional research is required for improving the error-detection of shorter transients. Fault injection experiments also show that error-detection coverage is a function of fault duration: the shorter the transient fault, the lower the coverage. As a consequence, injecting faults that have a unique, predefined duration, as was frequently done in the past, does not provide accurate information on the effectiveness of the error-detection mechanisms. Injecting only permanent faults leads to unrealistically high estimates of the coverage. These experiments prove that combined physical and simulated fault injection, performed in a 3-dimensional space of events, is a superior approach, which allows the designers to accurately assess the efficacy of various candidate error-detection mechanisms without building expensive test circuits.",2003,0, 7480,A GA-Based Effective Fault-Tolerant Model for Channel Allocation in Mobile Computing,"Efficient channel allocation to mobile hosts is of utmost importance in a cellular network.
A genetic algorithm (GA), which is a useful tool in solving optimization problems, is explored to design a fault-tolerant cellular channel allocation model that allows a cell to continue communicating with its mobile hosts, even if there are insufficient channels available in the cell. Sometimes, the load over a cell may increase to the extent that it needs more channels than it actually has in order to handle the traffic. On the other hand, it is quite possible that the load in some other cell is less than its channel capacity, resulting in underutilization of the channels. This problem is solved by temporarily taking unutilized channels from cells that have a lower load and allocating them to the cells that are overloaded. We propose a model that reuses available channels more efficiently. The model also considers handoff by using the reserved channel technique. A reserved pool of channels makes the model fault tolerant. Thus, the proposed work uses a GA for fault-tolerant dynamic channel allocation to minimize the average number of blocked hosts and handoff failures in the mobile computing network. Simulation experiments evaluate the performance of the proposed model. Comparison of the results with two recent earlier models reveals that the proposed model works better in serving mobile hosts.",2008,0, 7481,Effect of voltage stability constraints and corrective control on pricing components,"Electricity markets that have adopted nodal locational marginal pricing schemes decompose prices into three components. These components relate to generation, congestion and losses. The usual scheme of pricing follows the mandate that when the market clears, every node is assigned the same value for the generation component. And so it follows that the difference in price across any pair of nodes is because of the congestion and loss components. In this paper we examine the congestion component in the light of new constraints that will arise because of the transfer limits imposed by voltage stability across critical interfaces.",2008,0, 7482,Wavelet Approach for ECG Baseline Wander Correction and Noise Reduction,"Electrocardiographic (ECG) analysis plays an important role in safety assessment during new drug development and in clinical diagnosis. The pre-processing for ECG analysis consists of low-frequency baseline wander (BW) correction and high-frequency artifact noise reduction from the raw ECG. We present approaches for BW correction and de-noising based on the discrete wavelet transformation (DWT). We estimate the BW via coarse approximation in the DWT, with recommendations for how to select wavelets and the maximum decomposition depth. We reduce the high-frequency noise via an empirical Bayes posterior median wavelet shrinkage method with level-dependent and position-dependent thresholding values. The methods are applied to a real example. The experimental results indicate that the proposed method can effectively remove both low- and high-frequency noise",2005,0, 7483,The Largest Amount of Fault Information in Analog Circuit Fault Diagnosis Based on Multisim Circuit Simulation,"Electronic equipment analog circuits have a high failure rate, the factors causing failures are fuzzy, and fault diagnosis is complex. The proposed method uses Multisim circuit simulation software for band-pass filter circuit simulation and analysis, setting soft and hard faults to obtain simulation results.
The largest-amount-of-fault-information diagnosis method is then used to seek the feature groups carrying the greatest amount of fault information and to determine the failure source. This fault diagnosis method provides a reference for the measured data.",2010,0, 7484,"Fault Detection, Isolation, and Localization in Embedded Control Software","Embedded control software reacts to plant and environment conditions in order to enforce a desired functionality, and exhibits hybrid dynamics: control loops together with switching logic. Control software can contain errors (faults), and fault-tolerance methods must be developed to enhance system safety and reliability. We present an approach for fault detection and isolation that is key to achieving fault-tolerance. The detection approach is hierarchical, involving monitoring of both the control software and the controlled system. The latter is necessary to safeguard against any incompleteness of software-level properties. A model of the system being monitored is not required, and, further, the approach is modular and hence scalable. When a fault is detected at the system level, isolation of a software fault is achieved by using residue methods to rule out any hardware (plant) fault. We also propose a method to localize a software fault (to those lines of code that contain the fault). The talk will be illustrated through a servo control application.",2008,0, 7485,Exploiting Mobile Agents for Structured Distributed Software-Implemented Fault Injection,"Embedded distributed real-time systems are traditionally used in safety-critical application areas such as avionics, healthcare, and the automotive sector. Assuring dependability under faulty conditions by means of fault tolerance mechanisms is a major concern in safety-critical systems. From a validation perspective, Software-Implemented Fault Injection (SWIFI) is an approved means for testing fault tolerance mechanisms. In recent work, we have introduced the concept of using mobile agents for distributed SWIFI in time-driven real-time systems. This paper presents a prototypical implementation of the agent platform for the OSEKtime real-time operating system and the FlexRay communication system. It is further shown how to implement fault injection experiments by means of mobile agents in a structured manner, following a classification of faults in terms of domain, persistence, and perception. Based on experiments conducted on ARM-based platforms, selected results are described in detail to demonstrate the potential of mobile-agent-based fault injection.",2006,0, 7486,Decision Trees-Aided Self-Organized Maps for Corrective Dynamic Security,"Difficulties in expanding the generation and transmission system force modern power systems to often operate close to their stability limits, in order to meet the continuously growing demand. An effective way to face power system contingencies that can lead to instability is load shedding. This paper proposes a machine learning framework for the evaluation of load shedding for corrective dynamic security of the system. The proposed method employs a self-organized map with decision trees nested in some of its nodes in order to classify the load profiles of a power system.
The method is applied to a realistic model of the Hellenic power system, and its added value is shown by comparing results with the ones obtained from the application of simple self-organized maps and simple decision trees.",2008,0, 7487,A Color Error Correction Mode for Digital Camera Based on Polynomial Curve Generation,"Digital cameras are one of the main devices in computer and multimedia technology, and their color management model is the key to guaranteeing color consistency in subsequent image production and transfer. The paper presents a color conversion model for digital cameras based on polynomial curve generation. Firstly, the color rendering principle of the digital camera is analyzed. Then the digital camera data is pretreated to a unitary field in order to deduce the final model. Thirdly, a standard color target is taken as the experimental sample, and color blocks in the color shade district substitute for the complete color space, to overcome the difficulty of selecting experimental color blocks; fourthly, the model that uses a polynomial curve generation algorithm to correct color error is deduced; finally, the realization and experimental results show that, compared with some methods which have relatively high accuracy, the algorithm can improve color conversion accuracy and can satisfy the engineering requirements of digital camera color management.",2008,0, 7488,An Error Diffusion Based Algorithm for Hiding an Image in Distinct Two Images,"Digital halftoning is an important process to convert a continuous-tone image into a binary image with pure black and white pixels. This process is necessary when printing a monochrome or color image on a printer with a limited number of ink colors. The main contribution of this paper is to present a halftoning method that conceals a binary image in two binary images. More specifically, three distinct gray scale images are given, such that one of them should be hidden in the other two gray scale images. Our halftoning method generates three binary images that reproduce the tone of the corresponding original three gray scale images. Quite surprisingly, the secret binary image can be seen by overlapping the other two binary images. In other words, the secret binary image is hidden in two public binary images. Also, it is very hard to guess the secret image using only one of the two public images, and these two public images are necessary to get the secret image. The resulting images show that our halftoning method hides and recovers the original images. Hence, our halftoning technique can be used for watermarking as well as for amusement purposes.",2008,0, 7489,PD pulse sequence analysis and its relevance for on-site PD defect identification and evaluation,"Digital partial discharge (PD) diagnosis has become state of the art, but computer-aided procedures based on pattern recognition principles are negatively affected by on-site disturbances. The benefits and limits of expert diagnosis compared to machine-intelligent systems are discussed. It is pointed out that sufficient noise immunity and test voltage independence of the diagnosis concept are a must for successful on-site PD defect identification and evaluation. Experimental results are presented, taking into account especially novel evaluation procedures, which are more independent of the applied PD sensors. It is shown that PD defect identification on site, at operating voltages and with correspondingly disturbed data, is still a challenge. An approach based on an intelligent, noise-resistant diagnosis concept is discussed.
A hierarchical and redundant approach is presented, which improves on-site PD source identification as well as on-site risk assessment.",2005,0, 7490,Data mining for distribution system fault classification,"Digital relaying equipment at substations allows for large amounts of data storage that can be triggered by predetermined system conditions. Some of this information, retrieved from relays at several locations in a local utility's service territory, has been mined for determining trends and relationships. Data mining aims to make sense of the retrieved data by revealing meaningful relationships. This paper discusses some useful data mining techniques that are applied to data recorded by overcurrent relays at several substations. The purpose is to classify faults, verify relay settings and determine fault-induced trips per substation. High accuracy is obtained.",2005,0, 7491,A Method of the Rules Extraction for Fault Diagnosis Based on Rough Set Theory and Decision Network,"Directing to the inconsistency of fault diagnosis information, a method for extracting fault diagnosis rules based on rough set theory and a decision network is proposed. The attributes of the fault diagnosis decision system are first reduced through the discernibility matrix and discernibility function, and then a decision network with different reduction levels is constructed. The network's nodes are initialized with the attribute reduction sets, and the decision rule sets are extracted from the nodes of the decision network. In addition, the coverage degree based on confidence degree is applied to filter noise and evaluate the extracted rules. The validity of this method is proved by a fault diagnosis example of rotating machines.",2008,0, 7492,The system of copper strips surface defects inspection based on intelligent fusion,"Directing towards the characteristics and importance of copper strip surface inspection, this paper proposes a copper strip surface inspection system based on artificial intelligence. It uses computer vision to capture defect images, uses Hu invariant moments as the feature vectors, and uses a BP neural network based on a genetic algorithm, combined with an expert system, to train on and learn the samples. The trained weights are used as the knowledge base for diagnosis, and a copper surface defect inspection system based on intelligent fusion is constructed. The experimental results proved that the intelligent fusion model can complement the individual intelligent models and performs better than any single model on copper strip surface inspection.",2008,0, 7493,From High Availability Systems to Fault Tolerant Production Infrastructures,"Disaster Recovery Infrastructures, which have become an important part of all major IT infrastructures, are usually implemented in a stand-by mode, anticipating a disaster to justify their existence. This paper suggests the transformation of existing High Availability Standby Systems to fully Fault Tolerant Production Infrastructures in order to increase productivity, effectiveness and availability. The most important differences between the two approaches are presented, and a transformation strategy that brings together technical and managerial points of view is suggested.",2007,0, 7494,A novel approach to calculate the severity and priority of bugs in software projects,"Discovering and fixing software bugs is a difficult maintenance task, and a considerable amount of effort is devoted by software developers to this issue.
In the world of software one cannot get rid of bugs, fixes, patches, etc.; each of them has a severity and a priority associated with it. There is not yet any formal relation between these two attributes, as both are decided either by the developer and tester or by the customer and project manager. On one hand, the priority of a component depends on the cost and effort associated with it; on the other, the severity depends on the effort required to accomplish a particular task. This research paper proposes a formula that can draw a relationship between severity and priority.",2010,0, 7495,Xception(TM) - enhanced automated fault-injection environment,"Discusses Xception, an automated fault injection environment that enables accurate and flexible V&V (verification & validation) and evaluation of mission and business critical computer systems using fault injection. Xception is designed to accommodate a variety of fault injection techniques (according to a wide range of configurations of the tool) and in this way emulate different classes of faults, with particular emphasis on hardware and software faults.",2002,0, 7496,Two Types of Action Error: Electrophysiological Evidence for Separable Inhibitory and Sustained Attention Neural Mechanisms Producing Error on Go/No-go Tasks,"Disentangling the component processes that contribute to human executive control is a key challenge for cognitive neuroscience. Here, we employ event-related potentials to provide electrophysiological evidence that action errors during a go/no-go task can result either from sustained attention failures or from failures of response inhibition, and that these two processes are temporally and physiologically dissociable, although the behavioral error (a nonintended response) is the same. Thirteen right-handed participants performed a version of a go/no-go task in which stimuli were presented in a fixed and predictable order, thus encouraging attentional drift, and a second version in which an identical set of stimuli was presented in a random order, thus placing greater emphasis on response inhibition. Electrocortical markers associated with goal maintenance (late positivity, alpha synchronization) distinguished correct and incorrect performance in the fixed condition, whereas errors in the random condition were linked to a diminished N2/P3 inhibitory complex. In addition, the amplitude of the error-related negativity did not differ between correct and incorrect responses in the fixed condition, consistent with the view that errors in this condition do not arise from a failure to resolve response competition. Our data provide an electrophysiological dissociation of sustained attention and response inhibition.",2009,0, 7497,RAID Architecture with Correction of Corrupted Data in Faulty Disk Blocks,"Disk capacities and processor performance have increased dramatically over the years. With rising storage capacity, the probability of failures gets higher. Reliability of storage systems is achieved by adding extra disks for redundancy, as in RAID systems, or by separate backup space in general. These systems cover the case when disks fail but do not recognize corrupted data in faulty blocks. New storage systems like Solid State Drives, especially, are more vulnerable to corrupted data as cells are ""aging"" over time. We propose to add error detection and correction of data to a RAID system without increasing the amount of space needed to store redundancy information compared to the common implementation RAID 6.
To overcome the higher computational complexity, the implementation uses parallel execution paths available in modern multicore and multiprocessor systems.",2009,0, 7498,FATCAR-AUV: Fault Tolerant Control Architecture of AUV,"Due to versatility, compact size, independence and covertness, autonomous underwater vehicles are a highly valuable asset in the underwater battle space. Possible missions for autonomous underwater vehicles (AUVs) range from dedicated and organic mine counter measure (MCM), rapid environmental assessment, and special operations to reconnaissance, surveillance and intelligence. The wide range of possible applications and their partly contradictory nature calls for a complex control architecture, which consequently increases the possibility of component and system failures, for which a fault-tolerant architecture is required. In this paper, not only are the hardware requirements of AUVs discussed (Section I), but the principles of fault-tolerant control systems (Section II) are also extended, keeping in view the diverse requirements of AUVs. In the end, the reliability block diagram (RBD) analysis (Section III) of the proposed architecture is presented, which shows the extent of fault tolerance incorporated by using this architecture.",2007,0, 7499,Imaging for foundation defects using GPR,"During construction of an RCC structure over the reclaimed ground following the collapse of an old structure, severe difficulties in piling were encountered due to obstruction by debris of the previously collapsed RCC structure buried in the foundation regime. Hence GPR profiling was carried out along a series of lines at the twin construction sites. The GPR survey revealed the presence of RCC debris scattered at a depth of 7-8 m along the line of pile foundations. By identifying these scatterers, the most likely trouble-free locations for piling were demarcated. This facilitated a quick solution to the foundation obstructions, and the erection of the new structure was completed without further complications.",2010,0, 7500,Defect Classification Algorithm for IC Photomask Based on PCA and SVM,"During IC photomask vision inspection, fine image defects are small, have complex shapes, make feature extraction difficult, and are easily affected by noise; considering these problems, a defect identification and classification algorithm based on PCA (principal component analysis) and SVM (support vector machine) is presented. It resolves the problem that fine and complex defects are difficult to classify, by exploiting the merits of extracting global image features with PCA and the high accuracy and generalization capability of SVM. Class distance is regarded as the criterion to construct the binary tree in the multi-class SVM classification algorithm. This resolves the problem that the structure of the binary tree affects the accuracy of the classifier, and finally improves the defect classification accuracy. Experiments show that the classification accuracy of this method over six defect classes is up to 97.8%, higher than the best accuracy of 93.3% by a BP network and 83.3% by a region-based method, and the training and inspection times are short. As a result, it is an effective method for fine defect identification and classification.",2008,0, 7501,A genetic algorithm to design error correcting codes,"During the transmission of digital information through a channel, practically inevitable errors are produced due to the presence of noise and other factors, such as interference, echoes, etc.
Then, it is necessary to establish ways, if not to avoid the errors, at least to be able to recognize their presence and, if possible, to correct them. Correcting the message in some manner at the receiver side reduces the cost of the transmission process, since otherwise the message would have to be repeated whenever it is corrupted along the way. In these cases the use of error correcting codes is suitable. In current communication systems, error control is performed by applying special codes that add redundancy. This added redundancy makes it possible to detect and/or correct the errors that occur during data transmission. In this work we focus on binary linear block codes. The problem of finding an error correcting code that corrects a given maximum number of errors is NP-complete. For this reason, we have tackled this problem by means of a genetic algorithm. In this article we present the implementation and results of the genetic algorithm, which obtained good values compared with other works",2006,0, 7502,IP datacasting and channel error handling with DVB-H,"DVB-H is a terrestrial digital TV standard which consumes less power and allows users to move freely when receiving signals. Its deployment also signifies the convergence between broadcast networks and data networks, as both video signals and other data programs are transmitted in a shared medium. In this paper we discuss the channel error problems under different scenarios of this convergence. Partial solutions, such as enhanced forward error correction, were provided by the standard, but other aspects still need further exploration. We present some early results and discuss possible future directions.",2005,0, 7503,Dynamic Case Based Reasoning in Fault Diagnosis and Prognosis,"Dynamic case based reasoning (DCBR) is a powerful technique for fault diagnosis and prognosis. DCBR allows the accumulation of experience from the inclusion of new cases and therefore accommodates learning. In this paper, we discuss DCBR and develop a methodology for using it in a dynamic model of a chiller system",2005,0, 7504,Analysis and enhancement of software dynamic defect models,"Dynamic defect models are used to estimate the number of defects in a software project, predict the release date and required effort of maintenance, and measure the progress and quality of development. The literature suggests that defect projection over time follows a Rayleigh distribution. In this paper, data concerning defects are collected from several software projects and products. Data projection showed that the previous assumption of the Rayleigh distribution is not valid for current projects, which are much more complex. Empirical data collected showed that the defect distribution in even simpler software projects cannot be represented by Rayleigh curves, due to the adoption of several types of testing in different phases of the project lifecycle. The findings of this paper enhance the well-known Putnam defect model and propose new performance criteria to support the changes that occur during the project. Results of fitting and predicting the collected data show the superiority of the new enhanced defect model over the original defect model.",2009,0, 7505,Dynamic Fault Handling Mechanisms for Service-Oriented Applications,"Dynamic fault handling is a new approach for dealing with fault management in service-oriented applications.
Fault handlers, termination handlers and compensation handlers are installed at execution time instead of being statically defined. In this paper, we present this programming style and our implementation of dynamic fault handling in JOLIE, finally providing a nontrivial example of its usage.",2008,0, 7506,Fault Tolerant Mechanism in Dynamic Multi-homed IPv6 Mobile Networks,"Dynamic mobile networks are a kind of moving network in which multiple independent wireless personal area networks interconnect through MANET networking. Since a mobile device like a cellular phone can work as a mobile router for a dynamic mobile network, there are fault problems related to traffic overload, reliability and energy consumption. An essential issue in highly dynamic mobile networks is the fault tolerance mechanism. In this paper, we propose a fast path redirection mechanism and a candidate GMR election mechanism in order to enhance fault tolerance. As a result, our approaches support more reliable and seamless connectivity to all the nodes in mobile networks. Finally, simulation results show the performance and efficiency of the mechanisms in terms of energy consumption and packet loss.",2007,0, 7507,PathExpander: Architectural Support for Increasing the Path Coverage of Dynamic Bug Detection,"Dynamic software bug detection tools are commonly used because they leverage run-time information. However, they suffer from a fundamental limitation, the path coverage problem: they detect bugs only in taken paths but not in non-taken paths. In other words, they require bugs to be exposed in the monitored execution. This paper makes one of the first attempts to address this fundamental problem with a simple hardware extension. First, we propose PathExpander, a novel design that dynamically increases the code path coverage of dynamic bug detection tools with no programmer involvement. As a program executes, PathExpander selectively executes non-taken paths in a sandbox without side effects. This enables dynamic bug detection tools to find bugs that are present in these non-taken paths and would otherwise not be detected. Second, we propose a simple hardware extension to control the huge overhead in its pure software implementation to a moderate level. To further minimize overhead, PathExpander provides an optimization option to execute non-taken paths on idle cores in chip multi-processor architectures that support speculative execution. To evaluate PathExpander, we use three dynamic bug detection methods: a dynamic software-only checker (CCured), a dynamic hardware-assisted checker (iWatcher) and assertions; and conduct a side-by-side comparison with PathExpander's counterpart software implementation. Our experiments with seven buggy programs using general inputs that do not expose the tested bugs show that PathExpander is able to help these tools detect 21 (out of 38) tested bugs that are otherwise missed. This is because PathExpander increases the code coverage of each test case from 40% to 65% on average, based on the branch coverage metric. When applications are tested with multiple inputs, the cumulative coverage also improves significantly, by 19%. We also show that PathExpander introduces modest false positives (4 on average) and overhead (less than 9.9%).
The 3-4 orders of magnitude lower overhead compared with the pure-software implementation further justify the hardware design in PathExpander",2006,0, 7508,"Dynamic fault-tolerance and metrics for battery powered, failure-prone systems","Emerging VLSI technologies and platforms are giving rise to systems with inherently high potential for runtime failure. Such failures range from intermittent electrical and mechanical failures at the system level, to device failures at the chip level. Techniques to provide reliable computation in the presence of failures must do so while maintaining high performance, with an eye toward energy efficiency. When possible, they should maximize battery lifetime in the face of battery discharge non-linearities. This paper introduces the concept of adaptive fault-tolerance management for failure-prone systems, and a classification of local algorithms for achieving system-wide reliability. In order to judge the efficacy of the proposed algorithms for dynamic fault-tolerance management, a set of metrics, for characterizing system behavior in terms of energy efficiency, reliability, computation performance and battery lifetime, is presented. For an example platform employed in a realistic evaluation scenario, it is shown that system configurations with the best performance and lifetime are not necessarily those with the best combination of performance, reliability, battery lifetime and average power consumption.",2003,0, 7509,Defect-Aware High-Level Synthesis Targeted at Reconfigurable Nanofabrics,"Entering the nanometer era, a major challenge to current design methodologies and tools is how to effectively address the high defect densities projected for nanoelectronic technologies. To this end, a reconfiguration-based defect-avoidance methodology for defect-prone nanofabrics was proposed. It judiciously architects the nanofabric, using probabilistic considerations, such that a very large number of alternative implementations can be mapped into it, enabling defects to be circumvented at configuration time, in a scalable way. Building on this foundation, in this paper, a synthesis framework aimed at implementing this new design paradigm is proposed. A key novelty of the approach with respect to traditional high-level synthesis (HLS) is that, rather than carefully optimizing a single (""deterministic"") solution, the goal is to simultaneously synthesize a large family of alternative solutions, so as to meet the required probability of successful configuration, or yield, while maximizing the average performance of the family of synthesized solutions. Experimental results generated for a set of representative benchmark kernels, assuming different defect regimes and target yields, empirically show that the proposed algorithms can effectively explore the complex probabilistic design space associated with this new class of HLS problems",2007,0, 7510,Blind correction of human lymphocyte images by optimal thresholding in UWT domain,"Epidemiological studies have shown that the presence of chromosome damage or instability in human lymphocytes creates micronuclei (MNs) that are not incorporated into one of the daughter nuclei. Recently, the Image Flow Cytometer (IFC) was introduced to acquire images of the human lymphocytes and to detect the MNs. The alterations introduced by the image acquisition system of the IFC cause errors in the automatic recognition. Therefore, proper methods have been developed to correct them.
The recurrent alterations are bad exposure, defocus, motion blur, and Gaussian noise. In order to further increase the number of images correctly processed to detect and count the MNs, in this paper a new, optimized method for the blind correction of images affected by Gaussian noise is proposed. The method is based on the Un-decimated discrete Wavelet Transform (UWT). By taking into account the characteristics of the human lymphocyte images, the optimal threshold value is evaluated with the aim of improving the quality of the corrected image. Experimental tests are performed to compare the proposed correction method with the ones already available in the recent literature.",2010,0, 7511,An algorithm for exploiting modeling error statistics to enable robust analog optimization,"Equation-based optimization using geometric programming (GP) for automated synthesis of analog circuits has recently gained broader adoption. A major outstanding challenge is the inaccuracy resulting from fitting the complex behavior of scaled transistors to posynomial functions. Fitting over a large region can be grossly inaccurate, and in fact, poor posynomial fit can lead to failure to find a true feasible solution. On the other hand, fitting over smaller regions and then selecting the best region incurs exponential complexity. In this paper, we advance a novel optimization strategy that circumvents these dueling problems in the following manner: by explicitly handling the error of the model in the course of optimization, we find a potentially suboptimal, but feasible solution. This solution subsequently guides a range-refinement process of our transistor models, allowing us to reduce the range of operating conditions and dimensions, and hence obtain far more accurate GP models. The key contribution is in using the available oracle (SPICE simulations) to identify solutions that are feasible with respect to the accurate behavior rather than the fitted model. The key innovation is the explicit link between the fitting error statistics and the rate of the error uncertainty set increase, which we use in a robust optimization formulation to find feasible solutions. We demonstrate the effectiveness of our algorithm on two benchmarks: a two-stage CMOS operational amplifier and a voltage controlled oscillator designed in TSMC 0.18 μm CMOS technology. Our algorithm is able to identify superior solution points producing uniformly better power and area values under a gain constraint, with improvements of up to 50% in power and 10% in area for the amplifier design. We also demonstrate that when utilizing the models with the same level of modeling error, our method yields solutions that meet the constraints while the violations for the standard method were as high as 4-5% and larger than 15% for several constraints.",2010,0, 7512,Error analysis and reduction for a simple sensor-microcontroller interface,"Error analysis of a resistive sensor-to-microcontroller interface based on pulse-width modulation and time ratio measurement shows that input and output resistances in digital ports produce zero, gain, and nonlinearity errors. The time-ratio technique cancels these errors when the sensor resistance equals the calibration reference resistor, and reduces errors in the neighborhood of that point. For sensors with wide dynamic range, several calibration resistors selected according to the sensor resistance range will further reduce those errors.
Alternatively, two-point calibration and time ratio measurements yield errors that can be smaller than 0.5% for a sensor resistance from about 600 Ω to 3550 Ω",2000,0, 7513,Implementation of near Shannon limit error-correcting codes using reconfigurable hardware,"Error correcting codes (ECCs) are widely used in digital communications. New types of ECCs have been proposed which permit error-free data transmission over noisy channels at rates which approach the Shannon capacity. For wireless communication, these new codes allow more data to be carried in the same spectrum, lower transmission power, and higher data security and compression. One new type of ECC, referred to as Turbo Codes, has received a lot of attention, but is computationally expensive to decode and difficult to realize in hardware. Low density parity check codes (LDPCs), another ECC, also provide near Shannon limit error correction ability. However, LDPCs use a decoding scheme which is much more amenable to hardware implementation. This paper first presents an overview of these coding schemes, then discusses the issues involved in building an LDPC decoder using reconfigurable hardware. It presents a hypothetical LDPC implementation using a commercial FPGA, which will give an idea of future research issues and performance gains",2000,0, 7514,Improved Error Metric of Terrain Rendering for Flying High Over the Terrain,"The error metric is a key problem in generating levels of detail for terrain surfaces, as it affects the trade-off between the number of triangles and the quality of the triangular mesh. After analyzing various existing methods of calculating the error metric, we present an improved algorithm based on roughness. This paper takes roughness, which expresses relief amplitude, as a factor in calculating the error metric; forms the sphere of error according to the viewpoint position; and chooses an appropriate LOD model for each frame. The improved error metric is applied to high-altitude terrain when flying high over the terrain. The experimental results show that our algorithm reduces the number of triangles required in terrain rendering and ensures correct rendering of the entire outline of high, steep terrain.",2009,0, 7515,Effects of Defects on the Thermal and Optical Performance of High-Brightness Light-Emitting Diodes,"Defects in terms of voids, cracks, and delaminations are often generated in light-emitting diode (LED) devices and modules. During various manufacturing processes, accelerated testing, inappropriate handling, and field applications, defects are most frequently induced in the early stage of process development. One source of loading is the nonuniform loads caused by temperature, moisture, and their gradients. In this research, defects in various cases are modeled by a nonlinear finite-element method (FEM) to investigate the existence of interfaces and interfacial opens and contacts in terms of thermal contact resistance, stress force nonlinearity, and optical discontinuity, in order to analyze their effects on the LED's thermal and optical performance. The simulation results show that voids and delaminations in the die attachment would increase the thermal resistance greatly and decrease the LED's light extraction efficiency, depending on the sizes and locations of the defects generated in packaging.",2009,0, 7516,An adaptive fault-tolerant mechanism for Web services based on context awareness,"Dependability and adaptability are both important issues for Web services.
Taking the features of Web services into consideration, an adaptive fault-tolerant mechanism is presented. It is based on a two-aspect context model comprising functional context and network context. Based on this context model, the paper designs an adaptive fault-tolerant framework for Web services, which involves a series of fault-tolerance technologies, including the construction of a layered failure detection system, dynamically adjustable failure detection, and an adaptive service restarting process. The experimental results show the effect of the adaptive mechanism, such as reducing unnecessary restarting operations and improving the accuracy of failure detection.",2009,0, 7517,Using event-streams for fault-management in MAS,"Dependability is a key issue in the deployment of every multi-agent system (MAS). Only if its services are perceived as dependable (e.g. available, reliable, secure and safe) will the MAS be considered useful. To ensure the domain-specific dependability requirements, it is essential to enable the MAS to detect and react to critical states of agents. This can be done by either enabling the agent to deal with unexpected situations or by adding a fault-management component to the platform. This work presents an event-based fault-management system and presents the results of its evaluation.",2004,0, 7518,Wafer Topography-Aware Optical Proximity Correction,"Depth of focus is the major contributor to lithographic process margin. One of the major causes of focus variation is imperfect planarization of fabrication layers. Presently, optical proximity correction (OPC) methods are oblivious to the predictable nature of focus variation arising from wafer topography. As a result, designers suffer from manufacturing yield loss as well as loss of design quality through unnecessary guardbanding. In this paper, the authors propose a novel flow and method to drive OPC with a topography map of the layout that is generated by chemical-mechanical polishing simulation. The wafer topography variations result in local defocus, which the authors explicitly model in the OPC insertion and verification flows. In addition, a novel topography-aware optical rule check to validate the quality of result of OPC for a given topography is presented. The experimental validation in this paper uses simulation-based experiments with 90-nm foundry libraries and industry-strength OPC and scattering bar recipes. It is found that the proposed topography-aware OPC (TOPC) can yield up to 67% reduction in edge placement errors. TOPC achieves up to 72% reduction in worst case printability with little increase in data volume and OPC runtime. The electrical impact of the proposed TOPC method is investigated. The results show that TOPC can significantly reduce timing uncertainty in addition to process variation",2006,0, 7519,Realistic Monte Carlo simulation of Ga-67 imaging for optimization and evaluation of correction methods,"Describes a comprehensive Monte Carlo program tailored for efficient simulation of realistic Ga-67 SPECT data through the entire range of photon emission energies. The program incorporates several new features developed by us and by others, and is now being used to optimize and evaluate the performance of various methods of compensating for photon scatter, attenuation, and nonstationary distance- and energy-dependent detector resolution.
Improvements include (a) the use of a numerical torso phantom with accurate organ source and attenuation maps obtained by segmenting CT images of an RSD anthropomorphic heart/thorax phantom, modified to include 8 axillary lymph nodes, (b) accelerated photon propagation through the attenuator using a variant of the maximum rectangular region (MRR) algorithm of Suganuma and Ogawa, and (c) improved variance reduction using modified spatial sampling for simulation of large-angle collimator penetration, scatter and lead X-rays. Very-high-count projections were simulated in 55 energy windows spaced irregularly in the range 60-370 keV; these essentially noise-free images are used as a basis for generating Poisson noise realizations characteristic of 72-hour post-injection Ga-67 studies. Comparisons of spatial and energy distributions demonstrate good agreement between data experimentally measured from the RSD phantom and those simulated from the mathematical segmentation of the same phantom",2001,0, 7520,Fault-tolerant adaptive scheduling for embedded real-time systems,"Describes a fault-tolerant algorithm which uses a time-value scheduling approach to detect faults, sustain high processor utilization, and ensure timely execution of critical tasks",2001,0, 7521,Fault detection of flight critical systems,"Describes initial results of a project developing fault tolerant control systems for critical aircraft systems, focusing on the early detection of faulty components. The main goal is the use of signal processing techniques to analyze sensor information and data-mine changes that could be attributable to faulty behavior. The approach compares well with residual-based techniques but requires only output data. Hence, it could be applied to situations where residual-based approaches are not feasible. We present the use of orthogonal filter banks as the processing elements to create fault indicators. The case study is an F14 jet fighter. Using computer simulations we create various faults and monitor the plane's angle of attack. The filter bank creates a number of orthogonal components, some of which have clearly distinct pre- and post-fault behavior. This behavior changes with the type of fault, suggesting that it is possible to classify the faults. Another issue addressed is the generation of alarm signals based on the results of the fault detector. The paper discusses how the information in the components can be processed to create automatic alarm systems",2001,0, 7522,Balancing rewrapping error and smoothness in two dimensional phase unwrapping problems,"Describes two classes of two-dimensional phase unwrapping algorithms. It is shown that combining advantages from each of these classes is useful in improving phase-unwrapping solutions.",2002,0, 7523,Design of COTS-based fault-tolerant multiprocessor real-time operating system,"Design of ultra-reliable real-time fault-tolerant multiprocessor systems is complex and time-consuming. Previous systems mainly relied on special hardware and special operating systems. Since current commercial real-time operating systems cannot satisfy the requirements of ultra-reliable systems, we present an approach to the design of a fault-tolerant operating system based on COTS software and hardware. By introducing a redundancy mechanism for multiprocessor systems into the COTS operating system's kernel, our fault-tolerant operating system fully supports ultra-reliable real-time applications.",2004,0, 7524,Error Detection Using Model Checking vs.
Simulation,"Design simulation and model checking are two alternative and complementary techniques for verifying hardware designs. This paper presents a comparison between the two techniques based on detection of design errors, performance, and memory use. We perform error detection experiments using model checking and simulation to detect errors injected into a verification benchmark suite. The results allow a quantitative comparison of simulation and model checking which can be used to understand weaknesses of both approaches",2006,0, 7525,"Automation blues - Designers of large system-on-chip products are keen to automate the process of pulling together the many cores that go into them, but the work has exposed fault lines between the chipmaking companies and the tools vendors that would hope to profit from the move","Designers of large system-on-chip products are keen to automate the process of pulling together the many cores that go into them, but the work has exposed fault lines between the chipmaking companies and the tools vendors that would hope to profit from the move",2007,0, 7526,Fault-tolerant deployment of embedded software for cost-sensitive real-time feedback-control applications,"Designing cost-sensitive real-time control systems for safety-critical applications requires a careful analysis of the cost/coverage trade-offs of fault-tolerant solutions. This further complicates the difficult task of deploying the embedded software that implements the control algorithms on the execution platform that is often distributed around the plant (as it is typical, for instance, in automotive applications). We propose a synthesis-based design methodology that relieves the designers from the burden of specifying detailed mechanisms for addressing platform faults, while involving them in the definition of the overall fault-tolerance strategy. Thus, they can focus on addressing plant faults within their control algorithms, selecting the best components for the execution platform, and defining an accurate fault model. Our approach is centered on a new model of computation, fault tolerant data flows (FTDF), that enables the integration of formal validation techniques.",2004,0, 7527,Fault detection for high availability RAID system,"Designing storage systems to provide high availability in the face of failures needs the use of various data protection techniques, such as dual-controller RAID. The failure of controller may cause data inconsistencies of RAID storage system. Heartbeat is used to detect controllers whether survival. So, the heartbeat cycle's impact on the high availability of a dual-controller hot-standby system has become the key of current research. To address the problem of fixed setting heartbeat in building high availability system currently, an adaptive heartbeat fault detection model of dual controller, which can adjust heartbeat cycle based on the frequency of data read-write request, is designed to improve the high availability of dual-controller RAID storage system. Additionally, this heartbeat mechanism can be used for other applications in distributed settings such as detecting node failures, performance monitoring, and query optimization. Based on this model, the high availability stochastic Petri net model of fault detection was established and used to evaluate the effect of the availability. In addition, we define a AHA (Adaptive Heart Ability) parameter to scale the ability of system heartbeat cycle to adapt to the environment which is changing. 
The results show that, compared with a fixed configuration, the design is valid and effective, and can enhance the high availability of dual-controller RAID systems.",2010,0, 7528,An Approach to Fault-Tolerant Three-Phase Matrix Converter Drives,"Despite numerous research efforts in matrix converter-based drives, a study of fault-tolerant topology and control strategy for a matrix converter drive has not been presented in the literature. This paper proposes a matrix converter structure and a modulation technique for remedial operation in case of open-switch faults and single-phase open circuits. The fault compensation is achieved by reconfiguring the matrix converter topology with the help of a connecting device. Based on the redefined converter structure, a fault-tolerant modulation algorithm is developed to reshape the output currents of the two unfaulty phases for obtaining continuous operation. The proposed method allows improved system reliability and fault-tolerant capability with no backup leg and no parallel redundancy. Simulation and experimental results are shown to demonstrate the feasibility of the proposed fault-tolerant approach to the matrix converter drives.",2007,0, 7529,MAS and fault-management,"Despite the considerable efforts spent on researching and developing multi-agent systems (MAS), there is a noticeable absence of deployed systems. In the past the MAS research community ignored this problem - arguing that it is not a genuine MAS problem and consequently of lesser importance than other unsolved issues like cooperation, coordination, negotiation and communication. However, as the field matures, empirical evaluations of techniques and systems are more commonly used and deployment issues like the management of a MAS become increasingly important. This paper introduces a generic framework for fault-management in MAS that has been successfully tested in a large scale MAS.",2004,0, 7530,Pattern recognition of super-alloy friction welding joint defect by wavelet packet and wavelet neural network,"Super-alloy GH4169 friction welding specimens are inspected using the UltraPAC system. Considering the characteristics of the defects, a method of analyzing and extracting defect eigenvalues using wavelet packet analysis, and of pattern recognition using a wavelet neural network, is discussed. This method can extract, from the detected ultrasonic information, the relevant information that reflects the defect characteristics, and analyze the defects on the basis of that information. A network model for the qualitative recognition of defects is constructed and finally improved through experiments. The results show that wavelet packet analysis makes full use of the time-domain and frequency-domain information of the defect echo signal, partitions the frequency bands at multiple levels, further analyzes the high-frequency parts that are not subdivided by multi-resolution analysis, and chooses the relevant frequency bands so as to match the signal spectrum.
Thus, the time-frequency resolution is improved, and the good local amplification property of the wavelet neural network together with the learning characteristics of multi-resolution analysis can achieve a higher accuracy rate in the qualitative classification of welding defects.",2007,0, 7531,Recognition of impulse fault patterns in transformers using Kohonen's self-organizing feature map,"Determination of the exact nature and location of faults during impulse testing of transformers is of practical importance to manufacturers as well as designers. The presently available diagnostic techniques more or less depend on the expert knowledge of the test personnel, and in many cases are not beyond ambiguity and controversy. This paper presents an artificial neural network (ANN) approach for detection and diagnosis of fault nature and fault location in oil-filled power transformers during impulse testing. This new approach relies on the high discrimination power and excellent generalization ability of ANNs in a complex pattern classification problem, and overcomes the limitations of conventional expert or knowledge-based systems in this field. In the present work the ""self-organizing feature map"" (SOFM) algorithm with Kohonen's learning has been successfully applied to the problem with good diagnostic accuracy",2002,0, 7532,Towards the next generation of bug tracking systems,"Developers typically rely on the information submitted by end-users to resolve bugs. We conducted a survey on information needs and commonly faced problems with bug reporting among several hundred developers and users of the APACHE, ECLIPSE and MOZILLA projects. In this paper, we present the results of a card sort on the 175 comments sent back to us by the responders of the survey. The card sort revealed several hurdles involved in reporting and resolving bugs, which we present in a collection of recommendations for the design of new bug tracking systems. Such systems could provide contextual assistance, reminders to add information, and most important, assistance to collect and report crucial information to developers.",2008,0, 7533,Using design patterns and constraints to automate the detection and correction of inter-class design defects,"Developing code free of defects is a major concern for the object oriented software community. The authors classify design defects as those within classes (intra-class), those among classes (inter-class), and those of a semantic nature (behavioral). Then, we introduce guidelines to automate the detection and correction of inter-class design defects. We assume that design patterns embody good architectural solutions and that a group of entities with organization similar, but not equal, to a design pattern represents an inter-class design defect. Thus, the transformation of such a group of entities, such that its organization complies exactly with a design pattern, corresponds to the correction of an inter-class design defect. We use a meta-model to describe design patterns and we exploit the descriptions to infer sets of detection and transformation rules. A constraint solver with explanations uses the descriptions and rules to recognize groups of entities with organizations similar to the described design patterns. A transformation engine modifies the source code to comply with the recognized distorted design patterns.
We apply these guidelines to the Composite pattern using PTIDEJ, our prototype tool that integrates the complete guidelines",2001,0, 7534,Irregular pixel defects in hybrid IR-sensitive integrated circuits and their effect on results of measurements in medicine and multichannel spectrometry,"Distinctive features of the effects of irregular pixel defects arising in line- and 2D-array multielement photodetecting devices designed on the basis of hybrid integrated circuits of the ""sandwich"" type are investigated. The experimental results are presented for the photosensitive structures consisting of indium-arsenide and silicon chips. The possibility of properly solving IR-thermographic and spectrometric problems with the use of detectors containing pixel defects is considered.",2002,0, 7535,Thema: Byzantine-fault-tolerant middleware for Web-service applications,"Distributed applications composed of collections of Web services may call for diverse levels of reliability in different parts of the system. Byzantine fault tolerance (BFT) is a general strategy that has recently been shown to be practical for the development of certain classes of survivable, client-server, distributed applications; however, little research has been done on incorporating it into selective parts of multi-tier, distributed applications like Web services that have heterogeneous reliability requirements. To understand the impacts of combining BFT and Web services, we have created Thema, a new BFT middleware system that extends the BFT and Web services technologies to provide a structured way to build Byzantine-fault-tolerant, survivable Web services that application developers can use like other Web services. From a reliability perspective, our enhancements are also novel in that they allow Byzantine-fault-tolerant services: (1) to support the multi-tiered requirements of Web services, and (2) to provide standardized Web services support for their own clients (through WSDL interfaces and SOAP communication). In this paper we study key architectural implications of combining BFT with Web services and provide a performance evaluation of Thema using the TPC-W benchmark.",2005,0, 7536,Performance evaluation of gang scheduling in distributed real-time systems with possible software faults,"Distributed real-time systems play an increasingly vital role in our daily life. The most important aspect of such systems is the scheduling algorithm, which must guarantee that every job in the system will meet its deadline. In this paper we evaluate by simulation the performance of strategies for the scheduling of parallel jobs (gangs) in a homogeneous distributed real-time system with possible software faults. We provide an alternative version for each scheduling policy, which allows imprecise computations, and we propose a performance metric applicable to our problem. Our simulation results show that the alternative versions of the algorithms exhibit promising performance.",2008,0, 7537,An efficient and scalable approach for implementing fault-tolerant DSM architectures,"Distributed Shared Memory (DSM) architectures are attractive for executing high performance parallel applications. Made up of a large number of components, these architectures, however, have a high probability of failure. We propose a protocol to tolerate node failures in cache-based DSM architectures.
The proposed solution is based on backward error recovery and consists of an extension to the existing coherence protocol to manage both the data used by processors for computation and the recovery data used for fault tolerance. This approach can be applied to both Cache Only Memory Architectures (COMA) and Shared Virtual Memory (SVM) systems. The implementation of the protocol in a COMA architecture has been evaluated by simulation. The protocol has also been implemented in an SVM system on a network of workstations. Both simulation results and measurements show that our solution is efficient and scalable",2000,0, 7538,Awareness Behavioral Modelling for Fault Management of Agent-Based Computer Supported Cooperative Work,"Distributed software systems enable enterprise-wide computing using business applications installed in different geographical areas. The complexity of distribution in this type of software highlights the significance of managing cooperation between distributed nodes. Computer supported cooperative work (CSCW) transfers business requirements for cooperation to computer-based technical solutions. In such dynamic and distributed systems, when exceptional faults occur, cooperation is hard to achieve without enough knowledge about the context; such knowledge is called awareness. The dynamic nature of distributed software systems directs us to use agents for management purposes. In this paper, we propose a behavioral model of the awareness that agents need to possess for effective fault management while they are engaged in cooperative work. The main contribution of this paper is presented through an example scenario in network management.",2009,0, 7539,Developing Fault Tolerant Distributed Systems by Refinement,"Distributed systems are usually large and complex systems composed of various components. System components are subject to various errors. These failures often require error recovery to be conducted at the architectural level. However, due to the complexity of distributed systems, specifying fault tolerance mechanisms at the architectural level is complex and error prone. In this paper, we propose a formal approach to specifying components and architectures of fault tolerant distributed and reactive systems. Our approach is based on refinement in the action system formalism - a framework for formal model-driven development of distributed systems. We demonstrate how to specify and refine fault tolerant components and complex distributed systems composed of them. The proposed approach provides designers with a systematic method for developing distributed fault tolerant systems.",2010,0, 7540,Automated fault analysis using an intelligent monitoring system,"Distribution feeders are complex systems comprised of numerous components, which are expected to function properly for decades. Electrical, mechanical and weather-related stresses combine to degrade components. Degradation accumulates over time, gradually impairing components' ability to perform properly and ultimately leading to failures, faults and outages. Work at Texas A&M has documented electrical parametric changes that occur as apparatus degrade.
Taking advantage of these changes holds promise for helping utilities improve service quality and reliability, but intelligent algorithms and systems are required to acquire, analyze and otherwise manage the significant volume of data necessary to realize such benefits.",2009,0, 7541,An Intelligent BIST Mechanism for MEMS Fault Detection,"The diversity of application fields and the properties of new materials generate new failure mechanisms in micro-electro-mechanical systems (MEMS). If we take into account the lessons from the past in microelectronics, we note that failure analysis played a major role not only in development time reduction but also in qualification and reliability evaluation. Most of the research done on MEMS reliability concerns new material properties and fabrication technologies. Only a few methods have been introduced for fault detection in MEMS. Some of these methods can be used only for special MEMS. Additionally, most of them need a precise model of the system. In this paper a new intelligent method is proposed for fault detection in MEMS. In addition, some parts of the proposed neural network are modified in order to implement it as a BIST mechanism.",2007,0, 7542,Radiation fault modeling and fault rate estimation for a COTS based space-borne supercomputer,"Development of the Remote Exploration and Experimentation (REE) Commercial Off The Shelf (COTS) based space-borne supercomputer requires a detailed model of Single Event Upset (SEU) induced faults and fault-effects. Extensive ground based radiation testing has been performed on several generations of the Power PC processor family and related components. A set of relevant environments for NASA missions have been analyzed and detailed. Combining radiation test data, environmental data and architectural analysis, we have developed a radiation fault model for the REE system. The fault model is hierarchically organized and includes scaling factors and optional parameters for fault prediction in future technologies and alternative architectures. It has been implemented in a generic tool, which allows for ease of input and straightforward porting. The model currently includes the Power PC750 (G3), PCI bridge chips, L2 cache SRAM, main memory DRAM, and the Myrinet packet switched network. In this paper, we present the REE radiation fault model and accompanying tool set. We explain its derivation, its structure and use, and the work being done to validate it.",2002,0, 7543,Joint Base Station Placement and Fault-Tolerant Routing in Wireless Sensor Networks,"Fault tolerance techniques have been widely used in wireless sensor networks. Base station placement to maximize the network lifetime has also been well studied. However, limited research has been done on the joint base station placement and fault-tolerant routing problem. To fill this void, we study this problem and present a fully polynomial time approximation scheme in this paper. Our scheme can compute a (1 - ε)-approximation with a running time bounded by a polynomial in 1/ε and the input size of the instance. Although our solution is presented for the model where the base station can be placed anywhere, it can be easily extended to cases where forbidden areas are present or candidate locations for the base station are given. To the best of our knowledge, this paper is the first theoretical result on this problem.",2009,0, 7544,Surviving errors in component-based software,"Fault tolerance techniques use some form of redundancy (e.g.
hardware, software, data) to deal with runtime errors and provide system repair, state restoration and error masking. However, these techniques come with a high cost in terms of system complexity and time penalties during system execution, which not all systems can afford. A cheaper alternative is to survive an error by removing the affected part of the system and gracefully degrading to a lower state of functionality. In component-based software, graceful degradation of system functionality translates into the gradual removal of the components that are affected by errors. The modular nature of component-based software makes the consideration of graceful degradation in the system design a straightforward task. Even for component-based software that is designed without any provision for graceful degradation, a mechanism can be added to the runtime system to operate on the component bindings and provide graceful degradation.",2005,0, 7545,"Design, implementation and performance of fault-tolerant message passing interface (MPI)","Fault tolerant MPI (FTMPI) adds fault tolerance to MPICH, an open source GPL-licensed implementation of the MPI standard by Argonne National Laboratory's Mathematics and Computer Science Division. FTMPI is a transparent fault-tolerant environment, based on a synchronous checkpointing and restarting mechanism. FTMPI relies on a non-multithreaded single-process checkpointing library to synchronously checkpoint an application process. A globally replicated system controller and cluster-node-specific node controllers monitor and control the checkpointing and recovery activities of all MPI applications within the cluster. This work details the architecture that provides a fault-tolerance mechanism for MPI-based applications running on clusters, and the performance of the NAS parallel benchmarks and the parallelized medium-range weather forecasting models P-T80 and P-T126. The architecture also addresses the following issues: replicating the system controller to avoid a single point of failure, ensuring consistency of checkpoint files based on a distributed two-phase commit protocol, and a robust fault detection hierarchy.",2004,0, 7546,Extending fault trees with an AND-THEN gate,"Fault trees have been used for software safety analysis in various safety critical systems. The PRIORITY-AND gate was proposed because the conventional AND gate cannot be used to represent the sequential order of events. The paper shows that even the PRIORITY-AND gate is not expressive enough to represent the relative temporal order of events precisely. We extend fault trees with an AND-THEN gate, the gate corresponding to the logical connective TAND. This increases the expressive power of fault trees. The AND-THEN gate can represent relative temporal relations precisely",2000,0, 7547,Reliability analysis of large fault trees using the Vesely failure rate,"Fault trees provide a compact, graphical, intuitive method to analyze system reliability. However, combinatorial fault tree analysis methods, such as binary decision diagrams, cannot be used to find the reliability of systems with repairable components. In such cases, the analyst should either use Markov models explicitly or generate Markov models from fault trees using automatic conversion algorithms. This process is tedious and generates huge Markov models even for moderately sized fault trees.
In this paper, the use of the Vesely failure rate as an approximation to the actual failure rate of the system, in order to find the reliability-based measures of large fault trees, is demonstrated. The main advantage of this method is that it calculates the reliability of a repairable system using combinatorial methods such as binary decision diagrams. The efficiency of this approximation is demonstrated by comparing it with several other approximations and providing various bounds for system reliability. The usefulness of this method in finding other reliability measures such as MTBF, MTTR, MTTF, and MTTFF is shown. Finally, this method is extended to analyze complex fault trees containing static and dynamic modules as well as events represented by other modeling tools.",2004,0, 7548,Concurrent error detection of fault-based side-channel cryptanalysis of 128-bit symmetric block ciphers,"Fault-based side channel cryptanalysis is very effective against symmetric and asymmetric encryption algorithms. Although straightforward hardware and time redundancy based concurrent error detection (CED) architectures can be used to thwart such attacks, they entail significant overhead (either area or performance). In this paper we investigate systematic approaches to low-cost, low-latency CED for symmetric encryption algorithms based on the inverse relationship that exists between encryption and decryption at the algorithm, round, and operation levels, and develop CED architectures that explore the trade-off between area overhead, performance penalty and error detection latency. The proposed techniques have been validated on FPGA implementations of AES finalist 128-bit symmetric encryption algorithms.",2001,0, 7549,Using Search Methods for Selecting and Combining Software Sensors to Improve Fault Detection in Autonomic Systems,"Fault-detection approaches in autonomic systems typically rely on runtime software sensors to compute metrics for CPU utilization, memory usage, network throughput, and so on. One detection approach uses data collected by the runtime sensors to construct a convex-hull geometric object whose interior represents the normal execution of the monitored application. The approach detects faults by classifying the current application state as being either inside or outside of the convex hull. However, due to the computational complexity of creating a convex hull in multi-dimensional space, the convex-hull approach is limited to a few metrics. Therefore, not all sensors can be used to detect faults and so some must be dropped or combined with others. This paper compares the effectiveness of genetic-programming, genetic-algorithm, and random-search approaches in solving the problem of selecting sensors and combining them into metrics. These techniques are used to find 8 metrics that are derived from a set of 21 available sensors. The metrics are used to detect faults during the execution of a Java-based HTTP web server. The results of the search techniques are compared to two hand-crafted solutions specified by experts.",2010,0, 7550,Staged-fault testing for high impedance fault data collection,"Faulted lines must be repaired and returned to service in the shortest possible time to provide reliable service to the customers. Fault detection is an important aspect of system protection, as it involves personnel as well as equipment safety. The research on high impedance fault (HIF) detection techniques is mainly based on conventional methods, familiar to the power system engineers.
TXU Electric Delivery (TXUED) and ABB have initiated a project to collect HIF data from staged-fault testing, to assist ABB in the research and development of suitable techniques for detecting high impedance faults. Since most protective relay engineers have spent their careers avoiding faults on the electric system, they are not usually comfortable with the process of staging faults on a system in service. The purpose of this paper is to describe one method of staging faults that is both safe for the utility personnel performing the tests and results in no interruption or break in service to the utility consumers connected to the distribution feeder. Also, the analysis of the resulting HIF test data is discussed. These staged fault tests occurred on January 27-29, 2004 and also on September 27-28, 2004 on a distribution feeder at the TXU Huntington Substation near Lufkin, Texas.",2005,0, 7551,A fault-injection attack on Fiat-Shamir cryptosystems,"Fault-injection attacks and cryptanalysis are a realistic threat for systems implementing cryptographic algorithms. We revisit the fault-injection attacks on the Fiat-Shamir authentication scheme, a popular authentication scheme for service providers like pay-per-view television, video distribution and cellular phones. We present a new and effective attack on cryptosystems that implement the Fiat-Shamir identification scheme. The attack is successful against all system configurations, in contrast to the original Bellcore attack, which has been proven incomplete (easy to defend against).",2004,0, 7552,Characterization of Upset-induced Degradation of Error-mitigated Highspeed I/O's Using Fault Injection,"Fault-injection experiments on Virtex-II FPGAs quantify failure and degradation modes in I/O channels incorporating triple modular redundancy (TMR). With increasing frequency (to 100 MHz), full TMR under both I/O standards investigated shows more configuration bits have a measurable performance effect.",2005,0, 7553,Automated Analysis of Faults and Disturbances: Utility Expectations and Benefits - CenterPoint Energy,"Faults and disturbances in a utility power system are typically analyzed using recorded data captured by digital fault recorders, digital protective relays, and other intelligent electronic devices. Several steps are involved in the process, including communicating with the devices, downloading files, viewing data, and protection engineers performing the actual analysis. This paper discusses automated analysis of this data, and expectations and benefits at CenterPoint Energy.",2007,0, 7554,Impedance-based fault location techniques for transmission lines,"Faults that occur on transmission lines are relatively more common in countries that have extreme weather. Most faults in such countries are caused by lightning storms. For long distance transmission lines, it can take a long time to find the faulted point. This not only prolongs the time needed to remove the fault and recover, but also increases the economic damage. If the fault can be precisely located, maintenance crews can reach the faulted point quickly and remove the fault in time. So precise location of the faulted point on a transmission line is very important for fault removal, improvement of system reliability, and reduction of the economic damage that is an inherent consequence of long-term outages. Hence, fault location methods are of much importance for utilities and researchers.
In this paper, three single-terminal impedance-based fault location techniques are investigated to show the reliability and deficiencies of each technique.",2009,0, 7555,Fault-tolerance in distributed query processing,"Fault-tolerance has long been a feature of database systems, with transactions supporting the structuring of applications so as to ensure continuation of updating applications in spite of machine failures. For read-only queries the perceived wisdom has been that support for fault-tolerance is too expensive to be worthwhile. Distributed query processing is coming to be seen as a promising way of implementing applications that combine structured data and analysis operations in dynamic distributed settings such as computational grids. Such a query may be long-running, and having to redo the whole query after a failure may cause problems (e.g. if the result may trigger business or safety critical activities). This work describes and evaluates a new scheme for adding fault-tolerance to distributed query processing through a rollback-recovery mechanism. The high level expression of user requests in a physical algebra offers opportunities for tuning the fault-tolerance provision so as to reduce the cost, and give better performance than employment of generic fault-tolerance mechanisms at the lowest level of query processing. This paper outlines how the publicly-available OGSA-DQP computational grid-based distributed query processing system can be modified to include support for fault-tolerance and presents a performance evaluation which includes measurements of the cost of both protocol overheads and rollback-recovery, for a set of example distributed queries.",2005,0, 7556,An efficient Routing Scheme to provide more Fault-tolerance for an Irregular Multistage Interconnection Network,"Fault-tolerance in an interconnection network is very important for its continuous operation over a relatively long period of time. Fault-tolerance is the ability of the system to continue operating in the presence of faults. In this paper a new irregular network, IABN, is proposed and an efficient routing procedure is defined to study the fault tolerance of the network. The behavior of this network has been analysed and compared with the regular network ABN, under fault-free conditions and in the presence of faults. It has been found that in an IABN, there are six possible paths between any source-destination pair, whereas the ABN has only two paths. Thus the proposed network IABN is more fault-tolerant.",2009,0, 7557,H.264 video communication based refined error concealment schemes,"Error resilience is an essential problem for video communications, such as digital TV broadcasting, mobile video terminals and video telephony. The latest video compression standard, H.264/AVC, provides more coding efficiency for a wide range of video consumer applications. Yet H.264 video streams are still vulnerable to transmission errors. In this paper, a set of error concealment techniques is proposed to provide error resilience based on the new coding and network characteristics of H.264. The temporal concealment involves a method of subblock-based refined motion compensated concealment using a weighted boundary match, which improves the ability to deal with high motion activity areas. The spatial concealment scheme involves an algorithm of refined directional weighted spatial interpolation, which can protect object edge integrity.
Combining the above algorithms, an adaptive spatial/temporal estimation method with low complexity is presented. Transmission over typical 3GPP/3GPP2 mobile IP channels is simulated with a wide range of bit rates and BERs. The refined concealment techniques provide more error robustness for video consumer electronics than those suggested in H.264, without any encoder-side modifications.",2004,0, 7558,Probability based power aware error resilient coding,"Error resilient encoding in video communication is becoming increasingly important due to data transmission over unreliable channels. In this paper, we propose a new power-aware error resilient coding scheme based on network error probability and user expectation in video communication using mobile handheld devices. By considering both image content and network conditions, we can achieve a fast recoverable and energy-efficient error resilient coding scheme. More importantly, our approach allows system designers to evaluate various operating points in terms of error resilience level and energy consumption over a wide range of system operating conditions. We have implemented our scheme on an H.263 video codec algorithm, compared it with the previous AIR, GOP and PGOP coding schemes, and measured energy consumption and video quality on the iPAQ and Zaurus PDAs. Our experimental results show that our approach reduces energy consumption by 34%, 24% and 17% compared with the AIR, GOP and PGOP schemes respectively, while incurring only a small fluctuation in the compressed frame size. In addition, our experimental results prove that our approach allows faster error recovery than the previous AIR, GOP and PGOP approaches. We believe our error resilient coding scheme is therefore eminently applicable for video communication on energy-constrained wireless mobile handheld devices.",2005,0, 7559,Error Resilient Mode Decision in Scalable Video Coding,"Error resilient macroblock mode decision has been extensively investigated in the literature for single-layer video coding, for which error resilient mode decision is also called intra refresh. In this paper, we present a loss-aware rate-distortion optimized macroblock mode decision algorithm for scalable video coding, wherein more macroblock coding modes than intra and inter are involved. Thanks to its good performance, the proposed method has been adopted into the Joint Scalable Video Model by the Joint Video Team.",2006,0, 7560,Proxy-based error tracking for H.264 based real-time video transmission in mobile environments,"Error tracking (ET) is an error resilience technique for real-time video transmission over error-prone communication channels. In this paper we propose proxy-based ET for communication scenarios where the sender is in the wired Internet and the receiver is connected via a wireless link. Our main assumptions in this work are that there is a strong imbalance between the transmission rates available in the wired and the wireless Internet and that the round-trip delay is mainly caused by the wired part of the connection. We also assume that the majority of the packet loss is caused by the wireless link. In order to allow the proxy server to perform error tracking, an additional update stream is sent through the wired network.
This additional information is used by the proxy to improve the performance on the wireless link. We show that under these assumptions proxy-based error tracking leads to significantly improved performance for H.264 based real-time video communication in comparison to traditional end-to-end error tracking",2004,0, 7561,Improving sidescan sonar mosaic accuracy by accounting for systematic errors,"Estimating the systematic errors associated with sensor geometry during data acquisition and applying the appropriate corrections can improve the accuracy of a sidescan sonar mosaic. We first review existing techniques for assembling mosaics and explain where systematic errors arise within typical sidescan sonar surveys. We then describe a new technique called systematic error reduction and compare its advantages to the classical ""warping"" or ""tie-point alignment"" technique. The main advantages of the systematic error reduction technique are that it works on an absolute, geographic reference frame, does not require fully overlapped images, and can be applied/modified in real time. We also describe how to find the optimal vector offset and show the results obtained on a real data set. As the success of our method depends on user interaction (e.g., identifying seafloor features or targets to be used for vector offset estimation, specifying acceptable offset ranges), considerable effort has been made to develop an intuitive GUI as well as a set of consistency tests to help guide the operator toward appropriate choices. Consistency tests include checks on whether a reference target has been allocated to the appropriate cluster, visual confirmation of whether targets are converging or diverging with vector offset application, and graphs for each offset component showing how its range of values affects the global dispersion of clusters. Finally, we provide some thoughts on future development directions, in particular the estimation of offsets that vary spatially and temporally",2001,0, 7562,Error Analysis of Euclidean Reconstruction from Translational Motion,"Euclidean reconstruction of an object is one of the challenging problems in computer vision. This paper uses the singular value decomposition (SVD) method to solve the problem of Euclidean reconstruction from translational motion, using a single calibrated camera. This paper mainly analyzes how different elements (camera intrinsic parameters, correspondence error, and the value and direction of translation) affect the reconstruction results. The error analysis is based on both simulated data experiments and a real data experiment. The results of the experiments show that the correspondence error and the value and direction of translation are the main elements affecting the reconstruction results.",2009,0, 7563,Evaluating speech recognition in the context of a spoken dialogue system: critical error rate,"Evaluating a speech recognition system is a key issue towards understanding its deficiencies and focusing potential improvements on useful aspects. When a system is designed for a given application, it is particularly relevant to have an evaluation procedure that reflects the role of the system in this application. Evaluating continuous speech recognition through word error rate is not completely appropriate when the speech recognizer is used as spoken dialogue system input. Some errors are particularly harmful, when they concern content words for example, while some others do not have any impact on the following comprehension step.
The attempt is not to evaluate natural language understanding but to propose a more appropriate evaluation of speech recognition, by making use of semantic information to define the notion of critical errors.",2001,0, 7564,An Automated Approach for Scheduling Bug Fix Tasks,"Even if a development team uses the best Software Engineering practices to produce high-quality software, end users may find defects that were not previously identified during the software development life-cycle. These defects must be fixed and new versions of the software incorporating the patches that solve them must be released. The project manager must schedule a set of error correction tasks with different priorities in order to minimize the time required to accomplish these tasks and guarantee that the more important issues have been fixed. Given the large number of distinct schedules, an automatic tool to find good schedules may be helpful to project managers. This work proposes a method which captures relevant information from bug repositories and submits it to a genetic algorithm to find near optimal bug correction task schedules. We have evaluated the approach using a subset of the Eclipse bug repository and it suggested better schedules than the actual schedules followed by Eclipse developers.",2010,0, 7565,Assessing and improving the effectiveness of logs for the analysis of software faults,"Event logs are the primary source of data to characterize the dependability behavior of a computing system during the operational phase. However, they are inadequate to provide evidence of software faults, which are nowadays among the main causes of system outages. This paper proposes an approach based on software fault injection to assess the effectiveness of logs to keep track of software faults triggered in the field. Injection results are used to provide guidelines to improve the ability of logging mechanisms to report the effects of software faults. The benefits of the approach are shown by means of experimental results on three widely used software systems.",2010,0, 7566,Exact symbol-error probability analysis for orthogonal space-time block codes: two- and higher dimensional constellations cases,"Exact expressions are obtained for the symbol-error probability of orthogonal space-time block codes at the output of the coherent maximum-likelihood decoder in the general case of arbitrary input signal constellation and code. Such expressions are derived for the cases of both deterministic (fixed) and random Rayleigh/Ricean fading channels, and both the two- and higher dimensional constellations.",2004,0, 7567,9D-6 Signal Analysis in Scanning Acoustic Microscopy for Non-Destructive Assessment of Connective Defects in Flip-Chip BGA Devices,"Failure analysis in industrial applications often requires methods working non-destructively for allowing a variety of tests on a single device. Scanning acoustic microscopy in the frequency range above 100 MHz provides high axial and lateral resolution, a moderate penetration depth and the required non-destructivity. The goal of this work was the development of a method for detecting and evaluating connective defects in densely integrated flip-chip ball grid array (BGA) devices. A major concern was the ability to automatically detect and differentiate the ball-connections from the surrounding underfill and the derivation of a binary classification between void and intact connection.
Flip chip ball grid arrays with a 750 μm silicon layer on top of the BGA were investigated using time resolved scanning acoustic microscopy. The microscope used was an Evolution II (SAM TEC, Aalen, Germany) in combination with a 230 MHz transducer. Short acoustic pulses were emitted into the silicon through an 8 mm liquid layer. In receive mode reflected signals were recorded, digitized and stored on the SAM's internal hard drive. The off-line signal analysis was performed using custom-made MATLAB (The Mathworks, Natick, USA) software. The sequentially working analysis characterized echo signals by pulse separation to determine the positions of BGA connectors. Time signals originating at the connector interface were then investigated by wavelet- (WVA) and pulse separation analysis (PSA). Additionally the backscattered amplitude integral (BAI) was estimated. For verification purposes defects were evaluated by X-ray- and scanning electron microscopy (SEM). It was observed that ball connectors containing cracks seen in the SEM images show decreased values of wavelet coefficients (WVC). However, the relative distribution was broader compared to intact connectors. It was found that the separation of pulses originating at the entrance and exit of the ball array corresponded to the condition of the connector. The success rate of the acoustic method in detecting voids was 96.8%, as verified by SEM images. Defects revealed by the acoustic analysis and confirmed by SEM could be detected by X-ray microscopy only in 64% of the analysed cases. The combined analyses enabled a reliable and non-destructive detection of defective ball-grid array connectors. The performance of the automatically working acoustical method seemed superior to X-ray microscopy in detecting defective ball connectors.",2007,0, 7568,Fast soft error rate computing technique based on state probability propagating,"A fast soft error sensitivity characterization technique is essential for the soft error tolerance optimization of modern VLSI circuits. In this paper, an efficient soft error evaluation technique based on syntax analysis and state probability propagating technique is developed, which can automatically analyze the soft error rate of combinational logic circuits and the combinational part of sequential circuits in Verilog synthesized netlist within a few seconds. We implemented the idea in a software tool called HSECT-ANLY, which uses Verilog syntax analysis to automate the soft error rate evaluation procedure and state propagating technique to speed up the analysis process. By using HSECT-ANLY, experiments are carried out on some ISCAS'85 and ISCAS'89 benchmark circuits implemented with TSMC 0.18 μm technology and results are obtained. The result comparison with the traditional test vector propagating technique shows that the introduced method is much faster (2-3 orders of magnitude speedup) with some accuracy losses, and is very suitable for the reliability optimization as the sub-algorithm of the optimization algorithms such as genetic algorithms to evaluate the fitness (soft error rate) rapidly.",2009,0, 7569,SLICED: Slide-based concurrent error detection technique for symmetric block ciphers,"Fault attacks, wherein faults are deliberately injected into cryptographic devices, can compromise their security. Moreover, in the emerging nanometer regime of VLSI, accidental faults will occur at very high rates.
While straightforward hardware redundancy based concurrent error detection (CED) can detect transient and permanent faults, it entails 100% area overhead. On the other hand, time redundancy based CED can only detect transient faults with minimum area overhead but entails 100% time overhead. In this paper we present a general time redundancy based CED technique called SLICED for pipelined implementations of symmetric block ciphers. SLICED SLIdes one encryption over another and compares their results for CED as a basis for protection against accidental faults and deliberate fault attacks.",2010,0, 7570,PSO and ANN-based fault classification for protective relaying,"Fault classification in electric power system is vital for secure operation of power systems. It has to be accurate to facilitate quick repair of the system, improve system availability and reduce operating costs due to mal-operation of relay. Artificial neural networks (ANNs) can be an effective technique to help to predict the fault, when it is provided with characteristics of fault currents and the corresponding past decisions as outputs. This paper describes the use of particle swarm optimisation (PSO) for an effective training of ANN and the application of wavelet transforms for predicting the type of fault. Through wavelet analysis, faults are decomposed into a series of wavelet components, each of which is a time-domain signal that covers a specific octave frequency band. The parameters selected for fault classification are the detailed coefficients of all the phase current signals, measured at the sending end of a transmission line. The information is then fed into ANN for classifying the faults. The proposed PSO-based multi-layer perceptron neural network gives 99.91% fault classification accuracy. Moreover, it is capable of producing fast and more accurate results compared with the back-propagation ANN. Extensive simulation studies were carried out and a set of results taken from the simulation studies are presented in this paper. The proposed technique when combined with a wide-area monitoring system would be an effective tool for detecting and identifying the faults in any part of the system.",2010,0, 7571,Fault detection and accommodation in dynamic systems using adaptive neuro-fuzzy systems,"Fault detection and accommodation plays a very important role in critical applications. A new software redundancy approach based on an adaptive neuro-fuzzy inference system (ANFIS) is introduced. An ANFIS model is used to detect the fault while another model is used to accommodate it. An accurate plant model is assumed with arbitrary additive faults. The two models are trained online using a gradient-based approach. The accommodation mechanism is based on matching the output of the plant with the output of a reference model. Furthermore, the accommodation mechanism does not assume a special type of system or nonlinearity. Simulation studies prove the effectiveness of the new system even when a severe failure occurs. Robustness to noise and inaccuracies in the plant model is also demonstrated",2001,0, 7572,Facilitating Conventional Intelligent Techniques in the Fault Diagnosis of a Modern Financial Terminal Machine,"Fault detection and diagnosis is a very important and emerging topic in technological, as well as in medical applications. The request for more precise and fast methods is still emerging.
In this paper, after a brief introduction to the concept of fault diagnosis and the basic contributions of artificial intelligence techniques in the field, we present the application of a conventional intelligent diagnostic method in a technological system commonly used in the financial sector, i.e. a financial terminal used for electronic payments. The pilot application was implemented using conventional software development tools, such as a high level expert system development tool available for free for academic and evaluation purposes",2005,0, 7573,Fault diagnosis of the machines in power plants using LPC,"Fault diagnosis and monitoring of machine operation in power plants play an important role in the safe operation and maintenance of those operating machines. In this paper we propose a fault diagnosis algorithm using LPC coefficients with a sound acquisition system, showing that diagnosing faults of the operating machines through a single LPC spectrum is possible.",2004,0, 7574,"Fault detection, diagnostics, and prognostics: software agent solutions","Fault diagnosis and prognosis are important tools for the reliability, availability and survivability of navy all-electric ships. Extending the fault detection and diagnosis into predictive maintenance increases the value of this technology. The traditional diagnosis can be viewed as a single diagnostic agent having a model of the whole system to be diagnosed. This becomes inadequate when the system becomes large and distributed as on the electric ships. For such systems, the software multi-agents may offer a solution. This paper first presents a brief review on the traditional fault diagnosis method with an emphasis on its application to electric motors as important components on the all-electric ship. The software agent technology is then introduced. The discussion is made about how this technology supports the drastic manning reduction requirements for the future navy ships. Examples are given on the existing naval applications of diagnostic and prognostic software agents.",2005,0,5836 7575,A Novel Approach for Fault-Tolerant Ethernet Implementation,"Fault-tolerant Ethernet (FTE) aims to keep data flowing in the network continuously even if there are faults in the network. This is an important requirement in mission critical systems that cannot afford for the network to go down even for a few seconds. In this paper we propose a novel approach for fault-tolerant Ethernet implementation. The proposed approach is a hybrid of conventional hardware and layer 2 software based approaches. Our approach mostly shows the strengths of conventional approaches. We then demonstrate our approach using a single network with redundant cable model that can be easily expanded into a multi-cable model.",2008,0, 7576,Fault-Tolerant Routing for P2P System with Structured Topology,"Fault-tolerant routing in existing P2P technologies is still not ideally solved. A new P2P routing algorithm FT-p2p is proposed, which is mainly used to optimize the fault-tolerant routing. The algorithm is based on a directed graph and the division of the P2P network into two layers. The maintenance of routing information and network stability mostly depends on high-performance peers. When low-performance peers encounter difficulties, they may obtain routing-information service or data-relay service from high-performance peers.
Experimental results indicate that the FT-p2p algorithm is superior to the Chord, Tapestry and Koorde algorithms in fault-tolerant routing.",2008,0, 7577,Fault-tolerant scheduling using primary-backup approach for optical grid applications,"Fault-tolerant scheduling is an important issue for optical grid applications because of a wide range of grid resource failures. To improve the availability of the DAGs (directed acyclic graphs), a primary-backup approach is considered when making DAG scheduling decisions. Experiments demonstrate the effectiveness and the practicability of the proposed scheme.",2009,0, 7578,An optimal parallel average voting for fault-tolerant control systems,"Fault-tolerant systems are such systems that can continue their operation, even in the presence of faults. Redundancy as one of the main techniques in implementation of fault-tolerant control systems uses voting algorithms to choose the most appropriate value among multiple redundant and probably faulty results. Average (mean) voter is one of the commonest voting methods which is suitable for decision making in highly-available and long-mission applications in which the availability and speed of the system is critical. In this paper, we introduce a new generation of average voter based on parallel algorithms. Since parallel algorithms normally have high processing speed and are especially appropriate for large scale systems, we have used them to achieve an optimal parallel average voting algorithm with a time complexity of O(log n) using n/log n processors on an EREW shared-memory machine, for applications where the size of the input space is large.",2010,0, 7579,Demonstration and suppression of numerical divergence errors in FDTD analysis of practical microwave problems,"Field divergence emulation in FDTD is revisited, and new theoretical aspects as well as problems of practical importance are revealed and resolved. Various choices of divergence definition are discussed in terms of their predictive power. It is shown that total FDTD solutions inevitably violate Gauss law in dipole radiation or eigenvalue analysis. The theory of S- and P-eigenmodes is applied to understand these problems and to restore their physical solutions. Recipes for extracting correct radiation efficiency, radiation resistance, Q-factors and modal field patterns in the presence of P-modes are proposed.",2002,0, 7580,A genetic algorithm for automated horizon correlation across faults in seismic images,"Finding corresponding seismic horizons which have been separated by a fault is still performed manually in geological interpretation of seismic images. The difficulties of automating this task are due to the small amount of local information typical for those images, resulting in a high degree of interpretation uncertainty. Our approach is based on a model consisting of geological and geometrical knowledge in order to support the low-level image information. Finding the geologically most probable matches of several horizons across a fault is a combinatorial optimization problem, which cannot be solved exhaustively since the number of combinations increases exponentially with the number of horizons. A genetic algorithm (GA) has been chosen as the most appropriate strategy to solve the optimization problem. Our implementation of a GA is adapted to this particular problem by introducing geological knowledge into the solution process.
The results verify the suitability of the method and the appropriateness of the parameters chosen for the horizon correlation problem.",2005,0, 7581,Simulation-based bug trace minimization with BMC-based refinement,"Finding the cause of a bug can be one of the most time-consuming activities in design verification. This is particularly true in the case of bugs discovered in the context of a random simulation-based methodology, where bug traces, or counter-examples, may contain several hundred thousand cycles. In this work we propose Butramin, a bug trace minimizer. Butramin considers a bug trace produced by a random simulator or semi-formal verification software and produces an equivalent trace of shorter length. Butramin applies a range of minimization techniques, deploying both simulation-based and formal methods, with the objective of producing highly reduced traces that still expose the original bug. We evaluated Butramin on a range of designs, including the publicly available picoJava microprocessor. Our experiments show that in most cases Butramin is able to reduce traces to a small fraction of their initial size, in terms of cycle length and signals involved.",2005,0, 7582,Why people don't develop effective corrective actions,"Finding the root causes of problems is only half the story. Before a problem is fixed, someone must develop effective corrective actions. Why, even when the root causes are staring them in the face, do so many people still fail to develop effective corrective actions? This paper will draw on the observations of the authors' experience with incident investigators and investigative teams at power plants and across a wide variety of industries and cover the three common reasons why people DON'T develop effective corrective actions: (1) people start fixing problems before finding the problem's root causes; (2) management has no real commitment to fix problems; (3) people developing corrective actions can't see ""Outside the Box"". The authors will then suggest ideas to help investigators develop effective corrective actions. Finally, the authors discuss the lessons they learned when implementing one of the ideas for improving corrective actions.",2002,0, 7583,A Universal Fault Diagnostic Expert System Based on Bayesian Network,"Fault diagnosis is an area of great concern for any industry seeking to reduce maintenance cost and increase profitability at the same time. But most of the research tends to rely on sensor data and equipment structure, which are expensive because each category of equipment differs from the others. Thus developing a universal system remains a key challenge to be solved. A universal expert system is developed in this paper making full use of experts' knowledge to diagnose the possible root causes and the corresponding probabilities for maintenance decision making support. Bayesian network was chosen as the inference engine of the system through raw data analysis. Improved causal relationship questionnaire and probability scale method were applied to construct the Bayesian network. The system has been applied to the production line of a chipset factory and the results show that the system can support decision making for fault diagnosis promptly and correctly.",2008,0, 7584,Fault diagnosis of continuous systems using discrete-event methods,"Fault diagnosis is crucial for ensuring the safe operation of complex engineering systems.
Although discrete-event diagnosis methods are used extensively, they do not easily apply to parametric fault isolation in systems with complex continuous dynamics. This paper presents a novel discrete-event system diagnosis approach for abrupt parametric faults in continuous systems that is based on a qualitative abstraction of measurement deviations from the nominal behavior. Our approach systematically generates a diagnosis model from bond graphs that is used to analyze system diagnosability and derive the discrete-event diagnoser. The proposed approach is applied to an electrical power system diagnostic testbed.",2007,0, 7585,Low Voltage Fault Attacks on the RSA Cryptosystem,"Fault injection attacks are a powerful tool to exploit implementation weaknesses of robust cryptographic algorithms. The faults induced during the computation of the cryptographic primitives allow the extraction of pieces of information about the secret parameters stored in the device using the erroneous results. Various fault induction techniques have been researched, both to make practical several theoretical fault models proposed in open literature and to outline new kinds of vulnerabilities. In this paper we describe a non-invasive fault model based on the effects of underfeeding the power supply of an ARM general purpose CPU. We describe the methodology followed to characterize the fault model on an ARM9 microprocessor and propose and mount attacks on implementations of the RSA primitives.",2009,0, 7586,Incorporating error detection and online reconfiguration into a regular architecture for the advanced encryption standard,"Fault injection based attacks on cryptographic devices aim at recovering the secret keys by inducing an error in the computation process. They are now considered a real threat and countermeasures against them must be taken. In this paper, we describe an extension to an existing AES architecture proposed by Mangard et al. (2003), which provides error detection and fault tolerance by exploiting the high regularity of the architecture. The proposed design is capable of performing online error detection and reconfiguring internal data paths to protect against faults occurring in the computation process. We also describe how different redundancy levels provide protection against different numbers of errors. The presented design incorporating fault detection and tolerance has the same throughput as the base architecture but incurs a nonnegligible area overhead. This overhead is about 40% for the fault detection circuitry and 134% for the entire fault detection and tolerance (through reconfiguration). Although quite high, this overhead is still lower than for reference solutions such as duplication (providing detection) and triple modular redundancy (providing fault masking).",2005,0, 7587,A Modified Debugging Infrastructure to Assist Real Time Fault Injection Campaigns,"Fault injection is frequently used for the verification and validation of the fault tolerant features of microprocessors. This paper proposes the modification of a common on-chip debugging (OCD) infrastructure to add fault injection capabilities and improve performance.
The proposed solution imposes a very low logic overhead and provides a flexible and efficient mechanism for the execution of fault injection campaigns, being applicable to different target system architectures",2006,0, 7588,Definition and performance evaluation of a fault localization technique for an NGN IMS network,"Fault Localization (FL) is a critical task for operators in the context of e-TOM (enhanced Telecom Operations Map) assurance process, in order to reduce network maintenance costs and improve availability, reliability, and performance of network services. This paper investigates, from a practical perspective, the use of a well-known FL technique, named the codebook technique, for the IMS control layer of a real Next Generation Network, deploying wireline VoIP and advanced communication services. Moreover, we propose some heuristics to generate optimal codebooks, i.e. to find the minimum set of symptoms (alarms) to be monitored in order to obtain the desired level of robustness to spurious or missing alarms and modelling errors in the root cause detection, and we evaluate their performance through extensive simulations. Finally, we provide a list of some practical Key Performance Indicators, the value of which is compared against specific thresholds. When a threshold is exceeded, an alarm is generated and used by the FL processing.",2009,0, 7589,A Simple Coverage-Based Locator for Multiple Faults,"Fault localization helps spot faults in source code by exploiting automatically collected data. Deviating from other fault locators relying on hit spectra or test coverage information, we do not compute the likelihood of each possible fault location by evaluating its participation in failed and passed test cases, but rather search for each failed test case the set of possible fault locations explaining its failure. Assuming a probability distribution of the number of faults as the only other input, we can compute the probability of faultiness for each possible fault location in the presence of arbitrarily many faults. As the main threat to the viability of our approach we identify its inherent complexity, for which we present two simple bypasses. First experiments show that while leaving room for improvement, our approach is already feasible in practical cases.",2009,0, 7590,Fault Detection and Localization in Smart Grid: A Probabilistic Dependence Graph Approach,"Fault localization in the nation's power grid networks is known to be challenging, due to the massive scale and inherent complexity. In this study, we model the phasor angles across the buses as a Gaussian Markov random field (GMRF), where the partial correlation coefficients of GMRF are quantified in terms of the physical parameters of power systems. We then take the GMRF-based approach for fault diagnosis, through change detection and localization in the partial correlation matrix of GMRF. Specifically, we take advantage of the topological hierarchy of power systems, and devise a multi-resolution inference algorithm for fault localization, in a distributed manner. Simulation results are used to demonstrate the effectiveness of the proposed approach.",2010,0, 7591,Application for fault location in electrical power distribution systems,"Fault location has been studied deeply for transmission lines due to its importance in power systems. Nowadays the problem of fault location on distribution systems is receiving special attention mainly because of the power quality regulations.
In this context, this paper presents application software developed in Matlab that automatically calculates the location of a fault in a distribution power system, starting from voltages and currents measured at the line terminal and the model of the distribution power system data. The application is based on an N-ary tree structure, which is suitable to be used in this application due to the highly branched and non-homogeneous nature of the distribution systems, and has been developed for single-phase, two-phase, two-phase-to-ground, and three-phase faults. The implemented application is tested by using fault data in a real electrical distribution power system.",2007,0, 7592,"Current Sensor Fault Detection, Isolation, and Reconfiguration for Doubly Fed Induction Generators","Fault tolerance is gaining growing interest to increase the reliability and availability of distributed energy sources. Current sensor fault detection, isolation, and reconfiguration are presented for a voltage-oriented controlled doubly fed induction generator, which is mainly used in wind turbines. The focus of this analysis is on the isolation of the faulty sensor and the actual reconfiguration. During a short period of open-loop operation, the fault is isolated by looking at residuals calculated from observed and measured signals. Then, replacement signals from observers are used to reconfigure the drive and reenter closed-loop control. Laboratory measurement results are included to prove that the proposed concept leads to good results.",2009,0, 7593,Validation of fault tolerance mechanisms of an onboard system,"Fault tolerance is of importance to increase the dependability of computer systems. However, it is hard to evaluate and validate, which is either time-consuming or costly. Fault injection is an effective method to validate fault tolerance mechanisms. Among different fault injection techniques, software-implemented fault injection (SWIFI) is the most promising one. Although interesting, major drawbacks of existing SWIFI are that temporal and spatial overheads induced into target systems will influence their behavior. In this paper, fault tolerance of an onboard system is tested by a new SWIFI technique which will not induce any additional overheads into the target system under test. The technique is based on the embedded processor debug interface standard Nexus, which ensures that faults are injected into the target system with its behavior being traced while not altering normal execution of the target system",2006,0, 7594,Fault Tolerance in Cluster Federations with O2P-CF,"Fault tolerance is one of the key issues for large scale applications executed on high performance computing systems. In a cluster federation, clusters are gathered to provide huge computing power. To work efficiently on such systems, network characteristics have to be taken into account: the latency between two nodes of different clusters is much higher than the latency between two nodes of the same cluster. In this paper, we present O2P-CF, a message logging protocol well-suited to provide fault tolerance for message passing applications executed on cluster federations.
O2P-CF is based on the combination of O2P, an extremely optimistic message logging protocol, with a pessimistic message logging protocol.",2008,0, 7595,Application of Multisim8 in the Circuits Fault Simulation,"Fault simulation is an important step in the study of circuits fault diagnosis, and the choice of software has a big effect on efficiency and correctness. The paper introduces Multisim8 simulation software used in the circuits fault simulation, using a typical digital-analog circuit to illustrate the process of the fault simulation and the points that require attention. It is shown that the process is simple and the effect is good. Multisim8 is especially well suited to the fault simulation of digital-analog circuits, because it can simulate digital and analog components together.",2007,0, 7596,Fault tolerant scheduling of precedence task graphs on heterogeneous platforms,"Fault tolerance and latency are important requirements in several applications which are time critical in nature: such applications require guarantees in terms of latency, even when processors are subject to failures. In this paper, we propose a fault tolerant scheduling heuristic for mapping precedence task graphs on heterogeneous systems. Our approach is based on an active replication scheme, capable of supporting ε arbitrary fail-silent (fail-stop) processor failures, hence valid results will be provided even if ε processors fail. We focus on a bi-criteria approach, where we aim at minimizing the latency given a fixed number of failures supported in the system, or the other way round. Major achievements include a low complexity, and a drastic reduction of the number of additional communications induced by the replication mechanism. Experimental results demonstrate that our heuristics, despite their lower complexity, outperform their direct competitor, the FTBAR scheduling algorithm [3].",2008,0, 7597,Embryonics+immunotronics: a bio-inspired approach to fault tolerance,"Fault tolerance has always been a standard feature of electronic systems intended for long-term missions. However, the high complexity of modern systems makes the incorporation of fault tolerance a difficult task. Novel approaches to fault tolerance can be achieved by drawing inspiration from nature. Biological organisms possess characteristics such as healing and learning that can be applied to the design of fault-tolerant systems. This paper extends the work on bio-inspired fault-tolerant systems at the University of York. It is proposed that by combining embryonic arrays with an immune inspired network, it is possible to achieve systems with higher reliability",2000,0, 7598,Using Fault Current Limiter to minimize effect of Thyristor Controlled Series Capacitor on over reach problem of distance protection,"FACTS series devices such as the thyristor controlled series capacitor (TCSC) are used to improve the power transfer capability of long transmission lines. These series connected FACTS devices inject a voltage in series with the line, changing the line impedance and causing the impedance seen by the distance relay to be lower or higher than the actual line impedance, so that the distance relay either overreaches or underreaches. In this paper, a new method is proposed to minimize the impact of TCSC on the distance relay.
In this method each TCSC is equipped with a Variable Impedance Fault Current Limiter; when a fault occurs, this Fault Current Limiter enters the system and, by injecting series impedance, minimizes the impact of the TCSC on the distance relay. The simulation results verify the suggested method.",2009,0, 7599,Determining the faulted phase,"In August 1999, a lightning strike caused a misoperation of a relay installed in the late 1980s. The relay misoperation caused a two-minute outage at a petrochemical plant and led to an exhaustive root-cause analysis. The misoperation can be attributed to incorrect fault type selection in a distance element-based, 1980s-era relay. Two separate events in different locations, one in December 2007 and another in March 2009, highlight additional incorrect operations that occurred due to the same problem and root cause. The recent events remind us that this topic is still important and should be reviewed. This paper shares details about three challenging case studies and their root causes. Methodical root-cause analysis techniques are used, including mathematical simulation and testing of old and newer relay designs. This paper contrasts distance and fault identification algorithms, demonstrates methodical analysis techniques, and proposes solutions. Fault type selection logic is discussed, and the evolution and improvement of faulted phase selection logic over several decades is demonstrated. A newer relay design, available since 1993, is proven to have improved performance, namely better security, for these challenging cases.",2010,0, 7600,A Quickly Skew Correction Algorithm of Bill Image,"In automatic document image processing systems, skew correction and black margin removal are important principal steps. In this paper, a new algorithm for quick skew correction is proposed. Based on characteristics of bill images, the positions of the four corners are obtained and the slant angle is calculated. By using the slant angle to rotate the bill image and remove the black margin, the needed bill image can be obtained. Experiments show that our methods can reduce black margin noise well and can detect the slanting angle of an image rapidly and accurately.",2010,0, 7601,Integrated Fault Tolerant System for Automotive Bus Networks,"In automotive bus networks all the electronic components are interconnected to transmit and receive signals. The number of vehicles equipped with electronic components is increasing rapidly by replacing traditional mechanical and hydraulic systems. Nowadays most cars are functioning properly via many Electronic Control Units (ECUs), sensors and actuators, among which more than 2500 electronic signals are exchanged. Several bus systems have been developed to satisfy the different requirements of automotive applications: Local Interconnection Network (LIN), Controller Area Network (CAN), FlexRay and Media Oriented System Transport (MOST). However, there is a growing demand for combining these different bus networks in order to increase the efficiency and safety of the vehicle systems. The integrated automotive bus system, which communicates with software/hardware components on the different bus systems in a car, is a more challenging problem.
How to interconnect these automotive bus networks in a fault-tolerant way is addressed in this paper.",2010,0, 7602,Source Code Retrieval for Bug Localization Using Latent Dirichlet Allocation,"In bug localization, a developer uses information about a bug to locate the portion of the source code to modify to correct the bug. Developers expend considerable effort performing this task. Some recent static techniques for automatic bug localization have been built around modern information retrieval (IR) models such as latent semantic indexing (LSI); however, latent Dirichlet allocation (LDA), a modular and extensible IR model, has significant advantages over both LSI and probabilistic LSI (pLSI). In this paper we present an LDA-based static technique for automating bug localization. We describe the implementation of our technique and three case studies that measure its effectiveness. For two of the case studies we directly compare our results to those from similar studies performed using LSI. The results demonstrate our LDA-based technique performs at least as well as the LSI-based techniques for all bugs and performs better, often significantly so, than the LSI-based techniques for most bugs.",2008,0, 7603,Modeling of dynamic errors in algorithmic A/D converters,"In communication applications, the requirements on A/D converters are high and increasing. To be able to design high-performance converters, it is important to understand the speed limitations. In this work, performance decrease caused by dynamic errors related to settling time of the switched circuits at high sampling frequencies is investigated",2001,0, 7604,A hierarchical framework for fault propagation analysis in complex systems,"In complex systems, there are a few critical failure modes. Prognostic models are focused on predicting the evolution of those critical faults, assuming that other subsystems in the same system are performing according to their design specifications. In practice, however, all the subsystems are undergoing deterioration that might accelerate the time evolution of the critical fault mode. This paper aims at analyzing this aspect, i.e. interaction between different fault modes in various subsystems, of the failure prognostic problem. The application domain focuses on an aero propulsion system of the turbofan type. Creep in the high-pressure turbine blade is one of the most critical failure modes of aircraft engines. The effects of health deterioration of low-pressure compressor and high-pressure compressor on creep damage of high-pressure turbine blades are investigated and modeled.",2009,0, 7605,An Interaction-Pattern-Based Approach to Prevent Performance Degradation of Fault Detection in Service Robot Software,"In component-based robot software, it is crucial to monitor software faults and deal with them on time before they lead to critical failures. The main causes of software failures include limited resources, component-interoperation mismatches, and internal errors of components. Message-sniffing is one of the popular methods to monitor black-box components and handle these types of faults during runtime. However, this method normally causes some performance problems of the target software system because the fault monitoring and detection process consumes a significant amount of resources of the target system.
There are three types of overheads that cause the performance degradation problems: frequent monitoring, transmission of a large amount of monitoring-data, and the processing time for fault analysis. In this paper, we propose an interaction-pattern-based approach to reduce the performance degradation caused by fault monitoring and detection in component-based service robot software. The core idea of this approach is to minimize the number of messages to monitor and analyze in detecting faults. Message exchanges are formalized as interaction patterns which are commonly observed in robot software. In addition, important messages that need to be monitored are identified in each of the interaction patterns. An automatic interaction pattern-identification method is also developed. To prove the effectiveness of our approach, we have conducted a performance simulation. We are also currently applying our approach to silver-care robot systems.",2010,0, 7606,Identifying error proneness in path strata with genetic algorithms,"In earlier work we have demonstrated that GA can successfully identify error prone paths that have been weighted according to our weighting scheme. In this paper we investigate whether the depth of strata in the software affects the performance of the GA. Our experiments show that the GA performance changes throughout the paths. It performs better in the upper, less in the middle and best in the lower layer of the paths. Although various methods have been applied for detecting and reducing errors in software, little research has been done into partitioning a system into smaller, error prone domains for software quality assurance. To identify error proneness in software paths is important because by identifying them, they can be given priority in code inspections or testing. Our experiments observe to what extent the GA identifies errors seeded into paths using several error seeding strategies. We have compared our GA performance with random path selection.",2005,0, 7607,Improving testability and fault analysis in low level design,"In earlier work, Fault Analysis (FA) has been exploited for several aspects of analog and digital testing. These include test development, Design for Test (DFT) schemes qualification, and fault grading. Higher quality fault analysis will reduce the number of defective chips that slip past the tests and end up in customers' systems. This is commonly referred to as defective parts per million (DPM) that are shipped. This paper attempts to improve the fault diagnosis, controllability and testability of testing methodology. The proposed test method takes advantage of good fault coverage in low level designs. In this low level design, the IDDQ fault was focused on, and the testability has been enhanced in the testing procedure using a simple fault injection technique. The faults have been diagnosed by building a Built-in Current Sensor (BISC). Here the design under test (DUT) is a two-stage CMOS operational amplifier. The simulated result confirms that the number of patterns used for testing is reduced and the test coverage is also increased.",2010,0, 7608,Applying fault correction profiles,"In general, software reliability models have focused on modeling and predicting the failure detection process and have not given equal priority to modeling the fault correction process. However, it is important to address the fault correction process in order to identify the need for process improvements.
Process improvements, in turn, will contribute to achieving software reliability goals. We introduce the concept of a fault correction profile - a set of functions that predict fault correction events as a function of failure detection events. The fault correction profile identifies the need for process improvements and provides information for developing fault correction strategies. Related to the fault correction profile is the goal fault correction profile. This profile represents the fault correction goal against which the achieved fault correction profile can be compared. This comparison motivates the concept of fault correction process instability, and the attributes of instability. Applying these concepts to the NASA Goddard Space Flight Center fault correction process and its data, we demonstrate that the need for process improvement can be identified, and that improvements in process would contribute to meeting product reliability goals.",2003,0, 7609,Fault correction profiles,"In general, software reliability models have focused on modeling and predicting the failure detection process and have not given equal priority to modeling the fault correction process. However, it is important to address the fault correction process in order to identify the need for process improvements. Process improvements, in turn, will contribute to achieving software reliability goals. We introduce the concept of a fault correction profile - a set of functions that predict fault correction events as a function of failure detection events. The fault correction profile identifies the need for process improvements and provides information for developing fault correction strategies. Related to the fault correction profile is the goal fault correction profile. This profile represents the fault correction goal against which the achieved fault correction profile can be compared. This comparison motivates the concept of fault correction process instability, and the attributes of instability. Applying these concepts to the NASA Goddard Space Flight Center fault correction process and its data, we demonstrate that the need for process improvement can be identified, and that improvements in process would contribute to meeting product reliability goals.",2003,0, 7610,On the use of Dirichlet process mixtures for the modelling of pseudorange errors in multi-constellation based localisation,"In Global Navigation Satellite Systems (GNSS) positioning, the receiver measures the pseudoranges with respect to each observable navigation satellite and determines the user position. The use of many constellations should lead to highly available, highly accurate navigation anywhere. However, it is important to notice that even if modern receivers achieve high position accuracy in line-of-sight (LOS) conditions, multipath propagation highly degrades positioning performance even in multi-constellation based localisation (GPS + Galileo, for instance). In urban areas, some obstacles (cars, pedestrians, etc.) can appear suddenly and thus can induce a random error in the pseudorange measurement. We address here the case where the noise probability density functions are of unknown functional form. A flexible Bayesian nonparametric noise model based on Dirichlet process mixtures (DPM) is introduced. Hence, the paper will contain two main parts. The first part focuses on the modelling of the pseudorange noises using DPMs and its suitability in the estimation problem handled by an efficient particle filter.
The other part contains interesting validation schemes.",2009,0, 7611,On-line detection of low-level arcing faults in metal-clad electrical apparatus,"In the history of metal-clad electrical apparatus, many equipment damages and burndown cases have been reported. The low-level electric arcing faults, ultimately causing high level arcs, lead to extreme pressure development in switchgear rooms causing severe explosions and burndowns. This paper discusses the development of a microprocessor based detection system, based on four different physical phenomena, for reliable and accurate detection of arcing faults in metal-clad electrical apparatus. Actual real-time arcing was generated on a dry type 15 kVA, 230/115V Y/Δ transformer enclosed in a 30×30×21 metallic enclosure. Some results showing the performance of the detection system are presented in the paper",2000,0, 7612,Fault Diagnosis System of Transformer Based on GAS Chromatography,"To a certain extent, the gas component and content dissolved in the transformer oil indicate the insulation aging and fault degree of the transformer. The fault diagnosis system of transformer based on gas chromatography applies the analysis method of gas chromatography to transform each of the concentrations into a corresponding electric signal. After data processing, the signal is transmitted to a workstation to form a chromatogram, and a fault diagnosis is made with grey relational theory. The system can detect the gas component, content, and speed of gas production. It can find out the potential fault in the transformer as soon as possible and forecast the development trend of fault to ensure safe and economical running of the transformer",2006,0, 7613,Efficient compensation for frequency-dependent errors in analog reconstruction filters used in IQ modulators,"In a direct-conversion transmitter system, the nonideal transfer characteristics of practical analog subsystems can have significant adverse effects on the performance of the transmitter system. While a significant body of research exists on correcting static errors (including gain imbalance, phase errors, and DC offsets) in the analog subsystems, relatively little attention has been paid to the impact of nonideal frequency-dependent characteristics in the analog reconstruction filters that are employed in direct-conversion structures. This paper presents a new computationally efficient and numerically robust method for automatic digital compensation for nonideal frequency-dependent characteristics in the signal reconstruction filters of a quadrature (IQ) modulator structure. The new compensation technique is used in conjunction with existing compensation approaches that compensate for static errors.",2005,0, 7614,Towards a Multi-agent Framework for Fault Tolerance and QoS Guarantee in P2P Networks,"In a distributed P2P (peer to peer) network, each computer is able to act as a server for the others. Collaboration and sharing resources are the main purpose of this distributed heterogeneous network. Users need to promptly access the vast amount of data and easily use other users' results. In other words, the processing ability is improved. In this paper, a novel model named FQ (fault tolerant and QoS aware) is proposed using multi-agent technology to respond to every information/QoS (quality of service) requirement. The best resources are selected for users according to communication cost and traffic. Intelligent agents cooperate with each other to perform the proper tasks in the distributed network.
We describe the agents' behaviour to support fault tolerance and QoS. This model is more flexible and more efficient in managing requests autonomously.",2008,0, 7615,A language-driven tool for fault injection in distributed systems,"In a network consisting of several thousand computers, the occurrence of faults is unavoidable. Being able to test the behavior of a distributed program in an environment where we can control the faults (such as the crash of a process) is an important feature that matters in the deployment of reliable programs. In this paper, we present FAIL (for FAult Injection Language), a language that makes it possible to elaborate complex fault scenarios in a simple way, while relieving the user from writing low level code. Besides, it is possible to construct probabilistic scenarios (for average quantitative tests) or deterministic and reproducible scenarios (for studying the application's behavior in particular cases). We also present FCI, the FAIL cluster implementation, which consists of a compiler, a runtime library and a middleware platform for software fault injection in distributed applications. FCI is able to interface with numerous programming languages without requiring the modification of their source code, and the preliminary tests that we conducted show that its effective impact at runtime is low.",2005,0, 7616,Investigation of observer-performance in MAP-EM reconstruction with anatomical priors and scatter correction for lesion detection in 67Ga images,"In a previous work, we showed that anatomical priors can improve lesion detection in simulated Ga67 images of the chest. We herein expand and enhance our previous investigations by adding scatter in the projections, by using the triple energy window scatter compensation method and by implementing a new scheme for image reconstruction. Phantom images are created using the SIMIND Monte Carlo simulation software and the mathematical cardiac-torso (MCAT) phantom. The anatomical data are the original, noise-free slices of the MCAT phantom. Images are reconstructed using the DePierro algorithm. Two weights for the prior are tested (0.005 and 0.02). The following reconstruction scheme is used to reach convergence: The 120 projections are reconstructed successively with 4, 8, 24, 60, and 120 projections per subset with 1,1,1,1, and 50 iterations respectively; the result of each reconstruction is used as an initial estimate for the next reconstruction. Several strategies were investigated: no anatomical prior information, and anatomical information for organs and/or lesion. Lesion detection was performed by a numerical observer with an LROC task. Strategies including anatomical priors yield better results in terms of lesion detection, as compared to the strategy using no prior and only post-reconstruction Gaussian smoothing.",2003,0, 7617,Duplicate bug reports considered harmful ... really?,"In a survey we found that most developers have experienced duplicated bug reports; however, only a few considered them a serious problem. This contradicts popular wisdom that considers bug duplicates as a serious problem for open source projects. In the survey, developers also pointed out that the additional information provided by duplicates helps to resolve bugs quicker. In this paper, we therefore propose to merge bug duplicates, rather than treating them separately. We quantify the amount of information that is added for developers and show that automatic triaging can be improved as well.
In addition, we discuss the different reasons why users submit duplicate bug reports in the first place.",2008,0, 7618,A fault-tolerant model of wireless sensor-actor network,"In a wireless sensor and actor network (WSAN), a group of sensors, actors, and actuation devices are geographically distributed and linked by wireless networks. Sensors gather information for an event occurring in the physical world and send it to actors. Actors can perform appropriate actions on actuation devices by making a decision on receipt of sensed information from sensors. Sensors are low cost, low powered devices with limited energy, computation, and wireless communication capabilities. Sensors may not only stop due to faults but also suffer from arbitrary faults. Furthermore, wireless communication is less reliable due to noise and shortage of power of sensors. Reliable realtime communication among sensors, actors, and actuation devices is required in WSAN applications. In order to realize reliability and realtimeness, we propose a new multi-actor/multi-sensor (MAMS) model where each sensor sends sensed information to multiple actors and each actor receives sensed information from multiple sensors in an event area. Actors are required to causally/totally order events from multiple sensors and actions on actuation devices. In addition, multiple actors may perform actions on receipt of sensed information. Multiple redundant executions of an action on each device have to be prevented and conflicting actions on each device from multiple actors have to be serialized. In this paper, we discuss how to realize reliable, ordered delivery of sensed information to actors from sensors on the basis of global time and how to reliably and non-redundantly perform actions with realtime constraints",2006,0, 7619,On extending aggregate relation in UML and its errors evaluation,"In an aggregate relation, there are properties between the whole object and the part object because the whole object manages and controls the part object. But UML does not provide support for these semantic properties in aggregate relations. In order to describe the real world clearly and correctly, UML is extended for their visual expression, and their error evaluation rules based on algebraic theory are presented.",2010,0, 7620,Aircraft orientation error correction based on image aided navigation paper,"Since the aircraft orientation error of an inertial navigation system increases with working time and error accumulation, an airborne image aided navigation system is combined with the inertial navigation system to form an integrated navigation system. To acquire high orientation accuracy and high matching speed, a new algorithm based on multi-scale edge detection and generalized point theory is put forward. At first a multi-scale edge detection algorithm is adopted to realize airborne image edge detection; then an image matching algorithm based on generalized point theory is used to eliminate the error accumulation of gyro drift and accelerometer error and to correct the aircraft pose error, increasing the navigation accuracy of the aircraft. To prove the algorithm is effective and robust, a simulation experiment is done.
The simulation experiment indicated that the aircraft orientation accuracy is better than that of the inertial navigation system alone, and the orientation error is less than 15 meters after airborne image aided navigation correction.",2010,0, 7621,Fault Diagnosis Technology for NAMP Based on BP Neural Network,"To address the insufficiency of the built-in test equipment (BITE) of the NAMP and of the ground fault diagnosis equipment, fault diagnosis theory and methods for a certain type of NAMP based on a BP neural network are investigated, and a fault diagnosis example is provided for a typical test item. The technology simplifies the structure of the fault diagnosis system, distinguishes the source of faults diagnosed by BITE more effectively, and isolates faults from the LRU level down to the SRU level.",2008,0, 7622,Plant-wide mass balance using extended support vector regression based data reconciliation and gross error detection,"In any modern petrochemical plant, the plant-wide mass data rendering the real conditions of manufacturing is the key to operation management tasks such as production planning, production scheduling and performance analysis. Given the characteristics of data reconciliation and gross error detection, these techniques are well suited to the plant-wide mass balance problem. In this paper, an extended support vector regression approach for data reconciliation and gross error detection is proposed to achieve plant-wide mass balance, which can simultaneously detect and estimate measurement errors and missing mass movement information. The simulation results demonstrate that the proposed approach is effective and accurate.",2010,0, 7623,Polygonal mesh simplification with face color and boundary edge preservation using quadric error metric,"In applications such as scientific and medical visualization, highly detailed polygonal meshes are needed. Rendering these polygonal meshes usually exceeds the capabilities of graphics hardware. To improve rendering efficiency and maintain proper interactivity, the polygonal mesh simplification technique is commonly used to reduce the number of polygons of the mesh and construct a multiresolution representation. We propose a new and simple constraint scheme based on the quadric error metric proposed by Garland and Heckbert (1997, 1998) to preserve face colors and boundary edges during the simplification process. In addition, our method generates progressive meshes that store the polygonal mesh in a continuous multi-resolution representation. According to our experimental results, this new method is successful in preserving face colors and boundary edges. Moreover, we compare the latency of resolution changes for progressive meshes of various models.",2002,0, 7624,The use GSM and web based SCADA for monitoring Fault Passage Indicators,"GSM communications and Web Based Telemetry have allowed Liander to deploy a large population of Fault Passage Indicators across their network, integrating the data with their Energy Management System, to effectively reduce the number of customer minutes lost.",2010,0, 7625,A new temporal error concealment method for H.264 using adaptive block sizes,"H.264 adopts new coding tools such as variable block size, quarter-pixel-accuracy motion estimation/compensation, multiple reference frames, loop filter, etc. The adoption of these tools enables a macroblock to carry more information than in previous standards.
In H.264 each macroblock can have up to sixteen motion vectors, four reference frames, and a macroblock mode, all of which can be used for temporal error concealment. The H.264 error concealment in the informative part, however, uses a method similar to prior coding standards that considers only the motion vectors of macroblocks adjacent to the lost macroblock. In this paper, we propose an effective temporal error concealment algorithm for H.264-coded video. It uses not only the motion vectors and reference frames but also the modes of macroblocks adjacent to the lost macroblock. Depending on the modes of neighboring macroblocks, each lost macroblock is concealed on the basis of different block sizes. Simulation results show the proposed method yields better video quality than conventional approaches.",2005,0, 7626,Joint explicit FMO map and error concealment for wireless video transmission,"H.264/AVC has a new error-resilient tool called flexible macroblock ordering (FMO). In this paper, we present a method to generate an explicit FMO map based on distortion, by simulating spatial and temporal error concealment at the encoder, and a new error concealment method at the decoder for wireless networks. The error concealment method at the decoder is applied according to the residual information derived from the distortion simulation at the encoder that generated the FMO map. Our simulation results indicate that our proposed method of explicit FMO map generation and error concealment could reduce the number of undecodable MBs and improve video quality over wireless networks.",2009,0, 7627,A Variable Printer Model in Tone-Dependent Error Diffusion Halftone,"Halftoning is a technique used by binary devices such as printers to simulate continuous-tone images. But the quality of halftone prints produced by printers can be limited by dot gain and dot-placement errors. In this paper, we propose a variable printer model which can be incorporated into tone-dependent error diffusion (TDED). First we analyze the characteristics of the test printer, and based on the experimental data we propose a variable inkjet printer model. With this printer model, the TDED algorithm shows better quality than the traditional one.",2008,0, 7628,A new hybrid fault detection technique for systems-on-a-chip,"Hardening SoCs against transient faults requires new techniques able to combine high fault detection capabilities with the usual requirements of the SoC design flow, e.g., reduced design time, low area overhead, and reduced (or no) accessibility to source core descriptions. This paper proposes a new hybrid approach which combines hardening software transformations with the introduction of an Infrastructure IP with reduced memory and performance overheads. The proposed approach targets faults affecting the memory elements storing both the code and the data, independently of their location (inside or outside the processor). Extensive experimental results, including comparisons with previous approaches, are reported, which allow practically evaluating the characteristics of the method in terms of fault detection capabilities and area, memory, and performance overheads.",2006,0, 7629,Complex reliability evaluation of voters for fault tolerant designs,"Hardware voters are bit voters computing a majority of n input bits. An m-out-of-n hardware bit voter is a circuit with n bit inputs and a 1-bit output y, such that y=1 if at least m out of the n input bits have the value 1.
A hardware voter can be constructed as a two-level AND-OR circuit (equivalently OR-AND and other structures) using CMOS VLSI technology. The goal of the paper is to present reliability estimations and failure modes, effects and criticality analysis (FMECA) of voting networks at the transistor level, in a CMOS VLSI implementation. FMECA is performed using the functional tree of the system, representing the data flow from the lowest-level functional block up to the higher-level functional blocks. The main idea of this research is to identify the best designs of voting circuits in terms of reliability parameters and to identify their critical failures and effects",2001,0, 7630,Suppression of vibration due to transmission error of harmonic drives using peak filter with acceleration feedback,"Harmonic drives are widely used in robotics and precision positioning systems because of their unique properties such as near-zero backlash and a high speed reduction ratio. However, they possess a periodic positioning error known as the transmission error, which produces a speed ripple on the gear output shaft. This is critical when the frequency of the speed variation coincides with the resonant frequency of the control system. This paper presents a speed control system to suppress the vibrations caused by the transmission error of harmonic drives. First, a new analysis scheme is proposed to analyze the effects of the transmission error. Next, a peak filter using load-side acceleration information is designed and added into the servo loop in parallel with the existing controller. Simulation and experimental results show that the proposed system regulates the load-side velocity satisfactorily and suppresses the vibration caused by the transmission error effectively.",2008,0, 7631,A fault tolerance approach for distributed systems using monitoring based replication,"High availability is a desired feature of a dependable distributed system. Replication is a well-known technique to achieve fault tolerance in distributed systems, thereby enhancing availability. We propose an approach relying on replication techniques and based on monitoring information to be applied in distributed systems for fault tolerance. Our approach uses both active and passive strategies to implement an optimistic replication protocol. Using a proxy to handle service calls and relying on service replication strategies, we effectively deal with the complexity and overhead issues. This paper presents an architecture for implementing the proxy based on monitoring data and the replication management. Experimentation and application testing using an implementation of the architecture is presented. The architecture is demonstrated to be a viable technique for increasing dependability in distributed systems.",2010,0, 7632,Identification of winding faults in electric machines using a high frequency method,"High frequency measurements are used more and more frequently as a diagnostic tool for the investigation of machine windings. The basis is measurements of the frequency dependence of the winding admittance. Therefore, relations between different kinds of winding faults and changes in the frequency dependence of winding admittances should be determined. Much research work in this field aims to determine the sensitivity of this method and to develop recognition criteria with regard to the type and range of faults.
This paper presents a comparison of admittance measurement results obtained on electric machine windings of different construction. The influence of turn-to-turn faults between adjacent winding wires on the admittance waveform has also been investigated.",2007,0, 7633,A New Method to Predict Software Defect Based on Rough Sets,"High quality software should have as few defects as possible. Many modeling techniques have been proposed and applied for software quality prediction. Software projects vary in size and complexity, programming languages, development processes, etc. We research the correlation of software metrics, focusing on the data sets of software defect prediction. A rough set model is presented in this paper to reduce the attributes of software defect prediction data sets. Experiments show its excellent performance.",2008,0, 7634,Research on Non-contact Measurement System for Grinding Surface Roundness Error,"High speed and high precision are the main research directions in the modern mechanical manufacturing field; more and more parts are manufactured by high speed grinding. Precision measurement instruments are of obvious importance for realizing zero-waste manufacturing. This requires research into test equipment for dynamic and quasi-dynamic measurement, and even its integration with manufacturing equipment. The paper presents a new type of measurement and control system using a PMAC controller as the core of measurement and motion control, applying WIN32 multithreading, a multimedia timer and double-buffering technology to acquire data from the PMAC registers and display it in real time. With real-time data acquisition, mass storage and display solved, the experiments indicate that cylindrical grinding machining and measurement can be integrated.",2010,0, 7635,Timed uniform consensus resilient to crash and timing faults,"Δ-timed uniform consensus is a stronger variant of the traditional consensus and it satisfies the following additional property: the correct process terminates its execution within a constant time (Δ-timeliness), and no two processes decide differently (uniform agreement)",2004,0, 7636,DECOUPLE: defect current detection in deep submicron IDDQ,"An IDDQ test concept for deep submicron (DSM) devices named DECOUPLE (Defect Current Observation Under the Proportion of intrinsic Leakage currents) is proposed. A new clustering method obtained two defect-free groups from a production data set by abstracting a signature of intrinsic leakage current that is independent of process variations. Possible pass/fail tests, diagnosis, and detection of parametric defect currents are discussed for the data set. Another example of the pass/fail tests on a second product is presented",2000,0, 7637,Content-Adaptive Motion Compensated Frequency Selective Extrapolation for error concealment in video communication,"If digital video data is transmitted over unreliable channels such as the internet or wireless terminals, the risk of severe image distortion due to transmission errors is ubiquitous. To cope with this, error concealment can be applied to the distorted data at the receiver. In this contribution we propose a novel spatio-temporal error concealment algorithm, the Content-Adaptive Motion Compensated Frequency Selective Extrapolation. The algorithm operates in two stages: first, the motion in a distorted sequence is estimated. After that, a model of the signal is generated for concealing the distortion. The novel algorithm is based on an existing error concealment algorithm.
But by adapting the model generation to the content of a sequence, the novel algorithm is able to exploit the remaining information still available in the distorted sequence more effectively than the original algorithm. In doing so, a visually noticeable gain of up to 0.51 dB PSNR compared to the underlying algorithm and more than 3 dB compared to other error concealment algorithms can be achieved.",2010,0, 7638,On discretization error of image spectrum restoration,"Image processing in discrete systems assumes the application of digital filtering methods and spectral analysis using the discrete (fast) Fourier transform. In many applied problems the error introduced by a processing method or a computing system is comparable with the error of the final result, so the question of the extent of the data discretization error is topical. The problem of determining the relative discretization error of the Fourier transform is solved in this paper. It is shown that for the considered types of real signals the discretization error decreases in inverse proportion to the squared number of discrete points. For some simple types of functions, exact expressions for the discrete spectrum and the corresponding approximate expressions for the discretization error are obtained for a large number of points.",2003,0, 7639,Stochastic approach to error estimation for image-guided robotic systems,"Image-guided surgical systems and surgical robots are primarily developed to provide patient safety through increased precision and minimal invasiveness. Moreover, robotic devices should allow for refined treatments that are not possible by other means. It is crucial to determine the accuracy of a system in order to define the expected overall task execution error. A major step toward this aim is to quantitatively analyze the effect of registration and tracking - a series of multiplications of erroneous homogeneous transformations. First, the currently used models and algorithms are introduced along with their limitations, and a new, probability-distribution-based method is described. The new approach has several advantages, as demonstrated in our simulations. Primarily, it determines the full 6-degree-of-freedom accuracy of the point of interest, allowing for the more accurate use of advanced application-oriented concepts, such as Virtual Fixtures. Furthermore, it becomes feasible to consider different surgical scenarios with varying weighting factors.",2010,0, 7640,Gamma camera PET with low energy collimators: characterization and correction of scatter,"Imaging of myocardial viability with fluorodeoxyglucose (FDG) is possible with positron emission tomography (PET) and with SPECT. The image resolution from SPECT is poor but the data is reported to provide clinical information comparable to PET. Studies have just begun to appear using gamma camera PET and either axial slat collimators or open frame graded absorbers to image myocardial viability. Alternatively, it may be possible to use standard low energy collimators when detecting coincidences. Although image quality may suffer, it may be possible to devise methods such that no clinical information is lost. Such an approach also paves the way for dual isotope sequential or simultaneous imaging of coincidence and single photons.
Here, we characterize the scatter fraction and scatter distribution of gamma camera PET with low energy collimators, and investigate the improvements possible with a convolution-subtraction scatter correction scheme. Monte Carlo simulations, line sources, a realistic phantom, and a human study were used. The scatter fraction was found to be almost identical to that obtained with axial slat collimators on a triple head gamma camera hybrid PET scanner. Images acquired with low energy collimators were degraded but still of good quality compared to acquisitions using axial collimation. The scatter correction scheme showed a degree of improvement over reconstructions without scatter correction. This approach is useful not only toward making sequential or simultaneous dual isotope imaging possible, but may also be useful to save time in a busy clinic that does both SPECT scans and cardiac FDG studies, since collimators would not need to be changed.",2002,0,5870 7641,Application of fault-tree analysis to troubleshooting the NASA GRC icing research tunnel,"In 1997, the NASA Glenn Research Center in Cleveland, Ohio conducted an extensive troubleshooting effort on its icing research wind tunnel (IRT). This effort utilized fault tree analysis as an analytical troubleshooting tool in order to attack a serious nonrepeatability problem with IRT icing cloud parameters. The analysis team investigated many problem areas that were indicated by the fault tree. The fault tree analysis was an important aid in the troubleshooting effort, which pulled in organizations needed to investigate the problem and caused a number of simultaneous corrective actions to occur. These actions were critical in restoring the repeatability of the icing cloud parameters even though no single root cause of the problem could be identified. In addition, the analysis team found 9 system-level considerations which were brought forward to improve tunnel operations. The possibility of implementing the recommendations was investigated by the IRT staff. We learned that the effectiveness of the analysis was greatly enhanced by a team approach, and by a critical team review of all fault tree sections constructed. Personnel investigated the possibility that certain fault events could exist even when it appeared that the likelihood of an event was small. Intermediate events or potential primary faults were not removed from the fault tree unless sufficient evidence was presented to do so. The IRT management also realized that the fault tree analysis report provided a baseline and history for the investigation of this nonrepeatability problem",2001,0, 7642,Specifications overview for counter mode of operation. Security aspects in case of faults,"In 2001, after a selection process, NIST added the counter mode of operation to be used with the advanced encryption standard (AES). In the NIST recommendation a standard incrementing function is defined for generation of the counter blocks which are encrypted for each plaintext block. The IPsec Internet draft (R. Housley et al., May 2003) and the ATM security specifications contain implementation specifications for the counter mode standard incrementing function. In this paper we present those specifications. We analyze the probability of revealing useful information in the case of faults in the standard incrementing function described in the NIST recommendation. The confidentiality of the mode can be compromised with the fault model presented in this paper.
We recommend another solution to be used in the generation of the standard incrementing function in the context of the counter mode.",2004,0, 7643,"Online design bug detection: RTL analysis, flexible mechanisms, and evaluation","Higher levels of resource integration and the addition of new features in modern multi-processors put significant pressure on their verification. Although a large amount of resources and time are devoted to the verification phase of modern processors, many design bugs escape the verification process and slip into processors operating in the field. These design bugs often lead to lower quality products, lower customer satisfaction, diminished brand/company reputation, or even expensive product recalls. This paper proposes a flexible, low-overhead mechanism to detect the occurrence of design bugs during on-line operation. First, we analyze the actual design bugs found and fixed in a commercial chip-multiprocessor, Sun's OpenSPARC T1, to understand the behavior and characteristics of design bugs. Our RTL analysis of design bugs shows that the number of signals that need to be monitored to detect design bugs is significantly larger than suggested by previous studies that analyzed design bugs at a higher level using processor errata sheets. Second, based on the insights obtained from our analyses, we propose a programmable, distributed online design bug detection mechanism that incorporates the monitoring of bugs into the flip-flops of the design. The key contribution of our mechanism is its ability to monitor all control signals in the design rather than a set of signals selected at design time. As a result, it is very flexible: when a bug is discovered after the processor is shipped, it can be detected by monitoring the set of control signals that trigger the design bug. We develop an RTL prototype implementation of our mechanism on the OpenSPARC T1 chip multiprocessor. We found its area overhead to be 10% and its power consumption overhead to be 3.5% over the whole OpenSPARC T1 chip.",2008,0, 7644,Applying FIRMAMENT to test the SCTP communication protocol under network faults,"How to apply a fault injector to evaluate the dependability of a network protocol implementation is the main focus of this paper. In recent years, we have been developing FIRMAMENT, a tool to inject faults directly into messages that pass through the kernel protocol stack. SCTP is a promising new protocol over IP that, due to its enhanced reliability, is competing with TCP where dependability has to be guaranteed. Using FIRMAMENT we evaluate the error coverage and the performance degradation of SCTP under faults. Performing a complete fault injection campaign on third-party software gives us deep insight into the additional test strategies that are needed to reach significant dependability measures.",2009,0, 7645,Fault tolerance based on the publish-subscribe paradigm for the BonjourGrid middleware,"How can the machines of all Boinc, Condor and XtremWeb projects be federated? If you believe in volunteer computing and want to share more than one project, then BonjourGrid may help. In previous works, we proposed a novel approach, called BonjourGrid, to orchestrate multiple instances of Institutional Desktop Grid middleware. It is our way to remove the risk of bottlenecks and failures, and to guarantee the continuity of services in a distributed manner. Indeed, BonjourGrid can create a specific environment for each user based on a given computing system of his choice such as XtremWeb, Condor or Boinc.
This work investigates, first, the procedure to deploy Boinc and Condor on top of BonjourGrid and, second, proposes a fault-tolerant approach based on passive replication and virtualization to tolerate the crash of coordinators. The novelty here resides in an integrated environment based on Bonjour (a publish-subscribe mechanism) both for the coordination protocol and for the fault tolerance issues. In particular, to our knowledge it is not common to describe and implement a fault-tolerant protocol according to the pub-sub paradigm. Experiments conducted on the Grid'5000 testbed illustrate a comparative study between Boinc (respectively Condor) on top of BonjourGrid and a centralized system using Boinc (respectively Condor) and, second, prove the robustness of the fault-tolerant mechanism.",2010,0, 7646,Fault prediction of boilers with fuzzy mathematics and RBF neural network,"How to predict potential faults of a boiler in an efficient and scientific way is very important. A lot of comprehensive research has been done, and promising results have been obtained, especially regarding the application of intelligent software. Still, there are a lot of problems to be studied. The proposed approach combines fuzzy mathematics with an RBF neural network in an intuitive and natural way. Thus a new method is proposed for the prediction of the potential faults of a coal-fired boiler. The new method traces the development trend of related operation and state variables. The new method has been tested on a simulation machine, and its predicted results were compared with traditional statistical results. It is found that the new method has good performance.",2005,0, 7647,Fast error recovery algorithm for Internet video using feedback channel and multi-reference frame,"Hybrid compressed video is very vulnerable to packet loss when transmitted over the Internet. A feedback channel is very useful to combat error propagation. Multiple reference frames are another useful tool to enhance both coding efficiency and robustness. In this paper, we propose an algorithm which combines a feedback channel and multiple reference frames. Both theoretical analysis and experimental results show that our scheme can best utilize the feedback message to recover rapidly from packet loss. No additional delay is incurred and no coding efficiency is sacrificed.",2003,0, 7648,Research on Fault Tolerance in Hybrid P2P-based Collaborative Systems,"The hybrid P2P technique has recently become more and more popular for collaborative systems. However, reliability is an issue of major importance at the server end of these systems. A TMR technique for Collaborative Systems (CS-TMR) is presented in this paper to improve server reliability. This software schema incorporates three homogeneous microcomputers and provides the fault-tolerant function through package interfaces to server applications. As it is COS-based, the method is more general-purpose, and programmers need not pay too much attention to the fault tolerance technology in detail. This method helps the collaborative server work in normal and degraded (duple or even single modular) modes, and can tolerate transient or permanent faults. Meanwhile, due to the importance of the server in a hybrid P2P system, server application upgrade is impossible without stopping the server.
A novel seamless software upgrade method is put forward through intelligent state-transition control in the CS-TMR package to reduce the cost of software upgrades.",2007,0, 7649,Identification of the faulted line using controlled short circuit of PT delta-connected winding,"For ungrounded and resonant grounded systems, it is very difficult to identify the line experiencing a single-phase to ground fault. This is due to the fact that such grounding schemes produce very small fault currents. This paper presents a novel method that can overcome the difficulties. The idea is to convert the ungrounded system into a grounded system temporarily through a controlled short circuit of the distribution bus potential transformer (PT) delta-connected winding. The result is a controllable ground fault current that is large enough for identifying the faulted line and yet small enough not to cause power quality problems. Results of theoretical analysis and computer simulation have confirmed the effectiveness of the proposed method.",2009,0, 7650,Modeling and instrumentation for fault detection and isolation of a cooling system,"Functional redundancy techniques for fault detection and isolation (FDI) in dynamic systems require close interaction between system instrumentation, modeling and analysis. Effective FDI requires detailed and accurate models to track and analyze system behavior, including transient phenomena that result from faults. It also requires appropriate instrumentation technology to provide the measurements to capture and analyze system behavior. Models and measurements must be matched carefully to provide sufficient observability and effective analysis. In this paper we demonstrate the development of FDI systems for complex applications by presenting the modeling and instrumentation of an automobile combustion engine cooling system. We have developed a qualitative parameter estimation methodology for FDI. A system model is represented as a graph that captures the dynamic behavior of the system. To demonstrate the applicability, a small leak is artificially introduced in the cooling system and accurately detected and isolated",2000,0, 7651,Fault Tolerant Permanent Magnet Motor Drive Topologies for Automotive X-By-Wire Systems,"Future automobiles will be equipped with by-wire systems to improve reliability, safety and performance. The fault tolerant capability of these systems is crucial due to their safety critical nature. Three fault tolerant inverter topologies for permanent magnet brushless dc motor drives suitable for automotive x-by-wire systems are analyzed. A figure of merit taking into account both cost and post-fault performance is developed for these drives. Simulation results of the two most promising topologies for various inverter faults are presented. The drive topology with the highest post-fault performance and cost effectiveness is built and evaluated experimentally.",2008,0, 7652,Fault Tolerance Mechanisms for SLA-aware Resource Management,"Future grid systems will demand properties like runtime responsibility, predictability, and a guaranteed service quality level. In this context, service level agreements have central importance. Many ongoing research projects already focus on the realization of the required mechanisms at the grid middleware layer. However, concentrating only on grid middleware is not enough. The underlying resource management systems also have to provide an increased QoS level, since they provide their resources to grid environments.
The EU-funded project HPC4U aims at realizing an SLA-aware resource management system. It allows the grid user to negotiate SLAs, assuring adherence to agreed SLAs by means of application-transparent checkpointing, snapshotting, and migration",2005,0, 7653,Defect-tolerant FPGA switch block and connection block with fine-grain redundancy for yield enhancement,"Future process nodes have such small feature sizes that there will be an increase in the number of manufacturing defects per die. For large FPGAs, it will be critical to tolerate multiple defects (Campregher et al., 2005). We propose a number of changes to the detailed routing architecture of island-style FPGAs to tolerate multiple random, distributed interconnect defects without re-routing and with minimal impact on signal timing. Our scheme is a user option prebuilt into an architecture, requiring +11% area for additional multiplexers. Unused (spare) wiring tracks are also needed, bringing the total overhead to 24% to tolerate stuck-at or open faults, or 34% to include bridging. User circuits that do not fully stress the routing network already have these tracks freely available. The delay penalty is programmable: 5-10% if defect rates are expected to be sufficiently low, but can be as high as 25% if defect rates are high. Our schemes can tolerate more than 10 interconnect defects for large array sizes of 128 × 128. Unlike row/column redundancy schemes, our schemes are scalable: they naturally tolerate more defects as the FPGA array size increases. This work is the first detailed analysis of fine-grained defect-tolerant schemes in FPGAs.",2005,0, 7654,REESE: a method of soft error detection in microprocessors,"Future reliability of general-purpose processors (GPPs) is threatened by a combination of shrinking transistor size, higher clock rates, reduced supply voltages, and other factors. It is predicted that the occurrence of arbitrary transient faults, or soft errors, will dramatically increase as these trends continue. The authors develop and evaluate a fault-tolerant microprocessor architecture that detects soft errors in its own data pipeline. This architecture accomplishes soft error detection through time redundancy, while requiring little execution time overhead. Our approach, called REESE (REdundant Execution using Spare Elements), first minimizes this overhead and then decreases it even further by strategically adding a small number of functional units to the pipeline. This differs from similar approaches in the past that have not addressed ways of reducing the overhead necessary to implement time redundancy in GPPs.",2001,0, 7655,GOF analysis for Gaussianity assumption of range errors in WSN,"The Gaussianity assumption is prevalent and fundamental to many statistical theories and engineering applications. Range measurement errors are generally assumed to follow a Gaussian distribution. We wish to analyze the hypothesis that Xi ~ N(μ, σ²) for i = 0, 1, ..., n - 1, where n is the total number of range measurements X. To scrutinize this hypothesis, instead of relying on artificially generated random variables, real-time ranging data is obtained from experiments using IEEE 802.15.4 compliant devices, covering outdoor/indoor environments with both Line-of-Sight (LOS) and Non-Line-of-Sight (NLOS) conditions. The distribution of range measurements is analyzed using four goodness-of-fit (GOF) tests, i.e. the graphical technique, the linear correlation coefficient (ρ), Anderson-Darling (A²), and Chi-squared (χ²).
It is observed that the majority of the outcomes are the same in all the tests, with a high percentage disagreeing with the assumption.",2010,0, 7656,Geolocation Error Analysis of the Special Sensor Microwave Imager/Sounder,"Geolocation errors in excess of 20-30 km have been observed in the special sensor microwave imager/sounder (F-16 SSMIS) radiometer observations when compared with accurate global shoreline databases. Potential error sources include angular misalignment of the sensor spin axis with the spacecraft zenith, sample timing offsets, nonuniform spin rate, antenna deployment offsets, spacecraft ephemeris, and approximations of the geolocation algorithm in the Ground Data Processing Software. An analysis methodology is presented to automate the process of quantifying the geolocation errors rapidly in terms of partial derivatives of the radiometer data in the along-scan and along-track directions and is applied to the SSMIS data. Angular and time offsets are derived for SSMIS that reveal that the root cause(s) of the geolocation errors, while yet unresolved, are systematic, correctable in the ground processing software, and may be reduced to less than 4-5 km (1-sigma).",2008,0, 7657,"HGRID: Fault Tolerant, Log2N Resource Management for Grids","The grid resource discovery service is currently a very important focus of research. We propose a scheme that presents essential characteristics for efficient, self-configuring and fault-tolerant resource discovery and is able to handle dynamic attributes, such as memory capacity. Our approach consists of an overlay network with a hypercube topology connecting the grid nodes and a scalable, fault-tolerant, self-configuring search algorithm. By design, the algorithm improves the probability of reaching all working nodes in the system even in the presence of failures (inaccessible, crashed or heavily loaded nodes). We analyze the static resilience of the presented approach, that is to say, how well the algorithm is able to find resources without having to update the routing tables. The results show that the presented approach has a high static resilience.",2009,0, 7658,Distributed Fault Management for Computational Grids,"Grid resources, having heterogeneous architectures and being geographically distributed and interconnected via unreliable network media, are at risk of failure. The grid environment consists of unreliable resources; therefore, fault tolerance mechanisms cannot be ignored. Some scientific jobs require long commitments of grid resources, whose failures may not be overlooked. We need flexible management of these failures that also considers the failure of the fault manager itself. In this paper we propose the concept of distributed management of failures without engaging resources exclusively for this particular task. Resources performing the fault management may also participate in serving the long-running user jobs. Each sub-job of the main user job is inspected by an individual resource. In case of failure, the inspector resource takes over in place of the inspected resource. The contributions of this paper are the elimination of a single point of failure and the proposed concept's ability to be integrated with a variety of grid middleware",2006,0, 7659,A method of calibrating fixing errors based on simplified ant colony algorithm,"Calibration of the fixing positioning errors of a payload will help to improve the payload's performance.
Several kinds of error correction methods and their characteristics are reviewed, and a new fixing positioning error correction method for the photoelectric payload of an aerial mobile platform is proposed based on a simplified ant colony system. In simulations of both minor and major errors, rapid convergence and high accuracy are achieved, which avoids the limitation of previous methods of being effective only for minor fixing positioning errors. Furthermore, the new method is tested and validated with practical data. In this way, the new method will help to solve the practical fixing positioning error calibration problem.",2009,0, 7660,Spherical Near-Field Antenna Measurements: A Review of Correction Techniques,"Following an introductory review of spherical near-field scanning measurements, with emphasis on the general applicability of the technique, we present a survey of the various methods to improve measurement accuracy by correcting the acquired data before performing the transform and by special processing of the resulting data following the transform. A post-processing technique recently receiving additional attention is the IsoFilter technique that assists in suppressing extraneous stray signals due to scattering from antenna range apparatus.",2007,0, 7661,The Study of Fault Location and Safety Control for the Mine Ventilating Fan,"For highly gassy mines, a fault location method based on rough set theory for the mine ventilator is presented, in which fault attributes are divided into ten kinds and decision tables of blocking partitions are established; the rough set core of every decision table is obtained through reduction; the blocking tables with dissimilar fault kinds are combined into a decision table; then the minimum decision set of the table is further deduced using synthesis analysis. Based on this minimum decision set, an automatic fault location and safety control device based on a C51 single-chip computer is built for the mine local ventilator; the efficiency of the device is proved, and the result in the engineering experiment is as good as the test. The number of decision rules is 5 percent of the total rule number of the normal rough set",2006,0, 7662,Simulation of a doubly-fed induction machine for wind turbine generator fault analysis,"For modern large wind farms, it is more and more interesting to design an efficient diagnostics system oriented to wind turbine generators based on the doubly-fed induction machine (DFIM). In this paper, a complete system will be analyzed by suitable simulations to deeply study fault influence and to identify the best diagnostic procedure to perform predictive maintenance. All the research efforts have been developed on different signature analyses (XSA) in order to detect or to predict electrical and mechanical faults in wound-rotor induction machines. They will be applied to wind turbine generators and their effectiveness will be studied.",2005,0, 7663,Improving Precision Using Mixed-level Fault Diagnosis,"For nanometer fabrication processes, it is critical to narrow down the defect location for successful physical failure analysis. This paper presents a mixed-level diagnosis technique, which first performs diagnosis at the logic level, and then performs switch-level analysis to locate a defect at the transistor level. An efficient single-pass mixed-mode diagnosis flow is proposed to isolate defects within a cell.
Experimental results showed significant improvement in precision over traditional logic diagnosis with only a fractional increase in run-time. The proposed mixed-level diagnosis technique was applied to successfully isolate silicon defects",2006,0, 7664,A Time-Triggered Communication Protocol for CAN-based Networks with a Fault-Tolerant Star Topology,"For nearly two decades, the Controller Area Network (CAN) protocol has been used in the development of distributed embedded systems. Studies previously carried out have illustrated how various Shared-Clock (S-C) algorithms can be used with CAN-based microcontrollers to implement time-triggered network architectures. In these previous studies, it has been assumed that the underlying network topology is a bus; in the present paper, we explore some implications of working with a fault-tolerant star-based network topology and a modified S-C algorithm. A representative case study is employed and quantitative comparisons are made between equivalent bus- and star-based systems.",2010,0, 7665,Active fault detection in nonlinear systems using auxiliary signals,"In recent years, active approaches for fault detection using test signals have been developed. This paper reports on progress in extending one of these approaches from linear systems to nonlinear systems. Theoretical results are presented on the use of linearizations, which is based on sufficiently small nonlinearities. An optimization-based approach is also presented for large nonlinearities. Examples are given.",2008,0, 7666,Reduction of defects caused by chemical mechanical polishing of oxide surfaces and contamination of the wafer bevel,"In recent years chemical mechanical polishing has become the most relevant planarization technique applied for technologies with structures below 0.35 μm. During the CMP process, slurry ingredients like abrasive particles and additives, as well as the polishing pad, are continuously in direct contact with the wafer. Therefore CMP is also known as a source of critical defects like microscratches, slurry particles and other surface contaminations on the wafer. This paper describes CMP process and hardware improvements that were implemented in a continuous defect reduction project focussing on oxide CMP processes in the BEOL.",2009,0, 7667,Investigation of the effects of transmission faults upon a renewable energy generating plant,"In recent years the number of renewable energy generators connected to Ireland's electricity grid has steadily increased. The Republic of Ireland is now expected to source 13.2% of the electricity it consumes from renewables by 2010, which represents a significant challenge to the electricity system operators and planners. This paper describes the modelling and simulation of a small hybrid wind/hydro generating plant connected to the distribution network. The effects upon the plant of transmission network faults and continuous voltage unbalance are investigated.",2005,0, 7668,Control Mechanisms of Perceived Phase Error on Synchronized Tapping Toward Communication Support Systems based on Psychological Timing Control,"In recent years, information technology has rapidly increased the opportunities for human communication. However, communication support systems, such as tele-presence and CSCW, have mainly been developed based on physical interaction rather than psychological interaction. In this article, we analyze the psychological interaction mechanism of human timing control using a synchronization tapping task.
In particular, we focused on phase error correction, and the following two facts were found: 1) perceived timing is not coincident with physical timing; and 2) perceived timing is controlled by two types of phase error correction mechanisms, one of which requires attentional resources while the other does not. These results suggest that timing control for a human communication support system should be based on the user's perceived timing, and such a psychological timing control mechanism is useful for realizing human-like communication systems",2006,0, 7669,A New Neural Network Approach to Machine Tool Thermally Induced Error,"In recent years, neural network methods with different architectures and training strategies have been widely used in the field of machine tool thermal error compensation, but there are still many problems such as low model accuracy, long training time and poor generalization ability. An integrated neural network classifier is proposed in this paper for compensation of thermal error. The investigation shows that the proposed method has higher classification precision and reliability, and is an ideal pattern classifier. Real cutting experiments are conducted on a CNC turning machine to validate the effectiveness of the method. Both simulation and experiment indicate that the proposed method is quite effective and generally applicable.",2009,0, 7670,Programming of PIC Micro-Controller for Power Factor Correction,"In recent years, the power quality of the AC system has become a great concern due to the rapidly increasing numbers of electronic equipment, power electronics and high voltage power systems. In order to reduce harmonic contamination in power lines and improve transmission efficiency, power factor correction research became a hot topic. Many control methods for power factor correction (PFC) have been proposed. This paper describes the design and development of a three-phase power factor corrector using a PIC (programmable interface circuit) micro-controller chip. This involves sensing and measuring the power factor value of the load using the PIC and sensors, then using a proper algorithm to determine and trigger sufficient switching capacitors in order to compensate the excessive reactive components, thus bringing the PF near to unity; as a result, higher efficiency and better quality AC output are acquired. Various power factor correction methods will also be discussed with regard to their applications in specific sections",2007,0, 7671,Differential distinguishing attack on the Shannon stream cipher based on fault analysis,"In a previous reference, some weak points in the design of the Shannon stream cipher and a differential distinguisher with complexity of O(2^14.92) keystream bits (i.e. O(2^9.92) keystream words) were presented. Another distinguishing attack based on a multidimensional linear transformation was presented, which requires 2^106.996 keystream words. Both of these attacks need access to the initial state, which is unlikely. In this paper, a likely attack using the fault analysis method is exploited to solve the mentioned problem. Additionally, a new distinguisher is proposed which improves the attack complexity to four times the complexity of running the Shannon stream cipher.
Only two differential outputs are needed for a successful attack with an error probability equal to 0.001.",2008,0, 7672,Effect on Earth Fault Detection Based on Energy Function Caused by Imbalance of Three-Phase Earth Capacitance in Resonant Grounded System,"In a resonant grounded system, when a single-phase earth fault occurs, because the arc suppression coil cannot compensate the active component of the zero-sequence grounding current passing through the fault point, the zero-sequence active current passing through the fault line is the sum of those passing through the non-fault lines. Therefore, in theory, the zero-sequence active current passing through the fault line is greater than the others in absolute value. The energy function is the integral of the product of the zero-sequence voltage and the zero-sequence current. The fault line's energy function value is negative, while those of the non-fault lines are positive. If the three-phase earth capacitance is imbalanced, a zero-sequence voltage arises, which will produce a zero-sequence current passing through all the lines. This zero-sequence current is the key variable that influences the phase angle between the non-fault zero-sequence current and the zero-sequence voltage. This paper analyzes these influences under different compensation conditions in theory. Simulation results are presented to demonstrate that the validity of earth fault line detection based on the energy function becomes worse and worse with increasing earth resistance when a single-phase earth fault occurs on the imbalanced phase, and indicate that the energy function has its validity scope.",2006,0, 7673,Reliability Prediction Method Based on Function and Fault Reasoning for Electronic Products,"In view of the lack of a suitable method of mission reliability prediction for complex products, this paper puts forward a method of mission reliability prediction for circuits based on function simulation and fault reasoning. It builds the topology reasoning model and function simulation model of the circuit with the aid of existing mature EDA (Electronic Design Automation) simulation tools, simulates and analyzes hard faults and soft faults of the components composing the circuit separately, and obtains the response of the circuit to component failures. Then the idea of Monte Carlo simulation is used to carry out statistical analysis of the fault data and calculate the circuit mission reliability. This paper presents the basic idea of the method and expatiates on the principles and techniques of circuit functional fault simulation and circuit fault reasoning. Finally, a representative case is analyzed using the method. With this method there is no need to build a product reliability model. Automatic reliability prediction can be realized at the same time as circuit design, which provides a huge advantage to circuit designers. This paper makes exploratory research on the method of integrated prediction of performance and reliability, and it is a strong complement to existing reliability prediction methods.",2010,0, 7674,A Study of Modified Testing-Based Fault Localization Method,"In software development and maintenance, locating faults is generally a complex and time-consuming process. In order to effectively identify the locations of program faults, several approaches have been proposed. Similarity-aware fault localization (SAFL) is a testing-based fault localization method that utilizes testing information to calculate the suspicion probability of each statement. Dicing is another method that we have used.
In this paper, our proposed method focuses on predicates and their influence, instead of on statements as in traditional SAFL. In our method, fuzzy theory, matrix calculation, and some probability are used. Our method detects the importance of each predicate and then provides more test data for programmers to analyze the fault locations. Furthermore, programmers will also gain some important information about the program in order to maintain it accordingly. To improve efficiency, we also simplified the program. We performed an experimental study on several programs, together with another two testing-based fault localization (TBFL) approaches. These three methods were discussed in terms of different criteria such as lines of code and suspicious code coverage. The experimental results show that the proposed method can decrease the number of code lines that have a higher probability of suspicion than the real bugs.",2008,0, 7675,Runtime repair of software faults using event-driven monitoring,"In software with emergent properties, despite the best efforts to remove faults before execution, there is a high likelihood that faults will occur during runtime. These faults can lead to unacceptable program behavior during execution, even leading to the program terminating unexpectedly. Using a distributed event-driven runtime software-fault monitor to repair faulty states creates an enforceable runtime specification. Using such an architecture can help ensure that emergent systems operate within specification, increasing the reliability of such software.",2010,0, 7676,Application of Fault Tree Knowledge in Reasoning of Safety Risk Assessment Expert System in Petrochemical Industry,"In studying the safety risk assessment expert system for the petrochemical industry, the concept of a fault tree based on binary decision diagrams (BDD) is proposed for fault tree analysis. Taking a boiler fouling explosion as an example, the fault tree model and BDD model are built. On the basis of the fault tree model and BDD model, this paper focuses on the knowledge representation (KR) of the BDD-based fault tree in the safety risk assessment expert system and the fault tree knowledge based reasoning mechanism.",2009,0, 7677,Exact mathematical solution for the principal field emission correction function used in Standard Fowler-Nordheim theory,"In summary, this work puts the mathematics of the Schottky-Nordheim barrier onto a more complete and fully respectable basis. It also opens the way to new developments.",2007,0, 7678,Investigations on fault tolerant clock synchronization within a powerline communication structure,"In modern powerline communication (PLC) systems, clock synchronization is a very crucial issue. First, the PLC network itself needs synchronized clocks for controlling the time-sliced communication; second, backbone networks and access points also have to be coordinated in a fault-tolerant fashion in order to ensure fast log-on and log-off of nodes travelling from one access point to another. This paper presents an approach to synchronize clocks in such a system with special support of IEEE 1588 compliant master groups. In the lower levels of the hierarchical system, attention has to be paid to the special behaviour of the PLC network. To tackle this, a methodology to use the IEEE 1588 format and protocol stack is presented.
Finally, measurements of the behaviour of the clock quality are analysed for both Ethernet and PLC by evaluating the Allan deviation",2006,0, 7679,Some applications of fault isolation and diagnosis based on Labview,"In modern technologies, virtual instrument techniques can greatly help the fault diagnosis of the gear box in the sense that they can share hardware and software resources and comprise a self-test system. In this paper, LabVIEW was first used to set up the data acquisition and analysis platform. The data pre-processing software, including resonance demodulation and synchronous averaging in the time domain, was designed to provide analyzed data for the post-diagnosis. Then a BP neural network model was utilized for the fault diagnosis. The experimental results showed that the combination of these two processing methods can help reject noise signals, thus exposing the fault attributes and greatly reducing false diagnoses.",2010,0, 7680,Error Estimation for the Computation of Force Using the Virtual Work Method on Finite Element Models,"In most finite element analyses, the numerical error range of the computed force must be specified. In this paper, an error estimation method for force computation using the virtual work method is presented. The merits of the proposed method include its simplicity and its applicability in force computation when the objects being studied are touching other objects.",2009,0, 7681,High-level address exploration for an error correction optical disk subsystem: improving design productivity,"In most real-time multimedia systems, major design bottlenecks are related to memory organisation and management related issues. Usually, in order to meet design constraints at low cost, designers of the memory management units directly think about low-level (register-transfer level) implementation aspects of the architecture. However, this approach typically results in time-consuming global design iterations. In contrast, we propose a high-level design exploration methodology where design bottlenecks related especially to the address generation issues are exposed as early as possible in the design trajectory, hence reducing design time considerably. The effectiveness of our approach is illustrated on the memory processor of an optical disk error correction application. Using our approach, a substantial part of the design search space has been explored in a few man-weeks instead of the months of effort required by more conventional design flows. Furthermore, not only is design time dramatically improved but also implementation cost and system performance, hence resulting in a highly productive design flow",2000,0, 7682,Application method of wavelets in the fault diagnosis of motion system,"In motion systems, many types of faults are related to abnormalities in the torque signal. A method based on wavelet functions is presented. Compactly supported orthonormal wavelets are introduced, with which faults can be detected and the type of fault diagnosed as well. For convenience of use, a flow chart of the method is also provided. Collision experiments were implemented on an X-Y motion control system to test the given method.
Using the sampled torque signals, the practicability was verified by the clear diagnosis of different types of collisions.",2008,0, 7683,Maximum-likelihood versus maximum a posteriori based local illumination and color correction algorithm for multi-view video,"In multi-view video, illumination and color inconsistency among different views always exist because of imperfect camera calibration, CCD noise, camera positions and orientations, etc. Since illumination and color inconsistency greatly reduce the coding efficiency and rendering quality of multiview video, effective illumination and color correction modules are necessary for a practical multi-view video processing system. In this paper, we propose two local illumination and color correction algorithms. In these two algorithms, the correction matrix is estimated by applying maximum likelihood (ML) and maximum a posteriori (MAP) methods, respectively. According to the Bayes rule, the MAP estimate is determined by two terms: an error conditional density model (likelihood model) and a prior conditional density model. Experimental results show that both the ML and MAP based correction matrices greatly improve the illumination and color consistency among different views. Moreover, images corrected by the MAP based correction matrix look much nicer than those corrected by the ML based correction matrix.",2009,0, 7684,Predicting Eclipse Bug Lifetimes,"In non-trivial software development projects, planning and allocation of resources is an important and difficult task. Estimation of the work time to fix a bug is commonly used to support this process. This research explores the viability of using data mining tools to predict the time to fix a bug given only the basic information known at the beginning of a bug's lifetime. To address this question, a historical portion of the Eclipse Bugzilla database is used for modeling and predicting bug lifetimes. A bug history transformation process is described and several data mining models are built and tested. Interesting behaviours derived from the models are documented. The models can correctly classify up to 34.9% of the bugs into a discretized log-scaled lifetime class.",2007,0, 7685,SOA-Based Alarm Integration for Multi-Technology Network Fault Management,"In order for the service provider to offer better quality of telecom services to customers, one of the possible ways is to monitor and control all kinds of deployed network resources, which are used to support the operation of these services, and to proactively analyze and recover any trouble reported from the network resources. In this study, two system integration scenarios along with the associated interface specifications for multi-technology resource alarm notification and retrieval services have been described. An NGOSS-based development methodology was followed, and a number of useful commercial tools were introduced to facilitate the evolution and transformation of legacy BSS/OSS, so that these BSS/OSS are able to support loosely-coupled interoperability by using standards-based interfaces based on the service-oriented architecture. The functionalities of multi-technology network fault management were realized by means of JMS and Web Service techniques.
The implementation described in this paper shows the feasibility of the proposed development methodology.",2008,0, 7686,Design and Reduction of UML-PN Models of Power Plant's Fault Management System,"In order to analyze models of fault management systems in power plants, a method combining software engineering techniques and discrete event dynamic system (DEDS) models was put forward in this paper. A fault detection system was taken as an example to set up the models. Firstly, the UML charts and corresponding Petri nets were constructed. Then, in order to optimize the model, reduction rules were employed. In the end, the temporal logic of temporal Petri nets was used to illustrate the exterior functionality of the reduced net. The results indicate that the reduced net retains the functionality, and is more effective in analysis.",2009,0, 7687,Executable assertions for detecting data errors in embedded control systems,"In order to be able to tolerate the effects of faults, we must first detect the symptoms of faults, i.e. the errors. This paper evaluates the error detection properties of an error detection scheme based on the concept of executable assertions aiming to detect data errors in internal signals. The mechanisms are evaluated using error injection experiments in an embedded control system. The results show that using the mechanisms allows one to obtain a fairly high detection probability for errors in the areas monitored by the mechanisms. The overall detection probability for errors injected into the monitored signals was 74%, and if only errors causing failure are taken into account we have a detection probability of over 99%. When subjecting the target system to random error injections in the memory areas of the application, i.e., not only the monitored signals, the detection probability for errors that cause failure was 81%",2000,0, 7688,A Temporal-domain Error Concealment Algorithm Effective in Motion Boundary Improvement,"In order to ease the motion boundary problem in temporal-domain algorithms, this paper proposes to utilize the luminance of the macroblocks (MBs) around the lost MB to obtain more precise and dense motion vectors; a rational function is used in the motion vector interpolation. Abrupt points in the vector field are detected as motion boundaries, and the missing motion vectors are interpolated along the direction of the boundary through a final matching algorithm. Experimental results show that the algorithm performs better.",2009,0, 7689,Fault Injection Scheme for Embedded Systems at Machine Code Level and Verification,"In order to evaluate software from third parties whose source code is not available, after a careful analysis of the statistical data sorted by orthogonal defect classification and the corresponding relation between patterns of high-level language programs and machine code, we propose a fault injection scheme at the machine code level suitable for the IA32, ARM and MIPS architectures, which takes advantage of mutating machine code.
To prove the feasibility and validity of this scheme, two sets of programs are chosen as our experimental targets: Set I consists of two different versions of triangle testing algorithms, and Set II is a subset of MiBench, which is a collection of performance benchmark programs designed for embedded systems; we inject both high-level faults into the source code written in C language and the corresponding machine-code-level faults directly into the executables, and monitor their running on Linux. The results from the experiments show that a total similarity degree of at least 96% is obtained. We therefore conclude that the effects of injecting corresponding faults at the source code level and at the machine code level are mostly the same, and that our scheme is rather useful in analyzing system behavior under faults.",2009,0, 7690,Application of pre-function information in software testing based on defect patterns,"In order to improve the precision of software static testing based on defect patterns, inter-function information was extended and applied in software static testing. Pre-function information includes two parts, the effect of the context on the invoked function and the constraint of the invoked function on the context, which can be used to detect common defects, such as null pointer defects, uninitialized-variable defects, dangling pointer defects, illegal operation defects, out-of-bounds defects and so on. Experiments show that pre-function information can effectively reduce false negatives in software static testing.",2009,0, 7691,Fault Diagnosis Platform For Radar Circuit Based on Virtual Instrument,"In order to improve the efficiency of radar circuit maintenance, a fault diagnosis platform for radar circuits, based on the PXI & GPIB hybrid bus and virtual instrument technology, is developed in this paper, and a fuzzy fault tree reasoning algorithm is also presented. This platform is composed of two parts: hardware and software. The hardware consists of a built-in controller and 3 kinds of virtual instruments, which are a digital storage oscilloscope, a digital multimeter and a spectrum analyzer. The software is constructed from a fuzzy fault tree reasoning algorithm and an expert knowledge database which includes 198 fault tree flow charts of 20 types of radars. Experiments on this platform show that, with the virtual instruments, the fuzzy fault tree reasoning algorithm and the expert knowledge database, the platform can locate radar circuit faults effectively.",2010,0, 7692,An Energy Based Approach of Electromagnetism Applied to Adaptive Meshing and Error Criteria,"In order to improve the finite-element modeling of macroscopic eddy currents (associated with motion and/or a time-varying electrical excitation), an original error criterion for adaptive meshing, based on a local power conservation, is proposed. Then, the importance of the element order in the error computation is investigated. Finally, the criterion is coupled to a ""bubble"" mesh generator, and an adaptive meshing of a 2D induction heating case is performed.",2008,0, 7693,Modeling and simulation of high temperature resistive superconducting fault current limiters,"In order to introduce high temperature superconducting fault-current limiters (SFCL) into electric power systems, we need a way to conveniently predict the limiting characteristic in a given situation.
The most important physical property dominating the current limiting behavior of the SFCL is the electric field-current density (E-J) characteristic of high temperature superconductors (HTS), which is dependent on temperature. We have therefore developed an EMTP/ATP model of a high temperature resistive type SFCL using the MODELS language based on the E-J characteristics of HTS. The real-time circuit current is used as an input signal to the SFCL model, and the output of the model is controlled by a TACS-controlled time-dependent resistance. The operating characteristics and limitation behaviors of the SFCL have been investigated in detail.",2004,0, 7694,Fault Diagnosis of Gas Blower Based on Genetic Fuzzy Neural Network,"In order to make full use of the capability of the GA's global searching and the BP network's local searching, a genetic fuzzy neural network model is proposed, and the fuzzy processing of fault characteristic parameters and the optimization of the weights and thresholds of the ANN by the GA are studied. As a result, the convergence speed and convergence precision are greatly increased. Application to the fault diagnosis of a gas blower system shows that the new model overcomes the low learning rate and local minima of the BP algorithm, and the fault diagnosis precision is effectively improved.",2009,0, 7695,Integrated Genetic Neural Networks and Its Application in Fault Diagnosis,"In order to overcome the problems of slow convergence and easily falling into local minima in the BP algorithm, a new improved genetic BP algorithm was put forward. To determine whether the network has fallen into a local minimum point, a discriminant of local minima was put forth for the training process of the neural network. A genetic algorithm was used to revise the weights of the neural network if the BP algorithm fell into a local minimum. An integrated genetic neural network fault diagnosis system was set up based on both information fusion technology and actual fault diagnosis, which takes the sub genetic neural networks as primary diagnoses from different sides, and then gains the conclusions through decision-making fusion. The examples show that it takes full advantage of diversified characteristic information, and improves the diagnosis rate.",2008,0, 7696,Dynamic Causality Diagram in Fault Diagnosis,"In order to overcome some shortcomings of belief networks, the dynamic causality diagram is put forward. Its knowledge representation, reasoning and probability computing are described, and the model of the causality diagram used for system fault diagnosis, the model constructing method and the reasoning algorithm are proposed. At last, an application example in the fault diagnosis of a nuclear power plant is given, which shows that the method is effective.",2009,0, 7697,Correlating voltage sags with line faults and lightning,"In order to position themselves for the coming competitive energy marketplace, utilities must judiciously improve their transmission and distribution system to provide the highest quality power to their customers. In this light, the knowledge of what caused particular voltage sags can be invaluable in deciding where to spend a limited capital improvement budget. This article describes an integrated system of databases that correlates voltage sags with transmission line faults and any corresponding lightning strikes that caused the fault.
By determining the critical locations for susceptibility to lightning-induced faults, remedial measures can be tactically implemented to improve both the lightning protection and the fault performance of the system. In addition, disturbances not caused by lightning can be identified so that other causes can be determined",2001,0, 7698,Prediction-Table Based Fault-Tolerant Real-Time Scheduling Algorithm,"In order to predict accurately whether primary versions of real-time tasks are executable in a software fault-tolerant module, a new algorithm, PTBA, a prediction-table based algorithm, is presented. PTBA uses a prediction table to predict whether a host primary can meet its pre-deadline. The prediction table contains the pre-assignment information of tasks between the current time and the alternates' notification time. If the prediction result shows that a host primary does not have enough time to execute, it will be aborted. Otherwise, the prediction table is referenced to schedule tasks with low overhead. The novelty of PTBA is that it schedules primaries according to their corresponding alternates' notification time and has no extra scheduling overhead in prediction-table mode. Simulation results show that PTBA allows more execution time for primaries and wastes less processor time than the well-known similar algorithms. PTBA is appropriate for situations where the periods of tasks are short and the software fault probability is low",2006,0, 7699,Joint Generalized Antenna Combination and Symbol Detection Based on Minimum Bit Error Rate: A Particle Swarm Optimization Approach,"In order to reduce hardware cost and achieve superior performance in multi-input multi-output (MIMO) systems, this paper proposes a novel scheme for joint antenna combination and symbol detection. More specifically, the new approach simultaneously determines the transformation weighting for antenna combination to lower the RF chains called for and designs the minimum bit error rate (MBER) detector to effectively mitigate the impairment due to interference. The joint decision statistic, however, is highly nonlinear, and the particle swarm optimization (PSO) algorithm is employed to reduce the computational overhead. Conducted simulation results show that the new approach yields satisfactory performance with reduced computational overhead compared with previous works.",2008,0, 7700,Design for N+1 fault-tolerant integrated solar controller,"In order to address the capacity expansion of solar power systems and the system paralysis caused by failure of the host controller, a new controller was designed in this paper. The control system adopted a two-level architecture which consisted of a host controller and power modules. CAN bus technology was used for communication. It was easy to form solar power systems of different capacities through the combination of multiple power modules. When the host controller failed, the first power module could be used as the host controller to ensure the normal operation of the system. A prototype was made and passed high/low temperature tests. A verification test was conducted in a simulated environment. Experimental results show that the scheme is correct and feasible.",2010,0, 7701,Knowledge Acquisition Model for Satellite Fault Diagnosis Expert System,"In order to solve the bottleneck problem of building an expert system, a knowledge acquisition model of a fault diagnosis expert system for satellites was presented.
Firstly, a data discretization algorithm based on fuzzy sets was put forward to discretize the decision table. Secondly, a rule extraction algorithm was brought forth to extract productive rules from the decision table. Thirdly, we take an example to demonstrate how to extract productive rules for the fault diagnosis expert system for satellites. The operation parameters of a satellite's power system were collected and discretized to construct a decision table. We employed an attribute reduction algorithm based on the discernibility matrix to do the attribute reduction and then extracted productive rules using the rule extraction algorithm we presented. The comparison between the rules extracted by the rough sets software Rosetta and by our model demonstrated the correctness and effectiveness of our knowledge acquisition model.",2009,0, 7702,Research on identification between inrush current and internal faults of power transformer based on H-S transform,"In order to solve the problem of misoperation of the transformer differential relay due to inrush current, inrush current and internal faults must be discriminated effectively. This paper presents a novel approach using the hyperbolic S-transform (HST), which is a very powerful tool for non-stationary signal analysis giving the information of transient currents both in the time and frequency domains. The signal was transformed to phase space by using the HST and the features were extracted; the time-frequency contours obtained after the HST show that the contours in the case of inrush current are different from those in the case of internal faults. By calculating the energy and the ratio of the energy at level 1 and level 9, we found that a ratio of 6 is the threshold between inrush current and fault current.",2010,0, 7703,On Diagnosis Prototype System for Motor Faults Based on Immune Model,"In motor fault diagnosis, the early detection of incipient faults of components plays an important role. To support the detection of incipient and unknown faults, an immunology-based fault diagnosis prototype system was proposed. The immune system for fault diagnosis comprises an innate immune layer and an adaptive immune layer. In the system, the innate immune layer directs the recognition of known faults; the adaptive immune layer learns incipient and unknown fault patterns. The simulation shows that a satisfying result can be acquired by using the layered computation model.",2009,0, 7704,Speech Enhancement Combining Optimal Smoothing and Errors-In-Variables Identification of Noisy AR Processes,"In the framework of speech enhancement, several parametric approaches based on an a priori model for a speech signal have been proposed. When using an autoregressive (AR) model, three issues must be addressed. (1) How to deal with AR parameter estimation? Indeed, due to additive noise, the standard least squares criterion leads to biased estimates of AR parameters. (2) Can an estimation of the variance of the additive noise for each speech frame be obtained? A voice activity detector is often used for its estimation. (3) Which estimation rules and techniques (filtering, smoothing, etc.) can be considered to retrieve the speech signal? Our contribution in this paper is threefold. First, we propose to view the identification of the noisy AR process as an errors-in-variables problem. This blind method has the advantage of providing accurate estimations of both the AR parameters and the variance of the additive noise.
Second, we propose an alternative algorithm to standard Kalman smoothing, based on a constrained minimum variance estimation procedure with a lower computational cost. Third, the combination of these two steps is investigated. It provides better results than some existing speech enhancement approaches in terms of signal-to-noise ratio (SNR), segmental SNR, and informal subjective tests.",2007,0, 7705,Two-bunch orbit correction using the Wakefield kick,"In the KEKB injector linac, a two-bunch acceleration scheme has been used for doubling the positron injection rate to the KEKB low-energy ring (LER). In this operation mode, the multi-bunch transverse wake field caused by the first bunch affects the beam orbit of the second bunch. In the KEKB linac, an orbit correction method based on the average minimum of two-bunch orbits has been adopted, and has worked stably. However, a new two-bunch orbit correction method is strongly required to reduce the loss of charge. We propose a new two-bunch orbit correction method based on a local bump method. In this scheme, some local bumps are intentionally constructed in a low-energy area. Adjusting the local bump height can control the wake field strength affecting the second bunch. In this paper, we report on the results of a preliminary beam test to confirm that this new method is useful",2003,0, 7706,Dynamic Security Assessment to protect systems after severe fault situations,"In the last 10 years the number of severe fault situations and black-outs worldwide has been increasing. The classical static security assessment is used to monitor the system situation after contingencies, but is not able to take into account the complex dynamic behaviour of an electrical system together with the control of generators and grid equipment like switched capacitors or FACTS, together with the protection reaction, in unforeseeable situations after severe system faults. The paper describes a modern dynamic security assessment (DSA) system which allows handling predefined dynamic contingencies in real time with intelligent processing and evaluation. The basis of the system is the system simulation tool PSS NETOMAC, which can simulate the dynamic behaviour of large electrical systems including control and protection. A contingency builder allows the user to define the interesting contingency scenarios like outages of grid elements or generators, combinations of outages, or system faults like short circuits, etc. The events can be calculated in real time, which means that about 200 contingency cases can be handled in about 10 minutes, depending on the system size. The DSA system analyses the events using an intelligent and flexible criteria editor which gives the opportunity to select criteria for critical system time behaviour. These criteria allow one to observe how critically a system reacts by checking undervoltages, frequency, angle differences in the grid, overcurrents, machine angles, etc. The information about severe cases is available in a protocol for easy recalculation of critical cases in detail with more parameters checked and monitored. The results can be used to monitor the overall situation of a system periodically. The automatic monitoring of critical events is under construction.",2006,0, 7707,An efficient hardware-software approach to network fault tolerance with InfiniBand,"In the last decade or so, clusters have seen a tremendous rise in popularity due to their excellent price-to-performance ratio.
A variety of interconnects have been proposed during this period, with InfiniBand leading the way due to its high performance and open standard. The increasing size of InfiniBand clusters has tremendously reduced the mean time between failures of their various components. In this paper, we specifically focus on network component failure and propose a hybrid hardware-software approach to handling network faults. The hybrid approach leverages user-transparent network fault detection and recovery using Automatic Path Migration (APM), and the software approach is used in the wake of APM failure. Using Global Arrays as the programming model, we implement this approach with the Aggregate Remote Memory Copy Interface (ARMCI), the runtime system of Global Arrays. We evaluate our approach using various benchmarks (siosi7, pentane, h2o7 and siosi3) with NWChem, a very popular ab initio quantum chemistry application. Using the proposed approach, the applications run to completion without restart on emulated network faults and with acceptable overhead for benchmarks executing for a longer period of time.",2009,0, 7708,Maintenance planning for fault tolerant designs used in critical applications,"In the last few years, digital systems, especially computers, have been incorporated into commercial and military aircraft flight control systems, and industrial controllers. A system used in a typical critical application is the architecture of the X-29 aircraft flight control system, based on triple modular redundancy. The control system uses three identical computers performing the same operations. The results from each computer are examined and the output from the system is formed via a majority vote of the three results. For such a system, used in critical applications, it is extremely important, besides other evaluations, to plan the maintenance logistic support, knowing the failure rates and cost of all its blocks, the system quantity and installation sites, possible stock and spare source locations, etc",2001,0, 7709,Eliminating Concurrency Bugs with Control Engineering,"In the multicore era, concurrency bugs threaten to reduce programmer productivity, impair software safety, and erode end-user value. Control engineering can eliminate concurrency bugs by constraining software behavior, preventing runtime failures, and offloading onerous burdens from human programmers onto automatically synthesized control logic.",2009,0, 7710,Effectiveness of the frequency analysis of the stator current in the rotor fault detection of induction motors,"In the paper, problems connected with the rotor diagnostics of induction motors using frequency and time-frequency methods are presented. The most frequently used methods are discussed, such as the fast Fourier transform (FFT), the short-time Fourier transform (STFT) and the wavelet transform (WT).
The comparative analysis of the efficiency of the above-mentioned methods in the fault detection of cage rotors is presented.",2008,0, 7711,Hardware Fault Free Simulation for SOC,"In the paper, a structural-functional multi-valued hardware model of a digital device is offered; a two-circuit structural-functional multi-valued hardware model of a digital device for the co-simulation of multiple input patterns and a manifold performance increase of transient analysis in sequential structures is proposed; and an automatic model of the HDL-code translation process to a data structure for the analysis and verification of a digital system on chip with hardware is proposed.",2007,0, 7712,All Bits Are Not Equal - A Study of IEEE 802.11 Communication Bit Errors,"In IEEE 802.11 Wireless LAN (WLAN) systems, techniques such as acknowledgement, retransmission, and transmission rate adaptation are frame-level mechanisms designed for combating transmission errors. Recently, sub-frame level mechanisms such as frame combining have been proposed by the research community. In this paper, we present results obtained from our bit error study for identifying sub-frame error patterns, because we believe that identifiable bit error patterns can potentially introduce new opportunities in channel coding, network coding, forward error correction (FEC), and frame combining mechanisms. We have constructed a number of IEEE 802.11 wireless LAN testbeds and conducted extensive experiments to study the characteristics of bit errors and their location distribution. Conventional wisdom dictates that bit error probability is the result of channel condition and ought to follow the corresponding distribution. However, our measurement results identify three repeatable bit error patterns that are not induced by channel conditions. We have verified that such error patterns are present in WLAN transmissions in different physical environments and across different wireless LAN hardware platforms. We also discuss our current hypotheses for the reasons behind these bit error probability patterns and how identifying these patterns may help improve WLAN transmission robustness.",2009,0, 7713,Correction of paradoxical vision by simulated depth cue and inverted mirror image for laparoscopic surgery,"In laparoscopic surgery, surgeons often encounter paradoxical vision depending on their position relative to the camera. Such paradoxical vision evokes confusion and surely deteriorates surgical performance. Previous research indicated that an inverted mirror image is useful to compensate for this problem, though the upside-down inversion perplexes depth sensation. To solve the dilemma, we proposed a modified method that displays an inverted mirror image plus a perspective projection that adds a simulated depth cue. After preparing the image adjustment software, a trainer box, and a touch panel device to measure the motion of the forceps, we tested its validity in 36 participants including 10 experienced surgeons. Each participant was requested to push buttons following the computer's assignment for ten repetitions of the task under three conditions: 1. standing parallel to the camera; 2. standing at the opposite side of the camera; 3. standing at the opposite side of the camera watching the inverted mirror image with perspective projection. The mean time for completion of the task was 23.1+/-5.0 seconds for the subjects standing parallel to the camera. It was 98.0+/-96.5 seconds and 43.3+/-24.3 seconds for the subjects standing at the opposite side of the camera without and with the transformed image, respectively.
Paradoxical vision significantly deteriorated performance (p<0.001); however, it was significantly compensated by the inverted mirror vision with perspective projection (p<0.0015). In ""rope passing"" and ""bead drop"" trials, tested by 10 experienced surgeons, the inverted and perspective vision significantly compensated the deteriorated performance under paradoxical vision. The inverted mirror image with perspective projection would be a useful tool for the correction of paradoxical vision in laparoscopic surgery, and further research would be warranted to make such a system practical.",2010,0, 7714,Evaluating the Effect of Agile Methods on Software Defect Data and Defect Reporting Practices - A Case Study,"In large, traditional software development projects, the number of defects can be considerably high. Agile methods promise code quality improvement, but while embracing the agile methods, software development organizations have realized that defects still do exist and must be managed. When the development is distributed over several sites, defect management can become even more challenging. In this study we analyzed defect data in a large multi-site organization during the first twelve months of their agile transformation. Complementary information was gathered by a survey, which was conducted in the organization twice: after six and after twelve months of starting the agile transformation. The results indicate that the defect reporting practices changed after the agile adoption was started, the defect inflow was more stable and the defect closing speed improved.",2010,0, 7715,Fault Tolerant Job Scheduling in Computational Grid,"In large-scale grids, the probability of a failure is much greater than in traditional parallel systems [1]. Therefore, fault tolerance has become a crucial area in grid computing. In this paper, we address the problem of fault tolerance in terms of resource failure. We devise a strategy for fault-tolerant job scheduling in computational grids. The proposed strategy maintains a history of the fault occurrences of resources in the grid information service (GIS). Whenever a resource broker has a job to schedule, it uses the resource fault occurrence history information from the GIS and, depending on this information, uses different intensities of checkpointing and replication while scheduling the job on resources which have different tendencies towards faults. Using checkpointing, the proposed scheme can make grid scheduling more reliable and efficient. Further, it increases the percentage of jobs executed within the specified deadline and allotted budget, hence helping to make the grid trustworthy. Through simulation we have evaluated the performance of the proposed strategy. The experimental results demonstrate that the proposed strategy effectively schedules the grid jobs in a fault-tolerant way in spite of the highly dynamic nature of the grid",2006,0, 7716,Determining the point of minimum error for 6DOF pose uncertainty representation,"In many augmented reality applications, in particular in the medical and industrial domains, knowledge about tracking errors is important. Most current approaches characterize tracking errors by 6x6 covariance matrices that describe the uncertainty of a 6DOF pose, where the center of rotational error lies in the origin of a target coordinate system. This origin is assumed to coincide with the geometric centroid of a tracking target.
In this paper, we show that, in the case of a multi-camera fiducial tracking system, the geometric centroid of a body does not necessarily coincide with the point of minimum error. The latter is not fixed to a particular location, but moves, depending on the individual observations. We describe how to compute this point of minimum error given a covariance matrix and verify the validity of the approach using Monte Carlo simulations on a number of scenarios. Looking at the movement of the point of minimum error, we find that it can be located surprisingly far away from its expected position. This is further validated by an experiment using a real camera system.",2010,0, 7717,Error-correcting codes for automatic control,"In many control-theory applications one can classify all possible states of the device by an infinite state graph with polynomially-growing expansion. In order for a controller to control or estimate the state of such a device, it must receive reliable communications from its sensors; if there is channel noise, the encoding task is subject to a stringent real-time constraint. We show a constructive on-line error correcting code that works for this class of applications. Our code is computationally efficient and enables on-line estimation and control in the presence of channel noise. It establishes a constructive (and optimal-within-constants) analog, for control applications, of the Shannon coding theorem.",2005,0, 7718,The importance of life cycle modeling to defect detection and prevention,"In many low-maturity organizations, dynamic testing is often the only defect detection method applied. Thus, defects are detected rather late in the development process. High rework and testing effort, typically under time pressure, lead to unpredictable delivery dates and uncertain product quality. This paper presents several methods for early defect detection and prevention that have been in existence for quite some time, although not all of them are common practice. However, to use these methods operationally and scale them to a particular project or environment, they have to be positioned appropriately in the life cycle, especially in complex projects. Modeling the development life cycle, that is, the construction of a project-specific life cycle, is an indispensable first step to recognize possible defect injection points throughout the development project and to optimize the application of the available methods for defect detection and prevention. This paper discusses the importance of life cycle modeling for defect detection and prevention and presents a set of concrete, proven methods that can be used to optimize defect detection and prevention. In particular, software inspections, static code analysis, defect measurement and defect causal analysis are discussed. These methods allow early, low-cost detection of defects, preventing them from propagating to later development stages and preventing the occurrence of similar defects in future projects.",2002,0, 7719,An ought-to-do deontic logic for reasoning about fault-tolerance: the diarrheic philosophers,"In the present paper we use a variation of a well-known example (dining philosophers) to illustrate how deontic logics can be used to specify, and verify, systems with fault-tolerant characteristics. Towards this goal, we first introduce our own version of a propositional deontic logic, and then some of its most important meta-properties are described.
Our main goal is to show that our deontic formalism is suitable for use in practical examples, and also to prepare the ground for more inclusive formalisms.",2007,0, 7720,Channel capacity and average error rates in generalised-K fading channels,"In the present study, the performance of digital communication systems operating over a composite fading channel modelled by the generalised-K distribution is analysed and evaluated. Novel closed-form expressions for the outage performance, the average bit error probabilities of several modulation schemes and the channel capacity under four different adaptive transmission schemes are derived. The analytical expressions are used to investigate the impact of different fading parameters of this composite fading channel model on the average bit error rate performance for a variety of digital modulation schemes and the spectral efficiency of different adaptive transmission policies.",2010,0, 7721,Measurement Data Correction for Emission Tomography,"In statistical reconstruction of emission tomography images, the Bayesian reconstruction, or maximum a posteriori (MAP), method has proved its superiority over all the other regularization methods. To further improve the reconstruction, this paper presents a novel statistical image reconstruction method based on a coupled feedback (CF) iterative model for emission tomography. This CF iterative algorithm updates the noisy emission sinogram (the measurement data of the detectors) using the latest reconstructed image. The experiments and the performance analysis confirm the virtue of the new method.",2009,0, 7722,The effect of the time window width of correlation method on single-ended modern traveling wave based fault location principles,"In the technique of single-ended modern traveling wave based fault location principles for transmission lines, the traveling wave correlation method is a classical algorithm applied to detect the fault-reflected surge. But in actual application, the lack of an effective way to choose the time window results in a limitation of the correlation method - the time window width affects the correlation coefficient value, which is an important indicator of the similarity of the waveforms. This paper presents a new concept called the optimal time window width of the correlation method, and analyzes the different factors that possibly affect this width by means of EMTP-ATP and Matlab simulations. Furthermore, the basic idea of a new correlation method based on multiple time windows is proposed, which could be applied to improve the reliability of the fault location technique.",2008,0, 7723,Design and emulate on motor fault diagnosis system,"In traditional motor fault diagnosis, only a certain type of motor fault was diagnosed. The limited amount of information leads to unreliable diagnostic conclusions. In this article, a new fault diagnosis method was put forward. Information fusion technology, with stator current and rotor vibration signals as diagnostic characteristic input signals, was introduced into the motor fault diagnosis. A neural network method was applied to the fault identification. In order to improve the diagnostic precision, the input signals were divided into those related to the stator current signal and those related to the rotor vibration signal. Each group separately adopts a diagnosis sub-network to complete different aspects of fault diagnosis. Finally, information fusion of each sub-network's diagnostic results was carried out and the final diagnosis results were obtained.
The simulation of the diagnostic method showed that applying neural network data fusion to motor fault diagnosis is feasible.",2010,0, 7724,Test Patterns for Verilog Design Error Localization,"In this article we briefly state the idea behind model-based diagnosis and its application to debugging RTL (Register Transfer Level) Verilog designs. In providing a debugging model for the Verilog HDL (Hardware Description Language) we rely on a specific abstraction (trace semantics) that captures solely quiescent states of the design. In this vein we manage to overcome the inherent complexity issues of event-based Verilog without relying on specific fault models. To leverage test patterns for design error localization we propose the filtering approach and relate it to the concept of Ackermann constraints. Notably, our empirical results demonstrate that our novel technique considerably increases the diagnosis resolution even in the presence of only a couple of test cases. The article outlines a case study comprising several circuits, where the proposed technique allowed excluding 95 percent of the Verilog code from being faulty by merely considering a couple of test cases.",2009,0, 7725,Analysis of the soft error effects on CAN network controller,"In this article, the effects of single event upsets on a Controller Area Network (CAN) controller and on the network are evaluated. The experiment is done using the SINJECT fault injection tool in a simulation-based environment. Three main modules of the controller are used in three independent sets of experiments in one of the CAN controllers of the network. The results show that the main cause of network failure is the bit stream processor: 6.7% of the faults injected in the bit stream processor led to network failure. On the other hand, the registers sub-module of the controller showed to be the most fault tolerant. The experiment showed that 0.3% of the faults in the registers module result in network failure, and the bit timing module is responsible for the failure of the whole network in 3.2% of the injected single event upset faults.",2010,0, 7726,Comparisons of multipath modeling strategies for the estimation of GPS positioning error,"In this article, two objectives were planned: the choice of an appropriate electromagnetic multipath model, and a suitable description of the environment.",2009,0, 7727,Identification inrush current and internal faults of transformer based on hyperbolic S-transform,"In order to solve the problem of mis-operation of the transformer differential relay owing to inrush current, internal faults and inrush current must be discriminated effectively. In this paper, a novel approach for the identification between inrush current and internal faults, based on the hyperbolic S-transform (HST), which is a very powerful tool for nonstationary signal analysis giving the information of transient currents both in the time and frequency domains, is presented. The signal is transformed to phase space by using the HST and the features are detected. It is found that the time-frequency contours in the case of inrush current are different from those in the case of internal faults. The results obtained by using the HST and the discrete wavelet transform (DWT) were compared. Comparison results indicate that the time-frequency localization characteristics are more distinct in the ST domain, and the HST has a strong capability in noise reduction.
The spectral energy and standard deviation from the HST of the signal are computed, and the classification of inrush current and internal faults is done by BP neural networks. Simulation results indicate that this technique is effective and feasible.",2009,0, 7728,Research of steep-front wave impulse voltage test effectiveness in finding internal fault of composite insulators,"In order to verify the effectiveness of steep-front impulse voltage testing in finding the internal faults of composite insulators, some insulators with faults are modeled, which include a conductive channel, a semi-conductive air channel and small localized air bubbles that occur separately at different places. A steep-front wave impulse voltage test (steepness of wave front is 1000-4000 kV/s) is made for each of these faulty and normal insulators. At the same time, the internal electric field intensity and its distribution in the insulator are calculated by making use of finite element analysis software in order to test whether breakdown has occurred. The result is consistent with the experimental one. The final result shows that steep-front wave impulse voltage testing plays a very effective part in finding severe faults of the insulator; however, tiny faults are not easily found using this method. The result of this research gives a reference to revise the steep-front wave impulse voltage test standard",2001,0, 7729,Effect of the feature vector size on the generalization error: the case of MLPNN and RBFNN classifiers,"In the pattern recognition literature, it is well known that a finite number of training samples causes practical difficulties in designing a classifier. Moreover, the generalization error of the classifier tends to increase as the number of features gets large. We study the generalization error of several classifiers (MLPNN, RBFNN, K-NN) in high dimensional spaces, under a practical condition: the ratio of the number of training samples to the dimensionality is small. Experimental results show that the generalization error of neuronal classifiers decreases as a function of dimensionality while it increases for statistical classifiers",2000,0, 7730,Feedback aided content adaptive unequal error protection based on Wyner-Ziv coding,"In prior work, we proposed an unequal error protection algorithm based on Wyner-Ziv coding for error resilient video transmission. In subsequent work it was demonstrated that using either content adaptive unequal error protection or feedback aided unequal error protection individually improved error resilience performance. In the current paper we propose to combine the use of a content adaptive function, implemented at the encoder, with the channel loss feedback provided by the decoder. The experimental results demonstrate improved rate distortion performance.",2009,0, 7731,A new filter scheme for the filtering of fault currents,"In protection relaying schemes, the digital filter unit plays an essential role in calculating accurate phasors. However, the decaying DC components in fault currents always cause false operations of relay systems. This paper presents a new filter scheme which can remove such components from fault currents. Using the proposed scheme, the full-cycle DFT (FCDFT) only needs one cycle of samples to obtain an accurate fundamental phasor. The decaying DC component is removed by an iterative computation. The proposed filter scheme can help digital filters achieve accurate results rapidly.
Simulation results illustrate the effectiveness of this new algorithm for distance relaying applications.",2009,0, 7732,Fast Algorithms for Testing Fault-Tolerance of Sequenced Jobs with Deadlines,"In queue-based scheduling systems jobs are executed according to a predefined sequential plan. During execution, faults may occur that cause jobs to re-execute, thus delaying the whole schedule. It is thus important to determine (in real time) whether the given set of pre-ordered jobs is fault-tolerant, that is, whether all jobs will always meet their deadlines. This allows, for instance, deciding online whether to admit a new urgent job into the queue while still guaranteeing that the whole schedule remains fault-tolerant. Our goal in this work is to design efficient algorithms for testing the fault tolerance of sequenced jobs in the presence of transient faults. We consider different fault models that specify which fault patterns are allowed to occur and how soon failed jobs can be restarted. For each fault model we provide efficient algorithms that determine the feasibility of all jobs in the schedule. Our algorithms are exact and run in time linear in the number of jobs (deterministically, or with very high probability, depending on the fault model), and thus can be used to make real-time decisions.",2007,0, 7733,Correction to A Fast Incremental Hypervolume Algorithm [Dec 08 714-723],"In the above titled paper (ibid., vol. 12, no. 6, pp. 714-723, Dec. 08), there was an error in the pseudo-code for the incremental hypervolume by slicing objectives (IHSO) that might prevent its easy implementation. The corrected pseudo-code is presented here.",2009,0, 7734,Mobile agent fault tolerance in autonomous decentralized database systems,"In the background of e-business, the autonomous decentralized database system has been proposed to cope with the dynamic and heterogeneous requirements of users. In a diversified environment of service provision and service access, the ADDS provides flexibility to integrate heterogeneous and autonomous systems while assuring timeliness and high availability. In this system, mobile agent based updates have been shown to be effective in resolving the consistency issue. The use of mobile agents, however, is critical and requires reliability with regard to mobile agent failures that may lead to bad response times, and hence the availability of the system may be lost. In this paper, we propose a technique for resolving mobile agent failures, which are the most critical aspect of fault tolerance in the system. The technique employs only the local knowledge of the system; hence the autonomy and decentralization of the system are preserved.",2002,0, 7735,Investigating Connector Faults in the Time-Triggered Architecture,"In the context of distributed real-time systems as deployed in the avionic and automotive domains, a substantial number of system malfunctions result from connector faults. For instance, a middle-class car has more than 40 electronic control units (ECUs) interconnected by a heterogeneous network infrastructure consisting of hundreds of wires and connections. Connector faults such as loose contacts pose a challenging task for the technician at the service station. This paper investigates to what extent the use of time-triggered communication protocols, in particular the TTP C2 communication controller, helps in identifying connector faults.
We perform fault injection campaigns to judge whether the status information provided by the TTP C2 controller is sufficient for the detection of connector faults. The derived results constitute an important input for online analysis mechanisms",2006,0, 7736,Software product improvement with inspection. A large-scale experiment on the influence of inspection processes on defect detection in software requirements documents,"In the early stages of software development, inspection of software documents is the most effective quality assurance measure to detect defects and provides timely feedback on quality to developers and managers. The paper reports on a controlled experiment that investigates the effect of defect detection techniques on software product and inspection process quality. The experiment compares the defect detection effectiveness and efficiency of a general reading technique that uses checklist-based reading, and a systematic reading technique, scenario-based reading, for requirements documents. On the individual level, effectiveness was found to be higher for the general reading technique, while the focus of the systematic reading technique led to a higher yield of severe defects compared to the general reading technique. On a group level, which combined inspectors' contributions, the advantage of a reading technique regarding defect detection effectiveness depended on the size of the group, while the systematic reading technique generally exhibited better defect detection efficiency",2000,0, 7737,Modeling Depth Estimation Errors for Side Looking Stereo Video Systems,"In the EU-funded Integrated Project APROSYS, a side pre-crash sensing system will be set up consisting of a stereo video rig and a radar network. A second goal of APROSYS is to provide tools for the efficient development of future products based on such a sensing system. If a stereo rig points to the side of a moving road vehicle, then maximum angular velocities in azimuth are typically very large. Synchronous operation of the stereo video cameras therefore becomes highly important for correct depth estimation, and hence crucial for pre-crash sensing. This paper proposes a tool for automated verification of the synchronicity of a stereo rig. It consists of a running light clock and automatic image analysis. An error model relates synchronization errors to depth estimation errors. For a given pair of cameras, the tool is applied to give an upper bound to the resulting depth estimation error for the APROSYS application scenario. The tool can be used as a standard for quality control of future product developments",2006,0, 7738,Diagnosis by Image Recovery: Finding Mixed Multiple Timing Faults in a Scan Chain,"In this brief, we present a robust new paradigm for diagnosing a scan chain with multiple faults that could have different fault types. As compared to previous methods, the major advantage of ours is the ability to not only target mixed multiple types of timing faults in the same scan chain but also tolerate non-ideal conditions, e.g., when these faults only manifest themselves intermittently. Unlike the previous matching-based algorithms, we formulate the diagnosis problem as an image recovery process featuring a dynamic windowing technique and a running sequence handling technique.
Experimental results on a number of real designs show that this paradigm can successfully deal with some situations beyond existing methods.",2007,0, 7739,Size reduction and harmonic suppression of rat-race hybrid coupler using defected ground structure,"In this letter, a defected ground structure (DGS) is applied to design a compact microstrip rat-race hybrid coupler. The proposed structure can achieve a significant reduction of both size and harmonic signals. By embedding the DGS section, it is observed that the resonant frequency of the hybrid coupler is significantly lowered, which can lead to a large amount of size reduction for a fixed frequency operation. Besides, the third harmonic signal is suppressed to -30 dB with respect to a conventional rat-race hybrid coupler. In this case, the measured insertion loss is comparable to that of a conventional hybrid coupler.",2004,0, 7740,Capacity and error probability analysis of orthogonal space-time block codes over correlated nakagami fading channels,"In this letter, the system capacity and error probability of orthogonal space-time block coding (STBC) are considered for PAM/PSK/QAM modulation in correlated Nakagami fading channels. The approach is based on an equivalent scalar AWGN (additive white Gaussian noise) channel with a channel gain proportional to the Frobenius norm of the matrix channel for the STBC. Closed-form capacity and error probability expressions are derived for Nakagami fading channels. Numerical results are given to illustrate the theory",2006,0, 7741,Bit-Error-Rate Performance Enhancement of All-Optical Clock Recovery at 42.66 Gb/s Using Passive Prefiltering,"In this letter, we demonstrate the bit-error-rate (BER) performance enhancement of an all-optical clock recovery device at 42.66 Gb/s using a prefiltering operation in front of a self-pulsating semiconductor laser. The prefilter is composed of a simple passive fiber-Bragg-grating-based Fabry-Perot bandpass filter. The assessment is obtained thanks to BER measurement using a data remodulation technique.",2008,0, 7742,Unequal Error Protection for backward compatible 3-D video transmission over WiMAX,"In this paper, an unequal error protection (UEP) scheme for the transmission of 3-D (three-dimensional) video over a WiMAX communication channel is proposed. The colour plus depth map stereoscopic video is coded with backward compatibility using a scalable video coding (SVC) architecture, where users with conventional video decoders/receivers can receive the conventional 2-D (two-dimensional) video stream, whereas users with SVC decoders/receivers and the necessary 3-D video displays may render 3-D video. The proposed error protection scheme is based on the perceptual importance of the coded 3-D video components. The UEP method allocates more protection to the colour video packets than the depth map packets in order to receive good quality 2-D/3-D video. The protection levels are assigned by allocating differentiated transmission power to colour and depth map video packets during transmission. On-the-fly power allocation is based on the estimated distortion of the colour image sequence. The objective and perceptual quality evaluations show that the proposed UEP scheme improves the quality of 2-D video while achieving pleasing quality for 3-D viewers.",2009,0, 7743,Auto Regressive Model and Weighted Least Squares Based Packet Video Error Concealment,"In this paper, an auto-regressive (AR) model is applied to error concealment for block-based packet video encoding.
Each pixel within the corrupted block is restored as the weighted summation of corresponding pixels within the previous frame in a linear regression manner. Two novel algorithms using the weighted least squares method are proposed to derive the AR coefficients. First, we present a coefficient derivation algorithm under the spatial continuity constraint, in which the summation of the weighted square errors within the available neighboring blocks is minimized. The confidence weight of each sample is inversely proportional to the distance between the sample and the corrupted block. Second, we provide a coefficient derivation algorithm under the temporal continuity constraint, where the summation of the weighted square errors around the target pixel within the previous frame is minimized. The confidence weight of each sample is proportional to the similarity of geometric proximity as well as the intensity gray level. The regression results generated by the two algorithms are then merged to form the ultimate restorations. Various experimental results demonstrate that the proposed error concealment strategy is able to increase the peak signal-to-noise ratio (PSNR) compared to other methods.",2010,0, 7744,Correctness of fault-tolerant cluster-based beacon vector routing for ad hoc networks,"In this paper, correctness is proven for a new clustering method and fault-tolerant routing approach to beacon vector routing. The correctness is proven through termination, liveness, and safety properties. The complexity of the proposed algorithms is analyzed. The clustering approach provides load balancing between clusters, and organizes beacon placement which reduces the number of hops for packet transmissions. The fault-tolerant routing approach improves significantly the percentage of successful packet transmission attempts, and reduces flooding in the network, in the presence of multiple simultaneous faults.",2005,0, 7745,Decision Tree-Based Preventive and Corrective Control Applications for Dynamic Security Enhancement in Power Systems,"In this paper, decision tree (DT)-based preventive and corrective control methods are proposed to enhance the dynamic security of power systems against the credible contingencies causing transient instabilities. Preventive and corrective controls such as generation rescheduling and load shedding schemes, respectively, are developed based on the security regions and boundaries that are calculated in the space of appropriate decision variables. The security regions and boundaries are determined by the rules of DTs that are developed by the generated knowledge bases. This work also involves improving the accuracy of security boundaries as well as the optimal solutions for the fuel cost and load shedding optimization problems encountered in the preventive and corrective controls. The methods are implemented on the Entergy power system model.",2010,0, 7746,Software Agents Situated in Primary Distribution Networks: A Cooperative System for Fault and Power Restoration Management,"In this paper, extended research on the potential of implementing distributed artificial intelligence technology to achieve high degrees of independence in distribution network protection and restoration processes is presented. The work that has already been done in the area of agent-based and/or knowledge-based applications and expert systems is briefly reviewed. The authors justify the need to distribute activities in contrast to centralized methodologies.
A proper model of the real environment is introduced in order to define the design parameters of a prototype agent entity, which is part of a cooperative network-management system. The system's goal is to autonomously perform effective fault management upon medium-voltage power distribution lines. The structure of the agent entity is then described by means of the agent behaviors being implemented. The cooperative operations of the proposed system and its computer simulation are presented. Simulation results are evaluated. Finally, general conclusive remarks are made.",2007,0, 7747,Fault-tolerant explicit MPC of PEM fuel cells,"In this paper, fault-tolerant explicit MPC control of fuel cell systems is presented. MPC is one of the control methodologies that allow fault tolerance to be introduced more easily. Here, this capability is extended using recent explicit MPC control theory. Explicit MPC control allows the control law to be derived offline without using optimization. Moreover, it allows faults to be introduced as additional parameters, since it is based on parametric programming. This makes it possible to change controller parameters in real time without recomputing the MPC controller or having a bank of pre-computed MPC controllers. Finally, the proposed approach is assessed on a known test bench PEM fuel cell system.",2007,0, 7748,Improved MB Mode Prediction in Extended Spatial Scalability with Error Resilient Coding,"In this paper, macroblock (MB) mode prediction in extended spatial scalability (ESS) with error resilience is investigated. We extend the MB mode derivation method originally proposed by our lab to improve the efficiency of inter-layer prediction in ESS, to error-prone transmission conditions. The performance of our method is evaluated in metrics such as PSNR gain and complexity reduction, which is represented by the percentage change in the number of MBs for each partition type at the decoder for various tested sequences. Simulation results show that significant complexity reduction is achieved, with an average 50% increase in the number of MBs with 16×16 mode in the enhancement layer and an average PSNR-Y gain of 1.35 dB at the decoder compared to JVT-W030 under testing conditions specified in JVT-V302.",2010,0, 7749,Fault diagnosis modeling of power systems using Petri Nets,"In this paper, Petri Nets (PN) are used for accurate fault diagnosis in power systems when some incomplete and uncertain alarm information of protective relays and circuit breakers is detected. After reviewing Petri net theory, models of fault diagnosis based on PN are built, and their corresponding logical verifications are carried out. Finally, the validity and feasibility of this method is illustrated by simulation results. It is shown from several cases that the faulted system elements can be diagnosed correctly by use of these models. With the proposed method, it is possible to reduce diagnosis time and increase correctness of results compared to traditional methods. It is also suitable for online applications. The proposed method can easily be adapted to different power system network configurations.",2010,0, 7750,Power quality improvement using a new structure of fault current limiter,"In this paper, power quality improvement by using a new structure of non-superconducting fault current limiter (NSFCL) is discussed. This structure prevents voltage sags at the Point of Common Coupling (PCC) just after fault occurrence, because of its fast operation.
On the other hand, previously used structures produce harmonics in the load voltage and have ac losses in normal operation. The new structure solves this problem by using a dc voltage source. The proposed structure is simulated using PSCAD/EMTDC software and simulation results are presented to validate the effectiveness of this structure.",2010,0, 7751,Distributed coordination of task migration for fault-tolerant FlexRay networks,"In this paper we present an approach to increase the fault tolerance in FlexRay networks by introducing backup nodes to replace defective ECUs (Electronic Control Units). In order to reduce the memory requirements of such backup nodes, we distribute redundant tasks over different nodes and propose the distributed coordinated migration of tasks of the defective ECU to the backup node at runtime. This approach enhances our former work, where we extended the FlexRay bus schedule by redundant slots to consider changes in the communication/slot assignment and investigated and evaluated different solutions to migrate the redundant tasks to the backup node using the static and/or dynamic segment of the communication cycle for transmissions. We present the approach of distributed coordination for migration and communication instead of additional dedicated coordinator nodes to further increase the fault tolerance. With this approach we improve the safety of FlexRay networks by avoiding a possible single point of failure due to a dedicated coordinator node, while also minimizing the time needed for a reconfiguration after an ECU failure. Furthermore, we reduce the overhead within the communication and the demand for additional hardware components.",2010,0, 7752,Register Transfer Level Concurrent Error Detection in Elliptic Curve Crypto Implementations,In this paper we present a register transfer level (RTL) concurrent error detection (CED) technique targeting hardware implementations of elliptic curve cryptography (ECC). The proposed mixed hardware- and time-redundancy based CED techniques use the mathematical properties of the underlying Galois field as well as the ECC primitives to detect both soft errors and permanent faults with low area overhead. Results for sequential implementations of GF multiplication and inverse operations yielded an area overhead of 30% and a time overhead of 120%.,2007,0, 7753,Impact of Error Characteristics of an Indoor 802.11g WLAN on TCP Retransmissions,"In this paper we present the results of extensive measurements made over an experimental wired-to-wireless testbed, which consisted of a TCP protocol combined with a real-world indoor IEEE 802.11g WLAN. We investigated the effects of signal attenuation due to client distance from the AP on the 802.11 frame error rates (FER) and consequently on the segment loss rates and retransmission behavior of TCP at the sender in the fixed network. Specifically, we experimented with different modulation schemes belonging to the OFDM 802.11g PHY in order to gauge differences in performance between them, comparing real-world FERs calculated from actual frame captures against SNR, for both the forward and reverse WLAN channel directions. We also present real-world distributions of frame retransmissions made over the WLAN by the AP, with useful findings.
Our results confirm that the reverse channel of a WLAN possesses a higher FER than the forward channel, and poses a greater threat to TCP's retransmission mechanism.",2008,0, 7754,Bayesian Networks for Fault Detection under Lack of Historical Data,In this paper we propose a Bayesian Network approach as a promising data fusion technique for surveillance of sensor accuracy. We prove the usefulness of this method even in cases where there is not enough feasible data to construct the model in the traditional way. In the presence of these data constraints we suggest an inversion of the causal relationship. This approach proves to be a possible solution to help the expert in the conditional probability assessment process. As a result a working model is constructed, which would not be possible using the traditional Bayesian Network approach.,2009,0, 7755,A Study on Character Recognition Error Correction at Higher Level Recognition Step for Mathematical Formulae Understanding,"In this paper we propose a method for correcting character recognition errors at the higher level recognition step in an understanding system for mathematical formulae. The system consists of two-level recognition steps: the low level recognition including character recognition, and the higher level recognition including layout recognition. We use the layout information recognized in the latter step to correct the character recognition errors by using two sources of information. One is based on some keywords such as mathematical function names, and the other is based on a cost tree and co-occurrence probabilities between symbols. The efficacy of the proposed method is verified by some experimental results, and the character recognition rate increased from 80.2% to 89.2%",2006,0, 7756,Rate-Distortion Optimal Video Transport Over IP with Bit Errors,"In this paper we propose a method for video delivery over bit error channels. In particular, we propose a rate distortion optimal method for slicing and unequal error protection (UEP) of packets over bit error channels. The proposed method performs a full-frame-based search using a novel dynamic programming approach to determine the optimal slicing configuration in a practically short time. Also we propose a rate and distortion estimation technique that decreases the time to evaluate the objective function for a slice configuration. The proposed method can perform rate-distortion UEP that can be used over forward error correction (FEC) capable channels. We show that the proposed method successfully exploits the local dynamics of a video frame and performs more than 1 dB better than common methods.",2006,0, 7757,Early error detection in systems-on-chip for fault-tolerance and at-speed debugging,"In this paper we propose a new method for the design of duplex fault-tolerant systems with early error detection and high availability. All the scannable memory elements (flip-flops) of the duplicated system are implemented as multimode memory elements according to Singh et al. (1999), thus allowing during normal operation the accumulation of a signature of its states in its scan-paths. By continuously comparing a 1-bit sequence of the compacted scan-out outputs of the accumulated signatures of the duplicated systems, an error can already be detected and a recovery procedure started before an erroneous result appears at the system outputs when a computation is completed. The accumulation of a signature during normal operation can also be used for debugging at-speed.
For this application the system need not be duplicated",2001,0, 7758,FPGA-based fault injection for microprocessor systems,"In this paper we propose an approach to speed up fault injection campaigns for the evaluation of dependability properties of processor-based systems. The approach exploits FPGA devices for system emulation, and new techniques are described, allowing the effects of faults to be emulated and faulty behavior to be observed. The proposed approach combines the speed of hardware-based techniques and the flexibility of simulation-based techniques. Experimental results are provided showing that speed-up figures up to 3 orders of magnitude with respect to state-of-the-art simulation-based techniques can be achieved",2001,0, 7759,Autonomic Fault Identification for Ubiquitous Self Healing Systems,In this paper we report the results of our experiments conducted for fault identification in self healing systems. The proposed scheme is designed to identify faults in a large-scale system where frequent faults can increase downtime. The experiments show that the proposed scheme has linear computational complexity and is highly portable.,2010,0, 7760,Analytical redundancy based predictive fault tolerant control of a steer-by-wire system using nonlinear observer,"In this paper, a nonlinear observer based analytical redundancy methodology is presented for fault tolerant control of a steer-by-wire (SBW) system. A long-range predictor based on the Diophantine identity has been utilized to improve the fault detection efficiency. The overall predictive fault tolerant control strategy was then implemented and validated on a steer-by-wire hardware-in-the-loop bench. The experimental results showed that the overall robustness of the SBW system was not sacrificed through the usage of analytical redundancy for sensors along with the designed FDIA algorithm. Moreover, the experimental results indicated that the faults could be detected faster using the developed analytical redundancy based algorithms for attenuating-type faults.",2010,0, 7761,A VCO with harmonic suppressed and output power improved using defected ground structure,"In this paper, a novel defected ground structure (DGS) is proposed for suppressing harmonics and increasing the output power of a voltage-controlled oscillator (VCO) in microwave circuits. The DGS is formed by connecting in parallel two periodic structures whose stopband center frequencies differ (in a ratio of 2:3), and provides the bandgap characteristic in certain frequency bands. Simulated and experimental results show that the microstrip line with DGS has a wide low-pass band for the fundamental frequency and a stopband for the second harmonic with good performance. To evaluate the effects of DGS on microwave VCOs, two GaAs field-effect transistor (FET) VCOs have been designed and fabricated. One of them has a 50 Ω microstrip line with DGS at the output section, while the other has only a 50 Ω straight line. Measured results show that the DGS suppresses the second harmonic to below -20 dBm at the output and improves the output power by 3-5%.",2003,0, 7762,Feature Extraction for Short-Circuit Fault Detection in Permanent-Magnet Synchronous Motors Using Stator-Current Monitoring,"In this paper, a novel frequency pattern and competent criterion are introduced for short-circuit-fault recognition in permanent-magnet synchronous motors (PMSMs).
The frequency pattern is extracted from the monitored stator current analytically and the amplitude of sideband components at these frequencies is introduced as a proper criterion to determine the number of short-circuited turns. Impacts of the load variation on the proposed criterion are investigated in the faulty PMSM. In order to demonstrate the aptitude of the proposed criterion for precise short-circuit fault detection, the relation between the nominated criterion and the number of short-circuited turns is specified by the mutual information index. Then, white Gaussian noise is added to the simulated stator current and the robustness of the criterion is analyzed with respect to the noise variance. The occurrence and the number of short-circuited turns are predicted using a support-vector machine as a classifier. The classification results indicate that the introduced criterion can detect the short-circuit fault incisively. Simulation results are verified by the experimental results.",2010,0, 7763,A Decision-Tree-Based Method for Fault Classification in Single-Circuit Transmission Lines,"In this paper, a novel method for fault classification in single-circuit transmission lines is presented. The proposed method needs voltages and currents of only one side of the protected line. After detecting the exact time of fault inception, the fault type is recognized by means of a decision-tree algorithm (DT) which is trained beforehand by applying the odd harmonics of the measured signals, up to the 19th. Simulation results have shown that the proposed method can classify the faults in less than a quarter of a cycle with the highest possible accuracy.",2010,0, 7764,Online monitoring and fault diagnosis system of Power Transformer,"In this paper, a novel transformer monitoring system is presented. This system combines an expert system and a neural network system, which improves the accuracy of fault diagnosis, monitors trends in the condition of the power transformer, detects transformer failures in a timely manner, avoids power outage accidents, and improves supply reliability. A wireless communication method to transmit the signal collected from the transformer was designed to overcome the limitations of cable transmission, given the difficulty of cabling in remote areas.",2010,0, 7765,"Low delay, error robust wireless video transmission architecture for video communication","In this paper, a novel video transmission architecture is proposed to meet the low-delay and error-robustness requirements of wireless video communications. This architecture uses FEC coding and an ARQ protocol to provide efficient bandwidth access from the wireless link. In order to reduce ARQ delay, a video proxy server is implemented at the base station. This video proxy not only reduces the ARQ response time but also provides error tracking functionality. We use H.263 as the experimental platform for our architecture. Experiments show that average luminance PSNR decreases only 0.35 dB for the ""Foreman"" sequence under a random error condition of 10^-3 error probability.",2002,0, 7766,Error Resilient Coding Based on Reversible Data Embedding Technique for H. 264/AVC Video,"In this paper, a prescription-based error concealment (PEC) method is proposed. PEC relies on pre-analyses of the concealment error image (CEI) for I-frames and the optimal error concealment scheme for P-frames at the encoder side. CEI is used to enhance the image quality at the decoder side after error concealment (by spatial interpolation or zero motion) of the corrupted intra-coded MBs.
A set of pre-selected error concealment methods is evaluated for each corrupted inter-coded MB to determine the optimal one for the decoder. Both the CEI and the scheme indices are considered as the prescriptions for the decoder and transmitted along with the video bit stream based on a reversible data embedding technique. Experiments show that the proposed method is capable of achieving PSNR improvement of up to 1.48 dB, at a considerable bit-rate, when the packet loss rate is 20%",2005,0, 7767,Checksum-Based Probabilistic Transient-Error Compensation for Linear Digital Systems,"In this paper, a probabilistic compensation technique for minimizing the effect of transient errors is proposed. The focus is to develop a compensation technique for DSP applications in which exact error compensation is not necessary and end-to-end system level performance is degraded minimally as long as the impact of the ""noise"" injected into the system by the transient errors is minimized. The proposed technique, called checksum-based probabilistic compensation, uses real-number checksum codes for error detection and partial compensation. Traditional coding techniques need a code of distance three and relatively complex calculations for perfect error correction. Here, it is shown that a distance-two code can be used to perform probabilistic error compensation in linear systems with the objective of improving the signal-to-noise ratio in the presence of random transient errors. The goal is to have a technique with small power and area overhead and to perform compensation in real time with negligible latency. The proposed technique is comprehensive and can handle errors in the combinational circuitry and storage elements. Comparison against a system with no error correction shows that up to 13-dB SNR improvement is possible. The area, power, and timing overheads of the proposed technique are analyzed.",2009,0, 7768,Fault tolerant DTC for six-phase symmetrical induction machine,"In this paper, a new fault tolerant direct torque control (DTC) algorithm for six-phase induction machines (6PIM) is introduced. The machine presents two sets of three-phase windings spatially shifted by 60 electrical degrees. The aim of the proposed approach consists of computing an average stator voltage vector in order to control the mean values of the stator flux and the electromagnetic torque over a sampling period under an open-phase condition. The main advantages are fixed switching frequency, low torque ripples and reduced line current ripples in comparison with classical DTC. Simulation and experimental results show satisfactory performance and validate the proposed method.",2009,0, 7769,Minimum Mean-Squared Error Estimation of Mel-Frequency Cepstral Coefficients Using a Novel Distortion Model,"In this paper, a new method for statistical estimation of Mel-frequency cepstral coefficients (MFCCs) in noisy speech signals is proposed. Previous research has shown that model-based feature domain enhancement of speech signals for use in robust speech recognition can improve recognition accuracy significantly. These methods, which typically work in the log spectral or cepstral domain, must face the high complexity of distortion models caused by the nonlinear interaction of speech and noise in these domains. In this paper, an additive cepstral distortion model (ACDM) is developed, and used with a minimum mean-squared error (MMSE) estimator for recovery of MFCC features corrupted by additive noise.
The proposed ACDM-MMSE estimation algorithm is evaluated on the Aurora2 database, and is shown to provide significant improvement in word recognition accuracy over the baseline.",2008,0, 7770,The MVA-Volt Index: A Screening Tool for Predicting Fault-Induced Low Voltage Problems on Bulk Transmission Systems,"In this paper, a new method for steady-state screening of buses to identify possible fault-induced delayed-voltage recovery problems is presented. The method is particularly useful for (1) prioritizing substation relaying and breaker maintenance, (2) evaluating impacts of system improvement alternatives, (3) identifying most-effective locations for undervoltage load shedding, and (4) locating voltage (reactive) support equipment. The method is based on a single index value for each bus that is determined by considering the amount of load impacted by a fault at that bus using a weighted summation. The steady-state screening algorithm has been validated using dynamic simulations and has been used to successfully identify as a potentially severe contingency the location of an actual event that occurred on the Southern Electric System in the summer of 1999.",2008,0, 7771,Two-center Radial Basis Function Network For Classification of Soft Faults in Electronic Analog Circuits,"In this paper, a new neural network architecture with two-center radial basis functions (TCRB functions, TCRBF) in the hidden layer is presented. The special shape of the TCRB function is introduced to enhance the efficiency of soft-fault classification in electronic analog circuits. The application of TCRB functions in a neural network classifier makes it possible to reduce the number of neurons in its hidden layer in comparison to a radial basis function network with Gaussian basis functions. Additionally there is an improvement in the testability of the circuit under test (CUT) through decreasing the classification error. This article shows results obtained for a second-order lowpass filter.",2007,0, 7772,Dynamic Behavior Model for Short-Circuit Current of Power System with Capacitor and a Reducing Fault Current Method,"In this paper, a new mode of operation for capacitor compensators is proposed to limit the fault current in power systems in which capacitors are used. In the normal mode of operation, the shunt capacitor banks act as reactive power compensators that deliver reactive power to increase the power factor. When a fault occurs, the capacitor can reduce the peak value of the fault current. A dynamic model for calculating the short-circuit current of a power system with reactive compensators is established by using the equations describing the balanced three-phase power system short-circuit equivalent circuit in the MATLAB/Simulink program. Simulations performed in the MATLAB/Simulink environment indicate that the proposed capacitor compensator scheme performs well in limiting the fault currents of distribution systems and line-voltage drops.",2009,0, 7773,Liftoff Correction Based on the Spatial Spectral Behavior of Eddy-Current Images,"In this paper, a new processing scheme for compensating the liftoff effect is proposed to increase the signal-to-noise ratio of measurements carried out when a duralumin plate with artificially manufactured defects was scanned with eddy-current probes. The mathematical algorithm is based on the spatial frequency behavior of the output signals with the liftoff distance (the distance between the probe used for inspection and the sample) and its equivalent form (deconvolution) in the spatial domain.
Results are presented for measurements performed at three different distances for the same defect of the duralumin plate.",2010,0, 7774,A Scenario of Tolerating Interaction Faults Between Otherwise Correct Systems,"In this paper, a new scenario for tolerating interaction faults is presented. We also address the problem of designing a system capable of tolerating interaction faults generated by the other system. The scenario and other concepts defined in this paper are discussed in more detail. A system is defined as a pair of sub-systems that use a communication standard to interact. Interaction occurs with the exchange of a sequence of messages, each containing a set of data fields. The system that can exhibit faulty behavior is called the external unit. The other system, which is designed to tolerate faults, is the adaptable unit",2006,0, 7775,"Energy efficient, delay sensitive, fault tolerant wireless sensor network for military monitoring","In this paper, a new TDMA based wireless sensor network (WSN) for military monitoring is proposed. The most important design considerations of the newly developed sensor network system are energy consumption, delay, scalability and fault tolerance. There are three main parts of the system: time synchronization based on the sink with a high range transmitter, SyncHRT; a data indicator slot mechanism, DISM; and a new time scheduling mechanism, ft_DTSM. An analytic model has shown that the energy consumption performance of the newly proposed system is better than most of the existing MAC layers for WSN. According to simulation studies, its delay performance is also superior to most of the other WSN systems. Although there may be other architectures which perform better than our system for certain performance metrics, the newly proposed military monitoring system can operate with a good optimization on energy consumption, delay and fault tolerance.",2008,0, 7776,A new EDAC technique against soft errors based on pulse detectors,"In this paper, a new technique is proposed in order to assure the reliability of digital circuits against single event upsets (SEUs) and multi-bit upsets (MBUs), which are a major concern in a radiation environment like space. This proposal reduces the area cost of triple modular redundancy (TMR), offering a similar protection level. In order to show the reliability and the area savings for this technique, it has been implemented using a commercial library to protect registers of different sizes. A software fault injection platform has been used in order to verify the reliability of the proposed technique.",2008,0, 7777,Sonar-based obstacle avoidance using a path correction method for autonomous control of a biped robot for the learning stratification and performance,"In this paper, a simple path correction and obstacle avoidance method with a bipedal robot for the learning stratification and performance, using an ultrasonic sensor and electronic compass sensor, is proposed. This bipedal robot was constructed using the Lego NXT Intelligent Bricks. The proposed method is implemented on an autonomous humanoid robot (the ARSR). One ultrasonic sensor and one electronic compass sensor are installed on the ARSR to detect environmental information including obstacles, the distance to the obstacle, and the directional angle of the robot. Based on the obtained information, an obstacle avoidance and path correction method is proposed to decide the ARSR's behavior so that it can avoid obstacles and move effectively to the destination area.
Obstacle avoidance experiments are carried out to confirm the effectiveness of the proposed method and confirm the learning stratification and performance.",2010,0, 7778,PID controlled synchronous motor for power factor correction,"In this paper, a synchronous motor controlled by a PID based on a PIC 18F452 microcontroller has been studied under three different working conditions using varying excitation currents. Due to the complexity of PID parameters such as integrative and derivative terms, their conversion to digital systems has proven difficult. Hence, the collection of errors in a specified time period has been multiplied by means of a sampling period rather than complex integral algorithms. The difference between the error rate and its previous value has been divided by the sampling period to obtain the derivative operation. Therefore, a PID control algorithm has been embedded into a microcontroller which is easily implemented without complex algorithms. In addition, the design of this study includes an LCD based visual interface, allowing users to instantly monitor the current, the voltage and the power factor of the synchronous motor.",2009,0, 7779,Spatial and temporal adaptive error concealment algorithm for MPEG-2,"In this paper, an adaptive error concealment (AEC) algorithm is presented for MPEG-2 video streams. For the I-pictures in the sequence, the criterion for the decision between spatial and temporal methods is based on a measurement of the image activity. Since the motion vectors (MVs) above and below are very different in magnitude and direction from each other in P- and B-pictures, the proposed error concealment (EC) program is performed in three stages: The first stage is to split a missing MB into two sub-MBs. The second stage is to estimate the coding mode of the two sub-MBs from the mode of the top and bottom MBs of the lost MB. The next stage, based on the estimated coding mode, is to conceal the two sub-MBs separately. The experimental results showed that when the bit error rate (BER) of several video sequences was 1.2×10^-5, the PSNR-values for the proposed AEC method were approximately 2-3 dB over the PSNR-values for the conventional methods.",2004,0, 7780,Methodology of fault injection based on EDA software,"In this paper, an agile fault injection method by means of a discretional fault mode is introduced, in order to overcome the limitations of the conventional hardware-based fault injection method and to offer a new means of validating the testability indexes of a PHM system. After describing the basic principle of this method, the paper discusses universal element and groupware modeling, fault injection, and fault confirmation methods in detail. Finally, by using an audio processing circuit belonging to an airplane, it is shown that this fault injection method is effective and significant for engineering practice.",2010,0, 7781,An efficient Euclidean algorithm for Reed-Solomon code to correct both errors and erasures,"In this paper, an efficient Euclidean decoding algorithm is presented to solve Berlekamp's key equation of a Reed-Solomon (RS) code for correcting erasures as well as errors by replacing the initial condition of the Euclidean algorithm with the erasure locator polynomial and the Forney syndrome polynomial. By this proposed algorithm, the errata locator polynomial and errata evaluator polynomial can be obtained simultaneously without the computations of polynomial division and field element inversion.
Moreover, the whole recursive decoding procedure for solving Berlekamp's key equation can be performed with a fixed number of iterations. In addition, the weights used to reduce the degree of the errata evaluator polynomial at each iteration can be extracted from the coefficient of fixed degree. As a consequence, the complexity of the RS decoder to correct both errors and erasures is reduced substantially. Therefore, this proposed algorithm is more modular, regular and simple for both software and hardware implementation. An example using this proposed algorithm is given for a (255,239) RS code for correcting erasures and errors with s + 2v ≤ 16.",2003,0, 7782,An error resilience scheme on an end-to-end distortion analysis for video transmission over Internet,"In this paper, an end-to-end rate-distortion (R-D) estimation recursive model is proposed, on the basis of the statistical analysis of the spatial and temporal prediction as well as a general error concealment method by the video decoder. It takes the source quantization error, channel error, and the subsequent error spatial-temporal propagation into account. Then a corresponding integrated error resilience scheme, which adopts a global optimal intra-macroblock (MB) refreshment ratio and visual acuity selection procedure, is introduced. Moreover, it employs a combined source-channel bit allocation strategy, which could estimate an instantaneous available source transmission rate under a given time-varying network condition. The network experiments show that this scheme is effective and applicable for a better and more consistent picture quality at the receiver end.",2004,0, 7783,A Run-time Estimate Method of Measurement Error Variance for Kalman Estimator,"In this paper, a method to estimate control system measurement error variance for a Kalman estimator at run-time is proposed. In general, a control system's measurement error variance is measured by some measuring instruments or tuned after implementation of the Kalman estimator, but precise measurement of the control system measurement error variance is difficult. Moreover, even though we could tune the measurement error variance value, it could not deal with changes of system measurement error variance because of the phenomena of aging of system sensors or unpredictable external disturbances. So we introduce a method to estimate the measurement error variance of the system for the Kalman estimator at run-time to improve the performance of the estimator. Also a simple simulation and experimental result will be presented to show the effect of the proposed method.",2007,0, 7784,Polynomial fault diagnosis of linear analog circuits,In this paper a diagnostic method for analog electronic circuits is presented. The approach is based on determination of the variations of circuit functions with the use of higher order sensitivity coefficients. Knowing the values of the sensitivity coefficients, one can formulate multivariate polynomial equations. The solution of the test equations with respect to element deviations results in fault identification. This task can be realized by using Gröbner bases to transform a nonlinear equation into triangular form. It is solved by successive computation of the univariate polynomial equation and back substitution. Solution of the polynomial equation is treated as the eigenvalue problem of the companion matrix by the QR algorithm.
Numerical results are presented to clarify the method and prove its efficiency.,2007,0, 7785,Categorization of minimum error forecasting zones using a geostatistic wind speed model,"In this paper a geostatistic wind speed model is applied to trace a wind speed map, based on data from official measurement weather stations distributed within the region of Andalucia, Spain. Each station's performance is assessed by comparing real measurements to those resulting from the linear interpolation of the rest. Once an error is associated with the station, the error is drawn on a map, in which minimum error zones can be delimited. Frequency and wind speed in each direction are the magnitudes of interest to get a first categorization of wind resources associated with the region. The interest of the method lies in the possibility of forecasting everywhere within the region with an error inside the tolerable margins.",2009,0, 7786,Grid-connected PV system with power-factor correction capability,"In this paper a grid-connected PV system is presented. Its main feature is that, besides injecting energy into the grid, it behaves as an active filter. The reference signal used to modulate the output current is obtained through an adaptive filter. The main advantages of the technique are that the system becomes almost independent of the parameter variations, and that it lends itself to a very simple hardware implementation. A 1 kW prototype was built and tested. The results obtained show good compensation of the current drawn by nonlinear loads.",2002,0, 7787,Model Structures Used in Rotor Defect Identification of a Squirrel Cage Induction Machine,"In this paper a method of detection of broken bars in a squirrel cage induction machine is presented. This method is based on the determination of discrete parameters of the transfer functions of the induction machine by model structures such as ARMAX, ARX, IV and OE",2006,0, 7788,A new low cost fault tolerant solution for mesh based NoCs,"In this paper a new fault tolerant routing algorithm with minimum hardware requirements and extremely high fault tolerance for 2D-mesh based NoCs is proposed. The LCFT (Low Cost Fault Tolerant) algorithm removes the main limitations (forbidden turns) of the well-known XY algorithm. Thus, while preserving deadlock freedom, many new routes are added to the list of selectable paths and a high level of fault tolerance is achieved. All of this is achieved at the cost of only one additional virtual channel (for a total of two). Results show that the LCFT algorithm works well even under severe fault conditions in comparison with previously published methods.",2010,0, 7789,Induction motor fault diagnosis based on analytic wavelet transform via Frequency B-Splines,"In this paper a new methodology of transient motor current signature analysis (TMCSA) is proposed. The approach consists of obtaining a 2D time-frequency plot representing the time-frequency evolution of all the harmonics present in an electric machine transient current. Identifying characteristic patterns in the time-frequency plane, produced by some of the fault-related components, permits the machine diagnosis. Unlike other CWT-based methods, this work uses complex frequency B-spline wavelets. It is shown that these wavelets enable high detail in the time-frequency maps and an efficient filtering in the region neighbouring the main frequency. These characteristics make the identification of the patterns related to the fault components easy.
As an example, the technique has been applied to no-load startup currents of healthy motors and motors with broken bars, showing the capabilities of complex FBS wavelets. The diagnosis has been done via the identification of the upper sideband harmonic.",2009,0, 7790,A new spatial interpolation method for error concealment,"In this paper a new spatial interpolation method is presented to deal with packet loss in transmission of compressed image data. The reconstruction process follows the steps below. Firstly, the edge information of the lost block is coarsely inferred using the intact pixels around the lost block. An NBP (nearest border prior) method is applied to the area with no edge information detected. In areas where an edge is detected, an adaptive directional interpolation is used to restore the lost block. Experiments show that the missing area can be restored with the edge effectively preserved.",2004,0, 7791,A novel correction method for a low cost sensorless control system of IPMSM electrical drives,"In this paper a novel correction method for a sensorless Field Oriented Control System of Brushless internal Permanent Magnet Electrical Drives is introduced, discussed and experimentally validated. The novelty of this control system is that the estimation of rotor speed and angular position is based on the back electromotive force space vector determination, without the aid of voltage probes. Actual voltage signals needed for estimation are replaced with the reference ones given by the current controller. This choice obviously introduces an error that is eliminated by means of a new compensating function or with the aid of data coming from experimental tests. Experimental verifications of the proposed sensorless control system were made with the aid of a flexible test bench for Brushless Motor electrical Drives. The test results presented in the paper show the validity of the proposed low cost sensorless control system and, above all, underline the good performance of the sensorless control system even with such reduced equipment.",2008,0, 7792,Scaling the discrete cosine transformation for fault-tolerant real-time execution,In this paper we examine the scalability of several implementations of the 2-dimensional discrete cosine transformation in the context of image processing. By scaling down the quality of the transformation the required computational complexity also decreases. Using several benchmark images we can show that no significant loss of image quality results from downscaling the computational complexity by up to 60%. This property can be used to switch between different quality levels during the execution of the DCT. A low quality level is used if only a little time remains to finish the computation; otherwise a higher quality level can be used. For a certain execution model we show that this switching between quality levels can be used to meet the real-time demands of the executed image processing application even in the presence of a permanent fault in the execution units.,2009,0, 7793,Bandwidth effect on distance error modeling for indoor geolocation,"In this paper we introduce a model for the distance error measured from the estimated time of arrival (TOA) of the direct path (DP) between the transmitter and the receiver in a typical multipath indoor environment. We use the results of calibrated ray-tracing software in a sample office environment.
First we divide the whole floor plan into LOS and Obstructed LOS (OLOS), and then we model the distance error in each environment considering the variation of the bandwidth of the system. We show that the behavior of the distance error in the LOS environment can be modeled as Gaussian, while the behavior in the OLOS environment is a mixture of Gaussian and exponential distributions. We also relate the statistics of the distributions to the bandwidth of the system.",2003,0, 7794,Making an SCI fabric dynamically fault tolerant,"In this paper we present a method for dynamic fault tolerant routing for SCI networks implemented on Dolphin Interconnect Solutions hardware. By dynamic fault tolerance, we mean that the interconnection network reroutes affected packets around a fault, while the rest of the network is fully functional. To the best of our knowledge this is the first reported case of dynamic fault tolerant routing available on commercial off the shelf interconnection network technology without duplicating hardware resources. The development is focused around a 2-D torus topology, and is compatible with the existing hardware and software stack. We look into the existing mechanisms for routing in SCI. We describe how to make the nodes that detect the faulty component make routing decisions, and what changes are needed in the existing routing to enable support for local rerouting. The new routing algorithm is tested on clusters with real hardware. Our tests show that distributed databases like MySQL can run uninterruptedly while the network reacts to faults. The solution is now part of Dolphin Interconnect Solutions SCI driver, and hardware development to further decrease the reaction time is underway.",2008,0, 7795,Modeling discrete event systems with faults using a rules based modeling formalism,"In this paper we present a methodology which makes the task of modeling failure prone discrete event systems (DESs) considerably less cumbersome, less error prone, and more user-friendly. In order to model failures, we augment the signals set of the rules based formalism proposed by the co-authors of this paper, to include binary valued fault signals, the values representing either a non-faulty or a faulty state of a certain failure type. The rules based modeling formalism is further extended to model real-time systems, and we apply it to model delay-faults of the system as well. The model of a failure prone DES in the rules based formalism can automatically be converted into an equivalent (timed)-automaton model for the analysis in an automaton model framework.",2002,0, 7796,Robust Page Segmentation Based on Smearing and Error Correction Unifying Top-down and Bottom-up Approaches,"In this paper we present a robust multi-pass page segmentation algorithm. The first pass uses a modified smearing algorithm and the second pass performs a hybrid of bottom-up and top-down segmentation on the output of the first pass. Unlike traditional approaches, the bottom-up and top-down steps are based on primitive results of a smearing based page segmentation algorithm. Therefore, ""split"" and ""merge"" processes start with text blocks that are mostly true text blocks but a few of them are either touching or broken.
We present experimental results on newspaper and journal documents from different languages to demonstrate the robustness and language independence of our approach.",2007,0, 7797,Accurate Physical Modeling of Discretization Error in 1-D Perfectly Matched Layers Using Finite-Difference Time-Domain Method,"In this paper we present an accurate physical model of discretization error in a 1-D perfectly matched layer (PML) using the finite-difference time-domain method. The model is based on the concept of the discrete wave impedance of the PML. This concept implies that the wave impedance in the discretized space changes, with respect to the continuous value, when absorption occurs. These changes depend on the absorption per unit length, as well as on the discretization step. In the discretized space, both the magnitude and phase of the wave impedance are modified. We employ numerical simulations obtained using a 1-D code to test the proposed model. We then compare the results with those obtained from a coaxial waveguide geometry using a commercial 3-D software package. One important consequence of this modeling scheme is the feasibility of the PML without return losses due to discretization error. In practice, numerical results show that by correctly adjusting the electromagnetic parameters of the PML (electric permittivity and magnetic permeability), a significant improvement in the reflection characteristics is obtained. In some cases, it could be as much as 78 dB. The remaining return losses are successfully explained as second-order effects related to the discontinuity of electromagnetic parameters at the interface between the simulation space and the PML.",2008,0, 7798,A fault tolerant control system for hexagram inverter motor drive,"In this paper, a fault tolerant control method for a hexagram inverter motor drive is proposed. Due to its unique topology, the hexagram inverter is able to tolerate a certain degree of switch failure with a proper control method. The proposed method consists of fault detection, fault isolation and post-fault control. A simple fault isolation method is to use fuses in the DC links to disable the whole inverter module with a switch failure. When a fault is detected in one inverter module, it is isolated by turning on all switches temporarily to blow out the fuse in the DC link; the gate drive signals for this faulty inverter module and its interconnecting legs are then disabled. After one inverter module is disabled, the hexagram inverter works in the post-fault two-phase mode. The post-fault control algorithm is initiated to control the two remaining output currents in order to maintain smooth torque operation for the motor drive. Simulations and a small-scale PMSM motor drive experiment verified the proposed fault tolerant control system design.",2010,0, 7799,A Fault Diagnosis Model Based on Language-Valued Reasoning,"In this paper, a language-valued variable is introduced into fault diagnosis. By combining global criteria optimization with the maximum membership method, a new model of fault diagnosis is established based on language-valued reasoning. This method provides a resolution to the problem which cannot be resolved by a numeric model.",2009,0, 7800,A realistic fault simulation model for EEPROM memories,"In this paper, a list of faults is injected into an elementary memory array circuit. Electrical simulation results show the impact of each fault on EEPROM cell logical values and threshold voltages.
Results are analyzed and the fault coverage of standard EEPROM test patterns is evaluated.",2009,0, 7801,A low-complexity unequal error protection of H.264/AVC video using adaptive hierarchical QAM,"In this paper, a low-complexity unequal error protection (UEP) of H.264/AVC coded video using adaptive hierarchical quadrature amplitude modulation (HQAM), which takes into consideration the non-uniformly distributed importance of intra-coded frames (I-frames) and predictive-coded frames (P-frames), is proposed. Simulation results show that in terms of average peak signal-to-noise ratio (average PSNR), our proposed UEP scheme outperforms equal error protection (EEP) by up to 5 dB",2006,0, 7802,Compensation of Random and Systematic Timing Errors in Sampling Oscilloscopes,"In this paper, a method of correcting both random and systematic timebase errors using measurements of only two quadrature sinusoids made simultaneously with a waveform of interest is described. The authors estimate the fundamental limits to the procedure due to additive noise and sampler jitter and demonstrate the procedure with some actual measurements",2006,0, 7803,A novel dual-band printed antenna with a defected ground plane for WLAN applications,"In this paper, a miniaturized dual-band printed antenna with a defected ground plane for wireless local area network (WLAN) applications is proposed. The radiating elements of the antenna consist of nesting ellipses with different major axes and a section of strip line. The antenna can achieve dual-band operation (2.4 GHz and 5.2/5.8 GHz). The simulation results show that the proposed antenna satisfies the requirements of WLAN 802.11b/g and 802.11a. The antenna has dimensions of 39.0 mm × 30.0 mm × 0.8 mm. The effects of the key structure parameters on the antenna performance are also analyzed and presented. The proposed antenna is fabricated and measured. The simulated and measured results agree well with each other.",2010,0, 7804,A novel wavelet-based algorithm for discrimination of internal faults from magnetizing inrush currents in power transformers,"In this paper, a new algorithm based on processing differential current is proposed for digital differential protection of power transformers by considering different behaviors of the differential currents under fault and inrush current conditions. In this method, a criterion function is defined in terms of the difference of the amplitude of wavelet coefficients over a specific frequency band. The criterion function is then used for three phases, and internal faults are precisely discriminated from inrush current less than a quarter of a cycle after the disturbance; this is one advantage of the method. Another advantage of the proposed method is that the fault detection algorithm does not depend on the selection of thresholds. The merit of this method is demonstrated by simulation of different faults and switching conditions on a power transformer using PSCAD/EMTDC software. Also the proposed algorithm is tested offline using data collected from a prototype laboratory three-phase power transformer. The test results show that the new algorithm is very quick and accurate",2006,0, 7805,Compact Wireless Devices with Defected-Ground Structures,"In this paper, some investigations on microstrip defected ground structures (DGS) are presented. It is shown that the presence of a slot in the ground plane can substantially enhance the electric coupling, or the electric part of a mixed coupling, between resonators and their external feeds.
The proposed technique can eliminate the very narrow coupling gaps needed for a tight coupling and thus can relax the fabrication tolerances.",2006,0, 7806,Identification of mechanical defects in MEMS using dynamic measurements for application in the production monitoring,In this paper investigations for the non-destructive characterization of MEMS (Micro-Electro-Mechanical Systems) are presented that can be applied in early stages of production monitoring. Different aspects and experimental results are shown for silicon membrane structures with artificial faults. The structures were characterized by their resonant frequencies and associated mode shapes measured via laser-Doppler vibrometry. The consequence of the artificial faults is investigated on the basis of the ratios of measured resonant frequencies and the quantified comparison of mode shapes using the MAC-value. The artificial faults have a significant influence on the dynamic properties which depends on their size. According to the results a systematic approach for the dynamic characterization is derived from the results of the investigations for a possible application in the production monitoring.,2009,0, 7807,A Novel High Impedance Fault Location for Distribution Systems Considering Distributed Generation,"In this paper, a novel high impedance fault detection and location scheme for power distribution feeders with distributed generation is proposed. The proposed scheme is capable of obtaining precise fault location estimations for both linear low impedance and non-linear high impedance faults. This last class of faults represents an important subject for the power distribution utilities because they can be difficult to detect and locate by the protection devices commonly used in today's electric distribution systems. The proposed scheme uses real time data which are processed in a way that the fault detection and location can be estimated by a set of characteristics extracted from the voltage and current signals measured at the substation. This characteristic set is classified by an artificial neural network based scheme whose output results in a fault detection and location. The scheme is based on the calculation of the symmetrical components of the current signal harmonics at the relay point. Other traditional fault detection and location methodologies were also implemented, making it possible to obtain comparative results. The scheme was applied in two simulated feeders. The results of this work show that the proposed methodology is worthy of continued research targeting real time applications",2006,0, 7808,MR-based attenuation correction for a whole-body sequential PET/MR system,"In this paper, an MR-based attenuation correction method was implemented for a clinical whole-body PET/MR system. While awaiting future clinical evaluation, the algorithm seems promising based on preliminary patient data evaluation, in terms of both qualitative PET image quality and quantification accuracy. While the 3-segment MRAC resembles the results from short PET transmission scans, future work will be to implement and validate segmentation of more tissue classes, such as cortical bone. Robustness of MR image truncation compensation and incorporation of flexible coils need to be improved.",2009,0, 7809,On the error-control coding techniques used in GSM/EDGE radio access networks,In this paper the error-control coding techniques used in GSM/EDGE Radio Access Network (GERAN) are considered.
Application of the coding schemes is restricted by the corresponding traffic channels (TCH). Knowing the general scheme is necessary for modeling the work of the complete error-control system. This knowledge can be useful for the implementation of educational software used for the investigation of the properties of different codecs and their characteristics in radio channels using various radio channel models.",2004,0, 7810,Analysis and Implementation of LMS Algorithm with Coding Error in the DSP TMS320C6713,"In this work, we present the analysis and implementation, in a digital signal processor (DSP), of a variant of the least mean square (LMS) algorithm. The modification is based on coding the error of the algorithm in order to reduce the design complexity of its implementation in digital adaptive filters, because the error is made up of integer values. The results demonstrate an increase in the convergence speed, which is indirectly affected by the convergence factor, and a reduction in floating-point operations, which accelerates processing. These results were obtained from the implementation of the algorithm in the TMS320C6713 digital signal processor by Texas Instruments.",2008,0, 7811,Coupled Equilibrium Model of Hybridization Error for the DNA Microarray and Tag-Antitag Systems,"In this work, a detailed coupled equilibrium model is presented for predicting the ensemble average probability of hybridization error per chip-hybridized input strand, providing the first ensemble average method for estimating postannealing microarray/TAT system error rates. Following a detailed presentation of the model and implementation via the software package NucleicPark, under a mismatched statistical zipper model of duplex formation, error response is simulated for both mean-energy and randomly encoded TAT systems versus temperature and input concentration. Limiting expressions and simulated model behavior indicate the occurrence of a transition in hybridization error response, from a logarithmically convex function of temperature for excess inputs (high-error behavior), to a monotonic, log-linear function of temperature for dilute inputs (low-error behavior), a novel result unpredicted by uncoupled equilibrium models. Model scaling behavior for random encodings is investigated versus system size and strand length. Application of the model to TAT system design is also undertaken, via the in silico evolution of a high-fidelity 100-strand TAT system, with an error response improved by nine standard deviations over the performance of the mean random encoding",2007,0, 7812,PD-SOI MOSFETs: interface effect on point defects and doping profiles,"In this work, the influence of the Silicon/Buried Oxide interface (Si/BOX) on the electrical characteristics of silicon-on-insulator (SOI) MOSFETs is investigated by means of numerical simulations. Considering the state-of-the-art dopant diffusion models and the recombining effect of the Si/BOX interface on point defects, process simulations were performed to investigate the two-dimensional diffusion behaviour of the dopant impurities. The impact of the Si/BOX interface is investigated by analyzing the standard electrical characteristics of CMOS devices.
Finally, a new electrical characterization methodology is detailed to better analyze dopant lateral diffusion profiles.",2009,0, 7813,Fault-Tolerance Analysis of a Wireless Sensor Network with Distributed Classification Codes,"In this work, we analyze the performance of a wireless sensor network with distributed classification codes, where independence across sensors, including local observations, local classifications and sensor-fusion link noises, is assumed. Using the large deviations technique, we establish the necessary and sufficient condition under which the minimum Hamming distance fusion error vanishes as the number of sensors tends to infinity. With the necessary and sufficient condition and the upper performance bounds, the relation between the fault-tolerance capability of a distributed classification code and its pair-wise Hamming distances is characterized",2006,0, 7814,Fault Tolerant Data Collection in Heterogeneous Intelligent Monitoring Networks,"In this work, we focus on the problem of fault tolerant data collection in heterogeneous Intelligent Monitoring Networks (IMNs). IMNs are expected to have a wide range of applications in many fields such as forest monitoring, structural monitoring, and industrial plant monitoring. We present our fault tolerant data collection scheme in the hierarchical structure of IMNs. We use an interesting technique borrowed from the popular BitTorrent software to maintain a highly efficient and robust data collection in IMNs with heterogeneous and faulty devices. In our proposed scheme, monitoring sensors are instructed to randomly select some overheard transmissions and process them in data fusion. Our preliminary study confirmed the benefits of the fault tolerant data collection strategy.",2010,0, 7815,Inline automated defect classification: a novel approach to defect management,"In this work, we have developed a specific automated classification scheme based on defect size, pattern density, and optical polarity. According to our defectivity control plan, this scheme has been applied to brightfield inspections across multiple devices, as well as to our 120 nm and 90 nm technologies. The classifiers account for both size and location, while distinguishing between killer and non-killer defects. The defects in array areas have higher kill ratios (KR) than those in logic or open areas. Similarly, extra-pattern defects (bright pixels) have higher KR than particles (dark pixels). After a period of baselining, we were able to set control limits for each class at each layer. Out-of-spec events can be triggered by both total and killer defect densities. As a result, we can now focus our scanning electron microscope (SEM) review primarily on those defects classified by inline automated defect classification (iADC) as killers, thus enabling better utilization of this expensive tool. As the population of SEM-reviewed defects became biased, we re-normalized to the individual iADC bin populations. iADC classification can ultimately be used to prioritize yield activity based on bin-to-defect correlation. Calculating kill ratios per iADC bin by layer provides statistically significant information, unlike total defect densities.
We have shown that iADC can be used for the three main defectivity control activities: excursion monitoring, evaluation of experiments, and yield improvement through yield impact evaluation",2005,0, 7816,Distortion-optimized FEC for unequal error protection in MPEG-4 video delivery,"In this work, we present a distortion-based erasure code for realizing unequal error protection in video delivery services. Prior to transmission, the resulting distortion caused by packet erasure is estimated for each of the video packets. This information is used to strongly protect the video packets with higher distortion values, whereas video packets with very low distortion values are not worth protecting. We present a local search algorithm that delivers near-optimal parity codes which minimize the expected overall distortion at the receiver. Furthermore, we suggest the deployment of our FEC erasure code at video proxies located at the edge of access networks with high packet erasure rates (e.g. mobile networks). This helps detect and correct packet losses efficiently, by tailoring the error protection code characteristics to the observed packet erasure rate of the receiver. Our erasure code is a parity code and relies solely on XOR operations for encoding and decoding; hence, it shows very high efficiency compared to other more sophisticated codes. Simulation results show that our introduced erasure code outperforms the classical Reed-Solomon erasure codes in terms of perceived video quality, while running considerably faster.",2004,0, 7817,Dynamic fault tree analysis based on Petri nets,"In traditional dynamic fault tree analysis, it is necessary to modularize the DFT first so as to obtain static subtrees and dynamic subtrees. Generally, binary decision diagrams (BDD) and Markov chains are utilized in the DFT to process static and dynamic subtrees, respectively. However, due to the possibility of the state combinatorial explosion problem in Markov chains, it is difficult to analyze a system with DFT in some cases. This paper investigates the Petri net method in DFT in order to solve this problem. An example of a processor system is analyzed with the proposed Petri net based DFT, which contains many dynamic logic gates in two classes. The analysis results show that the proposed method can overcome the state combinatorial explosion problem and guarantee high accuracy.",2009,0, 7818,Dynamic Multi-mode Switching Error Concealment Algorithm for H.264/AVC Video Applications,"In video communication based consumer device applications, the compressed video is extremely fragile to transmission errors due to channel noise. Error concealment (EC) is an efficient way to recover a damaged video sequence. The existing EC algorithms for compressed video typically apply a uniform EC mode to the entire video sequence regardless of frame features, or lack sensible mode-switching considerations. This paper proposes a new dynamic multi-mode switching (DMS) EC algorithm. The proposed algorithm provides several EC modes for intra- and inter-frames. The proposed smart decoder is able to adaptively apply these modes in different situations. The mode switching in DMS depends on the spatial/temporal correlation, the estimated edge features of the corrupted macroblock (MB), the smoothness of the reconstructed MB, etc. The proposed DMS algorithm has been evaluated and compared with 3 classical EC algorithms.
The experimental results show that the DMS algorithm can achieve significant gains in peak signal-to-noise ratio (PSNR) and therefore improves the decoded picture quality in video applications.",2008,0, 7819,A fragile watermark error detection scheme for wireless video communications,"In video communications over error-prone channels, compressed video streams are extremely sensitive to bit errors. Often random and burst bit errors impede correct decoding of parts of a received bitstream. Video decoders normally utilize error concealment techniques to repair a damaged decoded frame, but the effectiveness of these error concealment schemes relies heavily on correctly locating errors in the bitstream. In this paper, we propose a fragile watermark-based error detection and localization scheme called ""force even watermarking (FEW)"". A fragile watermark is forced onto quantized DCT coefficients at the encoder. If at the decoder side the watermark is no longer intact, errors exist in the bitstream associated with a particular macro-block (MB). Thanks to the watermark, bitstream errors can accurately be located at MB level, which facilitates proper error concealment. This paper describes the algorithm, model and analysis of the watermarking procedure. Our simulation results show that compared to the syntax-based error detection schemes, the proposed FEW scheme significantly improves the error detection capabilities of the video decoder, while the peak signal-to-noise ratio loss and additional computational costs due to watermark embedding and extraction are small.",2005,0, 7820,Effect of sensing errors on wideband cognitive OFDM radio networks,"In wideband spectrum sensing, multiband joint detection, which jointly detects the signal energy over multiple frequency bands, is efficient in improving the dynamic spectrum utilization and reducing interference to the primary users. In this paper, we investigate the effect of sensing errors due to the time offset between the sensing and decision making on the multiband joint detection technique. By considering the primary user's spectrum usage model, we formulate the optimization problem for the multiband joint detection for the non-cooperative and hard decision cooperative schemes. Numerical results show that sensing errors lead to performance degradation in terms of the aggregate opportunistic throughput and false alarm probability. However, cooperative sensing is effective in improving the performance of the multiband detection technique in the presence of sensing errors.",2010,0, 7821,FT-CoWiseNets: A Fault Tolerance Framework for Wireless Sensor Networks,"In wireless sensor networks (WSNs), faults may occur through malfunctioning hardware, software errors, or external causes such as fire and flood. In business applications where WSNs are applied, failures in essential parts of the sensor network must be efficiently detected and automatically recovered. Current approaches proposed in the literature do not cover all the requirements of a fault tolerant system to be deployed in an enterprise environment and therefore are not suitable for such applications. In this paper, we investigate these solutions and present FT-CoWiseNets, a framework designed to improve the availability of heterogeneous WSNs through an efficient fault tolerance support.
The proposed framework satisfies the requirements and proves to be better suited to business scenarios than the current approaches.",2007,0, 7822,Modeling transformers with internal incipient faults,"Incipient fault detection in transformers can provide early warning of electrical failure and could prevent catastrophic losses. To develop a transformer incipient fault detection technique, a transformer model to simulate internal incipient faults is required. This paper presents a methodology to model internal incipient winding faults in distribution transformers. These models were implemented by combining deteriorating insulation models with an internal short circuit fault model. The internal short circuit fault model was developed using finite element analysis. The deteriorating insulation model, including an aging model and an arcing model connected in parallel, was developed based on the physical behavior of aging insulation and the arcing phenomena occurring when the insulation was severely damaged. The characteristics of the incipient faults from the simulation were compared with those from some potential experimental incipient fault cases. The comparison showed that the experimentally obtained characteristics of the terminal behavior of the faulted transformer were similar to the simulation results from the incipient fault models",2002,0, 7823,Transport layer multihoming for fault tolerance in FCS networks,"In this paper, we document a potential inefficiency in the current SCTP retransmission policy. The current scheme intends to improve the chance of success by exploiting the redundant paths between multihomed endpoints, but we have found that the current SCTP retransmission policy often degrades performance. We comparatively evaluate an alternative retransmission policy and show that the current SCTP retransmission policy unexpectedly performs worse under certain conditions. Our analysis exposes the problem and we discuss four possible solutions.",2003,0, 7824,Rate-distortion analysis of weighted prediction for error resilience,"In this paper, we extend our work in [1] and address an approach to theoretically analyze the rate-distortion (R-D) performance of the weighted prediction feature provided within the scope of H.264/AVC. We consider the weighted prediction as a standard-compatible leaky prediction approach for the purpose of error resilience. We adopt a quantization noise model that explicitly formulates the relationship between the data rate and the distortion in the mean-square-error (MSE) sense. We derive a comprehensive rate-distortion function for both the error-free scenario and the one with error drift. Through adjusting the weight coefficients in H.264/AVC, we also simulate H.264/AVC video streaming over error-prone networks and obtain the operational rate-distortion results using various leaky factors for both error-free and error-drift scenarios. We compare our theoretical results with the operational R-D curves and demonstrate that the theoretical results conform with the operational results.",2008,0, 7825,Bug Classification Using Program Slicing Metrics,"In this paper, we introduce 13 program slicing metrics for C language programs. These metrics use program slice information to measure the size, complexity, coupling, and cohesion properties of programs. Compared with traditional code metrics based on code statements or code structure, program slicing metrics involve measures for program behaviors.
To evaluate the program slicing metrics, we compare them with the Understand for C++ suite of metrics, a set of widely-used traditional code metrics, in a series of bug classification experiments. We used the program slicing and the Understand for C++ metrics computed for 887 revisions of the Apache HTTP project and 76 revisions of the Latex2rtf project to classify source code files or functions as either buggy or bug-free. We then compared their classification prediction accuracy. Program slicing metrics have slightly better performance than the Understand for C++ metrics in classifying buggy/bug-free source code. Program slicing metrics have an overall 82.6% (Apache) and 92% (Latex2rtf) accuracy at the file level, better than the Understand for C++ metrics with an overall 80.4% (Apache) and 88% (Latex2rtf) accuracy. The experiments illustrate that the program slicing metrics have at least the same bug classification performance as the Understand for C++ metrics.",2006,0, 7826,A low-cost SEU fault emulation platform for SRAM-based FPGAs,"In this paper, we introduce a fully automated, low-cost hardware/software platform for efficiently performing fault emulation experiments targeting SEUs in the configuration bits of FPGA devices, without the need for expensive radiation experiments. We propose a method for significantly reducing the fault list by removing the faults on unused LUT bit positions. We also target the design flip-flops found in the configurable logic blocks (CLBs) inside the FPGA. Run-time reconfigurability of Virtex devices using JBits is exploited to provide the means not only for fault injection but also for fault detection. First, we consider five possible application scenarios for evaluating different self-test schemes. Then, we apply the least favorable and most time-consuming of these scenarios on two 32×32 multiplier designs, demonstrating that transferring the simulation processing workload to FPGA hardware can accelerate simulation time by more than two orders of magnitude",2006,0, 7827,An adaptive and reliable data-path determination for Fault-Tolerant Ethernet using heartbeat mechanism,"In this paper, we introduce an implementation of the heartbeat mechanism to determine the route (called data-path) for communication between two nodes in a subnet. The data-path is adaptive and reliable in order to keep data flowing in the network continuously even if there are faults in the network. This is a utility for Fault-Tolerant Ethernet (FTE) implementation. Our implementation has been designed and developed as an essential module in a new fault-tolerant, large-scale network scheme called scalable autonomous fault-tolerant Ethernet (SAFE). The SAFE scheme has been developed for use in the combat system data network (CSDN). Our module not only detects network faults but also provides the information needed for the fault recovery process. The implementation has also been validated through experiments in various network failure scenarios.",2010,0, 7828,Design of Resource Space Model in Fault Diagnosis Knowledge of Rotating Machinery,"In this paper, we introduce the resource space model (RSM), which is a novel semantic data model, to semantically store and manage information. Then we use these methods to classify some rotating machinery fault diagnosis knowledge semantically and construct the RSM of part of the rotating machinery fault diagnosis knowledge.
In the end, we briefly analyze the model's semantic characteristics in searching and management.",2008,0, 7829,Fault Analysis in OSS Based on Program Slicing Metrics,"In this paper, we investigate the barcode OSS using two of Weiser's original slice-based metrics (tightness and overlap) as a basis, complemented with fault data extracted from multiple versions of the same system. We compared the values of the metrics in functions with at least one reported fault with those in fault-free functions to determine a) whether significant differences in the two metrics would be observed and b) whether those metrics might allow prediction of faulty functions. Results revealed some interesting traits of the tightness metric and, in particular, how low values of that metric seemed to indicate fault-prone functions. A significant difference was found between the tightness metric values for faulty functions when compared to fault-free functions, suggesting that tightness is the 'better' of the two metrics in this sense. The overlap metric seemed less sensitive to differences between the two types of function.",2009,0, 7830,Automatic detection and diagnosis of faults in generated code for procedure calls,"In this paper, we present a compiler testing technique that closes the gap between existing compiler implementations and correct compilers. Using formal specifications of procedure-calling conventions, we have built a target-sensitive test suite generator that builds test cases for a specific aspect of compiler code generators: the procedure-calling sequence generator. By exercising compilers with these specification-derived target-specific test suites, our automated testing tool has exposed bugs in every compiler tested on the MIPS and one compiler on the SPARC. These compilers include some that have been in heavy use for many years. Once a fault has been detected, the system can often suggest the nature of the problem. The testing system is an invaluable tool for detecting, isolating, and correcting faults in today's compilers.",2003,0, 7831,Byzantine Fault Tolerant Coordination for Web Services Business Activities,"In this paper, we present a comprehensive study on the threats towards the coordination services for Web services business activities and explore the optimal solution to mitigate such threats. A careful analysis of the state model of the coordination services reveals that it is sufficient to use a lightweight Byzantine fault tolerance algorithm that avoids performing expensive total ordering of all request messages. The algorithm and the associated mechanisms have been incorporated into an open-source framework implementing the standard Web services business activity specification and an extension protocol that enables the separation of the coordination tasks from the business logic. The performance evaluation results obtained using the working prototype confirm the optimality of our solution.",2008,0, 7832,Algorithms for compacting error traces,"In this paper, we present a concept of compacting the error traces generated by pseudo-random/random simulations. The new, shorter error trace not only decreases the time of the user's debugging process but also reduces the simulation time required to verify the bug fixes. Two algorithms, CET1 and CET2, are presented to perform the task of compacting the error trace. Both algorithms first use an efficient approach to eliminate the redundant states, to generate the unique states of the error trace.
Then, CET1 builds the connected graph of these unique states by computing, for each unique state, the states reachable in one cycle, and then applies Dijkstra's shortest path algorithm to find the shortest error trace in the connected graph. Compared with CET1, CET2 computes the states reachable in one cycle for those unique states only when they are needed in Dijkstra's shortest path algorithm to find the shortest error trace. After finding the shorter trace, the corresponding input/output test vectors are generated. The experimental results show that both algorithms can reduce the length of error traces dramatically for most cases using reasonable memory. For cases requiring longer CPU time to find the shortest trace, CET2 is up to 37 times faster than CET1.",2003,0, 7833,Error-Resilient Video Encoding and Transmission in Multirate Wireless LANs,"In this paper, we present a cross-layer approach for video transmission in wireless LANs that employs joint source and application-layer channel coding, together with rate adaptation at the wireless physical layer (PHY). While the purpose of adopting PHY rate adaptation in modern wireless LANs like IEEE 802.11a/b is to maximize the throughput, in this paper we exploit this feature to increase the robustness of wireless video. More specifically, we investigate the impact of adapting the PHY transmission rate, thus changing the throughput and packet loss channel characteristics, on the rate-distortion performance of a transmitted video sequence. To evaluate the video quality at the decoder, we develop a cross-layer modeling framework that jointly considers the effect of application-layer joint source-channel coding (JSCC), error concealment, and the PHY transmission rate. The resulting models are used by an optimization algorithm that calculates the optimal JSCC allocation for each video frame, and the PHY transmission rate for each outgoing transport packet. The comprehensive simulation results obtained with the H.264/AVC codec demonstrate a considerable increase in the PSNR of the decoded video when compared with a system that employs JSCC and PHY rate adaptation separately. Furthermore, our performance analysis indicates that the optimal PHY transmission rate calculated by the proposed algorithm can be significantly different when compared with rate adaptation algorithms that target throughput improvement.",2008,0, 7834,Design-space exploration of fault-tolerant building blocks for large-scale quantum computing,"In this paper, we present a design methodology for quantifying the role each building component of a logical fault-tolerant building block for quantum computers plays in the performance of the logical block. A logical building block is the set of operations necessary to execute a fault-tolerant circuit structure in quantum programs, such as the network of operations implementing a logical quantum bit. By analyzing the interaction between the algorithmic structure of a building block and the number of lower-level elements where faults are likely to occur, we can quantify the sensitivity of logical building blocks to two things: (1) changes in the failure rates of the lower-level elements comprising a proposed microarchitecture model, which are defined as logic gates, memory mechanisms, and data communication mechanisms; and (2) transformation of the program structure for each building block through compilation techniques.
We further show how this information can be used to develop optimized building blocks by inserting the gathered design constraints in our compilation mechanisms.",2007,0, 7835,A self-tuning DVS processor using delay-error detection and correction,"In this paper, we present a dynamic voltage scaling (DVS) technique called Razor which incorporates an in situ error detection and correction mechanism to recover from timing errors. We also present the implementation details and silicon measurement results of a 64-bit processor fabricated in 0.18-μm technology that uses Razor for supply voltage control. Traditional DVS techniques require significant voltage safety margins to guarantee computational correctness at the worst case combination of process, voltage and temperature conditions, leading to a loss in energy efficiency. In Razor-based DVS, however, the supply voltage is automatically reduced to the point of first failure using the error detection and correction mechanism, thereby eliminating safety margins while still ensuring correct operation. In addition, the supply voltage can be intentionally scaled below the point of first failure of the processor to achieve an optimal tradeoff between energy savings from further voltage reduction and energy overhead from increased error detection and correction activity. We tested and measured savings due to Razor DVS for 33 different dies and obtained an average energy savings of 50% over worst case operating conditions by scaling supply voltage to achieve a 0.1% targeted error rate, at a fixed frequency of 120 MHz.",2006,0, 7836,Fault Dictionary Based Scan Chain Failure Diagnosis,"In this paper, we present a fault dictionary based scan chain failure diagnosis technique. We first describe a technique to create small dictionaries for scan chain faults by storing differential signatures. Based on the differential signatures stored in a fault dictionary, we can quickly identify a single stuck-at fault or timing fault in a faulty chain. We further develop a novel technique to diagnose some multiple stuck-at faults in a single scan chain. Compared with the fault simulation based diagnosis technique, the proposed fault dictionary based diagnosis technique is up to 130 times faster with the same level of diagnosis accuracy and resolution.",2007,0, 7837,SAFE: Scalable Autonomous Fault-tolerant Ethernet,"In this paper, we present a new fault-tolerant Ethernet scheme called SAFE (scalable autonomous fault-tolerant Ethernet). The SAFE scheme is based on a software approach which takes place in layers 2 and 3 of the OSIRM. The goal of SAFE is to provide scalability and autonomous fault detection and recovery. SAFE divides a network into several subnets and limits the number of nodes in a subnet. The network can be extended by adding additional subnets. All nodes in a subnet automatically detect faults and perform fail-over by sending and receiving Ethernet-based heartbeats to each other. For inter-subnet fault recovery, SAFE manages master nodes in each subnet. Master nodes communicate with each other using IP packets to exchange the subnet status. We also propose a master election algorithm to recover from master node faults automatically.
The proposed SAFE scheme performs efficiently for large-scale networks and provides fast and autonomous fault recovery.",2009,0, 7838,Error-Resilient Video Transmission for Short-Range Point-to-Point Wireless Communication,"In this paper, we present a novel error resiliency approach for short-range point-to-point wireless video transmission systems that have particularly stringent delay constraints. We propose to utilize per-packet feedback information that can be generated and transmitted instantaneously from the receiver to the transmitter in such systems to help control the transmission errors. A framework for error resilient video transmission under low-latency constraints is presented, which incorporates the instantaneous feedback into the video encoder to stop error propagation, as well as into a channel adaptive retransmission scheme at the transmitter to reduce the residual packet loss rate. The channel adaptive retransmission scheme is integrated into the system without introducing any additional delay, which is achieved by dynamically adjusting the resource allocation between video source coding and retransmission, where different resource allocation strategies are applied for different channel conditions. The performance of the proposed framework is evaluated in a real-time video transmission system, showing significant video quality improvements for a wide range of channel conditions.",2010,0, 7839,REST-Based SOA Application in the Cloud: A Text Correction Service Case Study,"In this paper, we present a REST-based SOA system, Set It Right (SIR), where people can get feedback on and help with short texts. The rapid development of the SIR system, enabled by designing it as a set of services, and also leveraging commercially offered services, illustrates the strength of the SOA paradigm. Finally, we evaluate the Cloud Computing techniques and infrastructures used to deploy the system and how cloud technology can help shorten the time to market and lower the initial costs.",2010,0, 7840,A Two-Pass Diagnosis Method for Accurately Locating Logic Faults in VLSI Circuits,"In this paper, we present a two-pass approach for digital circuit fault diagnosis. By using the two-pass diagnosis method, we can significantly improve the logic fault diagnostic resolution while keeping the test cost manageable. We performed fault diagnostic simulations on dozens of ISCAS89, ITC99, and other benchmark circuits and are able to narrow down the list of likely faults to a single fault or multiple equivalent faults that are indistinguishable from each other.",2007,0, 7841,Adaptive Error-Resilience Transcoding and Fairness Grouping for Video Multicast Over Wireless Networks,"In this paper, we present a two-pass intra-refresh transcoder for enhancing, on the fly, the error resilience of a compressed video in a three-tier streaming system. Furthermore, we consider the problem of multicasting a video to multiple clients with diverse channel conditions. We propose a MINMAX loss rate estimation scheme to determine a single intra-refresh rate for all the clients in a multicast group. For the scenario in which a quality variation constraint is imposed on the users, we also propose a grouping method to partition a multicast group of heterogeneous users into a minimal number of sub-groups to minimize the channel bandwidth consumption while meeting the quality variation constraint and achieving fairness among all sub-groups.
Experimental results show that the proposed method can effectively mitigate the error propagation due to packet loss as well as achieve fairness not only among all sub-groups but also among clients in a multicast group.",2007,0, 7842,Multi-class unsupervised classification with label correction of HRCT lung images,"In this paper, we present an automated texture based unsupervised system for the classification of lung high resolution computed tomography findings in emphysema, ground-glass opacity, honeycombing and bronchiectasis. The classification techniques used in our study are based on cluster analysis of textural features. Variations of traditional K-means clustering are applied in the HRCT setting. A novel technique called label correction capable of segmenting ""true"" labelled pixel groups within the regions outlined by domain experts is presented. Label correction helps to ""clean"" the training data before supervised learning, and also provides more accurate evaluation on the testing data. The system was tested on 321 HRCT scans comprising varying diseases together with normal scans and successfully evaluated on the scans manually labelled by the doctors. In addition, the image segmentation results were visually validated by the radiologists.",2004,0, 7843,Failure analysis of a fault-tolerant 2-node server system,"In this paper, we present an integrated model of hardware and software failures of a fault-tolerant 2-node server system used in a real-life application of an archive system. Each node runs a distinct component of the server application software and identical copies of a fault monitoring service. The fault monitoring service on each node monitors the status of its local application software as well as the availability of the hardware and software on the other node. Upon a node failure, the fault monitoring service on the good node transfers the application software on the failed node to the good node. Upon the failure of an application software component or fault monitoring service, an automatic restoration is performed by the available fault monitoring service. The failed nodes are restored on a first-come, first-served basis by a single repair facility. The failure and restoration processes of the hardware and software are highly dependent on the status of other components as well as the sequence of failure events. Therefore, we employ a decomposition method that uses both combinatorial analysis as well as Markov-based state space analysis to solve the problem. The proposed method allows us to extend the analysis easily for the cases of multiple nodes, software components, and different repair policies",2006,0, 7844,A Lightweight Fault Tolerance Framework for Web Services,"In this paper, we present the design and implementation of a lightweight fault tolerance framework for Web services. With our framework, a Web service can be rendered fault tolerant by replicating it across several nodes. A consensus-based algorithm is used to ensure total ordering of the requests to the replicated Web service, and to ensure a consistent membership view among the replicas. The framework is built by extending an open-source implementation of the WS-ReliableMessaging specification, and all reliable message exchanges in our framework conform to the specification. As such, our framework does not depend on any proprietary messaging and transport protocols, which is consistent with the Web services design principles.
Our performance evaluation shows that our implementation is nearly optimal and the framework incurs only moderate runtime overhead.",2007,0, 7845,Multiple disease (fault) diagnosis with applications to the QMR-DT problem,"In this paper, we present three classes of computationally efficient algorithms that can handle cases with hundreds of positive findings in the QMR-DT (Quick Medical Reference, Decision-Theoretic) network. These include the Lagrangian Relaxation Algorithm (LRA), the Primal Heuristic Algorithm (PHA), and the Approximate Belief Revision Algorithm (ABR). These algorithms solve the QMR-DT problem by finding the most likely set of diseases given the findings. Extensive computational experiments have shown that LRA obtains the best solutions among the three proposed algorithms within a relatively small processing time. We also show that the Variational Probabilistic Inference method is a special case of our LRA. The solutions are generic and have application to multiple fault diagnosis in complex industrial systems.",2003,0, 7846,Design and evaluation of a fault-tolerant adaptive router for parallel computers,"In this paper, we propose a design methodology for fault-tolerant adaptive routers for parallel and distributed computers. The key idea of our method is integrating minimal and non-minimal routing that is supported by independent virtual channels (VCs). Distinguishing the routing functions for each set of VCs simplifies the design of fault-tolerant algorithms. After describing the method, we show an application of a routing algorithm for two-dimensional mesh and torus networks. This algorithm, called Detour-NF, supports three routing modes: deterministic, minimal fully adaptive and non-minimal fault-tolerant operations. We also discuss the hardware cost and operational speed of minimal and non-minimal routers based on our design, which uses hardware description language (HDL). Communication performance and fault-tolerance are demonstrated by an HDL simulation. The experimental results show that supporting both minimal and non-minimal routing modes is advantageous for high-bandwidth and low-latency communication, as well as fault-tolerance.",2003,0, 7847,DYFARS: Boosting Reliability in Fault-Tolerant Heterogeneous Distributed Systems Through Dynamic Scheduling,"In this paper, we propose a dynamic and reliability-driven real-time fault-tolerant scheduling algorithm for heterogeneous distributed systems (DYFARS). A primary-backup copy scheme is leveraged by DYFARS to tolerate both hardware and software failures. Most importantly, DYFARS employs reliability costs as its main objective to dynamically schedule independent, non-preemptive aperiodic tasks; therefore, system reliability is enhanced without additional hardware costs. A salient difference between our DYFARS and existing scheduling approaches is that DYFARS considers backup copies in both active and passive forms; therefore, DYFARS is more flexible than the existing scheduling schemes in the literature. Finally, simulation experiments are carried out to compare DYFARS with an existing similar algorithm; the experimental results show that DYFARS is superior to the existing algorithm regarding both schedulability and reliability.",2007,0, 7848,Stability and performance of the stochastic fault tolerant control systems,"In this paper, the stability and performance of the fault tolerant control system (FTCS) are studied.
The analysis is based on a stochastic framework of integrated FTCS, in which the system component failure and the fault detection and isolation (FDI) scheme are characterized by two Markovian parameters. In addition, the model uncertainties and noise/disturbance are treated in the same framework. The sufficient conditions for stochastic stability and the system performance using a stochastic integral quadratic constraint are developed. A simulation study on an example system is performed with illustrative results obtained.",2003,0, 7849,Integrating fault-tolerant feature into TOPAS parallel programming environment for distributed systems,"In this paper, TOPAS, a new parallel programming environment for distributed systems, is presented. TOPAS automatically analyzes data dependence among tasks and synchronizes data, which reduces the time needed for parallel program development. TOPAS also provides support for scheduling, dynamic load balancing and fault tolerance. Experiments show the simplicity and efficiency of parallel programming in the TOPAS environment with fault-tolerant integration, which provides graceful performance degradation and quick reconfiguration time for application recovery.",2002,0, 7850,Tunable bandstop filters using defected ground structure with active devices,"In this paper, tunable bandstop filters using defected ground structure (DGS) sections are presented. The DGS section is obtained by etching the conventional ground plane. To adjust the stopband, voltage controlled variable capacitor diodes (VVC) are mounted on the DGS section. The center frequency of the filter is varied from 1.44 GHz to 3.19 GHz as the reverse bias voltage is changed from 0 V to 20 V. The stopband of the proposed filter has a tunable range of 76%. Using more than two DGS units, deeper suppression and wider tunable bandwidth are yielded. Good agreement between the simulated and measured results is obtained.",2005,0, 7851,Towards On-line Adaptation of Fault Tolerance Mechanisms,"In this paper, we address the crucial issue of online software adaptation: how to determine if the system is in an adaptable state? To solve this issue, we advocate the use of both Component-Based Software Engineering (CBSE) and reflective technologies. Such technologies enable a metamodel of the software architecture to be established to represent both structural and behavioral aspects. Based on some requirements expressed by the software designer, and using online animation of the software model (CBSE metamodels and Petri Nets), we propose an approach to 1) decide when the system (or a sub-system) is adaptable, and 2) guide the system towards an adaptable state. Finally, this approach is applied to the online adaptation of replication mechanisms on a small case study.",2010,0, 7852,Quality of IT service delivery: Analysis and framework for human error prevention,"In this paper, we address the problem of reducing the occurrence of human errors that cause service interruptions in IT Service Support and Delivery operations. Analysis of a large volume of service interruption records revealed that more than 21% of interruptions were caused by human error. We focus on Change Management, the process with the largest risk of human error, and identify the main instances of human errors as the 4 Wrongs: request, time, configuration item, and command. Analysis of change records revealed that human-error prevention by partial automation is highly relevant.
We propose the HEP Framework, a framework for execution of IT Service Delivery operations that reduces human error by addressing the 4 Wrongs using content integration, contextualization of operation patterns, partial automation of command execution, and controlled access to resources.",2010,0, 7853,On the throughput of multicasting with incremental forward error correction,"In this paper, we consider a multicasting model that uses incremental forward error correction (FEC). In this model, there is one sender and rn receivers. The sender uses an ideal (n,n(1-p),np) FEC code to code a group of n(1-p) data packets with additional np redundant packets so that any set of n(1-p) packets received by a receiver can be used to recover the original n(1-p) data packets. Packets to the receivers are lost independently with probability q. For this model, we prove several strong laws of large numbers for the asymptotic throughput as n → ∞. The asymptotic throughput is characterized by the unique solution of an equation in terms of p, q, and r. These strong laws not only provide theoretical justification for several important observations made in the literature, but also provide insights that might have an impact on the future design of multicasting protocols.",2005,0, 7854,Provisioning fault-tolerant scheduled lightpath demands in WDM mesh networks,"In this paper, we consider the problem of routing and wavelength assignment (RWA) of fault-tolerant scheduled lightpath demands (FSLDs) in all-optical wavelength division multiplexing (WDM) networks under single component failure. In scheduled traffic demands, besides the source, destination, and the number of lightpath demands between a node-pair, their set-up and tear-down times are known. In this paper, we develop integer linear programming (ILP) formulations for dedicated and shared scheduled end-to-end protection schemes under single link/node failure for scheduled traffic demands with two different objective functions: 1) minimize the total capacity required for a given traffic demand while providing 100% protection for all connections; and 2) given a certain capacity, maximize the number of demands accepted while providing 100% protection for accepted connections. The ILP solutions schedule both the primary and end-to-end protection routes and assign wavelengths for the duration of the traffic demands. As the time disjointness that could exist among fault-tolerant scheduled lightpath demands is captured in our formulations, it reduces the amount of global resources required. The numerical results obtained from CPLEX indicate that dedicated scheduled (with set-up and tear-down times) protection provides significant savings (up to 33%) in capacity utilization over dedicated conventional (without set-up and tear-down times) end-to-end protection schemes; shared scheduled protection provides considerable savings (up to 21%) in capacity utilization over shared conventional end-to-end protection schemes.
Also, the numerical results indicate that shared scheduled protection achieves the best performance, followed by the dedicated scheduled protection scheme and shared conventional end-to-end protection, in terms of the number of requests accepted for a given network capacity.",2004,0, 7855,Closed-form expressions for symbol error rates of MIMO-MRC with channel estimation error,"In this paper, we derive closed-form expressions for the average symbol error rate of M-ary signalling for MIMO-MRC (MRT) systems over Rayleigh fading channels, taking into account channel estimation errors. Following the moment generating function (MGF) approach, we formulate the final expressions in the form of hypergeometric functions. The expressions are valid for an arbitrary number of antennas and any modulation level. The MRT system suffers significant performance degradation due to Gaussian estimation errors. An evenly distributed antenna system offers better trade-offs.",2009,0, 7856,A Safety Analysis Method Using Fault Tree Analysis and Petri Nets,"In this paper, we describe a safety analysis method that utilizes two models, namely, Petri nets to model the behavioral aspects of a system, and fault tree analysis to model failure and hence unacceptable behaviors of a system. Using Petri nets and fault tree analysis, we should be able to perform both forward and backward reachability analyses that are related to acceptable and unacceptable behaviors of a system. To show the feasibility of our proposed method, a case study of a railroad crossing system has been conducted.",2009,0, 7857,"LBG-SQUARE: Fault-Tolerant, Locality-Aware Co-Allocation in P2P Grids","In this paper, the deployment and execution of iterative stencil applications on a P2P grid middleware are investigated. So-called iterative stencil applications are composed of sets of heavily-communicating, long-running tasks. They thus require co-allocation of multiple reliable resources for extended periods of time. P2P grids are totally decentralized and provide on-demand, transparent access to edge resources, e.g. Internet-connected, non-dedicated desktop computers. A P2P grid has the potential to provide access to a large number of resources at a fraction of the cost of a dedicated cluster. However, edge resources are heterogeneous in performance and intrinsically unreliable: task execution failures are common due to resource preemption or resource failure. Furthermore, P2P grid schedulers usually target sets of independent computational tasks, i.e. so-called Bags of Tasks applications. It is therefore not trivial to deploy and run an iterative stencil application on a P2P grid. Checkpointing is a common fault-tolerance mechanism in high performance distributed computing, often based on a centralized architecture. Locality-aware co-allocation in P2P grids has been recently investigated. Checkpointing and locality-aware co-allocation have yet to be integrated in P2P grids. We propose to provide co-allocation through an existing middleware-level Bag of Tasks scheduling mechanism. We also introduce a layer of fault-tolerance for the iterative stencils that relies on a scalable, application-level, P2P checkpointing mechanism. Finally, LBG-SQUARE is described.
This software results from the combination of a specific Iterative Stencil application (a computational fluid dynamics simulation software called LaBoGrid) with a P2P grid middleware (Lightweight Bartering Grid).",2008,0, 7858,Fault tolerant strategies under open phase fault for doubly salient electro-magnet motor drives,"In this paper, a fault tolerant system for doubly salient electro-magnet motor (DSEM) drives has been proposed to maintain the control performance under open-phase faults of the inverter. The proposed fault tolerant system provides compensation for open-circuit faults in the power inverter. The fault identification is quickly achieved from the phase current and voltage changes between the lower legs and the middle point of the DC-link. After fault identification, the drive system is reconstructed into the FSTP (four-switch three-phase) topology by connecting the faulty leg to the middle point of the DC-link using bidirectional switches. The fault tolerant system quickly recovers the control performance owing to the short detection time. Experiments confirm the feasibility of the proposed fault tolerant system.",2007,0, 7859,Forward link capacity evaluation for W-CDMA with amplitude limiter and forward error correction,"In this paper, the influence of the amplitude limiter of combined multiuser signals in the base station of the wideband code division multiple access (W-CDMA) system is described. The transmission performance under Rayleigh fading and interference from an adjacent 6-cell environment is evaluated by computer simulation, performed in MATLAB. The relationship between limiter level and capacity degradation is clarified. Furthermore, it is shown that the introduction of the limiter is effective in reducing the dynamic range of the power amplifier. The effect of forward error correction (FEC) is also described. The results show that it is effective to use FEC in compensating for the degradation caused by the amplitude limiter.",2002,0, 7860,Faults diagnosis methodology for the WaferNet interconnection network,"In this paper, the interconnection network (WaferNet), which is part of an active and reconfigurable prototyping board named WaferBoard, is analyzed to derive efficient defect diagnosis. The WaferNet structure spans an entire silicon wafer that inevitably contains defects, due to the nature of the microfabrication process, and defect management strategies are inserted in the design flow. Defects must be accurately located to efficiently reconfigure the circuit around them. Key differences between a conventional printed circuit board and WaferNet justify the proposed diagnosis methodology. A sequential walking-one algorithm and a broadcast algorithm are proposed to locate shorts or stuck-at faults in the network. It is shown that dedicated hardware architectures must be integrated in the network to locate those defects in a reasonable time. Analysis shows that the proposed diagnosis time complexity is O(n²), where n is the number of cells in the matrix. An upper bound time limit is calculated that depends on both the size and the number of faults in the circuits.",2009,0, 7861,"Hardware and software realization of time error measurements with real-time assessment of ADEV, TDEV, and MTIE","In this paper, the measuring system SP-4000, which allows for the analysis of timing signals in telecommunication networks, is described briefly.
The solutions that enable the cooperation between the time error meters of this system and the real-time assessment of the parameters are described. Because of the limitations of computer technology, the user can choose real-time computation for only a single channel. The current version does not allow simultaneous traditional computations and real-time computations. We expect that in the future such computations will be possible for channels that are not chosen for real-time assessment of ADEV, TDEV, and MTIE. We also plan to increase the number of channels capable of real-time computations.",2010,0, 7862,Robust fault detection for LPV systems using a consistency-based state estimation approach and zonotopes,"In this paper, a consistency-based state estimation approach to robust fault detection for Linear Parameter Varying (LPV) systems is proposed. Robustness is addressed by considering process/measurement noises and parametric uncertainties following a set-membership approach. A set-membership state estimator based on propagating the uncertainty using zonotopes is proposed as a means to check the consistency between the model and the measurements. Modeling uncertainty is represented by bounding the model parameters in intervals. Process and measurement noise are also considered unknown but bounded. Finally, an example based on the Twin Rotor MIMO System (TRMS) is used to validate the proposed approach.",2009,0, 7863,Detection and Diagnosis of Static Scan Cell Internal Defect,"In this paper, we study the impact, detection and diagnosis of a defect inside a scan cell, which is called a scan cell internal defect. We first use SPICE simulation to understand how a scan cell internal defect impacts the operation of a single scan cell. To study the detectability and diagnosability of a scan cell internal defect in a production test environment, we inject scan cell internal defects into a scan-based industrial design and perform fault simulation by using production scan test patterns. Next, we evaluate how effectively an existing scan chain diagnosis technique based on traditional fault models can diagnose scan cell internal defects. We finally propose a new diagnosis algorithm to improve the scan cell internal defect diagnostic resolution using the scan cell internal fault model. Experimental results show the effectiveness of the proposed scan cell internal fault diagnosis technique.",2008,0, 7864,A Lower Bound on the Probability of Undetected Error for Binary Constant Weight Codes,"In this paper, we study the probability of undetected error for binary constant weight codes (BCWCs). First, we derive a new lower bound on the probability of undetected error. Next, we show that this bound is tight if and only if the BCWCs are generated from certain t-designs. This means that such BCWCs are uniformly optimal for error detection. Thus, we prove a conjecture of Xia, Fu, Jiang and Ling. Furthermore, we determine the distance distributions of such BCWCs. Finally, we derive some bounds on the exponent of the probability of undetected error for BCWCs. These bounds enable us to extend the region in which the exponent of the probability of undetected error is exactly determined",2006,0, 7865,Fault detection in a mixed H2/H∞ setting,"In this paper, we study the problem of fault detection in linear time-invariant systems, which contain two types of inputs: inputs with fixed and known spectral densities and inputs with bounded power.
It is shown that this problem can be formulated as a mixed H2/H∞ filtering problem where the filter gain is computed by solving a pair of coupled Riccati equations. Alternatively, it is shown that the filter gain may be computed by transforming the problem into a convex optimization problem with a set of linear matrix inequality (LMI) constraints. The application of our results is illustrated by an example.",2003,0, 7866,MPEG-2 to WMV Transcoder With Adaptive Error Compensation and Dynamic Switches,"In this paper, we study the problem of video transcoding from MPEG-2 to Windows Media Video (WMV) format, together with several desired functionalities such as bit-rate reduction and spatial resolution downscaling. Based on in-depth analysis of error propagation behavior, we propose two architectures (for different typical application scenarios) that are unique in their complexity scalability and adaptive drifting error control, which in turn provide a mechanism to achieve a desired tradeoff between complexity and quality. We perform extensive experiments for various design targets such as complexity, scalability, performance tradeoff, and drifting control effect. The proposed transcoding architectures can be straightforwardly applied to MPEG-2 to MPEG-4 transcoding applications due to the significant overlap between the MPEG-4 and WMV coding technology",2006,0, 7867,Simulation and diagnosis in the stator fault of induction motors,"In this paper, the stator input voltages of the motor under a stator fault are deduced, based on the mathematical model of the induction motor in the reference frame. The stator-fault three-phase induction motor model is established and simulated to obtain the stator fault currents. Filters are used to remove the 50 Hz fundamental component of the stator fault currents, and a spectral analysis of these currents is then performed. The simulation results show a clear (1/np)f1 harmonic component of the stator fault current. Based on these simulation results, the motor stator fault can be accurately diagnosed. This diagnostic method is of great help in predicting and identifying the stator fault.",2010,0, 7868,Turkish Word Error Detection Using Syllable Bigram Statistics,"In this study, we have designed and implemented a system which uses an n-gram statistical language model in order to facilitate optical character recognition, speech synthesis and recognition systems. First, the syllable bigram frequencies are extracted from Turkish corpora. Then, a test database including correctly and wrongly written words is created. The probability that the words appear in the given text is calculated, and the wrongly and correctly written words are determined. With the proposed approach, the system finds the wrongly written words with about 86.13% accuracy, and the correctly written words are found with about 88.32%",2006,0, 7869,A fully 3D iterative image reconstruction algorithm incorporating data corrections,"In this study, we implemented a fully 3D maximum likelihood ordered subsets expectation maximization (ML-OSEM) reconstruction algorithm with two methods for correction of random and scatter coincidences: (a) measured data were pre-corrected for randoms and scatter, and (b) corrections were incorporated into the iterative algorithm. In 3D PET acquisitions, the random and scatter coincidences constitute a significant fraction of the measured coincidences. ML-OSEM reconstruction algorithms make assumptions of Poisson distributed data.
Pre-corrections for random and scatter coincidences result in deviations from that assumption, potentially leading to increased noise and inconsistent convergence. Incorporating the corrections inside the loop of the iterative reconstruction preserves the Poisson nature of the data. We performed Monte Carlo simulations with different randoms fractions and reconstructed the data with the two methods. We also reconstructed clinical patient images. The two methods were compared quantitatively through contrast and noise measurements. The results indicate that for high levels of randoms, incorporating the corrections inside the iterative loop results in superior image quality.",2004,0, 7870,A scheme for peer-to-peer live streaming with multi-source multicast and forward error correction,"In this paper, we propose a scheme for peer-to-peer (P2P) live streaming with multi-source multicast and forward error correction. In our scheme, there is a control topology for membership management, and a multi-source multicast tree for data delivery. The control topology enables peers to locate multiple sources for media content, and the multi-source multicast tree makes the system adaptive to node churn and packet loss. Simulation results show that the performance of our proposed method is significantly better than that of BitTorrent-Like (BT-Like) systems.",2008,0, 7871,Affordable Multi-Coding Parallel Error Rate Measurement System,"In this paper, we propose a solution for an affordable parallel error rate measurement system based on a simple digital hardware interface and software, bringing competitive advantages over traditional instrument solutions in terms of robustness to both glitch and duty cycle variations. Another advantage of the architecture is the flexibility to measure independent channels communicating at different data rates. Measurement speed optimization is also explained",2005,0, 7872,Memory Yield Improvement through Multiple Test Sequences and Application-Aware Fault Models,"In this paper, we propose a way to improve the yield of memory products by selecting the appropriate test strategy for memory Built-in Self-Test (BIST). We argue that by testing the memory through a sequence of test algorithms which differ in their fault coverage, it is possible to bin the memory into multiple yield bins and increase the yield and product revenue. Further, the test strategy must take into consideration the usage model of the memory. For instance, a number of video and audio buffers are used in sequential access mode, but are overtested using conventional memory test algorithms which model a large number of defects that do not impact the operation of the buffers. We propose a binning strategy where memory test algorithms are applied in different orders of strictness such that bins have a specific defect/fault grade. Depending on the application, some of these bins need not be discarded but can be sold at a lower price, as the functionality would never expose the fault due to its usage of the memory. We introduce the notion of a test map for the on-chip memories in a SoC and provide results of yield simulation on two specific test strategies called ""Most Strict First"" and ""Least Strict First"". Our simulations indicate that significant improvements in yield are possible through the adoption of the proposed technique.
We show that the BIST controller area and run-time overheads also reduce when information about the usage model of the memory, such as sequential access, is exploited.",2008,0, 7873,Bit Error Rate Performance Enhancement of a Retrodirective Array Over a Conventional Fixed Beam Array in a Dynamic Multipath Environment,"In this paper, we provide experimental evidence to show that enhanced bit error rate (BER) performance is possible using a retrodirective array operating in a dynamically varying multipath environment. The operation of such a system is compared to that obtained by a conventional nonretrodirective array. The ability of the array to recover amplitude shift keyed encoded data transmitted from a remote location whose position is not known a priori is described. In addition, its ability to retransmit data inserted at the retrodirective array back to a spatially remote beacon location whose position is also not known beforehand is also demonstrated. Comparisons with an equivalent conventional fixed beam antenna array utilizing an identical radiating aperture arrangement to that of the retrodirective array are given. These show that the retrodirective array can effectively exploit the presence of time varying multipath in order to give significant reductions in BER over what can be otherwise achieved. Additionally, the retrodirective system is shown to be able to deliver low BER regardless of whether line of sight is present or absent.",2010,0, 7874,ADEV Calculated from Phase Noise Measurements and Its Possible Errors Due to FFT Sampling,"In this paper, we show that fast Fourier transform (FFT) sampling plays an important role in the calculation of Allan deviation (ADEV) when using numerical integration as a tool for time and frequency (T&F) conversion. In order to avoid the generation of unreasonable ADEV values, FFT sampling data are re-generated with logarithmic frequency spacing using an interpolation technique. Therefore, the results from both the numerical integration and the power-law processes match each other quite well. Moreover, spurs in the spectral density have a non-negligible influence upon ADEV results. For example, when the data of our lab's phase noise measurement system are processed, the ADEV generated from the spectral density with spurs may reach three times the value obtained when spurs are removed",2006,0, 7875,Software simulation tools on forward error correction schemes for the wireless transmission of MPEG4 AAC audio bitstreams,"In this paper, software tools which simulate the error correcting capability of forward error correction (FEC) codes, for the transmission of MPEG advanced audio coding (AAC) bitstreams over a wireless channel, are described. The tools consist of two applications. The first application is a channel transmission simulator with features to select the FEC codes, set the wireless channel characteristics, and observe the simulation results. The second application is a graph plotting tool that facilitates analysis of simulation results. Such a tool makes it convenient to investigate the transmission of audio data over wireless links.",2005,0, 7876,Stochastic error-correcting parsing for OCR post-processing,"In this paper, stochastic error-correcting parsing is proposed as a powerful and flexible method to post-process the results of an optical character recognizer (OCR). Deterministic and nondeterministic approaches are possible under the proposed setting.
The basic units of the model can be words or complete sentences, and the lexicons or the language databases can be simple enumerations or may convey probabilistic information from the application domain",2000,0, 7877,Fault management in avionics telecommunication using passive testing,"In this paper, the author employs the Communicating Finite State Machine (CFSM) model for avionics telecommunication networks to investigate fault management using passive testing. First, he introduces the concept of passive testing. Then, he introduces the CFSM model and the observer model with the necessary assumptions and justification. The author introduces the fault model and the fault detection algorithm using passive testing. He presents the new passive testing approach for fault location, fault identification, and fault coverage based on the CFSM model. Examples are given for each fault management function to illustrate the approach. Then, he illustrates the effectiveness of the new technique through simulation of a practical protocol example, the network layer protocol for the Aeronautical Telecommunication Networks. Finally, future extensions and potential trends are discussed",2001,0, 7878,On the error modeling of dead reckoned data in a distributed virtual environment,"In this paper, the authors aim to analyze the error of dead reckoned data generated from data received on a discrete temporal axis in a distributed virtual environment (DVE). That is, compared with data received in continuous time, data acquired in discrete time has a certain degradation or uncertainty of information. Our approach is to introduce a mathematical model of this degradation with regard to the metrics of the temporal interval. We introduce polynomial models for the dead reckoning method between frames, using parameters calculated from the data over the last several frames. By applying numerical error analysis to the above polynomial models, we formulate theoretical models which approximate the statistical error of dead reckoned data based on parameters such as the update interval and changes in the data. This study enables discussion of the optimality of the update interval in a DVE using dead reckoning. Finally, we evaluate the adaptability of the theoretical model by conducting simulation experiments based on the pen motion of a human writing a string of letters. As a result, we confirm that the proposed theoretical model closely approximates the average error in the simulation",2005,0, 7879,Error-Resilient and Low-Complexity Onboard Lossless Compression of Hyperspectral Images by Means of Distributed Source Coding,"In this paper, we propose a lossless compression algorithm for hyperspectral images inspired by the distributed-source-coding (DSC) principle. DSC refers to separate compression and joint decoding of correlated sources, which are taken as adjacent bands of a hyperspectral image. This concept is used to design a compression scheme that provides error resilience, very low complexity, and good compression performance. These features are obtained employing scalar coset codes to encode the current band at a rate that depends on its correlation with the previous band, without encoding the prediction error. Iterative decoding employs the decoded version of the previous band as side information and uses a cyclic redundancy code to verify correct reconstruction.
We develop three algorithms based on this paradigm, which provide different tradeoffs between compression performance, error resilience, and complexity. Their performance is evaluated on raw and calibrated AVIRIS images and compared with several existing algorithms. Preliminary results of a field-programmable gate array implementation are also provided, which show that the proposed algorithms can sustain an extremely high throughput.",2010,0, 7880,Training-Based Color Correction for Camera Phone Images,"In this paper, we propose a method for improving the color rendition of low quality cell phone camera images. The proposed method is based on a multi-layer stochastic framework whose parameters are learned in an offline training procedure using the well-known expectation maximization (EM) algorithm. The color correction algorithm functions by first making soft assignments of images into defect classes and then processing images in each defect class with an optimized algorithm, which we refer to as resolution synthesis-based color correction (RSCC). The parameters of the color correction algorithm are trained using pairs of low quality images, obtained from real cell phone cameras, and high quality spatially registered reference images, captured with a high quality digital still camera. We present experimental results comparing the performance of our method to some existing commercial color correction algorithms.",2007,0, 7881,Detecting intrusion faults in remotely controlled systems,"In this paper, we propose a method to detect an unauthorized control signal being sent to a remote-controlled system (deemed an ""intrusion fault"" or ""intrusion"") despite attempts to conceal the intrusion. We propose adding a random perturbation to the control signal and using signal detection techniques to determine the presence of that signal in observations of the system. Detection of these perturbations indicates that an authorized or ""trusted"" operator is in control of the system. We analyze a worst case scenario (in terms of detection of the intrusion), discuss construction of signal detectors, and demonstrate our method through a simple example of a point robot with dynamics.",2009,0, 7882,Efficient Probing Techniques for Fault Diagnosis,"The increase in network usage and the widespread application of networks for more and more performance-critical applications have created a demand for tools that can monitor network health with minimum management traffic. Adaptive probing holds the potential to provide effective tools for end-to-end monitoring and fault diagnosis over a network. In this paper, we present adaptive probing tools that meet the requirements to provide an effective and efficient solution for fault diagnosis. We propose adaptive-probing-based algorithms to perform fault localization by adapting the probe set to localize the faults in the network. We compare the performance and efficiency of the proposed algorithms through simulation results.",2007,0, 7883,Error detection enhancement in COTS superscalar processors with event monitoring features,"Increasing use of commercial off-the-shelf (COTS) superscalar processors in industrial, embedded, and real-time systems necessitates the development of error detection mechanisms for such systems. This paper presents an error detection scheme called committed instructions counting (CIC) to increase error detection in such systems. The scheme uses internal performance monitoring features and an external watchdog processor (WDP).
The performance monitoring features enable counting the number of committed instructions in a program. The scheme is experimentally evaluated on a 32-bit Pentium processor using software implemented fault injection (SWIFI). A total of 8181 errors were injected into the Pentium processor. The results show that the error detection coverage varies between 90.92% and 98.41% for different workloads.",2004,0, 7884,An Error-Minimizing Approach to Regularization in Indirect Measurements,"Indirect measurements often amount to the estimation of the parameters of a mathematical model that describes the object under investigation, and this process may be numerically ill-conditioned. Various regularization techniques are used to solve the problem. This paper shows that popular regularization methods can be depicted as special cases of a generalized approach based on a penalty term in the minimized criterion function and how different kinds of a priori knowledge can be incorporated into each of them. A new function, which depends on the estimate bias and variance, is proposed to find a regularization parameter that minimizes the error of estimation, as well as a novel approach for nonlinear estimation that results in the iterative minimization (IM) method. The superiority of IM with respect to the conventional Marquardt procedure is demonstrated. Based on analysis, it also follows that the regularization technique can be used even in the case of numerically well-conditioned indirect measurements, decreasing the total error of estimation.",2010,0, 7885,Generalized model of three-phase induction motor for fault analysis,"Induction motors are critical components in many industrial processes. In spite of their robustness they do occasionally fail, and their resulting unplanned downtime can prove very costly. Therefore, condition monitoring of electrical machines has received considerable attention in recent years. A suitable model enables motor faults to be simulated and the change in corresponding parameters to be predicted without physical experimentation. This paper presents a mathematical foundation and theoretical analysis of asymmetric stator faults in induction machines. A three-phase induction motor is simulated with simple differential equations. Experimental verification of the excitation current under healthy conditions is given.",2008,0, 7886,Assessment of PROFIBUS networks using a fault injection framework,"Industrial control system architectures have been evolving towards the decentralization of control tasks. This evolution, associated with the time-critical nature of these tasks, increases the dependability requirements for one of their most critical components: the communication system. Therefore, this is an important aspect of control system design which must be properly evaluated. In this paper the dependability of a PROFIBUS network is assessed. By using a fault injection framework, the network operation is disturbed with fault scenarios which are representative of industrial environments. From these experiments, the stability of the PROFIBUS logical ring is analyzed, the main outage causes are identified and their probabilities are obtained",2005,0, 7887,Malleable neural networks in fault detection of complex systems,"Industrial machining centres are composed of complex integrated subsystems with independent critical issues.
Neural networks (NN) are capable of monitoring the onset of faults; however, the complexity of the many possible failure modes and their various levels of intensity may deteriorate the accuracy of an NN. This paper presents a malleable neural network architecture for condition monitoring and fault diagnosis of a subsystem of a machining centre. A central NN is trained with faulty operation states at the core stage and is then able to discern between healthy and all possible faulty states. NNs are then modulated to learn each failure mode with their different intensity levels. Diagnosis is initially made by the central module; the network is then reconfigured by an interprocess call to adapt to an appropriate topology and knowledge base to detect the severity level of the fault. The monitoring system uses steady state values of sensitive parameters of current and pressure transducers. If the parameters are out of a predefined healthy range, a nondestructive test is initiated, which produces a transient response as an input pattern to the NNs. Testing the NN based monitoring system with 395 failure modes showed that in 99.2% of cases the network was able to accurately identify the cause and severity of the failures.",2005,0, 7888,Sequential decision fusion for controlled detection errors,"Information fusion in biometrics has received considerable attention. The architecture proposed here is based on the sequential integration of multi-instance and multi-sample fusion schemes. This method is analytically shown to improve the performance and allow a controlled trade-off between false alarms and false rejects when the classifier decisions are statistically independent. Equations developed for detection error rates are experimentally evaluated by considering the proposed architecture for text dependent speaker verification using Hidden Markov Model (HMM) based digit dependent speaker models. The tuning of parameters, n classifiers and m attempts/samples, is investigated and the resultant detection error trade-off performance is evaluated on individual digits. Results show that performance improvement can be achieved even for weaker classifiers (FRR-19.6%, FAR-16.7%). The architectures investigated apply to speaker verification from spoken digit strings such as credit card numbers in telephone or VOIP or internet based applications.",2010,0, 7889,Investigating the active guidance factor in reading techniques for defect detection,"Inspections are an established quality assurance technique. In order to optimize the inspection performance, different reading techniques, such as checklist-based reading and scenario-based reading, have been proposed. Various experiments have been conducted to evaluate which of these techniques produces better inspection results (i.e., which finds more defects with less effort). However, the results of these empirical investigations are not conclusive yet. Thus, the success factors of the different reading approaches need to be further analyzed. In this paper, we report on a preliminary empirical study that examined the influence of the active guidance factor (provided by scenario-based approaches) when inspecting requirements specification documents.
First results show that active guidance is favorably received by inspectors and suggest that it is better suited for larger and more complex documents.",2004,0, 7890,The effect of the number of inspectors on the defect estimates produced by capture-recapture models,"Inspections can be made more cost-effective by using capture-recapture methods to estimate post-inspection defects. Previous capture-recapture studies of inspections used relatively small data sets compared with those used in biology and wildlife research (the origin of the models). A common belief is that capture-recapture models underestimate the number of defects but their performance can be improved with data from more inspectors. This increase has not been evaluated in detail. This paper evaluates new estimators from biology that have not been previously applied to inspections. Using data from seventy-three inspectors, we analyze the effect of the number of inspectors on the quality of estimates. Contrary to previous findings indicating that Jackknife is the best estimator, our results show that the SC estimators are better suited to software inspections. Our results also provide a detailed analysis of the number of inspectors necessary to obtain estimates within 5% to 20% of the actual value.",2008,0, 7891,Transfer function analysis using STFT for improvement of the fault detection sensitivity in transformer impulse test,"Insulation of transformer windings may shift as a result of short circuit current or impact during transportation. The shift modifies the dielectric space between the layers of the windings and may cause an insulation breakdown, leading to a complete transformer failure. As transformers are very costly to replace, it is important that their condition be determined accurately without having to dismantle the apparatus to visually inspect it. Testing of winding insulation is performed by using the standard impulse test, applying the fast Fourier transform (FFT) to analyze the transformer state (healthy or faulty) in the frequency domain, as a transfer function (TF). Nonetheless, one of the shortcomings of the FFT is that it cannot be used with non-stationary signals. Voltage and current waveforms in the transformer are treated as non-stationary signals, especially when there is a fault. In addition, the FFT does not give any information on the time at which a frequency component occurs. To obtain better signature analysis and to increase the detection sensitivity, this paper suggests a new method using the short-time Fourier transform (STFT) in the transfer function analysis. It is hoped that this high resolution method will help to reduce the subjective judgments of technicians when making decisions about changes in the winding structure of the transformer.",2005,0, 7892,A review on fault prognostics in integrated health management,"Integrated health management (IHM) is an advanced technology which integrates artificial intelligence with advanced test and information technologies. Having gone through fault detection, isolation and reconfiguration and merged with state-of-the-art reasoning technologies, IHM monitors and controls the function of critical systems and components in order to ensure safe and efficient operation. An IHM system usually comprises seven functional modules, namely data acquisition, signal/feature extraction, condition assessment, diagnostics, prognostics, decision reasoning and human interface.
Among them, fault prognostics is not only the core of IHM, but also an important guarantee for reducing life-cycle maintenance costs and improving system security. Fault prognostics is the process of projecting the current health state of equipment into the future, taking into account estimates of future usage profiles. It may report health status at a future time, or may estimate the remaining useful lifetime (RUL) of a machine given its projected usage profile. In recent years, fault prognostics has received unprecedented attention, and it is becoming the most challenging research area, the so-called crystal ball of IHM. Based on the theory, methods and routes adopted in practical applications, fault prognostics generally falls into three main categories, namely model-based approaches, knowledge-based approaches and data-based approaches. Then, based on the analysis of some typical applications of each approach, the strengths and weaknesses of each approach are further discussed. Finally, according to the current research situation at home and abroad, the future development trend of fault prognostics is also presented.",2009,0, 7893,Teraflops supercomputer: architecture and validation of the fault tolerance mechanisms,"Intel Corporation developed the Teraflops supercomputer for the US Department of Energy (DOE) as part of the Accelerated Strategic Computing Initiative (ASCI). This is the most powerful computing machine available today, performing over two trillion floating point operations per second with the aid of more than 9,000 Intel processors. The Teraflops machine employs complex hardware and software fault/error handling mechanisms for complying with DOE's reliability requirements. This paper gives a brief description of the system architecture and presents the validation of the fault tolerance mechanisms. Physical fault injection at the IC pin level was used for validation purposes. An original approach was developed for assessing signal sensitivity to transient faults and the effectiveness of the fault/error handling mechanisms. The dependency between fault/error detection coverage and fault duration was also determined. Fault injection experiments unveiled several malfunctions at the hardware, firmware, and software levels. The supercomputer performed according to the DOE requirements after corrective actions were implemented. The fault injection approach presented in this paper can be used for validation of any fault-tolerant or highly available computing system",2000,0, 7894,Extracting Error Handling to Aspects: A Cookbook,"It is usually assumed that exception handling code can be better modularized by the use of aspect-oriented programming (AOP) techniques. However, recent studies argue that the ad hoc use of AOP can be detrimental to the quality of a system. When refactoring exception handling code to aspects, developers and maintainers need to follow clear and simple principles to obtain a well-structured system design. Otherwise, typical problems that stem from poorly designed/implemented error handling code will arise, e.g. resource leaking and swallowed exceptions. In this paper, we propose a classification for error handling code based on the factors that we found to have the most influence on its aspectization. Moreover, we present a scenario catalog comprising combinations of these factors and analyze how these scenarios positively or negatively affect the task of aspectizing exception handling.
We evaluated the proposed catalog through a case study where we used it to guide the aspectization of exception handling in two real systems.",2007,0, 7895,The Study of Non-uniformity Correction Algorithm for IRFPA Based on Neural Network,"It is very important to study non-uniformity correction algorithms for infrared focal plane arrays (IRFPA). In order to improve the convergence speed and overcome the instability of the traditional neural network non-uniformity correction algorithm, a new scene-based non-uniformity correction algorithm for IRFPA was designed in this paper. The algorithm first sorts a pixel's gray value and the gray values of its eight surrounding pixels from small to large, and computes the mean of the middle five values in this new sequence as the pixel's new gray value. Then a traditional neural network algorithm performs non-uniformity correction on the infrared image again. In addition, we use a new estimation algorithm to precisely calculate the range of the convergence constant in the iterative equations. Comparison with the results of several algorithms shows that the new algorithm has a better correction effect than the other three algorithms and a faster convergence speed.",2008,0, 7896,A time-frequency method for multiple fault detection in three-phase induction machines,"It is well known that the induction machine stator current is a nonstationary signal and its properties change with respect to operating conditions. The spectrum computed using the fast Fourier transform (FFT) does not provide accurate time-domain information about the operating conditions. As a result, FFT spectrum analysis makes it difficult to recognize fault conditions from the normal operation of the induction machine. A time-frequency approach is proposed in order to track some frequency components associated with electrical and load faults. This method is applied to an 18.5 kW three-phase induction machine with three broken rotor bars and under the effect of load torque variation which can result from a load fault.",2005,0, 7897,Probabilistic Compensation for Digital Filters Using Pervasive Noise-Induced Operator Errors,"It is well known that scaled CMOS technologies are increasingly susceptible to induced soft errors and environmental noise. Probabilistic checksum-based error detection and compensation has been proposed in the past for scaled DSP circuits for which a certain level of inaccuracy can be tolerated as long as system-level quality-of-service (QoS) metrics are satisfied. Although the technique has been shown to be effective in improving the SNR of digital filters, it can only handle errors that occur in the system states. However, the transient-error rate of combinational logic is increasing with technology scaling. Therefore, handling errors in the arithmetic logic circuitry of DSP systems is also essential. This is a significantly more difficult task due to the fact that a single error at the output of an adder or multiplier can propagate to more than one system state, causing multiple states to be erroneous. In this paper, a unified scheme that can address probabilistic compensation for errors both in the system states and in the embedded adders and multipliers of DSP filters is developed.
It is shown that by careful checksum code design, significant SNR improvements (up to 13 dB) can be obtained for linear filters in the presence of soft errors.",2007,0, 7898,A study of flight-critical computer system recovery from space radiation-induced error,"It is well known that space radiation, containing energetic particles such as protons and ions, can cause anomalies in digital avionics onboard satellites, spacecraft, and aerial vehicles flying at high altitude. Semiconductor devices embedded in these applications become more sensitive to space radiation as the features shrink in size. One of the adverse effects of space radiation on avionics is a transient error known as single event upset (SEU). Given that it is caused by bit-flips in computer memory, SEU does not result in a damaged device. However, the SEU-induced data error propagates through the run-time operational flight program, causing erroneous outputs from a flight-critical computer system. This study was motivated by a need for a cost-effective solution to keep flight-critical computers functioning after SEU occurs. The result of the study presents an approach to recover flight-critical computer systems from SEU-induced errors by using an identity observer array. The identity observers replicate the state data of the controller in distinct data partitions. The faulty controller can be recovered by replacing the data image of the faulty data partition with that of the healthy data partition. The methodology of applying such an approach from the fault tolerant control perspective is presented. The approach is currently being tested via computer simulation",2002,0, 7899,Local magnetic error estimation using action and phase jump analysis of orbit data,"It has been shown in previous conferences that action and phase jump analysis is a promising method to measure normal quadrupole components, skew quadrupole components and even normal sextupole components. In this paper, the action and phase jump analysis is evaluated using new RHIC data.",2007,0, 7900,Design and Implementation of a Java Fault Injector for Exhaustif SWIFI Tool,"Java is a successful programming environment, and its use has grown from small embedded applications to enterprise network servers based on J2EE. This intensive use of Java demands the validation of its fault tolerance mechanisms to avoid unexpected behavior of the applications at runtime. This paper describes the design and implementation of a fault injector for the ""Exhaustif®"" SWIFI tool. A specific fault model for Java applications that includes class corruption/substitution at loading time, method call interception and unexpected exception throwing is proposed. The injector uses the JVMTI (Java virtual machine tool interface) to perform bytecode instrumentation at runtime to carry out the fault model previously defined. Finally, an XML formalization of the specific Java fault model is proposed. This approach, JVMTI + XML fault model description, provides complete independence between the system under test and the fault injection tool, as well as interoperability with other SWIFI tools.",2009,0, 7901,A Jittered-Sampling Correction Technique for ADCs,"Jittered sampling raises the noise floor in Analogue to Digital Converters (ADCs). This leads to a decrease in the Signal to Noise Ratio (SNR) and the effective number of bits (ENOB). This extended abstract proposes a technique that compensates for the effects of sampling with a jittered clock.
A novel technique based on phase demodulation of the clock oscillator and Taylor series approximation is proposed to counter the effects of clock jitter in ADCs. Since jitter is caused by phase noise, phase demodulation provides a good estimate of the instantaneous jitter. A VLSI implementation of the Taylor series is used to predict the input signal value at the correct time instant.",2008,0, 7902,An improved fault ride-through strategy for doubly fed induction generator-based wind turbines,"Keeping the generators operating during transient grid faults has become an obligation for bulk wind generation units connected to the transmission network, and it is highly desired for distribution wind generators. A proposed scheme is implemented to keep the wind-power DFIG operating during transient grid faults. Challenges imposed on the generator configuration and the control during the fault and recovery periods are presented. A comprehensive time domain model for the DFIG with the decoupled dq controller is implemented using Matlab/Simulink software. Intensive simulation results are discussed to ensure the validity and feasibility of the proposed fault ride-through technique. The scheme protects the DFIG components, fulfills the grid code requirements and optimises the hardware added to the generator.",2008,0, 7903,Effect of correlations on the symbol error rate of orthogonal space-time block codes in keyhole MIMO channels,"Keyhole channels generally characterize rank-deficient MIMO channels, which were predicted theoretically and also observed experimentally. In this paper, we present an error rate performance analysis of orthogonal space-time block codes (STBCs) in correlated keyhole multiple-input multiple-output (MIMO) channels, which is not only suitable for exponential correlation models, but also for constant correlation models. Using a moment-generating-function (MGF) based approach, we provide exact expressions for the average symbol error rate (SER). Simulation results are provided to validate our analytical results.",2009,0, 7904,Research on obstacles of knowledge sharing of transformation of science and technology achievements by fault tree analysis,"Knowledge sharing (KS) is an important means of improving the rate of transformation of sci-tech achievements (TSA). The obstacles to KS of TSA mainly stem from the sharing subjects, the sharing environment, and the shared knowledge itself. Structure importance, probability importance and critical importance are analyzed based on expert investigation by fault tree analysis (FTA), leading to the conclusion that the main obstacles to KS of TSA are an unreasonable organization structure and the lack of a technical platform.",2008,0, 7905,A Unity Power Factor Correction Preregulator with Fast Dynamic Response Based on a Low-Cost Microcontroller,"Low cost passive Power Factor Correction (PFC) and Single-Stage PFC converters cannot draw a sinusoidal input current and are only suitable solutions for supplying low power levels. PFC preregulators based on the use of a multiplier solve such drawbacks, but a second stage DC/DC converter is needed to obtain fast output voltage dynamics. The output voltage response of PFC preregulators can be improved by increasing the corner frequency of the output voltage feedback loop. The main drawback to obtaining a faster converter output response is the distortion of the input current. This paper describes a simple control strategy to obtain a sinusoidal input current.
Based on the static analysis of the output voltage ripple, a modified sinusoidal reference is created using a low cost microcontroller in order to obtain a sinusoidal input current. This reference replaces the traditional rectified sinusoidal input voltage reference in PFC preregulators with multiplier control. Using this circuitry, PFC preregulator topologies with galvanic isolation are suitable solutions for designing a power supply with fast output voltage response (10 ms or 8.33 ms) and low line current distortion. Finally, theoretical and simulated results are validated using a 500 W prototype.",2007,0, 7906,Fault ride through of DFIG wind turbines during symmetrical voltage dip with crowbar or stator current feedback solution,"Low Voltage Ride Through is an important feature for wind turbine systems to fulfill grid code requirements. In the case of wind turbine technologies using doubly fed induction generators, the reaction to grid voltage disturbances is sensitive. Hardware or software protection must be implemented to protect the converter from tripping during severe grid voltage faults. In this paper two methods for low voltage ride through of symmetrical grid voltage dips are investigated. As a basis, an analysis of the rotor voltages during a grid fault is given. First, the conventional hardware method using a crowbar is introduced. Then the stator current reference feedback solution is presented. Both methods are investigated and compared by simulation results using 2 MW wind turbine system parameters. Measurement results on a 22 kW laboratory DFIG test bench show the effectiveness of the proposed control technique.",2010,0, 7907,Low error rate LDPC decoders,"Low-density parity-check (LDPC) codes have been demonstrated to perform very close to the Shannon limit when decoded iteratively. However, challenges persist in building practical high-throughput decoders due to the existence of error floors at low error rate levels. We apply high-throughput hardware emulation to capture errors and error-inducing noise realizations, which allows for in-depth analysis. This method enables the design of LDPC decoders that operate without error floors down to very low bit error rate (BER) levels. Such emulation-aided studies facilitate complex system designs.",2009,0, 7908,Automatic fault detection and diagnosis in complex software systems by information-theoretic monitoring,"Management metrics of complex software systems exhibit stable correlations which can enable fault detection and diagnosis. Current approaches use specific analytic forms, typically linear, for modeling correlations. In this paper we use normalized mutual information as a similarity measure to identify clusters of correlated metrics, without knowing the specific form. We show how we can apply the Wilcoxon rank-sum test to identify anomalous behaviour. We present two diagnosis algorithms to locate faulty components: RatioScore, based on the Jaccard coefficient, and SigScore, which incorporates knowledge of component dependencies. We evaluate our mechanisms in the context of a complex enterprise application. Through fault injection experiments, we show that we can detect 17 out of 22 faults without any false positives.
We diagnose the faulty component in the top five anomaly scores 7 times out of 17 using SigScore, which is 40% better than when system structure is ignored.",2009,0, 7909,A simulation framework for fault-tolerant clock synchronization in industrial automation networks,"Many applications such as distributed measurements or real-time networks benefit from a common notion of time. Protocols providing high precision and simple clock synchronization are necessary to achieve such a common time base. However, most of the available protocols are lacking with regard to fault tolerance and performance in case of a fault. The project IMAGINE (introduction of master group based industrial Ethernet) overcomes these limitations by introducing a fault-tolerant IEEE 1588 master group. A proof of concept for a large- or even medium-scale network is, however, very difficult to obtain under laboratory conditions. Therefore, a simulation framework has been developed, which is presented in this paper.",2007,0, 7910,Architectural-Based Validation of Fault-Tolerant Software,"Many architecture-centred approaches have been proposed for constructing dependable component-based systems. However, few of them provide an integrated solution for their development that combines fault prevention, fault removal, and fault tolerance techniques. This paper proposes a rigorous development approach based on an architectural abstraction, which combines formal methods and robustness testing. The architectural abstraction assumes crash failure semantics and, when instantiated as an architectural element, provides the basis for architecting fault tolerant systems. The architecture is formally specified using the B-method and CSP. Assurances that the software system is indeed dependable are obtained by combining formal specification, for removing ambiguities from the architectural representation, and robustness testing, for validating the source code against its software architecture. The feasibility of the proposed approach is illustrated in the context of a critical financial system.",2009,0, 7911,The Effectiveness of Automated Static Analysis Tools for Fault Detection and Refactoring Prediction,"Many automated static analysis (ASA) tools have been developed in recent years for detecting software anomalies. The aim of these tools is to help developers to eliminate software defects at early stages and produce more reliable software at a lower cost. Determining the effectiveness of ASA tools requires empirical evaluation. This study evaluates coding concerns reported by three ASA tools on two open source software (OSS) projects with respect to two types of modifications performed in the studied software CVS repositories: corrections of faults that caused failures, and refactoring modifications. The results show that fewer than 3% of the detected faults correspond to the coding concerns reported by the ASA tools. ASA tools were more effective in identifying refactoring modifications and corresponded to about 71% of them. More than 96% of the coding concerns were false positives that do not relate to any fault or refactoring modification.",2009,0, 7912,Design of LDPC decoders for improved low error rate performance: quantization and algorithm choices,"Many classes of high-performance low-density parity-check (LDPC) codes are based on parity check matrices composed of permutation submatrices.
We describe the design of a parallel-serial decoder architecture that can be used to map any LDPC code with such a structure to a hardware emulation platform. High-throughput emulation allows for the exploration of the low bit-error rate (BER) region and provides statistics of the error traces, which illuminate the causes of the error floors of the (2048, 1723) Reed-Solomon based LDPC (RS-LDPC) code and the (2209, 1978) array-based LDPC code. Two classes of error events are observed: oscillatory behavior and convergence to a class of non-codewords, termed absorbing sets. The influence of absorbing sets can be exacerbated by message quantization and decoder implementation. In particular, quantization and the log-tanh function approximation in sum-product decoders strongly affect which absorbing sets dominate in the error-floor region. We show that conventional sum-product decoder implementations of the (2209, 1978) array-based LDPC code allow low-weight absorbing sets to have a strong effect, and, as a result, elevate the error floor. Dually-quantized sum-product decoders and approximate sum-product decoders alleviate the effects of low-weight absorbing sets, thereby lowering the error floor.",2009,0, 7913,Fast Selection of Small and Precise Candidate Sets from Dictionaries for Text Correction Tasks,"Lexical text correction relies on a central step where approximate search in a dictionary is used to select the best correction suggestions for an ill-formed input token. In previous work we introduced the concept of a universal Levenshtein automaton and showed how to use these automata for efficiently selecting from a dictionary all entries within a fixed Levenshtein distance to the garbled input word. In this paper we look at refinements of the basic Levenshtein distance that yield more sensible notions of similarity in distinct text correction applications, e.g. OCR. We show that the concept of a universal Levenshtein automaton can be adapted to these refinements. In this way we obtain a method for selecting correction candidates which is very efficient, at the same time selecting small candidate sets with high recall.",2007,0, 7914,On the Wireless Transmitters Linear and Nonlinear Distortions Detection and Pre-Correction,"Linear and nonlinear distortions that are manifested along the wireless transmitter chain are of great importance since they influence the communication link performance. This paper proposes a comprehensive study of these distortions through the investigation of their sources and the means to characterize them for pre-correction purposes. Both long-term and short-term linear distortions are considered in this paper. Measurement results obtained on a typical 100 Watt wireless UMTS transmitter point out the benefit of concurrent pre-compensation for both linear and nonlinear distortion effects using a digital baseband predistortion technique",2006,0, 7915,An execution slice and inter-block data dependency-based approach for fault localization,"Localizing a fault in a program is a complex and time-consuming process. In this paper we present a novel approach using execution slices and inter-block data dependency to effectively identify the locations of program faults. An execution slice with respect to a given test case is the set of code executed by this test, and two blocks are data dependent if one block contains a definition that is used by another block or vice versa.
Not only can our approach reduce the search domain for program debugging, but it can also prioritize suspicious locations in the reduced domain based on their likelihood of containing faults. More specifically, the likelihood of a piece of code containing a specific fault is inversely proportional to the number of successful tests that execute it. In addition, the likelihood also depends on whether this piece of code is data dependent on other suspicious code. A debugging tool, DESiD, was developed to support our method. A case study that shows the effectiveness of our method in locating faults on an application developed for the European Space Agency is also reported.",2004,0, 7916,Automated Generation of Similar Paths for Localizing Program Faults,"Localizing a program fault accurately in debugging is complex and time-consuming. In the process of fault diagnosis, identifying or generating successful test paths as similar as possible to the failed test is the core of effective fault localization. A method for calculating the similarity between two test paths based on analyzing differences in program control flow is defined, and a novel DD-graph-based algorithm for generating a similar path set directly from a failed test is proposed. It is experimentally shown that the proposed algorithm can generate a similar path set for a failed path and can help to localize program faults.",2009,0, 7917,Tool Support for Fault Localization Using Architectural Models,"Locating software faults is a problematic activity in many systems. Existing tool approaches usually work close to the system implementation, requiring the developer to perform tedious code analyses in which the amount of information she must manage is usually overwhelming. This problem calls for approaches able to work at higher abstraction levels than code. In this context, we present a tool approach, called FLABot, to assist fault-localization tasks. A novelty of FLABot is that it reasons about faults using software architecture information. Based on Use-case-maps and system logs, FLABot performs a heuristic search for possible faulty functions in the architecture, and then maps these functions to code sections. This allows the developer to quickly navigate large systems and spot code regions that may contain faults, which can be further debugged using conventional techniques.
Our collaborative sparse-anchored scheme, CSA, utilizes multi-hop anchors collaboration to improve localization accuracy and ratio, especially for unknown nodes on edge of network. Furthermore, it overcomes ill-equation problem of minimal mean square algorithm (MMS). Error analysis demonstrates that position error and ranging error of some anchors can incur huge error. So it is necessary to judge the feasibility of reference anchor. Our experimental results verify validity and accuracy. It improves feasibility and cost of WSN positioning techniques, significantly.",2008,0, 7920,Testing for missing-gate faults in reversible circuits,"Logical reversibility occurs in low-power applications and is an essential feature of quantum circuits. Of special interest are reversible circuits constructed from a class of reversible elements called k-CNOT (controllable NOT) gates. We review the characteristics of k-CNOT circuits and observe that traditional fault models like the stuck-at model may not accurately represent their faulty behavior or test requirements. A new fault model, the missing gate fault (MGF) model, is proposed to better represent the physical failure modes of quantum technologies. It is shown that MGFs are highly testable, and that all MGFs in an N-gate k-CNOT circuit can be detected with from one to [N/2] test vectors. A design-for-test (DFT) method to make an arbitrary circuit fully testable for MGFs using a single test vector is described. Finally, we present simulation results to determine (near) optimal test sets and DFT configurations for some benchmark circuits.",2004,0, 7921,Compensation of inertia error in brake dynamometer testing,"Loss in terms of windage and bearing friction is an important origin of inertia error to be compensated in brake dynamometer testing, acquisition of which has always been a troublesome problem. An indirect method of loss measurement using speed data under null pipeline pressure is described in this paper. Mathematical model of resistance torque or energy loss is calculated by regression of collected speed data using SPSS software. Error compensation of two inertia simulating methods, torque control method and energy compensation method, is discussed. Experiments of the former are conducted on NT11 brake dynamometer, which proves it to be effective in eliminating inertia error.",2009,0, 7922,Tracking down software bugs using automatic anomaly detection,"Introduces DIDUCE, a practical and effective tool that aids programmers in detecting complex program errors and identifying their root causes. By instrumenting a program and observing its behavior as it runs, DIDUCE dynamically formulates hypotheses of invariants obeyed by the program. DIDUCE hypothesizes the strictest invariants at the beginning, and gradually relaxes the hypothesis as violations are detected to allow for new behavior. The violations reported help users to catch software bugs as soon as they occur. They also give programmers new visibility into the behavior of the programs such as identifying rare corner cases in the program logic or even locating hidden errors that corrupt the program's results. We implemented the DIDUCE system for Java programs and applied it to four programs of significant size and complexity. DIDUCE succeeded in identifying the root causes of programming errors in each of the programs quickly and automatically. 
In particular, DIDUCE is effective in isolating a timing-dependent bug in a released JSSE (Java Secure Socket Extension) library, which would have taken an experienced programmer days to find. Our experience suggests that detecting and checking program invariants dynamically is a simple and effective methodology for debugging many different kinds of program errors across a wide variety of application domains.",2002,0, 7923,Syntheses of relative method to correction and analysis of its features,"A relative method of correction is designed and explored, and its main technical features are determined",2004,0, 7924,An architecture for physical injection of complex fault scenarios in CAN networks,"It has been reported that some particular fault scenarios may cause malfunction of the controller area network protocol. Although such scenarios are very unlikely, they become relevant when attempting to use the CAN protocol for critical applications. The fault injector described in this paper induces these fault scenarios at the physical layer of the CAN protocol by means of a software tool and a set of specifically designed circuits. Therefore, and in contrast to previous solutions, this fault injector is suitable to evaluate most of the dependability mechanisms that have been proposed for CAN networks.",2003,0, 7925,A method for diagnosing resistive open faults with considering adjacent lines,"It is believed that resistive open faults can cause small delay defects at wires, contacts, and/or vias of a circuit. However, it remains to be elucidated whether any methods could diagnose resistive open faults. We propose a method for diagnosing resistive open faults by using a diagnostic delay fault simulation with the minimum detectable delay fault size. We also introduce a fault excitation function for the resistive open fault to improve the accuracy of the diagnostic result. The fault excitation function for the resistive open fault can determine the size of an additional delay at a faulty line while considering the effect of the adjacent lines. We demonstrated that the proposed method is capable of identifying fault locations for the resistive open fault with a small computation cost.",2010,0, 7926,Improved fault tolerant broadcasts in CAN,"It is generally considered that the controller area network (CAN) guarantees atomic broadcast properties through its extensive error detection and signalling mechanisms. However, it is known that these mechanisms may fail, and messages can be delivered in duplicate by some receivers or delivered only by a subset of the receivers. This misbehaviour may be disastrous if the CAN network is used to support replicated applications. In order to prevent such inconsistencies, a set of atomic broadcast protocols is proposed, taking advantage of CAN synchronous properties to minimise its run-time overhead. The paper presents such a set of protocols, and demonstrates how they can be used for the development of distributed real-time applications.",2001,0, 7927,Influence of window width selection in fault diagnosis of loudspeaker based on Short-time Fourier Transform,"Selecting the window function and width is important for the Short-Time Fourier Transform (STFT). In particular, when diagnosing a faulty loudspeaker, different analysis window functions and widths can influence the analysis result of the loudspeaker's response signal.
Therefore, a method for selecting the analysis window function and width based on the energy correction factor and the maximum side-lobe and main-lobe peak values in the frequency domain is proposed. By reasonably selecting the analysis window function and width, the influence of signal truncation caused by the Gibbs phenomenon can be reduced and the frequency resolution problem of the STFT can be resolved. Therefore, the fault features of the loudspeaker can be extracted more accurately, improving the accuracy of on-line automatic loudspeaker fault detection.",2010,0, 7928,Error-Correcting Output Coding for the Convolutional Neural Network for Optical Character Recognition,"It is known that convolutional neural networks (CNNs) are efficient for optical character recognition (OCR) and many other visual classification tasks. This paper applies error-correcting output coding (ECOC) to the CNN for segmentation-free OCR such that: 1) the CNN target outputs are designed according to code words of length N; 2) the minimum Hamming distance of the code words is designed to be as large as possible given N. ECOC provides the CNN with the ability to reject or correct output errors to reduce character insertions and substitutions in the recognized text. Also, using code words instead of letter images as the CNN target outputs makes it possible to construct an OCR for a new language without designing the letter images as the target outputs. Experiments on the recognition of English letters, 10 digits, and some special characters show the effectiveness of ECOC in reducing insertions and substitutions.",2009,0, 7929,Impact of advancements on automated relay testing over checking earth fault characteristics,It is known that microprocessor based relay testing has become a challenge nowadays mainly because of the increased complexity. Most of the modern numerical relays have provisions to detect phase-to-ground faults for any type of neutral compensation. This raises the standard for everybody in the relay industry to introduce improved settings management as well as reliable test plans. Creating a manual test plan becomes increasingly difficult, so the focus on developing fully automated test plans is no longer a luxury but a necessity. This paper describes some of the challenges in developing an automatic test plan for earth fault detection functions.,2004,0, 7930,A background removing method of MR images and its application in the intensity nonuniformity correction methods,"Intensity nonuniformity correction is a necessary preprocessing method in MR image segmentation. Most intensity nonuniformity correction methods need to remove the background of the images. In this paper, we propose a segmentation method based on region growing to remove the background, instead of the threshold method which is widely used in intensity nonuniformity correction methods. We tested this method on many MR images and applied it to the N3 method. The experiments showed that it performs better than the threshold method.",2008,0, 7931,Performance analysis of a fault-tolerant distributed-shared memory protocol on the SOME-bus multiprocessor architecture,"Interconnection networks allowing multiple simultaneous broadcasts are becoming feasible, mostly due to advances in fiber-optics and VLSI technology. Distributed-shared-memory implementations on such networks promise high performance even for applications with small granularity.
This paper summarizes the architecture of one such implementation, the simultaneous optical multiprocessor exchange bus, and examines the performance of an augmented DSM protocol which provides fault tolerance by exploiting the natural DSM replication of data in order to maintain a recovery memory in each processing node. Theoretical and simulation results show that the additional data replication necessary to create fault-tolerant DSM causes no reduction in system performance during normal operation and eliminates most of the overhead at checkpoint creation. Data blocks which are duplicated to maintain the recovery memory may be utilized by the regular DSM protocol, reducing network traffic and increasing processor utilization significantly.",2003,0, 7932,A Novel Routing Algorithm for Achieving Static Fault-Tolerance in 2-D Meshes,"Interconnection networks encompass a large number of technologies, from chip-to-chip communications to system area networks (SANs), and in particular serve as the communication medium for multiprocessors. Interconnection networks offer communication with high reliability, high throughput, and low latency, all being vital factors for closely cooperating units. In the event that the interconnection network fails, the remainder of the system is left disconnected. Thus, it is essential to maintain graceful degradation of reliability in these systems, even in the presence of faulty components. Adaptive fault-tolerant routing algorithms have been the subject of extensive research in recent years. In this paper, this issue is addressed through a new fault-tolerant routing algorithm to tolerate static faults in interconnection networks with 2-D mesh topology. The suggested algorithm requires no change to the way packets are routed in the fault-free case, can be easily implemented, does not require the use of routing tables, and is well-suited for use in high-performance systems.",2010,0, 7933,An FFT-based method to evaluate and compensate gain and offset errors of interleaved ADC systems,"Interleaved analog-digital converter (ADC) systems can be used to increase the sampling rate for a given ADC implementation technique. In theory, the maximum sampling rate that can be achieved is limited only by the bandwidth and the practical limits related to the power and space of integrated circuits. In this paper, a solution to increase the sampling rate of a digitizing system based on interleaved ADCs is presented. An error analysis, which takes into consideration offset and gain errors of the different ADC channels, is performed in order to quantify the effect of such errors on the system's performance. A software method based on the fast Fourier transform is presented for offset and gain error compensation of interleaved ADC associations. Numerical simulations and experimental results are used to validate the theory and the proposed compensation algorithm.",2004,0, 7934,Towards understanding the effects of intermittent hardware faults on programs,"Intermittent hardware faults are bursts of errors that last from a few CPU cycles to a few seconds. They are caused by process variations, circuit wear-out, and temperature, clock or voltage fluctuations. Recent studies show that intermittent fault rates are increasing due to technology scaling and are likely to be a significant concern in future systems. We study the propagation of intermittent faults to programs; in particular, we are interested in the crash behaviour of programs.
We use a model of a program that represents the data dependencies in a fault-free trace of the program and we analyze this model to glean some information about the length of intermittent faults and their effect on the program under specific fault and crash models. The results of our study can aid fault detection, diagnosis and recovery techniques.",2010,0, 7935,Handling errors in parallel programs based on happens before relations,"Intervals are a new model for parallel programming based on an explicit happens before relation. Intervals permit fine-grained but high-level control of the program scheduler, and they dynamically detect and prevent deadlocking schedules. In this paper, we discuss the design decisions that led to the intervals model, focusing on error detection and handling. Our error propagation scheme makes use of the happens before relation to detect and abort dependent tasks that occur between the point where a failure occurs and where the failure is handled.",2010,0, 7936,NFTAPE: a framework for assessing dependability in distributed systems with lightweight fault injectors,"Many fault injection tools are available for dependability assessment. Although these tools are good at injecting a single fault model into a single system, they suffer from two main limitations for use in distributed systems: (1) no single tool is sufficient for injecting all necessary fault models; (2) it is difficult to port these tools to new systems. NFTAPE, a tool for composing automated fault injection experiments from available lightweight fault injectors, triggers, monitors, and other components, helps to solve these problems. We have conducted experiments using NFTAPE with several types of lightweight fault injectors, including driver-based, debugger-based, target-specific, simulation-based, hardware-based, and performance-fault injections. Two example experiments are described in this paper. The first uses a hardware fault injector with a Myrinet LAN; the other uses a Software Implemented Fault Injection (SWIFI) fault injector to target a space-imaging application.",2000,0, 7937,A study on fault arc and its influence on digital fault locator performance,"Many fault locator algorithms were developed to be operated on digital relay data. The methods were derived from frequency domain equations and were established by a phasor simulation. The algorithms were based on the assumption that a fault arc had a constant impedance and did not show a nonlinear effect. But in practice, a fault arc in air is known to show nonlinearity due to its physical characteristic. To study the influence of the nonlinearity on the accuracy of a digital fault locator, a time domain simulation must be performed considering the characteristic of the fault arc. In the present paper, a time domain model of a fault locator and that of a fault arc are represented using the MODELS language in the ATP-EMTP. Various fault types are modeled considering constant and nonlinear fault resistances. The simulation results have shown that the impedance relay type method using one terminal voltage and currents is influenced by the nonlinear characteristic of the fault arc, while the current diversion ratio method using one terminal currents for two circuits is not. A sensitivity analysis of the locating error with respect to the model parameters of the nonlinear arc was performed for the impedance relay type method.
It has been confirmed that the greater the degree of nonlinearity, the greater the location error.",2001,0, 7938,A Continuous Fault Countermeasure for AES Providing a Constant Error Detection Rate,"Many implementations of cryptographic algorithms have been shown to be susceptible to fault attacks. To detect manipulations, countermeasures have been proposed. In the case of AES, most countermeasures deal with the non-linear and the linear part separately, which either leaves vulnerable points at the interconnections or causes different error detection rates across the algorithm. In this paper, we present a way to achieve a constant error detection rate throughout the whole algorithm. The use of extended AN+B codes together with redundant table lookups allows the construction of a countermeasure that provides complete protection against adversaries who are able to inject faults of byte size or less. The same holds for adversaries who skip an instruction. Other adversaries are detected with a probability of more than 99%.",2010,0, 7939,A Distributed Workflow Mapping Algorithm for Minimum End-to-End Delay under Fault-Tolerance Constraint,"Many large-scale scientific applications feature distributed computing workflows of complex structures that must be executed and transferred in shared wide-area networks consisting of unreliable nodes and links. Mapping these computing workflows in such faulty network environments for optimal latency while ensuring certain fault tolerance is crucial to the success of eScience, which requires both performance and reliability. We construct analytical cost models and formulate workflow mapping as an optimization problem under a failure rate constraint. We propose a distributed heuristic mapping solution based on the recursive critical path to achieve minimum end-to-end delay and satisfy a pre-specified overall failure rate for a guaranteed level of fault tolerance. The performance superiority of the proposed mapping solution is illustrated by extensive simulation-based comparisons with existing mapping algorithms.",2010,0, 7940,Improving MapReduce fault tolerance in the cloud,"MapReduce has been used at Google, Yahoo, Facebook etc., even for their production jobs. However, according to a recent study, a single failure on a Hadoop job could cause a 50% increase in completion time. Amazon Elastic MapReduce has been provided to help users perform data-intensive tasks for their applications. These applications may have high fault tolerance and/or tight SLA requirements. However, MapReduce fault tolerance in the cloud is more challenging as topology control and (data) rack locality currently are not possible. In this paper, we investigate how redundant copies can be provisioned for tasks to improve MapReduce fault tolerance in the cloud while reducing latency.",2010,0, 7941,A Method for Diagnosing the Cylinder Fault of Engine Based on Artificial Neural Network,"The instantaneous speed of a diesel engine was measured and the mechanism of fault diagnosis using the speed signal was analyzed. The speed signal was processed by spectrum analysis and complexity analysis, and the variation of the speed under cylinder-misfire conditions was analyzed. The [K,C] complexity of the speed signal was calculated to obtain the features of the cylinder-misfire fault. A BP neural network was set up to diagnose cylinder-misfire faults of the diesel engine.",2008,0, 7942,A research of an improved ellipse method in magnetoresistive sensors error compensation,"Measurement of the weak geomagnetic signal is very prone to environmental interference.
Through theoretical analysis and simulation, it is found that the ellipse error compensation method commonly applied in engineering practice is unable to effectively reduce the quadrant error due to soft iron materials. In response to the phenomenon that the ellipse rotates about its long and short axes in the plane, the concept and mathematical description of a rotation factor are put forward; in the meantime, error compensation experiments and data analysis concerning this improved ellipse method are also carried out. The results show that the error compensation achieved with this method is improved significantly compared with the plain ellipse method.",2009,0, 7943,Structure in errors: a case study in fingerprint verification,"Measuring the accuracy of biometrics systems is important. Accuracy estimates depend very much on the quality of the test data that are used: including poor quality data will degrade the accuracy estimates. Factors that distinguish good quality data from poor quality data cannot be revealed by simple accuracy estimates. We propose a novel methodology to analyze how the overall accuracy estimate of a system relates to the specific quality of biometrics samples. Using a large collection of fingerprint samples, we present an analysis of system accuracy, which suggests that a significant part of the error is due to a few fingers.",2002,0, 7944,Error correction for diffraction and multiple scattering in free-space microwave measurement of materials,"Metamaterials often have sharp resonances in permittivity or permeability at microwave frequencies. The sizes of the inclusions are of the order of millimeters, and this means that it is more convenient to carry out the measurement in free space. Time gating is often used in the free-space method to remove multiple scattering from the antennas and the surrounding objects. However, this lowers the resolution in the frequency domain, making it difficult to resolve the resonances reliably. Diffraction around the sample could also reduce measurement accuracy. A calibration procedure, based on the 16-term error model, which removes the need for time gating by correcting for both multiple scattering and diffraction, is developed. This procedure is tested on carbonyl iron composite and split-ring resonators, and the results are presented.",2006,0, 7945,Gains achieved by symbol-by-symbol rate adaptation on error-constrained data throughput over fading channels,"Methods for symbol-by-symbol channel feedback and adaptation of symbol durations have been recently proposed. In this paper, we quantitatively analyze the gain in error-constrained data throughput due to such an extremely rapid adaptation of symbol durations to fast-time-varying channels. The results show that a symbol-by-symbol adaptation can achieve a throughput gain by orders of magnitude over a frame-by-frame adaptation.",2007,0, 7946,Semiparametric RMA Background-Correction for Oligonucleotide Arrays,Microarray technology has provided an opportunity to simultaneously monitor the expression levels of a large number of genes in response to intentional perturbations. A necessary step towards successful use of microarray technology is background correction which aims to remove noise. One of the most popular algorithms for background correction is the robust multichip average (RMA) procedure which relies on an unjustified parametric assumption.
In this paper we first check the fitness of the RMA model using a graphical approach and then propose a new background correction method based on a semiparametric RMA model (semi-RMA). Evaluation of the proposed approach based on spike-in data and MAQC (microarray quality control project) data shows that our semi-RMA model provides a better fit to microarray data than other approaches.,2007,0, 7947,Global behavior of neural error correction,"Neural information coding is often characterized as noisy and unreliable, typically because spike trains are irregular in appearance and experimental stimuli presented at different times produce different responses. This paper describes ongoing work investigating the effects and significance of errors in spike trains. More specifically, here we test the dependence of error correction in neural coding on a postsynaptic neuron's extant behavior, described in nonlinear dynamical terms. We show that the time for a neuron to recover from an error varies significantly, depending on its preceding stationary behavior and that behavior's position within the cell's global bifurcation structure. This implies a model of neural computation based not solely on attractors or motion within a state space, but rather motion within a global response space.",2004,0, 7948,3DPPS for early detection of arcing faults,"A new approach to high-impedance fault detection, which allows the fault to be detected based on only some random arcing at its onset, is presented in this paper. The proposed solution was developed within a novel protection methodology, the 3D power protection scheme (3DPPS). The identification of the fault is based on monitoring of symmetry deviations of three-phase voltage or current signals. Fundamental signal components carry the largest amount of information on the actual state of the protected system and are processed in order to extract the information proving the occurrence of a high-impedance fault that must be cleared for safety purposes.",2010,0, 7949,Generating Minimal Fault Detecting Test Suites for Boolean Expressions,"New coverage criteria for Boolean expressions are regularly introduced with two goals: to detect specific classes of realistic faults and to produce test suites that are as small as possible. In this paper we investigate whether an approach targeting specific fault classes using several reduction policies can generate fewer test cases than previously introduced testing criteria. In our approach, the problem of finding fault detecting test cases can be formalized as a logical satisfiability problem, which can be efficiently solved by a SAT algorithm. We compare this approach with the well-known MUMCUT and Minimal-MUMCUT strategies by applying it to a series of case studies commonly used as benchmarks, and show that it can reduce the number of test cases further than Minimal-MUMCUT.",2010,0, 7950,Symbol-error probability and bit-error probability for optimum combining with MPSK modulation,"New expressions are derived for the exact symbol error probability and bit-error probability for optimum combining with multiple phase-shift keying. The expressions are for any numbers of equal-power cochannel interferers and receive branches. It is assumed that the aggregate interference and noise is Gaussian and that both the desired signal and interference are subject to flat Rayleigh fading.
The new expressions have low computational complexity, as they contain only a single integral form with finite limits and a finite integrand.",2004,0, 7951,Fault-tolerant routing for satellite command and control,"New satellite systems, such as transformational communications (TC) and space based infrared systems (SBIRS), strive to support higher bandwidth, greater connectivity, and growing capabilities. In this work, we propose and evaluate a fault-tolerant routing scheme designed for such systems with integrated ground and satellite networks. This routing scheme uses a number of previously computed routes to rapidly respond to node and link failures. Multiple routes are pre-computed for various fault scenarios; these routes are stored in each satellite. To evaluate this scheme, we measure the effectiveness with which it responds to failures using simulation. We evaluated the performance of our fault-tolerant routing scheme using extensive simulation and considering several types of satellite faults. We employed a simulation integrating the satellite modeling capabilities of the Satellite Orbit Analysis Program (SOAP) with the network modeling capabilities of Network Simulator version two (ns-2). A tool internal to The Aerospace Corporation, SOAP provides highly accurate models for satellite ephemeris propagation, in this case used by ns-2 to determine satellite positions. Our results show that the impact on the delay performance is negligible with single or multiple link failures; with satellite failures, although the maximum delay increases by 30%, the average delay remains the same.",2004,0, 7952,New results on periodic sequences with large k-error linear complexity,"Niederreiter showed that there is a class of periodic sequences which possess large linear complexity and large k-error linear complexity simultaneously. This result disproved the conjecture by Ding et al. that there exists a trade-off between the linear complexity and the k-error linear complexity of a periodic sequence. Using the entropy function in coding theory, we obtain three main results which hold for much larger k than those of Niederreiter et al.: a) sequences with maximal linear complexity and almost maximal k-error linear complexity with general periods; b) sequences with maximal linear complexity and maximal k-error linear complexity with special periods; c) sequences with maximal linear complexity and almost maximal k-error linear complexity in the asymptotic case with composite periods.",2008,0, 7953,On the value of static analysis for fault detection in software,"No single software fault-detection technique is capable of addressing all fault-detection concerns. Similarly to software reviews and testing, static analysis tools (or automated static analysis) can be used to remove defects prior to release of a software product. To determine to what extent automated static analysis can help in the economic production of a high-quality product, we have analyzed static analysis faults and test and customer-reported failures for three large-scale industrial software systems developed at Nortel Networks. The data indicate that automated static analysis is an affordable means of software fault detection. Using the orthogonal defect classification scheme, we found that automated static analysis is effective at identifying assignment and checking faults, allowing the later software production phases to focus on more complex, functional, and algorithmic faults.
A majority of the defects found by automated static analysis appear to be produced by a few key types of programmer errors, and some of these types have the potential to cause security vulnerabilities. Statistical analysis results indicate the number of automated static analysis faults can be effective for identifying problem modules. Our results indicate static analysis tools are complementary to other fault-detection techniques for the economic production of a high-quality software product.",2006,0, 7954,Error analysis of 3Dc-based normal map compression and its application to optimized quantization,"Normal mapping is one of the most essential technologies for realistic three-dimensional computer graphics. In conventional normal map compression such as 3Dc, only the x and y components are encoded and the z components are restored based on the normalizing condition. In this paper, we present an intuitively comprehensive error analysis for this approach. As a result, we reveal under which conditions the compression error becomes larger. We also present a non-linear quantization algorithm based on the formula for better compression performance than the conventional approaches. Experimental results using 300 normal maps demonstrate that the PSNR is improved by 0.29 dB on average. Our algorithm is compatible with random access and highly-parallel processing on GPUs.",2008,0, 7955,In search of world class performance during fault situations,"Northern Ireland Electricity (NIE) is implementing a major IT program to support and improve its business requirements. On 26th December 1998 Northern Ireland experienced the worst storm in a generation; as a result of its impact, both in terms of damage to the electricity network infrastructure and public perception, the implementation of several projects within this overall IT program was accelerated. NIE are aware that customer expectations continue to rise. During fault conditions the provision of accurate, up-to-date information to customers is as important as actual fault repair. In early December 1999 NIE implemented a new Call Handling and Trouble Management System which will make significant improvements to the customer service it provides. This is the first step on its journey to achieving world class performance during faults",2001,0, 7956,Towards Reliability and Fault-Tolerance of Distributed Stream Processing System,"Not so long ago data warehouses were used to process data sets loaded periodically. We could distinguish two kinds of ETL processes: full and incremental. Now we often have to process real-time data and analyse them almost on-the-fly, so the analyses are always up to date. There are many possible applications for real-time data warehouses. In most cases two features are important: delivering data to the warehouse as quickly as possible, and not losing any tuple in case of failures. In this paper we propose an architecture for gathering and processing data from geographically distributed data sources. We present a theoretical analysis, a mathematical model of a data source, and some rules for configuring system modules. At the end of the paper our future plans are described briefly.",2007,0, 7957,"SLAM With Joint Sensor Bias Estimation: Closed Form Solutions on Observability, Error Bounds and Convergence Rates","Notable problems in Simultaneous Localization and Mapping (SLAM) are caused by biases and drifts in both exteroceptive and proprioceptive sensors. The impacts of sensor biases include inconsistent map estimates and inaccurate localization.
Unlike Map Aided Localisation with Joint Sensor Bias Estimation (MAL-JSBE), SLAM with Joint Sensor Bias Estimation (SLAM-JSBE) is more complex, as it encompasses a state space which grows with the discovery of new landmarks, together with the inherent map-to-vehicle correlations. Properties of SLAM-JSBE such as observability, error bounds and convergence rates are investigated here using an augmented estimation-theoretic state-space approach. SLAM-JSBE experiments, which adhere to the derived constraints, are demonstrated using a low cost inertial navigation sensor suite.",2010,0, 7958,Notice of Retraction
A Method of Improving Precision in Software Testing Based on Defect Patterns,"Notice of Retraction

After careful and considered review of the content of this paper by a duly constituted expert committee, this paper has been found to be in violation of IEEE's Publication Principles.

We hereby retract the content of this paper. Reasonable effort should be made to remove all past references to this paper.

The presenting author of this paper has the option to appeal this decision by contacting TPII@ieee.org.

This paper presents a method, control-flow interval iteration in the presence of interprocedural side-effect analysis, to improve test precision in software testing based on defect patterns; an exact interval computation can then be used in the defect test algorithm of DTS. Our experiment shows that DTS, using this method, can reduce the false positive rate and drop defect rate effectively.",2009,0, 7959,Spatial non-stationary correlation noise modeling for Wyner-Ziv error resilience video coding,"Most of the Wyner-Ziv (WZ) video coding schemes in the literature model the correlation noise (CN) between the original frame and the side information (SI) by a given distribution whose parameters are estimated in an offline process. In this paper, an online CN modeling algorithm is proposed towards a more practical WZ-based error resilient video coding (WZ-ERVC). In the ERVC scenario, the side information is typically generated from the error concealed picture instead of bi-directional motion prediction. The proposed online CN modeling algorithm achieves the so-called classification gain by exploiting the spatially non-stationary characteristics of the motion field and texture. The CN between the source and the error concealed SI is modeled by a Laplacian mixture model, where each mixture component represents the statistical distribution of prediction residuals and the mixing coefficients portray the motion vector estimation error. Experimental results demonstrate significant performance gains both in rate and distortion versus the conventional Laplacian model.",2009,0, 7960,Hard-to-detect errors due to the assembly-language environment,"Most programming errors are reasonably simple to understand and detect. One of the benefits of a high-level language is its encapsulation of error-prone concepts such as memory access and stack manipulation. Assembly-language programmers do not have this luxury. In our quest to create automated error-prevention and error-detection tools for assembly language, we need to create a comprehensive list of possible errors. We are not considering syntax errors or algorithmic errors. Assemblers, simple testing, and automated testing can detect those errors. We want to deal with design errors that are direct byproducts of the assembly-language environment or result from a programmer's lack of understanding of the assembly-language environment. Over many years of assembly-language instruction, we have come across a plethora of errors. Understanding the different types of errors and how to prevent and detect them is essential to our goal of creating automated error-prevention and error-detection tools. In this paper we list and explain the types of errors we have cataloged.",2007,0, 7961,Fault tree analysis of a fire hazard of a power distribution cabinet with Petri Nets,"The motivation of this study is to verify the system safety analysis of hardware items developed in the HAVELSAN Peace Eagle Program for Ground Support Systems. A preliminary hazard analysis for each of the developed hardware items is performed, and safety hazard analysis models are constructed with risk assessment of hazards based on their probabilities of occurrence for future operational and maintenance activities. An example of this kind of analysis is the system safety fault tree analysis model of a Ground Support Segment Mission Simulator subsystem Power Distribution Adapter Cabinet design, with hazardous risk assessment criteria according to the military standard specifications.
The same analysis approach is then modeled with Petri Nets, which extend the fault tree analysis approach and enable the modeler to represent the probabilities of occurrence in the system design phase. The same model can be built in the specification phase, which creates the potential for early validation of the system design behavior.",2010,0, 7962,Targeting error simulator for image-guided prostate needle placement,"Motivation: Needle-based biopsy and local therapy of prostate cancer depend on multimodal imaging for both target planning and needle guidance. The clinical process involves selection of target locations in a pre-operative image volume and registering these to an intra-operative volume. Registration inaccuracies inevitably lead to targeting error, a major clinical concern. The analysis of targeting error requires a large number of images with known ground truth, which has been infeasible even for the largest research centers. Methods: We propose to generate realistic prostate imaging data in a controllable way, with known ground truth, by simulation of prostate size, shape, motion and deformation typically encountered in prostatic needle placement. This data is then used to evaluate a given registration algorithm, by testing its ability to reproduce ground truth contours, motions and deformations. The method builds on a statistical shape atlas to generate a large number of realistic prostate shapes and on finite element modeling to generate high-fidelity deformations, while segmentation error is simulated by warping the ground truth data in specific prostate regions. Expected target registration error (TRE) is computed as a vector field. Results: The simulator was configured to evaluate the TRE when using a surface-based rigid registration algorithm in a typical prostate biopsy targeting scenario. Simulator parameters, such as segmentation error and deformation, were determined by measurements in clinical images. Turnaround time for the full simulation of one test case was below 3 minutes. The simulator is customizable for testing, comparing, optimizing segmentation and registration methods and is independent of the imaging modalities used.",2010,0, 7963,Diagnosis of Induction Motor Faults in the Fractional Fourier Domain,"Motor current signature analysis (MCSA) is a well-established method for the diagnosis of induction motor faults. It is based on the analysis of the spectral content of a motor current, which is sampled while a motor runs in steady state, to detect the harmonic components that characterize each type of fault. The Fourier transform (FT) plays a prominent role as a tool for identifying these spectral components. Recently, MCSA has also been applied during the transient regime (TMCSA) using the whole transient speed range to create a unique stamp of each harmonic as it evolves in the time-frequency plane. This method greatly enhances the reliability of the diagnostic process compared with the traditional method, which relies on spectral analysis at a single speed. However, the FT cannot be used in this case because the fault harmonics are not stationary signals. This paper proposes the use of the fractional FT (FrFT) instead of the FT to perform TMCSA. This paper also proposes the optimization of the FrFT to generate a spectrum where the frequency-varying fault harmonics appear as single spectral lines and, therefore, facilitate the diagnostic process.
A discrete wavelet transform (DWT) is used as a conditioning tool to filter the motor current prior to its processing by the FrFT. Experimental results that are obtained with a 1.1-kW three-phase squirrel-cage induction motor with broken bars are presented to validate the proposed method.",2010,0, 7964,High Frequency Resolution Techniques for Rotor Fault Detection of Induction Machines,"Motor current signature analysis (MCSA) is the reference method for the diagnosis of medium-large machines in industrial applications. However, MCSA is still an open research topic, as some signatures may be created by different phenomena, it may become sensitive to load and inertia variations, and it may be affected by an oscillating load torque, although suitable data normalization can be applied. Recently, the topic of diagnostic techniques for drives and low to medium size machines is becoming attractive, as the procedure can be embedded in the drive at no additional cost thanks to a dedicated firmware, provided that a suitable computational capacity is available. In this paper, statistical time-domain techniques are used to track grid frequency and machine slip. In this way, either a lower computational cost or a higher accuracy than traditional discrete Fourier transform techniques can be obtained. Then, the knowledge of both grid frequency and machine slip is used to tune the parameters of the zoom fast Fourier transform algorithm, which either increases the frequency resolution, keeping the computational cost constant, or reduces the computational cost, keeping the frequency resolution constant. The proposed technique is validated for rotor faults.",2008,0, 7965,Application of multi-agent in control and fault diagnosis systems,"Multi-agent systems with a distributed structure are an important research field in intelligent control and fault diagnosis. Based on research into the cooperation and coordination functions of multi-agent systems, a systematic structure which integrates control, diagnosis and monitoring is established, and the corresponding models, cooperation strategy and reasoning machine are also designed. This new decentralized system has been successfully used in an actual production line, and provides a new approach to industrial control problems.",2004,0, 7966,A new deterministic fault tolerant wormhole routing strategy for k-ary 2-cubes,"Multicomputers have experienced a rapid development during the last decade. Multicomputers rely on an interconnection network among processors to support the message-passing mechanism. Therefore, the reliability of the interconnection network is very important for the reliability of the whole system. In this paper a new fault-tolerant routing algorithm, which is based on dimension order routing, is proposed for k-ary 2-cubes. Packets are sent to their destination through the XY routing algorithm and, if this transmission is not possible, the YX routing algorithm is applied. The XY routing algorithm nullifies the offset in the ""X"" direction before routing in the ""Y"" direction, but the YX routing algorithm first nullifies the offset in the ""Y"" direction and then starts routing in the ""X"" direction. For evaluation, this algorithm is compared with the Gomez method [1] which uses intermediate nodes for tolerating faults. The results show that our method is preferred, especially in environments where the fault probability is low and the message generation rate is high.",2010,0, 7967,Notice of Retraction
Based on Integrated Adaptive Fuzzy Neural Network Tolerance Analog Circuit Fault Diagnosis,"Notice of Retraction

After careful and considered review of the content of this paper by a duly constituted expert committee, this paper has been found to be in violation of IEEE's Publication Principles.

We hereby retract the content of this paper. Reasonable effort should be made to remove all past references to this paper.

The presenting author of this paper has the option to appeal this decision by contacting TPII@ieee.org.

Based on the multiple-input single-output adaptive fuzzy neural network, this paper designs an integrated adaptive fuzzy neural network based on Takagi-Sugeno type fuzzy rules, adopts a hybrid learning algorithm to train the network connection weights, and optimizes the membership functions. Simulation results verified the effectiveness and feasibility of this method.",2010,0, 7968,A Fault Injection Approach Based on Operational Profile,"Now that most security violations (which may cause vulnerabilities) are triggered by the way software containing potential vulnerabilities interacts with malicious users and their abnormal runtime environment, we generate test cases from the perspective of the user of the target software. We develop an operational profile to generate the test cases, and a security requirement specification is generated to check the data collected from the experiment. This approach emulates faults as accurately as possible in order to trigger new vulnerabilities, and it can inject faults in both random and deterministic modes. Test cases are injected while the application is running, without instrumenting the code.",2009,0, 7969,Reconfigurable Fuzzy Takagi Sugeno Networked Control using Cooperative Agents and Local Fault Diagnosis,"Nowadays, the dynamic behaviour performed by a computer network system shows the possibility of being addressed from the perspective of a control system. This paper discusses the use of Fuzzy Takagi-Sugeno real-time control and local fault diagnosis with a hardware-in-the-loop (HIL) magnetic levitator (maglev) using xPC Target. Here xPC Target is used as the operating environment for real-time processing and to connect a computer network system. In that respect, this paper proposes a control reconfiguration approach based upon a cooperative agent strategy and local fault diagnosis using the Takagi-Sugeno technique. Several stages are studied: how local fault diagnosis produces a warning value, how the computer network is reconfigured, as well as how control techniques are modified using Fuzzy Takagi-Sugeno Control.",2007,0, 7970,Test mode method and strategy for RF-based fault injection analysis for on-chip relaxation oscillators under EMC standard tests or RFI susceptibility characterization,"Nowadays some microcontroller clock circuits have been implemented using relaxation oscillators instead of the quartz approach to achieve cost-effective designs. The oscillator is compensated over temperature and power supply, and trimming during the device test phase adjusts the oscillation frequency on target to overcome process variations. In that way, the relaxation oscillator becomes competitive with regard to ceramic resonator options. However, robust applications such as industrial, automotive and aerospace require aggressive EMC tests reproducing the behavior in these environments. High levels of RF interference introduce frequency deviation, jitter or clock corruption, causing severe faults in the application. This work discusses the impact of RF interference on relaxation oscillators, proposing a strategy to implement a test mode in microcontrollers and other complex SOCs, also allowing characterization and fault debugging. Theoretical analysis and experimental results with a silicon implementation are presented and discussed.",2010,0, 7971,Concurrent and simple digital controller of an AC/DC converter with power factor correction based on an FPGA,"Nowadays, most digital controls for power converters are based on DSPs.
This paper presents a field programmable gate array (FPGA) based digital control for a power factor correction (PFC) flyback AC/DC converter. The main difference from DSP-based solutions is that FPGAs allow concurrent operation (simultaneous execution of all control procedures), enabling high performance and novel control methods. The control algorithm has been developed using a hardware description language (VHDL), which provides great flexibility and technology independence. The controller has been designed to be as simple as possible while maintaining good accuracy and dynamic response. Simulations and experimental results show the feasibility of the method, opening interesting possibilities in power converter control.",2003,0, 7972,A Hardware-Scheduler for Fault Detection in RTOS-Based Embedded Systems,"Nowadays, Real-Time Operating Systems (RTOSs) are often adopted in order to simplify the design of safety-critical applications. However, real-time embedded systems are sensitive to transient faults that can affect the system, causing scheduling dysfunctions and consequently changing the correct system behavior. In this context, we propose a new hardware-based approach able to detect faults that change the tasks' execution time and/or the tasks' execution flow in embedded systems based on an RTOS. To demonstrate the effectiveness and benefits of using the proposed approach, we implemented a hardware prototype named Hardware-Scheduler (Hw-S) that provides real-time monitoring of the Plasma Microprocessor's RTOS in order to detect the above mentioned types of faults. The Hw-S has been evaluated in terms of the introduced area overhead and fault detection capability.",2009,0, 7973,Automatic Spelling Correction Rule Extraction and Application for Spoken-Style Korean Text,"Nowadays, spoken-style text is prevailing because lots of information is being written in spoken style, such as Short Message Service (SMS) messages. However, spoken-style text contains more spelling errors than traditional written-style text. In this paper, we propose a rule-based spelling correction system which can automatically extract spelling correction rules from a correction corpus and apply the extracted rules to spelling errors in input sentences. In order to preserve both high precision and high recall, we devise a candidate-elimination algorithm which determines the appropriate context size of spelling correction rules based on rule accuracy. Experimental results showed that the proposed system can extract 42,537 spelling correction rules and apply the rules to correct spelling errors on the test corpus; thus, the precision is increased from 31.08% to 79.04% on the basis of message units.",2007,0, 7974,Design about Real-time Fault Detection Information System of the Transformer Substation,"Nowadays, the automation of our country's power grid operation and management has reached a high level; by contrast, the automation of the analysis and management of relay protection monitoring, system faults and protection action behavior seems relatively backward.
This paper introduces the design of a fault information system based on a real-time detection platform for a 220 kV transformer substation, including the design of the master station and sub-stations, the realization of communication between the master station and the sub-stations, and the security protection of the system, and briefly clarifies the functions which the system can realize according to the characteristics of the power grid.",2009,0, 7975,UML Specification and Correction of Object-Oriented Anti-patterns,"Nowadays, the detection and correction of software defects has become a very hard task for software engineers. Most importantly, the lack of standard specifications of these software defects along with the lack of tools for their detection, correction and verification forces developers to perform manual modifications; resulting not only in mistakes, but also in costs of time and resources. The work presented here is a study of the specification and correction of a particular type of software defect: Object-Oriented anti-patterns. More specifically, we define a UML based specification of anti-patterns and establish design transformations for their correction. Through this work, we expect to open up the possibility to automate the detection and correction of these kinds of software defects.",2009,0, 7976,An efficient error concealment method for mobile TV broadcasting,"Nowadays, TV broadcasting has found application in mobile terminals; however, due to the prediction structure of video coding standards, compressed video bitstreams are vulnerable to wireless channel disturbances during real-time transmission. In this paper, we propose a novel temporal error concealment method for mobile TV sequences. The proposed method utilizes the continuity feature among adjacent frames, so that both inter and intra error propagation are alleviated. Combined with our proposed fuzzy metric based boundary matching algorithm (FBMA), which provides a more accurate distortion function, experimental results show our proposal achieves better performance under error-prone channels, compared with existing error concealment algorithms.",2009,0, 7977,Modeling Fault Tolerant Services in Service-Oriented Architecture,"Nowadays, the use of Service-Oriented Architectures (SOA) is spreading as a flexible architecture for developing dynamic enterprise systems. In this paper we investigate fault tolerance mechanisms for modeling services in service-oriented architecture. We propose a metamodel (formalized by a type graph) with graph rules for monitoring services and their communications to detect existing faults.",2009,0, 7978,Enhancing fault prediction on automatic foundry processes,"Microshrinkages are known as probably the most difficult defects to avoid in high-precision foundry. This failure renders the casting invalid, with a subsequent cost increase. Modelling the foundry process as an expert knowledge cloud allows machine learning algorithms to foresee the value of a certain variable, in this case, the probability that a microshrinkage appears within a casting. In this paper, we extend previous research on foundry production control by adapting and testing support vector machines and decision trees for the advance prediction of microshrinkages.
Finally, we compare the obtained results and show that decision trees are more suitable than the other counterparts for the prediction of microshrinkages.",2010,0, 7979,mRegistry: a registry representation for fault diagnosis,"Microsoft Windows uses the notion of registry to store all configuration information. The registry entries have associations and dependencies. For example, the paths to executables may be relative to some home directories. The registry, being designed with fast access as one of its objectives, does not explicitly capture these relations. In this paper, we explore a representation that captures the dependencies more explicitly using shared and unifying variables. This representation, called mRegistry, exploits the tree-structured hierarchical nature of the registry, is concept-based and is obtained in multiple stages. mRegistry captures intra-block, inter-block and ancestor-children dependencies (all leaf entries of a parent key in a registry put together as an entity constitute a block, thereby making the block the only child of the parent). In addition, it learns the generalized concepts of dependencies in the form of rules. We show that mRegistry has several applications: fault diagnosis, prediction, comparison, compression etc.",2005,0, 7980,A fast method to minimize L∞ error norm for geometric vision problems,"Minimizing the L∞ error norm for some geometric vision problems provides global optimization using the well-developed algorithm called SOCP (second order cone programming). Because the error norm belongs to the quasi-convex functions, the bisection method is utilized to attain the global optimum. It tests the feasibility of the intersection of all the second order cones due to measurements, repeatedly adjusting the global error level. The computation time increases according to the size of the measurement data since the number of second order cones for the feasibility test inflates correspondingly. We observe in this paper that not all the data need be included for the feasibility test because we minimize the maximum of the errors; we may use only a subset of the measurements to obtain the optimal estimate, and therefore we obtain a decreased computation time. In addition, by using the L∞ image error instead of the L2 Euclidean distance, we show that the problem is still a quasi-convex problem and can be solved by the bisection method but with linear programming (LP). Our algorithm and experimental results are provided.",2007,0, 7981,A Model-Based Soft Errors Risks Minimization Approach,"Minimizing the risk of system failure in any computer structure requires identifying those components whose failure is likely to impact on system functionality. Clearly, the degree of protection or prevention required against faults is not the same for all components. Tolerating soft errors can be much improved if critical components can be identified at an early design phase and measures are taken to lower their criticalities at that stage. This improvement is achieved by presenting a criticality ranking (among the components) formed by combining a prediction of faults, their consequences, and a propagation of errors at the system modeling phase, and by pointing out ways to apply changes in the model to minimize the risk of degradation of desired functionalities.
Case study results are given to validate the approach.",2009,0, 7982,Automatic registration of cardiac PET/CT for attenuation correction,"Misalignments of images in cardiac Positron Emission Tomography (PET)-CT imaging may lead to erroneous Attenuation Correction (AC) and mis-diagnosis. Such misalignment may be corrected manually prior to reconstruction and clinical assessment; however this step is laborious and may be subject to operator variability. The aim of this study is to assess the performance of an algorithm to automatically align CT to PET prior to AC. We conclude that automatic registration is a viable option for the task of aligning cardiac CT and PET for AC, with a consistency comparable to that of using manual alignment.",2008,0, 7983,Byzantine Fault-Tolerant Web Services for n-Tier and Service Oriented Architectures,"Mission-critical services must be replicated to guarantee correctness and high availability in spite of arbitrary (Byzantine) faults. Traditional Byzantine fault tolerance protocols suffer from several major limitations. Some protocols do not support interoperability between replicated services. Other protocols provide poor fault isolation between services, leading to cascading failures across organizational and application boundaries. Moreover, traditional protocols are unsuitable for applications with tiered architectures, long-running threads of computation, or asynchronous interaction between services. We present Perpetual, a protocol that supports Byzantine fault-tolerant execution of replicated services while enforcing strict fault isolation. Perpetual enables interaction between replicated services that may invoke and process remote requests asynchronously in long-running threads of computation. We present a modular implementation, an Axis2 Web Services extension, and experimental results that demonstrate only a moderate overhead due to replication.",2008,0, 7984,Cross layer error-control scheme for video quality support over 802.11b wireless LAN,"Mitigating the impact of errors on video quality over wireless networks has been a major issue of concern, requiring highly efficient and effective schemes. The dynamic and heterogeneous nature of the wireless network requires a highly sophisticated approach to mitigate the impact of transmission errors on video quality. The trade-off between delay and video quality should be considered while designing such applications to reasonably maintain video quality over a wireless channel. In order to significantly reduce the impact of high bit error rates and error bursts on transmitted video, more efficient error correction schemes are needed. This paper presents an approach using forward error correction and a cross-layer mechanism which dynamically adapts to the channel condition to recover lost packets in order to enhance the perceived video quality. The scenario has been simulated using NS-2 and shows a dramatic improvement in video quality.",2009,0, 7985,The bone slot effect study of PI procedure for craniosynostosis correction plan based on finite element method,"Objective: The main purpose of craniosynostosis correction is to reopen cranial sutures with some bone slots in order to free the skull, constrained by the prematurely closed cranium, to transform with brain development. Skull rigidity depends heavily on the shape of the bone slots; the main purpose of this paper is to obtain a suitable bone slot shape to decrease skull rigidity.
The finite element method is utilized to calculate the stress distribution and deformation clouds of different schemes that vary in bone slot shape. Methods: A congenital craniosynostosis case is selected to design the surgery treatment plan. A modified PI-shape craniosynostosis correction plan is used, the bone slots used for reconstructing the cranial suture are varied to simulate the stress distribution, and a best scheme is advanced. The cranial bone and endocranium 3D models are reconstructed from CT data, and then tetrahedron element models for finite element analysis are established. Since only the instantaneous stress is taken into account when the slot shape changes, the viscoelastic material properties of the cranial bone and endocranium are ignored here. Results: The skull rigidity differs noticeably among the various surgery schemes. Different bone slot shapes induce different cranium stress distributions and skull rigidity. Appropriate bone slots can give the cranium the right stress values, distribution and displacements. Conclusion: The results of stress distribution and deformation of cranial bone under the intracranial pressure after the craniosynostosis correction operation can be obtained by the finite element method. These results reflect the ability of the cranial bone to expand with the growth of the brain tissues. The finite element method is an available way with which surgical predictions can be made to guide surgeons in the decision of improving surgical treatment.",2010,0, 7986,Large-scale fault isolation,"Of the many distributed applications designed for the Internet, the successful ones are those that have paid careful attention to scale and robustness. These applications share several design principles. In this paper, we illustrate the application of these principles to common network monitoring tasks. Specifically, we describe and evaluate 1) a robust distributed topology discovery mechanism and 2) a mechanism for scalable fault isolation in multicast distribution trees. Our mechanisms reveal a different design methodology for network monitoring-one that carefully trades off monitoring fidelity (where necessary) for more graceful degradation in the presence of different kinds of network dynamics.",2000,0, 7987,Securing AES Implementation against Fault Attacks,"In the smart card environment, speed and memory optimization of cryptographic algorithms are an ongoing preoccupation. In addition, there is the necessity to protect the device against various attacks. In this paper we present a fault attack detection scheme for the AES using digest values. They are deduced from the mathematical description of each AES individual transformation. The security of our countermeasure is proved in a realistic fault model. Moreover, we show that it can be combined with data masking to efficiently thwart both FA and DPA. Finally, implementations of our method are presented, showing that it can be an interesting alternative to the traditional doubling countermeasure method.",2009,0, 7988,Research on Web-Based Multi-Agent System for Aeroengine Fault Diagnosis,"Based on an analysis of the current state of aeroengine remote diagnosis, a collaborative mechanism based on multi-agents was introduced to overcome the obstacles of conventional remote fault diagnosis.
The model of aeroengine remote collaborative diagnosis based on multi-agents was put forward from an analysis of the positional relationship of all agents in the collaborative environment and the relationship between collaborative agents and roles in the course of collaboration. Some key technologies such as the coordination mechanism, task assignment mechanism, agent interaction mechanism, case-based reasoning (CBR) in the treatment agent, and the analytic hierarchy process (AHP) in decision analysis were discussed, and specific methods of realization were given concretely. Based on these, a Web-based prototype system for aeroengine fault diagnosis was developed on the JADE (Java Agent DEvelopment Framework) platform. The process of system implementation and a case example of fault diagnosis were presented to illustrate and prove the proposed system's applicability. Running results show the feasibility and reliability of the framework, which will be helpful to integrate the aeroengine diagnosis knowledge, improve the diagnosis efficiency effectively and decrease the aeroengine diagnosis cost remarkably.",2008,0, 7989,A New Approach for Transient Fault Injection Using Symbolic Simulation,"One effective fault injection approach involves instrumenting the RTL in a controlled manner to incorporate fault injection, and evaluating the behaviour of the faulty RTL whilst running some benchmark programs. This approach relies on checking the effects of faults whilst the design is executing a specific binary image, and therefore the true impact of the fault is limited by the shadow of the program image. Another limitation of this approach is the use of extra hardware for fault injection which is not needed during the fault-free running of the design. The aim of this paper is to propose a new approach for transient fault injection based on symbolic simulation and model checking that circumvents the problems experienced due to application dependent fault injection and RTL modification. In this paper we present our approach and analyse the effect of transient faults on the fetch unit of a 32-bit multi-cycle RISC processor. Our approach can be applied generally to any faulty design, not necessarily a processor.",2008,0, 7990,Fault models and test generation for hardware-software covalidation,Mixed hardware-software systems constitute a strong paradigm shift for system validation. The main barriers to overcome are finding the right fault models and optimizing the validation flow. This article presents a research summary of these issues.,2003,0, 7991,Cache management of dynamic source routing for fault tolerance in mobile ad hoc networks,"Mobile ad hoc networks have gained more and more research attention. They provide wireless communications without location limitations and pre-built fixed infrastructures. Because of the absence of any static support structure, ad hoc networks are prone to link failure. This has become the most serious cause of throughput degradation when using TCP over ad hoc networks. Some researchers chose dynamic source routing (DSR) as the routing protocol and showed that disabling the assignment of a route directly from cache gives better performance. We introduce an efficient cache management mechanism to increase the TCP throughput by replying with a route directly from the cache of DSR and performing cache recovery when a host failure has occurred. We use simulations to compare the performance of our algorithm with the original DSR under the link failure prone environment due to mobility.
We also provide the simulation results when host failures are considered in the ad hoc networks.",2001,0, 7992,An Efficient Forward and Backward Fault-Tolerant Mobile Agent System,"A mobile agent is a special program that can migrate among networks and hosts to execute the tasks of user commands. While executing a task, the mobile agent can convey its data, state and program code to another host in order to autonomously execute and continue the task there. When a software or hardware fault or a network problem occurs while the mobile agent is executing its task, there are two possible conditions: 1. Users continuously wait for the reply from the agent, but will never receive it because of faults affecting the agent in the networks or hosts. 2. Users assign a new agent to restart the former task, assuming the former agent has been lost, although the former agent was only delayed by network or host congestion. This causes the two agents to execute the same task. Therefore, fault detection and recovery of the mobile agent are important issues to be discussed. This paper proposes a forward and backward failure detection and recovery method in which the task agent reports its task progress at the present stage to the former and latter agents, and the agents exchange their messages for the present stage. This is more accurate than the method in the previous work for the task at present because it can reduce the loading of task fault reports on the congested network.",2008,0, 7993,A Survey of Fault Tolerance Techniques in Mobile Agents and Mobile Agent Systems,"Mobile agent technology is one of the fastest growing and emerging areas for application development in the past few years. A mobile agent is a computer program that acts autonomously on behalf of a user/application and travels through a network of diverse machines. For mobile agent technology to survive, it is necessary that the mobile agents should be reliable. In this perspective, fault tolerance for mobile agents and mobile agent systems is of substantial importance. This survey paper evaluates fault tolerance techniques used in mobile agents and mobile agent systems. These techniques are evaluated on the basis of defined parameters. The survey concludes that MoCA and CHAMELEON are appropriate techniques for fault tolerance in mobile agents and mobile agent systems respectively.",2009,0, 7994,Fault Tolerance in Mobile Agent Systems by Cooperating the Witness Agents,"Mobile agents travel through servers to perform their programs, and fault tolerance is fundamental and important in their itinerary. In this paper, existing methods of fault tolerance in mobile agents are considered and described. Then a method is considered which uses cooperating agents for fault tolerance and for detecting server and agent failures, involving three types of agents: the actual agent, which performs programs for its owner; the witness agent, which monitors the actual agent and the witness agent after itself; and the probe, which is sent by the witness agent to recover the actual agent or the witness agent. As the actual agent travels through servers, it creates the witnesses. Scenarios of failure and recovery of server and agent are discussed in the method. While the actual agent is executing, the witness agents are increased by the addition of servers.
The proposed scheme minimizes the number of witness agents as far as possible, because through consideration and comparison it can be concluded that keeping all of the witness agents on the initial servers is not necessary. Simulation of this method is done by C-Sim.",2006,0, 7995,Model-Based Development of Fault-Tolerant Embedded Software,"Model based development has become the state of the art in software engineering. Unfortunately, there are only a few model-based tools available for the design of fault-tolerant embedded software: while there exist many different code generators for application code, the generation of system aspects like process management, communication in a distributed system and fault-tolerance mechanisms is very complex due to the heterogeneity of the embedded systems. We think that the design of an all-embracing code generator that supports a priori all platforms (the combination of hardware, operating system and programming language) is impossible. Rather, it is necessary to concentrate on a code generator architecture that allows an easy extension of the code generation ability. In this paper we present one possible solution: generating the code on the basis of templates that solve different recurring aspects of safety-critical embedded software. By the use of a technique similar to preprocessor macros, these templates can be implemented in an application independent fashion. The code generator can then adapt these templates to the application by extracting the necessary information out of the model provided by the application developer. A first realization of this approach is also mentioned in this paper.",2006,0, 7996,"Correction to ""Improving business communication"" on page 40 of the Feb. 2007 issue of IEEE Microwave Magazine the MicroBusiness column","Municipal Wi-Fi networks are currently a hot topic (with over 4 million Google postings as of December 2006). Regrettably, however, lobbying by incumbent service providers succeeded in some states in enacting legislation that prevents municipalities from competing. While this significantly limits public Wi-Fi opportunities, at least for the time being, viable business plans present the most significant challenge for two main reasons: the inclusion of a free or reduced-rate ""digital inclusion"" service plan and the competitive pressures from alternative services, e.g., cell phone data and DSL. Business risk exposure is highest for network builders and operators and lowest for equipment suppliers.",2007,0, 7997,Effective Diagnostic Pattern Generation Strategy for Transition-Delay Faults in Full-Scan SOCs,"Nanometric circuits and systems are increasingly susceptible to delay defects. This paper describes a strategy for the diagnosis of transition-delay faults in full-scan systems-on-a-chip (SOCs). The proposed methodology takes advantage of a suitably generated software-based self-test test set and of the scan-chains included in the final SOC design. Effectiveness and feasibility of the proposed approach were evaluated on a nanometric SOC test vehicle including an 8-bit microcontroller, some memory blocks and an arithmetic core, manufactured by STMicroelectronics.
Results show that the proposed technique can achieve high diagnostic resolution while maintaining a reasonable application time.",2009,0, 7998,A new narrowband active noise control system in the presence of sensor error,"Narrowband active noise control (ANC) systems have many real-life applications where the noise signals generated by rotating machines are modeled as sinusoidal signals in additive noise. However, when the timing signal sensor, such as a tachometer that is used to extract the signal frequencies, and the cosine wave generator contain errors, the reference signal frequencies fed to each ANC channel will then be different from the true frequencies of the noise signal. This difference is referred to as frequency mismatch (FM). In this paper, through extensive simulations we demonstrate that the performance capabilities of a conventional narrowband ANC system using the filtered-X LMS (FXLMS) algorithm degrade significantly even for an FM as small as 1%. Next, we propose a new narrow-band ANC system that will successfully compensate for the performance degradations due to FM. The amplitude/phase adjustment and the FM mitigations are performed simultaneously in a harmonic fashion such that the influence of the FM (sensor error) can be removed almost completely. Simulation results are provided to demonstrate the effectiveness of the proposed new system.",2004,0, 7999,NFMi: an inner-domain network fault management system,"Network fault management has been an active research area for a long period of time because of its complexity, and the returns it generates for service providers. However, most fault management systems are currently custom-developed for a particular domain. As communication service providers continuously add greater capabilities and sophistication to their systems in order to meet the demands of a growing user population, these systems have to manage a multi-layered network along with its built-in legacy logical processing procedure. Stream processing has been receiving a lot of attention to deal with applications that generate large amounts of data in real-time at varying input rates and to compute functions over multiple streams, such as network fault management. In this paper, we propose an integrated inter-domain network fault management system for such a multi-layered network based on data stream and event processing techniques. We discuss various components in our system and how data stream processing techniques are used to build a flexible system for a sophisticated real-world application. We further identify a number of important issues related to data stream processing during the course of the discussion of our proposed system, which will further extend the boundaries of data stream processing.",2005,0, 8000,Least order fault and model detection using multi-models,"Multi-model based fault detection is often a viable alternative to various multi-model based state estimation techniques using banks of Kalman filters. A main advantage of the approach based on fault detection techniques is the possibility to use detectors having low order dynamics with disturbance decoupling capabilities. The proposed synthesis algorithm of detectors relies on numerically reliable rational nullspace techniques enhanced with optimal tuning of detection sensitivities.
The applicability of the multi-model based approach to solve fault identification problems is illustrated by solving a flight actuator fault detection problem with simultaneous faults.",2009,0, 8001,Fault tolerant permanent magnet synchronous machine for electric power steering systems,"Multiphase motor drives possess many advantages over the traditional three-phase motor drives, such as reducing the amplitude and increasing the frequency of torque pulsation, reducing the stator current per phase without increasing the voltage per phase, lowering the dc link current harmonics and offering higher reliability. By increasing the number of phases it is also possible to increase the torque per ampere for the same volume and to improve the fault tolerance of the drive. The present paper approaches in a comparative manner two winding connections for a 12 stator slots/10 rotor pole permanent magnet synchronous machine. The magnetic field and the developed electromagnetic torque will be analyzed.",2008,0, 8002,Clock Domain Crossing Fault Model and Coverage Metric for Validation of SoC Design,"Multiple asynchronous clock domains have been increasingly employed in system-on-chip (SoC) designs for different I/O interfaces. Functional validation is one of the most expensive tasks in the SoC design process. Simulation on register transfer level (RTL) is still the most widely used method. It is important to quantitatively measure the validation confidence and progress for clock domain crossing (CDC) designs. In this paper, we propose an efficient method for definition of CDC coverage, which can be used in RTL simulation for a multi-clock domain SoC design. First, we develop a CDC fault model to represent the actual effect of metastability. Second, we use a temporal dataflow graph (TDFG) to propagate the CDC faults to observable variables. Finally, CDC coverage is defined based on the CDC faults and their observability. Our experiments on a commercial IP demonstrate that this method is useful to find CDC errors early in the design cycles.",2007,0, 8003,Cryptographic Test Correction,"Multiple choice questionnaires (MCQs) are an assessment procedure invented in 1914. Today, they're widely used in education, opinion polls, and elections. When we first encountered MCQs in the university environment, we faced the daunting challenge of having to grade 600 of them. This article explores the possibility of safely transferring part of an MCQ's correction burden to the examinee - in this case, students - when sophisticated technological means such as optical character recognition systems aren't available. The MCQ grader uses a scoring algorithm C to compute the student's final mark. We call such a procedure a cryptographic test correction (CTC) scheme.",2008,0, 8004,Effective Static Analysis to Find Concurrency Bugs in Java,"Multithreading and concurrency are core features of the Java language. However, writing a correct concurrent program is notoriously difficult and error prone. Therefore, developing effective techniques to find concurrency bugs is very important. Existing static analysis techniques for finding concurrency bugs either sacrifice precision for performance, leading to many false positives, or require sophisticated analysis that incurs significant overhead. In this paper, we present a precise and efficient static concurrency bug detector building upon the Eclipse JDT and the open source WALA toolkit (which provides advanced static analysis capabilities).
Our detector uses different implementation strategies to consider different types of concurrency bugs. We either utilize JDT to syntactically examine source code, or leverage WALA to perform interprocedural data flow analysis. We describe a variety of novel heuristics and enhancements to existing analysis techniques which make our detector more practical, in terms of accuracy and performance. We also present an effective approach to create inter-procedural data flow analysis using WALA for complex analysis. Finally, we justify our claims by presenting the results of applying our detector to a range of real-world applications and comparing our detector with other tools.",2010,0, 8005,Reliability Growth Modeling for Software Fault Detection Using Particle Swarm Optimization,"Modeling the software testing process to obtain the predicted faults (failures) depends mainly on representing the relationship between execution time (or calendar time) and the failure count or accumulated faults. A number of unknown function parameters such as the mean failure function μ(t;β) and the failure intensity function λ(t;β) are estimated using either least-square or maximum likelihood estimation techniques. Unfortunately, the model parameters are normally in nonlinear relationships. This makes traditional parameter estimation techniques suffer many problems in finding the optimal parameters to tune the model for a better prediction. In this paper, we explore our preliminary idea of using the particle swarm optimization (PSO) technique to help in solving the reliability growth modeling problem. The proposed approach will be used to estimate the parameters of the well known reliability growth models such as the exponential model, power model and S-shaped models. The results are promising.",2006,0, 8006,Self Adaptive Application Level Fault Tolerance for Parallel and Distributed Computing,"Most application level fault tolerance schemes in the literature are non-adaptive in the sense that the fault tolerance schemes incorporated in applications are usually designed without incorporating information from system environments such as the amount of available memory and the local or network I/O bandwidth. However, from an application point of view, it is often desirable for fault tolerant high performance applications to be able to achieve high performance under whatever system environment they execute in, with as low a fault tolerance overhead as possible. In this paper, we demonstrate that, in order to achieve high reliability with as low a performance penalty as possible, fault tolerant schemes in applications need to be able to adapt themselves to different system environments. We propose a framework under which different fault tolerant schemes can be incorporated in applications using an adaptive method. Under this framework, applications are able to choose near optimal fault tolerance schemes at run time according to the specific characteristics of the platform on which the application is executing.",2007,0, 8007,Mesh simplification algorithm based on absolute curvature-weighted quadric error metrics,"Most existing mesh simplification algorithms ignore some important geometric features during the simplification process, especially in the low-level models. A novel edge collapse based mesh simplification algorithm, which utilizes the absolute curvature of the simplified vertex as one factor of the simplification error metrics, is presented in this paper.
Since the half-edge collapse is adopted in the proposed algorithm, the storage requirements can be efficiently reduced. Experimental results demonstrate that the proposed algorithm can preserve the geometric features of the original mesh very well while maintaining the simplification error.",2010,0, 8008,Considering the Dependency of Fault Detection and Correction in Software Reliability Modeling,"Most existing software reliability growth models (SRGMs) focused on the fault detection process, while the fault correction process was ignored by assuming that the detected faults can be removed immediately and perfectly. However, these assumptions are not realistic. The fault correction process is a critical part in software testing. In this paper, we studied the dependency of the fault detection and correction processes in view of the number of faults. The ratio of corrected fault number to detected fault number is used to describe the dependency of the two processes, which appears S-shaped. Therefore, we adopt the logistic function to represent the ratio function. Based on this function, both fault correction and detection processes are modeled. The proposed models are evaluated by a data set of software testing. The experimental results show that the new models fit the data set of fault detection and correction processes very well.",2008,0, 8009,FIMD-MPI: a tool for injecting faults into MPI application,"Parallel computing is seeing increasing use in critical applications. The need therefore arises to test the robustness of parallel applications in the presence of exceptional conditions, or faults. Communication-software-based fault injection is an extremely flexible approach to robustness testing in message-passing parallel computers. A fault injection methodology and tool that use this approach are presented. The tool, known as FIMD-MPI, allows injection of faults into MPI-based applications. The structure and operation of FIMD-MPI are described and the use of the tool is illustrated on an example fault-tolerant MPI application.",2000,0, 8010,Tolerating Concurrency Bugs Using Transactions as Lifeguards,"Parallel programming is hard, because it is impractical to test all possible thread interleavings. One promising approach to improve a multi-threaded program's reliability is to constrain a production run's thread interleavings in such a way that untested interleavings are avoided as much as possible. Such an approach would avoid hard-to-test rare thread interleavings in production runs, and thereby improve correctness. However, a key challenge in realizing this goal is in determining thread interleaving constraints from the tested correct interleavings, and enforcing them efficiently in production runs. In this paper, we propose a new method to determine thread interleaving constraints from the tested interleavings in the form of lifeguard transactions (LifeTxes). An untested code region is initially contained in a single LifeTx. As the code region is tested over more thread interleavings, its original LifeTx is automatically split into multiple smaller LifeTxes so that the newly tested interleavings are permitted in production runs.
To efficiently enforce LifeTx constraints in production runs, we propose a hardware design similar to the eager conflict detection capability that exists in conventional hardware transactional memory (TM) systems, but without the need for versioning, rollback and unbounded TM support. We show that 11 out of 14 real concurrency bugs in programs like Apache, MySQL and Mozilla could be avoided using the proposed approach for a negligible performance overhead.",2010,0, 8011,Parameterization of a model-based 3D whole-body PET scatter correction,"Parameterization of a fast implementation of the Ollinger model-based 3D scatter correction method for PET has been evaluated using measured phantom data from a GE PET Advance™. The Ollinger method explicitly estimates the 3D single-scatter distribution using measured emission and transmission data and then estimates the multiple-scatter as a convolution of the single-scatter. The main algorithm difference from that implemented by Ollinger (1996) is that the scatter correction does not explicitly compute scatter for azimuthal angles; rather, it determines 2D scatter estimates for data within 2D ""super-slices"" using as input data from the 3D direct-plane (non-oblique) slices. These axial super-slice data are composed of data within a parameterized distance from the center of the super-slice. Such a model-based method can be parameterized, and the choice of parameterization may significantly change the behavior of the algorithm. Parameters studied in this work included transaxial image downsampling, number of detectors to calculate scatter to, multiples kernel width and magnitude, number and thickness of super-slices and number of iterations. Measured phantom data included imaging of the NEMA NU-2001 image quality phantom, the IQ phantom with 2 cm extra water-equivalent tissue strapped around its circumference and an attenuation phantom (20 cm uniform cylinder with bone, water and air inserts) with two 8 cm diameter water-filled non-radioactive arms placed by its side. For the IQ phantom data, a subset of NEMA NU-2001 measures was used to determine the contrast-to-noise, lung residual bias and background variability. For the attenuation phantom, ROIs were drawn on the nonradioactive compartments and on the background. These ROIs were analyzed for inter- and intra-slice variation, background bias and compartment-to-background ratio. Results: In most cases, the algorithm was most sensitive to multiple-scatter parameterization and least sensitive to transaxial downsampling. The algorithm showed convergence by the second iteration for the metrics used in this study. Also, the range of the magnitude of change in the metrics analyzed was small over all changes in parameterization. Further work to extend these results to other more realistic phantom and clinical datasets is warranted.",2001,0, 8012,Concurrent Error Detection in Digit-Serial Normal Basis Multiplication over GF(2m),"Parity prediction schemes have been widely studied in the past. Recently, it has been demonstrated that this prediction scheme can achieve fault-secureness in arithmetic circuits for stuck-at and stuck-open faults. For most cryptographic applications, encryption/decryption algorithms rely on computations in very large finite fields. The hardware implementation may require millions of logic gates and this may lead to the generation of erroneous outputs by the multiplier.
In this paper, a concurrent error detection (CED) technique is used in the digit-serial normal basis multiplier over finite fields of characteristic two. It is shown that all types of normal basis multipliers possess the same parity prediction function.",2008,0, 8013,An adaptive PMU based fault detection/location technique for transmission lines. II. PMU implementation and performance evaluation,"Part I of this paper set sets forth theory and algorithms for an adaptive fault detection/location technique, which is based on phasor measurement units (PMU). This paper is Part II of this paper set. A new timing device named Global Synchronism Clock Generator (GSCG), including its hardware and software design, is described in this paper. Experimental results show that the synchronization error of the rising edge between the two GSCG clocks is well within 1 ps when the clock frequency is below 2.499 MHz. The measurement results between Chung-Jeng and Chang-Te 161 kV substations of Taiwan Power company by PMU equipped with GSCG are presented and the accuracy for estimating parameters of the line is verified. The newly developed DFT based method (termed smart discrete Fourier transform, SDFT) and line parameter estimation algorithm are combined with the PMU configuration to form the adaptive fault detector/locator system. Simulation results have shown that the SDFT method can extract exact phasors in the presence of frequency deviation and harmonics. The parameter estimation algorithm can also trace exact parameters very well. The SDFT method and parameter estimation algorithm can achieve accuracies of up to 99.999% and 99.99%, respectively. The EMTP is used to simulate a 345 kV transmission line of the Taipower System. Results have shown that the proposed technique yields correct results independent of fault types and is insensitive to the variation of source impedance, fault impedance and line loading. The accuracy of fault location estimation achieved can be up to 99.9% for many simulated cases. The proposed technique will be very suitable for implementation in an integrated digital protection and control system for transmission substations.",2000,0, 8014,A single error correction double burst error detection code,"Particle radiation induced single event upset (SEU), if undetected in computer memory, can have potentially catastrophic effects. An improved error indicating system capable of correcting single errors and detecting multiple adjacent bit burst errors is discussed. This system uses the minimum number of redundant bits possible, and in some cases the number of bits is equivalent to simple parity checking.",2003,0, 8015,Recovering Connected Error Region Based on Adaptive Error Concealment Order Determination,"Parts of compressed video streams may be lost or corrupted when being transmitted over bandwidth limited networks and wireless communication networks with error-prone channels. Error concealment (EC) techniques are often adopted at the decoder side to improve the quality of the reconstructed video. Under conditions in which a high rate of data packets arrives at the decoder corrupted, it is likely that the incorrectly decoded macro-blocks (MBs) are concentrated in a connected region, where important spatial reference information is lost. The conventional EC methods usually carry out the block concealment following a lexicographic scan (from top to bottom and from left to right of the image), which would make the methods ineffective for the case that the corrupted blocks are grouped in a connected region.
In this paper, a temporal error concealment method, adaptive error concealment order determination (AECOD), is proposed to recover connected corrupted regions. The processing order of an MB in a connected corrupted region is adaptively determined by analyzing the external boundary patterns of the MBs in its neighborhood. The performances, on several video sequences, of the proposed EC scheme have been compared with those obtained by using other error concealment methods reported in the literature. Experimental results show that the AECOD algorithm can improve the recovery performance with respect to the other considered EC methods.",2009,0, 8016,A new approach in patient motion correction for cardiac SPECT: A simulation study,"Patient motion artifacts created in cardiac SPECT imaging can lead to misinterpretation of the images, resulting in false diagnoses. This simulation study proposes a new technique for patient motion correction (MC), where we utilize a modified template projection/reconstruction (TPR) algorithm to perform a voxel-by-voxel correction to the original image. Using NCAT, we developed two female phantoms with large breasts containing a non-beating heart (heart: background = 5:1). Phantom 1 had a healthy heart, and phantom 2 had a heart with a small (10%) perfusion defect in the lateral wall (severity = 50%). The SimSET code was used to perform simulations for both phantoms modeling cardiac SPECT acquisitions with Tc-99m, a 128 × 128 matrix, and 60 camera stops. In addition to two standard (no motion) acquisitions (ST) for each phantom, seven acquisitions with different degrees of phantom motion were created by manually shifting a selected number of projections in a given direction (motion ranged from 8 to 22 mm). MC images (MCI) were created using a modified TPR, where the projected template was adjusted to match the motion detected in the experimental projections by aligning the center of mass in each projection. All reconstructions were performed using OSEM with resolution recovery and attenuation correction. For all simulated movements, the MCI images exhibited improvements in both standard deviation (SD) and mean accuracy relative to the uncorrected experimental reconstructions (ER). On average, the accuracies calculated for ER, MCI and the ST reconstructions were 68%, 77%, and 76%, respectively. The average SD for ER, MCI and the ST reconstructions were 5.2, 4.0, and 4.0, respectively. Our proposed technique offers a voxel-by-voxel motion correction, which provides improved image accuracy and standard deviation of counts relative to the uncorrected images and, in many cases, the images created without patient motion.",2009,0, 8017,Investigation of motion induced errors in scatter correction for the HRRT brain scanner,"Patient motion during PET scans introduces errors in the attenuation correction and image blurring leading to false changes in regional radioactivity concentrations. However, the potential effect that motion has on simulation-based scatter correction is not fully appreciated. Specifically for tracers with high uptake close to the edge of the head (e.g. scalp and nose) as observed with [11C]Verapamil, mismatches between transmission and emission data can lead to significant quantification errors and image artefacts due to over scatter correction. These errors are linked with unusually high values in the scatter scaling factors (SSF) returned during the single scatter simulation process implemented in the HRRT image reconstruction.
Reconstruction of the μ-map with TXTV (an alternative μ-map reconstruction using non-linear filtering rather than brain segmentation and scatter correction of the transmission data) was found to improve the scatter simulation results for [11C]Verapamil and [18F]FDG. The errors from patient motion were characterised and quantified through simulations by applying realistic transformations to the attenuation map (μ-map). This generated inconsistencies between the emission and transmission data, and introduced large over-corrections of scatter similar to some cases observed with [11C]Verapamil. Automated Image Registration (AIR) based motion correction was also implemented, and found to remove the artifact and recover quantification in dynamic studies after aligning all the PET images to a common reference space.",2010,0, 8018,Correction of patient movement with a phase-only correlation method in a SPECT study,"Patient movement during SPECT and PET data acquisition causes serious distortions in reconstructed images. In most conventional methods a correction of this movement is conducted in the sinogram space. However, the direction of movement occurs at an angle that is parallel to collimator holes, making it difficult to detect the movement of an object. This paper proposes a new correction method of patient movement. This method basically uses a phase-only correlation method. We applied the phase-only correlation method to both a sinogram space and image space. That is, we apply a one dimensional Fourier transform to a measured sinogram and detect an angle at which a movement occurs with the phase information of neighboring projection data. After we detect the angle at which a movement takes place, we split the sinogram into two angular regions and reconstruct images corresponding to these angular regions. We also apply a one-dimensional Fourier transform to these reconstructed images and estimate the extent of movement. Next we replace the wrong projection data with correct data that are calculated by forward projection of the image positioned correctly. By reconstructing an image with a corrected sinogram we could obtain a distortion free image. This paper showed the validity of our method with some simulations.",2010,0, 8019,Speedup of data access using error correcting codes in peer-to-peer networks,"Peer-to-peer networks are networks of heterogeneous computers sharing files or services. This paper proposes to use a data storage scheme using maximum distance separable codes to optimize the dissemination of the data in the network in order to globally enhance the data access.",2003,0, 8020,Designing Reliable Architecture for Stateful Fault Tolerance,"Performance and fault tolerance are two major issues that need to be addressed while designing highly available and reliable systems. The network topology or the notion of connectedness among the network nodes defines the system communication architecture and is an important design consideration for fault tolerant systems. A number of fault tolerant designs for specific multi-processor architectures exist in the literature, but none of them discriminates between stateless and stateful failover. In this paper, we propose a reliable network topology and a high availability framework which is tolerant up to a maximum of k node faults in a network and is designed specifically to meet the needs of stateful failover.
Assuming the nodes in the network are capable of handling multiple processes, through our design we have been able to prove that in the event of k node failures the load can be uniformly distributed across the network - ensuring load balance. We also provide a useful characterization for the network, which under the proposed framework ensures one hop communication between the required nodes.",2006,0, 8021,Combined forward error control and packetized zerotree wavelet encoding for transmission of images over varying channels,"One method of transmitting wavelet based zerotree encoded images over noisy channels is to add channel coding without altering the source coder. A second method is to reorder the embedded zerotree bitstream into packets containing a small set of wavelet coefficient trees. We consider a hybrid mixture of these two approaches and demonstrate situations in which the hybrid image coder can outperform either of the two building block methods, namely on channels that can suffer packet losses as well as statistically varying bit errors.",2000,0, 8022,Rail defect diagnosis using wavelet packet decomposition,"One of the basic tasks in railway maintenance is inspection of the rail in order to detect defects. Rail defects have different properties and are divided into various categories with regard to the type and position of defects on the rail. This paper presents an approach for the detection of defects in rail based on wavelet transformation. Multiresolution signal decomposition based on wavelet transform or wavelet packet provides a set of decomposed signals at distinct frequency bands, which contain independent dynamic information due to the orthogonality of wavelet functions. Wavelet transform and wavelet packet in tandem with various signal processing methods, such as autoregressive spectrum, energy monitoring, fractal dimension, etc., can produce desirable results for condition monitoring and fault diagnosis. Defect detection is based on decomposition of the signal acquired by means of magnetic coil and Hall sensors from the railroad rail, and then applying wavelet coefficients to the extracted signals. Comparing these extracted coefficients provides a way of distinguishing healthy rail from defective rail. Experimental results are presented for healthy rail and some of the more common defects. Deviation of wavelet coefficients in the healthy rail case from the case with defects shows that it is possible to classify healthy rails from defective ones.",2003,0, 8023,Anomaly-based Fault Detection System in Distributed System,"One of the important design criteria for distributed systems and their applications is their reliability and robustness to hardware and software failures. The increase in complexity, interconnectedness, dependency and the asynchronous interactions between the components that include hardware resources (computers, servers, network devices), and software (application services, middleware, web services, etc.) makes the fault detection and tolerance a challenging research problem. In this paper, we present an innovative approach based on statistical and data mining techniques to detect faults (hardware or software) and also identify the source of the fault. In our approach, we monitor and analyze in realtime all the interactions between all the components of a distributed system. We used data mining and supervised learning techniques to obtain the rules that can accurately model the normal interactions among these components.
Our anomaly analysis engine will immediately produce an alert whenever one or more of the interaction rules that capture normal operations are violated due to a software or hardware failure. We evaluate the effectiveness of our approach and its performance in detecting software faults that we inject asynchronously, and compare the results for different noise levels.",2007,0, 8024,Application-Level Fault-Tolerance Solutions for Grid Computing,"One of the key functionalities provided by Grid systems is the remote execution of applications. This paper introduces a research proposal on fault-tolerance mechanisms for the execution of sequential and message-passing parallel applications on the Grid. A service-based architecture called CPPC-G is proposed. The CPPC (Controller/Precompiler for Portable Checkpointing) framework is used to insert checkpointing instrumentation into the application code. CPPC-G services will be in charge of the submission and monitoring of the application execution, management of checkpoint files generated by CPPC-enabled applications, and detection and automatic restart of failed executions. The development of the CPPC-G architecture will involve research in different areas such as storage and management of data files (checkpoint files); automatic selection of suitable computing resources; reliable detection of execution failures and robustness issues to make the architecture fault-tolerant itself.",2008,0, 8025,Multiagent technology for fault tolerance and flexible control,"One of the main characteristics of multiagent systems (MAS) is fault tolerance. When an agent is unavailable for some reason, another agent with similar capabilities can theoretically compensate for this loss. The system can be designed not only for this kind of fault tolerance but also for others. Many key aspects of fault tolerance in MAS are described in this correspondence, including social knowledge and physical distribution. We present a MAS framework that has been designed for control applications, in which fault tolerance and flexibility are key parts of the system.",2006,0, 8026,Mean-Squared Error Sampling and Reconstruction in the Presence of Noise,"One of the main goals of sampling theory is to represent a continuous-time function by a discrete set of samples. Here, we treat the class of sampling problems in which the underlying function can be specified by a finite set of samples. Our problem is to reconstruct the signal from nonideal, noisy samples, which are modeled as the inner products of the signal with a set of sampling vectors, contaminated by noise. To mitigate the effect of the noise and the mismatch between the sampling and reconstruction vectors, the samples are linearly processed prior to reconstruction. Considering a statistical reconstruction framework, we characterize the strategies that are mean-squared error (MSE) admissible, meaning that they are not dominated in terms of MSE by any other linear reconstruction. We also present explicit designs of admissible reconstructions that dominate a given inadmissible method. Adapting several classical estimation approaches to our particular sampling problem, we suggest concrete admissible reconstruction methods and compare their performance.
The results are then specialized to the case in which the samples are processed by a digital correction filter.",2006,0, 8027,Compensation of third-harmonic field error in LHC main dipole magnets,"One of the main requirements for the operation of the Large Hadron Collider (LHC) at the European Organization for Nuclear Research (CERN) is a suitable correction of multipole errors in the magnetic field. The feed-forward control of the LHC is based on the Field Description for the LHC (FiDel), capable of forecasting the magnet's behavior in order to generate adequate current ramps for main and corrector magnets. Magnetic measurement campaigns aimed at validating the model underlying FiDel highlighted the need for improving the harmonic compensation of the third-harmonic (b3) component of the main LHC dipoles. In this paper, the results of a new measurement campaign for b3 harmonic compensation, carried out through the new Fast Acquisition Measurement Equipment (FAME), are reported. In particular, the mechanism and the measurement procedure of the compensation, as well as the new perspectives opened by preliminary experimental results, are illustrated.",2010,0, 8028,Does Prosopagnosia Take the Eyes Out of Face Representations? Evidence for a Defect in Representing Diagnostic Facial Information following Brain Damage,"One of the most impressive disorders following brain damage to the ventral occipitotemporal cortex is prosopagnosia, or the inability to recognize faces. Although acquired prosopagnosia with preserved general visual and memory functions is rare, several cases have been described in the neuropsychological literature and studied at the functional and neural level over the last decades. Here we tested a brain-damaged patient (PS) presenting a deficit restricted to the category of faces to clarify the nature of the missing and preserved components of the face processing system when it is selectively damaged. Following learning to identify 10 neutral and happy faces through extensive training, we investigated patient PS's recognition of faces using Bubbles, a response classification technique that sampled facial information across the faces in different bandwidths of spatial frequencies [Gosselin, F., & Schyns, P. E., Bubbles: A technique to reveal the use of information in recognition tasks. Vision Research, 41, 2261-2271, 2001]. Although PS gradually used less information (i.e., the number of bubbles) to identify faces over testing, the total information required was much larger than for normal controls and decreased less steeply with practice. Most importantly, the facial information used to identify individual faces differed between PS and controls. Specifically, in marked contrast to controls, PS did not use the optimal eye information to identify familiar faces, but instead the lower part of the face, including the mouth and the external contours, as normal observers typically do when processing unfamiliar faces. Together, the findings reported here suggest that damage to the face processing system is characterized by an inability to use the information that is optimal to judge identity, focusing instead on suboptimal information.",2005,0, 8029,Dynamic aspects for runtime fault determination and recovery,"One of the most promising applications of aspect oriented programming (AOP) is the area of fault tolerance and recovery. In traditional programming languages, error handling code must be closely interwoven with program logic.
AOP allows the programmer to take a more modular approach - error handling code can be woven into the code by expressing it as an aspect. One major impediment to handling error code in this way is that while errors are a dynamic, runtime property, most research on AOP has focused on static properties. In this paper, we propose a method for handling a variety of run-time faults as dynamic aspects. First, we separate fault handling into two different notions: fault determination, or the discovery of faults within a program, and fault recovery, or the logic used to recover from a fault. Our position is that fault determination should be expressed as dynamic aspects. We propose a system, called Rescue, that exposes underlying features of the virtual machine in order to express faults as a variety of run-time constraints. We show how our methodology can be used to address several of the flaws in state of the art fault handling techniques. This includes their limitations in handling parallel and distributed faults, their obfuscated nature and their overly simplistic notion of what a ""fault"" actually may comprise.",2006,0, 8030,SQualTrack: A Tool for Robust Fault Detection,"One of the techniques used to detect faults in dynamic systems is analytical redundancy. An important difficulty in applying this technique to real systems is dealing with the uncertainties associated with the system itself and with the measurements. In this paper, this uncertainty is taken into account by the use of intervals for the parameters of the model and for the measurements. The method that is proposed in this paper checks the consistency between the system's behavior, obtained from the measurements, and the model's behavior; if they are inconsistent, then there is a fault. The problem of detecting faults is stated as a quantified real constraint satisfaction problem, which can be solved using the modal interval analysis (MIA). MIA is used because it provides powerful tools to extend the calculations over real functions to intervals. To improve the results of the detection of the faults, the simultaneous use of several sliding time windows is proposed. The result of implementing this method is semiqualitative tracking (SQualTrack), a fault-detection tool that is robust in the sense that it does not generate false alarms, i.e., if there are false alarms, they indicate either that the interval model does not represent the system adequately or that the interval measurements do not represent the true values of the variables adequately. SQualTrack is currently being used to detect faults in real processes. Some of these applications using real data have been developed within the European project advanced decision support system for chemical/petrochemical manufacturing processes and are also described in this paper.",2009,0, 8031,Remote Fault Estimation and Thevenin Impedance Calculation from Relays Event Reports,"One of the typical features in modern relays is the generation of event reports during a disturbance. Event reports are records of regularly taken samples of the line currents and voltages as seen by the relay during a disturbance. This paper describes a software tool developed for the use of these event reports to perform the tasks of fault classification and estimation. The program estimates the 60 Hz values of currents and voltages by applying DFT (discrete Fourier transform) to the samples recorded by the relay every quarter of a cycle.
Using these values the program performs the following calculations: 1) classification of the fault, 2) estimation of the fault distance from the relay location and 3) Thevenin's impedance of the system at the fault point as well as a Thevenin's equivalent impedance of the system in front of and behind the relay. The program was initially tested with data generated from ATP simulations and later with data from relays installed in an operating electrical network. The results show that the software is highly reliable, providing accurate estimation of faults and Thevenin's equivalent impedances.",2006,0, 8032,Error Analysis and Compensation of Precision Parallel Robot for Sensor Locating in ICF,"A parallel robot has been developed to support the sensor and make it orient precisely in the test of Inertial Confinement Fusion (ICF). In this paper some methods for improving the accuracy of the parallel robot were discussed to guarantee the position accuracy of the sensor in a vacuum environment. Firstly, error sources decreasing the accuracy of the parallel robot were analyzed and some measures in structure design were adopted to improve the accuracy. Then the error model of the robot was set up by the complete differential theory based on the parallel robot inverse kinematics equation. Therefore the position error of the robot could be reduced further by error compensation, which was realized through software. It is proved by the experiments that the error model was simple and effective, the locating accuracy of the parallel robot could be markedly improved through error compensation and the position accuracy reached 1 μm, which could satisfy the sensor locating accuracy requirement of ICF research.",2007,0, 8033,Fault-tolerance for exascale systems,"Periodic, coordinated checkpointing to disk is the most prevalent fault tolerance method used in modern large-scale, capability class, high-performance computing (HPC) systems. Previous work has shown that as the system grows in size, the inherent synchronization of coordinated checkpoint/restart (CR) limits application scalability; at large node counts the application spends most of its time checkpointing instead of executing useful work. Furthermore, a single component failure forces an application restart from the last correct checkpoint. Suggested alternatives to coordinated CR include uncoordinated CR with message logging, redundant computation, and RAID-inspired, in-memory distributed checkpointing schemes. Each of these alternatives has differing overheads that are dependent on both the scale and communication characteristics of the application. In this work, using the Structural Simulation Toolkit (SST) simulator, we compare the performance characteristics of each of these resilience methods for a number of HPC application patterns on a number of proposed exascale machines. The result of this work provides valuable guidance on the most efficient resilience methods for exascale systems.",2010,0, 8034,Application of Particle Swarm Optimization to PMSM Stator Fault Diagnosis,"Permanent magnet synchronous motors (PMSM) are frequently used in high performance applications. Accurate diagnosis of incipient faults can significantly improve system availability and reliability. This paper proposes a new scheme for the automatic diagnosis of turn-to-turn short circuit faults in PMSM stator windings. Both the fault location and fault severity are diagnosed using a particle swarm optimization (PSO) algorithm.
The performance of the motor under the fault conditions is simulated through lumped-parameter models. Waveforms of the machine phase currents are monitored, based on which a fitness function is formulated and PSO is used to identify the fault location and fault size. Simulation results in MATLAB provide preliminary verification of the diagnosis scheme.",2006,0, 8035,Multimodal target correction by local bone registration: A PET/CT evaluation,"PET/CT guidance for percutaneous interventions allows biopsy of suspicious metabolically active bone lesions even when no morphological correlation is delineable in the CT images. Clinical use of PET/CT guidance with the conventional step-by-step technique is time-consuming and complicated, especially in cases in which the target lesion is not shown in the CT image. Our recently developed multimodal instrument guidance system (IGS) for PET/CT improved this situation. Nevertheless, bone biopsies even with IGS involve a trade-off between precision and intervention duration, which is proportional to patient and personnel exposure to radiation. As image acquisition and reconstruction of PET may take up to 10 minutes, preferably only one time-consuming combined PET/CT acquisition should be needed during an intervention. If additional control images are required in order to check for possible patient movements/deformations, or to verify the final needle position in the target, only fast CT acquisitions should be performed. However, for precise instrument guidance accounting for patient movement and/or deformation without having a control PET image, it is essential to be able to transfer the position of the target as identified in the original PET/CT to a changed situation as shown in the control CT. Therefore, we present a pipeline for faster target-position correction by isolating and registering the bone of interest, as shown in the control CT, with the CT dataset of the original PET/CT acquisition. Challenges such as the masking of the bone of interest and registration robustness in the presence of the needle and its associated metal artifacts are also addressed in this work. Our results confirmed the feasibility of clinically using this technique for target correction in PET/CT bone interventions, and motivated us to incorporate it as part of our IGS for multimodal intervention.",2010,0, 8036,"Video image based attenuation correction for PETbox, a preclinical PET tomograph","PETbox is a new simplified benchtop PET scanner dedicated to preclinical imaging of mice. It has only two facing detector heads in a static gantry. Using iterative methods, limited-angle reconstruction of 3D images is possible. The geometry of the PETbox is such that very oblique emission angles are detected traversing significant lengths of tissue, making attenuation correction necessary. To that end, we have developed a method by which two orthogonal optical views are combined to create a 3-dimensional estimate of the subject. This estimate is used to produce attenuation correction data that significantly improve the quantitative accuracy of the reconstructed images. In this paper, we present the method and evaluate its accuracy.",2009,0, 8037,Defect localization using photon emission microscopy analysis with the combination of OBIRCH analysis,"Photon emission microscopy analysis in combination with OBIRCH analysis is very effective for defect localization, and can decrease analysis cycle time and improve success rates remarkably.
In this paper, some cases are presented to show how to locate defects quickly by photon emission microscopy analysis in combination with OBIRCH analysis.",2010,0, 8038,Implementation of an analytically based scatter correction in SPECT reconstructions,"Photon scattering is one of the main effects contributing to the degradation of image quality and to quantitative inaccuracy in nuclear imaging. We have developed a scatter correction based on the analytic photon distribution (APD) method, and implemented it in an iterative image reconstruction algorithm. The performance of the method was evaluated using computer-simulated projection data, experimental data obtained from physical phantoms, and patient data. The scatter-corrected images were compared to images that were only corrected for attenuation and collimator blurring. In the simulation studies our results could also be compared to an ideal scatter correction in which images were reconstructed only from unscattered photon data. In all cases, our scatter-corrected images demonstrate improved image contrast. In the simulated data the contrast for images only corrected for attenuation and collimator blurring was on average 29% poorer than the ideal correction. Images to which our scatter correction was applied had contrast that was on average within 3% of the ideal correction. The scatter-corrected reconstruction requires between 4 and 5 hours of total CPU time on a 1.7 GHz processor with 1 GB RDRAM, using clinical data.",2003,0, 8039,Predicting Defect Content and Quality Assurance Effectiveness by Combining Expert Judgment and Defect Data - A Case Study,"Planning quality assurance (QA) activities in a systematic way and controlling their execution are challenging tasks for companies that develop software or software-intensive systems. Both require estimation capabilities regarding the effectiveness of the applied QA techniques and the defect content of the checked artifacts. Existing approaches for these purposes need extensive measurement data from historical projects. Due to the fact that many companies do not collect enough data for applying these approaches (especially for the early project lifecycle), they typically base their QA planning and controlling solely on expert opinion. This article presents a hybrid method that combines commonly available measurement data and context-specific expert knowledge. To evaluate the method's applicability and usefulness, we conducted a case study in the context of independent verification and validation activities for critical software in the space domain. A hybrid defect content and effectiveness model was developed for the software requirements analysis phase and evaluated with available legacy data. One major result is that the hybrid model provides improved estimation accuracy when compared to applicable models based solely on data. The mean magnitude of relative error (MMRE) determined by cross-validation is 29.6% compared to 76.5% obtained by the most accurate data-based model.",2008,0, 8040,Bridging the Gap between Fault Trees and UML State Machine Diagrams for Safety Analysis,"Poorly designed software systems are one of the main causes of accidents in safety-critical systems, and thus the importance of safety analysis for software has greatly increased over recent years. Software safety can be improved by analyzing both the desired and undesired behaviors of a system, and this in turn requires expressive power such that both can be modeled.
However, there is a considerable gap between modeling methods for desired and undesired behaviors. Therefore, we propose a method to bridge the gap between fault trees (for undesired behavior) and UML state machine diagrams (for desired behavior). More specifically, we present rules and algorithms that facilitate the transformation of a hazard (in the context of fault trees) to a UML state machine diagram. We illustrate our proposed approach via an example on a microwave-oven system. Our proposed transformation can help engineers identify how hazards may occur, thereby allowing them to prevent the hazards from occurring.",2010,0, 8041,Physical and conceptual identifier dispersion: Measures and relation to fault proneness,"Poorly chosen identifiers have been reported in the literature as misleading and as increasing the program comprehension effort. Identifiers are composed of terms, which can be dictionary words, acronyms, contractions, or simple strings. We conjecture that the use of identical terms in different contexts may increase the risk of faults. We investigate our conjecture using a measure combining term entropy and term context coverage to study whether certain terms increase the odds ratios of methods being fault-prone. Entropy measures the physical dispersion of terms in a program: the higher the entropy, the more scattered the terms are across the program. Context coverage measures the conceptual dispersion of terms: the higher their context coverage, the more unrelated are the methods using them. We compute term entropy and context coverage of terms extracted from identifiers in Rhino 1.4R3 and ArgoUML 0.16. We show statistically that methods containing terms with high entropy and context coverage are more fault-prone than others.",2010,0, 8042,"Bugs, moths, grasshoppers, and whales","Popular wisdom has it that the term ""debug"" dates back to 1947, when a team working on the Harvard University Mark II Aiken Relay Calculator removed a moth trapped in one of the relays. The team affixed the moth in their log book, and wrote next to it: ""First actual case of bug being found."" The hardware equivalents of software bugs are design errors. Next to those, microelectronics suffers from manufacturing defects. The hardware community uses the term ""debug"" for locating and resolving design errors, and uses the term ""diagnosis"" for pinpointing the cause and location of manufacturing defects. But is ""debug"" really the best term to use? A bug is small and perhaps hard to find, but once located it is easy for a human to defeat. However, some design errors have consequences of giant proportions. Perhaps a better term would be ""dewhale.""",2008,0, 8043,"Data-driven reliability modeling, based on data mining in distribution network fault statistics","Power distribution fault statistics provide a splendid resource for extracting experimental knowledge. The extracted knowledge includes the inherent characteristics of the network assets. Analysis and estimation of failures require a comprehensive understanding of faults in terms of the relevant effective parameters. This paper outlines a data-driven model to represent the momentary failure rate in terms of the most influential factors, based on the study of the recorded historical fault data as well as the experts' experience in the Greater Tehran Electricity Distribution Company. A methodology is presented for identifying the causes of momentary faults and constructing the model using artificial neural networks.
Satisfactory results indicate that the developed model can easily be implemented to estimate other fault types in power distribution systems.",2009,0, 8044,Power Factor Correction and Active Filtering Technology Application for Industrial Power Systems with Non-linear Loads,"The power factor and harmonics problem in industrial power systems with non-linear loads is presented. Traditional and modern advanced technologies for power factor improvement and harmonic mitigation are reviewed. Advantages of power electronics based static reactive power compensation (STATCOM) and active harmonic filtering are shown. Design issues for a power electronics converter operating as a STATCOM and active filter are discussed. Modeling and experimental results are presented for both advanced technologies.",2006,0, 8045,A method based on Analytical Hierarchy Process for generator fault diagnosis,"The power generator is the most important piece of equipment in a power system, and the security and reliability of the power system are influenced by its running status. Generator failures or breakdowns can have severe financial consequences. For this reason, several concepts for the condition monitoring of generators have been developed. The large volume of data available from monitoring systems can overwhelm personnel and requires extensive interpretation. Correctly handling this information requires expertise in generator design, operating limits, alarm processing and interpretation. Because expertise is often a scarce quantity, decisions must often be made when an expert group is unavailable. The purpose of this article is to collect significant generator data in order to form a comprehensive analysis by the Analytical Hierarchy Process (AHP) method. The AHP is a multi-criteria analysis approach, which is used here to combine the results of diagnostic methods and draw conclusions on the most probable failure. A fault diagnosis structure has been designed and several comparison charts have been generated. As a case study, the characteristics of a generator have been analyzed using the Expert Choice (EC) software. The EC results confirmed the high performance of the AHP method for generator fault diagnosis.",2010,0, 8046,Study on remote fault diagnosis system using multi monitoring methods on dredger,"Power machinery and key equipment are regarded as the monitoring objects. A Web-based remote fault diagnosis system using multiple monitoring methods on dredgers is proposed, which gives comprehensive consideration to monitoring technology for performance parameters, lubricant oil, vibration and instantaneous speed. The modern ship maintenance management mode of two modes, three levels and four methods is established and used in the selected dredgers of the Changjiang Waterway Bureau. The monitoring subsystem on the ship, the diagnostic subsystem in the lab and the maintenance decision subsystem in the dredger maintenance management center are designed and realized using computer technology and data fusion technology. Remote fault diagnosis between the dredger and the technology center has been realized using wireless communication technology. The system has proved to be effective in use on the dredgers.",2010,0, 8047,Detection of insulation faults on disc-type winding transformers by means of leakage flux analysis,"Power transformers figure amongst the most costly pieces of equipment used in electrical systems. Major research effort has therefore focused on detecting failures of their insulating systems prior to unexpected machine outage.
Although several industrial methods exist for the on-line and off-line monitoring of power transformers, all of them are expensive and complex, and require the use of specific electronic instrumentation. For these reasons, this paper presents on-line analysis of transformer leakage flux as an efficient alternative procedure for assessing machine integrity and detecting the presence of insulating failures during their earliest stages. A 12 kVA 400 V/400 V power transformer was specifically manufactured for the study. A finite element model of the machine was designed to obtain the transient distribution of leakage flux lines in the machine's transversal section under normal operating conditions and when shorted turns are intentionally produced. Very cheap, simple sensors, based on air-core coils, were built in order to measure the leakage flux of the transformer, and non-destructive tests were also applied to the machine in order to analyse pre- and post-failure voltages induced in the coils. Results point to the ability to detect very early stages of failure, as well as to locate the position of the shorted turn in the transformer windings.",2009,0, 8048,The use of characteristic features of wireless cellular networks for transmission of GNSS assistance and correction data,"Precise Global Navigation Satellite System (GNSS) positioning using Real Time Kinematics (RTK) correction data is currently utilized in many fields of surveying, mapping and precision agriculture. In the near future, sub-decimeter precision data usage is expected to extend to autonomous vehicle navigation and public safety areas. To satisfy this increasing demand for precision positioning correction bandwidth, new techniques and protocols for assistance and correction data transmission are needed. This paper reviews one such possible technique, involving sending correction datasets via public wireless cellular networks. The data will be transmitted through a hybrid system integrating correction data broadcast in the wireless cellular network control plane with AGNSS assistance data and correction metadata in the user plane. Through this system, the bandwidth-intensive, low-refresh-rate data of GNSS system ephemeris, reference station and satellite identification is omitted from the main data stream. Instead, a constant bit rate (CBR) stream for correction data is used and bandwidth is conserved. The results show that the proposed system can achieve the scalability required for widespread usage of sub-decimeter-level positioning data from GNSS.",2010,0, 8049,How Long Will It Take to Fix This Bug?,"Predicting the time and effort for a software problem has long been a difficult task. We present an approach that automatically predicts the fixing effort, i.e., the person-hours spent on fixing an issue. Our technique leverages existing issue tracking systems: given a new issue report, we use the Lucene framework to search for similar, earlier reports and use their average time as a prediction. Our approach thus allows for early effort estimation, helping in assigning issues and scheduling stable releases. We evaluated our approach using effort data from the JBoss project.
Given a sufficient number of issue reports, our automatic predictions are close to the actual effort; for issues that are bugs, we are off by only one hour, beating naive predictions by a factor of four.",2007,0, 8050,Average Error Performance of M-ary Modulation Schemes in Nakagami-q (Hoyt) Fading Channels,"Presented are exact-form expressions for the average error performance of various coherent, differentially coherent, and noncoherent modulation schemes in Nakagami-q (Hoyt) fading channels. The expressions are given in terms of the Lauricella hypergeometric function FD(n), for n ≥ 1, which can be evaluated numerically using its integral or converging series representation. It is shown that the derived expressions reduce to some existing results for Rayleigh fading as special cases.",2007,0, 8051,A novel method for closed-loop error correction microwave and millimeter wave QPSK modulator,"QPSK modulators at microwave and millimeter waves can be very useful for direct modulation communication links. The main problem with such modulators is the error in phase and amplitude balance, which is quite large at MM-waves and degrades the performance of the modulator (carrier rejection, deviation from 90 degrees between states, etc.). In this paper we introduce a novel approach featuring an error correction scheme, which is simple to implement in both hybrid and MMIC forms. A very important feature of the new method is the self-generated reference signal, which enables a simple (low cost) and self-contained implementation in MMIC form of a high-quality QPSK MM-wave modulator.",2001,0, 8052,An Improved Two-Point-Correction Method to Remove the Effect of the Radiance from Camera Interior on Infrared Image,"Radiance coming from the interior of an uncooled infrared camera has a significant effect on the infrared image. This paper presents a three-phase scheme for coping with the effect. The first phase acquires infrared images of a high temperature blackbody and a low temperature blackbody at various camera interior temperatures. The second phase fits functions relating the pixel values in the high and low temperature blackbody images to the camera interior temperature by least-squares fitting. With the aid of these functions, the third phase determines the high and low temperature blackbody images from the camera interior temperature, and then removes the effect of radiance from the interior of the camera on the infrared image. Theoretical analysis and experimental results show that the method can remove the effect effectively.",2007,0, 8053,Logic soft errors: a major barrier to robust platform design,"Radiation-induced soft errors in flip-flops, latches and combinational logic circuits, also called logic soft errors, pose a major challenge in the design of robust platforms for enterprise computing and networking applications. Associated power and performance overheads are major barriers to the adoption of classical fault-tolerance techniques to protect such systems from soft errors. Design-for-functional-test and debug resources can be reused for built-in soft error resilience during normal system operation, resulting in more than an order of magnitude reduction in the undetected soft error rate.
This design technique has negligible area and speed penalties, and the chip-level power penalty is significantly smaller compared to classical fault-tolerance techniques.",2005,0, 8054,Rational Fitting of S-Parameter Frequency Samples With Maximum Absolute Error Control,"Rational fitting techniques are often used for the macromodeling of linear systems from tabulated S-parameter frequency samples. This letter proposes a modified weighting scheme for the Vector Fitting algorithm that iteratively minimizes the maximum absolute error over the frequency range of interest, rather than the least-squares error. By considering the appropriate error measure in the fitting process, it is possible to obtain more accurate broadband macromodels without increasing the number of poles. The effectiveness of the approach is illustrated by several numerical examples.",2010,0, 8055,A framework for fault tolerance in distributed real time systems,"Real-time systems are characterized by the requirement that they be fault-tolerant. In this paper, a fault tolerance mechanism for real-time systems is proposed. First a model is discussed which is a modification of the distributed recovery block and is based on distributed computing. Then a model is proposed which is based on distributed computing along with a feed-forward artificial neural network methodology. The proposed technique is based on the execution of design-diverse variants on replicated hardware and the assignment of weights to the results produced by the variants. Thus the proposed method encompasses both the forward and backward recovery mechanisms, but the main focus is on forward recovery.",2005,0, 8056,Mobile robot fault tolerant control introducing ARTEMIC,"Real-time applications should deliver synchronized data sets in a timely manner, minimize latency in their response, and meet their performance specifications in the presence of disturbances and faults. Fault-tolerant behavior in mobile robots refers to the possibility of autonomously detecting and identifying faults as well as the capability to continue operating after a fault has occurred. This paper introduces a real-time distributed control application with fault tolerance capabilities for differential wheeled mobile robots, named ARTEMIC. Specific design, development and implementation details are provided in this paper.",2010,0, 8057,Fault-Tolerant Distributed Stream Processing System,"Real-time data processing systems are more and more popular nowadays. Data warehouses not only collect terabytes of data, they also process endless data streams. To support such a situation, the data extraction process must also become a continuous process. Here a problem of failure resistance arises. It is important not only to process a set of data on time; it is even more important not to lose any data when a failure occurs. We achieve this by applying redundant distributed stream processing. In this paper, we present a fault-tolerant system designed for processing data streams originating from geographically distributed sources.",2006,0, 8058,Error recovery for interactive video transmission over the Internet,"Real-time interactive video transmission over the current Internet has mediocre quality because of high packet loss rates. Loss of packets in a video frame manifests itself not only in the reduced quality of that frame but also in the propagation of that distortion to successive frames. This error propagation problem is inherent in any motion compensation-based video codec.
In this paper, we present a new error recovery scheme, called recovery from error spread using continuous updates (RESCU), that effectively alleviates error propagation in the transmission of interactive video. The main benefit of the RESCU scheme is that it allows more time for transport-level recovery such as retransmission and forward error correction to succeed while effectively masking out delays in recovering lost packets without introducing any playout delays, thus making it suitable for interactive video communication. Through simulation and real Internet experiments, we study the effectiveness and limitations of our proposed techniques and compare their performance to that of existing video error recovery techniques including H.263+ (NEWPRED). The study indicates that RESCU is effective in alleviating the error spread problem and can sustain much better video quality with less bit overhead than existing video error recovery techniques under various network environments.",2000,0, 8059,Fault detection in model predictive controller,"Real-time monitoring and maintenance of model predictive controllers (MPCs) is becoming an important issue with their wide implementation in industry. In this paper, a measure is proposed to detect faults in MPCs by comparing the performance of the actual controller with the performance of the ideal controller. The ideal controller is derived from dynamic matrix control (DMC) in an ideal work situation and treated as a benchmark. A detection index based on the comparison is proposed to detect state changes of the target controller. This measure is illustrated through implementation on a water tank process.",2004,0, 8060,A Dynamic Fault-Tolerant Model for Open Distributed Computing,"Open distributed computer systems are some of the most successful structures ever designed for the computing community, with undisputed benefits for users. However, their complexity has also introduced a few side-effects, most notably the unpredictable nature of the underlying environments and the reconfiguration burdens imposed by environmental changes. Thus, to attain a high level of system performance, a required level of reliability has to be maintained. In this paper, we propose a mechanism to analyze the underlying environmental faults and failures. This model provides an adaptable fault-tolerant approach in order to address unanticipated events and unpredictable hazards in distributed systems. The model maintains the required reliability by analyzing the environment and selecting the optimal replication strategy for existing conditions. This pragmatic and theoretically appealing approach is a part of the Juice system, which supports adaptation properties for open distributed environments.",2006,0, 8061,Forecasting field defect rates using a combined time-based and metrics-based approach: a case study of OpenBSD,"Open source software systems are critical infrastructure for many applications; however, little has been precisely measured about their quality. Forecasting the field defect-occurrence rate over the entire lifespan of a release before deployment for open source software systems may enable informed decision-making. In this paper, we present an empirical case study of ten releases of OpenBSD. We use the novel approach of predicting model parameters of software reliability growth models (SRGMs) using metrics-based modeling methods. We consider three SRGMs, seven metrics-based prediction methods, and two different sets of predictors.
Our results show that accurate field defect-occurrence rate forecasts are possible for OpenBSD, as measured by the Theil forecasting statistic. We identify the SRGM that produces the most accurate forecasts and subjectively determine the preferred metrics-based prediction method and set of predictors. Our findings are steps towards managing the risks associated with field defects.",2005,0, 8062,Optical Electric Current Transformer Error Measuring System Based on Virtual Instrument,"An optical electric current transformer is introduced in this paper. Based on virtual instrument and digital signal processing technology, an error measuring system is designed using LabVIEW software and a DAQ-2006 data acquisition card. The error measuring system comprises a harmonic analysis module, an error analysis module, and a history data analysis module, whose functions are introduced. The errors of the measuring channel and the protection channel of the optical electric current transformer are measured. Phase error and ratio error curves of the measuring channel at different test currents are plotted. The results show that this error measuring system achieves the accuracy standard and is able to measure the phase error and ratio error of the optical electric current transformer exactly.",2007,0, 8063,Optimum design of a class of fault tolerant isotropic Gough-Stewart platforms,"Optimal geometric design is of key importance to the performance of a manipulator. First, this paper extends the work in Y. Yi, et al., (2004) to generate a class of isotropic Gough-Stewart platforms (GSPs) with an odd number of struts. Then, it develops methods for finding a highly fault tolerant GSP from that class. Two optimization criteria are considered: isotropy and fault tolerance. To meet the mission-critical needs imposed by laser weapons applications, nine-strut isotropic GSPs that retain kinematic stability despite the loss of any three struts are found. First, we develop methods for generating a five-parameter class of isotropic nine-strut GSPs. Next, new measures of fault tolerance are introduced and used to optimize the free parameter space. The optimized design is much more fault tolerant than the GSP currently baselined for the airborne laser.",2004,0, 8064,Probability of error calculation of OFDM systems with frequency offset,"Orthogonal frequency-division multiplexing (OFDM) is sensitive to the carrier frequency offset (CFO), which destroys orthogonality and causes intercarrier interference (ICI). Previously, two methods were available for the analysis of the resultant degradation in performance. Firstly, the statistical average of the ICI could be used as a performance measure. Secondly, the bit error rate (BER) caused by the CFO could be approximated by assuming the ICI to be Gaussian. However, a more precise analysis of the performance (i.e., BER or SER) degradation is desirable. In this letter, we propose a precise numerical technique for calculating the effect of the CFO on the BER or symbol error rate (SER) in an OFDM system. The subcarriers can be modulated with binary phase shift keying (BPSK), quaternary phase shift keying (QPSK), or 16-ary quadrature amplitude modulation (16-QAM), as used in many OFDM applications. The BPSK case is solved using a series due to Beaulieu (1990).
For the QPSK and 16-QAM cases, we use an infinite series expression for the error function in order to express the average probability of error in terms of the two-dimensional characteristic function of the ICI.",2001,0, 8065,Software Fault Prediction Models for Web Applications,"Our daily life increasingly relies on Web applications. Web applications provide us with abundant services to support our everyday activities. As a result, quality assurance for Web applications is becoming important and has gained much attention from the software engineering community. In recent years, in order to enhance software quality, many software fault prediction models have been constructed to predict which software modules are likely to be faulty during operations. Such models can be utilized to raise the effectiveness of software testing activities and reduce project risks. Although current fault prediction models can be applied to predict faulty modules of Web applications, one limitation is that they do not consider the particular characteristics of Web applications. In this paper, we build fault prediction models aimed at Web applications after analyzing the major characteristics which may impact their quality. The experimental study shows that our approach achieves very promising results.",2010,0, 8066,Measuring errors for massive triangle meshes,"Our proposal is a method for computing the distance between two surfaces modeled by massive triangle meshes which cannot both be loaded entirely in memory. The method consists in loading at each step a small part of the two meshes and computing the symmetrical distance for these areas. These areas are chosen in such a way that the orthogonal projection used to compute this distance falls within them. For this, one of the two meshes is simplified and a correspondence between the simplified mesh and the triangles of the input meshes is established. The experiments show that the proposed method is very efficient in terms of memory cost, while producing results comparable to existing tools for small and medium-size meshes. Moreover, the proposed method enables us to compute the distance for massive meshes.",2010,0, 8067,Selecting ANN structures to find transmission faults,"Programming models based on artificial neural networks (ANN) have seen increased usage. ANNs are used in various fields (industry, medicine, finance, or new technology, among others). There is a wide range of possible power system applications of neural networks in operation and control processes, including stability assessment, security monitoring, load forecasting, state estimation, load flow analysis, contingency analysis, emergency control actions, HVDC system design, etc. This article features an automatic system that selects the most adequate ANN structure to solve any type of problem. The ANN Automatic Selection System (SARENEUR) was implemented in a specific case in order to obtain a neural network structure that shows better results in fault location within a two-terminal transmission line. The fault location is obtained according to the values of steady-state voltages and currents measured at one end.",2001,0, 8068,A Fault-Tolerant Legion Authentication System,"Protecting resources from unauthorized access is one of the foremost requirements of any distributed environment. Legion uses an Authentication Object that represents the user to the system.
The AuthenticationObject is responsible for authenticating the user and issuing a certificate, which is used by the resource object to check the validity of a request. The AuthenticationObject contains the password and private key of the user. The user accesses the AuthenticationObject by using a Legion object identifier (LOID) containing the public key of the user. If the AuthenticationObject gets deleted then there is no way to recover or regenerate this AuthenticationObject, because the private key cannot be recreated. Thus, the current system is not fault-tolerant and reliable, and the user needs to be recreated. This paper proposes a fault-tolerant, reliable and more dynamic authentication mechanism.",2006,0, 8069,Defect-Tolerant Design and Optimization of a Digital Microfluidic Biochip for Protein Crystallization,"Protein crystallization is a commonly used technique for protein analysis and subsequent drug design. It predicts the 3-D arrangement of the constituent amino acids, which in turn indicates the specific biological function of a protein. Protein crystallization experiments are typically carried out in well-plates in the laboratory. As a result, these experiments are slow, expensive, and error-prone due to the need for repeated human intervention. Recently, droplet-based ""digital"" microfluidics have been used for executing protein assays on a chip. Protein samples in the form of nanoliter-volume droplets are manipulated using the principle of electrowetting-on-dielectric. We present the design of a multi-well-plate microfluidic biochip for protein crystallization; this biochip can transfer protein samples, prepare candidate solutions, and carry out crystallization automatically. To reduce the manufacturing cost of such devices, we present an efficient algorithm to generate a pin-assignment plan for the proposed design. The resulting biochip enables control of a large number of on-chip electrodes using only a small number of pins. Based on the pin-constrained chip design, we present an efficient shuttle-passenger-like droplet manipulation method and test procedure to achieve high-throughput and defect-tolerant well loading.",2010,0, 8070,Nonlinear model for offline correction of pulmonary waveform generators,"Pulmonary waveform generators consisting of motor-driven piston pumps are frequently used to test respiratory-function equipment such as spirometers and peak expiratory flow (PEF) meters. Gas compression within these generators can produce significant distortion of the output flow-time profile. A nonlinear model of the generator was developed along with a method to compensate for gas compression when testing pulmonary function equipment. The model and correction procedure were tested on an Assess Full Range PEF meter and a Micro DiaryCard PEF meter. The tests were performed using the 26 American Thoracic Society standard flow-time waveforms as the target flow profiles. Without correction, the pump loaded with the higher resistance Assess meter resulted in ten waveforms having a mean square error (MSE) higher than 0.001 L²/s². Correction of the pump for these ten waveforms resulted in a mean decrease in MSE of 87.0%. When loaded with the Micro DiaryCard meter, the uncorrected pump outputs included six waveforms with MSE higher than 0.001 L²/s². Pump corrections for these six waveforms resulted in a mean decrease in MSE of 58.4%.",2002,0, 8071,A Constraint Based Bug Checking Approach for Python,"Python is a powerful dynamically typed programming language.
Dynamic typing brings great flexibility for programming. However, due to the lack of static type checking, it is hard to detect some bugs before run time. We present a constraint framework based on Python's structural equivalence type system. The framework does not introduce any new language features, and thus does not lose the benefits of Python's dynamic typing. Constraints are extracted from source code via static analysis and are used to check for bugs, such as passing wrong parameters to a function. A case study shows how to use the framework to check the validity of function calls.",2009,0, 8072,A software-implemented fault injection methodology for design and validation of system fault tolerance,"Presents our experience in developing a methodology and tool at the Jet Propulsion Laboratory (JPL) for software-implemented fault injection (SWIFI) into a parallel-processing supercomputer which is being designed for use in next-generation space exploration missions. The fault injector uses software-based strategies to emulate the effects of radiation-induced transients occurring in the system hardware components. JPL's SWIFI tool set, which is called JIFI (JPL's Implementation of a Fault Injector), is being used in conjunction with an appropriate system fault model to evaluate candidate hardware and software fault tolerance architectures, to determine the sensitivity of applications to faults, and to measure the effectiveness of fault detection, isolation and recovery strategies. JIFI has been validated to inject faults into user-specified CPU registers and memory regions with a uniform random distribution in location and time. Together with verifiers, classifiers and run scripts, JIFI enables massive fault injection campaigns and statistical data analysis.",2001,0, 8073,Analysis of pressure and Blanchard altitude errors computed using atmospheric data obtained from an F-18 aircraft flight,"Pressure altitude is commonly utilized as an altitude reference for an inertial navigation system (INS) to damp the error growth in the inherently unstable vertical channel. A precise altitude reference for use in the INS vertical channel can be obtained using the Blanchard algorithm, which computes altitude from atmospheric pressure, temperature, aircraft ground velocity, and wind velocity data. This paper computes both the pressure and Blanchard altitudes for an entire test flight of an F-18 aircraft from the atmospheric data measured during the flight. The flight repeats 4 cycles of a climb, level-off, dive, level-off trajectory. The altitude computed from GPS during flight is considered to be the truth altitude. The errors in the pressure and Blanchard altitudes are computed and compared. In addition, both altitude errors are analyzed in order to determine the scale factor, bias offset, and time delay utilizing the least-squares error fit method. The Blanchard altitude is a much more precise altitude reference than pressure altitude during actual flight of an F-18 aircraft.",2002,0, 8074,Recurring bug fixes in object-oriented programs,"Previous research confirms the existence of recurring bug fixes in software systems. Analyzing such fixes manually, we found that a large percentage of them occurs in code peers, the classes/methods having similar roles in the systems, such as providing similar functions and/or participating in similar object interactions.
Based on a graph-based representation of object usages, we have developed several techniques to identify code peers, recognize recurring bug fixes, and recommend changes for code units from the bug fixes of their peers. The empirical evaluation on several open-source projects shows that our prototype, FixWizard, is able to identify recurring bug fixes and provide fixing recommendations with acceptable accuracy.",2010,0, 8075,Analysis of soft error rates in combinational and sequential logic and implications of hardening for advanced technologies,"Previous results and models have predicted that combinational logic errors would dominate over flip-flop errors for the past few technology nodes. However, recent experimental results show very little contribution from combinational-logic soft errors to overall soft-error rates. A model that explains the soft error rates as a function of frequency is developed to account for the inconsistency in observed data. Implications for hardening against soft errors for advanced technologies are discussed.",2010,0, 8076,An extended CORBA event service with support for load balancing and fault-tolerance,"Previously, the Object Management Group (OMG) published a standard for a common object service, called the event service, to support decoupled and asynchronous communication between distributed CORBA object components. However, the service, albeit flexible, still suffers from a number of limitations. Among others, it has poor scalability, and it is not totally reliable. In view of these drawbacks, we propose a generic framework which extends the event service with built-in support for load balancing (both static and dynamic) and fault tolerance. These functions are achieved transparently, without the intervention of the application objects.",2000,0, 8077,A Pure Peer-To-Peer Desktop Grid framework with efficient fault tolerance,"P2P computing is the sharing of computer resources by direct exchange. A P2P desktop grid is a P2P computing environment with desktop resources, usually built on the Internet infrastructure. The most important challenges for a P2P desktop grid involve: 1) minimizing reliance on central servers to achieve decentralization, 2) providing interoperability with other platforms, 3) providing interaction methodologies between grid nodes that overcome connectivity problems in the Internet environment, and 4) providing efficient fault tolerance to maintain performance with frequent faults. The main objective of this paper is to introduce a pure P2P desktop grid framework built on Microsoft's .Net technology. The proposed framework is composed of the following components: 1) a communication protocol based on both FTP and HTTP for interaction between grid nodes to provide interoperability, 2) an efficient checkpointing approach to provide fault tolerance, and 3) four interaction models for implementing connectivity for both serial and parallel execution. No reliance on central servers is involved in the framework. Such a framework will help in overcoming the problems associated with decentralization, interoperability, connectivity and fault tolerance. Performance evaluation has been carried out by running an application based on variable-dimension matrix multiplication on a desktop grid based on the proposed framework. The experiments focused on measuring the impact of failures on the execution time for different connectivity models.
Experimental results show that using the proposed framework as an infrastructure for running distributed applications has a great impact on improving fault tolerance, besides achieving full decentralization and interoperability and solving connectivity problems.",2007,0, 8078,A condition number for point matching with application to registration and postregistration error estimation,"Selecting salient points from two or more images for computing correspondence is a well-studied problem in image analysis. This paper describes a new and effective technique for selecting these tiepoints using condition numbers, with application to image registration and mosaicking. Condition numbers are derived for point-matching methods based on minimizing windowed objective functions for 1) translation, 2) rotation-scaling-translation (RST), and 3) affine transformations. Our principal result is that the condition numbers satisfy KTrans ≤ KRST ≤ KAffine. That is, if a point is ill-conditioned with respect to point-matching via translation, then it is also unsuited for matching with respect to RST and affine transforms. This is fortunate since KTrans is easily computed whereas KRST and KAffine are not. The second half of the paper applies the condition estimation results to the problem of identifying tiepoints in pairs of images for the purpose of registration. Once these points have been matched (after culling outliers using a RANSAC-like procedure), the registration parameters are computed. The postregistration error between the reference image and the stabilized image is then estimated by evaluating the translation between these images at points exhibiting good conditioning with respect to translation. The proposed method of tiepoint selection and matching using condition numbers provides a reliable basis for registration. The method has been tested on a large and diverse collection of images - multidate Landsat images, aerial images, aerial videos, and infrared images. A Web site where users can try our registration software is available and is being actively used by researchers around the world.",2003,0, 8079,A light-weight process for capturing and evolving defect reduction experience,"Selecting technologies for developing software is a crucial activity in software projects. Defect reduction is an example of an area in which software developers have to decide what technologies to use. CeBASE is an NSF-funded project that has the role of improving software development by providing decision support on the selection of techniques and tools. The decision support is based on empirical data organized in experience bases and refined into high-level models. Empirical data is collected through various activities, for example through eWorkshops in which experts discuss important issues, and formalized using the lightweight ""knowledge dust to knowledge pearl"" process.",2002,0, 8080,A dynamic replica selection algorithm for tolerating timing faults,"Server replication is commonly used to improve the fault tolerance and response time of distributed services. An important problem when executing time-critical applications in a replicated environment is that of preventing timing failures by dynamically selecting the replicas that can satisfy a client's timing requirement, even when the quality of service is degraded due to replica failures and excess load on the server.
We describe the approach we have used to solve this problem in AQuA, a CORBA-based middleware that transparently replicates objects across a local area network. The approach we use estimates a replica's response time distribution based on performance measurements regularly broadcast by the replica. An online model uses these measurements to predict the probability with which a replica can prevent a timing failure for a client. A selection algorithm then uses this prediction to choose a subset of replicas that can together meet the client's timing constraints with at least the probability requested by the client. We conclude with experimental results based on our implementation.",2001,0, 8081,Fault Detection and Recovery in a Transactional Agent Model,"Servers can be made fault-tolerant through replication and checkpointing technologies in the client-server model. However, application programs cannot be performed and servers might block in the two-phase commitment protocol due to a client fault. In this paper, we discuss the transactional agent model, which makes application programs fault-tolerant by taking advantage of mobile agent technologies, where a program can move from one computer to another in a network. Here, an application program on a faulty computer can be performed on another operational computer by moving the program. A transactional agent moves to computers where objects are locally manipulated. Manipulated objects have to be held until the transactional agent terminates. Some sibling computers which the transactional agent has visited might become faulty before the transactional agent terminates. The transactional agent has to detect faulty sibling computers and decide whether to commit/abort or to continue the computation by skipping the faulty computers, depending on the commitment condition. For example, a transactional agent has to abort in the atomic commitment if a sibling computer is faulty. A transactional agent can just drop a faulty sibling computer in the at-least-one commitment. We evaluate the transactional agent model in terms of how long it takes for the transactional agent to handle faulty sibling computers.",2007,0, 8082,A Fault Tolerance Approach for Enterprise Applications,"Service oriented architectures (SOAs) have emerged as a preferred solution to tackle the complexity of large-scale, complex, distributed, and heterogeneous systems. Key to successful operation of these systems is their reliability and availability. In this paper, we propose an approach to creating fault tolerant SOA implementations based on an architectural pattern called Rich Services. Our approach is model-driven, focuses on interaction specifications as the means for defining services and managing their failures, and is technology independent. We leverage an enterprise service bus (ESB) framework to implement a system based on the fault tolerant Rich Service pattern. We evaluate our approach by measuring availability and reliability of an experimental system in the e-business domain.",2008,0, 8083,A Fault Detection Mechanism for SOA-Based Applications Based on Gauss Distribution,"Service-oriented architecture (SOA) is an ideal solution for building application systems with low cost and high efficiency, but fault detection is not supported in most SOA-based applications. Based on the Gauss distribution, a fault detection mechanism for SOA-based applications is proposed.
Faults in SOA can be detected by comparing the calculated confidence interval with the predefined parameters at runtime according to the descriptor. Based on the fault detection algorithm, the reference service model is improved to support the proposed algorithm by adding suitable components.",2009,0, 8084,Comparison of worst case errors in linear and neural network approximation,"Sets of multivariable functions are described for which worst case errors in linear approximation are larger than those in approximation by neural networks. A theoretical framework for such a description is developed in the context of nonlinear approximation by fixed versus variable basis functions. Comparisons of approximation rates are formulated in terms of certain norms tailored to sets of basis functions. The results are applied to perceptron networks.",2002,0, 8085,Improving the Fault Tolerance of Nanometric PLA Designs,"Several alternative building blocks have been proposed to replace planar transistors, among which a prominent spot belongs to nanometric filaments such as silicon nanowires (SiNWs) and carbon nanotubes (CNTs). However, chips leveraging these nanoscale structures are expected to be affected by a large number of manufacturing faults, far beyond what chip architects have learned to counter. In this paper, the authors show a design flow, based on software mapping algorithms, to improve the yield of nanometric programmable logic arrays (PLAs). While further improvements to the manufacturing technology will be needed to make these devices fully usable, this flow can significantly shrink the gap between current and desired yield levels. Also, the approach does not need post-fabrication functional analysis and mapping, therefore dramatically cutting verification costs. The authors check PLA yields by means of an accurate analyzer after Monte Carlo fault injection. They show that, compared to a baseline policy of wire replication, they achieve equal or better yields (8% over a set of designs) depending on the underlying defect assumptions.",2007,0, 8086,Fault tolerance of feed-forward artificial neural network architectures targeting nano-scale implementations,"Several circuit architectures have been proposed to overcome logic faults due to the high defect densities that are expected to be encountered in the first generations of nanoelectronic systems. How feed-forward artificial neural networks can be exploited for the purpose of conceiving highly reliable Boolean gates is the topic of this paper. Computer simulations show that feed-forward artificial neural networks can be trained to absorb faults while implementing Boolean functions of various complexity. Using this approach, it can be shown that very high device failure rates (up to 20%) can be accommodated. The cost is to be paid in terms of hardware overhead, which is comparable to the area cost of conventional hardware redundancy measures.",2007,0, 8087,Rake Reception With Channel Estimation Error,"Several digital cellular systems employ coherent RAKE reception, wherein channel estimates are used to combine despread values. In this paper, a maximum likelihood (ML) approach to RAKE combining that accounts for channel estimation error, including error correlation across despread values, is examined. Fading and noise correlation are also considered. The performance of the ML approach is compared to traditional RAKE combining as well as approximate ML-based approaches.
Results show that when the channel estimation error and noise are of the same order, ML-based approaches can provide gains on the order of 1 dB over traditional RAKE reception.",2006,0, 8088,Eliciting design requirements for maintenance-oriented IDEs: a detailed study of corrective and perfective maintenance tasks,"Several innovative tools have found their way into mainstream use in modern development environments. However, most of these tools have focused on creating and modifying code, despite evidence that most of programmers' time is spent understanding code as part of maintenance tasks. If new tools were designed to directly support these maintenance tasks, what types would be most helpful? To find out, a study of expert Java programmers using Eclipse was performed. The study suggests that maintenance work consists of three activities: (1) forming a working set of task-relevant code fragments; (2) navigating the dependencies within this working set; and (3) repairing or creating the necessary code. The study identified several trends in these activities, as well as many opportunities for new tools that could save programmers up to 35% of the time they currently spend on maintenance tasks.",2005,0, 8089,Designing and Reconfiguring Fault-Tolerant Hypercubes,"Several interconnection networks (such as meshes and hypercubes) can be modeled as circulant graphs. As a result, methods previously developed for constructing fault-tolerant solutions of circulant graphs can also be applied to these networks. Among these methods, the one based on the idea of ""offsets partitioning"" is the most efficient (for circulant graphs). We review this method in this paper, and extend its applications to hypercubes. Moreover, we develop new algorithms to reconfigure circulant graphs and hypercubes. Our results show that the fault-tolerant solutions obtained and the reconfiguration algorithms developed are efficient.",2006,0, 8090,PEEC: a channel-adaptive feedback-based error,"Reliable transmission is a challenging task over wireless LANs, since wireless links are known to be susceptible to errors. Although the current IEEE802.11 standard ARQ error control protocol performs relatively well over channels with very low bit error rates (BERs), this performance deteriorates rapidly as the BER increases. This paper investigates the problem of reliable transmission in a contention-free wireless LAN and introduces a packet embedded error control (PEEC) protocol, which employs packet-embedded parity symbols instead of ARQ-based retransmission for error recovery. Specifically, depending on receiver feedback, PEEC adaptively estimates channel conditions and administers the transmission of (data and parity) symbols within a packet. This enables successful recovery of both new data and old unrecovered data from prior transmissions. In addition to theoretically analyzing PEEC, the performance of the proposed scheme is extensively analyzed over real channel traces collected on 802.11b WLANs. We compare PEEC performance with the performance of the IEEE802.11 standard ARQ protocol as well as contemporary protocols such as enhanced ARQ and the hybrid ARQ/FEC. Our analysis and experimental simulations show that PEEC outperforms all three competing protocols over a wide range of actual 802.11b WLAN collected traces.
Finally, the design and implementation of PEEC using an adaptive low-density-parity-check (A-LDPC) decoder is presented.",2008,0, 8091,Cleansing Test Suites from Coincidental Correctness to Enhance Fault-Localization,"Researchers have argued that for failure to be observed the following three conditions must be met: 1) the defect is executed, 2) the program has transitioned into an infectious state, and 3) the infection has propagated to the output. Coincidental correctness arises when the program produces the correct output, while conditions 1) and 2) are met but not 3). In previous work, we showed that coincidental correctness is prevalent and demonstrated that it is a safety reducing factor for coverage-based fault localization. This work aims at cleansing test suites from coincidental correctness to enhance fault localization. Specifically, given a test suite in which each test has been classified as failing or passing, we present three variations of a technique that identify the subset of passing tests that are likely to be coincidentally correct. We evaluated the effectiveness of our techniques by empirically quantifying the following: 1) how accurately did they identify the coincidentally correct tests, 2) how much did they improve the effectiveness of coverage-based fault localization, and 3) how much did coverage decrease as a result of applying them. Using our better performing technique and configuration, the safety and precision of fault-localization was improved for 88% and 61% of the programs, respectively.",2010,0, 8092,Using variable-length error-correcting codes in MPEG-4 video,Reversible variable length (RVL) codes are used in MPEG-4 video coding to improve its error resilience. Algorithms used to design variable-length error-correcting (VLEC) codes are modified so as to construct efficient RVL codes with a smaller average length than those found in the literature. It is also shown that RVL codes are a special (weak) class of VLEC codes. Consequently more powerful VLEC codes can be used in the MPEG-4 codec and it is shown that performance gains of up to 20 dB in peak signal to noise ratio (PSNR) can be obtained using a soft-decision sequential decoder with relatively simple VLEC codes. This increase in performance is obtained at the expense of an order of magnitude increase in decoding complexity,2005,0, 8093,Differential Histogram Modification-Based Reversible Watermarking with Predicted Error Compensation,"Reversible watermarking inserts a watermark into digital media in such a way that visual transparency is preserved and the original media can be restored from the marked one without any loss of media quality. High capacity and high visual quality are major requirements for reversible watermarking. In this paper, we present a novel reversible watermarking scheme that embeds message bits by modifying the differential histogram of adjacent pixels. Also, overflow and underflow problems are prevented with the proposed predicted error compensation scheme. Through experiments on various images, we prove that the presented scheme achieves 100% reversibility, high capacity, and high visual quality over other methods, while maintaining the induced-distortion low.",2010,0, 8094,A review of portable FES-based neural orthoses for the correction of drop foot,"Reviews the technological developments in neural orthoses for the correction of upper motor neurone drop foot since 1961, when the technique was first proposed by Liberson and his co-workers.
Drop foot stimulator (DFS) developments are reviewed starting with hard-wired single-channel and multichannel surface functional electrical stimulation (FES) systems, followed by implanted drop foot stimulators, and then continuing with microprocessor-based surface and implanted drop foot stimulators. The review examines the role of artificial and ""natural"" sensors as replacements for the foot-switch as the primary control sensor in drop foot stimulators. DFS systems incorporating real-time control of FES and completely implanted DFS systems finish the review.",2002,0, 8095,Operational control and protection implications of fault current limitation in distribution networks,"Rising short-circuit fault current levels is one of the problems associated with the increased presence of distributed generation (DG) in electrical networks. A fault level management system involving superconducting fault current limiters (SCFCLs) is a potential solution to this issue. The typical applications of SCFCLs and their advantages over traditional fault current limitation measures are discussed. However, several technical issues remain, relating to: SCFCL post-fault recovery time; network control and protection; and maloperation of the SCFCL due to non-fault transient currents, such as transformer inrush. Initial solutions to these problems, involving a distributed software-based fault level management system, are presented.",2009,0, 8096,Utilisation of motion similarity in Colour-plus-Depth 3D video for improved error resiliency,"Robust 3D stereoscopic video transmission over error-prone networks has been a challenging task. Sustainability of the perceived 3D video quality is essential in case of channel losses. The Colour-plus-Depth format, on the other hand, has been popular for representing the stereoscopic video, due to its flexibility, low encoding cost compared to left-right stereoscopic video and backwards compatibility. Traditionally, the similarities existing between the colour and the depth map videos are not exploited during 3D video coding. In other words, both components are encoded separately. The similarities include the similarity in motion, image gradients and segments. In this work, we propose to exploit the similarity in the motion characteristics of the colour and the depth map videos by computing only a set of motion vectors and duplicating it for the sake of error resiliency. As previous research has shown that the stereoscopic video quality is primarily affected by the colour texture quality, the motion vectors are computed specifically for the colour video component and the corresponding vectors are used to encode the depth maps. Since the colour motion vectors are protected by duplication, the results have shown that both the colour video quality and the overall stereoscopic video quality are maintained in error-prone conditions at the expense of a slight loss in depth map video coding performance. Furthermore, the total encoding time is reduced by not calculating the motion vectors for the depth map.",2010,0, 8097,The Intelligent System Design of Remote Fault Diagnosis of Reducer Based on GA and NN,"Reducer failure was analyzed by use of a BP neural network in this paper. A model of failure diagnosis was established. By using genetic algorithms, the weight values, the thresholds, and the network structure of the neural network were optimized. The genetic neural network model was applied to the system design of remote reducer fault diagnosis.
Comparing the training error curves of the BP neural network and the genetic neural network showed that the genetic neural network was higher in training speed and accuracy than the BP neural network training model.",2009,0, 8098,Bug-Inducing Language Constructs,"Reducing bugs in software is a key issue in software development. Many techniques and tools have been developed to automatically identify bugs. These techniques vary in their complexity, accuracy and cost. In this paper we empirically investigate the language constructs which frequently contribute to bugs. Revision histories of eight open source projects developed in multiple languages are processed to extract bug-inducing language constructs. Twenty-six different language constructs and syntax elements are identified. We find that the most frequent bug-inducing language constructs are function calls, assignments, conditions, pointers, use of NULL, variable declaration, function declaration and return statement. These language constructs account for more than 70 percent of bug-inducing hunks. Different projects are statistically correlated in terms of frequencies of bug-inducing language constructs. Developers within a project and between different projects also have similar frequencies of bug-inducing language constructs. Quality assurance developers can focus code reviews on these frequent bug-inducing language constructs before committing changes.",2009,0, 8099,Evolutionary strategies and intrinsic fault tolerance,"Redundancy is a critical component in the design of fault tolerant systems; both hardware and software. This paper explores the possibilities of using evolutionary techniques to first produce a processing system that will perform a required function, and then consider its applicability for producing useful redundancy that can be made use of in the presence of faults, i.e., is it fault tolerant? Results obtained using evolutionary strategies to automatically create redundancy as part of the design process are given. The experiments are undertaken on a Virtex FPGA with intrinsic evolution taking place. The results show that not only does the evolutionary process produce useful redundancy, it is also possible to reconfigure the system in real-time on the Virtex device",2001,0, 8100,On the Use of Mutation Faults in Empirical Assessments of Test Case Prioritization Techniques,"Regression testing is an important activity in the software life cycle, but it can also be very expensive. To reduce the cost of regression testing, software testers may prioritize their test cases so that those which are more important, by some measure, are run earlier in the regression testing process. One potential goal of test case prioritization techniques is to increase a test suite's rate of fault detection (how quickly, in a run of its test cases, that test suite can detect faults). Previous work has shown that prioritization can improve a test suite's rate of fault detection, but the assessment of prioritization techniques has been limited primarily to hand-seeded faults, largely due to the belief that such faults are more realistic than automatically generated (mutation) faults. A recent empirical study, however, suggests that mutation faults can be representative of real faults and that the use of hand-seeded faults can be problematic for the validity of empirical results focusing on fault detection.
We have therefore designed and performed two controlled experiments assessing the ability of test case prioritization techniques to improve the rate of fault detection, measured relative to mutation faults. Our results show that prioritization can be effective relative to the faults considered, and they expose ways in which that effectiveness can vary with characteristics of faults and test suites. More importantly, a comparison of our results with those collected using hand-seeded faults reveals several implications for researchers performing empirical studies of test case prioritization techniques in particular and testing techniques in general",2006,0, 8101,Fault detection for mobile robots using redundant positioning systems,"Reliable navigation is a very important part of an autonomous mobile robot system. This means for instance that the robot should not lose track of its position, even if unexpected events like wheel slip and collisions occur. The standard approach to this problem is to construct a navigation system that is robust in itself. This paper proposes that fault detection can also be performed outside the normal navigation system, as an additional fault detector. Besides increasing the robustness, a means for detecting deviations is obtained, which can be important for the rest of the robot system, for instance the top level planner. The method uses two or more sources of robot position estimates, and compares them to detect unexpected deviation without getting deceived by drift or different characteristics in the position systems it gets information from. Both relative and absolute position sources can be used, meaning that existing positioning systems already implemented can be used in the detector. For detection purposes, an extended Kalman filter is used in conjunction with a CUSUM test. The detector is able to not only detect faults, but also give an estimate of when the fault occurred, which is useful for doing fault recovery. The detector is easy to implement, as it requires no modification of existing systems. Also the computational demands are very low. The approach is implemented and demonstrated on a mobile robot, using odometry and a scan matcher as sources of position information. It is shown that the system is able to detect wheel slip in real-time",2006,0, 8102,Comparison of accelerated DRAM soft error rates measured at component and system level,"Single event upsets from terrestrial cosmic rays (i.e. high-energy neutrons) are more important than alpha particle induced soft errors in modern DRAM devices. A high intensity broad spectrum neutron source from the Los Alamos Neutron Science Center (LANSCE) was used to characterize the nature of these upsets in DRAM technologies ranging from 180 nm down to 70 nm from several vendors at the DIMM component level using a portable memory tester. Another set of accelerated neutron beam tests were made with DRAM DIMMs mounted on motherboards. Soft errors were characterized using these two methods to determine the influence of neutron angle, frequency, data patterns and process technology. The purpose of this study is to analyze the effects of these differences on DRAM soft errors.",2008,0, 8103,An Integrated Phase to Ground Fault Protection for Neutral Compensated Power Networks,"A single phase to ground fault in neutral insulated or compensated power networks is difficult to detect.
In order to synthetically employ multiple sources of available fault information, a D-S evidence theory based fault detecting fusion method is investigated for neutral compensated power networks. Three kinds of fault information, namely transient, abrupt wave and real power, are employed for the purpose. The concept of fault measure is introduced to satisfy the fusion description. The corresponding fault measure algorithms for the three kinds of fault information are constructed. Employing the investigated scenario, a fault detecting device was developed with virtual instrument. The device was tested in a power distribution network and has operated in the field. Testing and operating results proved the presented scenario. By properly employing the multifold fault information, the detecting scenario is reliable and adaptive.",2007,0, 8104,Site qualification above 1 GHz and SVSWR systemic errors,"Site VSWR was developed under the belief that a test site can be characterized in the same fashion as a transmission line. The intent of the test is to determine if the spatial impedance of the site deviates from free space resulting in a reflection coefficient at the boundary. The key aspect of transmission line VSWR measurements is continuous movement of the clamp or RF transducer along the transmission line to find the maximum and minimum voltages. This article will review the inherent error in the SVSWR method written into CISPR 16-1-4. This error is due to the fact that the standard samples only six discrete locations. The spacing of those locations relative to the wavelength of the sampled standing wave results in an inherent compliance bias. In other words, the method in the standard will always result in the site performance being represented as better than it actually is and the true site performance is almost never actually measured. A simple mathematical model was developed to illustrate the problem. Empirical validation tests were conducted with sampling every millimeter. Test results validate the very substantial error term in the method. An alternative approach, time domain reflectivity, which provides complete site evaluation faster and more accurately than the VSWR method, will be briefly discussed.",2010,0, 8105,Slant Correction of Vehicle License Plate Based on Feature Point and Principal Component Analysis,"Slant correction plays an important role in vehicle license plate automatic recognition systems. In order to reduce or avoid adverse effects caused by noise disturbance and fragmentary frame of vehicle license plate, as well as speed up the computation, an approach based on feature point and principal component analysis (PCA) is presented. Feature points are considered as the edge of characters on the license plate, with a lined orderliness which reflects the slant angle of the plate. Firstly, a pretreatment process is carried out to extract the feature points of the plate, and the direction of the principal component, which is considered as the slant angle of the plate, is achieved through principal component analysis of the feature points, then the correction of the plate is accomplished.
The experimental results demonstrate that this method enjoys the advantages of being simple and less demanding in image quality and license plate frame, and makes the correction of the plate easier and more precise compared with other methods such as the Hough transform.",2008,0, 8106,Sliding mode methods for fault detection and fault tolerant control,"Sliding mode methods have been historically studied because of their strong robustness properties to a certain class of uncertainty. This is achieved by employing nonlinear control/injection signals to force the system trajectories to attain in finite time a motion along a surface in the state-space. This paper will consider how these ideas can be exploited for fault detection (specifically fault signal estimation) and subsequently fault tolerant control. The paper will also describe applications of these ideas to aerospace systems. It will describe piloted flight simulator results associated with the GARTEUR AG16 action group on fault tolerant control. The results demonstrate the successful real-time implementation of the proposed fault tolerant control scheme on a motion flight simulator configured to represent the EL-AL aircraft.",2010,0, 8107,Fault Analysis of the Stream Cipher Snow 3G,"Snow 3G is the backup encryption algorithm used in the mobile phone UMTS technology to ensure data confidentiality. Its design - a combiner with memory - is derived from the stream cipher Snow 2.0, with improvements against algebraic cryptanalysis and distinguishing attacks. No attack is known against Snow 3G today. In this paper, a fault attack against Snow 3G is proposed. Our attack recovers the secret key with only 22 fault injections.",2009,0, 8108,A Generic Fault Countermeasure Providing Data and Program Flow Integrity,"So far many software countermeasures against fault attacks have been proposed. However, most of them are tailored to a specific cryptographic algorithm or focus on securing the processed data only. In this work we present a generic and elegant approach by using a highly fault secure algebraic structure. This structure is compatible with finite fields and rings and preserves its error detection property throughout addition and multiplication. Additionally, we introduce a method to generate a fingerprint of the instruction sequence. Thus, it is possible to check the result for data corruption as well as for modifications in the program flow. This is even possible if the order of the instructions is randomized. Furthermore, the properties of the countermeasure allow the deployment of error detection as well as error diffusion. We point out that the overhead for the calculations and for the error checking within this structure is reasonable and that the transformations are efficient. In addition we discuss how our approach increases the security in various kinds of fault scenarios.",2008,0, 8109,Can Knowledge Regarding the Presence of Countermeasures Against Fault Attacks Simplify Power Attacks on Cryptographic Devices?,"Side-channel attacks are nowadays a serious concern when implementing cryptographic algorithms. Powerful ways for gaining information about the secret key as well as various counter-measures against such attacks have been recently developed. Although it is well known that such attacks can exploit information leaked from different sources, most prior works have only addressed the problem of protecting a cryptographic device against a single type of attack.
Consequently, there is very little knowledge on how a scheme for protecting a device against one type of side-channel attack may affect its vulnerability to other types of side-channel attacks. In this paper we focus on devices that include protection against fault injection attacks (using different error detection schemes) and explore whether the presence of such fault detection circuits affects the resistance against attacks based on power analysis. Using the AES S-Box as an example, we performed attacks on the unprotected implementation as well as modified implementations with parity check circuits or residue check circuits (mod3 and mod7). In particular, we focus on the question of whether the knowledge of the presence of error detection circuitry in the cryptographic device can help an attacker who attempts to mount a power attack on the device. Our results show that the presence of error detection circuitry helps the attacker even if he is unaware of this circuitry, and that the benefit to the attacker increases with the number of check bits used for the purpose of error detection.",2008,0, 8110,An Integrated Silicon Carbide (SiC) Based Single Phase Rectifier with Power Factor Correction,"Silicon carbide (SiC) based power devices exhibit superior properties such as very low switching losses, fast switching behavior, improved reliability and high temperature operation capabilities. These properties contribute toward the ability to increase switching frequency, decrease the size of passive components and switches, and reduce the need for cooling, thus making the devices an excellent candidate for AC/DC power supplies. In this paper a SiC based integrated single phase rectifier with power factor correction (PFC) is presented. The proposed topology has many advantages including fewer semiconductor components; the presence of AC side inductor resulting in reduced EMI interference, and higher performance. This approach takes advantage of the superior properties of SiC devices and the reduced number of devices in the proposed converter to achieve higher efficiency, smaller size and better performance at high temperature. A performance and efficiency evaluation of the rectifier is presented and the results are compared with benchmark Si solutions",2005,0, 8111,Research of analysis method of power system real-time digital simulation error,"Simulation accuracy has great effects on power system analysis and control. In this paper, in order to analyze the power system real-time digital simulation error, the Prony algorithm is used to extract the features of the signal. Based on the analysis results of this algorithm, a residual similarity index was used to characterize the global error of the simulation, and then frequency, damping and amplitude similarity indexes were employed to represent the feature errors of transient signals, respectively. A calculation program for power system real-time digital simulation accuracy evaluation was employed to analyze the measured waveforms and results of fault simulations on RTDS. Simulation results have proved the validity of the proposed analysis method of power system real time digital simulation error.",2010,0, 8112,Cross-talk correction for dual-isotope imaging with a dedicated cardiac SPECT camera,"Simultaneous dual-isotope perfusion imaging has the potential to reduce protocol durations and improve image alignment but requires cross-talk correction. The Discovery NM 530c dedicated cardiac camera offers two possible advantages over traditional cameras with respect to cross-talk.
The cadmium zinc telluride (CZT) detectors have improved energy resolution and the pinhole collimators lead to reduced lead-X-ray production. The objective of this study is to determine the crosstalk fractions and blurring kernels required for 99mTc/201Tl rest/stress imaging. Forty-six pairs of single-isotope clinical 99mTc-tetrofosmin and 201Tl studies were examined. The patients were matched for height, weight, and gender. Listmode data (8min) were reprocessed to obtain the signal in the following windows: Tc-10 (140keV±10%), Tc-8 (140keV±8%), Tc-6 (140keV±6%), Sc (100keV±10%), Tl-167 (167keV±8%), Tl-70a (70keV±15%), and Tl-70b (70keV-15%+10%). The relative counts in each window were determined for both isotopes for rest and stress imaging. Symmetric 2D Gaussian kernels were found that minimized the squared difference between convolved photon distributions from one window and corresponding measured cross-talk. 14 sets of simulated dual-isotope studies were created and the correction assessed by comparing the summed stress scores before and after correction. 99mTc contributes 1% of the counts in the Tl-167 window and 40-80% of the counts in the Tl-70 window. 201Tl contributes 1-10% of the counts in the 99mTc-photopeak windows. Crosstalk contamination is less with the narrower energy windows that capitalize on the improved energy resolution of the CZT system. The weighting factors for Tc-6 and Sc windows for optimal estimation of the Tl-70b window interference were: -0.24:1.24 with Gaussian σ=0.58 and 1.2 pixels respectively. Considering the 14 sets of simulated dual-isotope studies, the summed stress scores for the corrected data correlated more closely with truth than did the uncorrected data. The multi-energy window convolution subtraction approach has promise for correcting cross-talk in 201Tl/99mTc simultaneous dual-isotope myocardial perfusion imaging with a dedicated cardiac SPECT system.",2010,0, 8113,Frame Error Concealment Technique Using Adaptive Inter-Mode Estimation for H.264/AVC,"Since H.264/AVC achieves a high compression ratio by reducing spatio-temporal redundancy in video sequences, the payload of a single packet can often contain a whole frame encoded by H.264/AVC. Therefore, the loss of a single packet not only causes the loss of a whole frame, but also produces error propagation into succeeding frames. To deal with this problem, in this paper, we propose a novel frame error concealment method for H.264/AVC. First, the proposed method extrapolates motion vectors from available neighboring frames onto the lost frame. Then, inter-modes of all macroblocks in the lost frame are adaptively estimated by using the extrapolated motion vectors and features of H.264/AVC. Experimental results exhibit that the proposed method outperforms the conventional methods in terms of both objective and subjective video quality.",2008,0, 8114,Maximizing the Fault Tolerance Capability of Fixed Priority Schedules,"Real-time systems typically have to satisfy complex requirements, mapped to the task attributes, eventually guaranteed by the underlying scheduler. These systems consist of a mix of hard and soft tasks with varying criticality, as well as associated fault tolerance requirements. Additionally, the relative criticality of tasks could undergo changes during the system evolution.
Time redundancy techniques are often preferred in embedded applications and, hence, it is extremely important to devise appropriate methodologies for scheduling real-time tasks under failure assumptions. In this paper, we propose a methodology to provide a priori guarantees in fixed priority scheduling (FPS) such that the system will be able to tolerate one error per every critical task instance. We do so by using integer linear programming (ILP) to derive task attributes that guarantee re-execution of every critical task instance before its deadline, while keeping the associated costs minimized. We illustrate the effectiveness of our approach, in comparison with fault tolerant (FT) adaptations of the well-known rate monotonic (RM) scheduling, by simulations.",2008,0, 8115,Optimal object state transfer - recovery policies for fault tolerant distributed systems,"Recent developments in the field of object-based fault tolerance and the advent of the first OMG FT-CORBA compliant middleware raise new requirements for the design process of distributed fault-tolerant systems. In this work, we introduce a simulation-based design approach based on the optimum effectiveness of the compared fault tolerance schemes. Each scheme is defined as a set of fault tolerance properties for the objects that compose the system. Its optimum effectiveness is determined by the tightest effective checkpoint intervals, for the passively replicated objects. Our approach allows mixing miscellaneous fault tolerance policies, as opposed to the published analytic models, which are best suited in the evaluation of single-server process replication schemes. Special emphasis has been given to the accuracy of the generated estimates using an appropriate simulation output analysis procedure. We provide showcase results and compare two characteristic warm passive replication schemes: one with periodic and another one with load-dependent object state checkpoints. Finally, a trade-off analysis is applied, for determining appropriate checkpoint properties, with respect to a specified design goal.",2004,0, 8116,Single-Event Upset (SEU) Results of Embedded Error Detect and Correct Enabled Block Random Access Memory (Block RAM) Within the Xilinx XQR5VFX130,Recent heavy ion measurements of the single-event upset (SEU) cross section for 65 nm embedded block random access memory (Block RAM) are presented. Results of initial investigation into the on-chip Error Detection and Correction (EDAC) are also discussed.,2010,0, 8117,Intelligent Agents for Fault Tolerance: From Multi-agent Simulation to Cluster-Based Implementation,"Recent research in multi-agent systems incorporates fault tolerance concepts, but does not explore the extension and implementation of such ideas for large scale parallel computing systems. The work reported in this paper investigates a swarm array computing approach, namely 'Intelligent Agents'. A task to be executed on a parallel computing system is decomposed into sub-tasks and mapped onto agents that traverse an abstracted hardware layer. The agents intercommunicate across processors to share information during the event of a predicted core/processor failure and for successfully completing the task.
The feasibility of the approach is validated by simulations on an FPGA using a multi-agent simulator, and by the implementation of a parallel reduction algorithm on a computer cluster using the Message Passing Interface.",2010,0, 8118,A self-orthogonal code and its maximum-likelihood decoder for combined channel estimation and error protection,"Recent research has confirmed that better system performance can be obtained by jointly considering channel equalization and channel estimation in the code design, when compared with the system with individually optimized devices. However, the existing codes are mostly searched by computers, and hence, exhibit no good structure for efficient decoding. In this paper, a systematic construction for the codes for combined channel estimation and error protection is proposed. Simulations show that it can yield codes of comparable performance to the computer-searched best codes. In addition, the structural codes can now be maximum-likelihoodly decodable in terms of a newly derived recursive metric for use in the priority-first search decoding algorithm. Thus, the decoding complexity reduces considerably when compared with that of the exhaustive decoder. In light of the rule-based construction, the feasible codeword length that was previously limited by the capability of code-search computers can now be further extended, and therefore facilitates their applications.",2008,0, 8119,Elimination of crucial faults by a new selective testing method,"Recent software systems contain a lot of functions to provide various services. According to this tendency, software testing becomes more difficult than before and the cost of testing increases significantly, since many test items are required. In this paper we propose and discuss a new selective software testing method that is constructed from the previous testing method by simplifying the testing specification. We have presented, in the previous work, a selective testing method to perform highly efficient software testing. The selective testing method has introduced an idea of functional priority testing and generated test items according to their functional priorities. Important functions with high priorities are tested in detail, and functions with low priorities are tested less intensively. As a result, additional cost for generating testing instructions becomes relatively high. In this paper, in order to reduce its cost, we change the way of giving information with respect to priorities. The new method gives the priority only rather than generating testing instructions for each test item, which makes the testing method quite simple and results in cost reduction. Except for this change, the new method is essentially the same as the previous method. We applied this new method to the actual development of a software tool and evaluated its effectiveness. From the result of the application experiment, we confirmed that many crucial faults can be detected by using the proposed method.",2002,0, 8120,Fault-Tolerant Distributed Deployment of Embedded Control Software,"Safety-critical feedback-control applications may suffer faults in the controlled plant as well as in the execution platform, i.e., the controller. Control theorists design the control laws to be robust with respect to the former kind of faults while assuming an idealized scenario for the latter. The execution platforms supporting modern real-time embedded systems, however, are distributed architectures made of heterogeneous components that may incur transient or permanent faults.
Making the platform fault tolerant involves the introduction of design redundancy with obvious impact on the final cost. We present a design flow that enables the efficient exploration of redundancy/cost tradeoffs. After providing a system-level specification of the target platform and the fault model, designers can rely on the synthesis of the low-level fault-tolerance mechanisms. This is performed automatically as part of the embedded software deployment through the combination of the following three steps: replication, mapping, and scheduling. Our approach has a sound foundation in fault-tolerant data flow, a novel model of computation that simplifies the integration of formal validation techniques. Finally, we report on the application of our design flow to two case studies from the automotive industry: a steer-by-wire system from General Motors and a drive-by-wire system from BMW.",2008,0, 8121,Polyphase Downsampling Based Redundant Picture Coding for SVC Error Resiliency,Scalable video coding (SVC) is currently being developed as an extension to the H.264/AVC by the Joint Video Team (JVT). The SVC addresses coding schemes for reliable video delivery over heterogeneous networks for diverse clients using available system resources. SVC error resilience strives for reliability as well as efficiency of video transmission under unreliable network conditions. This paper presents a polyphase downsampling based redundant picture coding for SVC error resilience. Simulation results report that the proposed method outperforms the SVC error concealment method by 1.7 dB on average in terms of the PSNR.,2008,0, 8122,Error Resilient Coding and Error Concealment in Scalable Video Coding,"Scalable video coding (SVC), which is the scalable extension of the H.264/AVC standard, was developed by the Joint Video Team (JVT) of ISO/IEC MPEG (Moving Picture Experts Group) and ITU-T VCEG (Video Coding Experts Group). SVC is designed to provide adaptation capability for heterogeneous network structures and different receiving devices with the help of temporal, spatial, and quality scalabilities. It is challenging to achieve graceful quality degradation in an error-prone environment, since channel errors can drastically deteriorate the quality of the video. Error resilient coding and error concealment techniques have been introduced into SVC to reduce the quality degradation impact of transmission errors. Some of the techniques are inherited from or applicable also to H.264/AVC, while some of them take advantage of the SVC coding structure and coding tools. In this paper, the error resilient coding and error concealment tools in SVC are first reviewed. Then, several important tools such as loss-aware rate-distortion optimized macroblock mode decision algorithm and error concealment methods in SVC are discussed and experimental results are provided to show the benefits from them.
The results demonstrate that PSNR gains can be achieved for the conventional inter prediction (IPPP) coding structure or the hierarchical bi-predictive (B) picture coding structure with large group of pictures size, for all the tested sequences and under various combinations of packet loss rates, compared with the basic joint scalable video model (JSVM) design applying no error resilient tools at the encoder and only picture copy error concealment method at the decoder.",2009,0, 8123,Error resilience of intra-die and inter-die communication with 3D spidergon STNoC,"Scaling down in very deep submicron (VDSM) technologies increases the delay and power consumption of on-chip interconnects, while the reliability and yield decrease. In high performance integrated circuits wires become the performance bottleneck and we are shifting towards communication centric design paradigms. Networks-on-chip and stacked 3D integration are two emerging technologies that alleviate the performance difficulties of on-chip interconnects in nano-scale designs. In this paper we present a design-time configurable error correction scheme integrated at link-level in the 3D Spidergon STNoC on-chip communication platform. The proposed scheme detects errors and selectively corrects them on the fly, depending on the critical nature of the transmitted information, thus making the correction software controllable. Moreover, the proposed scheme can correct multiple error patterns by using interleaved single error correction codes, providing an increased level of reliability. The performance of the link and its cost in silicon and vertical wires are evaluated for various configurations.",2010,0, 8124,An Intelligent MLFQ Scheduling Algorithm (IMLFQ) with Fault Tolerant Mechanism,"Scheduling algorithms are used in operating systems to optimize the usage of processors. One of the most efficient algorithms for scheduling is the multi-layer feedback queue (MLFQ) algorithm which uses several queues with different quanta. The most important weakness of this method is the inability to define the optimized number of queues and the quantum of each queue. These factors affect the response time directly. Also this algorithm does not show any considerable improvement in the response time of the processes in comparison with the other scheduling algorithms. In this paper, a new algorithm is presented for solving these problems and minimizing the response time. In this algorithm, a recurrent neural network has been utilized to find both the number of queues and the optimized quantum of each queue. Also, in order to prevent any probable faults in processes' response time computation, a new fault tolerant approach has been presented. The experimental results show that using the IMLFQ algorithm results in better response and waiting time in comparison with other scheduling algorithms",2006,0, 8125,A Posteriori Error Estimation and Adaptive Mesh Refinement Controlling in Finite Element Analysis of 3-D Steady State Eddy Current Fields,"Several methods of a posteriori error estimation and adaptive refinement controlling in finite element analysis of 3-D steady-state eddy current field are described in this paper. An improved Z-Z method and a more efficient method of CIL are presented.
The numerical models of TEAM Workshop Problem 7 and 21A are used to verify the validity of the presented method.",2010,0, 8126,A fault-tolerant approach to secure information retrieval,"Several private information retrieval (PIR) schemes were proposed to protect users' privacy when sensitive information stored in database servers is retrieved. However, existing PIR schemes assume that any attack on the servers does not change the information stored or any computational results. We present a novel fault-tolerant PIR scheme (called FT-PIR) that protects users' privacy and at the same time ensures service availability in the presence of malicious server faults. Our scheme neither relies on any unproven cryptographic assumptions nor on the availability of tamper-proof hardware. A probabilistic verification function is introduced into the scheme to detect corrupted results. Unlike previous PIR research that attempted mainly to demonstrate the theoretical feasibility of PIR, we have actually implemented both a PIR scheme and our FT-PIR scheme in a distributed database environment. The experimental and analytical results show that only modest performance overhead is introduced by FT-PIR when compared with PIR in the fault-free cases. The FT-PIR scheme tolerates a variety of server faults effectively. In certain fail-stop fault scenarios, FT-PIR performs even better than PIR. It was observed that 35.82% less processing time was actually needed for FT-PIR to tolerate one server fault.",2002,0, 8127,SFIDA: a software implemented fault injection tool for distributed dependable applications,"SFIDA, a new software implemented fault injection tool which can be used to test the dependability of distributed applications on the Linux platform, is described in this paper. This has been integrated with a general debugging tool so that it has the functionality of both debugging and fault injection. It is assumed that the target application is composed of multiple components (programs) which cooperate for the result, and that its successful completeness is determined by a failure condition. SFIDA is capable of injecting transient and permanent hardware faults by emulating the error state incurred by hardware faults in the runtime environment of each program. It can also collect test results from all components and determine the soundness of the final result based on the failure condition.",2000,0, 8128,Differential Fault Analysis on SHACAL-1,"SHACAL-1, known as one of the finalists of the NESSIE project, originates from the compression component of the widely used hash function SHA-1. The requirements of confusion and diffusion are implemented through mixing operations and rotations other than substitution and permutation, thus there exists little literature on its immunity against fault attacks. In this paper, we apply differential fault analysis on SHACAL-1 in a synthetic approach. We introduce the random word fault model, present some theoretical arguments, and give an efficient fault attack based on the characteristic of the cipher.
Both theoretical predictions and experimental results demonstrate that 72 random faults are needed to obtain the 512-bit key with a success probability of more than 60%, while 120 random faults are enough to obtain the 512-bit key with a success probability of more than 99%.",2009,0, 8129,Accuracy of motion correction methods for PET brain imaging,"Recently published methods for motion correction in neurological PET include the multiple acquisition frame (MAF) and LOR rebinning methods. The aim of the present work was to compare the accuracy of reconstructions obtained with these methods when multiple, arbitrary movements were applied to a Hoffman brain phantom during 3D list mode acquisition. A reflective target attached to the phantom enabled a Polaris optical motion tracking system to monitor the phantom position and orientation in the scanner coordinate frame. The motion information was used in the motion correction algorithms. The MAF method was applied to the list-mode data after sorting them into a series of dynamic frames, while the LOR rebinning method was applied directly to the list-mode data. A proportion of the list mode events had to be discarded during rebinning because the application of the corrective spatial transformation removed them from the 3D projection space. A correction for these 'lost' events was implemented as a global post-reconstruction scale factor, based on the overall fraction of lost events. Reconstructions from both motion correction methods were compared with a motion-free reference scan of the same phantom. Motion correction produced a marked improvement in image clarity and reduced errors with respect to the reference scan. LOR rebinning with global loss correction was found to be more accurate than the MAF method",2004,0, 8130,Failure analysis of open faults by using detecting/un-detecting information on tests,"Recently, manufacturing defects including opens in the interconnect layers have been increasing. Therefore, a failure analysis for open faults has become important in manufacturing. Moreover, the failure analysis for open faults under a BIST environment is demanded. Since the quality of the failure analysis depends on the resolution of locating the fault, we propose a method for locating a single open fault at a stem, based only on detecting/un-detecting information on tests. Our method deduces candidate faulty stems based on the number of detections for a single stuck-at fault at each fan-out branch, by performing single stuck-at fault simulation with both detecting and un-detecting tests. To improve the ability of locating the fault, the method reduces the candidate faulty stems based on the number of detections for multiple stuck-at faults at fan-out branches of the candidate faulty stem, by performing multiple stuck-at fault simulation with detecting tests.",2004,0, 8131,More on general error locator polynomials for a class of binary cyclic codes,"Recently, the general error locator polynomials have been widely used in the algebraic decoding of binary cyclic codes. This paper utilizes the proposed general error locator polynomial to develop an algebraic decoding algorithm for a class of the binary cyclic codes. This general error locator polynomial differs greatly from the previous general error locator polynomial.
Each coefficient of the proposed general error locator polynomial is expressed as a binary polynomial in the single syndrome and the degrees of nonzero terms in the binary polynomial satisfy at least one congruence relation.",2010,0, 8132,Increasing FPGA resilience against soft errors using task duplication,"Reconfigurable computing systems are becoming increasingly widespread as they bring the flexibility of programmable systems and approach the performance of ASICs. While the prior research on FPGAs mainly studied issues such as performance, power, and area optimization, reliability related issues have not received much attention. However, with increasing soft error rates, providing resilience to soft errors in FPGA based embedded platforms is becoming an increasingly important issue. This paper proposes an OS-directed task duplication scheme for increasing reliability by providing resilience against soft errors. The idea is to exploit the unused portions of the FPGA space to schedule duplicates of active tasks. The outputs of the primary and duplicate tasks are compared to check for the existence of soft errors.",2005,0, 8133,Mapping faults to failures in SQL manipulation commands,"Summary form only given. Databases and their applications are topics of interest to both academia and industry. However, they have received little attention towards improving the knowledge about their associated faults and failures. This lack of knowledge is an impediment to the definition of adequate software testing techniques applicable in this domain and to the development of quality software. We discuss issues arising from SQL manipulation failures and present the results of an investigation aiming at understanding the relationship between faults and failures. The analysis of data mapping indicates that: i) there is a many-to-many mapping between faults and failures; ii) failure dimensions are dependent on fault type, faulty command, and the database itself; and iii) knowledge of the manipulation faults is crucial to programming and testing database applications.",2005,0, 8134,Robustness with respect to error specifications,"Summary form only given. Formal specifications used in automatic verification typically describe the desired behavior of a system only in absence of environment failures. That is, specifications are often of the form A -> G, where A is an assumption on the environment and G is the guarantee the system should provide. This approach leaves the behavior of the system unspecified when A is not fulfilled and neither verification tools nor synthesis tools take such behavior into account. In practice, however, the environment may fail, due to incomplete specifications, operator errors, faulty implementations, transmission errors, and the like. Thus, a system should not only be correct, it should also be robust, meaning that it ""behaves 'reasonably' even in circumstances that were not anticipated in the requirements specification."" In this talk we present a formal notion of robustness through graceful degradation for discrete functional safety properties: A small error by the environment should induce only a small error by the system, where the error is defined quantitatively as part of the specification, for instance, as the number of failures. Given such an error specification, we define a system to be robust if a finite environment error induces only a finite system error.
As a more fine-grained measure of robustness, we define the notion of k-robustness, meaning that on average, the number of system failures is at most k times larger than the number of environment failures. We show that the synthesis question for robust systems can be solved in polynomial time as a one-pair Streett game and that the synthesis question for k-robust systems can be solved using ratio games. Ratio games are a novel type of graph games in which edges are labeled with a cost for each player, and the aim is to minimize the ratio of the sum o",2010,0, 8135,Fast and flexible persistence: the magic potion for fault-tolerance, scalability and performance in online data stores,"Summary form only given. We examine the architecture of computer systems designed to update, integrate and serve enterprise information. Our analysis of their scale, performance, availability and data integrity draws attention to the so-called 'storage gap'. We propose 'persistent memory' as the technology to bridge that gap. Its impact is demonstrated using practical business information processing scenarios. We conclude with a discussion of our prototype, the results achieved, and the challenges that lie ahead.",2004,0, 8136,A simple and efficient fault tolerance mechanism for divide-and-conquer systems,"Summary form only given. We study if fault tolerance can be made simpler and more efficient by exploiting the structure of the application. More specifically, we study divide-and-conquer parallelism, which is a popular and effective paradigm for writing parallel Grid applications. We have designed a novel fault tolerance mechanism for divide-and-conquer applications that reduces the amount of redundant computation by storing results of the discarded jobs in a global (replicated) table. These results can later be reused, thereby minimizing the amount of work lost as a result of a crash. The execution time overhead of our mechanism is close to zero. Our mechanism can handle crashes of multiple processors or entire clusters at the same time. It can also handle crashes of the root node that initially started the parallel computation. We have incorporated our fault tolerance mechanism in Satin, which is a Java-based divide-and-conquer system. Satin is implemented on top of the Ibis communication library. The core of Ibis is implemented in pure Java, without using any native libraries. The Satin runtime system and our fault tolerance extension also are written entirely in Java. The resulting system therefore is highly portable, allowing the software to run unmodified on a heterogeneous Grid. We evaluated the performance of our fault tolerance scheme on a cluster of the Distributed ASCI Supercomputer 2 (DAS-2). In the first part of our tests, we show that the execution time overhead of our mechanism is close to zero. The results of the second part of our tests show that our algorithm salvages most of the work done by alive processors. Finally, we carried out tests on the European GridLab testbed. We ran one of our applications on a set of six heterogeneous parallel machines (four different operating systems, four different architectures) located in four different European countries. After manually killing one of the sites, the program recovered and finished normally.",2004,0, 8137,Fault-tolerant scheduling policy for grid computing systems,"Summary form only given.
With the momentum gaining for grid computing systems, the issue of deploying support for integrated scheduling and fault-tolerant approaches becomes of paramount importance. Unfortunately, fault-tolerance has not been factored into the design of most existing grid scheduling strategies. To this end, we propose a fault-tolerant scheduling policy that loosely couples job scheduling with a job replication scheme such that jobs are efficiently and reliably executed. Performance evaluation of the proposed fault-tolerant scheduler against a nonfault-tolerant scheduling policy is presented, showing that the proposed policy performs reasonably in the presence of various types of failures.",2004,0, 8138,Intelligent monitoring and fault tolerance in large-scale distributed systems,"Summary form only. Electronic devices are starting to become widely available for monitoring and controlling large-scale distributed systems. These devices may include sensing capabilities for online measurement, actuators for controlling certain variables, microprocessors for processing information and making realtime decisions based on designed algorithms, and telecommunication units for exchanging information with other electronic devices or possibly with human operators. A collection of such devices may be referred to as a networked intelligent agent system. Such systems have the capability to generate a huge volume of spatial-temporal data that can be used for monitoring and control applications of large-scale distributed systems. One of the most important research challenges in the years ahead is the development of information processing methodologies that can be used to extract meaning and knowledge out of the ever-increasing electronic information that will become available. Even more important is the capability to utilize the information that is being produced to design software and devices that operate seamlessly, autonomously and reliably in some intelligent manner. The ultimate objective is to design networked intelligent agent systems that can make appropriate real-time decisions in the management of large-scale distributed systems, while also providing useful high-level information to human operators. One of the most important classes of large-scale distributed systems deals with the reliable operation and intelligent management of critical infrastructures, such as electric power systems, telecommunication networks, water systems, and transportation systems. The design, control and fault monitoring of critical infrastructure systems is becoming increasingly more challenging as their size, complexity and interactions are steadily growing. Moreover, these critical infrastructures are susceptible to natural disasters, frequent failures, as well as malicious attacks. There is a need to develop a common system-theoretic fault diagnostic framework for critical infrastructure systems and to design architectures and algorithms for intelligent monitoring, control and security of such systems. The goal of this presentation is to motivate the need for health monitoring, fault diagnosis and security of critical infrastructure systems and to provide a fault diagnosis methodology for detecting, isolating and accommodating both abrupt and incipient faults in a class of complex nonlinear dynamic systems. A detection and approximation estimator based on computational intelligence techniques is used for online health monitoring.
Various adaptive approximation techniques and learning algorithms will be presented and illustrated, and directions for future research will be discussed.",2010,0, 8139,"A Case Study of Bias in Bug-Fix Datasets","Software quality researchers build software quality models by recovering traceability links between bug reports in issue tracking repositories and source code files. However, all too often the data stored in issue tracking repositories is not explicitly tagged or linked to source code. Researchers have to resort to heuristics to tag the data (e.g., to determine if an issue is a bug report or a work item), or to link a piece of code to a particular issue or bug. Recent studies by Bird et al. and by Antoniol et al. suggest that software models based on imperfect datasets with missing links to the code and incorrect tagging of issues exhibit biases that compromise the validity and generality of the quality models built on top of the datasets. In this study, we verify the effects of such biases for a commercial project that enforces strict development guidelines and rules on the quality of the data in its issue tracking repository. Our results show that even in such a perfect setting, with a near-ideal dataset, biases do exist, leading us to conjecture that biases are more likely a symptom of the underlying software development process rather than being due to the heuristics used.",2010,0, 8140,Methodology for Reliability Evaluation of N-Version Programming Software Fault Tolerance System,"Software reliability can be improved by tolerating software faults, such as by using the N-version programming technique. Reliability evaluation is focused on the modeling and analysis techniques for fault prediction purposes. In this paper, a straightforward analysis method for evaluating the reliability of a software system established by N-version programming is proposed. The dependent failure parameters are assumed to be random variables instead of constants. A case study is presented of the analysis of failure data from two software projects; the effectiveness of the proposed evaluation methodology is demonstrated.",2008,0, 8141,Considering fault removal efficiency in software reliability assessment,"Software reliability growth models (SRGMs) have been developed to estimate software reliability measures such as the number of remaining faults, software failure rate, and software reliability. Issues such as imperfect debugging and the learning phenomenon of developers have been considered in these models. However, most SRGMs assume that faults detected during tests will eventually be removed. Consideration of fault removal efficiency in the existing models is limited. In practice, fault removal efficiency is usually imperfect. This paper aims to incorporate fault removal efficiency into software reliability assessment. Fault removal efficiency is a useful metric in software development practice and it helps developers to evaluate the debugging effectiveness and estimate the additional workload. In this paper, imperfect debugging is considered in the sense that new faults can be introduced into the software during debugging and the detected faults may not be removed completely. A model is proposed to integrate fault removal efficiency, failure rate, and fault introduction rate into software reliability assessment. In addition to traditional reliability measures, the proposed model can provide some useful metrics to help the development team make better decisions.
Software testing data collected from real applications are utilized to illustrate both the descriptive and predictive power of the proposed model. The expected number of residual faults and the software failure rate are also presented.",2003,0, 8142,A Hybrid Fault Tolerance Method for Recovery Block with a Weak Acceptance Test,"Software reliability represents a major requirement for safety critical applications. Several fault tolerance methods have been proposed to improve software reliability. These methods are based either on fault masking, such as N-version programming, or on fault detection, such as in the recovery block method. The success of the recovery block method depends on the quality of an effective acceptance test, which is sometimes very difficult to achieve. In this paper, we propose a hybrid fault tolerance method called recovery block with backup voting to improve the reliability of the normal recovery block in the case of a weak acceptance test. In the proposed method, a copy of the outcome of each version is stored in a cache memory as backup, and when the recovery block method fails to produce a correct output due to a weak acceptance test, the stored values are used as inputs to a voting method to produce the correct output. A Monte Carlo based simulation method is used to show the reliability improvement in the newly proposed hybrid method as well as to show the decreased dependency of the new method on the quality of the acceptance test, which makes the new method more suitable for critical applications where the construction of an effective acceptance test is difficult.",2008,0, 8143,Towards a Bayesian Approach in Modeling the Disclosure of Unique Security Faults in Open Source Projects,"Software security has both an objective and a subjective component. A lot of the information available about that today is focused on security vulnerabilities and their disclosure. It is less frequent that security breaches and failure rates are reported, even in open source projects. Disclosure of security problems can take several forms. A disclosure can be accompanied by a release of the fix for the problem, or not. The latter category can be further divided into voluntary and involuntary security issues. In widely used software there is also considerable variability in the operational profile under which the software is used. This profile is further modified by attacks on the software that may be triggered by security disclosures. Therefore a comprehensive model of the software security qualities of a product needs to incorporate both objective measures, such as security problem disclosure, repair, and failure rates, as well as less objective metrics such as implied variability in the operational profile, influence of attacks, and subjective impressions of exposure and severity of the problems, etc. We show how a classical Bayesian model can be adapted for use in the security context. The model is discussed and assessed using data from three open source software projects.
Our results show that the model is suitable for use with a certain subset of disclosed security faults, but that additional work will be needed to identify appropriate shape and scaling functions that would accurately reflect end-user perceptions associated with security problems.",2010,0, 8144,Predicting Fault Proneness of Classes Through a Multiobjective Particle Swarm Optimization Algorithm,"Software testing is a fundamental software engineering activity for quality assurance that is also traditionally very expensive. To reduce the effort of testing strategies, some design metrics have been used to predict the fault-proneness of a software class or module. Recent works have explored the use of machine learning (ML) techniques for fault prediction. However, most ML techniques used cannot deal with unbalanced data, and their results are usually difficult to interpret. Because of this, this paper introduces a multi-objective particle swarm optimization (MOPSO) algorithm for fault prediction. It allows the creation of classifiers composed of rules with specific properties by exploring Pareto dominance concepts. These rules are more intuitive and easier to understand because they can be interpreted independently of one another. Furthermore, an experiment using the approach is presented and the results are compared to the other techniques explored in the area.",2008,0, 8145,Research on application of software defect analysis based on PCA,"Software defects are an important factor influencing software quality. In this paper, we apply principal component analysis (PCA) to analyze the factors which lead to defects, and determine the important ones. During software project development, we make use of the analysis results to improve the development process. The results show that our method can avoid software defects effectively, and improve software quality.",2010,0, 8146,Estimation of software defects fix effort using neural networks,"Software defect fix effort is an important software development process metric that plays a critical role in software quality assurance. People usually apply parametric effort estimation techniques using historical lines of code and function point data to estimate the effort of defect fixes. However, these techniques are neither efficient nor effective for a new, different kind of project, where code will be written within the context of a different project or organization. In this paper, we present a solution for estimating software defect fix effort using self-organizing neural networks.",2004,0, 8147,On the Trend of Remaining Software Defect Estimation,"Software defects play a key role in software reliability, and the number of remaining defects is one of the most important software reliability indexes. Observing the trend of the number of remaining defects during the testing process can provide very useful information on the software reliability. However, the number of remaining defects is not known and has to be estimated. Therefore, it is important to study the trend of the remaining software defect estimation (RSDE). In this paper, the concept of RSDE curves is proposed. An RSDE curve describes the dynamic behavior of RSDE as software testing proceeds. Generally, RSDE changes over time and displays two typical patterns: 1) single mode and 2) multiple modes.
This behavior is due to the different characteristics of the testing process, i.e., testing under a single testing profile or multiple testing profiles with various change points. By studying the trend of the estimated number of remaining software defects, RSDE curves can provide further insights into the software testing process. In particular, in this paper, the Goel-Okumoto model is used to estimate this number on actual software failure data, and some properties of RSDE are derived. In addition, we discuss some theoretical and application issues of the RSDE curves. The concept of the proposed RSDE curves is independent of the selected model. The methods and development discussed in this paper can be applied to any valid estimation model to develop and study its corresponding RSDE curve. Finally, we discuss several possible areas for future research.",2008,0, 8148,Fault monitoring and detection of distributed services over local and wide area networks,"Software development has evolved to incorporate the reusability of software components, enabling developers to focus on the requirements analysis without having to fully develop every component. Existing components that provide a given functionality can be reused by various applications. A parallel development has been the availability, through the World Wide Web, of data, transactions, and communications. These developments have led to the emergence of Web-services, collections of reusable code that use the Web communications paradigm for wider availability and communications between applications. In this context, with services coming and going, as well as possibly crashing, the issue of self-healing is of great relevance. How does an application learn that a remote service has become unavailable? In this paper we consider the issue of service failure detection and replacement, paying special attention to the relationship between the time it takes to find a replacement for a service, and the frequency of failure monitoring by the application",2006,0, 8149,TODO or to bug,"Software development is a highly collaborative activity that requires teams of developers to continually manage and coordinate their programming tasks. In this paper, we describe an empirical study that explored how task annotations embedded within the source code play a role in how software developers manage personal and team tasks. We present findings gathered by combining results from a survey of professional software developers, an analysis of code from open source projects, and interviews with software developers. Our findings help us describe how task annotations can be used to support a variety of activities fundamental to articulation work within software development. We describe how task management is negotiated between the more formal issue tracking systems and the informal annotations that programmers write within their source code. We report that annotations have different meanings and are dependent on individual, team and community use. We also present a number of issues related to managing annotations, which may have negative implications for maintenance. We conclude with insights into how these findings could be used to improve tool support and software process.",2008,0, 8150,A Fault Tree Analysis Based Software System Reliability Allocation Using Genetic Algorithm Optimization,"Software fault tree analysis is first adopted to establish the lower bound data (LBD) of individual modules in a software system. 
Because both the internal relations within the system and the practical requirements enforced on all functional modules are formulated while analyzing software faults, the LBD assigned using FTA are more reasonable than those assigned using traditional AHP. The LBD are then utilized to establish a nonlinear programming model for software-utility-oriented module reliability allocation optimization. Finally, the general framework of a simple genetic algorithm is implemented, and a linear programming prototype corresponding to the problem is simulated as a special case. Since the proposed algorithm combines the merit of determining module reliability LBD using software fault tree analysis with the global searching ability of the genetic algorithm, the reliability data assigned to the respective modules can ensure reliable operation as well as enhance the utility of the software system to the fullest extent.",2009,0, 8151,Has the bug really been fixed?,"Software has bugs, and fixing those bugs pervades the software engineering process. It is folklore that bug fixes are often buggy themselves, resulting in bad fixes, either failing to fix a bug or creating new bugs. To confirm this folklore, we explored bug databases of the Ant, AspectJ, and Rhino projects, and found that bad fixes comprise as much as 9% of all bugs. Thus, detecting and correcting bad fixes is important for improving the quality and reliability of software. However, no prior work has systematically considered this bad fix problem, which this paper introduces and formalizes. In particular, the paper formalizes two criteria to determine whether a fix resolves a bug: coverage and disruption. The coverage of a fix measures the extent to which the fix correctly handles all inputs that may trigger a bug, while disruption measures the deviations from the program's intended behavior after the application of a fix. This paper also introduces a novel notion of distance-bounded weakest precondition as the basis for the developed practical techniques to compute the coverage and disruption of a fix. To validate our approach, we implemented Fixation, a prototype that automatically detects bad fixes for Java programs. When it detects a bad fix, Fixation returns an input that still triggers the bug or reports a newly introduced bug. Programmers can then use that bug-triggering input to refine or reformulate their fix. We manually extracted fixes drawn from real-world projects and evaluated Fixation against them: Fixation successfully detected the extracted bad fixes.",2010,0, 8152,Automated software diversity for hardware fault detection,"Software in dependable systems must be able to tolerate or detect faults in the underlying infrastructure, such as the hardware. This paper presents a cost-efficient automated method by which register faults in the microprocessor can be detected during execution. This is done by using compiler options to generate diverse binaries. The efficacy of this approach has been analyzed with the help of a CPU emulator, which was modified specifically for this purpose. The promising results show that, by using this approach, it is possible to automatically detect the vast majority of the injected register faults.
In our simulations, two diverse versions have, despite experiencing the same fault during execution, never delivered the same incorrect result, so we could detect all injected faults.",2009,0, 8153,Investigating the impact of reading techniques on the accuracy of different defect content estimation techniques,"Software inspections have established an impressive track record for early defect detection and correction. To increase their benefits, recent research efforts have focused on two different areas: systematic reading techniques and defect content estimation techniques. While reading techniques are intended to provide guidance for inspection participants on how to scrutinize a software artifact in a systematic manner, defect content estimation techniques aim at controlling and evaluating the inspection process by providing an estimate of the total number of defects in an inspected document. Although several empirical studies have been conducted to evaluate the accuracy of defect content estimation techniques, only a few consider the reading approach as an influential factor. The authors examine the impact of two specific reading techniques, a scenario-based reading technique and checklist-based reading, on the accuracy of different defect content estimation techniques. The examination is based on data that were collected in a large experiment with students of the Vienna University of Technology. The results suggest that the choice of the reading technique has little impact on the accuracy of defect content estimation techniques. Although more empirical work is necessary to corroborate this finding, it implies that practitioners can use defect content estimation techniques without any consideration of their current reading technique",2001,0, 8154,Isolation of software defects: extracting knowledge with confidence,"Software maintenance is one of the most time- and effort-consuming phases of the software development life cycle. Maintenance managers and personnel look for methods and tools supporting the scheduling and performance of different software maintenance tasks. To make such tools relevant, they should provide the maintainer/manager with some quantitative input useful for purposes of interpretation and understanding what factors influence maintenance efforts and activities. In this paper, a comprehensive multi-model prediction system is proposed. It dwells on evidence theory and a number of rule-based models independently developed using different methods. Application of evidence theory leads to the determination of confidence levels for the generated rules as well as the obtained predictions. The study is concerned with the effort needed for the removal of defects in existing software maintenance data. A multi-model prediction system is developed and prediction results are analyzed.",2005,0, 8155,On Hardware Resource Consumption for Aspect-Oriented Implementation of Fault Tolerance,"Software-implemented fault tolerance is a widely used technique for achieving high dependability in cost-sensitive applications. One approach to implementing fault tolerance in software is to use aspect-oriented programming (AOP). This paper investigates the hardware overhead imposed by software mechanisms for time-redundant execution and control flow checking implemented by using AOP. The impacts on static and dynamic memory consumption as well as execution time are measured. The overheads caused by using AOP were shown to be an issue.
However, two optimizations to the AOP language weaver that reduce this overhead were identified. Using these optimizations, the overhead was reduced to acceptable or even beneficial levels compared to using standard C.",2010,0, 8156,Design of IEC Portable Flickermeter with Error Correction,"Some improvement methods are proposed concerning existing problems of the flickermeter. Firstly, the system error of the flickermeter is tested by simulating voltage fluctuations with the different frequencies provided by IEC. The error expression is obtained by polynomial fit. The instantaneous flicker sensation level S(t) is corrected by means of the correction curve. Then, by calling the function module of Matlab in LabVIEW, the parameters of the digital filter can vary according to different sampling frequencies. Finally, the results of a simulation experiment in LabVIEW show that the improved methods for the design of the flickermeter are feasible. The experimental results from the Real Time Digital Simulator (RTDS) also indicate that the improved flickermeter possesses characteristics such as flexibility of sampling frequency, high accuracy of measurement, etc.",2010,0, 8157,Evaluation of Register-Level Protection Techniques for the Advanced Encryption Standard by Multi-Level Fault Injections,"Some protection techniques have been previously proposed for encryption blocks and applied to an AES encryption IP described at RT Level. One of these techniques has been validated by purely functional fault injections (i.e. algorithmic-level fault injections) against single- and multiple-bit errors. RT-Level fault injections have been performed recently on a few AES IPs and this paper summarizes the main results obtained, highlighting the new results and comparing the outcomes of the two fault injection levels.",2007,0, 8158,Field Inhomogeneity Correction Based on Gridding Reconstruction for Magnetic Resonance Imaging,"Spatial variations of the main field give rise to artifacts in magnetic resonance images if disregarded in reconstruction. With non-Cartesian k-space sampling, they often lead to unacceptable blurring. Data from such acquisitions are usually reconstructed with gridding methods and optionally restored with various correction methods. Both types of methods essentially face the same basic problem of adequately approximating an exponential function to enable efficient processing with fast Fourier transforms. Nevertheless, they have commonly addressed it differently so far. In the present work, a unified approach is pursued. The principle behind gridding methods is first generalized to nonequispaced sampling in both domains and then applied to field inhomogeneity correction. Three new algorithms, which are compatible with a direct conjugate phase and an iterative algebraic reconstruction, are derived in this way from a straightforward embedding of the data into a higher dimensional space. Their evaluation in simulations and phantom experiments with spiral k-space sampling shows that one of them promises to provide a favorable compromise between fidelity and complexity compared with existing algorithms. Moreover, it allows a simple choice of key parameters involved in approximating an exponential function and a balance between the accuracy of reconstruction and correction",2007,0, 8159,On the Accuracy of Spectrum-based Fault Localization,"Spectrum-based fault localization shortens the test-diagnose-repair cycle by reducing the debugging effort.
As a lightweight automated diagnosis technique it can easily be integrated with existing testing schemes. However, as no model of the system is taken into account, its diagnostic accuracy is inherently limited. Using the Siemens Set benchmark, we investigate this diagnostic accuracy as a function of several parameters (such as quality and quantity of the program spectra collected during the execution of the system), some of which directly relate to test design. Our results indicate that the superior performance of a particular similarity coefficient, used to analyze the program spectra, is largely independent of test design. Furthermore, near-optimal diagnostic accuracy (exonerating about 80% of the blocks of code on average) is already obtained for low-quality error observations and limited numbers of test cases. The influence of the number of test cases is of primary importance for continuous (embedded) processing applications, where only limited observation horizons can be maintained.",2007,0, 8160,Fault detection for uncertain sampled-data systems,"Studies the fault detection (FD) problem for uncertain sampled-data systems. A direct design approach for the discrete-time FD system is presented, which takes the intersample behavior into account. The resulting FD system achieves the best compromise between sensitivity to the faults and robustness to the unknown disturbances and model uncertainty.",2002,0, 8161,Identification of test process improvements by combining fault trigger classification and faults-slip-through measurement,"Successful software process improvement depends on the ability to analyze past projects and determine which parts of the process could become more efficient. One source for such an analysis is the faults that are reported during development. This paper proposes how a combination of two existing techniques for fault analysis can be used to identify where in the test process improvements are needed, i.e. to pinpoint which activities in which phases should be improved. This was achieved by classifying faults according to which test activities triggered them and in which phase each fault should have been found, i.e. through a combination of orthogonal defect classification (ODC) and faults-slip-through measurement. As part of the method, the paper proposes a refined classification scheme due to problems identified when trying to apply ODC classification schemes in practice. The feasibility of the proposed method was demonstrated by applying it to an industrial software development project at Ericsson AB. The obtained measures resulted in a set of quantified and prioritized improvement areas to address in consecutive projects.",2005,0, 8162,Using functional view of software to increase performance in defect testing,"Summary form only given, as follows. Software at its primitive level may be viewed as a function or mapping, according to some specification, from a set of inputs to their related outputs. In this view the system is considered as a 'black box' whose behavior can only be determined by analyzing its inputs and outputs. Within this domain, higher performance in defect testing can be achieved by using techniques that can be executed within resource limitations, thus understanding the software requirements adequately enough to expose the majority of errors in system function, performance and behavior.
The paper uncovers some innovative input and test reduction approaches considering the functional view of software, leading to efficient and increased fault detection. Keywords: functional view of software, black box testing, test reduction, domain to range ratio, input output analysis, testability, equivalence partitioning.",2002,0, 8163,A fault tolerant protocol for massively parallel systems,"Summary form only given. As parallel machines grow larger, the mean time between failures shrinks. With the planned machines of the near future, therefore, fault tolerance will become an important issue. The traditional method of dealing with faults is to checkpoint the entire application periodically and to restart from the last checkpoint. However, such a strategy wastes resources by requiring all the processors to revert to an earlier state, whereas only one processor has lost its current state. We present a scheme for fault tolerance that aims at low overhead on the forward path (i.e. when there are no failures) and a fast recovery from faults, without wasting computation done by processors that have not faulted. The scheme does not require any individual component to be fault-free. We present the basic scheme and performance data on small clusters. Since it is based on Charm++ and Adaptive MPI, where each processor houses several virtual processors, the scheme has the potential to reduce fault recovery time significantly, by migrating the recovering virtual processors.",2004,0, 8164,Error-tolerance,"Summary form only given. Because of trends in scaling, in the near future every high performance die can contain a massive number of defects and process-aggravated noise and performance problems. In an attempt to obtain useful yields, designers and test engineers need to adopt a qualitatively different approach to their work. They need to learn, enhance and deploy techniques such as fault- and defect-tolerance. For some applications, they may even apply error-tolerance, a somewhat controversial emerging paradigm. A circuit is error-tolerant (ET) with respect to an application, if (1) it contains defects that cause internal and may cause external errors, and (2) the system that incorporates this circuit produces acceptable results. In this presentation we illustrate and give quantitative bounds on several factors that shape the future of digital design. We compare and contrast defect- and fault-tolerant schemes with error-tolerance. We discuss how yield can be optimized by appropriately selecting the granularity of spares in light of defect densities and interconnect complexity. Finally, we show that several large classes of consumer electronic applications are resilient to errors, and how error-tolerance can then be used to significantly enhance effective yield.",2004,0, 8165,An Improving Fault Detection Mechanism in Service-Oriented Applications Based on Queuing Theory,"SOA has become more and more popular, but fault tolerance is not fully supported in most existing SOA frameworks and solutions provided by various major software companies. For SOA implementations with a large number of users, services, or traffic, maintaining the necessary performance levels of applications integrated using an ESB presents a substantial challenge, both to the architects who design the infrastructure and to the IT professionals who are responsible for administration. In this paper, we improve the performance model for analyzing and detecting faults based on queuing theory.
The performance of the services of SOA applications is measured in two categories (individual services and composite services). We improve the model for individual services and add performance measurement for composite services.",2008,0, 8166,Reliability Analysis of Embedded Applications in Non-Uniform Fault Tolerant Processors,"Soft error analysis has been greatly aided by the concepts of Architectural Vulnerability Factor (AVF) and Architecturally Correct Execution (ACE). The AVF of a processor is defined as the probability that a bit flip in the processor architecture will result in a visible error in the final output of a program. In this work, we exploit the techniques of AVF analysis to introduce a software-level vulnerability analysis. This metric allows insight into the vulnerability of instructions and software to hardware faults, using a fault injection method involving the micro-architecture. The proposed metric can be used to make judgments about the reliability of different programs on different processors, with regard to architectural and compiler guidelines for improving processor reliability.",2010,0, 8167,Soft error considerations for computer web servers,"Soft errors are caused by cosmic rays striking sensitive regions in electronic devices. Termed single event upset (SEU), in the past this phenomenon mostly affected high altitude systems or avionics. The small geometries of today's nanodevices and their use in high-density and high-complexity designs make electronic systems sensitive even to ground-level radiation. Therefore, large computer systems like workstations or computer web servers have become major victims of single event upsets. Given that the idea of cloud computing is an unavoidable trend for the next generation internet, which might involve almost every company in the IT industry, the urgency and criticality of reliability rise higher than ever. This paper illustrates how soft errors are a reliability concern for computer servers. The soft error reduction techniques that are significant for the IT industry are summarized, and a possible soft error rate (SER) reduction method that considers the cosmic ray striking angle to redesign the circuit board layout is proposed.",2010,0, 8168,Combinational logic soft error analysis and protection,"Soft errors in combinational logic are increasingly contributing to the systems' failure rate and need to be addressed to ensure dependable operation of an IC. This paper presents, first, a new method for soft error analysis of combinational logic; second, a method for increasing the robustness of combinational logic; and third, a software tool implementation for performing these operations on Verilog netlists in an automated way with minimum impact on performance. It is shown on ISCAS '85 benchmarks that it is possible to reduce the soft error sensitivity by more than 60% at the cost of 20% in area with a design solution using only standard library cells. Further reduction in area cost is possible when applying the proposed method to the internals of standard library cells. In contrast to transistor sizing approaches, the proposed method benefits from the smaller feature sizes of newer IC process technologies",2006,0, 8169,Data mining based on improved neural network and its application in fault diagnosis of steam turbine,"The steam turbine is an important piece of equipment in industry, especially in the electric power industry.
Because of the complexity of the steam turbine and the particularity of its running environment, its fault rate is high and the resulting harm is serious, so fault diagnosis of the steam turbine is a difficult problem. A novel approach for fault diagnosis of the steam turbine based on an improved neural network is brought forward, aimed at overcoming the shortcomings of some current knowledge acquisition methods. An application of artificial neural network methodology was investigated using experimental data. A multilayer backpropagation neural network with two hidden layers, with the hyperbolic tangent as the activation function, and the target function were studied. Neuro-fuzzy systems were also applied. Based on the ontology of the neural network, a data mining algorithm for classified fault diagnosis rules for the steam turbine is brought forward; its realization process is as follows: (1) computing the measurement matrix of effect; (2) extracting rules; (3) computing the importance of rules; (4) pruning the rules by a genetic algorithm. An experimental system for data mining and fault diagnosis of the steam turbine based on the neural network is implemented. Its diagnosis precision is 84%. Experiments prove that it is feasible to use the method to develop a system for fault diagnosis of the steam turbine, which is valuable for further, more in-depth study.",2008,0, 8170,Modular bug detection with inertial refinement,"Structural abstraction/refinement (SAR) holds promise for scalable bug detection in software since the abstraction is inexpensive to compute and refinement employs pre-computed procedure summaries. The refinement step is key to the scalability of an SAR technique: efficient refinement should avoid exploring program regions irrelevant to the property being checked. However, the current refinement techniques, guided by the counterexamples obtained from constraint solvers, have little or no control over the program regions explored during refinement. This paper presents inertial refinement (IR), a new refinement strategy which overcomes this drawback, by resisting the exploration of new program regions during refinement: new program regions are incrementally analyzed only when no error witness is realizable in the current regions. The IR procedure is implemented as part of a generalized SAR method in the F-Soft verification framework for C programs. Experimental comparison with a previous state-of-the-art refinement method shows that IR explores fewer program regions to detect bugs, leading to faster bug detection.",2010,0, 8171,Analysis of spreadsheet errors made by computer literacy students,"Spreadsheets have become a routine application in most organizations and universities. As a consequence, students are required to learn spreadsheet applications such as Microsoft Excel. The learning of spreadsheets is often accompanied by problems related to spreadsheet applications and their mathematical content. The EXITS (Excel intelligent tutoring system) research project aims to develop a Microsoft Excel tutor that helps students or learners to overcome their learning difficulties. In this paper, we analyse and classify spreadsheet errors made by students in order to determine the function that our system should perform and to generate an error library for student modelling purposes.",2004,0, 8172,A least error squares method for locating fault on coupled double-circuit HV transmission line using one terminal data,"The arcing fault has a high probability of occurrence on an HV transmission line.
To date, the transient resistance has always been assumed to be a linear constant resistance in most locating algorithms. On the basis of previous achievements in arc modeling, the paper describes an ideal voltage-current transform characteristic relationship and its equivalent circuit representation for a long arc, and then presents a novel short time-window method for locating arcing faults on a coupled double-circuit HV transmission line using one-terminal voltage and current samples. The proposed method has the following advantages: (1) arcing discharge is taken into account; (2) CT saturation is considered for a short time-window, which can be chosen from the initial time of a fault, less than half a period; and (3) the far-terminal equivalent impedance is not required. Moreover, the requirement of high accuracy in fault location is ensured by using the least error squares method. Extensive digital simulations show that the scheme is feasible. The method has been applied to the Kunming electric network.",2002,0, 8173,A novel concept for a fault current limiter,"The AREVA T&D Technology Centre has proposed a novel type of fault current limiting reactor based on a laminated iron C-core with a demagnetised magnet in its airgap. The fault current limiter will automatically increase its reactance during a fault in a power system to which it is connected. In this paper, the principle used to change the magnetic nature of the device will be explained in detail. This paper will also describe the design of the fault current limiter and demonstrate its fault current limiting performance through a case study.",2006,0, 8174,Fault-tolerant drive-by-wire systems,"The article begins with a review of electronic driver assisting systems such as ABS, traction control, electronic stability control, and brake assistant. We then review drive-by-wire systems with and without mechanical backup. Drive-by-wire systems consist of an operating unit with an electrical output, haptic feedback to the driver, bus systems, microcomputers, power electronics, and electrical actuators. For their design, safety integrity methods such as reliability, fault tree and hazard analysis, and risk classification are required. Different fault-tolerance principles with various forms of redundancy are considered, resulting in fail-operational, fail-silent, and fail-safe systems. Fault-detection methods are discussed for use in low-cost components, followed by a review of principles for fault-tolerant design of sensors, actuators, and communication. We evaluate these methods and principles and show how they can be applied to low-cost automotive components and drive-by-wire systems. A brake-by-wire system with electronic pedal and electric brakes is then considered in more detail, showing the design of the components and the overall architecture. Finally, we present conclusions and an outlook for further development of drive-by-wire systems.",2002,0, 8175,Filter Hardware Cost Reduction by Means of Error Feedback,"The article presents an uncommon application of the error-feedback-improved IIR filter. A simple method to reduce the hardware cost (silicon area) of the biquadratic section implementation by means of error feedback (EF) is described. The optimization method utilizes the fact that a filter with EF is more resistant to roundoff noise than a filter without it. An iterative method is used to reduce the occupied silicon area. First, the standard IIR filter is designed with the requested quantization properties.
Then the EF-improved biquadratic section is designed to attain the same roundoff noise properties. The occupied silicon areas of both solutions are then compared. Although the implementation of EF results in more arithmetic components and more complex filter control, the resulting structure attaining the same quantization noise is smaller under defined circumstances (a filter with poles close to the unit circle). Results show it is possible to save up to 22% of the occupied silicon area. Our findings are valid for FPGA as well as ASIC implementations of IIR filters. Our method has the advantage of using a standard and already verified filtering IP core, which reduces design time.",2007,0, 8176,Employing a fuzzy logic based method to the fault diagnosis of analog parts of electronic embedded systems,"The article presents a new approach to employing fuzzy logic for fault detection and localisation in analog parts of electronic mixed-signal embedded systems. The elaborated diagnostic method requires very few external components and utilizes the resources of the microcontroller that controls the embedded system. The paper additionally introduces the way of creating the fault dictionary, characterises the main parameters of the fuzzy fault detection and localisation models, and describes the manner of operation of the fuzzy soft decision processor.",2007,0, 8177,Fault Detection in Distributed Climate Sensor Networks Using Dynamic Bayesian Networks,"The Atmospheric Radiation Measurement (ARM) program operated by the U.S. Department of Energy is one of the largest climate research programs dedicated to the collection of long-term continuous measurements of cloud properties and other key components of the earth's climate system. Given the critical role that collected ARM data plays in the analysis of atmospheric processes and conditions and in the enhancement and evaluation of global climate models, the production and distribution of high-quality data is one of ARM's primary mission objectives. Fault detection in ARM's distributed sensor network is one critical ingredient towards maintaining high quality and useful data. We are modeling ARM's distributed sensor network as a dynamic Bayesian network where key measurements are mapped to Bayesian network variables. We then define the conditional dependencies between variables by discovering highly correlated variable pairs from historical data. The resultant dynamic Bayesian network provides an automated approach to identifying whether certain sensors are malfunctioning or failing in the distributed sensor network. A potential fault or failure is detected when an observed measurement is not consistent with its expected measurement and the observed measurements of other related sensors in the Bayesian network. We present some of our experiences and promising results with the fault detection dynamic Bayesian network.",2010,0, 8178,How to predict software defect density during proposal phase,"The author has developed a method to predict defect density based on empirical data. The author has evaluated the software development practices of 45 software organizations. Of those, 17 had complete actual observed defect density data corresponding to the observed development practices. The author presents the correlation between these practices and defect density in this paper.
This correlation can be, and is, used to: (a) predict defect density as early as the proposal phase, (b) evaluate proposals from subcontractors, and (c) perform tradeoffs so as to minimize software defect density. It is found that as practices improve, defect density decreases. Contrary to what many software engineers claim, the probability of a late delivery is lower on average for organizations with better practices. Furthermore, the margin of error in the event that a schedule is missed was smaller on average for organizations with better practices. It is also interesting that the average number of corrective action releases required is smaller for the organizations with the best practices. This means less downtime for customers. It is not surprising that the average SEI CMM level is higher for the organizations with the better practices",2000,0, 8179,Diagnosis of induction machine rotor defects from an approach of magnetically coupled multiple circuits,"The authors develop squirrel cage induction machine models for the diagnosis of defects from an approach of magnetically coupled multiple circuits. The generalized models are established on the basis of mathematical recurrences. The calculation of machine inductances (with and without rotor defects) is carried out with MATLAB tools before the simulation begins in SIMULINK. The experimental tests were carried out on four 4 kW induction machines, especially manufactured for the needs of the diagnosis and presenting rotor defects. The simulated results and the experimental ones are presented to confirm the validity of the proposed models",2006,0, 8180,Faults Diagnosis by Parameter Identification of the Squirrel Cage Induction Machine,"The authors present fault diagnosis by parameter identification of the squirrel-cage rotor induction machine using real data. The model for electric parameter identification of the induction machine from the input-output observations of the stator is elaborated. To experimentally verify this approach, tests are carried out on four squirrel-cage rotor induction machines especially constructed for the purpose of the diagnosis. All the model parameters of the squirrel-cage rotor induction machine are identified by the least-squares method. Experimental results show good agreement and confirm the possibility of detecting and localizing the faults.",2007,0, 8181,Proximity correction of IC layouts using scanner fingerprints,"A precise physical description of the imaging system that was used to expose an OPC calibration test pattern is now available. This data is available from scanner manufacturers for the tool as built, and also through scanner self-metrology in the Fab at any time. This information reduces significant uncertainty when regressing a model used for OPC and allows the creation of more accurate models with better predictability. This paper explores the considerations necessary for best leveraging this data into the OPC model creation flow.",2007,0, 8182,Software implemented fault injection for safety-critical distributed systems by means of mobile agents,"The availability of inexpensive powerful microprocessors leads to the increasing deployment of such electronic devices in ever-new application areas.
Currently, the automotive industry is considering the replacement of mechanical or hydraulic implementations of safety-critical automotive systems (e.g., braking, steering) by electronic counterparts (so-called ""by-wire systems"") for safety, comfort, and cost reasons. In order to remain operational in the presence of faults, these kinds of systems are built as fault-tolerant distributed real-time systems consisting of interconnected control units. To assure the correct operation of the fault tolerance mechanisms, software-implemented fault injection provides low-cost and easy-to-control techniques to test the system under faulty conditions. In this paper we propose a distributed software-implemented fault injection framework based on the mobile agent approach. Software agents are designed to utilize the real-time system's global time and messages to trigger the fault injection experiments. We introduce a lightweight agent implementation language to model the fault injection and the concerned system resources, agent migration, and logging of the fault injection experiments.",2004,0, 8183,A New Loopback GSM/DCS Bit Error Rate Test Method On Baseband I/Q Outputs,"The Bit Error Rate (BER) evaluation is one of the key parameters to reach the QoS (Quality of Service) requirements of next-generation digital communication (3G and 4G). This paper presents a new loopback BER Test (BERT) method based on measurements on baseband IQ outputs. The interest of this method is that it allows quantifying, separately from the digital stage, the RF stage's influence on BER performance. This test principle has been validated by introducing 4 instruments in the loop plus a GSM/DCS monochip as the device under test. Unfortunately, at the moment, one of the instruments' capabilities limits the bit rate to one third of the demodulation bandwidth, i.e. 200 kHz. Replacing this instrument by a pure digital demodulator in the loop will eliminate this restriction in the future. Then any kind of baseband IQ BERT could be performed using this method.",2001,0, 8184,Brep model simplification for feature suppressing using local error evaluation,"CAD model simplification technology has received more and more attention due to the requirement of seamless CAD/CAE/CAM integration. This paper presents a new approach for feature suppressing of Brep models using local error evaluation. The improvement is that several features can be suppressed synchronously, and the time overhead of the initial step is reduced without having to recognize most of them.",2005,0, 8185,Impact of motion correction on parametric images in PET neuroreceptor studies,"The calculation of parametric images in PET studies of neuroreceptors is based on dynamic data which have been recorded over many minutes. It is essential that the subject's head remains unmoved during the PET scan, otherwise the data may become useless in the worst case. This work studies the degradation of parametric images caused by head movements and the improvements achieved by an appropriate motion correction. The head movements present in PET neuroreceptor studies cause artifacts in the calculation of parametric images. Whereas the activity images look blurred, the parametric images contain discontinuities especially at the cortex.
It is concluded that the linear regression is sensitive to the specific errors present in the dynamic images because of the movements",2004,0, 8186,An Adaptive Programming Model for Fault-Tolerant Distributed Computing,"The capability of dynamically adapting to distinct runtime conditions is an important issue when designing distributed systems where negotiated quality of service (QoS) cannot always be delivered between processes. Providing fault tolerance for such dynamic environments is a challenging task. Considering such a context, this paper proposes an adaptive programming model for fault-tolerant distributed computing, which provides upper-layer applications with process state information according to the current system synchrony (or QoS). The underlying system model is hybrid, composed of a synchronous part (where there are time bounds on processing speed and message delay) and an asynchronous part (where there is no time bound). However, such a composition can vary over time, and, in particular, the system may become totally asynchronous (e.g., when the underlying system QoS degrades) or totally synchronous. Moreover, processes are not required to share the same view of the system synchrony at a given time. To illustrate what can be done in this programming model and how to use it, the consensus problem is taken as a benchmark problem. This paper also presents an implementation of the model that relies on a negotiated quality of service (QoS) for communication channels",2007,0, 8187,Fast algorithm for computing the roots of error locator polynomials up to degree 11 in Reed-Solomon decoders,"The central problem in the implementation of a Reed-Solomon code is finding the roots of the error locator polynomial. In 1967, Berlekamp et al. found an algorithm for finding the roots of an affine polynomial in GF(2^m) that can be used to solve this problem. In this paper, it is shown that this Berlekamp-Rumsey-Solomon (1967) algorithm, together with the Chien (1964) search method, makes possible a fast decoding algorithm in the standard-basis representation that is naturally suitable for a software implementation. Finally, simulation results for this fast algorithm are given",2001,0, 8188,Research on fault monitoring for railway automatic blocking and continuous transmission lines,"The characteristics of railway automatic blocking and continuous transmission lines are studied. On this basis, a fault monitoring system is designed. Monitoring units installed on the lines can detect short-circuit faults directly (when an earth fault happens, the S-injection method is applied), and send this information to adjacent GPRS units via Radio Frequency (RF). The adjacent GPRS units then send the data to the server. By analyzing the gathered data, the server can locate the fault section. Tests and field operation indicate that this system can identify the fault section accurately and rapidly, increasing both economic and social benefits.",2010,0, 8189,Design and Realization of Remote Fault Diagnosis System with Hiberarchy,"The characteristics of remote fault diagnosis (RFD) are analyzed, and the levels of the diagnostic participants, diagnostic processes, system structure, and the responsibilities and obligations for founding and maintaining an RFD system are pointed out. A multi-level RFD system with local, remote and domain fault diagnosis subsystems is designed. The system fully accords with RFD's characteristics, supports graded fault diagnosis and is convenient for safeguarding the diagnosed equipment.
The system is realized with Internet, LAN, PSTN, and GSM communication networks and database techniques, which helps to exploit the strengths of the various communication networks and to establish and widely apply the RFD system.",2008,0, 8190,Definition of fault loads based on operator faults for DBMS recovery benchmarking,"The characterization of database management system (DBMS) recovery mechanisms and the comparison of recovery features of different DBMS require a practical approach to benchmark the effectiveness of recovery in the presence of faults. Existing performance benchmarks for transactional and database areas include two major components: a workload and a set of performance measures. The definition of a benchmark to characterize DBMS recovery needs a new component: the faultload. A major cause of failures in large DBMS is operator faults, which makes them an excellent starting point for the definition of a generic faultload. This paper proposes the steps for the definition of generic faultloads based on operator faults for DBMS recovery benchmarking. A classification of operator faults in DBMS is proposed and a comparative analysis among three commercial DBMS is presented. The paper ends with a practical example of the use of operator faults to benchmark different configurations of the recovery mechanisms of the Oracle 8i DBMS.",2002,0, 8191,Non-Uniform error criteria for automatic pattern and speech recognition,"The classical Bayes decision theory [1] is the foundation of statistical pattern recognition. Conventional applications of the Bayes decision theory result in ubiquitous use of the maximum a posteriori probability (MAP) decision policy and the paradigm of distribution estimation as practice in the design of a statistical pattern recognition system. In this paper, we address the issue of non-uniform error criteria in statistical pattern recognition, and generalize the Bayes decision theory for pattern recognition tasks where errors over different classes have different degrees of significance. We further propose extensions of the method of minimum classification error (MCE) [2] for a practical design of a statistical pattern recognition system to achieve empirical optimality when non-uniform error criteria are prescribed. In addition, we apply our method to speech recognition tasks. In the context of automatic speech recognition (ASR), we present a variety of training scenarios and weighting strategies under our framework. Experimental demonstrations for both general pattern recognition and continuous speech recognition are provided to support the effectiveness of our new approach.",2008,0, 8192,Multi-View Video Coding Using Color Correction,"The color variations between multi-view video sequences may degrade the inter-view prediction and result in low coding efficiency. In this paper we propose an efficient multi-view video coding scheme using dominant basic color mapping based color correction. The experimental coding results show that color correction has the potential to make multi-view video coding more efficient.",2008,0, 8193,Delay fault testing: choosing between random SIC and random MIC test sequences,"The combination of higher quality requirements and the sensitivity of high performance circuits to delay defects has led to an increasing emphasis on delay testing of VLSI circuits.
In this context, it has been proven that Single Input Change (SIC) test sequences are more effective than classical Multiple Input Change (MIC) test sequences when a high robust delay fault coverage is targeted. In this paper, we show that random SIC (RSIC) test sequences achieve a higher fault coverage than random MIC (RMIC) test sequences when both robust and non-robust tests are under consideration. Experimental results given in this paper are based on software generation of RSIC test sequences, which can be easily generated in this case. For built-in self-test (BIST) purposes, hardware-generated RSIC sequences have to be used. This kind of generation is briefly discussed",2000,0, 8194,Spectral Smile Correction of CRISM/MRO Hyperspectral Images,"The Compact Reconnaissance Imaging Spectrometer for Mars (CRISM) is affected by an artifact common to pushbroom-type imaging spectrometers, the so-called spectral smile. For this reason, the central wavelength and the width of the instrument spectral response vary along the spatial dimension of the detector array. As a result, the spectral capabilities of CRISM deteriorate for the off-axis detector elements while the distortions are minimal in the center of the detector array, the so-called sweet spot. The smile effect results in a data bias that affects hyperspectral images and whose magnitude depends on the column position (i.e., the spatial position of the corresponding detector element) and the local shape of the observed spectrum. The latter is singularly critical for images that contain chemical components having strong absorption bands, such as carbon dioxide on Mars in the gas or solid phase. The smile correction of CRISM hyperspectral images is addressed by the definition of a two-step method that aims at mimicking a smile-free spectral response for all data columns. First, the central wavelength is made uniform by resampling all spectra to the sweet-spot wavelengths. Second, the nonuniform width of the spectral response is overcome by using a spectral sharpening which aims at mimicking an increase of the spectral resolution. In this step, only spectral channels particularly suffering from the smile effect are processed. The smile correction of two CRISM images by the proposed method shows remarkable results regarding the correction of the artifact effects and the preservation of the original spectra.",2010,0, 8195,Systematic Data-Driven Approach to Real-Time Fault Detection and Diagnosis in Automotive Engines,"The competitive businesses' desire to provide ""smart services"" and the pace at which modern automobiles are increasing in complexity are motivating the development of automated intelligent vehicle health management systems. Current On-Board Diagnosis (OBD II) systems use simple rules and maps to perform diagnosis, and significant human intervention is needed to troubleshoot a problem. More research is needed on developing innovative, easy-to-use automated diagnostic approaches for incorporation into OBD systems. In addition, developing intelligent remote diagnosis technology and building a bridge between on-board and off-board diagnosis are open areas of research in the automotive industry. Here, we propose a systematic data-driven process that utilizes knowledge from the signal-processing and statistical domains to detect and diagnose faults in automotive engines. The proposed approach is applied to a Toyota Camry engine, and the experimental results are presented in detail.
The experimental system consists of an engine running with a manual transmission on a dynamometer test-stand. For our experiments, the data for five faults (three sensor faults and two physical faults) with different severity levels under various operating conditions (e.g., different throttle angles, engine speeds, etc.) is collected from the engine, and the application of a data-driven diagnostic process is examined.",2006,0, 8196,Seven pernicious kingdoms: a taxonomy of software security errors,"Taxonomies can help software developers and security practitioners understand the common coding mistakes that affect security. The goal is to help developers avoid making these mistakes and more readily identify security problems whenever possible. Because developers today are by and large unaware of the security problems they can (unknowingly) introduce into code, a taxonomy of coding errors should provide a real tangible benefit to the software security community. Although the taxonomy proposed here is incomplete and imperfect, it provides an important first step. It focuses on collecting common errors and explaining them in a way that makes sense to programmers. This new taxonomy is made up of two distinct kinds of sets, which we're stealing from biology: a phylum (a type of coding error, such as illegal pointer value) and a kingdom (a collection of phyla that shares a common theme, such as input validation and representation). Both kingdoms and phyla naturally emerge from a soup of coding rules relevant to enterprise software, and it's for this reason that this taxonomy is likely to be incomplete and might lack certain coding errors. In some cases, it's easier and more effective to talk about a category of errors than to talk about any particular attack. Although categories are certainly related to attacks, they aren't the same as attack patterns.",2005,0, 8197,Fuzzy fault tree analysis as a mechanism for technical support to small/medium electroplaters on a quasi online/real-time basis,"Technical support to small/medium electroplaters (SMEs) can be provided either by an electroplating department of a large company (ELC) or by a technology center designed for this purpose. In this work, a full cycle of the technical consultation process is presented, including mainly diagnostic knowledge base (DKB) enrichment, fuzzy fault tree analysis (FTA), application of a simple expert system for pre-filtering alternative solutions to the problem, fuzzy multicriteria analysis (MCA), and technology transfer. The utility-applicability of this computer aided integrated scheme (designed/developed to work on a quasi online/real-time basis) is proved by analysing a real industrial problem referring to the appearance of defective products in chrome electroplating. The system, after (i) excluding a significant number of alternative causes (suggested by fuzzy FTA) through filtering and (ii) rejecting the initially proposed cause (suggested by fuzzy MCA) of 'anodes corrosion', successfully determined (by re-running the fuzzy MCA algorithm) the truly responsible cause of 'insufficient agitation'.
The system can be easily extended to include all electrochemical processes, on condition of the availability of proper data for the DKB enrichment.",2003,0, 8198,On the need for common evaluation methods for fault tolerance costs in microprocessors,"Technological evolution is making fault tolerance more and more important in all application fields and it is therefore mandatory to have good strategies to measure its impact on existing systems. A lot of work has been done on fault characterization and modelling, but confusion still abounds when it comes to performance loss evaluation. This is especially true for microprocessors, where ""performance"" is a tricky word to define and the cause of many debates and controversies. This article proposes a new and easy-to-apply framework to evaluate the performance costs implied by fault tolerance in systems made of both hardware and software. Results are presented on a system based on a Sparc v8 processor and the eCos operating system.",2005,0, 8199,Multiple Bit Error Detection and Correction in Memory,"Technology evolution provides ever increasing density of transistors in chips, lower power consumption and higher performance. In this environment the occurrence of multiple-bit upsets (MBUs) becomes a significant concern. Critical applications need high reliability, but traditional error mitigation techniques assume only the single error model, and only a few techniques to correct MBUs at the algorithm level have been proposed. In this paper, a novel circuit level technique to detect and correct multiple errors in memory is proposed. Since it is implemented at circuit level, it is transparent to programmers. This technique is based on Decimal Hamming coding, and here it is compared to Reed-Solomon coding at circuit level. Experimental results show that for memory words wider than 16 bits, the proposed technique is faster and imposes lower area overhead than optimized RS, while mitigating errors affecting up to 25% of the memory word.",2010,0, 8200,Flexible Error Protection for Energy Efficient Reliable Architectures,"Technology scaling is having an increasingly detrimental effect on microprocessor reliability, with increased variability and higher susceptibility to errors. At the same time, as the integration of chip multiprocessors increases, power consumption is becoming a significant bottleneck that could threaten their growth. To deal with these competing trends, energy-efficient solutions are needed to deal with reliability problems. This paper presents a reliable multicore architecture that provides targeted error protection by adapting to the characteristics of individual cores and workloads, with the goal of providing reliability with minimum energy. The user can specify an acceptable reliability target for each chip, core, or application. The system then adjusts a range of parameters, including replication and supply voltage, to meet that reliability goal. In this multicore architecture, each core consists of a pair of pipelines that can run independently (running separate threads) or in concert (running the same thread and verifying results). Redundancy is enabled selectively, at functional unit granularity. The architecture also employs timing speculation for mitigation of variation-induced timing errors and to reduce the power overhead of error protection. On-line control based on machine learning dynamically adjusts multiple parameters to minimize energy consumption.
Evaluation shows that dynamic adaptation of voltage and redundancy can reduce the energy delay product of a CMP by 30-60% compared to static dual modular redundancy.",2010,0, 8201,PEDS: A Parallel Error Detection Scheme for TCAM Devices,"Ternary content-addressable memory (TCAM) devices are increasingly used for performing high-speed packet classification. A TCAM consists of an associative memory that compares a search key in parallel against all entries. TCAMs may suffer from error events that cause ternary cells to change their value to any symbol in the ternary alphabet {0,1,*}. Due to their parallel access feature, standard error detection schemes are not directly applicable to TCAMs; an additional difficulty is posed by the special semantics of the * symbol. This paper introduces PEDS, a novel parallel error detection scheme that locates the erroneous entries in a TCAM device. PEDS is based on applying an error-detecting code to each TCAM entry and utilizing the parallel capabilities of the TCAM by simultaneously checking the correctness of multiple TCAM entries. A key feature of PEDS is that the number of TCAM lookup operations required to locate all errors depends on the number of symbols per entry in a manner that is typically orders of magnitude smaller than the number of TCAM entries. For large TCAM devices, a specific instance of PEDS requires only 200 lookups for 100-symbol entries, while a naive approach may need hundreds of thousands of lookups. PEDS allows flexible and dynamic selection of tradeoff points between robustness, space complexity, and number of lookups.",2010,0,6439 8202,Rollout strategies for sequential fault diagnosis,"Test sequencing is a binary identification problem wherein one needs to develop a minimal expected cost testing procedure to determine which one of a finite number of possible failure sources, if any, is present. The problem can be solved optimally using dynamic programming or AND/OR graph search methods (AO*, CF, and HS). However, for large systems, the computation associated with dynamic programming or AND/OR graph search methods is substantial, due to the rapidly increasing number of OR nodes (denoting ambiguity states) and AND nodes (denoting tests) in the search graph. In order to overcome the computational explosion, one-step or multistep lookahead heuristic algorithms have been developed to solve the test sequencing problem. In this paper, we propose to apply rollout strategies, which can be combined with the one-step or multistep lookahead heuristic algorithms, in a computationally more efficient manner than the optimal strategies, to obtain solutions superior to those using the one-step or multistep lookahead heuristic algorithms. The rollout strategies are illustrated and tested using a range of real-world systems. We show computational results, which suggest that the information-heuristic based rollout policies are significantly better than other rollout policies based on Huffman coding and entropy.",2003,0, 8203,Comparing Fault-based Testing Strategies of General Boolean Specifications,"Testing Boolean specifications in general form (GF) by IDNF-oriented approaches always results in excessive cost and missed detection of some faults. This paper proposes GF-oriented approaches to improve them.
The experimental results show that the GF-oriented strategies can enhance the fault detection capability and reduce the sizes of test sets.",2007,0, 8204,Video text detection and localization based on localized generalization error model,"Texts in videos provide plentiful information for video analysis such as video indexing, understanding and retrieval. In this work, we propose a neural network based method for detecting text in video frames. The proposed method consists of three major steps: feature extraction, text region detection and candidate region refinement. Firstly, we extract texture features from four edge maps yielded from the target video frame. Secondly, a Radial Basis Function Neural Network (RBFNN) optimized by the Localized Generalization Error Model (L-GEM) is applied to detect text candidates. Finally, a false-detection removal step is applied to fine-tune the result. Experimental results demonstrate that the proposed method is efficient for different font-colors, font-sizes and languages in complex backgrounds.",2010,0, 8205,Using an Error Detection Strategy for Improving Web Accessibility for Older Adults,"The ability to use the Internet can provide an important contribution to an older adult's quality of life. Communication via email with family, friends and service providers has become a critical factor for improving one's ability to cope with modern society as individuals age. The problem is that as users age, natural physical and cognitive impairments make it more difficult for them to use the required technology. The present study investigates the use of error detection as a means of improving Web access amongst older adults. Specifically, error detection strategies are compared to observation as a means of identifying the impairments of Internet users.",2009,0, 8206,Sensors reliability optimal allocation for fault tolerant control system with imperfect fault diagnosis,"The absolute reliability of a fault diagnostic device is assumed in most reliability optimal allocation methods, and the fault diagnostic device is ignored in the system reliability model. In this paper, the hardware and software conditions on which the fault diagnostic device depends are analyzed. The fault diagnostic device is assumed imperfect and is regarded as a part of the whole system reliability model. Therefore, a more practical reliability optimal allocation method for a control system is provided. The method is used for the sensor reliability optimal allocation of a satellite attitude control system.",2004,0, 8207,Applications of fuzzy-logic-wavelet-based techniques for transformers inrush currents identification and power systems faults classification,"The advent of wavelet transforms (WTs) and fuzzy-inference mechanisms (FIMs), with the ability of the first to focus on system transients using short data windows and of the second to map complex and nonlinear power system configurations, provides an excellent tool for high speed digital relaying. This work presents a new approach to real-time fault classification in power transmission systems, and identification of power transformers magnetising inrush currents, using a fuzzy-logic-based multicriteria approach Omar A.S. Youssef [2004, 2003] with a wavelet-based preprocessor stage Omar A.S. Youssef [2003, 2001]. Three inputs, which are functions of the three line currents, are utilised to detect fault types such as LG, LL, LLG as well as magnetising inrush currents.
The technique is based on utilising the low-frequency components generated during fault conditions on the power system and/or magnetising inrush currents. These components are extracted using an online wavelet-based preprocessor stage with a data window of 16 samples (based on a 1.0 kHz sampling rate and 50 Hz power frequency). Data generated from the simulation of a 330Δ/33Y kV step-down transformer connected to a 330 kV model power system using EMTP software were used by the MATLAB program to test the performance of the technique as to its speed of response, computational burden and reliability. Results are shown and they indicate that this approach can be used as an effective tool for high-speed digital relaying, and that the computational burden is much lower than that of recently postulated fault classification techniques.",2004,0, 8208,Error models for evaluating error control strategies in EGPRS systems,"The aim of error models is to describe the statistical properties of bursty error sequences encountered in digital mobile fading channels. These models have wide applications in the design and performance evaluation of error control schemes. Target error sequences are generated by computer simulations of uncoded enhanced general packet radio service (EGPRS) systems with typical urban (TU) channels and rural area (RA) channels. A novel generative model is then proposed based on a properly parameterized and sampled deterministic process followed by a threshold detector and two parallel mappers. Simulation results indicate that the proposed deterministic process based generative model (DPBGM) enables us to approximate very closely the characteristics of the target error sequences with respect to the gap distribution (GD), error-free run distribution (EFRD), error cluster distribution (ECD), error burst distribution (EBD), error-free burst distribution (EFBD), block error probability distribution (BEPD), and bit error correlation function (BECF). The validity of the suggested DPBGM is further confirmed by the excellent match of the simulated frame error rates (FERs) and residual bit error rates (RBERs) of coded EGPRS systems obtained from the target and generated error sequences.",2004,0, 8209,Experimental inter-turn short circuit fault characterization of wound rotor induction machines,"The aim of this paper is to compare different experimental techniques for detecting a stator inter-turn short-circuit in a wound rotor induction machine working as a generator. Three methods using different signals such as the stator or rotor current and the leakage flux will be described and experimented. The rotor currents provide interesting signatures since the stator faults introduce new harmonics in the rotor windings. Hence, in this paper, it is proved that sensing one rotor current by using a current sensor is one of the possible techniques for detecting this kind of electrical fault. The leakage flux is another alternative for this kind of problem since, in this paper, it is proved that a simple external leakage flux sensor is efficient to detect, in the low frequency range, the inter-turn short circuit in three-phase induction machines even in the presence of power supply harmonics. The stator currents space vector is also an interesting technique since it is proved that inter-turn short circuit faults can be considered as a stator unbalance which can be detected using the negative sequence of stator currents.
The comparison of these three methods is performed using experiments on a 5.5kW-220V/380V-50Hz-8 pole wound-rotor induction machine working in both healthy and faulty modes at different load conditions.",2010,0, 8210,Double-fed three-phase induction machine model for simulation of inter-turn short circuit fault,"The aim of this paper is to develop a doubly-fed induction machine (DFIM) model suitable for the simulation of this machine in both the healthy mode and the faulty mode. Indeed, the developed model allows the simulation of the inter-turn short circuit in the stator or the rotor of the machine in any system with control circuits and/or connections to the grid by means of power electronics converters. The circuit-oriented approach has been chosen in order to represent the DFIM model as a rotating transformer. Of course, since this model doesn't need any transformation, it is possible to simulate any kind of asymmetry in both stator and rotor sides with or without variations of the machine parameters. In addition, the first turn of one phase of the stator and the rotor was modeled using the circuit-oriented approach to simulate an inter-turn short circuit in both stator and rotor. A specific wound-rotor induction machine model, using only resistances, inductances and controlled voltage sources, has been developed. The coupling effects between stator and rotor have been taken into account through stator-rotor mutual inductances. The performance of the model has been verified by comparison between simulation and experimental results on a 5.5 kW-50 Hz-8 pole DFIM working in generating mode at different load conditions.",2009,0, 8211,"The impact and correction of timing error, frequency offset and phase noise in IEEE 802.11a and ETSI HiperLAN/2","The aim of this paper is to discuss and explore the impact of practical impairments such as timing error, frequency offset and phase noise on the physical layer performance of IEEE 802.11a and ETSI HiperLAN/2. Standard compliant correction techniques are identified or developed to exploit the allocated preambles and pilot symbols embedded in these standards. The paper includes physical layer performance results in the presence of each form of degradation. These results are presented in terms of packet error rate and bit error rate versus signal to noise ratio for IEEE/ETSI standard indoor channel models A and E. The performance gain of the corrected receiver is then compared to that of an unaided device. Results indicate that an acceptable level of performance can be achieved over the full range of practical impairments given careful receiver design.",2002,0, 8212,Analyzing program dynamic graphs for software fault localization,"The aim of this paper is to extract dynamic behavioral graphs from different executions of a program and analyze them to find bug-relevant sub-graphs. Similar graph mining methods for software fault localization extract discriminative sub-graphs in failing and passing executions. However, due to the nature of software bugs, a failure context does not necessarily appear in a discriminative sub-graph. Therefore, we have proposed a new formula to rank the edges based on their suspiciousness with respect to the failure. These suspicious edges are further applied to form the best candidate faulty sub-graphs. In order to show the significance of using weights to construct program dynamic graphs, we have analyzed both weighted and un-weighted graphs with the proposed ranking technique.
The experimental results on the Siemens suite reveal the high capability of the proposed technique on weighted dynamic graphs.",2010,0, 8213,Fault simulation technique for switch mode power supplies,"The aim of this paper is to present a fault simulation technique that could help engineers to study the consequences of faults in the output filter of a switch mode power supply. To do so, two techniques are presented in this paper: a new experimental off-line technique based on the DFT algorithm, which can determine the equivalent circuit of capacitors at the operating frequency of the converter, and a simulation technique based on the Laplace transform, which can predict the behavior of the converter. Simulation and experimental results will be presented in order to prove the applicability of both techniques.",2007,0, 8214,Fault detection and diagnosis system for air-conditioning units using recurrent type neural network,"The air-conditioning systems of buildings have been diversified in recent years, and the complexity of the systems has increased. At the same time, stability in the system and low running cost are demanded. To solve these problems, various research projects have been done. The development of energy load prediction systems and fault detection and diagnosis systems has received great attention. The authors propose a real time fault diagnosis system for air conditioning units (the heating unit, the cooling unit, the air intake unit, and the air-recycling unit) using a recurrent type neural network",2000,0, 8215,A design-for-diagnosis technique for diagnosing both scan chain faults and combinational circuit faults,"The amount of die area consumed by scan chains and scan control circuitry can range from 15% to 30%, and scan chain failures account for almost 50% of chip failures. As the conventional diagnosis process usually runs on a fault-free scan chain, scan chain faults may disable the diagnostic process, leaving a large failure area to time-consuming failure analysis. In this paper, a design-for-diagnosis (DFD) technique is proposed to diagnose faulty scan chains precisely and efficiently; moreover, with the assistance of the proposed technique, the conventional logic diagnostic process can be carried out even with faulty scan chains. The proposed approach is entirely compatible with conventional scan-based design. Previously proposed software-based diagnostic methods for conventional scan designs can still be applied to our design. Experiments on ISCAS'89 benchmark circuits are conducted to demonstrate the efficiency of the proposed DFD technique.",2008,0, 8216,FCPre: Extending the Arora-Kulkarni Method of Automatic Addition of Fault-Tolerance,"Synthesizing fault-tolerant systems from fault-intolerant systems simplifies the design of fault-tolerance. Arora and Kulkarni developed a method and a tool to synthesize fault-tolerance under the assumption that specifications are not history-dependent (fusion-closed). Later, Gartner and Jhumka removed this assumption by presenting a modular extension of the Arora-Kulkarni method. This paper presents an implementation of the Gartner-Jhumka method which is evaluated on several examples. As an additional safety net, we have added automatic verification of the results using the model checker Spin. In the context of this work, a fault in the Gartner-Jhumka method has been found.
Though this fault is rare and does not cause incorrect results, there might be no result at all",2007,0, 8217,Noise-Related Radiometric Correction in the TerraSAR-X Multimode SAR Processor,"Synthetic aperture radar (SAR) image intensity is disturbed by additive system noise. During SAR focusing, pattern corrections that are adapted to the characteristics of the wanted signal, but not to the characteristics of the noise, influence the spatial distribution of the noise power. Particularly in the case of ScanSAR, a distinct residual noise pattern in low backscatter areas results. This necessitates a noise-adapted radiometric correction of the focused image for almost all applications except interferometry. In this paper, we thoroughly investigate this topic. Based on signal theoretical and stochastic considerations, we develop a radiometric correction scheme. Simulations and the application of the algorithm to TerraSAR-X datatakes support the theoretical results.",2010,0, 8218,Handling behavioral components in multi-level concurrent fault simulation,"System level modeling is becoming a necessity in all areas of engineering design. As systems grow in complexity, designers may increasingly rely on commercial off-the-shelf (COTS) components. Frequently, these components are described at a high level of abstraction (behaviorally), which complicates fault testing. We discuss the trade-offs of using behavioral components in a design, specifically as they relate to fault simulation. We investigate important issues such as timing, and examine the need to internally fault behavioral models. We then present our multi-level concurrent fault simulator (MCS), which can accept any combination of gate level and behavioral models using a single kernel. Our kernel propagates faults through behavioral components deterministically. Finally, we present performance results of multi-level models to demonstrate the simulator's capabilities and performance",2000,0, 8219,Transaction level error susceptibility model for bus based SoC architectures,"System on Chip architectures have traditionally relied upon bus based interconnect for their communication needs. However, increasing bus frequencies and the load on the bus call for a focus on reliability issues in such bus based systems. In this paper, we provide a detailed analysis of different kinds of errors and the susceptibility of such systems to such errors on the various components that the bus comprises. With elaborate experiments we determine the effect of a single bit error on the bus system during the course of different transactions. The work demonstrates the fact that only a few signals in a bus system are really critical and need to be guarded. Such transaction based analysis helps us to develop an effective methodology to predict the effect of a single bit error on any application running on a bus based architecture. We demonstrate that our transaction based prediction scheme works with an average accuracy of 92% over all the benchmarks when compared with the actual simulation results",2006,0, 8220,The influence of superconducting fault current limiter structure on the V-I characteristic,"Superconducting fault current limiters (SFCL) use the natural ability of superconductors to shift rapidly from the superconducting to the resistive state when their critical current value is exceeded. This special feature of superconducting materials enables the design of electrical devices with parameters that cannot be achieved by using conventional materials.
The voltage dependence on the structure of a superconducting inductive fault current limiter in the superconducting state has been described in this paper. This dependence has enabled the development of computer software for parameter optimization of the SFCL primary winding. This dependence in the resistive state is also described. The analysis was carried out using FLUX2D software based on a finite element method.",2004,0, 8221,Design of Power Transformer Fault Diagnosis Model Based on Support Vector Machine,The support vector machine (SVM) is a machine-learning algorithm based on statistical learning theory. A method for power transformer fault diagnosis based on SVM is proposed in this paper. The principle and algorithm of this method are introduced. Through a finite learning sample the relation between the transformer fault signature and the quantity of its dissolved gas is established. A fault classifier is constructed by using the dissolved gas data of the faulty transformer. The testing results show that this method can successfully be applied to the diagnosis of gear faults.,2009,0, 8222,A New Approach to Find Fault Locations on Distribution Feeder Circuits,"Sustained outages on mainline distribution feeder circuits often require long patrol times to visually find the root cause, due to the sheer length of the mainline circuit. Utilities interested in improving their reliability performance, namely the customer average interruption duration index (CAIDI), continually look for ways to reduce the time involved in these patrols. Xcel Energy, Inc. began to look for new approaches to provide reliable remote fault location detection on its mainline distribution feeder circuits. A new approach was proposed, based on real-time monitoring of current levels in the neutral-return and pole-ground current paths at dispersed key locations along a distribution feeder. Whenever a local unit detects a fault condition, it quickly communicates fault and location data back to utility operators so they can direct crews to the problem area. This paper will focus on the fault location techniques applied, the local detection platform, the control center operator interface, real-world results to date, and going-forward applications of this cost-effective technology",2007,0, 8223,Environmentally Adaptive Fault Tolerant Computing (EAFTC),"The application of commercial-off-the-shelf (COTS) processing components in operational space missions with optimal performance and efficiency requires a system-level approach. Of primary concern is the need to handle the inherent susceptibility of COTS components to single event upsets (SEUs). Honeywell, in conjunction with Physical Sciences Incorporated and WW Technology Group, has developed a new paradigm for fault tolerant COTS based onboard computing. The paradigm is called ""environmentally adaptive fault tolerant computing"" (EAFTC). EAFTC combines a set of innovative technologies to enable efficient use of high performance COTS processors, in the harsh space environment, while maintaining the required system availability",2005,0, 8224,An adaptive algorithm for tolerating value faults and crash failures,"The AQuA architecture provides adaptive fault tolerance to CORBA applications by replicating objects and providing a high-level method that an application can use to specify its desired level of dependability.
This paper presents the algorithms that AQuA uses, when an application's dependability requirements can change at runtime, to tolerate both value faults in applications and crash failures simultaneously. In particular, we provide an active replication communication scheme that maintains data consistency among replicas, detects crash failures, collates the messages generated by replicated objects, and delivers the result of each vote. We also present an adaptive majority voting algorithm that enables the correct ongoing vote while both the number of replicas and the majority size dynamically change. Together, these two algorithms form the basis of the mechanism for tolerating and recovering from value faults and crash failures in AQuA",2001,0, 8225,Radarsat-2 image geometric correction & validation using a few GCPs,"Taking advantage of its onboard GPS receiver, the Radarsat-2 satellite can determine its orbit in real time with a three-times-RMS error of less than 60 meters. A rapid geometric correction technique is proposed in this paper. The systematic error between the Radarsat-2 image and the actual reference coordinates can be eliminated by applying a few Ground Control Points to the RPF coefficients extracted from the SAR product. The relative accuracy of this method has been tested using the formulation derived from the principle of SAR imaging. The absolute accuracy has been tested using 11 field-collected GPS points. This experiment proves that this technology can achieve geometric correction accuracy with a mean square error of less than 2 pixels.",2010,0, 8226,Error analysis of hemispherical resonator gyro drift data,"The hemispherical resonator gyro (HRG) is a vibration gyro with high accuracy, long life and great reliability. Analyzing and studying its output signal is one of the key techniques for scientifically evaluating its performance. This paper presents the principle of the HRG, introduces the Allan variance method, and uses MATLAB programs to simulate and calculate. It is the first time in China that the Allan variance method has been applied to separate and estimate HRG errors. Experiments prove that the Allan variance method is an effective method to evaluate HRG performance.",2008,0, 8227,Research on the Application of Data Mining in Software Testing and Defects Analysis,"Highly dependable software is not only one of the commanding points of software technology development but also an essential foundation of the software industry. This paper summarizes the latest research on data mining for software dependability testing and evaluation, and elaborates on the application of data mining technology to software defect testing, including the data mining methods, data mining systems and software testing management systems commonly used in defect testing. It specifically introduces the application of association-rule-based defect analysis techniques to the different classifications of software defects, and proposes an association-rule-based software defect evaluation method, the purpose of which is to decrease software defects and to achieve rapid growth in software dependability.",2009,0, 8228,Hardware in loop implementation and analysis of a neural augmented fault tolerant flight controller for a high performance dynamic fighter aircraft model on a target digital signal processor,"The high performance fighter aircraft achieves a high angle of attack (up to 90°) and superb maneuverability because of the presence of leading edge flaps.
In this paper we make an attempt to achieve reconfigurable control of this high performance fighter aircraft during auto-landing [1], using an Extended Minimal Resource Allocating Network (EMRAN) augmented controller [2] with additional faults, and validate the same through software simulation using MATLAB/Simulink and also through hardware implementation of the controller on a target Digital Signal Processor (DSP) using the hardware-in-the-loop simulation technique. A high fidelity aircraft model [1] with seven control surfaces, including leading edge flaps (LEF), in comparison to the low fidelity model that employs only five control surfaces, is employed in the simulation as well as the hardware implementation. Also, while reconfiguring, the aircraft model will be made dynamic, wherein multiple faults can occur at different instants.",2009,0, 8229,Configurable mobile agent and its fault-tolerance mechanism,"The idea of performing client-server computing by transmission of executable programs between clients and servers has become highly popular among researchers and developers who are engaged in intelligent network services. Computing based on mobile agents is an important aspect of this idea. This paper focuses on researching the migration process of agents. A model based on modules is devised for constructing agents. A concurrent schedule method is presented, with which agent migration can be easily implemented. Most of the unnecessary transmission of code and data can be avoided by module reuse. Consequently, the execution period of mobile agents is reduced and their efficiency is improved. Additionally, a fault-tolerance mechanism is designed in the system to ensure that the agent can work even when some faults occur in the network or in the host",2001,0, 8230,IEEE 1451 standard and wireless sensor networks: An overview of fault tolerant algorithms,"The IEEE 1451 standards family provides a set of common interfaces for connecting transducers to existing instrumentation and control networks. Currently IEEE 1451.5 is proposed to define the interface for wireless sensors to a system without a network connection. IEEE 1451.6 is also being defined for a network using a controller area network (CAN). As IEEE 1451.5 and 1451.6 support both wired and wireless sensor networks, sensor networks shall evolve as heterogeneous networks. We present a review of the fault tolerance algorithms in use for wireless sensor networks and issues related to the NCAP for the IEEE 1451 standard to address wired and wireless communication. In this work we attempt to survey some heuristics and algorithms for fault-tolerance and fusion in multi-sensor environments and discuss the Health check protocol and fault diagnosis process [Reya, 2005]. We also identify the possible sensor failures with a wide range of sensors under different working conditions",2006,0, 8231,Color Correction for Digital Images Based on the Finite-Dimensional Linear-Model,"The image color recorded in a computer vision system depends on three factors: the physical content of the scene, the illumination on the scene, and the characteristics of the camera. The goal of computational color correction is to reset the intensities of the color channels in digital images, and express the scene colors canonically. In the color correction method based on spectral reflectance reproduction, the finite-dimensional linear-model is an effective way to condense spectral reflectance.
Since there are only three channels for each pixel in digital images, color correction based on spectral reflectance reproduction using the finite-dimensional model with three dimensions is needed. In the definite gamut, the spectral reflectance curves are analyzed using the finite-dimensional model. It is shown that reflectance generated by three basis functions, which capture 99.13% of the overall variance, provides good approximations to the measured spectral reflectance. Together with the measured spectral power distribution of the illumination and the estimated spectral responses of the camera, the quantum catches are calculated. Errors caused by the approximation of the finite-dimensional model are relatively small, which shows that color correction based on the finite-dimensional linear-model is feasible to some extent.",2008,0, 8232,A modified Dendritic Cell Algorithm for on-line error detection in robotic systems,"The immune system is a key component in the maintenance of host homeostasis. Key actors in this process are cells known as dendritic cells (DCs). An artificial immune system based on DCs (known as the Dendritic Cell Algorithm: DCA) is well established in the literature and has been applied in a number of applications. Work in this paper is concerned with the development of an integrated homeostatic system for small, autonomous robotic systems, implemented on a resource limited micro-controller. As a first step, we have modified the DCA to operate in an on-line manner in both simulated robotic units and a resource constrained micro-controller. Errors can be introduced into the robotic unit during operation, and these can be detected and then circumvented by the modified DCA.",2009,0, 8233,Towards immune inspired fault tolerance in embedded systems,"The immune system is a remarkable natural system that is proving to be of great inspiration to computer scientists and engineers alike. This paper discusses the role that the immune system can play in the development of fault tolerant embedded systems. Initial work in the area has highlighted the use of the immune process of negative selection, and more importantly the concept of self/non-self discrimination, in the application of artificial immune systems to fault tolerance. This paper reviews those works, highlights issues relating to the way in which this area is approached, and raises important points that need to be considered before effective immune inspired fault tolerant systems can be constructed.",2002,0, 8234,Functionally testable path delay faults on a microprocessor,"Assessing the impact of delay defects on functionally untestable paths on overall circuit performance involves identifying such paths, determining the achievable path delay fault coverage, and reducing the subsequent test generation effort. The experimental results for two microprocessors (Parwan and DLX) indicate that a significant percentage of structurally testable paths are functionally untestable",2000,0, 8235,Implementation and testing of fault-tolerant photodiode-based active pixel sensor (APS),"The implementation of imaging arrays for system-on-a-chip (SOC) is aided by using fault-tolerant light sensors. Fault-tolerant redundancy in an active pixel sensor (APS) is obtained by splitting the photodiode and readout transistors into two parallel operating devices, while keeping a common row select transistor. This creates a redundant APS that is self-correcting for most common faults.
Simulations suggest that, by combining hardware fault-tolerance capability with software correction, active pixel sensor arrays could be virtually immune to defects. To test this concept in hardware, a fault-tolerant photodiode APS was designed and fabricated using a CMOS 0.18 μm process. Testing included both fully functional APS and those in which various failure modes and mechanisms were introduced (equivalent to stuck low and stuck high faults). Test results show that the output voltage for the stuck high case and the stuck low case varies linearly with light intensity. For the stuck low case, the sensitivity is 0.57 of that for a non-defective redundant APS, and for the stuck high case it is 0.40. These deviate from the theoretical value of 0.5 by +14% and -20% respectively.",2003,0, 8236,Distributed error handling and HRI,"The implementations of a distributed, autonomous error handler (EH) and a human-robot interface (HRI) are presented. The interface is combined with the EH to allow a human operator to see that a failure has occurred on a robot and whether or not it has been served by the EH. An experiment was run to test how well the EH and the interface work together, as well as the usefulness of the EH. The results were inconclusive, although the EH and interface worked together successfully.",2004,0, 8237,Highly fault-tolerant FPGA processor by degrading strategy,"The importance of highly fault-tolerant computing systems has been widely recognized. We propose an FPGA architecture with a degrading strategy to increase fault-tolerance in a CPU. Previously, duplication and substitution methods have been proposed, but the former waste redundant circuits and the latter increase computing time as faults occur. We propose a reconstitution method with FPGA technology. Using our method, the execution speed of the CPU gradually decreases as permanent faults occur. The CPU consists of functional blocks (FBs), that is, re-configurable logic blocks. When a fault occurs, the broken FB is discarded. As the number of valid FBs decreases, the CPU's functional units are scaled down; therefore, execution time increases. In our simulation, speed degradation is less than 100% when 70% of all FBs are broken. Compared with previous methods, speed degradation is smaller in the case that many permanent faults occur.",2002,0, 8238,Automating defects simulation and fault modeling for SRAMs,"The continuous improvement in manufacturing process density for very deep sub-micron technologies constantly leads to new classes of defects in memory devices. Exploring the effect of fabrication defects in future technologies, and identifying new classes of realistic functional fault models with their corresponding test sequences, is a time consuming task up to now mainly performed by hand. This paper proposes a new approach to automate this procedure. The proposed method exploits the capabilities of evolutionary algorithms to automatically identify faulty behaviors in defective memories and to define the corresponding fault models and relevant test sequences. Target defects are modeled at the electrical level in order to optimize the results to the specific technology and memory architecture.",2008,0, 8239,Customizable Fault Tolerant Caches for Embedded Processors,"The continuing divergence of processor and memory speeds has led to an increasing reliance on larger caches, which have become major consumers of area and power in embedded processors.
Concurrently, intra-die and inter-die process variation at future technology nodes will cause defect-free yield to drop sharply unless mitigated. This paper focuses on an architectural technique to configure cache designs to be resilient to memory cell failures brought on by the effects of process variation. Profile-driven re-mapping of memory lines to cache lines is proposed to tolerate failures while minimizing degradation in average memory access time (AMAT), thereby significantly boosting performance-based die yield beyond that which can be achieved with current techniques. For example, with 50% of the cache lines faulty, the performance drop quantified by the increase in AMAT using our technique is 12.5%, compared to a 60% increase in AMAT using existing techniques.",2006,0, 8240,Formalization and automated detection of human errors,"The contribution describes a novel approach for the detection and classification of human errors in interaction with complex dynamic systems, according to Donier's error taxonomy. The programmed implementation of the approach based on situation-operator-modeling (SOM) is already realized using software tools for high-level Petri nets (HPN). An experimental environment consisting of an arcade style game communicating with the HPN software CPN Tools is used. With CPN Tools, the interaction between a human operator and the arcade game is modeled and further mapped to an automatically generated state space. Using generically formulated state space queries, the human error 'rigidity' is detected.",2008,0, 8241,An active star topology for improving fault confinement in CAN networks,"The controller area network (CAN) is a field bus that is nowadays widespread in distributed embedded systems due to its electrical robustness, low price, and deterministic access delay. However, its use in safety-critical applications has been controversial due to dependability limitations, such as those arising from its bus topology. In particular, in a CAN bus, there are multiple components such that if any of them is faulty, a general failure of the communication system may happen. In this paper, we propose a design for an active star topology called CANcentrate. Our design solves the limitations indicated above by means of an active hub, which prevents error propagation from any of its ports to the others. Due to the specific characteristics of this hub, CANcentrate is fully compatible with existing CAN controllers. This paper compares bus and star topologies, analyzes related work, describes the CANcentrate basics, paying special attention to the mechanisms used for detecting faulty ports, and finally describes the implementation and test of a CANcentrate prototype.",2006,0, 8242,Error localization for robust video transmission,"The convergence of Internet, multimedia and mobile applications has led to an increased demand for efficient and reliable video data transmission over heterogeneous networks. Due to their coding efficiency, variable-length codes (VLC) are usually employed in the entropy coding stage of video compression standards. However, error propagation is a major problem associated with VLC. We propose the use of a class of self-synchronizing VLC (SSVLC) to achieve the dual goal of optimal coding efficiency and optimal error localization.
Performance evaluation has confirmed that the use of SSVLC provides better performance than standard VLC techniques.",2002,0, 8243,Fault Surviving Optimisation within Brick Based Storage System,"The cost of providing an organization with enterprise class storage is continuing to rise, due primarily to the use of specialised components and complex internal storage system structures. This high cost has resulted in researchers directing their efforts towards modular or 'brick' based storage, which utilises commodity components while employing autonomic principles for the management of the system, reducing the costs incurred during the lifetime of the storage system. While systems based upon this brick structure incorporate high levels of data reliability/availability, they fail to recognise the importance of ensuring that the management services of a system (such as optimisation) are also tolerant of failures. If system management services are not provided reliably, such storage systems will degrade dramatically in performance when they experience failures, especially when the node(s) responsible for providing the management fail. This paper addresses the issue of providing fault tolerant management services within brick storage systems. We present in this paper our model of a system that can ensure the integrity and availability of data stored within brick based storage while ensuring the continual execution of the system's management services irrespective of failures within the system.",2008,0, 8244,Charge sharing and interaction depth corrections in a wide energy range for small pixel pitch CZT detectors,"The CSTD project aims at developing a high resolution pixel gamma detector based on CdZnTe for Compton imaging applications. Our research group has recently been working on the design and characterization of a new pixel detector with specifications focused on high energy SPECT for medical imaging applications. The detector pitch, 0.3 mm, and its thickness, 5 mm, allow high spatial resolution and high detector efficiency to be reached. Non-ideal performance appears more strongly in small pixel pitch CdZnTe detectors, below 1 mm, affecting the spectroscopic results. In order to recover the shared charge, the customized ASIC simultaneously collects the charge in the triggering pixel and its eight neighboring pixels per event. The detector design, readout electronics, acquisition software and data analysis have been completed at CIEMAT. Data has been taken by irradiating the CdZnTe detector with high and low energy gamma-ray sources. The high energy events of the 137Cs source suffer from a great proportion of charge sharing in the neighboring pixels. Two 137Cs spectra, with and without energy correction, are shown and compared. To obtain the corrected spectra offline, the charge collected at the neighboring pixels is added to the charge collected in the trigger pixel. The corrected spectra show that the 662 keV photopeak is reconstructed. Interaction depth correction follows to improve the energy resolution by data segmentation of the 662 keV energy peak according to fifty cathode-to-pixel ratios. The computed interaction depth correction profile is the inverse of the charge collection efficiency. Energy resolution can be improved by discarding the segmented data which do not achieve an acceptable energy resolution.
Several interaction depth correction profiles at 81, 356 and 662 keV are shown and reveal a second correlation between the charge collection efficiency and the collected energy.",2010,0, 8245,Hierarchical Byzantine Fault Tolerant Secure LDAP,"The current security mechanism of the LDAP system is authentication and authorization. It can tolerate attacks occurring on the client and the Internet, and benign faults on servers such as crashes, but it cannot tolerate Byzantine (malicious) faults on servers or software errors. In this paper, a secure hierarchical Byzantine fault tolerant LDAP system is proposed. By using the state-machine replication approach and the quorum system technique, the proposed system can tolerate not only benign faults but also Byzantine faults. The proposed system is a hierarchical LDAP. In this system, an optimized key management scheme that greatly reduces the number of communication messages and a secure caching mechanism are designed, and the handling of read-only requests is also optimized. With these optimizations, the system can not only provide a much higher degree of security and reliability but also be practical.",2006,0, 8246,Effect of Domain Shape Modeling and Measurement Errors on the 2-D D-Bar Method for EIT,"The D-bar algorithm based on Nachman's 2-D global uniqueness proof for the inverse conductivity problem (Nachman, 1996) is implemented on a chest-shaped domain. The scattering transform is computed on this chest-shaped domain using trigonometric and adjacent current patterns, and the complete electrode model for the forward problem is computed with the finite element method in order to obtain simulated voltage measurements. The robustness and effectiveness of the method are demonstrated on a simulated chest with errors in input currents, output voltages, electrode placement, and domain modeling.",2009,0, 8247,A Design Approach for Soft Error Protection in Real-Time Embedded Systems,"The decreasing line widths employed in semiconductor technologies mean that soft errors are an increasing problem in modern system on a chip designs. Approaches adopted so far have focused on recovery after detection. In real-time systems, though, that can easily lead to missed deadlines. This paper proposes a preventative approach: specifically, a design methodology that uses metrics in design space exploration to highlight where in the structure of the system's model, and at what point in its behaviour, protection is needed against soft errors. The approach does not eliminate the impact of soft errors completely, but aims to significantly reduce it.",2008,0, 8248,An Agent Based Fault Diagnosis Support System and Its Application,"The efficiency and effectiveness of fault diagnosis are increasingly important for keeping airplanes flying on schedule and for flight safety. However, fault diagnosis within an airplane is a complex and time-consuming task. In this paper, an agent based fault diagnosis support system (AFDSS) is proposed to support the ground crew in the process of airplane fault diagnosis. The AFDSS consists of four kinds of agents: the management agent, interface agent, diagnosis agent and data agent. The management agent serves as an agent name server and keeps all agents' information, such as name, location and capabilities. The interface agent is the interface of the AFDSS. Fault diagnosis users and experts have their own interface agents to interact with the AFDSS. The diagnosis agent encapsulates a diagnosis method. The data agent gets the required signals for fault diagnosis.
All agents have the capabilities of communication and cooperation with each other. We study the AFDSS architecture, and discuss the development technology of the AFDSS",2006,0, 8249,"A comparative study of different atmospheric correction algorithms over an area with complex geomorphology in Western Peloponnese, Greece","The efficiency of different atmospheric correction methods is studied, using a Landsat 5 TM image of an area with complex physiography and geomorphology. Taking into account the optical qualitative characteristics of the processed images, the statistical features of their histograms and their spatial frequency content, it is concluded that, for the area under study, the technique of the absolute atmospheric correction with the parameters of a maritime atmosphere gives the best results.",2002,0, 8250,Data Mining for Detecting Errors in Dictation Speech Recognition,"The efficiency promised by a dictation speech recognition (DSR) system is lessened by the need for correcting recognition errors. Error detection is the precursor of error correction. Developing effective techniques for error detection can thus lead to improved error correction. Current research on error detection has focused mainly on transcription and/or domain-specific speech. Error detection in DSR has been studied less. We propose data mining models for detecting errors in DSR. Instead of relying on internal parameters from DSR systems, we propose a loosely coupled approach to error detection based on features extracted from the DSR output. The features mainly came from two sources: confidence scores and linguistic parsing. Link grammar was innovatively applied to error detection. Three data mining techniques, including Naïve Bayes, neural networks, and Support Vector Machines (SVMs), were evaluated on 5M DSR corpora. The experimental results showed that significant performance was achieved in that F-measures for error detection ranged from 55.3% to 62.5%. This study provided insights into the merit of different data-mining techniques and different types of features in error detection.",2005,0, 8251,Coupling Correction of a Circularly Polarizing Undulator at the Advanced Photon Source,"The electromagnetic circularly polarizing undulator (CPU) installed at the Advanced Photon Source (APS) storage ring produces skew quadrupole field errors, which were initially corrected by a small skew quadrupole magnet at one end of the device. Because the storage ring is operated at 1% coupling or less, a correction not located at the source inside the CPU is insufficient, as we have confirmed in simulation. Adding a skew coil at the other end of the CPU allows us to make a complete correction of the coupling source in the undulator. Correction set points are determined by APS's general optimizing software with the vertical beam size of an x-ray pinhole image as a readback.",2005,0, 8252,Sandra - A New Concept for Management of Fault Isolation in Aircraft Systems,"The embedded Fault Isolation functionality in the Saab JAS39 Gripen aircraft has been designed to accurately and reliably provide the technician with proposed maintenance procedures. A previously identified drawback and built-in limitation has been the significant lead time for Fault Isolation functional changes based on aircraft operational statistics and line experience.
With the Fault Isolation executing as compiled source code, changes and corrections require adaptation of the regular onboard systems computer software and careful planning of code and documentation releases, implying not only significant delays, but also high costs for necessary updates. The ""Sandra"" project aims to further refine, and to introduce a state-of-the-art, fault isolation maintenance concept for the Saab JAS39 Gripen aircraft. Based on an easy-to-use PC based graphical tool, Fault Isolation on dedicated aircraft monitoring and safety check result data is specified. Output in the form of design documentation artifacts, such as flowcharts and technical publications, is generated. The contained Fault Isolation object data is updated in parallel with the regular onboard computer software development process and the corresponding Loadable Data File will be delivered when convenient. The PC application constitutes the maintenance engineer's primary Fault Isolation design tool. The tool enables the maintenance engineer to select dedicated settings via a graphical user interface and use logical expressions to propose detailed and specific maintenance actions to be performed by the aircraft technician. The tool is capable of verifying a complete set of design documents towards the content of a generated loadable file. Thus, a generated output file with a minimum of additional verification can be delivered to be loaded into the aircraft. This new approach implies that the lead time for a Fault Isolation functional change can be reduced by as much as 80 %. The cost for the corresponding functional change will decrease by more than 50 %.",2007,0, 8253,An innovative fault injection method in embedded systems via background debug mode,"The embedded systems usage in different applications has become prevalent in recent years. These systems include a wide range of equipment from cell phones to medical instruments, and consist of hardware and software. In many embedded systems, fault occurrence can lead to serious dangers in system behavior (for example in satellites), so we try to increase the fault tolerance of these systems. We therefore need mechanisms that increase the robustness and reliability of such systems. These objectives make on-line testing a great concern. It is not important at which level these mechanisms work (hardware level, software level or firmware). The major concern is how well these systems can provide debugging, test and verification features for the user regardless of their implementation levels. The Background Debug Module is a real time tool for these features. In this paper we apply an innovative way to use the BDM tool for fault injection in an embedded system.",2009,0, 8254,Embedded Implementation of a SIP Server Gateway with Forward Error Correction to a Mobile Network,"The emergence of Voice over Internet Protocol (VoIP) services requires interconnection between different technologies. In this paper we present an embedded Session Initiation Protocol (SIP) Proxy Gateway (GW) to a Mobile Network with Forward Error Correction (FEC). Our server registers, locates and forwards the calls of an end user, providing intelligent routing for user tracking. In addition, FEC along with a Markov-Chain loss model has been integrated into the server to perform a range of tests.
Results prove that under high packet loss rates the embedded SIP server GW improves the speech quality.",2010,0, 8255,3D Video communication scheme for error prone environments based on motion vector sharing,"The emergence of three dimensional (3D) video applications based on Depth Image Based Rendering (DIBR) has brought about new dimensions to the video transmission problem, due to the need to transmit additional depth information to the receiver. Until the transmission problem of 3D video is adequately addressed, consumer applications based on 3D video will not gain much popularity. Exploiting the unique correlations that exist between the color images and their corresponding depth images will lead to more error resilient video encoding schemes for 3D video. In this paper we present an error resilient 3D video communication scheme that exploits the correlation of motion vectors in color and depth video streams. The presented method achieves up to 0.8 dB gain for color sequences and up to 0.7 dB gain for depth sequences over error prone communication channels.",2010,0, 8256,Error resilience schemes of H.264/AVC for 3G conversational video services,"The emergence of the third generation mobile system (3G) makes conversational video transmission in wireless environments possible, and the latest 3GPP/3GPP2 standards suggest 3G terminals support H.264/AVC. Due to the high bit error rate in wireless environments, error resilience schemes are necessary for 3G terminals. Moreover, according to 3GPP/3GPP2, 3G terminals support only part of the error resilience tools of H.264/AVC. This paper lists various error resilience tools and analyzes their usability in the 3G environment, and at the same time, some error resilience schemes are proposed. The performance of these error resilience schemes is tested using offline common test conditions. Experiments show that, in the 3G environment, encoding with simple FMO mode and extra intra block refreshing can achieve the best error correction ability, because this method can make full use of spatial correlation for intra prediction and reduce the required bit rate.",2005,0, 8257,Failure diagnosis of discrete event systems: the case of intermittent faults,"The diagnosis of ""intermittent"" faults in dynamic systems modeled as discrete event systems is considered. In many systems, faulty behavior often occurs intermittently, with fault events followed by corresponding ""reset"" events for these faults, followed by new occurrences of fault events, and so forth. Since these events are usually unobservable, it is necessary to develop diagnostic methodologies for intermittent faults. This paper addresses this issue by: (1) proposing a modeling methodology for discrete event systems with intermittent faults; (2) introducing new notions of diagnosability associated with fault and reset events; and (3) developing necessary and sufficient conditions, in terms of the system model and the set of observable events, for these notions of diagnosability. The associated necessary and sufficient conditions are based upon the technique of ""diagnosers"" introduced in earlier work, albeit the structure of the diagnoser needs to be enhanced to capture the dynamic nature of faults in the system model. The diagnosability conditions are verifiable in polynomial time in the number of states of the diagnoser.",2002,0, 8258,Fault diagnosis for a delta-sigma converter by a neural network,The diagnosis of faults in a first order ΔΣ-converter is described.
The circuit behaviour of fault-free circuits and circuits containing single faults was simulated and characterized by the output bitstream patterns. The latter were compared with those of the ideal fault-free circuit. A Simplified Fuzzy ARTMAP was trained with metrics derived from the bitstreams and their assigned class. A diagnostic accuracy of 93% was achieved using just two of the metrics. The technique might be useful for the diagnosis of other circuits.,2004,0, 8259,Neural network approach to diagnose phase shifter faults of antenna arrays,"The diagnosis of a faulty phase shifter in a uniform linear phased array antenna using a new method is presented. For parallel feeding of antenna elements, each element has a separate phase shifter. The phase shifter for any particular element may fail due to failure in the drive electronics. The failures of the phase shifter are called phase shifter faults. In this work, an artificial neural network approach is adopted to diagnose phase shifter faults. A linear array of 21 elements with uniform spacing and uniform excitation with a progressive phase shift of A is considered. A feed forward back propagation algorithm is used to train a neural network with a deviation radiation pattern, which is the difference between the measured radiation pattern of the array with normal phase shifters and the degraded radiation pattern of the array with a faulty phase shifter. The network thus trained predicted the number of the antenna element with the faulty phase shifter with a high success rate. This is illustrated in a confusion matrix.",2006,0, 8260,On the difference of two sums of independent generalized gamma random variables with applications to error performance analysis and outage probability evaluation,"The difference D of two sums of independent generalized gamma random variables is considered. The cumulative distribution function F_D(x) of D could be obtained directly from its PDF. Of particular interest to us is the value of F_D(0), which could be used to derive the error probability and outage probability of a communication system. In the error probability analysis for a variety of communication systems, the probability of error is given by P[D < 0], where D is the decision variable which is a Hermitian quadratic form in complex-valued Gaussian random variables. As another application, we consider a cellular mobile radio system in which the desired and interfering signals are received over independent slowly-varying generalized gamma fading paths and the desired signal is independent of the interfering signals. The outage probability is defined as the probability that the carrier-to-interference power ratio falls below a predetermined threshold. The received desired or interfering signals need not be identically distributed and are not limited to be Rayleigh, Ricean, or Nakagami distributed; rather, each is allowed to be generalized gamma distributed.",2003,0, 8261,FIR filter design over discrete coefficients and least square error,"The difference routing digital filter (DRDF) consists of an FIR filter followed by a first-order integrator. This structure with power-of-two coefficients has been studied as a means of achieving low complexity, high sampling rate filters which can be implemented efficiently in hardware. The optimisation of the coefficients has previously been based on a time-domain least-squares error criterion.
A new design method is proposed that includes a frequency-domain least-squares criterion with arbitrary frequency weighting and an improved method for handling quantisation of the filter coefficients. Simulation studies show that the new approach yields an improvement of up to 7 dB over existing methods and that oversampling can be used to improve performance",2000,0, 8262,Accurate algorithm for analysis of surface errors in reflector antennas and calculation of gain loss,"The distortion of the antenna reflector can obviously degrade antenna performance at higher working frequency bands. A new algorithm for accurately analyzing distorted reflectors and computing the axial, normal and radial deviations of the distorted surface is developed. The analysis method, with simulation values only one third of the analysis results of the ANSYS software, is verified by calculating surface errors and gain losses of a 7.3-m parabolic antenna under different working conditions and comparing the results of the accurate algorithm and the ANSYS software method. Its application in both large space and ground antennas will greatly improve design efficiency, reduce cost, and also provide detailed and precise distortion information for electrical designers.",2005,0, 8263,Soft Error Tolerant Carry-Select Adders Implemented into Altera FPGAs,"The drastic shrink in transistor dimensions is making circuits more susceptible to radiation-induced soft errors. While single-event upsets are beginning to be a concern for electronic systems fabricated with nanometer CMOS technology at sea level, single-event transients (SETs) are also expected to be a serious problem for the upcoming technologies. Thanks to their high logic density and fast turnaround time, FPGAs are currently the main fabric used to implement electronic systems. However, to provide high logic density, FPGA devices are also fabricated with state-of-the-art CMOS technology and thus are also susceptible to soft errors. This paper presents a novel technique to protect carry-select adders against SETs. The technique is based on triple module redundancy (TMR) and explores the inherent duplication existing in carry-select adders to reduce resource overhead.",2007,0, 8264,Model-Based Fault Detection and Diagnosis System for NASA Mars Subsurface Drill Prototype,"The Drilling Automation for Mars Environment (DAME) project, led by NASA Ames Research Center, is aimed at developing a lightweight, low-power drill prototype that can be mounted on a Mars lander and be capable of drilling down several meters below the Mars surface for conducting geology and astrobiology research. The DAME drill system incorporates a large degree of autonomy, from quick diagnosis of system state and fault conditions to taking the appropriate recovery actions, while also striving to achieve as many of the operational objectives as possible.",2007,0, 8265,Automated methods for atmospheric correction and fusion of multispectral satellite data for national monitoring,"The Earth Observation for Sustainable Development of Canada's forests (EOSD) project monitors Canada's forests from space. Canada contains ten percent of the world's forests. Initial EOSD products are land cover, forest change, forest biomass, and automated methods. There are more than 500 LANDSAT TM or ETM+ scenes required for a single coverage of Canada's forests. Multi-temporal analysis using satellite data requires automation for conversion of these data to common units of exoatmospheric radiance or ground reflectance.
During the next ten years the EOSD project will use a variety of Landsat optical and Radarsat sensors. A diverse set of ancillary and satellite data formats exists, which requires the development of adaptable data ingest and processing streams. Legacy LANDSAT TM and ETM+ data are available in a number of different formats from several national and US suppliers. In this paper, we present an automated system for managing processing streams for calibration and atmospheric correction of LANDSAT TM and ETM+ data to create data sets ready to analyze for EOSD products. Using known forest attributes from GIS data and field measurements, we validated our results of studies undertaken to assess spectral signal variability using both at-sensor radiance and ground reflectance for LANDSAT TM and ETM+ for a test site on Vancouver Island, BC. We present a strategy for correcting and fusing multi-source and multi-temporal satellite data for meeting EOSD requirements.",2002,0, 8266,Componentization of Fault Tolerance Software for Fine-Grain Adaptation,"The evolution of systems during their operational lifetime is becoming a core assumption of the design. This is the case for resource constrained embedded systems. Such an evolution may be driven by the environment or the execution context. The adequacy of the service delivery with respect to the current operational conditions depends on the ability to tune the software configuration accordingly. This is true for application services, but also for dependability services, in particular the fault tolerance software. This paper presents a design of fault tolerance software for its runtime adaptation. This design relies on a reflective framework and open component based software engineering (CBSE) techniques. We demonstrate in this paper the feasibility of adapting componentized fault tolerance at a meta-level of the application.",2008,0, 8267,Research of fault diagnosis expert system for hydraulic components,"The failure knowledge of hydraulic components is described based on the object-oriented method in this paper, and a failure knowledge database is established. A fault reasoning strategy is defined, and a detailed explanation of the failure reasoning process is given according to the characteristics of the directed tree and the failure knowledge. A fault diagnosis expert system for hydraulic components is developed, which is structured based on the VB language and SQL Server database technology. The system can diagnose failures quickly and correctly, and provides a viable method of troubleshooting.",2010,0, 8268,A fault tolerance infrastructure for dependable computing with high-performance COTS components,"The failure rates of current COTS processors have dropped to 100 FITs (failures per 10^9 hours), indicating a potential MTTF of over 1100 years. However, our recent study of Intel P6 family processors has shown that they have very limited error detection and recovery capabilities and contain numerous design faults (errata). Other limitations are susceptibility to transient faults and uncertainty about wearout that could increase the failure rate in time. Because of these limitations, an external fault tolerance infrastructure is needed to assure the dependability of a system with such COTS components. The paper describes a fault tolerance infrastructure, a system of fault tolerance functions, that makes possible the use of low-coverage COTS processors in a fault-tolerant, self-repairing system.
The custom hardware supports transient recovery, design fault tolerance, and self-repair by sparing and replacement. Fault tolerance functions are implemented by four types of hardware modules of low complexity that are themselves fault-tolerant. High error detection coverage, including design faults, is attained by diversity and replication",2000,0, 8269,Multiple-antenna-aided OFDM employing genetic-algorithm-assisted minimum bit error rate multiuser detection,"The family of minimum bit error rate (MBER) multiuser detectors (MUD) is capable of outperforming the classic minimum mean-squared error (MMSE) MUD in terms of the achievable bit-error rate (BER) owing to directly minimizing the BER cost function. In this paper, we will invoke genetic algorithms (GAs) for finding the optimum weight vectors of the MBER MUD in the context of multiple-antenna-aided multiuser orthogonal frequency division multiplexing (OFDM). We will also show that the MBER MUD is capable of supporting more users than the number of receiver antennas available, while outperforming the MMSE MUD.",2005,0, 8270,A study of the error controllability of the fast inhomogeneous plane wave algorithm for a 2-D free space case,"The fast inhomogeneous plane wave algorithm (FIPWA) is another approach to the diagonal factorization of the Green's function, which is represented by an infinite integration. We discuss the error analysis in terms of the truncation of the integration path and the indispensable extrapolation process. Finally, this algorithm is shown to be error controllable.",2003,0, 8271,Research and Application of Data Mining in Fault Diagnosis for Big Machines,"The fault characteristics of big equipment are complex and difficult to distinguish. This paper presents a new method elaborating on selecting more interrelated vibration parameters as original characteristic vectors, and on how to mine features from a fault database and then analyze the running conditions of rotating parts of big machines by applying fuzzy clustering. The theories of establishing the models, the specific algorithms and steps are given. An applied example showed that the method was correct and the result of the fault diagnosis was also proved to be reliable and accurate.",2007,0, 8272,Considering Fault Correction Lag in Software Reliability Modeling,"The fault correction process is very important in software testing, and it has been incorporated into some software reliability growth models (SRGMs). In these models, time-delay functions are often used to describe the dependency of the fault detection and correction processes. In this paper, a more direct variable, the ""correction lag"", which is defined as the difference between the detected and corrected fault numbers, is addressed to characterize the dependency of the two processes. We investigate the correction lag and find that it appears bell-shaped. Therefore, we adopt the Gamma function to describe the correction lag. Based on this function, a new SRGM which includes the fault correction process is proposed. The experimental results show that the new model gives better fit and prediction than other models.",2008,0, 8273,Research on Fault Diagnosis in Analog Circuit Based on Wavelet-Neural Network,"The fault diagnosis method based on a wavelet-neural network in analog circuits is presented. The problems of feature extraction in analog circuits, data pre-processing, training and testing of the network, and fault pattern classification are all discussed.
Fault features were extracted by circuit simulation; after the data were preprocessed by the wavelet-neural network, they were constructed into sample sets. The training and testing samples were used to train and test the neural network, respectively, and the uniform pattern samples were used to diagnose faults, so fault diagnosis in analog circuits was realized effectively. An illustration validated this method",2006,0, 8274,Fault Location in Unbalanced DG Systems using the Positive Sequence Apparent Impedance,"The fault location techniques for power distribution systems (PDS) in use nowadays assume that the system has a radial power flow. Since new technologies in development, such as distributed generation (DG), change this characteristic, it is necessary to adjust the already existing methods for fault location, since they prove to be inefficient. Moreover, the unbalance between phases due to different loading at laterals is another issue that interferes with the fault location methodologies. In this paper, a new fault location method based on the positive sequence apparent impedance is presented. Computational simulations were made and the method was tested in two systems and compared with other existing fault location techniques in order to validate it. The basic characteristics of the method, the new algorithm and a variety of case studies are presented in the paper in order to illustrate its efficiency in unbalanced distribution systems with DG",2006,0, 8275,A novel gray clustering filtering algorithms for identifying the false alert in aircraft long-distance fault diagnosis,"The fault report is downloaded from the aircraft with ACARS for line maintenance. This approach is currently receiving wide attention. However, false alerts often occur in the fault report and reduce the maintenance efficiency. Aimed at this problem, a gray clustering filtering algorithm is set up based on gray clustering and filter theory. The algorithm can identify the false alerts in the fault report effectively.",2007,0, 8276,An infrastructure for adaptive fault tolerance on FT-CORBA,"The fault tolerance provided by FT-CORBA is basically static, that is, once the fault tolerance properties of a group of replicated processes are defined, they cannot be modified at runtime. Support for dynamic reconfiguration of the replication would be highly advantageous since it would allow the implementation of mechanisms for adaptive fault tolerance, enabling FT-CORBA to adapt to the changes that can occur in the execution environment. In this paper, we propose a set of extensions to the FT-CORBA infrastructure in the form of interfaces and object service implementations, enabling it to support dynamic reconfiguration of the replication",2006,0, 8277,An Error-Resilient Scheme for Video Transmission,The framework of a systematic lossy error protection for error-resilient video transmission is discussed and an unequal error protection scheme based on systematic lossy error protection is proposed. The systematic portion of the transmission consists of a video bitstream transmitted without channel coding over an error-prone channel. Error-resilience is achieved by transmitting a supplementary bitstream generated by Wyner-Ziv encoding of the video signal. The macroblock information and the transform coefficients are protected separately.
The simulation results show that this scheme can provide gracefully degrading video quality over a wide range of symbol error rates compared with conventional FEC.",2009,0, 8278,2D Frequency Selective Extrapolation for Spatial Error Concealment in H.264/AVC Video Coding,"The frequency selective extrapolation extends an image signal beyond a limited number of known samples. This problem arises in image and video communication in error prone environments where transmission errors may lead to data losses. In order to estimate the lost image areas, the missing pixels are extrapolated from the available correctly received surrounding area, which is approximated by a weighted linear combination of basis functions. In this contribution, we integrate the frequency selective extrapolation into the H.264/AVC coder as a spatial concealment method. The decoder reference software uses spatial concealment only for I frames. Therefore, we investigate the performance of our concealment scheme for I frames and its impact on following P frames caused by error propagation due to predictive coding. Further, we compare the performance for coded video sequences in TV quality against the non-normative concealment feature of the decoder reference software. The investigations are done for slice patterns causing chequerboard and raster scan losses enabled by flexible macroblock ordering (FMO).",2006,0, 8279,Multi-converter operation of variable speed wind turbine driving permanent magnet synchronous generator during network fault,"The full or partial rating frequency converters, in general, are used widely for the operation of variable speed wind turbine (VSWT) driven wind generators. Among the variable speed wind generators, the permanent magnet synchronous generator (PMSG), which uses a full rating frequency converter for grid interfacing, is drawing much attention nowadays due to some of its salient features. However, considering factors such as higher reliability, higher efficiency, and lower harmonics, the multi-converter topology is preferable. This paper proposes a detailed control strategy of multiple parallel connected frequency converter units integrated with a VSWT driving a PMSG to augment the transient performance during a network disturbance.",2009,0, 8280,"Invited talk: Self-aware wireless communication and signal processing systems: Real-time adaptation for error resilience, low power and performance","The functions required of real-time systems in the future, such as the ability to see or hear, understand and react to external stimulus and the environment in much the same way that humans do, will force underlying communication and computing platforms to operate across very large changes in instantaneous workload. Supporting such workload variations on resource-constrained mobile systems will require new design approaches that cut across the traditional boundaries between the processing, mixed-signal, wireless, and sensor/actuator (physical) domains, as well as the layers of each domain, i.e. circuit, architecture, algorithm, and application. Due to components fabricated in aggressive nanoscale technologies, such cyber-physical systems must deal with the impact of manufacturing process variations and component failures as well as different environmental conditions (temperature, noise environment) while operating in the most reliable manner with respect to mission goals.
An integrated approach to designing such systems that utilizes real-time, cross-domain control and adaptation to operate the system at an optimal point that minimizes power consumption while meeting error resilience and performance constraints across different workloads and operating environments is proposed. The core strategy relies on the design and use of tunable algorithms, tunable architectures and tunable circuits that have the capability to trade off power vs. performance. Adaptation is performed by sensing the operating environment and workload using hardware and software sensors and dynamically tuning the system via an optimal control law. A critical observation is that this control law depends on the health of the system when power minimization is a key objective. The core ideas are demonstrated using a video surveillance system as a test case.",2010,0, 8281,Comparison of fault-tolerance techniques for massively defective fine- and coarse-grained nanochips,"The fundamental question addressed in this paper is how to maintain the operational dependability of future chips built from forthcoming nano- (or subnano-) technologies characterized by the reduction of component dimensions, the increase of atomic fluctuations and the massive occurrence of physical defects. We focus on fault tolerance at the architectural level, and especially on fault-tolerance approaches which are based on chip self-diagnosis and self-reconfiguration. We study test and reconfiguration methodologies in massively defective nanoscale devices, either in fine-granularity field-programmable devices or in coarse-granularity multi-core arrays. In particular, we address the important question of up to which point future chips could have self-organizing fault-tolerance mechanisms to autonomously ensure their own dependable operation. In the case of FPGAs, we present known fault tolerant approaches and discuss their limitations in future nanoscale devices. In the case of multicore arrays, we show that such properties as self-diagnosis, self-isolation of faulty elements and self-reorganization of communication routes are possible.",2009,0, 8282,Mutual Fault-tolerant and Standby SCADA System Based on MAS for Multi-area Centralized Control Centers,The general policies to construct the mutual fault-tolerant and standby SCADA system based on multi-agent technology for multi-area centralized control centers are presented in this paper in order to raise the safety and operational reliability of the power grid without additional equipment investment. The economic efficiency and feasibility of the system construction based on these policies are analyzed. The architecture of the MAS and the function design of the agents are introduced in detail, and the specific implementation scheme and the corresponding key technologies are elucidated. The data and application fault-tolerance of the SCADA system is realized to guarantee the reliability and continuity of power grid operation.
When attempting to determine the required ratings for a GCB, it is sometimes difficult to check the capability of a particular model of GCB with respect to the characteristics of the project-specific HV system, connected generator, and the associated MV distribution system. The difficulty increases with the emergence of large generators contributing high symmetrical fault current with a very high degree of asymmetry. Many of today's large MVA generators can produce a maximum degree of asymmetry up to 130% for generator fed faults. Currently there are no guidelines available to the end user to determine the suitability of the GCB tested at a certain degree of asymmetry when being applied to a system with a higher degree of asymmetry. This paper provides a discussion on selection of a GCB to interrupt generator-source faults with a high degree of asymmetry. Suitability for a given application may have to be confirmed by the GCB manufacturer.",2008,0, 8284,Detailed radiation fault modeling of the Remote Exploration and Experimentation (REE) first generation testbed architecture,"The goal of the NASA HPCC Remote Exploration and Experimentation (REE) Project is to transfer commercial supercomputing technology into space. The project will use state of the art, low-power, non-radiation-hardened, COTS hardware chips and COTS software to the maximum extent possible, and will rely on software-implemented fault tolerance to provide the required levels of availability and reliability. We outline the methodology used to develop a detailed radiation fault model for the REE Testbed architecture. The model addresses the effects of energetic protons and heavy ions which cause single event upset and single event multiple upset events in digital logic devices and which are expected to be the primary fault generation mechanism. Unlike previous modeling efforts, this model will address fault rates and types in computer subsystems at a sufficiently fine level of granularity (i.e., the register level) that specific software and operational errors can be derived. We present the current state of the model, model verification activities and results to date, and plans for the future. Finally, we explain the methodology by which this model will be used to derive application-level error effects sets. These error effects sets will be used in conjunction with our Testbed fault injection capabilities and our applications' mission scenarios to replicate the predicted fault environment on our suite of onboard applications",2000,0, 8285,The Effects of Over and Under Sampling on Fault-prone Module Detection,"The goal of this paper is to improve the prediction performance of fault-prone module prediction models (fault-proneness models) by employing over/under sampling methods, which are preprocessing procedures for a fit dataset. The sampling methods are expected to improve prediction performance when the fit dataset is unbalanced, i.e. there exists a large difference between the number of fault-prone modules and not-fault-prone modules. So far, there has been no research reporting the effects of applying sampling methods to fault-proneness models. In this paper, we experimentally evaluated the effects of four sampling methods (random over sampling, synthetic minority over sampling, random under sampling and one-sided selection) applied to four fault-proneness models (linear discriminant analysis, logistic regression analysis, neural network and classification tree) by using two module sets of industry legacy software.
All four sampling methods improved the prediction performance of the linear and logistic models, while the neural network and classification tree models did not benefit from the sampling methods. The improvements of F1-values in the linear and logistic models were 0.078 at minimum, 0.224 at maximum and 0.121 at the mean.",2007,0, 8286,Early errors detection in parallel programs,"The goal of this paper is to present the methodology and tools for error detection in parallel programs at the code writing stage. Applying static code analysis methodology allows developers to significantly reduce the error correction costs at the testing and support stages. The error diagnostics in multithread applications will be demonstrated with the examples of the PC-Lint, VivaMP, and Intel C++ Parallel Lint analyzers. The paper will be useful for developers who create parallel Windows applications in C/C++ languages.",2009,0, 8287,Using the fault disturbance recorder as an operating aid for control room operators at the national load dispatch center of peninsular Malaysia,"The greatest challenge facing power system operators is during a system disturbance. The operators must always be prepared to deal with unforeseen events and calamities. In order to assist the operators, the NLDC (National Load Dispatch Center) is equipped with powerful software tools, and a fault disturbance recorder was installed in the control room to serve as an additional operating aid to the operators. This paper primarily focuses on the utilization of the disturbance recorder and how it has benefited the operators in helping them to arrive at some operational decisions. This paper also highlights some of the disturbance signatures that were obtained from observations of faults in the system using the disturbance recorder and how this has helped the operator in his decision making process. The required skills and training for the operators in relation to the disturbance recorder are also discussed",2001,0, 8288,Production Evaluation of Automated Reticle Defect Printability Prediction Application,"The growing complexity of reticles and continual tightening of defect specifications cause the reticle defect disposition function to become increasingly difficult. No longer can defect specifications be distilled to a single number, nor can past simple classification rules be employed, due to the effects of MEEF on actual printing behavior. The mask maker now requires lithography-based rules and capabilities for making these go/no-go decisions at the reticle inspection step. We have evaluated an automated system that predicts the lithographic significance of reticle defects using PROLITH(TM) technology. This printability prediction tool was evaluated and tested in a production environment using both standard test reticles and production samples in an advanced reticle manufacturing environment. Reference measurements on Zeiss AIMS(TM) systems were used to assess the accuracy of predicted results. The application, called the Automated Mask Defect Disposition System, or AMDD, models defective and non-defective test and reference images generated by a high-resolution inspection system. The results were calculated according to the wafer exposure conditions given at setup such that the reticle could be judged for its 'fitness-for-use' from a lithographic standpoint rather than from a simple physical measurement of the film materials.
We present the methods and empirical results comparing 1D and 2D Intensity Difference Metrics (IDMs) with respect to AIMS and discuss the results of usability and productivity studies as they apply to manufacturing environments.",2007,0, 8289,A Predictive Method for Providing Fault Tolerance in Multi-agent Systems,"The growing importance of multi-agent applications and the need for a higher quality of service in these systems justify the increasing interest in fault-tolerant multi-agent systems. In this article, we propose an original method for providing dependability in multi-agent systems through replication. Our method is different from other works because our research focuses on building an automatic, adaptive and predictive replication policy where critical agents are replicated to avoid failures. This policy is determined by taking into account the criticality of the plans of the agents, which contain the collective and individual behaviors of the agents in the application. The set of replication strategies applied at a given moment to an agent is then fine-tuned gradually by the replication system so as to reflect the dynamicity of the multi-agent system. We report on experiments assessing the efficiency of our approach.",2006,0, 8290,The design of digital fault recorder module based on S12XD,"The hardware architecture and software design of the digital fault recorder module are described in this paper. The enhanced MSCAN module and the XGATE co-processor of the S12XD MCU are used to receive and manage the operating data through a high-speed CAN bus. F-RAM is used as non-volatile data storage memory. The software, especially the XGATE part and the fault data storage strategy, is discussed in detail. This module has been applied in distributed generator excitation equipment.",2009,0, 8291,Evaluation of H.264/AVC error resilience in HD IPTV applications,"The delivery of High Definition Television (HDTV) over IP networks, namely HD IPTV, has emerged as one of the major distribution and access techniques for broadband multimedia services. IPTV adopts H.264/AVC as its coding standard due to its high video compression efficiency as well as powerful error resilience features. This paper presents studies on some of these features applied to HD IPTV applications. A test system is deployed to simulate the delivery of HD video over a DSL based IPTV network. The effects of the slicing and Instantaneous Decoding Refreshing (IDR) error resilience features on video quality are examined under both non-impaired channel conditions and channel conditions impaired with burst noise. Based on the acquired results, an optimal slice size was obtained for HD video transmission over an impaired channel. The quality of experience related to the IDR interval in combating error propagation was also characterized.",2010,0, 8292,Investigation of Fault Propagation in Encryption of Satellite Images Using the AES Algorithm,"The demand to protect the sensitive and valuable data transmitted from satellites to ground has increased, and hence the need to use encryption on-board. The advanced encryption standard (AES), which is a very popular choice in terrestrial communications, is slowly emerging as the preferred option in the aerospace industry, including satellites. AES is a block cipher, which encrypts one block of fixed length data at a time. Several modes of operation have been defined to encrypt multiple blocks of data. This paper addresses the encryption of satellite imaging data using the five AES modes: ECB, CBC, CFB, OFB and CTR.
This paper describes the sources of faults and estimates the amount of damage caused to the data. The encrypted satellite data can get corrupted before reaching the ground station due to various faults. One major source of faults is the harsh radiation environment. Single event upset (SEU) faults can occur on-board during encryption due to radiation. A detailed analysis of the effect of SEUs on the imaging data during on-board encryption using the modes of AES is carried out. Faults in the data can also occur during transmission to the ground station due to noisy transmission channels. In this paper the impact of these faults on the data is discussed and compared for all the five modes of AES",2006,0, 8293,Fault tolerance in systems design in VLSI using data compression under constraints of failure probabilities,"The design of space-efficient support hardware for built-in self-testing (BIST) is of critical importance in the design and manufacture of VLSI circuits. This paper reports new space compression techniques which facilitate designing such circuits using compact test sets, with the primary objective of minimizing the storage requirements for the circuit under test (CUT) while maintaining the fault coverage information. The compaction techniques utilize the concepts of Hamming distance, sequence weights, and derived sequences in conjunction with the probabilities of error occurrence in the selection of specific gates for merger of a pair of output bit streams from the CUT. The outputs of the space compactor may eventually be fed into a time compactor (viz. syndrome counter) to derive the CUT signatures. The proposed techniques guarantee simple design with a very high fault coverage for single stuck-line faults, with low CPU simulation time, and acceptable area overhead. Design algorithms are proposed in the paper, and the simplicity and ease of their implementations are demonstrated with numerous examples. Specifically, extensive simulation runs on ISCAS 85 combinational benchmark circuits with FSIM, ATALANTA, and COMPACTEST programs confirm the usefulness of the suggested approaches",2001,0, 8294,Data-driven fault management within a distributed object-oriented OAM&P framework,"The design of the Universal Mobile Telecommunications System (UMTS) terrestrial radio access network (UTRAN) radio network controller (RNC) required the integration of hardware and software from multiple vendors and development organizations across a multinational project. The RNC architects, drawing on their experience with previous object-oriented projects, designed an abstract, distributed, object-oriented operations, administration, maintenance, and provisioning (OAM&P) system. A key component is the innovative data-driven fault management (FM) design. It is not just integrated with the devices being controlled; it is integrated into the entire system. FM data is a central project resource used to drive internal error and fault handling, operational state changes, external alarms, and generation of customer documentation. The design takes advantage of well-known commonality and variability design concepts for easy implementation and maintenance. The FM architecture makes the definition of an error, fault, alarm, and fault-handling behavior as simple as adding a row of data.
",2003,0, 8295,Detection of Broken Rotor Bar Faults and Effects of Loading in Induction Motors during Rundown,"The detection of broken rotor bar faults based on the common steady-state Fourier transform technique is known to be dependent on the loading condition and the quality of the supply. This paper attempts to minimise these issues by utilising the induced voltage in the stator windings after supply disconnection. When the supply is disconnected, the stator current rapidly drops to zero and the only source of the stator induced voltage an instant after the supply disconnection is due to currents in the rotor. The rotor currents are sensitive to broken rotor bar faults and directly affect the rundown induced voltage in the stator windings. The performance of two different broken rotor bar detection techniques, based on the Fourier transform and the wavelet transform, is investigated over a wide range of loading conditions.",2007,0, 8296,Fixing Web Sites Using Correction Strategies,"The development and the maintenance of Web sites are difficult tasks. To maintain the consistency of ever-larger, complex Web sites, Web administrators need effective mechanisms that aid them in fixing every possible inconsistency. In this paper, we present an extension of a methodology for semi-automatically repairing faulty Web sites, which we developed in a previous work. As a novel contribution, we define two correction strategies with the aim of increasing the level of automation of our repair method. Specifically, the proposed strategies minimize both the amount of information to be changed and the number of repair actions to be executed in a faulty Web site to make it correct",2006,0, 8297,The sensor of traveling-wave for fault location in power systems,"The fault-generated high frequency signals contain much information which can be used to accurately locate the fault point in power systems. For capturing the high frequency signals, two specially designed traveling-wave sensors are developed in the paper. One is the current sensor installed on the earth line of capacitive equipment (such as a CVT, capacitive CT, transformer bushing, or wall bushing) to capture the current traveling-waves flowing from the equipment to earth. The other is the voltage sensor installed at the zero sequence winding of a voltage transformer to capture the voltage traveling waves in the three phases generated by faults. A fault location system with the traveling-wave sensors is also developed. The sensors and the fault location system have been tested on a 110 kV power system. Results show that the sensors have good performance in capturing traveling-waves and the error of fault location is no more than 120 m.",2004,0, 8298,Smart energy meters for energy conservation & minimizing errors,"The monitoring of the power quality helps to lower the energy costs and to prolong the machine's life. Smart metering is a complete end-to-end solution which minimizes several errors and helps in distributing quality power. It is an energy policy for consumers to provide them a user friendly face in dealing with utility (especially electricity) bills. It provides the users a digital meter which displays the real-time power consumption in a very friendly and detailed format, and a website to check & analyze their consumption and expenses on energy, using different types of graphs and tabulated and manipulated data.
It not only comforts its users but also gives relief to the distribution company by minimizing power losses using an automatic power factor maintenance technique, and by providing anti-power-theft capability. It also gives control of power distribution, through which the distribution company can limit users from exceeding their power usage in specific time periods.",2010,0, 8299,An improved monte carlo method in fault tree analysis,"The Monte Carlo (MC) method is one of the most general ones in system reliability analysis, because it reflects the statistical nature of the problem. It is not restricted by the type of failure models of system components, allows capturing the dynamic relationship between events, and estimates the accuracy of obtained results by calculating the standard error. However, it is rarely used in Fault Tree (FT) software, because a huge number of trials is required to reach a tolerable precision if the value of the system probability is relatively small. Regrettably, this is the most important practical case, because nowadays highly reliable systems are ubiquitous. In the present paper we study several enhancements of the raw simulation method: variance reduction, parallel computing, and improvements based on simple preliminary information about the FT structure. They are efficiently developed both for static and dynamic FTs. The effectiveness and accuracy of the improved MC method is confirmed by numerous calculations on complex industrial benchmarks.",2010,0, 8300,Multi agent-based DCS with fault diagnosis,"The multi agent-based DCS (MADCS) with a fault diagnosis function is proposed for the purpose of enhancing both the system's flexibility and reliability. Through cooperation among agents, the MADCS can diagnose its own state and attempt to recover itself. Commonly, these agents are driven by time or by event, and do not have the same structure. At the same time, the general structure of the agent implementation is shown in this paper. To achieve better performance of the fault diagnosis, two models (a serial model and a parallel model) of multi-sensor data fusion are discussed and used in the MADCS.",2004,0, 8301,Fault Diagnosis of Variable Frequency Speed Regulation System Based on Information Fusion,"The multi-sensor information fusion theory has been widely used in the fault diagnosis domain. In order to improve the reliability of fault diagnosis of the variable frequency speed regulation system (VFSRS), this paper presents a new fault diagnosis method for the VFSRS based on multi-sensor information fusion. A calculating method of the basic probability assignment (BPA) for the VFSRS is given. Taking the invisible electric fault of the VFSRS as an example, this paper presents the implementation process of this fault diagnosis method in detail. The diagnosis result indicates that multi-sensor information fusion has stability and fault tolerance, and can improve the accuracy and reliability of fault diagnosis of the VFSRS effectively.",2007,0, 8302,Rating issues in fault tolerant PMSM,"The necessary reliability of a safety critical drive system is often partly achieved by using fault tolerant electrical machines. Although there are various degrees as to what types of faults, and for what duration, the machine has to tolerate, these generally include open and short circuit winding faults. There is extensive published literature on the design of fault tolerant machines as well as on control algorithms used to maintain drive operation with an incurred fault.
This paper sets out to look at the effects of various fault tolerant control methods on the losses in three and five phase surface mount Permanent Magnet (PM) machines in fault tolerant operation, in order to be able to choose the correct rating for such conditions.",2009,0, 8303,Fault diagnosis on autonomous robotic vehicles with RECOVERY: an integrated heterogeneous-knowledge approach,"The need for embedding fault diagnosis into goal-orientated autonomous robotic vehicles for increased mission robustness is described. The RECOVERY system, a method for increasing the diagnostic capability by integrating commonly available heterogeneous knowledge, is presented. Initial real-water results using the Ocean Systems Laboratory's RAUVER vehicle are given.",2001,0, 8304,SmartBitTM: bitmap to defect correlation software for yield improvement,"The need for higher yields in the wafer-manufacturing environment is pushing yield analysts to develop new techniques and tools for yield improvement. With this scope, we present SmartBitTM, a software tool that provides detailed information about yield limiters by correlating bitmap to in-line defect data in an automatic mode, expediting the yield learning task. Based on the spatial correlation of bitmap data and the information on defects coming from the in-line inspections, SmartBitTM provides a Pareto chart where yield loss sources are separated and weighted by impact. SmartBitTM also offers detailed information about the killer defects (defect class, origin, size, kill ratio and more) in a set of reports specially designed to give an overall view of the main problems affecting the fab yield. This is the key to fast and efficient yield learning. These reports are generated automatically using data from the full production of memory products, enhancing the reliability and completeness of the analysis",2000,0, 8305,Performance of dropout correction on real magnetic tape waveforms with dropouts,"The need to increase both linear and track densities in tape recording technology calls for more robustness against dropouts, the sporadic losses in signal amplitude that are a principal source of errors in tape systems. A dropout correction scheme previously introduced by the authors enables more robust bit detection by restoring the signal afflicted by the dropout event. In this paper, we present the results of using this scheme on real oversampled waveforms from an experimental tape test stand. Real-time implementation using a peak detector has been simulated in this work. Albeit at a low density (1.85 channel bits per PW50), the emphasis of this study is to identify the types of error events caused by dropouts and to demonstrate the feasibility of dropout compensation even in the case of very challenging dropouts presented by the experimental data set. It is shown that this scheme can significantly reduce the frequency of dropout induced cycle slip error events that can be as long as the data block. In other cases, where cycle slip errors do not occur, the number of errors associated with dropout events is reduced, on average, by a factor of more than 2. The sensitivity of the method to the envelope detection method is examined, and this is identified as an opportunity for significantly improving the performance of the scheme",2001,0, 8306,Unequal error protection for H.26L video transmission,"The network adaptation feature is left open in the previous video coding standards.
Thus the quality of video sequences compressed by a video codec like the H.263 and MPEG family may fall dramatically when they are carried over error-prone environments. This situation was well recognized, and network friendliness of the video codec became one of the main goals of the ongoing H.26L ITU-T video coding project. A new network abstraction layer (NAL) concept has been defined and integrated into the H.26L standard, and it deals with special properties of the transmission environments. We present the NAL concept and related matters such as data partitioning, packetization and profiles. We suggest that the data partitioning feature with unequal error protection (UEP) should be employed for mobile video telephony applications. We present experimental results to support our suggestion.",2002,0, 8307,The effect of noise estimation error in the LDPC decoding performance,"The number of quantization bits of the input signals rn needs to be optimally determined through the trade-off between the H/W complexity and the BER performance in LDPC code applications. Also, an effective means to incorporate a channel reliability Lc in log-MAP based LDPC decoding is highly required, because it has a major effect on both the complexity and performance. In this paper, so as to effectively incorporate Lc in LDPC decoding, the optimal number of quantization bits of rn is investigated through Monte-Carlo simulations, assuming that a bit-shifting approach is adopted. In addition, the effects of an incorrect estimation of the noise variance on the performance of LDPC codes are investigated. There is a confined range in which the effects of an incorrect estimation can be ignored",2004,0, 8308,"Parameterized IP Infrastructures for fault-tolerant FPGA-based systems: Development, assessment, case-study",The objective of this article is to present the technical elements of Intellectual Property Core (IP) and Infrastructure IP (IIP) parameterization for fault-tolerant FPGA-based systems (FTFS). The FTFS IIP parameter classification is discussed and some elements of the IIP parameterization technique are described. A case study of the proposed technique is illustrated for the airborne ice protection system.,2010,0, 8309,A new audio skew detection and correction algorithm,"The lack of synchronisation between a sender clock and a receiver audio clock in an audio application results in an undesirable effect known as ""audio skew"". This paper proposes and implements a new approach to detecting and correcting audio skew, focusing on the accuracy of measurements and on the algorithm's effect on the audio experience of the listener. The algorithms presented are shown to remove audio skew successfully, thus reducing delay and loss and hence improving audio quality.",2002,0, 8310,A Real Time Network Game System Based on Retransmission of N-based Game Command History for Revising Packet Errors,"The latency arising from load fluctuation in real-time network games may be overcome by using an initial delay scheme on the client before play begins; this means that stability can be maintained, and the shortcomings of packet losses and errors in UDP communications are alleviated. In this paper, we suggest a retransmission algorithm using an N-based game command history that supports an end-to-end protocol, retransmits data without alteration of existing protocols, supports multi-platform video games in wired-wireless network environments, and can work with multi-user games.
As demonstrated in the simulations, we confirm that the suggested system is more stable than existing systems.",2007,0, 8311,The studying of combined power-load forecasting by error evaluation standard based on RBF network and SVM method,"Load forecasting usually starts from a single method; prediction methods are usually improved to obtain better forecasting accuracy, but this is often confined to the application of a single method. A combination forecasting method can exploit the strengths of various methods, and its forecast accuracy is higher than that of a single forecasting method. In this paper, we use the RBF neural network prediction method and the support vector machine forecasting method. The RBF neural network prediction method has become popular in recent years; it has better generalization ability than the traditional neural network prediction method, can effectively avoid local minima, and has very good learning ability. The SVM prediction method transforms one-dimensional nonlinear prediction into a linear-space problem; it has a very precise calculation process and can meet high forecast precision requirements. The combination of the two methods draws both on the artificial memory model of prediction and on the tight nonlinear model, ultimately serving the purpose of combined forecasting. The main innovation of this paper is to assess the result of each prediction method against an error qualification standard, using the error rate to determine the combination weights; finally, satisfactory results are obtained through an empirical analysis.",2009,0, 8312,Error Prediction Based Redundancy Control for Robust Transmission of Video Over Wireless Links,"The loss of UDP/IP packets has a high impact on the quality of low-rate video streaming, since one UDP/IP packet typically represents a considerable part of a picture, which is discarded and concealed if an error is detected by a UDP checksum. In this work we propose a method that allows for the utilization of error-free parts of such UDP packets. The detection of errors is facilitated by the use of CRC (cyclic redundancy check) information of the smaller link layer packets. Resynchronization of the variable length code at the beginning of the link layer packets is provided by using side information. To keep the additional rate small and to have low distortion at the same time, we propose sending the side information depending on the prediction of the channel quality. Experimental results show a considerable improvement in video quality at the receiver.",2007,0, 8313,Fault Identification in Distribution Lines Using Intelligent Systems and Statistical Methods,"The main objective of this paper is to present the results obtained from the application of artificial neural networks and statistical tools to the automatic identification and classification of faults in electric power distribution systems. The techniques developed to treat the proposed problem use, in an integrated way, several approaches that can contribute to a successful fault detection process, aiming to carry it out in a reliable and safe way.
The compiled results from practical experiments performed on a pilot distribution feeder demonstrate that the developed techniques provide accurate results, efficiently identifying and classifying the several fault occurrences observed in the feeder",2006,0, 8314,Analysis of Stator Winding Inter-Turn Short-Circuit Faults in Induction Machines for Identification of the Faulty Phase,"The main objective of this paper is to develop and experimentally verify a method of identifying the faulty phase in a three-phase armature of an induction motor with concentric coil construction, when such an armature suffers from an inter-turn short circuit within one of its phases. This work leads to a new technique for identifying the faulty phase in concentric wound machines and estimating the associated fault severity without any requirement for additional sensors, wiring constraints, or knowledge of any other details of the machine design. The technique has been verified through several experimental test results",2006,0, 8315,Application on Virtual Instrument and Neural Networks in the Fault Diagnosis,"The main point of intelligent fault diagnosis theory is the fault mode recognition principle based on data processing methods. Addressing the problems of the traditional fault diagnosis mode, a fault diagnosis method based on the virtual instrument and neural networks is proposed. Signal collection and management based on the virtual instrument is introduced, and the basic neural network method for distinguishing faults is analyzed. For speed and accuracy, wavelet analysis is organically connected with the neural networks, and a system for fast fault feature extraction and identification is founded on the wavelet transform and the neural networks. The method of fault feature extraction based on wavelet analysis is established, the realization idea of neural-network-based fault diagnosis is put forward, and the hardware and software structure of the neural-network-based fault diagnosis is discussed. The experimental and simulation results show that fault analysis with the neural networks and wavelet analysis is feasible. The method can remarkably improve the accuracy and credibility of the fault diagnosis results, and the results are repeatable.",2009,0, 8316,Skull Mechanics Study of PI Procedure Plan for Craniosynostosis Correction Based on Finite Element Method,"The main purpose of craniosynostosis correction is to reopen cranial sutures with bone slots, in order to free the skull to transform with brain development despite the closed craniums. The intent of this paper is to analyse the relationship between the shapes of the bone slots and the skull rigidity. The finite element method is utilized to obtain the stress distribution and deformation clouds of the different surgery schemes, and then the best cranial suture bone slot shape is proposed according to the stress distribution simulation results. Methods: A congenital craniosynostosis case is selected to design the surgical treatment plan. The PI-shape craniosynostosis correction scheme is used, and the bone slots used for reconstructing the cranial suture are varied to simulate the stress distribution change as the slot shape changes. The cranial bone and endocranium models are meshed with tetrahedral elements for finite element analysis.
Since the instantaneous stress is taken into account when the slot shape changes, the viscoelastic material properties of the cranial bone and endocranium are ignored here. Abaqus is used to calculate the stress result. Results: Different bone slot shapes induce different cranium stress distributions and skull rigidity. Appropriate bone slots as the new cranial suture give the cranium a maximum stress value of about 46.12 MPa and a maximum displacement of about 10.25 mm. Conclusion: The stress distribution and deformation of the cranial bone under the intracranial pressure after the craniosynostosis correction operation can be obtained by the finite element method. These results reflect the ability of the cranial bone to expand with brain tissue growth. With the finite element method, surgical predictions can be made to guide surgeons in decisions to improve surgical treatment.",2010,0, 8317,An automated methodology to diagnose geometric defect in the EEPROM cell,"The objective of this paper is to present an automated geometric defect diagnosis methodology for the EEPROM cell (AGDE). This method focuses on speeding up the diagnosis process for geometric defects. It is based on a mathematical model generated with a ""design of simulation"" (DOS) technique. The DOS technique takes as input simulation results of a floating gate transistor with different given geometries and produces as output a polynomial equation of the threshold voltage as a function of the cell's geometric parameters. The diagnosis process is realized by comparing the measured threshold voltages of an EEPROM cell with the dynamically computed ones. From this comparison, the potentially defective geometric parameters are automatically extracted.",2002,0, 8318,Comparison of detection of perfusion defects in SPECT for different reconstruction strategies using polar map quantitation,"The objective of this work was to extend our study into the effects of attenuation and scatter compensation on the apparent distribution of Tc-99m sestamibi in cardiac slices to include compensation for resolution. Furthermore, we also determine the accuracy of diagnosing CAD using polar map quantitation with a unified normal database. We studied 102 individuals who either underwent X-ray angiography (57) or were determined to be in a low-risk group (less than 5% likelihood) for CAD (45). The low likelihood group was identified using standard criteria. Both groups underwent stress testing (physical or pharmacological) before imaging commenced. A Philips Medical Systems Prism 3000 (Philips Medical Systems, Cleveland, Ohio) SPECT camera was used for all acquisitions. The standard FBP reconstruction with no attenuation or scatter compensation was performed on the emission data using the 180° data from RAO to LAO, and a Butterworth pre-filter of order 5 and cutoff of 0.25 1/pixel. Emission data were also reconstructed through 360° using an ordered-subset expectation-maximization (OSEM) algorithm. In our implementation, we used 15 subsets of four angles each and one iteration employing attenuation compensation (AC) alone, and in combination with scatter compensation (SC). Resolution compensation was included with the previous compensation methods (RC) using 5 iterations of OSEM. Three-dimensional post-reconstruction Gaussian filtering was done using a sigma of 0.75 pixels. As expected, FBP does poorly with a unified normal database when inferior and anterior regions are affected.
The three methods that included compensation for attenuation were highly susceptible to extracardiac activity, which influenced the maximum count in the polar map and changed diagnoses from normal to CAD. This was evident in the areas under the ROC curves for all methods.",2002,0, 8319,"""That one's gotta work"" Mars Odyssey's use of a fault tree driven risk assessment process","The Odyssey project was the first mission to Mars after the failures of Mars Climate Orbiter and Mars Polar Lander. In addition to incorporating the results of those failure review boards and responding to external ""Red Team"" reviews, the Odyssey project itself implemented a risk assessment process. This paper describes that process and its use of fault trees as an enabling tool. These trees were used to break the mission down into the functional elements needed to make it a success. By determining how each function could be prevented from executing, a list of failure modes was created. Each fault was individually assessed as to what mitigations could prevent the fault from occurring, as well as what methods should be used to explicitly verify that mitigation. Fault trees turned out to be an extremely useful tool in identifying risks as well as in structuring the development of mitigations.",2002,0, 8320,On-line Fault Diagnosis in a Petri Net framework,"The paper addresses the fault detection problem for discrete event systems modeled by Petri nets (PN). Assuming that the PN structure and initial marking are known, faults are modeled by unobservable transitions. The paper recalls a previously proposed diagnoser that works online and employs an algorithm based on the definition and solution of some integer linear programming problems to decide whether the system behavior is normal or exhibits some possible faults. To reduce the on-line computational effort, we prove some results showing that if the unobservable subnet enjoys suitable properties, the algorithm solution may be obtained with low computational complexity. We characterize the properties that the PN modeling the system fault behavior has to fulfill and suitably modify the proposed diagnoser.",2009,0, 8321,Analyzing the effectiveness of fault hardening procedures,The paper addresses the problem of evaluating the effectiveness of fault hardening procedures based on software redundancy. We analyze the time and memory overhead of fine-grained and coarse-grained error detection and correction techniques. We check the impact of the involved overhead on fault coverage. The presented considerations are illustrated with fault injection experimental results.,2005,0, 8322,DC converter based fault analysis using simulink and RTDS,"The paper aims at analyzing various DC fault detection and protection techniques. In the proposed model of a DC ship system, various AC-DC, DC-DC and DC-AC converters are used: controlled or uncontrolled, six- or twelve-pulse converters. The paper covers the modeling and simulation of the different converters in Matlab Simulink and the RTDS environment to analyze the DC fault effect on the DC side and its propagation from the DC to the AC side. Comparisons are made between the simulations in the real-time and Simulink environments. By comparing the results of the different converters, a strategy for fault identification and removal for DC shipboard distribution systems will be determined.
The effect of fault resistance on fault parameters is also highlighted in the paper.",2008,0, 8323,A comparison of neural networks and model-based methods applied for fault diagnosis of electro-hydraulic control systems,"The paper aims to investigate two advanced methods used in fault diagnosis of electro-hydraulic (EH) control systems. The theoretical background of the neural network method and the model-based approach is presented, and the implementation of these methods is summarised with easy-to-follow procedures for application. The pros and cons of these methods are also analysed based on fault detection capability. It is concluded that a combination of the neural network method and the model-based approach will be beneficial.",2002,0, 8324,Research on Engine Fault Diagnosis and Realization of Intelligent Analysis System,"The paper analyses the main types of engine failures, studies some methods of fault diagnosis for engines and presents the ideas of time eigenvalues, frequency eigenvalues, wavelet eigenvalues and RBF (BP network based on radial basis functions) eigenvalues. According to the investigative results, an intelligent analysis system based on the TMS320VC5402 is designed. The particular hardware and software design based on the DSP device is presented in the paper. The analysis system is high in speed, low in power consumption and small enough in size to be portable. It is fit for real-time supervision and analysis",2006,0, 8325,Reaction to errors in robot systems,"The paper analyzes the problem of error (failure) detection and handling in robot programming. First an overview of the subject is provided and later error detection and handling in MRROC++ are described. To facilitate system reaction to the detected failures, the errors are classified and certain suggestions are made as to how to handle those classes of errors.",2002,0, 8326,Virtual measuring system of the geometric error based on LabView,"The paper constructs a virtual detection system for geometrical errors with the powerful virtual instrument software LabView, and realizes simulation tests of parts' form error, direction error, position error and run-out error. The system can process the data with different data processing methods and generate reports of the tested data and error results.",2010,0, 8327,Tolerating faults while maximizing reward,"The imprecise computation (IC) model is a general scheduling framework that is capable of expressing the precision vs. timeliness tradeoff involved in many current real-time applications. In that model, each task comprises mandatory and optional parts. While allowing greater scheduling flexibility, the mandatory parts in the IC model still have hard deadlines, and hence they must be completed before the task's deadline, even in the presence of faults. In this paper, we address fault-tolerant (FT) scheduling issues for IC tasks. First, we propose two recovery schemes, namely immediate recovery and delayed recovery. These schemes can be readily applied to provide fault tolerance to the mandatory parts by scheduling the optional parts appropriately for recovery operations. After deriving the necessary and sufficient conditions for both schemes, we consider the FT-optimality problem, i.e. generating a schedule which is FT and whose reward is maximum among all possible FT schedules. For immediate recovery, we present and prove the correctness of an efficient FT-optimal scheduling algorithm.
For delayed recovery, we show that the FT-optimality problem is NP-hard, and thus intractable",2000,0, 8328,Analysis of exception fault types based on AspectJ,"The improper use of the exception handling mechanism affects the efficiency of application development, so this paper analyzes and summarizes several exception fault types for the exception fault problem of AspectJ, and gives corresponding examples to analyze the impact of exception faults on program control flow, so that developers can better avoid or deal with these exception faults.",2010,0, 8329,Improved WNN to Rotating Machinery Fault Diagnosis,"An improved WNN algorithm based on BP is proposed in this paper. Theoretical analysis and simulation results show it avoids both the blindness of framework design for BP neural networks and the problems of nonlinear optimization, such as local optima. It can thus simplify the training of neural networks, and it has better abilities in function learning and generalization. This algorithm was successfully applied to rotating machinery fault diagnosis; therefore it has wide application prospects.",2009,0, 8330,Research on Multi-function Fault Management System Model Based on SNMP,"Advancing network technology and applications pose a challenge to the network manager. A feasible and efficient network management strategy is an important means to ensure that the network runs well. Therefore, it's meaningful to be familiar with the network and network management technology. In accordance with a practical network environment, we design and implement a fault management system, FMS, based on SNMP. Using a Client/Server architecture, our system establishes a distributed management model composed of Console/Manager/Agent, accomplishing network status monitoring, event processing, fault alarms, logging, etc.",2008,0, 8331,Fault ride-through of fully enclosed squirrel-cage induction generators for wind farms in Thailand,"The increasing amount of wind power generation in Thailand's power systems requires stability analysis considering the interaction between wind farms and transmission systems. Dynamics introduced by dispersed wind generators at the distribution level can usually be neglected. However, large wind farms have a considerable influence on power system dynamics and must definitely be considered when analyzing power system dynamics. For this purpose, a detailed dynamic model of a fully enclosed squirrel cage induction generator with gearbox (full-scale power electronics converters) of a 2.3 MW wind turbine has been implemented using the modeling environment of the simulation software DIgSILENT PowerFactory. For investigating grid compatibility aspects of this wind generator concept, a model of a 96.6 MW wind farm, with typical layout, based on 42 wind turbines of the 2.3 MW class has been analyzed. This paper focuses on transient stability and fault ride-through capability when the grid voltage has dropped to a very low value.",2010,0, 8332,A Reinforcement Learning Approach to Automatic Error Recovery,"The increasing complexity of modern computer systems makes fault detection and localization prohibitively expensive, and therefore fast recovery from failures is becoming more and more important. A significant fraction of failures can be cured by executing specific repair actions, e.g. rebooting, even when the exact root causes are unknown.
However, designing reasonable recovery policies to effectively schedule potential repair actions could be difficult and error prone. In this paper, we present a novel approach to automate recovery policy generation with reinforcement learning techniques. Based on the recovery history of the original user-defined policy, our method can learn a new, locally optimal policy that outperforms the original one. In our experimental work on data from a real cluster environment, we found that the automatically generated policy can save 10% of machine downtime.",2007,0, 8333,Failure Prediction Models for Proactive Fault Tolerance within Storage Systems,"The increasingly large demand for data storage has spurred on the development of systems that rely on the aggregate performance of multiple hard drives. In many of these applications, reliability and availability are of utmost importance. It is therefore necessary to closely scrutinize a complex storage system's reliability characteristics. In this paper, we use Markov models to rigorously demonstrate the effects that failure prediction has on a system's mean time to data loss (MTTDL) given a parameterized sensitivity. We devise models for a single hard drive, RAID1, and N+1 type RAID systems. We find that the normal SMART failure prediction system has little impact on the MTTDL, but striking results can be seen when the sensitivity of the predictor reaches 0.5 or more. In past research, machine learning techniques have been proposed to improve SMART, showing that sensitivity levels of 0.5 or more are possible by training on past SMART data alone. The results of our stochastic models show that even with such relatively modest predictive power, these failure prediction algorithms can drastically extend the MTTDL of a data storage system. We feel that these results underscore the importance of and need for complex prediction systems when calculating impending hard drive failures.",2008,0, 8334,Implementation of the induction machine broken-bars fault diagnosing instrument using TMS320 digital signal processor,"The induction machine broken-bars fault diagnosing instrument is realized using a digital signal processor. The noninvasive, computationally efficient method for stator current signature analysis is discussed in this paper. The proposed controller, being a programmable digital circuit, is highly reliable and flexible. This paper demonstrates the application of digital signal processors in the instrument.",2005,0, 8335,A Model of Dual Stator Winding Induction Machine in case of Stator and Rotor Faults for Diagnosis Purpose,"The induction machine is widely used in many fields because of its robustness, its low cost and also its abilities at high speeds. However, this motor can present specific faults such as rotor broken bars, rotor eccentricity and/or stator winding turn defects. Some of them can be attenuated by power segmentation. The main purpose of this paper is thus the modelling of a dual stator winding induction machine (DSWIM); this study includes stator inter-turn short circuit and rotor broken bar defects.
Simulation and experimental results for this type of induction motor operating under fault demonstrate the effectiveness of the proposed model in the monitoring of breakdowns",2006,0, 8336,Dynamic detection of access errors and illegal references in RTSJ,"The memory model used in the real-time specification for Java (RTSJ) imposes strict assignment rules to or from memory areas, preventing the creation of dangling pointers and thus maintaining the pointer safety of Java. This paper provides an implementation solution to ensure the checking of these rules before each assignment statement, where the check is performed dynamically using write barriers. The presented solution includes write barriers for both region-based memory management and a real-time garbage collector within the heap.",2002,0, 8337,Transmission line fault classification based on wavelet singular entropy and artificial immune recognition system algorithm,"A method based on wavelet singular entropy (WSE) and an artificial immune recognition system (AIRS) for transmission line fault classification is presented in this paper. Wavelet singular entropy is used to quantify the uncertainty of fault high-frequency transient voltages so as to reflect and identify various failure states of the power system. On this basis, AIRS for fault classification is presented to overcome the shortcomings of artificial neural networks (ANNs) and support vector machines (SVMs). The classifier can also decrease the number of input parameters, relieve the dependence on the decision maker's prior knowledge, and improve generalization ability. The simulation results show the method is effective and correct.",2009,0, 8338,Identification of ground faults according to the analysis of electromagnetic fields of MV lines,"Methods of assessing the place of the ground fault in compensated MV networks are known. In present realizations, the evaluation of ground faults is usually based on using several methods specialized for the detection of individual types of ground faults. The methods used in the paper follow from the analysis of the electromagnetic fields of phase conductors and lines. Ground fault indicators have been installed at points with telecontrolled section switches and reclosers. The development of communication technologies enables new system-wide solutions to be applied in this field. A system-wide solution with telecommunicating indicators, transmitting not only the evaluation result but also the development of the measured quantities, not only increases the reliability of determining the section of an MV line affected by the ground fault but also makes it possible to assess the type of the ground fault, including the prediction of subsequent ground faults, with the possibility of reducing the number of short-circuits to ground. The used solution enables us to telemeasure the values of phase currents at the point where the indicators have been installed.",2009,0, 8339,Fault-tolerant adaptive and minimal routing in mesh-connected multicomputers using extended safety levels,"The minimal routing problem in mesh-connected multicomputers with faulty blocks is studied. Two-dimensional meshes are used to illustrate the approach. A sufficient condition for minimal routing in 2D meshes with faulty blocks is proposed. Unlike many traditional models that assume all the nodes know the global fault distribution, our approach is based on the concept of an extended safety level, which is a special form of limited fault information.
The extended safety level information is captured by a vector associated with each node. When the safety level of a node reaches a certain level (or meets certain conditions), a minimal path exists from this node to any nonfaulty node in 2D meshes. Specifically, we study the existence of minimal paths at a given source node, limited distribution of fault information, and minimal routing itself. We propose three fault-tolerant minimal routing algorithms which are adaptive, allowing all messages to use any minimal path. We also provide some general ideas to extend our approaches to other low-dimensional mesh-connected multicomputers such as 2D tori and 3D meshes. Our approach is the first attempt to address adaptive and minimal routing in 2D meshes with faulty blocks using limited fault information",2000,0, 8340,Study on the tooth profile and meshing equation of the anti-backlash double-roller enveloping hourglass worm with errors,"The mismatched errors of the anti-backlash double-roller enveloping hourglass worm (ADEHW) play a very critical role in its performance, but the influence of errors on the ADEHW is rarely reported in the available literature, and the most important concern is how to acquire the meshing equation and generate the tooth profile of the ADEHW involving errors. Therefore, a new anti-backlash double-roller enveloping hourglass worm tooth profile equation, which involves the grinding wheel radius, the angular position error of the worm wheel axis, the centre distance errors and the shaft angle, was established for the first time in order to study the effects of worm parameters on the worm tooth profile, using the theories of differential geometry and gear meshing. A new method to acquire the contact line of the worm was presented, using the corresponding computer program and a dichotomy (bisection) method. The real tooth surface was acquired in the Pro/E software using the contact helix line equation, and the three-dimensional (3D) geometric model of the anti-backlash double-roller enveloping hourglass worm was established. The results obtained show that the established worm tooth profile formula is important for analyzing the meshing characteristics and parameters.",2010,0, 8341,Optimistic Replication Approach for Transactional Mobile Agent Fault Tolerance,"The mobile agent is a computer program that can move between different hosts in heterogeneous networks. This paradigm is advantageous for distributed systems implementation, especially in mobile computing applications characterized by low bandwidth, high latency and unreliable network connections. Mobile agents are also attractive for distributed transaction applications. Although mobile agents have been studied for twenty years for good reasons, they are not widely used in developing distributed systems for a simple reason: important issues like security and fault tolerance are not solved in an effective way. In this paper we address the issue of fault tolerance in mobile agent systems and transactional support. We present the agent system design and describe the protocol of our approach, in which we treat infrastructure failures to prevent a partial or complete loss of the mobile agent, and deal with semantic failures to ensure atomic execution and transactional support for the mobile agent.",2010,0, 8342,Neural networks for fault-prediction in a telecommunications network,"The main topic of this paper is fault prediction from large alarm records stored in different databases of non-cooperating network management systems.
We have chosen the countrywide data network of Pakistan Telecom (PTCL) as a basis for the investigation of neural network based algorithms to predict faults before they stop a large number of users' circuits from normal operation. The main problems addressed are the evaluation of alarms, virtual reconstruction of the network and the development of tools to overcome the interoperability issues. The motivation behind this work is to assist human operators and minimize the cost of the alarm evaluation process.",2004,0, 8343,Fault-Tolerant Middleware for Grid Computing,"The major challenge in the Grid environment is fault tolerance. Faults ranging from machine crashes, media failures and operator errors to random data corruption result in loss of data, both temporary and permanent. The paper proposes a solution for handling faults in the grid environment. Fault-Tolerance using Adaptive Replication in Grid Computing (FTARG) is an adaptive replication middleware which addresses the fault tolerance of Grid based applications by providing data replication at different sites. FTARG is an Aneka based Grid middleware especially designed for high-performance Grid based applications. FTARG enables data synchronization between multiple heterogeneous databases located in the Grid by supporting a variety of synchronization modes. Experimental analysis proves that the proposed FTARG handles faults in the Grid by improving the performance of data management for large scale complex grid based applications.",2010,0, 8344,Analysis of software quality cost modelings industrial applicability with focus on defect estimation,The majority of software quality cost models is by design capable of describing costs retrospectively but relies on defect estimation in order to provide a cost forecast. We identify two major approaches to defect estimation and evaluate them in a large scale industrial software development project with special focus on applicability in quality cost models. Our studies show that neither static models based on code metrics nor dynamic software reliability growth models are suitable for an industrial application.,2008,0, 8345,"Comprehensive Analysis of Performance, Fault-Tolerance and Scalability in Grid Resource Management System","The management of large scale heterogeneous resources is a critical issue in grid computing. The resource management system (RMS) is an essential component of grids. Ensuring the QoS of the upper layer service raises high requirements for the performance, fault-tolerance and scalability of the RMS. In this paper, we study three typical structures of RMS, including centralized, hierarchical and peer-to-peer structures, and make a comprehensive analysis of performance, fault tolerance and scalability. We put forward the performance, fault tolerance and scalability evaluation metrics of the RMS, and give the mathematical expressions and detailed calculation processes. In addition, we further discuss the interactions of performance, fault-tolerance and scalability, and compare the RMSs with the three typical structures. We believe that the results of this work will help system architects make informed choices for building the RMS.",2009,0, 8346,A Task-Based Fault-Tolerance Mechanism to Hierarchical Master/Worker with Divisible Tasks,"The master/worker API of the ProActive middleware provides an easy-to-use framework for parallelizing embarrassingly parallel applications.
However, the traditional master/worker model faces great challenges as distributed computing grows in scale. A single-layer hierarchical master/worker has been implemented as a solution to the scalability issues of the MW API. In the new framework, the main master only communicates with some submasters, and each submaster manages a set of workers. A ""bully election algorithm"" and an ""object discovery mechanism"" are implemented to solve the fault-tolerance problems of the submasters. An automatic load-balancing mechanism is implemented for the hierarchical master/worker to solve divisible tasks. Moreover, an optimization has been done to make the fault-tolerance mechanism more efficient.",2009,0, 8347,The electronic capacitive voltage transformers error characteristics research and parameter optimization design,"The mathematical model of electronic capacitive voltage transformers (ECVT) in the power system is built by analyzing the sensing principles and the application environment of the ECVT, and it is pointed out that the existence of distributed stray capacitance is the key factor affecting the transformer accuracy. From the viewpoint of the equivalent circuit, the interaction mechanism of the ECVT measuring error caused by stray capacitance and the interphase interference is quantitatively analyzed. The structural finite element calculation model of the condenser divider of an ECVT in a 220 kV power system is built using finite element software. The distributed stray capacitance matrix for the voltage divider is measured by simulating its static electric field. The parameters of the condenser divider are optimized using the ECVT mathematical models established. The experimental results show that the measuring accuracy of the ECVT is within the 0.2 class when the nominal capacitance of the condenser divider is more than 3500 pF. This paper provides a reference basis for error analysis and parameter optimization design of the ECVT.",2009,0, 8348,McEliece/Niederreiter PKC: Sensitivity to Fault Injection,"The McEliece and Niederreiter public key cryptosystems (PKC) are presumed secure in a post-quantum world because there is no efficient quantum algorithm that solves the hard problems upon which these cryptosystems are built. The present article indicates, however, a different type of vulnerability for such cryptosystems, namely fault injection. We present fault injection in the McEliece scheme using Goppa codes and in two variants using quasi-cyclic alternant and quasi-dyadic codes, and describe the main differences of those constructions in this context.",2010,0, 8349,Transient Fault Response of Grid Connected Wind Electric Generators,The paper deals with simulation studies on grid connected wind electric generators (WEG) employing squirrel cage induction generators (SCIG) and doubly fed induction generators (DFIG). Their dynamic responses to wind speed variations and transient faults on the transmission line are studied.,2006,0, 8350,Fault tolerant permanent magnet machines used in automobile applications,"The paper deals with the fault tolerant analysis of several permanent magnet synchronous machines (PMSM) used in hybrid automobiles. Three distinctive elements will be treated: the electrical traction, the starter/generator and the steering system.
By means of numerical analysis and through experiments, three different types of PMSM and their drives will be examined, while the fault tolerant concept is employed and verified.",2010,0, 8351,Error Recovery Problems,"The paper deals with the problem of handling detected faults in computer systems. We present software procedures targeted at fault detection, fault masking and error recovery. They are discussed in the context of standard PC Windows and Linux environments. Various aspects of checkpointing and recovery policies are studied. The presented considerations are illustrated with some experimental results obtained in our fault injection testbench.",2007,0, 8352,Confirming the reliability and safety of MV distribution network including DG applying protection applications for earth faults,"The paper describes the effects of distributed generation on the earth fault protection of a medium voltage feeder and on protection coordination. Neutral isolated and compensated systems were considered. The aim was to investigate the behaviour of the production unit during automatic reclosings, especially as regards electrical safety. Potentially feasible methods for clearing a temporary earth fault without a voltage break were considered. Thus disturbances affecting production and customers are less than with automatic reclosings. The method of the study was dynamic simulation of earth faults in a medium voltage system. The network model, including a wind power plant, was implemented applying PSCAD™ simulation software.",2009,0, 8353,PSC-PWM in fault tolerant drive system for EMA operation,The introduction of EMA systems requires the use of redundant inverters to drive the EMA and ensure reliability and safety. Redundant converters allow the implementation of fault tolerant control and high quality operation. Fault control has been implemented by means of a redundant converter and a fault detection system.,2010,0, 8354,An experimental study of soft errors in microprocessors,"The issue of soft errors is an important emerging concern in the design and implementation of future microprocessors. The authors examine the impact of soft errors on two different microarchitectures: a DLX processor for embedded applications and a high-performance Alpha processor. The results contrast the impact of soft errors on combinational and sequential logic, identify the most vulnerable units, and assess the soft error impact on the application.",2005,0, 8355,A method and a GUI for the creation of azimuth-track level-pointing-error corrections,"The JPL beam-waveguide (BWG) antennas are used for spacecraft tracking and for radio-astronomy observations. They are mounted on wheels that rotate around an uneven azimuth track, causing antenna deformations and reducing pointing accuracy. The pointing errors affected by the track irregularities are repeatable and can therefore be calibrated. The effects of the irregularities of the track can continually be corrected by using a lookup table, created by the interface presented. This paper is a continuation of previous work of Gawronski, Baher and Quintero (see ibid., vol.42, no.2, p.28-38, 2000). It describes the processing of the inclinometer data, which includes the verification of repeatability, smoothing, slow-trend removal, re-sampling, and adjustment to a standard format. It also presents a user-friendly interface that processes field data and creates a lookup table for pointing-error correction by clicking appropriate buttons on a computer screen.
The GUI was tested with the JPL BWG antennas, and may be used with any antennas utilizing an azimuth track.",2002,0, 8356,The analysis and research on defect results of software localization testing,"The paper discusses the role of defect analysis in software testing, defect data collection, and specific methods of defect data analysis.",2010,0, 8357,On-line Fault Diagnosis Model of the Hydropower Units Based on MAS,"The paper introduces a novel on-line fault diagnosis system model for hydropower units based on a multi-agent system. With respect to the classical MAS-based fault diagnosis model, it proposes a new function of information interaction between the mission-control subsystem and the task decomposition subsystem to increase the transmission rate of control signals, and designs the status-monitoring subsystem to detect abnormal signals directly at the local level to increase the fault diagnostic sensitivity. In the fault-diagnosis subsystem, a multi-agent interactive parallel structure is designed to meet the requirements of high reliability and good real-time performance. A Java-based language called JAFMAS is used to build a multi-agent cooperation platform. Experimental results show the effectiveness and feasibility of the proposed method.",2009,0, 8358,The fault-tolerant design and fault injection test for embedded software,"The paper introduces the fault-tolerant technique of the chuangxin-1 micro-satellite on-board computer. A fault injection test system is built to verify the fault-tolerant function. The test system is made up of a monitor computer, a Trace32 ICE, a monitor instrument for output, and a fault injection instrument. The test case is employed to verify the fault-tolerance behavior and judge the validity of the fault-tolerant design of hardware and software; it typically includes the test case name, test content, instrument and device, test method, verification method, expected result, actual result, etc. The result shows that the fault injection test system can verify the fault-tolerant design well.",2010,0, 8359,Current Sensor Fault-Tolerant Control for WECS With DFIG,"The performance of wind energy conversion systems (WECS) depends heavily on accurate current sensing. A sudden failure in one of the current sensors degrades the system performance. Moreover, if a fault is not detected and handled quickly, its effect leads to system disconnection. Hence, to reduce the failure rate and to prevent unscheduled shutdown, a real-time fault detection, isolation, and compensation scheme could be adopted. This paper introduces a new field-programmable-gate-array (FPGA)-based grid-side-converter current sensor fault-tolerant control for WECS with a doubly fed induction generator. The proposed current sensor fault detection is achieved by a predictive model. ""FPGA-in-the-loop"" and experimental results validate the effectiveness and satisfactory performance of the proposed method.",2009,0, 8360,Physical layer redundancy method for fault-tolerant networks,"The physical-layer redundancy method is proposed for a fault-tolerant industrial network. The proposed method consists of a fault detection method and a fault correction method. The fault detection method uses events created by state transitions in the IEEE 802.4 MAC sublayer and a periodic status frame check for fault detection. The fault correction method corrects the fault with automatic physical layer switching to the stand-by physical layer, triggered by the event created by the fault detection method.
The proposed method is realized with dual physical layers, a dual channel manager for switching, and a redundancy management module that has a fault detection sub-module and a fault correction sub-module. The proposed method guarantees high reliability and fast fault-correction in PICNET",2000,0, 8361,Automatic detection and correction for glossy reflections in digital photograph,"The popularization of digital technology has made shooting digital photos and using related applications a part of daily life. However, the use of flash to compensate for low ambient lighting often leads to overexposure or glossy reflections. This study proposes an auto-detection and inpainting technique to correct overexposed faces in digital photography. The algorithm segments the skin color in the photo and uses face detection and capturing to determine candidate bright spots on the face. Based on statistical analysis of color brightness and filtering, the bright spots are identified. Finally, the bright spots are corrected through inpainting technology. The experimental results demonstrate the high accuracy and efficiency of the method.",2010,0, 8362,Fixed series compensation protection evaluation using transmission lines faults simulations,"The power demands of Brazilian national industries require an economical solution for increasing the power transmission capacity of existing transmission lines. Due to environmental and economic difficulties in constructing new power transmission lines, the use of fixed series compensation (FSC) has become a common practice of power transmission companies in Brazil. The FSC is presented as the best choice because it not only makes it possible to increase power transmission capacity but also stabilizes the interconnected energy networks through reduction of the transmission line impedance. The main purpose of this work is to present the protection evaluation of the FSC installed at the Rio Verde 230 kV substation with 216 MVAr (FURNAS Centrais Eletricas S.A. - Brazilian Power Transmission Utility). This evaluation is realized by simulating faults internal and external to the 230 kV transmission line where the FSC was installed. The final conclusion of the work presents the importance of the FSC to a transmission system and a complete evaluation of the FSC protection, observing the dimensioning and the situations in which this protection operates in the transmission system, with the purpose of maintaining the energy flow against faults due to lightning discharges and other causes.",2008,0, 8363,Sampling Error Estimation in High-Speed Sampling Systems Introduced by the Presence of Phase Noise in the Sampling Clock,"The presence of phase noise in the sampling clock of fast analog-to-digital converters introduces time jitter into the sampling instants of the analog-to-digital converter. In this paper, an analysis has been performed to quantify the impact of phase perturbation in the sampling clock on the signal-to-noise ratio of the digitized waveform. Closed-form formulae have been obtained for the signal-to-jitter-noise (S/Njit) ratio when the phase perturbation is random as well as when it is dominated by a periodic and deterministic component. The result obtained is then used to predict the jitter noise generated by a sampling clock with typical phase noise performance.
The results obtained will help identify the impact of the various sampling and phase noise parameters on the resulting S/Njit ratio.",2008,0, 8364,Using thermal analysis to enhance fault isolation techniques,"The present methods of diagnostic testing for printed circuit boards (PCB) using automatic test equipment (ATE) present the Navy with technical and cost-of-ownership concerns. These concerns are manifested in several areas: (1) the escalating complexity of PCBs requires an ever-increasing amount of maintenance processing time, (2) the increased processing time results in escalating repair costs, and (3) protracted component turnaround times are usually managed by procuring additional spares to meet operational requirements. When coupled with current PCB design constraints such as limited test points, ATE software that in many cases cannot isolate a fault to an acceptable ambiguity level, and a maintenance philosophy that places high reliance on ATE and less on the technical skills of maintenance personnel, the situation becomes bleaker. Naval Aviation cannot continue to absorb the increasing costs of ownership associated with current maintenance practices. Neither can it afford to make major changes in its current maintenance philosophy. The solution to this dilemma rests in the ability to successfully challenge the current ATE diagnostic testing methodology and develop a means of enhancing existing ATE capabilities",2001,0, 8365,Analog Circuit Fault Diagnosis Based on RBF Neural Network Optimized by PSO Algorithm,"The present paper proposes a fault diagnosis methodology for analog circuits based on a radial basis function (RBF) artificial neural network trained by a particle swarm optimization (PSO) algorithm. Using an appropriate stimulus signal, fault features are extracted directly from efficient points in the frequency response of the circuit, and then a fault dictionary is created by collecting signatures of different fault conditions. Trained with the examples contained in the fault dictionary, the RBF neural network optimized by PSO has been demonstrated to provide robust diagnosis for the difficult problem of soft faults in analog circuits. The experimental result shows that the proposed technique succeeds in diagnosing and locating faults effectively.",2010,0, 8366,Effect of defects on thermal performance of carbon nanotube investigated by molecular dynamics simulation,"The present study focused on the investigation of the effect of defects on CNT thermal performance. In order to investigate the effect of defects on the material properties of the CNT, MD models were built using the Materials Studio software (Accelrys, Inc.). In this study, a series of MD models were built to simulate the effect of defects on the thermal performance of the SWCNT. Based on Fourier's law, thermal conductivities of the SWCNT with different kinds of defects were calculated. The MD simulation results showed that defects in the CNT had heavy effects on the thermal conductivity of the SWCNT. The thermal conductivity of the SWCNT was drastically reduced by those defects.
This MD simulation gives a basic understanding of the effect of defects on the material performance of CNTs and provides information for future study.",2006,0, 8367,Simulated SMOS Levels 2 and 3 Products: The Effect of Introducing ARGO Data in the Processing Chain and Its Impact on the Error Induced by the Vicinity of the Coast,"The Soil Moisture and Ocean Salinity (SMOS) Mission is the second of the European Space Agency's Living Planet Program Earth Explorer Opportunity Missions, and it is scheduled for launch in July 2009. Its objective is to provide global and frequent soil-moisture and sea-surface-salinity (SSS) maps. SMOS' single payload is the Microwave Imaging Radiometer by Aperture Synthesis (MIRAS) sensor, an L-band 2-D aperture-synthesis interferometric radiometer. For the SSS, the output products of SMOS, at Level 3, will have global coverage and an accuracy of 0.1-0.4 psu (practical salinity units) over 100 × 100 to 200 × 200 km² areas in 10-30 days. During the last few years, several studies have pointed out the necessity of combining auxiliary data with the MIRAS-measured brightness temperature to provide the required accuracy. In this paper, we propose and test two techniques to include auxiliary data in the SMOS SSS retrieval algorithm. Aiming at this, pseudo-SMOS Level-3 products have been generated according to the following steps: 1) A North Atlantic configuration of the NEMO-OPA ocean model has been run to provide consistent geophysical parameters; 2) the SMOS end-to-end processor simulator has been used to compute the brightness temperatures as measured by the MIRAS; 3) the SMOS Level-2 processor simulator has been applied to retrieve SSS values for each point and overpass; and 4) Level-2 data have been temporally and spatially averaged to synthesize Level-3 products. In order to assess the impact of the proximity to the coast at Level 3, and the effect of these techniques on it, two different zones have been simulated: the first one in open ocean and the second one in a coastal region, near the Canary Islands (Spain), where SMOS and Aquarius CAL/VAL activities are foreseen. Performance exhibits a clear improvement at Level 2 using the techniques proposed; at Level 3, a smaller effect has been recorded. Coastal proximity has been found to affect the retrieval up to 150 and 300 km from the coast, at Levels 2 and 3, respectively. Results for both scenarios are presented and discussed.",2009,0, 8368,Extended Ocean Salinity Error Budget Analysis within the SMOS Mission,"The Soil Moisture and Ocean Salinity mission will provide, from 2009 onwards, sea surface salinity maps over the oceans. In this paper an ocean salinity error budget is described. Instrumental, external noise sources and geophysical errors have been analysed, stressing their relative degree of impact. With the aim of improving this study, an extended version of this analysis provides an overall vision of the salinity retrieval in a wider set of configurations.",2008,0, 8369,Space shuttle fault tolerance: Analog and digital teamwork,"The Space Shuttle control system (including the avionics suite) was developed during the 1970s to meet stringent survivability requirements that were then extraordinary but today may serve as a standard against which modern avionics can be measured.
In 30 years of service, only two major malfunctions have occurred, both due to failures far beyond the reach of fault tolerance technology: the explosion of an external fuel tank, and the destruction of a launch-damaged wing by re-entry friction. The Space Shuttle is among the earliest systems (if not the earliest) designed to a ""FO-FO-FS"" criterion, meaning that it had to Fail (fully) Operational after any one failure, then Fail Operational after any second failure (even of the same kind of unit), then Fail Safe after most kinds of third failure. The computer system had to meet this criterion using a Redundant Set of 4 computers plus a backup of the same type, which was (ostensibly!) a COTS type. Quadruple redundancy was also employed in the hydraulic actuators for elevons and rudder. Sensors were installed with quadruple, triple, or dual redundancy. For still greater fault tolerance, these three redundancies (sensors, computers, actuators) were made independent of each other so that the reliability criterion applies to each category separately. The mission rule for Shuttle flights, as distinct from the design criterion, became ""FO-FS,"" so that a mission continues intact after any one failure, but is terminated with a safe return after any second failure of the same type. To avoid an unrecoverable flat spin during the most dynamic flight phases, the overall system had to continue safe operation within 400 msec of any failure, but the decision to shut down a computer had to be made by the crew. Among the interesting problems to be solved were ""control slivering"" and ""sync holes."" The first flight test (Approach and Landing only) was the proof of the pudding: when a key wire harness solder joint was jarred loose by the Shuttle's being popped off the back of its 747 mother ship, one of the computers ""went bananas"" (actual quote from an IBM expert).",2009,0, 8370,Fault Evaluator: A tool for experimental investigation of effectiveness in software testing,"The specifications for many software systems, including safety-critical control systems, are often described using complex logical expressions. It is important to find effective methods to test implementations of such expressions. Analyzing the effectiveness of the testing of logical expressions manually is a tedious and error prone endeavor, thus requiring special software tools for this purpose. This paper presents Fault Evaluator, which is a new tool for experimental investigation of testing logical expressions in software. The goal of this tool is to evaluate logical expressions with various test sets that have been created according to a specific testing method and to estimate the effectiveness of the testing method for detecting specific faulty variations of the original expressions. The main functions of the tool are the generation of complete sets of faults in logical expressions for several specific types of faults; gaining expected (Oracle) values of logical expressions; testing faulty expressions and detecting whether a test set reveals a specific fault; and evaluating the effectiveness of a testing approach.",2010,0, 8371,"Vietnamese spelling detection and correction using Bi-gram, Minimum Edit Distance, SoundEx algorithms with some additional heuristics","The spelling checking problem is considered to contain two main phases: the detecting phase and the correcting phase.
In this paper, we present a new approach for Vietnamese spelling checking based on Vietnamese characteristics for each phase. Our research approach includes the use of a syllable Bi-gram in combination with parts of speech (POS) to find out suspected syllables. In the correcting phase, we rely on minimum edit distance, SoundEx algorithms, and some heuristics to build a weight function for assessing suggestion candidates. The training corpus and the test set were collected from e-newspapers.",2008,0, 8372,Remote Fault Diagnosis Based on Virtual Instrument Technology,"The remote fault diagnosis for complex equipment based on virtual instrument (VI) technology is studied. It can promote the maintenance and fault diagnosis methods under the network environment. Firstly, the technological necessity of remote fault diagnosis for complex equipment is analyzed. Secondly, the system model of the remote fault diagnosis system, including base station and center station, is discussed. Then the versatile computer supported cooperative work (CSCW) environments for remote fault diagnosis are developed, which are based on LabWindows/CVI and DataSocket technology of National Instruments respectively. LabWindows/CVI is adopted to set up the VI environment, which supports the DataSocket transmission protocol for communication between base station and center station. Finally, a test system is set up to validate the proposed system model. An engine is taken as an example to establish the remote fault diagnosis system. Several key technology designs are discussed. It is proved to be efficient, and the design scheme of remote fault diagnosis is feasible",2006,0, 8373,A Fault-tolerance Framework for Distributed Component Systems,"The requirement for higher reliability and availability of systems is continuously increasing even in domains not traditionally strongly concerned with such issues. Required solutions are expected to be efficient, flexible, reusable on rapidly evolving hardware and of course at low cost. Combining both model and component seems to be a very promising cocktail for building solutions to this problem. Hence, we will present in this paper an approach using a model as its first structural citizen all along the development process. Our proposal will be illustrated with an application modeled with UML (extended with some of its dedicated profiles). Our approach includes an underlying execution infrastructure/middleware, providing fault-tolerance services. For the component aspect, our framework first promotes an infrastructure based on the Component/Container/Connector paradigm to provide run-time facilities enabling transparent management of fault-tolerance (mainly fault-detection and redundancy mechanisms). From the model-driven point of view, our framework provides tool support for assisting the users to model their applications and to deploy and configure them on computing platforms. In this paper we focus on the run-time support offered by the component framework, especially the replication-aware interaction mechanism enabling transparent replication management and some additional system components dedicated to fault-detection and replica management.",2008,0, 8374,The coupling correction system at RHIC: results for the RUN 2000 and plans for 2001,"The RHIC coupling correction system was commissioned during the Year 2000 run, which marked the successful first year of operation of the machine.
The RHIC coupling correction system is described with particular emphasis on its flexibility, which allows using both global and local coupling compensation techniques. Coupling measurements and correction data are presented for the RHIC Blue and Yellow rings, together with the procedure used to reduce the minimum tune separation to 0.001, the typical resolution for tune measurements during run 2000. We further demonstrate how local coupling compensation in the interaction regions substantially reduces the strength of the skew quadrupole families used for global coupling compensation",2001,0, 8375,Middleware-Based Failure Detection and Recovery Services for Fault-Tolerant E-services,"The runtime detection of failure and recovery from failure is a major challenge facing e-business and e-commerce applications. Different types of failure are well understood through the failure model, but the detection and differentiation between these failures still proves difficult at runtime. Even when failures are detected, recovery may be hindered as certain failures may mask the root cause failure, making it difficult to elaborate a recovery strategy. Through this paper we describe a pragmatic approach to failure detection and recovery based on the combination of middleware-based instrumentation and control services. In particular, we describe the development of failure detection instruments and failure recovery control services using Jini middleware technology. The failure detection instruments are capable of identifying different failure types, and the failure recovery control services make use of failure patterns to activate appropriate recovery strategies.",2009,0, 8376,Layer-weighted unequal error protection for scalable video coding extension of H.264/AVC,"The scalable video coding extension of H.264/AVC is a current standardization project. This paper deals with an unequal error protection (UEP) scheme for scalable video bitstreams over packet-lossy networks using forward error correction (FEC). The proposed UEP scheme is developed by jointly exploiting the unequal importance existing both in temporal layers and quality layers of the hierarchical scalable video bitstream. For efficient assignment of FEC codes, the proposed UEP scheme uses a simple and efficient performance metric, namely layer-weighted expected zone of error propagation (LW-EZEP). The LW-EZEP is adopted for quantifying the error propagation effect on video quality degradation from packet loss in temporal layers and in quality layers. Compared to other UEP schemes, the proposed UEP scheme demonstrates strong robustness and adaptation for variable channel status.",2008,0, 8377,A Novel Intelligent Algorithm for Fault-Tolerant Task Scheduling in Real-Time Multiprocessor Systems,"The scheduling problem for real-time tasks on multiprocessors is NP-hard. In fault-tolerant real-time systems, tasks have deadlines to be met in spite of the presence of faults. Many attempts such as classical algorithms and intelligent methods have been made to solve this problem. The primary-backup (PB) scheme is one of the most important classical algorithms that have been employed for fault-tolerant scheduling of real-time tasks, wherein each task has two versions and the versions must be scheduled on two different processors.
In this paper a novel scheduling algorithm is proposed based on a genetic algorithm (GA) which uses PB for tolerating faults while loading all processors as equally as possible.",2008,0, 8378,Predicting the severity of a reported bug,"The severity of a reported bug is a critical factor in deciding how soon it needs to be fixed. Unfortunately, while clear guidelines exist on how to assign the severity of a bug, it remains an inherently manual process left to the person reporting the bug. In this paper we investigate whether we can accurately predict the severity of a reported bug by analyzing its textual description using text mining algorithms. Based on three cases drawn from the open-source community (Mozilla, Eclipse and GNOME), we conclude that given a training set of sufficient size (approximately 500 reports per severity), it is possible to predict the severity with a reasonable accuracy (both precision and recall vary between 0.65-0.75 with Mozilla and Eclipse; 0.70-0.85 in the case of GNOME).",2010,0, 8379,The impact of technology scaling on soft error rate performance and limits to the efficacy of error correction,The soft error rate (SER) of advanced CMOS devices is higher than the failure rates of all other reliability mechanisms combined. Memories can be protected with error correction circuitry but SER in logic may limit future product reliability. Memory and logic scaling trends are discussed along with a method for determining logic SER.,2002,0, 8380,Fault Slip Through measurement in software development process,"The pressure to improve the software development process is not new, but in today's competitive environment there is even greater emphasis on delivering a better service at lower cost. In market-driven development where time-to-market is of crucial importance, software development companies seek improvements that can decrease the lead-time and improve the delivery precision. One way to achieve this is by analyzing the test process since rework commonly accounts for more than half of the development time. A large reason for high rework costs is fault slippage from earlier phases where faults are cheaper to find and remove. As an input to improvements, this article introduces a measure that can quantify this relationship. That is, a measure called faults-slip-through, which determines the faults that would have been more cost-effective to find in an earlier phase.",2010,0, 8381,Fault diagnosis using Neuro-Fuzzy Transductive Inference algorithm,"The primary goal of this research is to develop a novel intelligent fault diagnosis method employing a neuro-fuzzy transductive inference (NFTI) algorithm in order to solve the global model application problem, as well as the global availability of the model and sample data set. The method is characterized by a personal local model that is established for every new fault-symptom input in the fault diagnosis system, based on the closest samples to this fault-symptom data in an existing sample database. Compared with a similar inductive method (ANFIS - adaptive neuro-fuzzy inference system) on Fisher's Iris data set, the proposed classifier reduces the average test error by 15% and increases classification speed by approximately 30%. Applied to fault-symptom data sampled from an actual aeronautic thruster test, the presented system can accurately identify three fault states.
The results of the research indicate that the fault diagnostic strategy is superior in availability and efficacy to other inductive reasoning techniques for such fault diagnosis issues.",2008,0, 8382,Estimating Error-probability and its Application for Optimizing Roll-back Recovery with Checkpointing,"The probability for errors to occur in electronic systems is not known in advance, but depends on many factors including influence from the environment where the system operates. In this paper, it is demonstrated that inaccurate estimates of the error probability lead to loss of performance in a well known fault tolerance technique, Roll-back Recovery with Checkpointing (RRC). To regain the lost performance, a method for estimating the error probability, along with an adjustment technique, is proposed. Using a simulator tool that has been developed to enable experimentation, the proposed method is evaluated and the results show that the proposed method provides useful estimates of the error probability, leading to near-optimal performance of the RRC fault-tolerant technique.",2010,0, 8383,An error tolerant software equipment for human DNA characterization,"The problem addressed in this paper is to define a learning algorithm for the prediction of splice site locations in human DNA in the presence of sequence annotation errors in the training data. To this aim we generalize a previous machine learning algorithm. Experimental results on a common dataset including errors show the algorithm outperforms its previous version, in particular in the complexity of the produced hypothesis.",2003,0,5269 8384,Statistical classification of raw textile defects,"The problem of classification of defects occurring in textile manufacturing is addressed. A new classification scheme is devised in which different features, extracted from the gray level histogram, the shape, and cooccurrence matrices, are employed. These features are classified using a support vector machine (SVM) based framework, and an accurate analysis of different multiclass classification schemes and SVM parameters has been carried out. The system has been tested using two textile databases showing very promising results.",2004,0, 8385,Optimal sensor location for robust fault detection,"The problem of optimal sensor location for fault detection in Linear Time-Invariant (LTI) systems is considered. A novel characterization of undetectable faults is shown to give rise to a new optimal sensor location problem for fault detection in the presence of L2-disturbances. Due to technical difficulties associated with direct computation of the optimal solutions, an algorithm is outlined whereby optimal solutions can be computed iteratively. A numerical example is presented to describe the application of the theory and the execution of the algorithm.",2007,0, 8386,Position-Error Based Schemes for Bilateral Teleoperation with Time Delay: Theory and Experiments,"The problem of stable bilateral teleoperation with position-error based force feedback in the presence of time-varying, possibly unbounded communication delay is addressed. Two stabilization schemes are proposed that guarantee ""independent of delay"" stability of the teleoperator system. In particular, one of the schemes theoretically allows achieving an arbitrarily high force-reflection gain, which leads to better transparency without sacrificing the stability of the overall system. The stability analysis is based on the IOS small gain theorem for functional differential equations.
Experimental results are presented that demonstrate stable behaviour of the telerobotic system with time-varying communication delay during contact with a rigid obstacle",2006,0, 8387,Flexible power converters for the fault tolerant operation of Micro-Grids,"The progressive penetration level of Distributed Generation (DG) is destined to cause deep changes in the existing distribution networks, no longer considered as passive terminations of the whole electrical system. A possible solution is the realization of small networks, namely the Micro-Grids, reproducing in themselves the structure of the main production and distribution of the electrical energy system. In order to achieve an adequate reliability level for the microgrids, the identification and management of faults with the goal of maintaining micro-grid operation (fault tolerant operation) is quite important. In the present paper flexible power converters and a companion control algorithm for the fault tolerant operation of microgrids are presented. The effectiveness of such an algorithm and of the fault tolerant power converters is verified through computer simulations.",2010,0, 8388,Design of a High Performance Digital Architecture for Real-Time Correction of Radial Lens Distortion,"The radial lens distortion correction technique based on least squares estimation corrects a distorted image by expanding it nonlinearly so that straight lines in the object space remain straight in the image space. An absolute pipelined architecture is designed to correct radial lens distortion in images by partitioning the distortion correction algorithm into four main modules. The architecture includes a CORDIC-based rectangular to polar coordinate transformation module, a back mapping module for nonlinear transformation of corrected image space to distorted image space, a CORDIC-based polar to rectangular coordinate transformation module, and a linear interpolation module to calculate the intensities of four pixels simultaneously in the corrected image space. The system parameters include the expanded/corrected image size, distorted image size, the back mapping coefficients, distortion center and the center of the corrected image. The hardware architecture can sustain a high throughput rate of 30 4-MegaPixel (Mpixels) frames per second (total of 120 Mpixels). The pipelined architecture design will facilitate the use of dedicated hardware that can be mounted along with the camera unit.",2006,0, 8389,The use of historical defect imagery for yield learning,"The rapid identification of yield detracting mechanisms through integrated yield management is the primary goal of defect sourcing and yield learning. At future technology nodes, yield learning must proceed at an accelerated rate to maintain current defect sourcing cycle times despite the growth in circuit complexity and the amount of data acquired on a given wafer lot. As integrated circuit fabrication processes increase in complexity, it has been determined that data collection, retention, and retrieval rates will continue to increase at an alarming rate. Oak Ridge National Laboratory (ORNL) has been working with International SEMATECH to develop methods for managing the large volumes of image data that are being generated to monitor the status of the manufacturing process. This data contains an historical record that can be used to assist the yield engineer in the rapid resolution of manufacturing problems.
To date there are no efficient methods of sorting and analyzing the vast repositories of imagery collected by off-line review tools for failure analysis, particle monitoring, line width control and overlay metrology. In this paper we will describe a new method for organizing, searching, and retrieving imagery using a query image to extract images from a large image database based on visual similarity",2000,0, 8390,Data Mining and Analysis of Tree-Caused Faults in Power Distribution Systems,"The reliability and quality of power distribution systems are affected by different distribution faults. Trees are one of the major fault causes. In this paper, four different measures (actual measure, normalized measure, relative measure, and likelihood measure) are used to data mine the Duke Energy Distribution Outage Database for meaningful data features and to analyze the characteristics of tree-caused distribution faults. This paper also applies statistical techniques to analyze tree-caused faults with respect to several selected influential factors. The results can be used to assist power distribution engineers to provide a more effective fault restoration system and design a more effective tree-fault prevention strategy",2006,0, 8391,Modeling of fault-tolerant mobile agents execution in distributed systems,"The reliable execution of a mobile agent is a very important design issue in building a mobile agent system, and many fault-tolerant schemes have been proposed. Hence, in this paper, we present FATOMAS, a Java-based fault-tolerant mobile agent system based on an algorithm presented in an earlier paper. In contrast to the standard ""place-dependent"" architectural approach, FATOMAS uses the novel agent-dependent approach introduced in the paper. In this approach, the protocol that provides fault tolerance travels with the agent. This has the important advantage of allowing fault-tolerant mobile agent execution without the need to modify the underlying mobile agent platform. We derive the FATOMAS (Fault-Tolerant Mobile Agent System) design, which offers user-transparent fault tolerance that can be activated on request, according to the needs of the task, and also discuss how transactional agents with various types of commitment constraints can commit. Furthermore this paper proposes a solution for effective agent deployment using dynamic agent domains.",2005,0, 8392,Geometric Approach to Fault Detection and Isolation in Multivariate Dynamical Systems,"The purpose of this study is to develop a geometric approach to fault detection and isolation (FDI) in multivariate dynamical systems. This development is validated by applying this approach to a frame, and the FDI results for multiple-input multiple-output (MIMO), multiple-input single-output (MISO), single-input multiple-output (SIMO), and single-input single-output (SISO) systems with stochastic inputs and in deterministic and probabilistic spaces have been compared. A proper distance function based on the estimated parameters obtained from the parametric system identification method is used in the geometric approach. ARX (Auto Regressive with exogenous input) and VARX (Vector Auto Regressive with exogenous input) models with the same number of estimated parameters are used in all of the above-mentioned models.
The obtained results reveal that by increasing the number of inputs and/or outputs and/or using a probabilistic distance function, the classification errors decrease.",2007,0, 8393,An investigation of MML methods for fault diagnosis in mobile robots,"The purpose of this study is to evaluate the utility of a diagnosis technique, which uses minimum message length (MML) for autonomous mobile robot fault diagnosis. A simulator was developed for a behavior-based robotic system and results were gathered for over 24,000 simulations varying the level of test noise and the components with simulated failures. The results showed that the MML diagnosis technique did not perform well as a turn-key solution. In two different data sets, only 0.59% and 1.19% of the test cases were correctly diagnosed and none of the cases with multiple failures were identified correctly. This paper presents the approach used to evaluate the new technique, the results, and a discussion of why MML diagnosis may not be appropriate for mobile robotics.",2004,0, 8394,Characterization of Gain Enhanced In-Field Defects in Digital Imagers,"The quality of images produced by a digital imager is degraded by the presence of defects, mainly hot pixels, which develop continuously during the imager's lifetime. We previously studied the spatial and temporal distributions of these defects (at ISO 400) and concluded that they most likely result from random radiation and are not material related. With the advancement in imaging technology, the noise level at high ISO has been overcome and new cameras have a wider ISO range (ISO 100-6400). ISO gain is applied to all pixels, good or defective; thus defect parameters get amplified, causing defects to become more visible at high ISO settings. Preliminary defect identification with high ISO has revealed 2 to 3 times more defects at ISO 1600 compared to the standard ISO 400 setting. Amplification of the defect parameters causes defects to become more distinguishable relative to the background noise level. In fact, by measuring the distribution of defect parameters, our experimental results suggest that 2-3% of the faulty pixels behave as stuck-high defects at ISO 1600. With more defects found at higher ISO, we gain a more complete map of defects from each sensor and thus improve our statistical analysis of the spatial and temporal defect distributions. Our current results show that although more defects were found in the tested sensors, the defects are very small and not clustered, pointing to a random defect source rather than a material related one.",2009,0, 8395,Fault diagnosis based on radial basis function neural network in analog circuits,"The radial basis function (RBF) neural network (NN) is a type of feedforward network. It has many good properties, such as powerful function approximation, classification, and rapid learning. A sinusoidal input to an analog circuit is simulated with constant amplitude and different frequencies; frequency domain features of the output response are used to build a fault dictionary. The paper proposes an RBF NN method for response analysis and fault diagnosis.
Results illustrate that this method is feasible and has many powerful features, such as diagnosing and locating faults quickly and accurately.",2004,0, 8396,Understanding the sources of software defects: a filtering approach,"The paper presents a method proposal of how to use product measures and defect data to enable understanding and identification of design and programming constructs that contribute more than expected to the defect statistics. The paper describes a method that can be used to identify the most defect-prone design and programming constructs, and the method proposal is illustrated on data collected from a large software project in the telecommunication domain. The example indicates that it is feasible, based on defect data and product measures, to identify the main sources of defects in terms of design and programming constructs. Potential actions to be taken include less usage of particular design and programming constructs, additional resources for verification of the constructs and further education into how to use the constructs",2000,0, 8397,Measuring Gain Imbalance and Quadrature Error in WiMAX Transmitters,"The paper presents a new method for measuring the most common I/Q impairments affecting WiMAX transmitters, namely gain imbalance and quadrature error. The method operates on the signal at the output of the transmitter, acquired through a general purpose I/Q receiver. It is designed to correctly take into account the peculiarities of systems compliant with the standard IEEE 802.16-2004, such as the potentially noxious effects of impairments on signal normalization. The results of experiments carried out on standard-compliant signals are also given.",2008,0, 8398,Using a square-wave signal for fault diagnosis of analog parts of mixed-signal embedded systems controlled by microcontrollers,"The paper presents a new method of single soft fault detection and localisation of analog parts in embedded mixed-signal electronic systems controlled by microcontrollers. The method consists of three stages: a pre-testing stage of fault dictionary creation using identification curves, a measurement stage based on stimulating the tested circuit by a square-wave signal generated by the microcontroller and measurement of voltage samples of the circuit response by the internal ADC of the microcontroller. At the final stage, fault detection and localisation are performed by the microcontroller. The BIST consists only of internal devices of the microcontroller mounted in the system. Hence, this approach simplifies the structure of BISTs, which allows test costs to be reduced.",2007,0, 8399,An Error Driven PID Controller for Maximum Utilization of Photovoltaic Powered PMDC Motor Drives,"The paper presents a novel maximum utilization scheme for photovoltaic (PV) powered permanent magnet DC (PMDC) motor drives. The power from the PV array is used to operate the PMDC motor driving a mechanical pumping/ventilation/refrigeration load. The dc motor drive system is controlled by a dynamic multi-loop error driven proportional-integral-derivative (PID) controller. The control scheme generates the required pulsing using a pulse-width modulated (PWM) switching block for complementary switching of IGBT/MOSFET solid state devices in order to control the effective magnitude of the motor armature voltage for speed control and maximum utilization combined actions.
Speed reference tracking and maximum photovoltaic power utilization are ensured by multi-loop dynamic action in the case of solar irradiation and temperature variations as well as load excursions.",2007,0, 8400,A COTS wrapping toolkit for fault tolerant applications under Windows NT,"The paper presents a software toolkit that allows one to enhance the fault tolerant characteristics of a user application running under a Windows NT platform through sets of interchangeable and customizable fault tolerant interposition agents (FTI agents). Interposition agents are non-application software programs executed in an intermediate layer between the software application and the operating system in order to wrap the application software, intercepting and possibly modifying all the communications between the application and the surrounding hardware and software environment. The process is completely transparent to both the user application and the operating system and allows the achievement of a high degree of software based reliability in a wide variety of domains",2000,0, 8401,Soft error detection and correction for FFT based convolution using different block lengths,"The structure of radix-2 Fast Fourier Transforms of length 2^n, where n is an integer, is used to propose a new soft error detection and correction scheme for transform based convolution. The scheme can provide up to 100% detection and correction of isolated soft errors for, in many cases, approximately double the original system cost in terms of area and/or computational complexity. This is a substantial reduction when compared with conventional Triple Modular Redundancy. The method can be used for both hardware and software implementations of transform-based convolution.",2009,0, 8402,Classification and remediation of electrical faults in the switched reluctance drive,"The switched reluctance (SR) drive is known to be fault tolerant, but it is not fault free. The goals of this study are the systematic classification of all electrical faults, for short and open circuits, in the SR drive (excluding the controller itself) and the investigation of fault patterns and possible remediation. Each situation is analyzed via finite-element analysis and/or experiments. The transient effects during the faults are described. Possible remediation schemes other than disabling the faulted phase are explored. There is a particular focus on the switch short circuit, for which new results are presented.",2006,0,8449 8403,Applications of the Fault-Tolerance Best-Effort Multicast Algorithm,"The papers [1][2] presented an adaptive best-effort tree construction algorithm with fault tolerance. In this paper, we present applications to [1] showing how to (1) build QoS trees and paths, (2) build fault-tolerant multi-source and single or multi-root trees, (3) build link-disjoint trees, (4) discover the tree topology, and (5) support multi-party conferencing. We demonstrate the operations of the algorithms using computer simulations.",2006,0, 8404,A comparison of algorithm-based fault tolerance and traditional redundant self-checking for SEU mitigation,"The use of an algorithmic, checksum-based ""EDAC"" (error detection and correction) technique for matrix multiply operations is compared with the more traditional redundant self-checking hardware and retry approach for mitigating single event upset (SEU) or transient errors in soft, radiation tolerant signal processing hardware.
Compared with the self-checking approach, the checksum-based EDAC technique offers a number of advantages including lower size, weight, power, and cost. In a manner similar to the SECDED (single error correction/double error detection) EDAC technique commonly used on memory systems, the checksum-based technique can detect and correct errors on the same processing cycle, reducing transient error recovery latency and significantly improving system availability. The paper compares the checksum-based technique with the self-checking technique in terms of failure rates, upset rates, coverage, percentage overhead, detection latency, recovery latency, size, weight, power, and cost. The paper also looks at the percentage overhead of the checksum-based technique, which decreases as the size of the matrix increases",2001,0, 8405,Computer vision based offset error computation for web printing machines using FPGA,"The use of computer vision based approaches has started to bring intelligence to many modern machines. Such high performance image processing systems can be efficiently built using Field Programmable Gate Arrays (FPGAs). This paper presents the design and implementation of an FPGA based computer vision system for offset error computation of a newly proposed registration mark pattern in 4-color web offset printing machines. The color printing quality of an offset machine degrades due to a genuine problem of registration error caused by improper alignment of the prints from each process color section. This system can be used in an automated registration control system for a web printing press that will control the position of each of the CMYK cylinders depending on the calculated offset error, which will improve printing quality.",2010,0, 8406,Segmentation of contrast enhanced CT images for attenuation correction of PET/CT data,"The use of contrast media in PET/CT imaging has been suggested to cause PET artifacts during the CT-based attenuation correction process. In this paper, we evaluate three algorithms that segment intravenous (IV) contrast-enhanced tissue from chest CT images to minimize possible artifacts. The algorithms that were evaluated are template matching, 3D region growing, and a snake-based technique. These methods were tested using 5 patient studies. The segmentation result of each method was compared to its corresponding manually segmented images on a voxel-wise basis, and a squared difference between the two segmentation results was calculated. The averaged squared differences over all 5 patients for the template matching, region growing, and snake-based methods were 19.0%±7.1%, 65.2%±51.5%, and 13.5%±6.5%, respectively. We concluded that the snake model is the most suitable of the three methods for efficiently segmenting the contrast-enhanced CT images",2004,0, 8407,C-type filter design based on power-factor correction for 12-pulse HVDC converters,"The use of conventional (single-tuned and high-pass) filters for HVDC systems can be difficult due to the likelihood of resonance caused by single-tuned filters, and considerable power losses in high-pass filters. This paper presents the C-type filter as an alternative to the conventional filtering in HVDC systems, to improve both the power quality and the power factor, and to reduce the power losses. The merits and limitations of the C-type filter compared to the conventional filters are studied in this paper, and a design method is presented based on power factor correction over a wide range of frequencies.
The performance of the designed filters is evaluated, and their superiority demonstrated, for a 12-pulse HVDC converter system, based on simulation studies conducted in the MATLAB software environment.",2008,0, 8408,Integrity-Preserving Replica Coordination for Byzantine Fault Tolerant Systems,"The use of good random numbers is essential to the integrity of many mission-critical systems. However, when such systems are replicated for Byzantine fault tolerance, a serious issue arises, i.e., how do we preserve the integrity of the systems while ensuring strong replica consistency? Despite the fact that there exists a large body of work on how to render replicas deterministic under the benign fault model, the solutions regarding random number control are often overly simplistic without regard to the security requirement, and hence, they are not suitable for practical Byzantine fault tolerance. In this paper, we present a novel integrity-preserving replica coordination algorithm for Byzantine fault tolerant systems. The central idea behind this algorithm is that all random numbers to be used by the replicas are collectively determined, based on the contributions made by a quorum of replicas, at least one of which is correct. We have implemented the algorithm in Java and conducted extensive experiments, in both a LAN testbed and an emulated WAN environment. We show that our algorithm is particularly suited for Byzantine fault tolerant systems operating in the LAN environment, or where replicas are connected by high-speed low-latency networks.",2008,0, 8409,Evolutionary based techniques for fault tolerant field programmable gate arrays,"The use of SRAM-based field programmable gate arrays (FPGAs) is becoming more and more prevalent in space applications. Commercial-grade FPGAs are potentially susceptible to permanently debilitating single-event latchups (SELs). Repair methods based on evolutionary algorithms may be applied to FPGA circuits to enable successful fault recovery. This paper presents the experimental results of applying such methods to repair four commonly used circuits (quadrature decoder, 3-by-3-bit multiplier, 3-by-3-bit adder, 4-to-7 decoder) into which a number of simulated faults have been introduced. The results suggest that evolutionary repair techniques can improve the process of fault recovery when used instead of, or as a supplement to, triple modular redundancy (TMR), which is currently the predominant method for mitigating FPGA faults",2006,0, 8410,A History-Based Diagnosis Technique for Static and Dynamic Faults in SRAMs,"The usual techniques for memory diagnosis are mainly based on signature analysis. They consist in creating a fault dictionary that is used to determine the correspondence between the signature and the fault models affecting the memory. The effectiveness of such diagnosis methods is therefore strictly related to the fault dictionary accuracy. To the best of our knowledge, most existing signature-based diagnosis approaches target static faults only. In this paper, we present a new diagnosis approach that represents an alternative to signature-based approaches. This new diagnosis technique, named history-based diagnosis, makes use of the effect-cause paradigm already developed for logic design diagnosis. It consists in creating a database containing the history of operations (read and write) performed on a faulty memory core-cell.
This information is crucial to track the root cause of the observed faulty behavior, and it can be used to generate the set of possible fault primitives representing the set of suspected fault models. This new diagnosis method is able to identify static as well as dynamic faults. Although applied to SRAMs in this paper, it can also be effective for other memory types such as DRAMs. Experimental results are provided to prove the efficiency of the proposed methodology in generating a list of suspected faults as well as the location of the faulty components in the memory.",2008,0, 8411,4K-1 Volume Visualization and Error Analysis Using 3D ARFI Imaging Data,"The utility and accuracy of volume visualization using 3D ARFI image data was investigated. Volumes of ARFI data were collected from custom tissue-mimicking phantoms by using a translation stage to increment the position of the imaging transducer through the elevation plane. Both vessel phantoms with stiff or soft plaques and a liver phantom with a stiff inclusion were used in the study. Simple displacement threshold techniques were used to segment inclusions from phantom background materials. ARFI-based measurements of inclusion volume were accurate to within 4% error for the liver phantom and 10.6% error for the vessel phantoms",2006,0, 8412,Incorporation of security and fault tolerance mechanisms into real-time component-based distributed computing systems,"The volume and size of real-time (RT) distributed computing (DC) applications are now growing faster than in the last century. The mixture of application tasks running on such systems is growing, as is the shared use of computing and communication resources for multiple applications including RT and non-RT applications. The increase in use of shared resources brings with it the need for effective security enforcement. More specifically, the needs are to prevent unauthorized users: (1) from accessing protected information; and (2) from disturbing bona-fide users in getting services from server components. Such disturbances are also called denial-of-service attacks",2001,0, 8413,Diagnosis and identification of fault waveform of power,"The waveform of a power system parameter is closely related to the system's status; it can reflect and foresee that a certain status or fault will occur in the power system. In this paper, chaos and fractal theory is used to analyze parameter waveforms, extract characteristic values, build a standard model library of parameter waveforms, and evaluate the overall condition of the power system. For power system fault waveform characteristics, a new identification algorithm based on fractal dimension and average value density is proposed. This algorithm can recognize all kinds of fault waveforms effectively, so the analysis results can be used for fault diagnosis and status monitoring of the power system.",2008,0, 8414,An Automated System for Analyzing Impact of Faults in IP Telephony Networks,"The widespread use of IP telephony (IPT) has introduced corresponding management issues related to the diagnosis and impact analysis of faults in the IPT network. There are complex logical relationships between the various elements present in the IPT network, ranging from call processing engines, PSTN voice gateways, and conferencing and voice mail servers, to endpoints like IP-based handsets.
Without an understanding of these complex relationships, traditional element-oriented network fault-management applications are inadequate to assess IPT service impact in the presence of a network or system fault. In this paper, we present a proposal for determining the root cause and assessing the impact of a fault in an IPT network on voice services and end users by automatically executing a series of diagnostic tests from various probe points in the network, and correlating the resulting information with the faults reported from the network. We use a CIM-based model of the IPT network to aid the diagnostic and impact assessment process",2006,0, 8415,Experimental evaluation of the TIE of the crystal clock vs. GPS timing receiver without/with the negative sawtooth correction,"The time interval error (TIE) is a measure of inaccuracy that is now widely used in timekeeping and in evaluating clock performance. In this paper, we investigate and develop a system (hardware and software) for TIE measurements. The system contains two TIE counters, a frequency divider, a reference rubidium clock, a local crystal clock, and a GPS timing receiver. The applied software reads and processes the TIE and decodes the negative sawtooth from the GPS timing receiver. The objective is to estimate the TIE of a local clock with and without the sawtooth correction.",2006,0, 8416,Transparent recovery from intermittent faults in time-triggered distributed systems,"The time-triggered model, with tasks scheduled in static (off-line) fashion, provides a high degree of timing predictability in safety-critical distributed systems. Such systems must also tolerate transient and intermittent failures, which occur far more frequently than permanent ones. Software-based recovery methods using temporal redundancy, such as task reexecution and primary/backup, while incurring performance overhead, are cost-effective methods of handling these failures. We present a constructive approach to integrating runtime recovery policies in a time-triggered distributed system. Furthermore, the method provides transparent failure recovery in that a processor recovering from task failures does not disrupt the operation of other processors. Given a general task graph with precedence and timing constraints and a specific fault model, the proposed method constructs the corresponding fault-tolerant (FT) schedule with sufficient slack to accommodate recovery. We introduce the cluster-based failure recovery concept which determines the best placement of slack within the FT schedule so as to minimize the resulting time overhead. Contingency schedules, also generated offline, revise this FT schedule to mask task failures on individual processors while preserving precedence and timing constraints. We present simulation results which show that, for small-scale embedded systems having task graphs of moderate complexity, the proposed approach generates FT schedules which incur about 30-40 percent performance overhead when compared to corresponding non-fault-tolerant ones.",2003,0, 8417,Research on Embedded Airborne Electronic Equipment Fault Diagnosis Expert System,"Traditional airborne electronic equipment fault diagnosis systems have the disadvantage of low dynamic processing ability. Their weaknesses of low effectiveness and accuracy are gradually exposed because they depend heavily on the sample set and on fixed, built-in modules.
The embedded airborne electronic fault diagnosis system built in this paper performs dynamic processing in the subsystems, then summarizes the information and makes an integrated diagnosis using the expert system, which is embedded in the Flash memory of the microprocessor, thus achieving real-time capability. The forward extraction rule based on the RETE algorithm is adopted in the expert system inference engine, which avoids repeated matching, reduces time complexity, and improves efficiency.",2010,0, 8418,Video error correction using steganography,"The transmission of any data is always subject to corruption due to errors, but video transmission, because of its real time nature, must deal with these errors without retransmission of the corrupted data. The errors can be handled by using forward error correction in the encoder or error concealment techniques in the decoder. The MPEG-2 compliant coder described here uses steganography to transmit data for error correction in conjunction with several error concealment techniques in the decoder. The decoder resynchronizes more quickly with fewer errors than traditional resynchronization techniques. This provides for a much higher quality picture in an error-prone environment while creating an almost imperceptible degradation of the picture in an error free environment",2001,0, 8419,Improved quality of experience of reconstructed H.264/AVC encoded video sequences through robust pixel domain error detection,"The transmission of H.264/AVC encoded sequences over noisy wireless channels generally adopts the error detection capabilities of the transport protocol to identify and discard corrupted slices. All the macroblocks (MBs) within each corrupted slice are then concealed. This paper presents an algorithm that does not discard the corrupted slices but tries to detect those MBs which produce major visual artefacts and then conceals only these MBs. Results show that the proposed solution, based on a set of image-level features and two support vector machines (SVMs), manages to detect 94.6% of those artefacts. Gains in peak signal-to-noise ratio (PSNR) of up to 5.74 dB have been obtained when compared to the standard H.264/AVC decoder.",2008,0, 8420,The hybrid video error concealment algorithm with low complexity approach,"The transmission of multimedia data over the Internet has been widely used in many applications recently. However, video data is very sensitive to transmission errors caused by packet loss. This induces error propagation in the decoded video data and makes the video quality very poor. In this paper we propose a hybrid error concealment algorithm with a low complexity approach. By using the bidirectional frame as the reference frame, error propagation in inter frame coding mode can be largely reduced. Our approach has the advantage of better video quality within a small search range and a low complexity requirement.",2003,0, 8421,Study on the dynamic performance characteristics of HVDC control and protections for the HVDC line fault,"The travelling wave protection and undervoltage protection, which are initiated by the rate of change of voltage, cannot detect high impedance faults. The operation time of differential protection is so long that it cannot play the role of back-up protection for the DC line fault. What's more, the HVDC control system has great influence on the performance characteristics of the DC line protection, especially in the case of low load levels.
The differential protection will fail to trip because of the current fluctuation caused by the control system. Based on EMTDC simulation including the actual HVDC line parameters and the control and protection models, the currently used DC line protection and the influence of the control system are evaluated in this paper, analyzed by combining the fault records and the simulation results. An improvement scheme is also proposed for the actual system.",2009,0, 8422,An experimental evaluation of inspection and testing for detection of design faults,"The two most common strategies for verification and validation, inspection and testing, are evaluated in a controlled experiment in terms of their fault detection capabilities. In previous work, these two techniques were compared as applied to code. In order to compare the efficiency and effectiveness of these techniques at a higher abstraction level than code, this experiment investigates inspection of design documents and testing of the corresponding program, to detect faults originating from the design document. Usage-based reading (UBR) and usage-based testing (UBT) were chosen for inspections and testing, respectively. These techniques provide similar aid to the reviewers as to the testers. The purpose of both fault detection techniques is to focus the inspection and testing from a user's viewpoint. The experiment was conducted with 51 Master's students in a two-factor blocked design; each student applied each technique once, each application on different versions of the same program. The two versions contained different sets of faults, including 13 and 14 faults, respectively. The general results from this study show that when the two groups of subjects are combined, the efficiency and effectiveness are significantly higher for usage-based reading and that testing tends to require more learning. Rework is not taken into account; thus the experiment indicates strong support for design inspection over testing.",2003,0, 8423,Reliability and Fault Tolerance in Trust,"The ubiquity of information systems has made correct and reliable operation of critical systems indispensable. The trustworthiness of digital systems is increasingly dependent on the trustworthiness of the software. While hardware trustworthiness is by no means a solved problem, system-wide problems are increasingly blamed on poorly tested, defective software. System trustworthiness is therefore a combination of several key software attributes: reliability, safety, security, availability, performance, fault-tolerance, and privacy. Some of these attributes can be directly measured, some cannot. For example, performance and availability can be numerically measured; safety and security cannot. Further, several of these attributes may conflict, such as security and performance. Therefore, demonstrating that the software of a system can be trusted requires a combination of qualitative arguments concerning the level achieved for some attributes in combination with the numerical (quantitative) scores measured for others. In order to understand the trustworthiness and security of a software system, we first need to understand its reliability and fault tolerance",2006,0, 8424,A tool for automatically translating dynamic fault trees into dynamic bayesian networks,"The unreliability evaluation of a system including dependencies involving the state of components or the failure events can be performed by modelling the system as a dynamic fault tree (DFT).
The combinatorial technique used to solve standard Fault Trees is not suitable for the analysis of a DFT. The conversion into a dynamic Bayesian network (DBN) is a way to analyze a DFT. This paper presents a software tool allowing the automatic analysis of a DFT by exploiting its conversion to a DBN. First, the architecture of the tool is described, together with the rules implemented in the tool to convert dynamic gates into DBNs. Then, the tool is tested on a case study system: its DFT model and the corresponding DBN are provided and analyzed by means of the tool. The obtained unreliability results are compared with those returned by other tools, in order to verify their correctness. Moreover, the use of DBNs allows further results, such as diagnostic and sensitivity indices, to be computed on the model",2006,0, 8425,DefSim: Measurement Environment for CMOS Defects,"This article describes a measurement environment for the study of two CMOS defect types: opens and shorts. These defect types are physically implemented in silicon in a wide variety of locations inside a set of digital standard cells and small circuits. The integrated circuit (IC) with the collection of defects is mounted onto a plug-and-play measurement box, which is connected to the PC via a USB cable. Two measurement methods are supported by the IC: voltage and IDDQ testing. The DefSim bundle represents a unique and easy to handle educational and research environment. In the paper we also consider a simple learning flow, which is targeted at students whose main specialization is general microelectronics (not digital testing specifically)",2006,0, 8426,Monitoring and fault diagnosis system for the diesel engine based on instantaneous speed,"This article makes use of marine diesel engine local instantaneous speed signals. These signals are non-stationary and non-Gaussian with a low signal-to-noise ratio, and they are processed with a blind source separation (BSS) algorithm combined with other time-domain and frequency-domain analysis methods. The hardware and software design of a vibration monitoring and fault diagnosis system for the diesel engine makes use of the PC104 bus, highly synchronized sampling, an asynchronous FIFO, the DOS operating system, and so on. It can realize accurate on-line diagnosis of the marine diesel engine in real time.",2010,0, 8427,A novel signal processing and defect recognition method based on multi-sensor inspection system,"This article presents a novel signal processing and defect recognition method for an MFL inspection system. During preprocessing, time-frequency analysis, median and adaptive filtering, and interpolation processing are adopted to preprocess the MFL inspection signal. In order to obtain high sensitivity and precision, we applied a multi-sensor data fusion technique to the inspection data. A wavelet basis function (WBF) neural network was used to recognize defect parameters.
By constructing a knowledge-based off-line inspection expert system, the defect recognition capability of the system was greatly improved.",2010,0, 8428,Dissolved gas analysis technique for incipient fault diagnosis in power transformers: A bibliographic survey,This article presents a bibliographic survey covering the last 40 years of research and development on the procedures for evaluating faults by dissolved gas analysis of power transformers.,2010,0, 8429,"Flexible, Any-Time Fault Tree Analysis with Component Logic Models","This article presents a novel approach to facilitating fault tree analysis during the development of software-controlled systems. Based on a component-oriented system model, it combines second-order probabilistic analysis and automatically generated default failure models with a level-of-detail concept to ensure early and continuous analysability of system failure behaviour with optimal effort, even in the presence of incomplete information and dissimilar levels of detail in different parts of an evolving system model. The viability and validity of the method are demonstrated by means of an experiment.",2010,0, 8430,On the Error-Free Realization of a Scaled DCT Algorithm and Its VLSI Implementation,"This brief is concerned with the efficient and error-free implementation of the order-8 Linzer-Feig (L-F) scaled discrete cosine transform (sDCT). We present a novel 3-D algebraic integer encoding scheme which maps the transform basis functions (transcendental functions such as cosine and tangent) to integer values, so that the quantization errors can be minimized and the cross-multiplications (in the signal path) can be avoided. This scheme also allows the separable computation of a 2-D DCT in multiplication-free, fast, and efficient architectures with a process rate of 80 mega-samples/sec. The proposed scheme also reduces the latency and the power consumption compared to previously employed designs for DCT implementations.",2007,0, 8431,Learning from errors: A bio-inspired approach for hypothesis-based machine learning,"This contribution presents an approach extending existing learning strategies based on situation-operator-modeling (SOM), which can be used to model interactions with the environment and to represent the knowledge of cognitive systems. The approach proposes a planning process that uses hypotheses to bridge knowledge gaps; each applied hypothesis is refined by a subsequent check. The hypotheses are inspired by human errors according to Dörner's classification, which is related to interaction within complex dynamic systems. The programmed implementation of the approach is based on an experimental environment using a software tool for high-level Petri nets.",2008,0, 8432,Demonstrating the Resilience of Geographical Routing to Localization Errors,"This demonstration concerns geographic forwarding (GF) as an effective solution for data dissemination (from sensors to a sink) in wireless sensor networks (WSNs). In particular, we focus on demonstrating the different degrees of resilience of a recent solution, ALBA-R, to localization errors, which are highly likely to occur in WSNs.
GF routing protocols are based on the nodes knowing their own location information as well as that of the sink, which is the intended destination of a packet.",2007,0, 8433,Application of adaptive probing for fault diagnosis in computer networks,This dissertation presents an adaptive-probing-based tool for fault diagnosis in computer networks, addressing the problems of probe station selection and probe selection. We first present algorithms to place probe stations to monitor the network in the presence of various failures. We then present algorithms for probe selection in an adaptive manner to perform fault diagnosis. We present algorithms for both deterministic and non-deterministic environments. We evaluate the proposed algorithms through comprehensive simulation studies. The dissertation is available at http://www.cis.udel.edu/~natu/papers/dissertation.pdf.,2008,0, 8434,A microcomputer-based unique digital fault diagnosis scheme of radial transformer feeders,This investigation aims to develop a comprehensive microcomputer-based fault protection scheme for indicating various types of faults in a radial transformer feeder in subtransmission or distribution circuits and for protecting the feeder against those faults. The type of fault in the feeder can also be determined by observing the phase angles of the primary currents of the transformer with which the feeder is connected at its secondary. This method dispenses with conventional electromagnetic relays and HRC fuses in feeder protection schemes. The developed system provides the necessary information regarding the faulted phase and displays appropriate symbols indicating the type of fault on the display screen of the microcomputer.,2006,0, 8435,Accurate fault prediction of BlueGene/P RAS logs via geometric reduction,"This investigation presents two distinct and novel approaches for the prediction of system failures occurring in Oak Ridge National Laboratory's Blue Gene/P supercomputer. Each technique uses raw numeric and textual subsets of large data logs of physical system information such as fan speeds and CPU temperatures. This data is used to develop models of the system capable of sensing anomalies, or deviations from nominal behavior. Each algorithm predicted event-log-reported anomalies in advance of their occurrence, and one algorithm did so without false positives. Both algorithms predicted an anomaly that did not appear in the event log. The system administrator later confirmed that the fault missing from the log but predicted by both algorithms had indeed occurred.",2010,0, 8436,Defect classification as problem classification for quality control in the software project management by DTL,"There are various reasons and causes which lead to failure of software, arising anywhere from the starting point of requirement analysis up to the launching of the product in the market. One has to perform root cause analysis of software failures so that these failures are not reproduced. There are various problems due to which software may exhibit bugs, errors, faults, and ultimately failure. Listing the problems and analyzing them after reporting is a must before fixing them and investigating their root causes. The classification of the problems will definitely help us to sort out the problems and to get to their root.
Once problems have been reported, they can be classified using any classification method, depending upon their properties and values. We combine decision tree learning with the current problems as input. The decision tree is trained with training examples of similar types of problems, their properties, and values. DTL helps us to classify the problems and ultimately gives us the sorted problems for analysis. Analysis and classification of the problems will also help in the quality control of the product. We have taken defect classification as an example in this paper.",2010,0, 8437,Relative cost of fault-tolerant transmission for connecting distributed resources,"There exists a need for additional transmission capacity to connect distributed resources such as wind energy from remote locations to the power grid. The high cost of running transmission lines necessitates having a way to minimize the length of transmission required to connect the distributed resources to load centers. In addition, the substations that connect to the transmission lines need to be fault tolerant by ensuring that the loss of any one transmission line does not result in the loss of power to any substation. In this paper, the relative cost of connecting a range of sources to a range of loads with and without fault tolerance is presented. Fault tolerance is achieved through a 2-path redundant graph approach. Studies show that the additional cost of ensuring fault tolerance for selected schemes ranges between 8% and 33%. This percentage value can support decision making in practice.",2010,0, 8438,Design and application of Gray Field™ technology for defect inspection systems,"There has been increased interest in optical inspection tools that utilize UV illumination. This originates from the belief that diffraction limits will render tools employing longer wavelengths blind to many defects identified as being critical. In response to concerns over the applicability of UV illumination to rapid defect detection, we performed a series of experiments to explore and develop new inspection techniques that provide the capability of detecting the dimensionally challenging defects associated with advanced technology nodes while maintaining the high speeds needed for chipmakers' volume production lines. Initial results indicated that by radically redesigning the collection optics to a multiple-perspective configuration that compiles information from six different scattering and reflecting directions, improved sensitivity, noise rejection and wafer throughput could be realized while using laser scanning illumination in the visible region of the spectrum. Also, defects that traditionally could only be observed in bright field tools were now detectable with ease at production-worthy throughputs. Results are presented that show the optical experimental design data and simulations, and are corroborated by examples of defects from the resulting production defect inspection system, Compass™. In addition, electron micrographs of a range of detected defects are presented that show the system versatility and the exact nature of the defects, thus allowing a clear understanding of the increase in sensitivity, speed and dimensional range these tools provide over traditional instrumentation.",2001,0, 8439,Sensorless operation of a fault tolerant PM drive,There is much interest in the use of electric drives for aeroengine fuel pumps on the more electric aircraft.
As a safety-critical system, the drive must possess fault-tolerant properties in order to meet the reliability requirements. Previous work has resulted in the development of a six-phase fault tolerant permanent magnet drive for this application. This drive exhibits fault tolerance in both the motor and the power electronics but has the weakness of depending on a single shaft position sensor. This is addressed through the use of a position estimation scheme. The sensorless scheme makes use of a flux linkage-current-angle model to estimate the rotor position. The fault tolerant drive has six independent phases, and the voltage and current measurements from each of these phases are used by the estimation scheme. Two different position estimation algorithms are examined, based on 'per-phase' and 'all-phase' approaches.,2003,0, 8440,A hybrid scatter correction method for cone-beam CT,"X-ray scatter strongly degrades the reconstructed image in cone-beam computed tomography (CBCT). X-ray scatter correction is one of the active research topics in 3D CBCT imaging. A hybrid scatter correction method is proposed, which combines a beam attenuation grid with a beam stop block. Combined with the scatter correction equipment, a scatter correction algorithm is designed in this paper to reduce the X-ray scatter. The scatter correction performance is investigated by real experiments with an industrial metallic bolt. The results showed that the proposed method suppresses scatter artifacts more effectively than a method based on an attenuation grid alone.",2010,0, 8441,An improved approach to the simulation of Single-Line-to-Ground faults in transmission networks,"This paper presents a mathematical model for studying the single-line-to-ground fault in transmission networks, offering a contribution towards a methodical and accurate analysis of complex systems by providing an evolution of the 'circuital method' able to solve the case of a system in which more than one line supplies the fault.",2009,0, 8442,"Basic measurement sites, methods, and associated errors","This article consists of a PowerPoint presentation on EMC test facilities. The topics dealt with include reverberation chamber testing, electromagnetic absorbers, spectrum analysers, antenna calibration, immunity testing and radiation patterns.",2008,0, 8443,Fault-Tolerance in Dataflow-Based Scientific Workflow Management,"This paper addresses the challenges of providing fault-tolerance in scientific workflow management. The specification and handling of faults in scientific workflows should be defined precisely in order to ensure consistent execution against process-specific requirements. We identified a number of typical failure patterns that occur in real-life scientific workflow executions. Following the intuitive recovery strategies that correspond to the identified patterns, we developed methodologies that integrate recovery fragments into fault-prone scientific workflow models. Compared to existing fault-tolerance mechanisms, the propositions reduce the effort of workflow designers by defining recovery fragments automatically. Furthermore, the developed framework implements the necessary mechanisms to capture faults from the different layers of a scientific workflow management architecture.
Experience indicates that the framework can be employed effectively to model, capture and tolerate the typical failure patterns that we identified.",2010,0, 8444,Monitoring and Diagnosis of External Faults in Three Phase Induction Motors Using Artificial Neural Network,"This paper addresses the possibility of integrating a monitoring and diagnostic technique for external motor faults (e.g., phase failure, unbalanced voltage, locked rotor, undervoltage, overvoltage, phase sequence reversal of the supply voltage, mechanical overload) into a batch simulation with a digital protection set, using an artificial neural network (ANN) for a three-phase induction motor. The proposed set-up has been simulated using ""Matlab/Simulink"" software and tested for external motor faults. The simulation results clearly show that well-trained neural networks are capable of early detection and diagnosis of external induction motor faults, validating the proposed setup as a simple, reliable and effective protection scheme for three-phase induction motor fault identification using an artificial neural network (ANN).",2007,0, 8445,Design of robust fault detection filter for hybrid switched systems,"This paper addresses the problem of designing a fault detection system for switched systems with unknown inputs. A residual-generator-based robust fault detection filter (RFDF) is used, where the switching signal is assumed to be known and the continuous states are estimated, resulting in the generation of residual signals for each linear model. A dwell-time constraint is used to ensure the stability of the switched system. The RFDF design is formulated as an optimization problem and solved iteratively by linear matrix inequalities (LMIs). Stability is analyzed using switched Lyapunov functions. Finally, a practical example from lateral vehicle dynamics is provided to illustrate the functionality of the proposed technique.",2010,0, 8446,Detecting Inconsistent Values Caused by Interaction Faults Using Automatically Located Implicit Redundancies,"This paper addresses the problem of detecting inconsistent values caused by interaction faults originating from an external system. This type of error occurs when a correctly formatted message that is not corrupted during transmission is generated with a field that contains incorrect data. When traditional schemes cannot be used, one alternative is resorting to receiver-based strategies that employ implicit redundancies - relations between events or data, often identified by a human expert. We propose an approach for detecting inconsistent values using implicit redundancies which are automatically located in examples of communications. We show that, even without adding any redundant information to the communication, the proposed approach can achieve a reasonable error detection coverage in fields where sequential relations exist. Other aspects, such as false alarms and latency, are also evaluated.",2008,0, 8447,Statistical analysis of network traffic for adaptive faults detection,"This paper addresses the problem of normal operation baselining for automatic detection of network anomalies. A model of network traffic is presented in which the studied variables are viewed as sampled from a finite mixture model. Based on the stochastic approximation of the maximum likelihood function, we propose baselining normal network operation, using the asymptotic distribution of the difference between successive estimates of model parameters.
The baseline random variable is shown to be stationary, with mean zero under normal operation. Anomalous events are shown to induce an abrupt jump in the mean. Detection is formulated as an online change point problem, where the task is to process the baseline random variable realizations sequentially and raise alarms as soon as anomalies occur. An analytical expression for the false alarm rate allows us to choose the design threshold automatically. Extensive experimental results on a real network showed that our monitoring agent is able to detect unusual changes in the characteristics of network traffic and adapt to diurnal traffic patterns, while maintaining a low alarm rate. Despite large fluctuations in network traffic, this work proves that tailoring traffic modeling to specific goals can be efficiently achieved.",2005,0, 8448,Baselining network traffic and online faults detection,"This paper addresses the problem of normal operation baselining for automatic detection of network anomalies. A model of network traffic is presented in which the studied variables are viewed as sampled from a finite mixture model. Based on the stochastic approximation of the maximum likelihood function, we propose baselining normal network operation, using the asymptotic distribution of the difference between successive estimates of model parameters. The baseline random variable is shown to be stationary, with mean zero under normal operation. Anomalous events are shown to induce an abrupt jump in the mean. Detection is formulated as an online change point problem, where the task is to process the baseline random variable realizations sequentially and raise alarms as soon as anomalies occur. An analytical expression for the false alarm rate allows us to choose the design threshold automatically. Extensive experimental results on a real network showed that our monitoring agent is able to detect unusual changes in the characteristics of network traffic and adapt to diurnal traffic patterns, while maintaining a low alarm rate. Despite large fluctuations in network traffic, this work proves that tailoring traffic modeling to specific goals can be efficiently achieved.",2003,0, 8449,Classification and remediation of electrical faults in the switched reluctance drive,"The switched reluctance drive is known to be fault tolerant, but it is not fault free. The goals of this study are the systematic classification of all electrical faults, short- and open-circuits, in the switched reluctance drive (excluding the controller itself), and the investigation of fault patterns and possible remediation. Each situation is analyzed via finite element analysis and/or experiments. The transient effects during the faults are described. Possible remediation schemes other than disabling the faulted phase are explored. There is a particular focus on switch short-circuits, for which new results are presented.",2005,0, 8450,A Simple Alternative Derivation of the Expectation Correction Algorithm,"The switching linear dynamical system (SLDS) is a popular model in time-series analysis. However, the complexity of inferring the state of the latent variables scales exponentially with the length of the time-series, resulting in many approximation strategies in the literature. We focus on the recently devised expectation correction (EC) approximation, which can be considered a form of Gaussian sum smoother.
The algorithm has excellent numerical performance compared to a wide range of competing techniques, exploiting the available information more fully than, for example, generalised pseudo Bayes. We show that EC can be seen as an extension to the SLDS of the Rauch-Tung-Striebel inference algorithm for the linear dynamical system. This yields a simpler derivation of the EC algorithm and facilitates comparison with existing, similar approaches.",2009,0, 8451,A Superstring Galaxy Associative Memory Model with Expecting Fault-Tolerant Fields,"The synthesis problem of associative memory models has not been well solved until now. Borrowing ideas from superstring theory, a design method for a superstring galaxy associative memory model with expected fault-tolerant fields is proposed, based on sphere mapping and a galaxy covering algorithm. The method better solves a difficult synthesis problem of associative memory models. The superstring galaxy associative memory model designed by this method can provide the expected fault-tolerant fields for the samples.",2009,0, 8452,Research on Error Analysis and Calibration Method of Laser Scan Range Finder,"The technology of the laser scan range finder is studied; a line laser is used to survey object section outlines. The system is composed of a line laser, a CCD camera, a frame grabber, a computer and the software. The mathematical model of the system is defined and studied, the error is analysed, and the calibration method is investigated. The system is calibrated and tested; the results show that the surveyed data tally with the stripe shape and that the system achieves the accuracy requirements.",2010,0, 8453,The Software for a Choice of Coefficient of Criterion of a Minimum of the Resulting Root-Mean-Square Error and Small Parameter for Linear Automatic Control Systems,"This work investigates the influence of the weight coefficient of the minimum root-mean-square error criterion on the resulting error of a linear system. Various approaches to choosing the weight coefficient of the criterion are considered, together with software for automating the choice of the optimum values of the system's small parameters using the given criterion. The program allows the value of a small parameter of an automatic control system to be determined numerically and graphically by minimizing the system error, and also allows results to be saved and printed.",2006,0, 8454,Tolerating multiple faults in WDM networks without wavelength conversion,"This paper addresses the problem of tolerating as many faults as possible in wavelength division multiplexing (WDM) networks without the capability of wavelength conversion. The problem of finding the maximum number of faults that can be tolerated is modeled as a constrained ring cover set problem, which is a decomposition problem with exponential complexity. A face decomposition algorithm (FDA) that can tolerate one or more faults is proposed. The results show that the maximum number of tolerated faults can be extended significantly beyond one under various network topologies.",2004,0, 8455,Application of Multi-layer Feed-forward Neural Network in Fault Diagnosis Based on FBP Algorithm,"This paper targets the BP neural network model, addressing its weak knowledge acquisition capability and the low stability of its learning and memory.
The paper puts forward a new fast error back-propagation (FBP) algorithm and gives an example comparing the BP and FBP algorithms on fault diagnosis. The diagnosis results indicate the reliability of this method.",2008,0, 8456,Very high-resistance fault on a 525 kV transmission line - Case study,"This paper analyzes a 300 ohm primary ground fault, which is an unusually high value for a 525 kV transmission line in southeastern Brazil. This case study emphasizes the techniques used by the analysts. Considering that the fault impedance was larger than those usually observed in single-phase faults on extra-high-voltage (EHV) lines, this paper discusses the probable cause of the fault and mentions an analysis technique to evaluate such faults. The protective relaying community lacks information regarding the causes and values of fault resistances to ground on high-voltage (HV) and EHV transmission lines. The objectives of this paper are to stimulate research and contribute to the collection of very high-resistance fault information. The analysis techniques are presented using symmetrical components and fault calculations to arrive at fault parameter values that are very close to the ones provided by protective relays. The performance of the line protection is evaluated for the specific fault conditions, with calculation of the observed impedances and currents. The importance of the ground overcurrent directional protection on a pilot directional comparison scheme is shown. Speculation on the widespread use of differential protection for transmission lines should stimulate discussions of line protection philosophies and applications. The criteria for the resistive reach setting of the quadrilateral ground distance characteristic are presented to show an evolution of past criteria and to open discussion about the setting limits. The conclusions of this paper highlight the importance of present event report analysis techniques regarding fault calculation software and the need for appropriate settings criteria for the resistive ground distance element threshold. This paper also supports the use of ground directional overcurrent protection with a pilot scheme for HV and EHV transmission line protection, while proposing the widespread use of differential functions for transmission lines, even for the most extensive cases.",2009,0, 8457,A Hybrid Error Control and Artifact Detection Mechanism for Robust Decoding of H.264/AVC Video Sequences,"This letter presents a hybrid error control and artifact detection (HECAD) mechanism which can be used to enhance the error resilient capabilities of the standard H.264/advanced video coding (AVC) codec. The proposed solution first exploits the residual source redundancy to recover the most likely H.264/AVC bitstream. If error recovery is unsuccessful, the residual corrupted slices are then passed through a pixel-level artifact detection mechanism to detect the visually impaired macroblocks to be concealed. The proposed HECAD algorithm achieves overall peak signal-to-noise ratio gains between 0.4 dB and 4.5 dB relative to the standard with no additional bandwidth requirement. The cost of this solution translates into a marginal increase in the complexity of the decoder.
In addition, this method can be applied in conjunction with other error resilient strategies and scales well with different encoding configurations.",2010,0, 8458,Fast Fault Screening Approach to Assessing Transient Stability in Entergy's Power System,This paper addresses a fast process for performing transient stability studies in a large transmission system. The paper describes how the most severe three-phase fault locations were identified in Entergy's power system network. The approach described in this paper offers a unique capability to automatically identify the most severe fault locations and perform ranking of these most severe faults in on-line and off-line environments. The study was performed using the Entergy loadflow and dynamic data to validate the proof of concept. It took approximately one minute to determine and rank the 23 most severe fault locations in Entergy's power system. The fast fault screening (FFS) capability described in this paper will allow operators to assess transient (angular) stability in on-line and real-time environments.,2007,0, 8459,Measurement and Correction of Systematic Odometry Errors Caused by Kinematics Imperfections in Mobile Robots,"This paper addresses an innovative method for the measurement and correction of systematic odometry errors caused by kinematic imperfections in differential drive mobile robots. An occasional systematic calibration of the mobile robot increases the odometric accuracy and reduces operational cost, as less frequent absolute positioning updates are required during operation. Conventionally, the tests used for this purpose (e.g. the UMBmark test) are relatively difficult to perform, are very sensitive to non-systematic errors, and require a large number of tests with precise measurements of the final position and orientation of the robot to achieve better accuracy. This paper describes a novel method for the calibration of differential drive mobile robots. The method is systematic, very simple to perform and insensitive to random errors, and hence provides near-optimal results (with respect to the systematic errors) in a single test due to its inherent robustness. Simulation results are presented which show a significant improvement in odometric accuracy with less effort.",2006,0, 8460,Mesh-based error-scalable video object codec for variable bandwidth multimedia communications,"The work introduces a complete chain of video object compression. The process is based on an automatic extraction of video objects from raw video. The recent MPEG-4 standard philosophy, including mesh models and wavelet-based compression, is reflected in the scheme. Constrained Delaunay meshes are used to represent articulated video objects in a flexible manner conveying shape and motion information. The wavelet transform is applied to residual errors for scalable and efficient compression. Results on MPEG-4 test sequences for very low bitrate video communications are encouraging.",2002,0, 8461,Fault Current Contribution From Synchronous Machine and Inverter Based Distributed Generators,"There are advantages to installing distributed generation (DG) in distribution systems: for example, improving reliability, mitigating voltage sags, unloading the subtransmission and transmission systems, and sometimes utilizing renewables. All of these factors have resulted in an increase in the use of DGs. However, an increase of fault currents in power systems is a consequence of the appearance of new generation sources.
Some operating and planning limitations may be imposed by the resulting fault currents. This paper discusses a model of inverter-based DGs which can be used to analyze the dynamic performance of power systems in the presence of DGs. In a style similar to protective relaying analysis, three-dimensional plots are used to depict the behavior of system reactance (X) and resistance (R) versus time. These plots depict operating parameters in relation to zones of protection, and this information is useful for the coordination of protection systems in the presence of DG.",2007,0, 8462,Studying the Impact of Clones on Software Defects,"There are numerous studies that examine whether or not cloned code is harmful to software systems. Yet, few of them study which characteristics of cloned code in particular lead to software defects. In our work, we use survival analysis to understand the impact of clones on software defects and to determine the characteristics of cloned code that have the highest impact on software defects. Our survival models express the risk of defects in terms of basic predictors inherent to the code (e.g., LOC) and cloning predictors (e.g., number of clone siblings). We perform a case study using two clone detection tools on two large, long-lived systems using survival analysis. We determine that the defect-proneness of cloned methods is specific to the system under study and that more resources should be directed towards methods with a longer 'commit history'.",2010,0, 8463,Mobile Positioning Using Enhanced Signature Database Method and Error Reduction in Location Grid,"There are several methods for mobile location determination; one of them is based on signal strength (power of arrivals). In this research, mobile location determination using an enhanced signature database method in a location grid is introduced; several ways to reduce positioning error are also described.",2009,0, 8464,Robust Fault Diagnosis for Atmospheric Reentry Vehicles: A Case Study,"This paper deals with the design of robust model-based fault detection and isolation (FDI) systems for atmospheric reentry vehicles. This work draws expertise from actions undertaken within a project at the European level, which develops a collaborative effort between the University of Bordeaux, the European Space Agency, and European Aeronautic Defence and Space Company Astrium on innovative and robust strategies for reusable launch vehicle (RLV) autonomy. Using an H∞/H− setting, a robust residual-based scheme is developed to diagnose faults on the vehicle wing-flap actuators. This design stage is followed by an original and specific diagnosis-oriented analysis phase based on the calculation of the generalized structured singular value. The latter provides a necessary and sufficient condition for robustness and FDI fault sensitivity over the whole vehicle flight trajectory. A key feature of the proposed approach is that the coupling between the in-plane and out-of-plane vehicle motions, as well as the effects that faults could have on the guidance, navigation, and control performances, are explicitly taken into account within the design procedure. The faulty situations are selected by a prior trimmability analysis to determine those for which the remaining healthy control effectors are able to maintain the vehicle around its center of gravity. Finally, some performance indicators including detection time, required onboard computational effort, and CPU time consumption are assessed and discussed.
Simulation results are based on a nonlinear benchmark of the HL-20 vehicle under realistic operational conditions during the autolanding phase. The Monte Carlo results are quite encouraging, illustrating clearly the effectiveness of the proposed technique and suggesting that this solution could be considered as a viable candidate for future RLV programs.",2010,0, 8465,Multi-fault diagnosis of rolling-element bearings in electric machines,"This paper deals with the diagnosis of faults in rolling-element bearings as the core of a dedicated Condition-Based Maintenance (CBM) system. Vibration signals recorded by accelerometers feed into a classification model in charge of monitoring and evaluating bearing wear. The chosen feature extraction technique is based on the computation of the Discrete Fourier Transform (DFT) and on the estimation of the normalized frequency content in each of the considered spectrum sub-bands as an indicative measure of the state of health of the rolling-element bearing. Three different damage attributes have been investigated. For each attribute, both Support Vector Machines (SVMs) and neurofuzzy Min-Max classifiers have been employed as the core of the diagnostic system. Test results show that it is possible to achieve high accuracy in all the diagnostic problems considered. The pre-processing procedure and the classification stage, especially in the case of Min-Max fuzzy networks, do not require demanding computational hardware resources; as a result, a simple and effective diagnostic system can be designed by feeding the synthesized Min-Max classifiers with the spectral features computed from vibration sensor outputs.",2010,0, 8466,Predicting the SEU error rate through fault injection for a complex microprocessor,"This paper deals with the prediction of the SEU error rate for an application running on a complex processor. Both radiation ground testing and fault injection were performed while the selected processor, a PowerPC 7448, executed software derived from a real space application. The predicted error rate shows that generally used strategies, based on static cross-section, significantly overestimate the application error rate.",2008,0, 8467,Fault Tolerant Control in NCS Medium Access Constraints,"This paper deals with the problem of fault-tolerant control of a Network Control System (NCS) for the case in which the sensors, actuators and controller are interconnected via various Medium Access Control protocols, which define the access scheduling and collision arbitration policies in the network, and employing the so-called periodic communication sequence. A new procedure for controlling a system over a network using the concept of an NCS-Information-Packet is described, which comprises an augmented vector consisting of control moves and fault flags. The size of this packet is used to define a Completely Fault Tolerant NCS. The fault-tolerant behaviour and control performance of this scheme are illustrated through the use of a process model and controller. The plant is controlled over a network using Model-based Predictive Control and implemented via MATLAB© and LabVIEW© software.",2007,0, 8468,Diagnosing permanent faults in distributed and parallel computing systems using artificial neural networks,"This paper deals with the problem of identifying faulty nodes (or units) in diagnosable distributed and parallel systems under the PMC model.
In this model, each unit is tested by a subset of the other units, and it is assumed that, at most, a bounded subset of these units is permanently faulty. When performing testing, faulty units can incorrectly claim that fault-free units are faulty or that faulty units are fault-free. Since the introduction of the PMC model, significant progress has been made in both theory and practice associated with the original model and its offshoots. Nevertheless, the problem of efficiently identifying the set of faulty units of a diagnosable system has remained an outstanding research issue. In this paper, we describe a new neural-network-based diagnosis algorithm, which exploits the off-line learning phase of an artificial neural network to speed up the diagnosis. The novel approach has been implemented and evaluated using randomly generated diagnosable systems. The simulation results showed that the new neural-network-based fault identification approach constitutes a valuable addition to existing diagnosis algorithms. Extreme faulty situations, where the number of faults is around the bound t, and large diagnosable systems have also been tested to show the efficiency of the new neural-network-based diagnosis algorithm.",2010,0, 8469,DSP implementation of the multiple reference frames theory for the diagnosis of stator faults in a DTC induction motor drive,"This paper deals with the use of a new diagnostic technique based on the multiple reference frames theory for the diagnosis of stator winding faults in a direct-torque-controlled (DTC) induction motor drive. The theoretical aspects underlying the use of this diagnostic technique are presented, but a major emphasis is given to the integration of the diagnostic system into the digital-signal-processor (DSP) board containing the control algorithm. Taking advantage of the sensors already built into the drive for control purposes, it is possible to implement this diagnostic system at no additional cost, thus giving a surplus value to the drive itself. Experimental results show the effectiveness of the proposed technique in diagnosing stator faults and demonstrate the possibility of its integration in the DSP board.",2005,0, 8470,Fault isolation with intermediate checks of end-to-end checksums in the Time-Triggered System-on-Chip Architecture,"This paper deploys end-to-end message checksums for error detection in the time-triggered system-on-chip architecture (TTSoCA). The end-to-end checksums are not only checked at the end, but also intermediately in the communication subsystem of the systems-on-chip (SoCs), concurrently with the message transmission, in order to isolate faults: if a message transmission error occurs, the goal is to pinpoint whether the fault has originated in an IP core, in the communication subsystem, or in a gateway.",2009,0, 8471,Error Performance of DQPSK with EGC Diversity Reception over Fading Channels,"This paper derives the average bit error probability (BEP) of differential quaternary phase shift keying (DQPSK) with postdetection equal gain combining (EGC) diversity reception over independent and arbitrarily correlated fading channels. First, using the associated Legendre functions, the average BEP of DQPSK is analyzed over independent Rayleigh, Nakagami-m, and Rician fading channels. Finite-series closed-form expressions for the average BEP of DQPSK over L-branch independent Rayleigh and Nakagami-m fading channels (for integer Lm) are presented.
In addition, a finite-series closed-form expression is given for the average BEP of differential binary phase shift keying (DBPSK) with EGC over independent Rician fading channels. Second, an alternative approach is propounded to study the performance of DQPSK over arbitrarily correlated Nakagami-m and Rician fading channels. Relatively simple BEP expressions in terms of a finite sum of a finite-range integral are proposed. Moreover, the penalty in signal-to-noise ratio (SNR) due to arbitrarily correlated channel fading is also investigated. Finally, the accuracy of the results is verified by computer simulation.",2008,0, 8472,Application Research of a Fault Diagnosis Expert System for Cement Kiln Based on .Net Platform,"This paper describes a fault diagnosis expert system for cement kilns, developed by integrating new theories and methods of artificial intelligence and network technology with the related production technology. The system provides online fault diagnosis and features a simple network interface, excellent openness and easy expandability. The design of the system layout, database, knowledge base and reasoning engine is presented in detail. The experimental application in a factory shows the system's high adaptability and its wide application prospects.",2010,0, 8473,XML Schema Based Faultset Definition to Improve Faults Injection Tools Interoperability,"This paper describes an XML schema formalization approach for the definition of basic fault sets which specify memory and/or register value corruption in microprocessor-based systems. SWIFI (software implemented fault injection) tools use fault injectors to carry out the fault injection campaign defined in a GUI-based application. However, the communication between the fault injector and the application is defined in an ad-hoc manner. Through this proposed XML schema definition, different injectors could be used to carry out the same fault set injection. To validate this approach, floating point register and memory corruption with temporal triggers, as well as routine interception mechanisms to carry out argument and return value corruption, have been considered. Moreover, an experimental tool called Exhaustif®, consisting of a GUI Java application for defining the fault sets and injection policies and two injectors for SPARC and i386 architectures under RTEMS, has been developed. The XML-based approach improves the interoperability between SWIFI tools by uncoupling the injectors from the experiment manager in charge of the fault campaign.",2008,0, 8474,Azvasa:- Byzantine Fault Tolerant Distributed Commit with Proactive Recovery,"This paper describes the Azvasa protocol: a Byzantine fault tolerant distributed commit protocol with proactive recovery for transactions running over untrusted networks. Traditionally, a three-phase agreement protocol among coordinator replicas has been used in distributed commit to tolerate Byzantine faults. We propose a two-phase agreement protocol to tolerate Byzantine faults, which reduces not only the total time to reach agreement but also the message overhead. Proactive recovery is an essential method for ensuring the long-term reliability of fault tolerant systems that are under continuous threats from malicious adversaries.
The primary benefit of our proactive recovery scheme is faster standby node registration and service migration, and reduced overhead in new membership notification to participants.",2009,0, 8475,Protective Relay Synchrophasor Measurements During Fault Conditions,"This paper describes details of the signal processing techniques that a protective relay uses to provide both synchronized phasor measurements and line distance protection. The paper also presents a comprehensive system model of normal and faulted power system operating conditions. Finally, the paper provides power system model test results that demonstrate the ability of the described protective relay to provide synchrophasor measurements during both normal and faulted conditions.",2006,0, 8476,"e-SAFE: An Extensible, Secure and Fault Tolerant Storage System","This paper describes e-SAFE, a scalable utility-driven distributed storage system that offers very high availability at an archival scale and reduces management overhead such as periodic repairs. e-SAFE is designed to provide a storage utility for environments such as large-scale data centers in enterprise networks where the servers experience temporary unavailability (possibly high load, or temporary downtimes due to repair or software/hardware upgrades). e-SAFE is based on a simple principle: efficiently sprinkle data all over the distributed storage and robustly reconstruct it even when many nodes are unavailable. e-SAFE also provides strong guarantees on data integrity. The use of Fountain codes for replicating file data blocks, an efficient algorithm for fast parallel encoding and decoding over multiple file segments, a utility module for service differentiation and auto-adjustment of design parameters, and a background replication mechanism hiding the cost of replication and dissemination from the user provide a fast, durable and autonomous storage solution.",2007,0, 8477,Industrial utilization of linguistic equations for defect detection on printed circuit boards,"This paper describes how linguistic equations, an intelligent method derived from fuzzy algorithms, have been used in a decision-helping tool for electronic manufacturing. In our case the company involved in the project, PKC Group, mainly produces control cards for the automotive industry. In their business, nearly 70 percent of the cost of a product is material cost. Detecting defects and repairing the printed circuit boards is therefore a necessity. With the ever-increasing complexity of the products, defects are very likely to occur, no matter how much attention is put into their prevention. That's the reason why the system described in this paper comes into use only during the final testing of the product and is purely oriented towards the detection and localization of defects. Final control is based on functional testing. Using linguistic equations and expert knowledge, the system is able to analyze that data and successfully detect and trace a defect to a small area of the printed circuit board. If a sufficient amount of data is provided, self-tuning and self-learning methods can be used. Diagnosis effectiveness can therefore be improved from detection at the functional-area level towards component-level analysis.",2002,0, 8478,A comparison of network level fault injection with code insertion,"This paper describes our research into the application of fault injection to Simple Object Access Protocol (SOAP) based service-oriented architectures (SOAs).
We show that our previously devised WS-FIT method, when combined with parameter perturbation, gives comparable performance to code insertion techniques with the benefit that it is less invasive. Finally, we demonstrate that this technique can be used to complement certification testing of a production system by strategic instrumentation of selected servers in a system.",2005,0, 8479,The Influence of Defect Distribution Function Parameters on Test Patterns Generation,"This paper describes an analysis of the influence of yield loss model parameters on test pattern generation. The probability of shorts between conducting paths, as well as estimations of yield loss, is presented for example gates from an industrial standard cell library in 0.8 μm CMOS technology.",2007,0, 8480,Detection and classification of faults affecting maneuverability of underwater vehicles,"This paper describes the application of Fisher Discriminant Analysis and the Hotelling T2 statistic to the detection and classification of major failures that can occur in underwater vehicles. Simulation results are presented that demonstrate that rapid detection and reliable classification can be obtained with these methods.",2003,0, 8481,Geometrical error compensation of precision motion systems using radial basis function,"This paper describes a new method for geometrical error compensation of precision motion systems. The compensation is carried out with respect to an overall geometrical error model which is constructed from the individual error components associated with each axis of the machine. These error components are modeled using radial basis functions (RBFs), thus dispensing with the conventional look-up table. The adequacy and clear benefits of the proposed approach are illustrated through an application to an XY table.",2000,0, 8482,Interval methods for fault-tree analysis in robotics,"This paper describes a novel technique, based on interval methods, for estimating reliability using fault trees. The approach encodes inherent uncertainty in the input data by modeling these data in terms of intervals. Appropriate interval arithmetic is then used to propagate the data through standard fault trees to generate output distributions which reflect the uncertainty in the input data. Through a canonical example of reliability estimation for a robot manipulator system, we show how the use of this novel interval method appreciably improves the accuracy of reliability estimates over existing approaches to the problem of uncertain input data. This method avoids the key problem of loss of uncertainty inherent in some approaches when applied to noncoherent systems. It is further shown that the method has advantages over approaches based on partial simulation of the input-data space because it can provide guaranteed bounds for the estimates in reasonable times.",2001,0, 8483,An Introspection Framework for Fault Tolerance in Support of Autonomous Space Systems,"This paper describes a software system designed for the support of future autonomous space missions by providing an infrastructure for runtime monitoring, analysis, and feedback. The objective of this research is to make mission software executing on parallel on-board architectures fault tolerant through an introspection mechanism that provides automatic recovery, minimizing the loss of function and data.
Such architectures are essential for future JPL missions because of their increased need for autonomy along with enhanced on-board computational capabilities while in deep space or time-critical situations. The standard framework for introspection described in this paper integrates well with existing flight software architectures and can serve as an enabling technology for the support of such systems. Furthermore, it separates the introspection capability from applications and the underlying system, providing a generic framework that can also be applied to a broad range of problems beyond fault tolerance, such as behavior analysis, intrusion detection, performance tuning, and power management.",2008,0, 8484,"Non-invasive fault diagnosis for switched-reluctance machines with incorrect winding turns, inter-turn winding faults and eccentric rotors","This paper describes a study into the forces that occur in a switched reluctance motor with faults. These forces can be used for non-invasive diagnosis of inter-turn faults and rotor eccentricity in switched-reluctance machines by monitoring the vibrations on the machine casing. The technique uses a transient finite element analysis and produces characteristics for the forces on the teeth and the net force on the rotor in terms of the current, winding and rotor positioning. The forces can be detected on the actual machine using accelerometers. The faults that produce an unbalanced radial force on the rotor are very noticeable if the machine has vibrating components, and these are very apparent when using accelerometers attached to the stator casing. They also cause secondary problems such as worn bearings. The shorted-turns current is studied, and it is illustrated that this current is high, which leads to burn-out.",2003,0, 8485,Finite-Element Analysis of a Switched Reluctance Motor Under Static Eccentricity Fault,"This paper describes a two-dimensional finite-element analysis of an 8/6 switched reluctance motor with static eccentricity. It describes the influence of the eccentricity on the static characteristics of the motor and shows how to obtain the flux lines, the flux density distribution, and the flux-linkage/rotor angular position characteristic of the motor in both healthy and faulty conditions, as well as the static torque profiles of the phases for different degrees of eccentricity. It shows that, at low current, the effect of the eccentricity is considerable compared to that at the rated current. Fourier analysis of the torque profile is used to study the variations of the static torque harmonic contents.",2006,0, 8486,DefSim: A Remote Laboratory for Studying Physical Defects in CMOS Digital Circuits,"This paper describes a unique remote laboratory for studying CMOS physical defects that is meant to be used in advanced courses in the scope of microelectronic design and test. Both the measurement equipment and the remote access mechanism were custom developed in the frame of the European Union project REASON. The core of the equipment is an educational chip that contains different manufacturing defects physically implemented into standard digital cells and small logic circuits at the layout level. The chip is supplied with a dedicated plug-and-play measurement box, which provides an interface between the chip and the training software. This measurement kit offers a glimpse into the silicon reality, revealing the behavior of the most common defects and their influence on the circuits' operations.
Students can choose among approximately 500 different defects, which can be classified into different groups by studying their properties, and find differences or similarities. The remote server-based version of the laboratory is accessible over the Internet, thereby supporting distance learning and e-learning modes of training. A personal version of the training software is also available.",2008,0, 8487,Circuit-level modeling of soft errors in integrated circuits,"This paper describes the steps necessary to develop a soft-error methodology that can be used at the circuit-simulation level for accurate nominal soft-error prediction. It addresses the role of device simulations, statistical simulation, analytical soft-error rate (SER) model development, and SER-model calibration. The resulting approach is easily automated and generic enough to be applied to any type of circuit for estimation of the nominal SER.",2005,0, 8488,Embedded displacement measuring device based on error separating technique of multi-circle overlapping three-probe method,"This paper designs an embedded displacement measuring device based on the error separation technique of the multi-circle overlapping three-probe method. Considering the high precision and high speed requirements of modern CNC machines, a high-accuracy CCD laser displacement sensor is used in the system to measure the rotating spindle, and the device then sends the results to a PC for further analysis by the error separation software. According to the experiments, the roundness errors separated by this method are consistent, and rotation accuracy increases as the rotation speed accelerates within a certain range. All of these results are realistic.",2010,0, 8489,Fault diagnostics for GPS-based lateral vehicle control,"This paper develops a fault diagnostic system to monitor the health of the lateral motion sensors on an instrumented highway vehicle. The fault diagnostic system utilizes observer design with the observer gains chosen so as to ensure that each sensor failure causes estimation errors to grow in a unique direction. The performance of the fault diagnostic system is verified through extensive experimental results obtained from an instrumented truck called the ""Safetruck"". The fault diagnostic system is able to monitor the health of a GPS system, a gyroscope and an accelerometer on the Safetruck. It can correctly detect a failure in any one of the three sensors and accurately identify the source of the failure.",2001,0, 8490,Supervisory control of software systems for fault mitigation,This paper develops a novel technique of discrete-event supervisory control for fault mitigation in software applications. It models the interactions between a software application and a computer operating system (OS) as a deterministic finite state automaton. The supervisor restricts the language of the OS to correct deviations such as CPU exceptions for the controlled execution of software applications. Feasibility of this supervisory control concept is demonstrated on process execution under the Red Hat Linux 7.2 operating system. Two supervisory control policies are implemented as proof of the concept.,2003,0, 8491,Error concealment for slice group based multiple description video coding,"This paper develops error concealment methods for multiple description video coding (MDC) in order to adapt to error-prone packet networks. The three-loop slice group MDC approach of D. Wang et al. (2005) is used.
MDC is very suitable for multiple channel environments, and especially able to maintain acceptable quality when some of these channels fail completely, i.e. in an on-off MDC environment, without experiencing any drifting problem. Our MDC scheme coupled with the proposed concealment approaches proved to be suitable not only for the on-off MDC environment case (data from one channel fully lost), but also for the case where only some packets are lost from one or both channels. Copying video and using motion vectors from correct descriptions are combined together for concealment prior to applying traditional methods. Results are compared to the traditional error concealment method proposed in the H.264 reference software, showing significant improvements for both the balanced and unbalanced channel cases.",2005,0, 8492,Sensor network scheduling algorithm considering estimation error variance and communication energy,"This paper deals with a sensor scheduling algorithm considering estimation error variance and communication energy in sensor networked feedback systems. We propose a novel decentralized estimation algorithm with unknown inputs in each sensor node. Most existing works deal with sensor network systems as sensing systems, and it is difficult to apply them to real physical feedback control systems. Then, some local estimates are merged; the merged estimates can be optimized in the proposed method, and the estimation error covariance has a unique positive definite solution under some assumptions. Next, we propose a novel sensor scheduling algorithm in which each sensor node transmits information. A sensor node uses energy for communication with other sensor nodes or the plant. The proposed algorithm achieves a sub-optimal network topology with minimum energy and a desired variance. Experimental results show the effectiveness of the proposed method.",2010,0, 8493,Multiresolution Fourier transform of ripple voltage and current signals for fault detection in a gearbox,"This paper deals with detecting defects in a multistage gearbox. The gearbox is driven by an induction motor and the output shaft of the gearbox is connected to a DC generator and a DC tacho-generator. The ripple current and voltage signals supplied by these generators are analyzed for a defect-free gearbox and a defective gearbox with two teeth missing in a particular gear. Multiresolution Fourier transform is applied to these signals in order to improve the signal to noise ratio as well as to highlight a specific bandwidth of the signal containing all the gear mesh frequencies. It is found that application of MFT improves the dynamic range of the signal, and hence highlights the frequencies with negligible amplitude. The technique is observed to be effective in distinguishing the defective and the defect-free cases of the gearbox.",2006,0, 8494,Fault diagnosis of manufacturing systems using continuous Petri nets,"This paper deals with fault detection of manufacturing systems modeled by Petri nets. We show that the theoretical results obtained for continuous Petri nets allow us to treat systems that are intractable using the theory developed for discrete Petri nets. In particular, systems in which the unobservable part contains cycles can be studied.
On the other hand, only three diagnosis states can be defined, making it impossible to split the uncertain state into two different states to obtain different degrees of alarm for the uncertainty.",2010,0, 8495,Redundant graph to improve fault diagnosis in a Gas Turbine,"This paper deals with fault diagnosis issues for a Gas Turbine, GT, of a Combined Cycle Power Plant, CCPP. To analyze under which conditions faults in the turbogenerator can be detected and isolated, structural properties of the model are used. The structure redundancy is studied by graph tools considering the standard available measurements. A non-linear dynamic model given by 37 algebraic and differential equations is considered to identify the required redundancy degrees for diverse fault scenarios without numerical values. As a result, 10 relations are obtained which detect faults in all units of the turbine except one: the thermodynamic gas path. Moreover, using the redundant graph concept it is suggested to add a sensor to increase the redundancy and consequently to achieve detectability of the mechanical faults in the gas path. This is the main contribution of the work. The implementation of redundant relations with specific simulated data of a GT validates this statement.",2010,0, 8496,Power factor correction of single-phase and three-phase unbalanced loads using multilevel inverter,"This paper deals with power factor correction of single phase and three phase unbalanced loads using a multilevel inverter circuit. We propose the utilization of single phase, multilevel diode clamped voltage source inverters (VSI) with large capacitors for energy storage. Simulation and experimental results are presented",2001,0, 8497,"Reconfigurable logic control using modular FSMs: Design, verification, implementation, and integrated error handling","This paper describes the design and implementation of logic controllers on a small-scale machining line testbed using modular finite state machines. The logic is verified to be internally correct before being implemented on the testbed. Reconfiguration of the controller for a new manufacturing scenario is demonstrated, as is the integration of error handling. The ease of use of this modular finite state machine design methodology is discussed, as is the complexity of the resulting designs. Algorithms are presented for design, reconfiguration, and error handling integration.",2002,0, 8498,A fault tolerant electric drive for an aircraft nose wheel steering actuator,"This paper describes the design and testing of a dual-lane electric drive for the operation of a prototype, electromechanically actuated, nose wheel steering system for a commercial aircraft. The drive features two fully independent motor controllers, each operating one half of a three-phase motor to produce an actuator capable of full operation in the event of an electrical fault. An isolated communications link between the controllers allows for consolidation of parameters to identify faults and synchronise outputs to ensure even load sharing.
A selection of results is presented from motor dynamometer performance analysis and fully loaded output tests on an Airbus hydraulic test rig at Filton, UK.",2010,0, 8499,Hardware-in-the-loop simulator for research on fault tolerant control of electrohydraulic flight control systems,"This paper describes the development of a hardware-in-the-loop (HIL) simulator to support the design and testing of novel fault tolerant control and condition monitoring schemes for fluid power systems emphasizing flight control applications. The simulator uses a distributed architecture to share, in a synchronized manner, the demanding computational load associated with the real-time simulation amongst a number of desktop workstations connected by a dedicated Ethernet network. The simulator runs a high-fidelity model of the F-16 fighter aircraft that is augmented in this paper by the addition of realistic nonlinear models of the hydraulic flight control surface actuators and a model of the nonlinear control surface aerodynamic loads. A specially designed state-of-the-art hydraulic test rig, which has the capacity to experimentally simulate common failure modes of a typical fluid power circuit, is used to emulate an F-16 horizontal tail actuator. The experimental actuator can thus be exercised against the real-time simulation of an F-16 aircraft operating under a variety of normal or faulty conditions. To add further realism to the simulation, a second experimental hydraulic actuator is used to generate the aerodynamic disturbing load. Novel fault tolerant control and diagnosis algorithms can therefore be verified in a realistic application scenario. Pilot-in-the-loop simulations are supported by the inclusion of a graphical visualization of the aircraft motions. The results of a typical HIL experiment, for a normally functioning hydraulic system, are presented to illustrate the operation of the simulator",2006,0, 8500,"Operation, Design and Testing of Generator 100% Stator Earth Fault Protection Using Low Frequency Injection","This paper describes the development of a new 100% generator stator earth fault protection scheme, based on the low frequency injection principle. The design of a new analogue input module and the digital filtering technique are presented. Results of the simulation and site testing are also discussed.",2008,0, 8501,"On-line sensor fault detection, isolation, and accommodation in automotive engines","This paper describes the hybrid solution, based on artificial neural networks (ANNs), and the production rule adopted in the realization of an instrument fault detection, isolation, and accommodation scheme for automotive applications. Details on ANN architectures and training are given together with diagnostic and dynamic performance of the scheme.",2003,0,4813 8502,A framework for node-level fault tolerance in distributed real-time systems,"This paper describes a framework for achieving node-level fault tolerance (NLFT) in distributed real-time systems. The objective of NLFT is to mask errors at the node level in order to reduce the probability of node failures and thereby improve system dependability. We describe an approach called lightweight NLFT where transient faults are masked locally in the nodes by time-redundant execution of application tasks. The advantages of light-weight NLFT are demonstrated by a reliability analysis of an example brake-by-wire architecture.
The results show that the use of light-weight NLFT may provide 55% higher reliability after one year and almost 60% higher MTTF, compared to using fail-silent nodes.",2005,0, 8503,HW/SW co-detection of transient and permanent faults with fast recovery in statically scheduled data paths,"This paper describes a hardware-/software-based technique to make the data path of a statically scheduled super scalar processor fault tolerant. The results of concurrently executed operations can be compared with little hardware overhead in order to detect a transient or permanent fault. Furthermore, the hardware extension allows the system to recover from a fault within one to two clock cycles and to distinguish between transient and permanent faults. If a permanent fault is detected, this fault is masked for the rest of the program execution such that no further time is needed for recovering from that fault. The proposed extensions were implemented in the data path of a simple VLIW processor in order to prove the feasibility and to determine the hardware overhead. Finally, a reliability analysis is presented. It shows that for medium and large scaled data paths our extension provides up to 98% better reliability than triple modular redundancy.",2010,0, 8504,New methodology for ultra-fast detection and reduction of non-visual defects at the 90nm node and below using comprehensive e-test structure infrastructure and in-line DualBeamTM FIB,"This paper describes a methodology to quickly capture, characterize, prioritize, localize, and perform in-line FA on killer defects. The system, which includes comprehensive short-flow test wafers, fast inline e-test, a powerful data analysis system, and advanced in-line dual beam inspection, was demonstrated in a leading-edge 300mm fab at the 90nm technology node to detect and resolve both systematic and random defect mechanisms more than 10 times faster than traditional methods. This article describes several examples of detecting and resolving non-visual (subsurface) as well as visual defects for both back-end and front-end issues",2006,0, 8505,Enhanced Fault Ride-Through Method for Wind Farms Connected to the Grid Through VSC-Based HVDC Transmission,"This paper describes a new control approach for secure fault-ride through of wind farms connected to the grid through a voltage source converter-based high voltage DC transmission. On fault occurrence in the high voltage grid, the proposed control initiates a controlled voltage drop in the wind farm grid to achieve a fast power reduction. In this way overvoltages in the DC transmission link can be avoided. It uses controlled demagnetization to achieve a fast voltage reduction without producing the typical generator short circuit currents and the related electrical and mechanical stress to the wind turbines and the converter. The method is compared to other recent FRT methods for HVDC systems and its superior performance is demonstrated by simulation results.",2009,0, 8506,Error rates of DPSK systems with MIMO EGC diversity reception over Rayleigh fading channels,"This paper analyzes the average bit error probability (BEP) of the differential binary and quaternary phase-shift keying (DBPSK and DQPSK respectively) with multiple-input multiple-output (MIMO) systems employing postdetection equal gain combining (MIMO EGC) diversity reception over Rayleigh fading channels. Finite closed-form expressions for the average BEP of DBPSK and DQPSK are presented. Two approaches are introduced to analyze the error rate of DQPSK.
The proposed structure for the differential phase-shift keying (DPSK) with MIMO EGC provides a reduced-complexity and low-cost receiver for MIMO systems compared to the coherent phase-shift keying system (PSK) with MIMO employing maximal ratio combining (MIMO MRC) diversity reception. Finally, a useful procedure for computing the associated Legendre functions of the second kind with half-odd-integer order and arbitrary degree is presented.",2008,0, 8507,Research on fault diagnosis for ship course control system,"This paper analyzes the fault information of a ship's course control system and establishes a fault diagnosis model based on fuzzy neural network algorithms. Fuzzy logic is used to process the data so as to make full use of experience and knowledge, and the neural network is used to avoid some problems of the complicated fault tree diagnosis system, such as matching conflicts, combinatorial explosion, and infinite recursion. The improved BP algorithm is adopted to train the neural network, which solves the problems of convergence speed and convergence oscillation. The fault diagnosis results show that this fault diagnosis system has strong robustness and generalization. The method uses model-free diagnosis, so it is easy to learn by itself and to continuously improve system functions, and it has theoretical and engineering application value.",2009,0, 8508,Failure mechanisms and design considerations for fault tolerant aerospace drives,"This paper considers existing More Electric technologies in commercial aircraft, observing recent technologies adopted by aerospace and discussing the reasons restricting the application of other designs. Fault tolerant drives are considered, assessing where reliability may affect application in aerospace. Failure conditions and design issues are proposed which will present challenges in the evolution of laboratory prototypes to actual aerospace hardware. Results are presented from fault tolerant drives, highlighting some of these design considerations.",2010,0, 8509,Improving TTCN-3 Test System Robustness Using Software Fault Tolerance,This paper contributes an analysis of possible pitfalls of automated test execution and provides a novel approach for a TTCN-3 test system to manage and recover from errors occurring in the system under test (SUT) during test execution. The research problem addressed in this paper is how to enable a distributed test system to recover from cascading errors caused by software faults in the SUT. This research problem is addressed by applying software fault tolerance techniques in the implementation of the testing environment. The provided solutions for error recovery and management are derived from the software fault tolerance research work carried out during the last decades. The presented approach is validated in a prototype TTCN-3 test environment supporting testing of distributed communication systems.",2009,0, 8510,Implementation of a modified state estimator for topology error identification,"This paper describes the implementation of a modified state estimation program and its associated user interface. The state estimator is improved by adding the capability of detecting and identifying topology errors, which are caused by the incorrect status information for the circuit breakers at the substations. The developed program is tested using a library of topology error scenarios. A user friendly interface is also implemented in order to facilitate testing of these cases.
Some representative simulated cases are presented, along with the detailed models for the system and substations.",2003,0, 8511,A novel PWM scheme for single-phase three-level power-factor-correction circuit,"This paper presents a control scheme for a single-phase AC-to-DC power converter with three-level pulsewidth modulation. A single-phase power-factor-correction circuit is proposed to improve the power quality. The hysteresis current control technique for a diode bridge with two power switches is adopted to achieve a high power factor and low harmonic distortion. A control scheme is presented where the line current is driven to follow the reference sinusoidal current which is derived from the DC-link voltage regulator, the capacitor voltage balance compensator and the output power estimator. The blocking voltage of each power device is clamped to half of the DC-link voltage. The high power factor and low current total harmonic distortion are verified by computer simulations and hardware tests",2000,0, 8512,Defect Detection Efficiency: Test Case Based vs. Exploratory Testing,"This paper presents a controlled experiment comparing the defect detection efficiency of exploratory testing (ET) and test case based testing (TCT). While traditional testing literature emphasizes test cases, ET stresses the individual tester's skills during test execution and does not rely upon predesigned test cases. In the experiment, 79 advanced software engineering students performed manual functional testing on an open-source application with actual and seeded defects. Each student participated in two 90-minute controlled sessions, using ET in one and TCT in the other. We found no significant differences in defect detection efficiency between TCT and ET. The distributions of detected defects did not differ significantly regarding technical type, detection difficulty, or severity. However, TCT produced significantly more false defect reports than ET. Surprisingly, our results show no benefit of using predesigned test cases in terms of defect detection efficiency, emphasizing the need for further studies of manual testing.",2007,0, 8513,An integrated multiple-substream unequal error protection and error concealment algorithm for Internet video applications,"This paper presents a coordinated multiple-substream unequal error protection and error concealment algorithm for SPIHT-coded bitstreams transmitted over lossy channels. In the proposed scheme, we divide the video sequence corresponding to a group of pictures into two sub-sequences in the temporal domain and independently encode each sub-sequence with a 3-D SPIHT algorithm to generate two independent substreams. Each substream is protected by an FEC-based unequal error protection algorithm that assigns unequal forward error correction codes for each substream with bit-plane granularity. The information that is lost during transmission for one substream is estimated at the receiver by using the correlation between the two substreams and the smoothness of the video signal.
Simulation results show that the proposed multiple-substream UEP algorithm is simple, fast, and robust in hostile network conditions, and that the proposed error concealment algorithm achieves about 1-3 dB PSNR gain over the case where there is no error concealment at high packet loss rates.",2002,0, 8514,Design and development of a cost-effective fault-tolerant execution and control system for discrete manufacturing,"This paper presents a cost effective fault-tolerant manufacturing execution and control system (MECS) which has been designed, developed and implemented for a multi-national company's (MNC) discrete precision metal part manufacturing operations in Singapore. The authors aim to share with other researchers and industrial practitioners the challenges, scope, architecture, design rules, and key technologies in implementing the system. Specific reference is made to the technical approaches to automated data collection and fault-tolerance with minimal network hardware redundancy. Being seamlessly integrated with other enterprise systems including quality, engineering, inventory, store, planning and scheduling, the implemented MECS dramatically improved the visibility, accountability, tracking, and traceability (VATT) of manufacturing operations.",2003,0, 8515,"Fault Tolerance using ""Parallel Shadow Image Servers (PSIS)"" in Grid Based Computing Environment","This paper presents a critical review of the existing fault tolerance mechanisms in grid computing and the overhead involved in terms of reprocessing or rescheduling of jobs in case a fault arises. For this purpose we suggest the parallel shadow image server (PSIS) copying technique, working in parallel to the resource manager, to provide checkpoints for rescheduling jobs from the nearest flag in case a fault is detected. The job process is scheduled from the resource manager node to the worker nodes and is then submitted back by the worker nodes in serialized form to the parallel shadow image servers after a pre-specified amount of time, which we call the recent spawn or the flag checkpoint for rescheduling or reprocessing of the job. If a fault arises, rescheduling is done from the most recent checkpoint and the job is submitted to the worker node from where it was terminated. This will not only save time but will improve performance to a major extent",2006,0, 8516,A defect-to-yield correlation study for marginally printing reticle defects in the manufacture of a 16Mb flash memory device,"This paper presents a defect-to-yield correlation for marginally printing defects in a gate and a contact 4X DUV reticle by describing their respective impact on the lithography manufacturing process window of a 16Mb flash memory device. The study includes site-dependent sort yield signature analysis within the exposure field, followed by electrical bitmap and wafer strip back for the lower yielding defective sites. These defects are verified using both reticle inspection techniques and review of printed resist test wafers. Focus/exposure process windows for the defect-free feature and the defective feature are measured using both in-line SEM CD data and defect printability simulation software. These process window models are then compared against wafer sort yield data for correlation.
A method for characterizing the lithography manufacturing process window is proposed which is robust to both marginally printing reticle defects and sources of process variability outside the lithography module",2000,0, 8517,Rapid Prototyping of an Automated Test Harness for Forward Error Correcting Codes,"This paper presents a design flow for the rapid prototyping of forward error correction (FEC) systems in the Xilinx System Generator tool. In this instance two FEC systems were tested, both Turbo codecs. One was designed to comply with the UMTS standard, the other was designed to comply with the cdma2000 standard. The target hardware for this system is a Field Programmable Gate Array (FPGA). The System Generator tool and the cdma2000 Turbo code standard are discussed. A description of the implemented test harness is given along with simulation results and a comparison of simulation times for both hardware and software implementations of the system. Results presented show performance differences between UMTS and cdma2000 codecs. It is also shown how fast termination affects decoder performance.",2005,0, 8518,Design of pole placement controller in D-STATCOM for unbalanced faults mitigation,This paper presents the design of a pole placement controller for a D-STATCOM for unbalanced fault mitigation. In the pole placement method the poles are shifted to new locations on the real-imaginary axes for a better response. This type of controller is able to control the amount of injected current from the D-STATCOM to compensate the unbalanced faults by referring to the currents which are the inputs to the controller. The controller efficiency was tested under unbalanced faults which are single line-to-ground (SLG) and double phase-to-ground (DPG) faults. The controller and the D-STATCOM were designed using SIMULINK and the power system blockset toolbox that is available in MATLAB. The controller block was designed based on the signal flow diagram,2005,0, 8519,Diagnosis of multiple hold-time and setup-time faults in scan chains,"This paper presents a diagnosis technique to locate hold-time (HT) faults and setup-time (ST) faults in scan chains. This technique achieves deterministic diagnosis results by applying thermometer scan input (TSI) patterns, which have only one rising or one falling transition. With TSI patterns, the diagnosis patterns can be easily generated by existing single stuck-at fault test pattern generators with few modifications. In addition to the first fault, this technique diagnoses remaining faults by applying thermometer scan input with padding (TSIP) patterns. For the benchmark circuits (up to 6.6 K scan cells), experiments show that the diagnosis resolutions are no worse than 15, even in the presence of multiple faults in a scan chain.",2005,0, 8520,A Distributed Architecture for Power Network Fault Analysis,This paper discusses a distributed diagnostic algorithm for fault analysis in power networks. DAPoN is a model-based diagnostic algorithm that incorporates a hierarchical power network representation and model. The architecture is based on a substation-server implementation. The structural model is a six-level representation with each level depicting a more complex network of components than the previous one in the hierarchy. Each level is modeled by the object-oriented representation. The distributed functional representation is also done in six levels where each level contains the behavioral knowledge related to components of that level in the structural model.
The diagnostic algorithm of DAPoN is designed to perform fault analysis at the pre-diagnostic and diagnostic levels.,2005,0, 8521,Timing error suppression scheme for CDMA network based positioning system,This paper discusses an improved method for location in CDMA-based cellular communication systems. An effective technique is proposed for locating a mobile station's position based on the Time Difference of Arrival (TDOA) technique. This technique is based on estimating the difference in the arrival time of the signal from the source at multiple receivers. The major error sources in the CDMA network location system are non-line-of-sight (NLOS) propagation errors and hardware systems such as repeaters. The proposed algorithm is based on the probability of the mobile station (MS) location within the serving cell radius. We adopt an enhanced TDOA (E-TDOA) technique to suppress those timing errors. The performance of the proposed algorithm is evaluated for a CDMA-based location system. Simulations also demonstrate the performance in field environments.,2008,0, 8522,Data mining of printed-circuit board defects,"This paper discusses an industrial case study in which data mining has been applied to solve a quality engineering problem in electronics assembly. During the assembly process, solder balls occur underneath some components of printed circuit boards. The goal is to identify the cause of solder defects in a circuit board using a data mining approach. Statistical process control and design of experiment approaches did not provide conclusive results. The paper discusses features considered in the study, data collected, and the data mining solution approach to identify causes of quality faults in an industrial application",2001,0, 8523,Evaluation of fault protection methods using ATP and MathCAD,"This paper discusses combining the Alternative Transients Program (ATP) and MathCAD to teach protective relaying and to develop relay algorithms. A power system model is created in ATP with appropriate current and voltage measurements. The simulation output is converted to a COMTRADE format and imported into a detailed relay model implemented in MathCAD. The MathCAD model performs digital filtering calculations, symmetrical components calculations and models relay algorithms based on relay manufacturers' published information. Focus here is on differential and ground fault protection for the common two bus, parallel line case. Emphasis is placed on fault detection and localization methods for ungrounded or high impedance grounded systems.",2008,0, 8524,Estimating Dependability of Parallel FFT Application using Fault Injection,This paper discusses estimation of dependability of a parallel FFT application. The application uses the FFTW library. Fault susceptibility is assessed using software-implemented fault injection. The fault injection campaign and the experiment results are presented. The response classes to injected faults are analyzed. The accuracy of evaluated data is verified experimentally.,2004,0, 8525,A neural net based approach for fault diagnosis in distribution networks,"This paper discusses the application of field data to a new supervised clustering-based arcing distribution fault diagnosis method. The fault diagnosis method can perform three functions that provide preliminary fault location information for grounded and ungrounded power distribution systems: fault detection, faulted type classification, and faulted phase identification.
It contains two main modules: a preprocessor and a pattern classifier which was implemented as a supervised clustering-based neural net. The inputs to the fault diagnosis method are the three phase and neutral currents for a feeder. The preprocessor computes a vector of statistical features from the phase currents and passes them to the neural net pattern classifier. The neural net classifies the feature pattern as normal or faulted. If detected as faulted, the neural net also identifies the fault type and classifies the faulted phase. Field studies were conducted in which the fault diagnosis method was trained and tested with normal and faulted phase currents generated from data recorded by events staged in the field for two, four-wire systems. The fault diagnosis method was highly successful during tests to validate the fault detection and identification functions. Also, the fault diagnosis method was able to recognize the difference between faulted test patterns and fault-like test patterns representing line switching and load tap changer operations. Further, the clustering-based fault diagnosis approach was evaluated using simulated data generated for a 3-feeder ungrounded system",2000,0, 8526,Application specific configuration of a fault-tolerant NoC architecture,"This paper discusses the configuration of a fault-tolerant mesh-based NoC architecture. In this architecture, spare links provide a mechanism for rerouting data packets in the presence of NoC faults. Two algorithms, exhaustive and greedy, are used to find the best configuration. The NoC is modeled at the high transaction level using SystemC TLM. This enables easy design space exploration and performance analysis of the proposed NoC architecture. The performance results derived from simulation of this model are used as the input of the spare link selection algorithms.",2008,0, 8527,An approach to minimize build errors in direct metal laser sintering,"This paper discusses the effect of geometric shape on the accuracy of direct metal laser sintering (DMLS) prototypes. The percentage shrinkages due to different shapes are investigated and their empirical relationship is determined. A new speed-compensation (SC) method is proposed to reduce uneven shrinkage affected by the two-dimensional geometric shape at each layer. From case studies conducted, the optimized SC method is found to be efficient in improving the accuracy of prototypes fabricated. Note to Practitioners-This paper aims to address the problem of dimensional errors of parts built by the direct metal laser sintering (DMLS) process. Existing compensation approaches are normally based on a general relationship between the nominal dimensions and the errors after sintering. However, the effect arising from different geometric shapes is not considered. A new approach is proposed using different scan speed settings to compensate for the effect of geometric shapes to improve the dimensional accuracy of the entire part. During processing, the laser sinters along the trajectory guided by the hatch vectors or dexel. An appropriate experimental method is used to establish the relationship for different scan speeds with the dexel length to the final accuracy. When building the part, the laser scan speed is adjusted dynamically according to the dexel length which varies with the geometric shape of the part. The case study demonstrates that the proposed method can generate correct speed settings to effectively increase the dimensional accuracy of the final part.
Although this method has been developed based on the DMLS process, it is also applicable to other laser sintering processes. In future research, other process parameters, such as laser power, will be considered independently, or together with the scan speed, for possible further improvement on the dimensional accuracy.",2006,0, 8528,Workload-Cognizant Impact Analysis and its Applications in Error Detection and Tolerance in Modern Microprocessors,"This paper discusses the relative importance of errors in a modern microprocessor based on the impact that they incur on the execution of a typical workload. This information can prove immensely useful in allocating resources to enhance on-line testability and error resilience through concurrent error detection/correction methods. This paper also presents an extensive fault simulation infrastructure which is being developed around a superscalar, dynamically-scheduled, out-of-order, Alpha-like microprocessor, which supports execution of SPEC2000 integer benchmarks and enables the aforementioned correlation study.",2009,0, 8529,A new approach to find fault locations on distribution feeder circuits - Part II,"This paper focuses on results of that initial project, and introduces another R&D fault location project.",2008,0, 8530,Online parameter estimation issues for the NASA IFCS F-15 fault tolerant systems,"This paper focuses on specific issues relative to real-time online estimation of aircraft aerodynamic parameters at nominal and post-actuator failure flight conditions. A specific parameter identification (PID) method, based on Fourier transform, has been applied to an approximated mathematical model of the NASA IFCS F-15 aircraft. In this effort, different options relative to the application of this PID method are evaluated and compared. Particularly, the direct evaluation of stability and control derivatives versus the estimation of the coefficients of the state space system matrices is considered. Furthermore, the options of considering individual control surfaces (left and right) versus total surfaces as inputs to the PID process are discussed. Finally, since the PID method relies on the use of derivative terms, the option of using time domain derivatives versus frequency domain derivatives is also evaluated. Results are presented in terms of the accuracy and reliability of the estimates of selected stability and control derivatives.",2002,0, 8531,Machine current signature analysis as a way for fault detection in permanent magnet motors in elevators,This paper focuses on the experimental investigation of incipient fault detection and fault detection methods suitably adapted for use in permanent-magnet motors for direct-drive elevator systems. The proposed system diagnoses permanent-magnet motors having two types of faults such as short circuit of stator windings and bearing faults. After processing the current data the classical fast Fourier transform is applied to detect characteristics under the healthy and various faulted conditions with MCSA.,2008,0, 8532,The analysis of frequency deviation on synchrophasor calculation and correction methods,"This paper gives a brief introduction to the Phasor Measurement Unit (PMU) in the Wide Area Measurement System (WAMS) of power systems, and studies the error characteristics of the discrete Fourier calculation of the phasor under the condition of a sinusoidal signal with frequency deviation from the nominal frequency.
Then, a program that calculates the phasor using the discrete Fourier transform was implemented on the MATLAB platform, and the simulation analysis of off-nominal frequency was studied in detail. The simulation results showed that the error characteristics of the calculated phasor take on an error-ellipse shape. Finally, several methods of error correction were introduced for calculating the accurate phasor in power systems.",2009,0, 8533,Comparison of artificial neural networks and conventional algorithms in ground fault distance computation,"This paper gives a comparison between an artificial neural network method and a differential equation algorithm and wavelet algorithm in transient-based earth fault location in 20 kV radial power distribution networks. The items discussed are earth fault transients, signal pre-processing, and the performance of the proposed distance estimation methods. The networks considered are either unearthed or resonant earthed. The comparison showed that the neural network algorithm was better than the conventional algorithms in the case of very low fault resistance. The mean error in fault location was about 1 km in the field tests using staged faults, which were recorded in real power systems. With higher fault resistances, the conventional algorithms worked better",2000,0, 8534,Spurious Valleys in the Error Surface of Recurrent Networks: Analysis and Avoidance,"This paper gives a detailed analysis of the error surfaces of certain recurrent networks and explains some difficulties encountered in training recurrent networks. We show that these error surfaces contain many spurious valleys, and we analyze the mechanisms that cause the valleys to appear. We demonstrate that the principal mechanism can be understood through the analysis of the roots of random polynomials. This paper also provides suggestions for improvements in batch training procedures that can help avoid the difficulties caused by spurious valleys, thereby improving training speed and reliability.",2009,0, 8535,Chip-level soft error estimation method,This paper gives a review of considerations necessary for the prediction of soft error rates (SERs) for microprocessor designs. It summarizes the physics and silicon process dependencies of soft error mechanisms and describes the determination of SERs for basic circuit types. It reviews the impact of logical and architectural filtering on SER calculations and focuses on the structural filtering of soft radiation events by nodal timing mechanisms.,2005,0, 8536,An early warning model for risk management of securities based on the error inverse propagation neural network,"This paper presents a tentative discussion of an early warning model for risk management under the separation of the sectors of the investment bank. The characteristics of the investment bank are analysed first, and then the risks of the investment bank are identified and classified. The index system of the early warning model is proposed. The error back-propagation neural network from neural network theory is introduced. This network is utilised to build the early warning model, and the corresponding algorithm is realized with the tool MATLAB.
The theoretical and practical significance of this paper lies in promoting the development of investment bank theory in our country and opening up the research range of financial theory; in particular, it has important theoretical meaning and application value for setting up an early warning system for securities business risk.",2005,0, 8537,Fault Detection System Based on Embedded Platform,"This paper introduces a gear-box fault detection system based on the PC/104 embedded platform. The gear-box is almost as important as the engine in a vehicle, so it is necessary to make sure that it is not out of order when driving. By collecting the vibration signals from the gears, axles and bearings in the gear-box, then processing and analyzing the data, we can determine the working condition of the gear-box. We can store the information in the PC/104 embedded module for future analysis or send the data to a control center or upper PC by network or serial port, thereby achieving remote monitoring in real-time. This system can help with safe driving, and it can reduce the risk of accidents caused by gear-box faults.",2008,0, 8538,Diagnosis of induction machines under transient conditions through the Instantaneous Frequency of the fault components,"This paper introduces a methodology for diagnosing different types of faults of induction machines working under transient conditions; the method is based on the extraction of the Instantaneous Frequency (IF) of the fault related components of stator current. It is shown that the IF of the fault components evolves in the time-frequency and slip-frequency planes following characteristic patterns, different for each type of fault; the identification of these characteristic patterns, which are theoretically explained, is proposed as the base of the diagnosis method. This paper also introduces several mathematical approaches which enable the extraction of the instantaneous frequency of the fault components of the transient stator current. Each of these methods is explained and validated with both simulated and tested signals. A comparison of the different methods for extracting the instantaneous frequency of the fault components is also given.",2010,0, 8539,Impact of quantization and roundoff errors on the performance of a noise radar correlator,"This paper evaluates the influence of quantization effects on the performance of correlators. The problem is decomposed into two smaller ones, each dealing with a different source of errors (quantization of input signals and finite-precision arithmetics) separately. A discussion of the first type of errors is held at a general level and is applicable to any type of correlator. However, roundoff effects depend strongly on details of computations. Therefore, a case study is performed using a fixed-point FFT-based correlator, implemented using the LogiCORE IP FFT v. 7.0 engine.",2010,0, 8540,Reliability analysis of fault tolerant drive topologies,"This paper examines fault tolerant power converter topologies and develops a technique for producing data to compare the reliability of topologies. The various states of fault-tolerant systems are modelled, following which reliability curves and failure rate data are constructed.",2008,0, 8541,Accuracy of dynamical models for analog iterative error control decoders,"This paper examines the accuracy of an abstract dynamical model for continuous-time analog iterative error-control decoders. An existing compact dynamical model formulates iterative decoding as a fixed point problem.
The dynamics of each analog cell are modeled by a non-linear differential equation with a single time-constant, which is solved numerically using Euler's method. We propose a fitness test to evaluate the accuracy of this abstract model. For randomly constructed codes and random stimuli, circuit descriptions are synthesized using both the abstract dynamical model and SPICE. A comparison is performed to measure the correspondence between the two simulations for codes of increasing length and complexity",2005,0, 8542,Fault Management Driven Design with Safety and Security Requirements,"This paper exemplifies principles of embedded system design that support safety and security using operational error management in the frame of a dedicated Computer-Based System architecture. After reviewing basic principles of Cyber-Physical Systems as a novel slant on modeling and design in this domain, attention is focused on a real-world solution of a safety and security critical embedded system application offering a genuine demonstration of that approach. The contribution stresses those features that distinguish the real project from a demonstration case study.",2010,0, 8543,Safe current injection strategies for a STATCOM under asymmetrical grid faults,"This paper explores different strategies to set the reference current of a STATCOM under unbalanced grid voltage conditions and determines the maximum deliverable reactive power in each case to guarantee the injected current is permanently within the STATCOM secure operation limits. The paper presents a comprehensive derivation of the proposed STATCOM control strategies to set the reactive current reference under unbalanced grid faults, together with an extensive evaluation using simulation and experimental results from a low-scale laboratory setup in order to verify and validate the dynamic performance achieved by the proposed reactive current limiting algorithms.",2010,0, 8544,Sensitivity of Real-Time Operating Systems to Transient Faults: A case study for MicroC kernel,"This paper explores the sensitivity of RTOS kernels in safety-critical systems. We characterize and analyze the consequences of transient faults on key components of the MicroC kernel, a popular RTOS. We specifically focus on its task scheduling and context switching modules. Classes of fault syndromes specific to safety-critical real-time systems are identified. Results reported in this paper demonstrate that 34% of faults led to scheduling dysfunctions. In addition, 17% of faults result in system crashes. This represents an important fraction of faults that cannot be ignored during the design phase of safety-critical applications running under an RTOS.",2005,0, 8545,Analysis of balanced and unbalanced faults in power systems using dynamic phasors,"This paper extends dynamic phasor models for major elements from single-phase to 3-phase. This kind of 3-phase model can be used to simulate balanced and unbalanced faults conveniently. These models include generator, transmission line (lumped parameter), inductor, and capacitor etc. Among these models, models of transmission line (lumped parameter), inductor and capacitor keep fundamental components of time-varying Fourier coefficients, which are based on generalized averaging method. As for generator's model, we use the Park model, the negative-sequence equivalent circuit and zero-sequence equivalent circuit to compute the positive-sequence, negative-sequence and zero-sequence components, respectively.
Furthermore, we considered phasor dynamics of negative-sequence and zero-sequence. These models are ""quasi-accurate"" models compared with ""EMTP-like"" models, and are more accurate than ""quasi-stationary"" models. All of our models are tested using a 3-node network. By comparing our results with the ones from EMTP and PSASP, it is proved that this kind of 3-phase dynamic phasor model has higher precision than ""quasi-stationary"" models, and it is a middle-kind model between standard time-domain models and ""quasi-stationary"" models.",2002,0, 8546,Trend analysis techniques for incipient fault prediction,"This paper extends the application of the Laplace Test Statistic for trend analysis and prediction of incipient faults for power systems. The extensions proposed in this paper consider the situation where two parameters believed to contribute explicitly to the eventual failure are monitored. The developed extensions applied to actual incipient failure events provide promising results for prediction of the impending failure. It is demonstrated that by incorporating two parameters in the trend analysis, the robustness to outliers is increased and the flexibility is augmented by increasing the degrees of freedom in the generation of the alarm signal.",2009,0, 8547,Design of Unit Fault Diagnosis System Software Based on Artificial Immune System,"This paper first introduces the negative selection algorithm of the artificial immune system and the non-dimensional parameter, and then designs the unit fault diagnosis system software based on the artificial immune system by combining the negative selection algorithm and the non-dimensional parameter. The advanced nature, practicability, validity and correctness of the idea of the software design are proved by experiments on a test machine. The design of this software offers support for the practice of new algorithms and new technology.",2008,0, 8548,Automated fault tolerant control synthesis based on discrete games,"This paper focuses on how fault tolerant controllers can be designed based on discrete game abstractions of piecewise-affine hybrid systems (PAHS). The proposed method aims at automatic generation of the controllers by converting the discrete games to timed games, which can be solved by the UppAal toolbox. Winning strategies to the games are then refined to piecewise-affine control strategies, which is a type of gain scheduling. The feasibility of the method is shown through the automated design of a fault tolerant controller for a simple example.",2009,0, 8549,Differential Protection Based on Zero-Sequence Voltages for Generator Stator Ground Fault,"This paper introduces a new differential protection scheme based on zero-sequence voltages with 100% coverage for generator stator ground faults. Analysis shows that the Delta-fundamental zero-sequence voltages and the Delta-third-harmonic voltages at the generator neutral and the terminals will change simultaneously, and they present some similar characteristics. According to that, the new scheme that exploits the fault information of both the zero-sequence fundamental voltage and the third-harmonic voltage is described. As it combines the information of the zero-sequence fundamental voltage and the third-harmonic voltage, the scheme can detect the ground fault with high sensitivity over 100% of the winding.
Simulation and field test results show that the proposed scheme can obtain higher sensitivity than the traditional schemes",2007,0, 8550,SEU-induced persistent error propagation in FPGAs,This paper introduces a new way to characterize the dynamic single-event upset (SEU) cross section of an FPGA design in terms of its persistent and nonpersistent components. An SEU in the persistent cross section results in a permanent interruption of service until reset. An SEU in the nonpersistent cross section causes a temporary interruption of service. These cross sections have been measured for several designs using fault-injection and proton testing. Some FPGA applications may realize increased reliability at lower costs by focusing SEU mitigation on just the persistent cross section.,2005,0, 8551,Vibration monitoring and faults detection using wavelet techniques,"This paper introduces an efficient approach for fault detection in rotating machinery by analyzing its vibration signals using wavelet techniques. Specifically, our approach uses the wavelet packet transform (WPT) to decompose the vibration signals in the wavelet packet space, in order to reveal the transient information in these signals. Faults are efficiently detected by exploiting the mean values of the energy in the detail signals. The wavelet-based approach is also compared with the traditional Fourier-based one. Both the analysis and an extensive simulation of the two approaches clearly show the superiority of the WPT-based approach over the Fourier-based one, in efficiently diagnosing faults from vibration signals.",2007,0, 8552,Study on the Electrical Power Fault Recorder Integrated Analysis & Application System,"This paper introduces an electric power fault recorder integrated analysis & application system, which interconnects digital fault recorders of several substations and power houses that are widespread in Jiangxi province of China, and has the fault information gathered, processed, transmitted, shared and utilized automatically and effectively. A method based on the Newton iterative algorithm for precise fault data analysis was introduced into this system. In addition to realizing the conventional analysis functions that are provided by similar systems, this system realizes extended functions of flexible protection action judgment. Besides, formula editor and protective relaying testing simulation functions have been added and provide a novel approach for comprehensive fault analysis applications. This system is now running steadily at the dispatch center of Jiangxi Province. The good results obtained reveal the superiority and effectiveness of this system.",2006,0, 8553,A Multiple-Weight-and-Neuron-Fault Tolerant Digital Multilayer Neural Network,"This paper introduces an implementation method for multiple weight as well as neuron fault-tolerant multilayer neural networks. Their fault-tolerance is derived from our extended back propagation learning algorithm called the deep learning method. The method can realize a desired weight as well as neuron fault-tolerance in multilayer neural networks where weight values are floating-point and the sigmoid function is used to calculate neuron output values. In this paper, fault-tolerant multilayer neural networks are implemented as digital circuits where weight values are quantized and the step function is used to calculate neuron output values using the deep learning method, the VHDL notation, and the logic design software QuartusII of Altera Inc.
The efficiency of our method is shown in terms of fabrication-time cost, hardware size, neural computing time, generalization property, and so on",2006,0, 8554,Faults analysis and simulation for interior permanent magnet synchronous motor using Simulink@MATLAB,"This paper introduces major potential faults of the interior PM synchronous motor (IPMSM) and their simulation realization method based on Simulink@MATLAB. The IPMSM is often operated with MTPA and flux weakening control strategies. If a fault occurs, a large current or backwash voltage may be generated and damage the motor system. The faults of the IPMSM generally include single-phase open circuit, single-phase and 3-phase short circuit, uncontrolled generation, and switch-on failure of one transistor. When a different fault occurs, the circuit of the total system, including the motor and inverter, also changes. Therefore, it is necessary to analyze and establish an independent model for each kind of fault. In this paper, first, the system circuit is analyzed for each fault type. Then, the corresponding models based on Simulink@MATLAB are established. In the absence of experimental results the veracity of the simulation results cannot be verified, but the waveforms are explained by theoretical analysis.",2007,0, 8555,Fault-Tolerance Analysis of Multi-Phase Single Sided Matrix Converter for Brushless DC Drives,"This paper introduces the single sided matrix converter (SSMC) as a reliable power electronic drive for brushless dc motors used in aerospace applications. The multi-phase SSMC provides high reliability and fault tolerance with the penalty of more power devices. Dynamic Matlab simulations using a full model of the converter, machine and switching control algorithm are performed to investigate the fault tolerance of the multi-phase topologies.",2007,0, 8556,Grid-Enabled Fault Diagnostic System for Manufacturing Equipments,"This paper investigates in detail the development history of fault diagnostic technology for equipment, especially remote fault diagnostic systems based on Internet and Web technology. Then, this paper proposes a framework for a remote fault diagnostic system based on emerging Grid technology to enable coordinated resource sharing and problem solving among multiple equipment suppliers and equipment users, and then describes its architecture. Multiple fault diagnostic Grid services are integrated in a uniform web portal upon the Manufacturing Enterprise Grid Support Platform (MEGridSP). A case study is conducted based on our practice in implementing a fault diagnosis system for manufacturing equipment based on MEGridSP.",2008,0, 8557,Investigation of transient faults on JOP processor,"This paper investigates the effect and propagation of transient faults on JOP (Java Optimized Processor). The JOP processor is intended for embedded real-time system applications and the primary implementation technology is an FPGA. The investigation is based on 4350 transient faults which are injected into the processor using a simulation-based fault injection method. The effect and propagation of the faults on different parts of this processor are observed and evaluated. Based on the experimental results, the bytecode cache and bytecode memory are two sensitive parts of the processor, since about 100 percent of injected faults have affected them.
The results show that between 81% and 85% of injected faults cause processor failure and between 12% and 14% of non-failing faults are overwritten.",2010,0, 8558,Impact of correlation errors on optimum Kalman filter matrices gains identification in multicoordinate systems,"This paper investigates the impact that errors in the innovation correlation calculations have upon the steady-state Kalman filter gain identification. This issue arises in all real-time applications, where the correlations must be calculated from experimental data. The algorithm proposed by [L. Hong (1991)] is considered and equations describing the impact are established. Simulation results are presented and discussed. Finally, experimental results for the algorithm in [L. Hong (1991)], applied to estimate the states of a servo system, are presented.",2005,0, 8559,The effect of offset error and its compensation for a direct torque controlled interior permanent magnet synchronous motor drive,"This paper investigates the problem caused by the DC offset in measurements in the estimation of stator flux linkage and torque in a direct torque controlled interior permanent magnet synchronous motor (IPMSM) drive. Modeling and experimental studies on a direct torque controlled drive are carried out with a conventional integrator, which suffers from unacceptable drift of the stator flux estimation, and with a programmable, multistage cascaded low-pass filter, with a view to removing the drift caused by the offset. Results from these studies are presented",2001,0, 8560,Fault Detection and Diagnosis Based on Modeling and Estimation Methods,"This paper investigates the problem of fault detection and diagnosis in a class of nonlinear systems with modeling uncertainties. A nonlinear observer is first designed for monitoring faults. A radial basis function (RBF) neural network is used in this observer to approximate the unknown nonlinear dynamics. When a fault occurs, another RBF network is triggered to capture the nonlinear characteristics of the fault function. The fault model obtained by the second neural network (NN) can be used for identifying the failure mode by comparing it with any known failure modes. Finally, a simulation example is presented to illustrate the effectiveness of the proposed scheme.",2009,0, 8561,Fault Detection for Fuzzy Systems With Intermittent Measurements,"This paper investigates the problem of fault detection for Takagi-Sugeno (T-S) fuzzy systems with intermittent measurements. The communication links between the plant and the fault detection filter are assumed to be imperfect (i.e., data packet dropouts occur intermittently, which appear typically in a network environment), and a stochastic variable satisfying the Bernoulli random binary distribution is utilized to model the unreliable communication links. The aim is to design a fuzzy fault detection filter such that, for all data missing conditions, the residual system is stochastically stable and preserves a guaranteed performance. The problem is solved through a basis-dependent Lyapunov function method, which is less conservative than the quadratic approach. The results are also extended to T-S fuzzy systems with time-varying parameter uncertainties. All the results are formulated in the form of linear matrix inequalities, which can be readily solved via standard numerical software.
Two examples are provided to illustrate the usefulness and applicability of the developed theoretical results.",2009,0, 8562,Fault emulation and test pattern generation using reconfigurable computing,"This paper investigates the use of reconfigurable computing and readily available Field Programmable Gate Array (FPGA) platforms to expedite the generation of input patterns for testing integrated circuits after manufacture. In this paper, we describe our techniques that efficiently identify the fault locations and the most effective input patterns by leveraging the parallel nature of the FPGA hardware. Our results on benchmark circuits show that our approach is able to create the smallest test-set size for detection of nodes stuck-at high or low voltages.",2010,0, 8563,FPGA-based real-time simulation of fault tolerant current controllers for power electronics,"This paper investigates an FPGA-based real-time simulation of a fault tolerant current controller (FTCC) algorithm for three-phase inverter-fed electrical systems. The focus of the proposed method is on the identification of the faulty current sensor in AC machine drives and the actual reconfiguration between two control-sampling times, which ensures safe continuous operation of the faulty system. For performance verification, the method is analyzed within an FPGA-based real-time simulator (RTS) of the studied electrical system, based on criteria such as accuracy, execution speed and implementation complexity. For the RTS modeling process, a multi-sampling approach is adopted, allowing real-time operation with different time-steps. ModelSim simulations and experimental results are presented to emphasize the effectiveness of the proposed method.",2009,0, 8564,Achieving network on chip fault tolerance by adaptive remapping,This paper investigates achieving fault tolerance by adaptive remapping in the context of networks on chip. The problem of dynamic application remapping is formulated and an efficient algorithm is proposed to address single and multiple PE failures. The new algorithm can be used to dynamically react to and recover from PE failures in order to maintain system functionality. The quality of results is similar to that achieved using simulated annealing but in significantly shorter runtimes.,2009,0, 8565,A unified approach for fault tolerance and dynamic power management in fixed-priority real-time embedded systems,"This paper investigates an integrated approach for achieving fault tolerance and energy savings in real-time embedded systems. Fault tolerance is achieved via checkpointing, and energy is saved using dynamic voltage scaling (DVS). The authors present a feasibility analysis for checkpointing schemes for a constant processor speed as well as for variable processor speeds. DVS is then carried out on the basis of the feasibility analysis. The authors incorporate important practical issues such as faults during checkpointing, rollback recovery time, memory access time, and energy needed for checkpointing, as well as DVS and context switching overhead. Numerical results based on real-life checkpointing data and processor data sheets show that compared to fault-oblivious methods, the proposed approach significantly reduces power consumption and guarantees timely task completion in the presence of faults.",2006,0, 8566,Inverse Fault Detection and Diagnosis Problem in Discrete Dynamic Systems,This paper investigates an inverse fault detection and diagnosis problem in discrete dynamic systems.
The problem is how to adjust the system parameters according to the observed values of the inputs and outputs so that the system is concordant. First, we formulate the problem as a least squares problem with interval coefficients. Then two algorithms for this problem are presented. The first algorithm is based on the expected values of the observed inputs and outputs; it only requires solving a classical least squares problem and is robust. The second algorithm, using a linear programming approach, can deal with large-scale systems and is suitable for online adjustment.,2007,0, 8567,Fault tolerance for communication-based multirobot formation,"This paper investigates the ability of fault tolerance for multirobot formation, which is important for practical formation in complex environments. Our model enables a group of mobile robots to continue to complete given tasks by reorganizing their formation when some members fail. First, to build such a model, a multi-agent architecture is presented, which is implemented through communication. Second, we introduce the hierarchy graph of multirobot formation as the theoretical foundation of the fault tolerance system. The graph analysis is suitable for general leader-follower formation formats. The failure detection mechanism for formation is then discussed. Finally, an integrated fault tolerance algorithm is investigated, including the replacement of faulty robots and formation reconfiguration. The improved agent architecture adding the fault tolerance module is also presented. Experiments on real multiple mobile robots demonstrate that our design is feasible.",2004,0, 8568,Run-time resource management in fault-tolerant network on reconfigurable chips,"This paper investigates the challenges of run-time resource management in future coarse-grained network-on-reconfigurable-chips (NoRCs). Run-time reconfiguration is a key feature expected in future processing systems which must support multiple applications whose processing requirements are not known at design time. This paper investigates a stochastic routing algorithm in a NoC-based system with dynamically reconfigurable tiles, able to cope with the dynamic behaviour of run-time task mapping. Experimental results show the efficiency of the proposed stochastic task mapping.",2009,0, 8569,Performance improvement of video transmission in fast-fading channel using error concealment technique,"This paper investigates two methods to improve the visual quality of an MPEG video movie transmitted over a fast fading channel using GMSK modulation, namely an error control coding technique and a new proposed post-processing error concealment technique. Results are presented to show the visual quality.",2004,0, 8570,Fault Tolerant Maximum Likelihood Event Localization in Sensor Networks Using Binary Data,"This paper investigates Wireless Sensor Networks (WSNs) for achieving fault tolerant localization of an event using only binary information from the sensor nodes. In this context, faults occur due to various reasons and are manifested when a node outputs a wrong decision. The main contribution of this paper is to propose the Fault Tolerant Maximum Likelihood (FTML) estimator. FTML is compared against the Centroid (CE) and the classical maximum likelihood (ML) estimators and is shown to be significantly more fault tolerant.
Moreover, this paper compares FTML against the SNAP (Subtract on Negative Add on Positive) algorithm and shows that in the presence of faults the two can achieve similar performance; FTML is slightly more accurate while SNAP is computationally less demanding and requires fewer parameters.",2009,0, 8571,The dual parameterization approach to optimal least square FIR filter design subject to maximum error constraints,"This paper is concerned with the design of linear-phase finite impulse response (FIR) digital filters for which the weighted least square error is minimized, subject to maximum error constraints. The design problem is formulated as a semi-infinite quadratic optimization problem. Using a newly developed dual parameterization method in conjunction with Caratheodory's dimensional theorem, an equivalent dual finite dimensional optimization problem is obtained. The connection between the primal and the dual problems is established. A computational procedure is devised for solving the dual finite dimensional optimization problem. The optimal solution to the primal problem can then be readily obtained from the dual optimal solution. For illustration, examples are solved using the proposed computational procedure",2000,0, 8572,Transfer and Error Rate Measurement in the Lon Works Power Line Communication Systems,"This paper is focused on analyzing the transfer and error rate measurement using network variables in LonWorks systems. The measurement was done using the LonWorks power line modem mini evaluation kit (Mini EVK), fed via a standard RS-232 interface of a personal computer, together with special software developed for this purpose. The Mini EVK power line modem contains the PL3150 circuit, which is designed for intelligent building control and home automation. The results of the measurement of the transfer and error rate are presented.",2007,0, 8573,Fault detection and isolation based on system feedback,"This paper presents a method to detect transducer faults in closed-loop control systems. The requirements imposed on the fault detection algorithm are: a rapid response when a fault occurs; a reduced risk of raising false alarms; and a low computational effort. The paper presents the equations of the fault detection structure that underlie the software algorithms. In the last part of the paper, the algorithm is verified on the steam overhead equations developed in this paper.",2008,0, 8574,Byzantine Fault Tolerance for Agent Systems,"This paper presents a Byzantine fault tolerance method for agent systems. We extend Castro and Liskov's well-known practical Byzantine fault tolerance method for the server-client model to a method for the agent system model. There are two main differences between the methods. First, in agent systems we have to create replicas on both sides of the communicating agents, while in the server-client model of Castro and Liskov's method, replicas are created only on the server side, and the client is assumed to be non-faulty or is treated differently from a replica model. Second, due to the autonomous behavior of agents, we have to synchronize the timing of message reception among replicas.
Agents decide their actions based on their current state of knowledge and do not wait indefinitely for messages that may not reach them",2006,0, 8575,An Asymmetric Checkpointing and Rollback Error Recovery Scheme for Embedded Processors,"This paper presents a checkpointing scheme for rollback error recovery, called Asymmetric Checkpointing and Rollback Recovery (ACRR), which stores the processor states in an asymmetric manner. In this way, error recovery latency and the number of checkpoints are reduced to increase the probability of timely task completion for soft real-time applications. To evaluate the ACRR, the scheme was studied analytically. The analytical results show that the recovery latency is reduced as the non-uniformity of the checkpoints increases. As a case study, the ACRR is implemented and simulated on a behavioral VHDL model of the LEON2 processor. The simulation results follow the results obtained in the analytical study.",2008,0, 8576,Performance analysis of permanent magnet synchronous motor drives under inverter fault conditions,"This paper presents a comparative study regarding the performance of a permanent magnet synchronous motor drive under normal and faulty operating conditions. Two different failure types in the inverter are considered: single power switch and single phase open-circuit faults. In order to compare the drive performance under these three operating conditions, global results are presented concerning the analysis of some key parameters such as motor efficiency, power factor, electromagnetic torque, and current RMS and total harmonic distortion values.",2008,0, 8577,Bus guardians: an effective solution for online detection and correction of faults affecting system-on-chip buses,"This paper presents a methodology for designing system-on-chip (SOC) interconnection architectures providing a high level of protection from crosstalk effects. An event-driven simulator enriched with fault injection capabilities is exploited to evaluate the dependability level of the system being designed. The simulation environment supports several bus coding protocols and, thus, designers can easily evaluate different design alternatives. To enhance the dependability level of the interconnection architecture, we propose a distributed bus guardian scheme, where dedicated hardware modules monitor the integrity of the information transmitted over the bus and provide error correction mechanisms.",2001,0, 8578,In-system partial run-time reconfiguration for fault recovery applications on spacecrafts,"This paper presents a methodology for partially reconfiguring a field programmable gate array (FPGA) device using only limited onboard resources. This paper also seeks to provide a roadmap to developing the necessary tools and technologies to help design self-sufficient partial run-time reconfigurable systems for spacecraft avionic systems. To provide a vision for the technology, this paper recommends a few possible applications in spacecraft avionic systems, in fault tolerance and space-saving hardware.
In addition, some previous work on reconfigurable, modular avionics is also presented at the end as an example of applications.",2005,0, 8579,Model-based fault detection and isolation for a powered wheelchair,"This paper presents a model-based fault detection and isolation (FDI) scheme for a powered wheelchair, handling faults of both the internal sensors (two wheel-resolvers and a gyro) and the external sensor (a forward-looking laser range sensor), as well as actuators (two wheel motors). Hard faults of the internal sensors and actuators are diagnosed based on mode probability estimated with an interacting multi-model estimator. Soft faults, which appear in the internal sensors as changes of sensor gains, are diagnosed based on estimating the robot velocity with the fault-free external sensor via a scan matching method. Faults of the external sensor are detected based on errors related to the scan matching. Experimental results in indoor environments show the performance of our FDI method.",2010,0, 8580,Fault ride through control for a delta connected induction motor with an open winding fault by controlling the zero sequence voltage,This paper presents a modified fault ride through method for use when open circuit winding faults appear on an induction motor drive. The work includes a new feedforward compensation term introduced into the zero sequence component of the dq reference voltages which considerably reduces current and torque ripple in the faulted motor drive. A method for extending the operating range under fault conditions by employing intelligent field weakening control is presented. Experimental results are presented which demonstrate the effectiveness of the complete system and show that it can be incorporated onto existing commercial drives as a simple software addition.,2010,0, 8581,Design and implementation of a safety communication network in railways with intelligent fault diagnosis,"This paper presents a network that connects various safety sensors located on level crossings and in stations. These sensors are used to detect obstacles on the railway line and proximity between trains. The information is centralised in the operations and control centre (OCC). The network has been designed in sections, each of which consists of a dual bus structure, with the particular feature that if one of the buses fails, the packets are routed to the other. Fault detection on the network is performed using intelligent diagnostic techniques, applying the IEEE 1232-2002 standard. By examining the result of the diagnosis, it is possible to ascertain the optimal route from each sensor to the OCC. Monitoring is performed using active network techniques. The diagnostic system sends packets containing code that is executed at each node.",2003,0, 8582,Neural network based fault diagnosis in an HVDC system,This paper presents a neural network (NN) based method for fault classification in a power system. A new method of generating the training data is proposed which has the advantage that the total number of fault simulations needed to generate the training patterns is less than that required by the conventional training method. This is obtained at the cost of a time delay in the NN output response.
The performance of the proposed method is investigated using the Matlab simulation model of a simple HVDC system,2000,0, 8583,BP neural network-based on fault diagnosis of hydraulic servo-valves,This paper presents a new approach for fault diagnosis of hydraulic servo-valves using a BP neural network based on a genetic algorithm. The paper uses a known set of faults as the output of the valve-behavior model. An appropriate neural network is established as the best solution to the problem. Adoption of this approach brings the advantages of shorter training time and higher accuracy when compared with other artificial neural networks.,2005,0, 8584,Analysis of reliability in nanoscale circuits and systems based on a-priori statistical fault-modeling methodology,"This paper presents a new approach for monitoring and estimating device reliability of nanometer-scale devices prior to fabrication. A four-layer architecture exhibiting a large immunity to permanent as well as random failures is used. A complete Monte Carlo based tool for a-priori functional fault tolerance analysis was developed, which induces different failure models and performs subsequent evaluation of system reliability under realistic constraints. A structured fault modeling architecture is also proposed, which together with the tool is part of the new reliability design method representing a compatible improvement of existing IC design methodologies",2005,0, 8585,Towards a new fault diagnosis system for electric machines based on dynamic probabilistic models,This paper presents a new approach to diagnose faults in electrical systems based on probabilistic modelling and machine learning techniques. Our framework consists of two phases: an approximate diagnosis in the first phase and a refined diagnosis in the second phase. In the first phase the system behavior is modelled with a dynamic Bayesian network that generates a subset of the most likely faulty components. In this phase the structure and parameters of the dynamic Bayesian network are learned off-line from raw data (discrete and continuous). In the second phase a particle filter algorithm is used to monitor suspicious components and extract the faulty components. The feasibility of this approach has been tested in a simulation environment using several interconnected electrical machines.,2005,0, 8586,Fault Diagnostic in Power System Using Wavelet Transforms and Neural Networks,"This paper presents a new approach to fault detection and diagnosis in power systems. Discrete wavelet transforms (DWT) combined with neural networks (NN) have been applied to a typical three-phase inverter. A set of faults has been examined, such as inverter IGBT open-circuit faults and leg open faults. The input signals of this algorithm are the three-phase stator currents. Identification and classification use the approximations and details at level 6 of these currents. The results of simulation show that the proposed technique can accurately detect, identify and classify the faults of interest in the power system.",2006,0, 8587,Fault location system on double circuit two-terminal transmission lines based on ANNs,"This paper presents a new approach to fault location in double-circuit two-terminal overhead transmission lines, using artificial neural networks (ANNs).
The method presented enables determination of the distance at which a fault occurs in a double-circuit two-terminal transmission line using the 50/60 Hz fundamental components of the fault and pre-fault voltage and current magnitudes, measured in each phase at the reference end. The accuracy of the method has been checked using properly validated fault simulation software developed with MATLAB. This software allows generation of faults at any location on the line, to obtain the fault and pre-fault voltage and current values. With these values, the fault can be classified and the corresponding ANN activated in order to determine the fault distance",2001,0, 8588,Using a Square-Wave Signal for Fault Diagnosis of Analog Parts of Mixed-Signal Electronic Embedded Systems,"This paper presents a new approach to the detection and localization of single hard and soft faults of analog parts in embedded mixed-signal electronic systems controlled with microcontrollers, DSPs, or systems-on-a-chip (SoCs) (generally control units). The approach consists of three stages: a pretesting stage of creation of the fault dictionary using identification curves, a measurement stage based on stimulating the tested circuit by a square-wave signal generated by the control unit, and measurements of voltage samples of the circuit response by the internal ADC of the control unit. In the final stage, fault detection and localization are performed by the control unit. The measurement microsystem [the built-in self test (BIST)] consists only of internal devices of the control unit already existing in the system. Hence, this approach simplifies the structure of BISTs, which allows reduction of test costs. The results of experimental verification of the approach are included in this paper.",2008,0, 8589,Scalable fault-tolerant logic system based on regular array of locally interconnected gates,"This paper presents a new approach towards fault-tolerant information processing. The proposed system combines different types of redundancy into a solution suitable for implementation with nanodevices. The architecture is based on a regular array of locally interconnected processing elements (PEs). The interconnections are binary programmable in order to achieve network versatility. The array can be divided into a set of segments in a flexible manner, providing a means for implementing functions with different levels of complexity and redundancy.",2008,0, 8590,Monitoring power quality disturbances under frequency changes using Complex Least Error Squares algorithm,"This paper presents a new complex least error squares (CLES) algorithm for the detection of power quality disturbances at off-nominal frequency. First, a new complex least error squares structure is introduced that compresses the three-phase signals into a complex vector and produces a tuning vector. In the second layer of the filter, estimation of symmetrical components under off-nominal frequency conditions is carried out using the produced tuning vector and a least error squares (LES) algorithm. The presented simulations evaluate the accuracy and efficiency of the proposed method.",2008,0, 8591,Local/global fault diagnosis of Event-Driven Systems based on Bayesian Network and Timed Markov Mode,"This paper presents a new decentralized (local/global) fault diagnosis strategy for event-driven controlled systems such as the programmable logic controller (PLC).
First of all, the controlled plant is decomposed into subsystems, and the global diagnosis is formulated using a Bayesian network (BN), which represents the causal relationship between faults and observations in the subsystems. Second, the local diagnoser is developed using the conventional timed Markov model (TMM), and the local diagnosis results are used to specify the conditional probability assigned to each arc in the BN. By exploiting the decentralized diagnosis architecture, the computational burden for the diagnosis can be distributed to the subsystems. As a result, large-scale diagnosis problems arising in practical situations can be solved. Finally, the usefulness of the proposed strategy is verified through some experimental results of an automatic transfer line.",2007,0, 8592,"New expert system for fault diagnosis in middle Delta Electricity Zone, Egypt","This paper presents a new expert system for fault diagnosis in power systems. For this expert system, expertise is represented by propositional logic and converted into one Boolean function. By applying the Quine-McCluskey tabular method, all Prime Implicants (PIs) of the function can be obtained automatically. The method estimates faulty sections using the current state of protective relays by simply checking an intuitive look-up table. The paper proposes the application of this approach for fault diagnosis in the 66 kV network of the Delta Electricity Zone, Egypt. The development procedure for this new expert system is addressed. Verification of the relay and CD information is performed based on an Error Detection and Correction Table (EDCT) using PIs that contain only facts, or states of protective devices. Therefore, off-line inference is possible by off-line identification of the PIs, which greatly reduces the on-line inference to fit real-time applications. The results show that the developed logic-based expert system is highly efficient.",2006,0, 8593,A New Neural-Network-Based Fault Diagnosis Approach for Analog Circuits by Using Kurtosis and Entropy as a Preprocessor,"This paper presents a new fault diagnosis method for analog circuits. The proposed method extracts the original signals from the output terminals of the circuits under test (CUTs) by a data acquisition board and finds the kurtoses and entropies of the signals, which are used to measure the high-order statistics of the signals. The entropies and kurtoses are then fed to a neural network as inputs for further fault classification. The proposed method can detect and identify faulty components in an analog circuit by analyzing its output signal with high accuracy and is suitable for nonlinear circuits. Preprocessing based on the kurtosis and entropy of signals for the neural network classifier simplifies the network architecture, reduces the training time, and improves the performance of the network. The results from our examples showed that the trochoid of the entropies and kurtoses is unique when the faulty component's value varies from zero to infinity; thus, we can correctly identify the faulty components when the responses do not overlap. Applying this method to three linear and nonlinear circuits, the average accuracy of the achieved fault recognition is more than 99%, although there are some overlapping data when tolerance is considered.
Moreover, all the trochoids converge to one point when the faulty component is open-circuited, and thus, the method can classify not only soft faults but also hard faults.",2010,0, 8594,A new PMU-based fault location algorithm for series compensated lines,"This paper presents a new fault location algorithm based on phasor measurement units (PMUs) for series compensated lines. Traditionally, the voltage drop of a series device is computed by the device model in the fault locator of series compensated lines, but when using this approach errors are induced by the inaccuracy of the series device model or the uncertain operation mode of the series device. The proposed algorithm does not utilize the series device model or knowledge of the operation mode of the series device to compute the voltage drop during the fault period. Instead, the proposed algorithm uses a two-step procedure, a prelocation step and a correction step, to calculate the voltage drop and fault location. The proposed technique can be easily applied to any series FACTS compensated line. EMTP-generated data using a 30-km 34-kV transmission line have been used to test the accuracy of the proposed algorithm. The tested cases include various fault types, fault locations, fault resistances, fault inception angles, etc. The study also considers the effect of various operation modes of the compensated device during the fault period. Simulation results indicate that the proposed algorithm can achieve up to 99.95% accuracy for most tested cases",2002,0, 8595,Fault tolerant control for Takagi-Sugeno systems with unmeasurable premise variables by trajectory tracking,"This paper presents a new method for fault tolerant control of nonlinear systems described by Takagi-Sugeno fuzzy systems with unmeasurable premise variables. The idea is to use a reference model and design a new control law to minimize the state deviation between a healthy reference model and the eventually faulty actual model. This scheme requires knowledge of the system states and of the occurring faults. These signals are estimated by a Proportional-Integral Observer (PIO) or a Proportional-Multi-Integral Observer (PMIO). The fault tolerant control law is designed using the Lyapunov method to obtain conditions which are given in Linear Matrix Inequality (LMI) formulation. Finally, an example is included.",2010,0, 8596,Method for Extraction Wavelet Packets' Coefficients in Loudspeaker Fault Detection Based on PCA,"This paper presents a new method using principal component analysis (PCA) to eliminate data redundancy in loudspeaker fault detection. It uses the wavelet packet transform (WPT) to decompose the loudspeaker acoustic signal into 32 packet node signals. The mean, maximum, standard deviation and correlation coefficient of every node envelope are then obtained. By observation, 63 coefficients that are helpful for fault detection are selected from the original 128. Using the method above, a further 32 coefficients are removed from the 63. The failed loudspeaker can then be found with the help of an artificial neural network (ANN). Experiments prove that the method is very effective.",2008,0, 8597,Applications of Wavelet-Packet in Fault Analysis of Hydroelectric Sets,"This paper presents a new method using the wavelet packet transform for fault diagnosis of hydroelectric generating sets.
The wavelet packet analysis unit is used to achieve multi-level wavelet packet decomposition of the vibration signals, providing an effective means of detecting and identifying the high-frequency components that may exist in fault signals. Computer simulation shows that this signal analysis method is very effective for hydro-generator unit vibration fault diagnosis.",2010,0, 8598,Procedure call duplication: minimization of energy consumption with constrained error detection latency,"This paper presents a new software technique for detecting transient hardware errors. The objective is to guarantee data integrity in the presence of transient errors and to minimize energy consumption at the same time. Basically, we duplicate computations and compare their results to detect errors. There are three choices for duplicate computations: (1) duplicating every statement in the program and comparing their results, (2) re-executing procedures with duplicated procedure calls and comparing the results, (3) re-executing the whole program and comparing the final results. Our technique is the combination of (1) and (2): Given a program, our technique analyzes the procedure call behavior of the program and determines which procedures should have duplicated statements (choice (1)) and which procedure calls should be duplicated (choice (2)) to minimize energy consumption while meeting error detection latency constraints. Then, our technique transforms the original program into a program that is able to detect errors with reduced energy consumption by re-executing the statements or procedures. In benchmark program simulation, we found that our technique saves over 25% of the required energy on average compared to previous techniques that do not take energy consumption into consideration",2001,0, 8599,Temporal classification for fault-prediction in a real-world telecommunications network,This paper presents a new temporal classification approach for fault-prediction in a telecommunications network. The countrywide data network of Pakistan Telecom (PTCL) has been selected as a basis for the investigation of classification algorithms to predict faults before they stop a large number of users' circuits from normal operation. The main problems addressed are the evaluation of alarms and the development of new machine learning tools to help overcome the interoperability issues. The motivation behind this work is to assist human operators and minimize the cost of the alarm evaluation process.,2005,0, 8600,Location of internal faults in high voltage lines and distribution equipment,"This paper presents a new test methodology, based on digital X-ray technology, to observe the inside of electrical equipment without damaging it; it reveals internal faults, making it easy to decide whether replacement is necessary.",2004,0, 8601,Monitoring and Diagnosis of Faults in Interior Permanent Magnet Motors Using Discrete Wavelet Transform,"This paper presents a novel application of a discrete wavelet transform (DWT) based algorithm for detecting and diagnosing various disturbances caused by electrical faults in the three-phase interior permanent magnet (IPM) motor. The DWT coefficients of fault currents of different levels of resolution using a selected mother wavelet are transformed to root mean square (RMS) values through the determination of signal energies of different frequency bands of the DWT.
The RMS values are then used in a rule-based classifier (RBC) in order to differentiate between the various faulted and normal conditions. The criterion for fault detection is the comparison of the DWT coefficients of the fifth-level details (d5) of fault currents, using a selected mother wavelet, with a threshold determined experimentally during the healthy condition of the motor. The complete protection technique incorporating the proposed DWT-based diagnosis algorithm is implemented in real-time using the ds1102 digital signal processor board for a laboratory 1-hp IPM motor. It is found that the proposed DWT-based protection algorithm is very fast, responding at the instant of, or within one cycle of, the fault occurrence in all cases of investigated faults.",2006,0, 8602,A New Approach for Skew Correction of Documents Based on Particle Swarm Optimization,"This paper presents a novel approach for skew correction of documents. Skew correction is modeled as an optimization problem, and for the first time, particle swarm optimization (PSO) is used to solve skew optimization. A new objective function based on local minima and maxima of projection profiles is defined, and PSO is utilized to find the best angle that maximizes the differences between the values of local minima and maxima. In our approach, local minima and maxima converge to the locations of lines and the spaces between lines. Results of our skew correction algorithm are shown on documents written in different scripts such as Latin and Arabic-related scripts (e.g. Arabic, Farsi, Urdu, ...). Experiments show that our algorithm can handle a wide range of skew angles, and it is also robust to gray-level and binary images of different scripts.",2009,0, 8603,Wireless Sensor Network Modeling Using Modified Recurrent Neural Networks: Application to Fault Detection,"This paper presents a dynamic model of wireless sensor networks (WSNs) and its application to sensor node fault detection. Recurrent neural networks (NNs) are used to model a sensor node, the node's dynamics, and interconnections with other sensor network nodes. An NN modeling approach is used for sensor node identification and fault detection in WSNs. The input to the NN is chosen to include previous output samples of the modeling sensor node and the current and previous output samples of neighboring sensors. The model is based on a new structure of a backpropagation-type NN. The input to the NN and the topology of the network are based on a general nonlinear sensor model. A simulation example, including a comparison to the Kalman filter method, has demonstrated the effectiveness of the proposed scheme.",2008,0,5095 8604,High-Speed Fault Classification in Power Lines: Theory and FPGA-Based Implementation,"This paper presents a fast hardware-efficient logic for fault detection and classification in transmission lines, implemented using a field-programmable gate array (FPGA). The general-purpose SPARTAN3E FPGA was employed for developing the prototype, with all the coding done using the very high speed integrated circuit hardware description language (VHDL). The proposed logic employs only one-terminal current samples and is based on wavelet analysis. Depending on the amount of high frequency components in the current signals after processing, the faults are classified into ten types. The real time windows target toolbox of MATLAB was used to apply the current signal inputs to the prototype in real time.
An adaptive threshold value is chosen, rather than a fixed threshold, in the case of faults involving the ground, to make the classification reliable and accurate. The fault classification time is 6 ms, which is about 1/3 of the power frequency cycle (20 ms). A high level of computational efficiency is achieved compared to other wavelet-transform-based algorithms, since only the high-frequency details at the first level are employed in this algorithm. The validity of the proposed logic was exhaustively tested by simulating various types of faults on a system modeled in the electromagnetic transients program/alternative transients program. The proposed logic was found to be highly reliable and accurate, even in the presence of fault resistance.",2009,0, 8605,Towards a DECOS Fault Injection Platform for Time-Triggered Systems,"This paper presents a fault injection platform targeting the communication bus in the DECOS platform, which uses a time-triggered communication protocol such as TTP or FlexRay. Communication errors are injected by a disturbance node, which emulates errors caused by external sources, e.g. from electrical relays in road vehicles or from lightning affecting airplanes. The platform is flexible and the communication protocol can be reconfigured from TTP to FlexRay or vice versa. The experiments are configured by XML scripts and controlled by Lauterbach TRACE32 software and hardware. Raw data from the experiments are stored in SRAM memory without halting the program execution, so only minor time intrusiveness is introduced for logging data. After each experiment raw data are downloaded from memory, automatically parsed and converted into integer and e.g. float data types, and finally stored in a MySQL database for analysis. Several analysis functions are developed to evaluate the effectiveness of hardware-implemented and software-implemented error detection and recovery mechanisms.",2007,0, 8606,Fault-tolerant computations over replicated finite rings,"This paper presents a fault-tolerant technique based on the modulus replication residue number system (MRRNS), which allows for modular arithmetic computations over identical channels. In this system, fault tolerance is provided by adding extra computational channels that can be used to redundantly compute the mapped output. An algebraic technique is used to determine the error position in the mapped outputs and provide corrections. We also show that by taking advantage of some elementary polynomial properties we obtain the same level of fault tolerance with about a 30% decrease in the number of channels. This new system is referred to as the symmetric MRRNS (SMRRNS).",2003,0, 8607,Towards Byzantine Fault Tolerance in Many-Core Computing Platforms,This paper presents a flexible technique that can be applied to many-core architectures to exploit idle resources and ensure reliable system operation. A dynamic fault tolerance layer is interposed between the hardware and OS through the use of a hypervisor. The introduction of a single point of failure is avoided by incorporating the hypervisor into the sphere of replication. This approach simplifies implementation over specialized hardware- or OS-based techniques while offering flexibility in the level of protection provided, ranging from duplex to Byzantine protection.
The feasibility of the approach is considered for both near- and long-term computing platforms.,2007,0, 8608,Fault-tolerant system dependability-explicit modeling of hardware and software component-interactions,"This paper presents a framework for modeling the dependability of hardware and software fault-tolerant systems, explicitly taking into account the dependencies among the components. These dependencies can result from: (a) functional or structural interactions between the components or (b) interactions due to global system reconfiguration and maintenance strategies. Modeling is based on GSPN (generalized stochastic Petri nets). The modeling approach is modular: the behavior of each component and each interaction is represented by its own GSPN, while the system model is obtained by composition of these GSPNs. Composition rules are defined and formalized through clear identification of the interfaces between the component and interaction nets. In addition to modularity, the formalism brings flexibility and re-usability, thereby allowing easy sensitivity analysis with respect to the assumptions that could be made about the behavior of the components and the resulting interactions. This approach has been successfully applied to select new architectures for the French Air Traffic Control system, based, among other things, on availability evaluation. This paper illustrates it on a simple representative example, including all the types of identified dependencies: the duplex system. Modeling of this system showed the strong dependence between components",2000,0, 8609,Layout-Based Defect-Driven Diagnosis for Intracell Bridging Defects,"This paper presents a layout-based methodology to predict the exact physical location of a bridging defect inside a standard cell. It involves a number of techniques. First of all, the most likely intracell bridging defects are identified through layout analysis and then converted into equivalent logic models. Next, we use a new defect-oriented formulation to generate a test pattern for each candidate defect so as to further enhance the diagnostic resolution. Experimental results indicate that this methodology can remove 90% of false defect candidates beyond gate-level diagnosis for four real designs and the ISCAS'85 benchmark circuits.",2009,0, 8610,Measurement-based analysis of fault and error sensitivities of dynamic memory,"This paper presents a measurement-based analysis of the fault and error sensitivities of dynamic memory. We extend a software-implemented fault injector to support data-type-aware fault injection into dynamic memory. The results indicate that dynamic memory exhibits about 18 times higher fault sensitivity than static memory, mainly because of the higher activation rate. Furthermore, we show that errors in a large portion of static and dynamic memory space are recoverable by simple software techniques (e.g., reloading data from a disk). The recoverable data include pages filled with identical values (e.g., '0') and pages loaded from files unmodified during the computation. Consequently, the selection of targets for protection should be based on knowledge of recoverability rather than on error sensitivity alone.",2010,0, 8611,Multiplicative fault reconstruction using sliding mode observers,"This paper presents a method for reconstructing multiplicative faults. The approach uses a sliding mode observer, and builds on previous work on reconstructing additive faults.
By remodelling the multiplicative faults within the framework of additive faults and applying some modifications to the existing method, the multiplicative faults can be reconstructed.",2004,0, 8612,Dynamically adding redundancy for improved error concealment in packet voice coding,"This paper presents a method to improve the performance of redundancy-based packet-loss-concealment (PLC) schemes. Many redundancy-based PLC schemes send a fixed amount of extra information about the current packet as part of the subsequent packet, but not every packet is equally important for PLC. We have developed a method to determine the importance of packets and we propose that redundant information should only be sent for the important packets. This results in a lower average bit-rate compared to sending a fixed amount of extra information, without sacrificing much from the quality of the concealment. We use a linear prediction (LP) based speech coder (ITU-T G.723.1) as a test platform and we propose that only the excitation parameters should be sent as extra information since the LP parameters of a frame can be estimated using the LP parameters of the previous frame.",2005,0, 8613,Architectural and algorithm level fault tolerant techniques for low power high yield multimedia devices,"This paper presents a novel architecture that allows a high level of fault tolerance in embedded memory devices for multimedia applications. The benefits are twofold: systems can operate at a lower voltage, thus saving power, while yield is improved via error masking. The proposed architecture performs a remapping of defective parts of the memory while allowing single-cycle access to the remapped portions. Furthermore, it provides run-time control of the enforced protection policies, leading to an expanded design space that trades off power, error tolerance and quality. Simulations indicate a reduction of up to 35% in encoder power for a 65 nm CMOS process.",2008,0, 8614,A Novel Rotor Ground-Fault-Detection Technique for Synchronous Machines With Static Excitation,"This paper presents a novel ground-fault-detection technique for synchronous machines. This technique is suitable for synchronous machines with static excitation systems, whose excitation field winding is fed by rectifiers through an excitation transformer. The main contribution of this new technique is that it can detect and discriminate both ac- and dc-side ground faults in the excitation system, without the need for traditional power injection sources. This detection technique is based on the frequency analysis of the voltages or currents at a grounding impedance placed at the excitation transformer neutral terminal. This technique has been validated through computer simulations and experimental laboratory tests.",2010,0, 8615,Novel method for vector mixer characterization and mixer test system vector error correction,"This paper presents a novel method for characterizing RF mixers, yielding the magnitude and phase response for input match, output match, and conversion loss. It works for mixers which have reciprocal conversion loss and for which the image response can be filtered out. The characterized mixer is used to accomplish a full vector correction of a mixer test system, which can measure other mixers that are not reciprocal.
A key contribution is phase and absolute group delay measurements of the mixer-under-test.",2002,0, 8616,Calculation of transformer internal faults in short circuit analysis,"This paper presents a novel method of modeling internal faults in a power transformer. The method leads to a model which is compatible with commercial phasor-based software packages. Consequently, it enables calculation of fault currents in any branch of the network due to a winding fault of a power transformer. These currents can be used for evaluation of protective relays' performance and can lead to better setting of protective functions.",2008,0, 8617,Fault Detection by Means of Hilbert-Huang Transform of the Stator Current in a PMSM With Demagnetization,"This paper presents a novel method to diagnose demagnetization in a permanent-magnet synchronous motor (PMSM). Simulations have been performed by 2-D finite-element analysis in order to determine the current spectrum and the magnetic flux distribution due to this failure. A diagnostic based only on motor current signature analysis can be confused by an eccentricity failure because the harmonic content is the same. Moreover, it can only be applied under stationary conditions. In order to overcome these drawbacks, a novel method is used based upon the Hilbert-Huang transform. It represents time-dependent series in a 2-D time-frequency domain by extracting instantaneous frequency components through an empirical-mode decomposition process. This tool is applied by running the motor under nonstationary conditions of velocity. The experimental results show the reliability and feasibility of the methodology in order to diagnose the demagnetization of a PMSM.",2010,0, 8618,Methodology for automated testing of transmission line fault locator algorithms,"This paper presents a novel methodology for automated testing of fault locator algorithms and their output sensitivity. The proposed methodology is based on the Quasi Monte Carlo technique implemented in SIMLAB. The proposed methodology interfaces among the SIMLAB, MATLAB and EMTP/ATP programs. A case study with a number of simulated fault locations along a transmission line is presented in the paper. For each fault location, the fault resistance and pre-fault power flow on the line were varied. In order to measure the sensitivity of the fault locator output to fault resistance and power flow, a large number of cases with varying fault resistance and power flow were generated using the Sobol sampling technique. Two types of fault locator algorithms, the Reactance algorithm and the Takagi algorithm, are tested. The results obtained and presented in this paper indicate the performance of each algorithm.",2009,0, 8619,Application-driven co-design of fault-tolerant industrial systems,"This paper presents a novel methodology for the HW/SW co-design of fault tolerant embedded systems that pursues the mitigation of radiation-induced upset events (which are a class of Single Event Effects - SEEs) on critical industrial applications. The proposal combines the flexibility and low cost of Software Implemented Hardware Fault Tolerance (SIHFT) techniques with the high reliability of selective hardware replication. The co-design flow is supported by a hardening platform that comprises an automatic software hardening environment and a hardware tool able to emulate Single Event Upsets (SEUs).
As a case study, we selected a soft-core microprocessor (PicoBlaze) widely used in FPGA-based industrial systems, and a fault tolerant version of the matrix multiplication algorithm was developed. Using the proposed methodology, the design was guided by the requirements of the application, leading us to explore several trade-offs among reliability, performance and cost.",2010,0, 8620,Non-differential protection of a generator's stator utilizing fault transients,"This paper presents a novel protection scheme for detecting faults on the stator of a generator unit which is directly connected to a distribution system. In the scheme, a multi-channel fault transient detection unit, using the outputs of the current transformers (CTs) at the generator terminal, is employed to extract the fault-generated transient current signals. The detector unit is tuned to extract two bands of fault-generated transient signals with different center frequencies. A spectral comparison technique is applied to first compute the spectral energies of the two band signals; the fault diagnosis then determines whether the fault is internal or external by comparing the ratio of the two signals with a predefined threshold. The scheme offers the advantages of immunity to CT saturation, and is capable of detecting both low-level and interturn faults. In addition, the protection scheme is also simple in application, and is cost-effective in that it only requires one set of CTs. Simulation studies show that the proposed technique can give correct responses for various fault conditions",2001,0, 8621,Fault-Tolerant and Fail-Safe Control Systems - Using Remote Redundancy,"This paper presents a novel redundancy concept for safety-critical control systems. By using signature-protected communication, it allows connecting each redundant peripheral just to the most proximate control computer while forwarding information to or from any other units (sensors, actuators, further control computers) over a bus system. We will show that the wiring harness can thus be reduced drastically with regard to both weight and complexity without compromising fault tolerance characteristics. Moreover, since function and location are decoupled, remote redundancy can be shared between different subsystems if more than one control loop (e.g. brakes and steering) exists in the overall system. Finally, our approach is highly flexible and not at all restricted to a certain degree of fault tolerance, as example systems for both a fault-tolerant and a fail-safe application (steer-by-wire/flap control) will demonstrate.",2009,0, 8622,Design of a Window Comparator with Adaptive Error Threshold for Online Testing Applications,This paper presents a novel window comparator circuit whose error threshold can be adaptively adjusted according to its input signal levels. It is ideal for analog online testing applications. Advantages of adaptive comparator error thresholds over constant or relative error thresholds in analog testing applications are discussed. Analytical equations for guiding the design of the proposed comparator circuitry are derived. The proposed comparator circuit has been designed and fabricated using a CMOS 0.18-μm technology.
Measurement results of the fabricated chip are presented.,2007,0, 8623,"Error Behavior Comparison of Multiple Computing Systems: A Case Study Using Linux on Pentium, Solaris on SPARC, and AIX on POWER","This paper presents an approach to conducting experimental studies for the characterization and comparison of the error behavior in different computing systems. The proposed approach is applied to characterize and compare the error behavior of three commercial systems (Linux 2.6 on Pentium 4, Solaris 10 on UltraSPARC IIIi, and AIX 5.3 on POWER 5) under hardware transient faults. The data is obtained by conducting extensive fault injection into kernel code, kernel stack, and system registers with the NFTAPE framework while running the Apache Web server as a workload. The error behavior comparison shows that the Linux system has the highest average crash latency, the Solaris system has the highest hang rate, and the AIX system has the lowest error sensitivity and the least amount of crashes in the more severe categories.",2008,0, 8624,An Improved Fault Simulation Approach Based on Verilog with Application to ISCAS Benchmark Circuits,"This paper presents an approach to fault simulation in the particular context of ISCAS 85 combinational benchmark circuits based on hardware description language (HDL) specification of their gate level netlists. The approach, exploiting the existing force and release features available in Verilog, builds an effective fault simulator by properly utilizing Verilog syntax with application to fault modeling. The implemented simulator system is able to emulate all of the ISCAS 85 combinational circuits. Experimental results show that access to the source code of the HDL simulator or its modification is not a requirement to compute faulty responses from a circuit under test (CUT). The proposed simulator is platform independent, thereby making its utility substantially worthwhile.",2006,0, 8625,Hardware/software optimization of error detection implementation for real-time embedded systems,"This paper presents an approach to system-level optimization of error detection implementation in the context of fault-tolerant real-time distributed embedded systems used for safety-critical applications. An application is modeled as a set of processes communicating by messages. Processes are mapped on computation nodes connected to the communication infrastructure. To provide resiliency against transient faults, efficient error detection and recovery techniques have to be employed. Our main focus in this paper is on the efficient implementation of the error detection mechanisms. We have developed techniques to optimize the hardware/software implementation of error detection, in order to minimize the global worst-case schedule length, while meeting the imposed hardware cost constraints and tolerating multiple transient faults. We present two design optimization algorithms which are able to find feasible solutions given a limited amount of resources: the first one assumes that, when implemented in hardware, error detection is deployed on static reconfigurable FPGAs, while the second one considers partial dynamic reconfiguration capabilities of the FPGAs.",2010,0, 8626,Reconfigurable context-free grammar based data processing hardware with error recovery,"This paper presents an architecture for context-free grammar (CFG) based data processing hardware for reconfigurable devices.
Our system leverages CFGs to tokenize and parse data streams into a sequence of words with corresponding semantics. Such a tokenizing and parsing engine is sufficient for processing grammatically correct input data. However, most pattern recognition applications must consider data sets that do not always conform to the predefined grammar. Therefore, we augment our system to detect and recover from grammatical errors while extracting useful information. Unlike the table-lookup method used in traditional CFG parsers, we map the structure of the grammar rules directly onto the field programmable gate array (FPGA). Since every part of the grammar is mapped onto independent logic, the resulting design is an efficient parallel data processing engine. To evaluate our design, we implement several XML parsers in an FPGA. Our XML parsers are able to process the full content of the packets at up to 3.59 Gbps on Xilinx Virtex 4 devices",2006,0, 8627,Thai OCR error correction using genetic algorithm,"This paper presents an efficient method for Thai OCR error correction based on a genetic algorithm (GA). The correction process starts with word graph construction from spell checking with a dictionary; the graph is then searched for a corrected sentence with the highest perplexity (using language models, bi-gram and tri-gram) and word probability from OCR. For a long sentence, the search space is huge, and the search can be resolved using GA. A list of nodes is used for chromosome encoding to represent all possible paths in a graph instead of a standard binary string. The performance of the suggested technique is evaluated and compared to the full search for tested sentences of different sizes constructed from 10-node to 200-node word graphs.",2002,0, 8628,Correction of the Sea State Impact in the L-Band Brightness Temperature by Means of Delay-Doppler Maps of Global Navigation Satellite Signals Reflected Over the Sea Surface,"This paper presents an efficient procedure based on 2-D convolutions to obtain delay-Doppler maps (DDMs) of Global Navigation Satellite Signals reflected (GNSS-R) over the sea surface and collected by a spaceborne receiver. Two DDM-derived observables (area and volume) are proposed to link the sea-state-induced brightness temperature to the measured normalized DDM. Finally, the requirements to use Global Positioning System reflectometry to accurately correct for the sea state impact on the L-band brightness temperature (quantization levels, decimation, truncation, and noise impact) are analyzed in view of its implementation in the Passive Advanced Unit instrument of the Spanish Earth Observation Satellite (SeoSAT/INGENIO) project.",2008,0, 8629,Soft error sensitivity characterization for microprocessor dependability enhancement strategy,"This paper presents an empirical investigation on the soft error sensitivity (SES) of microprocessors, using the picoJava-II as an example, through software simulated fault injections in its RTL model. Soft errors are generated under a realistic fault model during program run-time. The SES of a processor logic block is defined as the probability that a soft error in the block causes the processor to behave erroneously or enter into an incorrect architectural state. The SES is measured at the functional block level. We have found that highly error-sensitive blocks are common for various workloads. At the same time, soft errors in many other logic blocks rarely affect the computation integrity.
Our results show that a reasonable prediction of the SES is possible by deduction from the processor's microarchitecture. We also demonstrate that the sensitivity-based integrity checking strategy can be an efficient way to improve fault coverage per unit redundancy.",2002,0, 8630,Time-resolved scanning of integrated circuits with a pulsed laser: application to transient fault injection in an ADC,"This paper presents an experimental system for integrated circuit testing with a pulsed laser beam. The system is fully automated and simultaneously provides interesting spatial and temporal resolutions for various applications like fault injection, radiation sensitivity evaluation, or defect localization. In the presented application, the system is used to visualize signal propagation in an 8-bit half-flash analog-to-digital converter.",2004,0,8631 8631,Time-resolved scanning of integrated circuits with a pulsed laser: application to transient fault injection in an ADC,"This paper presents an experimental system for integrated circuit testing with a pulsed laser beam. The system is fully automated and simultaneously provides interesting spatial and temporal resolutions for various applications like fault injection, radiation sensitivity evaluation, or defect localization. In the presented application, the system is used to visualize signal propagation in an 8-bit half-flash ADC.",2003,0, 8632,Improving the accuracy of single-ended transient fault locators using mathematical morphology,"This paper presents a promising signal processing technique using a multi-resolution morphological gradient (MMG) method to efficiently extract transient sequences and accurately detect fault locations in a transmission line system. Based upon this methodology, MMG-based single-ended transient fault locators are developed. The simulation results show that the accuracy of the fault locators has been considerably improved.",2002,0, 8633,Rigorous geometric modeling and correction of QuickBird imagery,"This paper presents a quantitative evaluation of the geometric accuracy that can be achieved with QuickBird imagery using the metadata provided with DigitalGlobe's image products. We explore two geometric models: the rational function model, and a rigorous sensor model based on camera, attitude, and ephemeris data included in the product metadata. We assess the rational function model against the rigorous sensor model, showing that the two models provide comparable geometric accuracy. We then assess the absolute geometric accuracy of a sample set of QuickBird products, measuring both the systematic geometric accuracy, and demonstrating the improvements which can be achieved using a full photogrammetric block adjustment.",2003,0, 8634,Redundant Picture Coding Using Residual-Adaptive Polyphase Downsampling for H.264 Error Resiliency,"This paper presents a redundant picture coding method that employs a polyphase downsampling (PD) technique adaptive to the energy of the residual signal and efficiently enhances the error resiliency of H.264. The latest H.264 hierarchically allocates redundant pictures and encodes them with reference picture selection, which effectively restrains temporal error propagation. The proposed method takes advantage of the current redundant picture coding scheme and applies residual-energy-adaptive PD to the residual signal coding. A neighbor-friendly pixel construction method is also presented which further enhances the reconstruction quality.
Simulation results show that the proposed method favorably improves the H.264 error resiliency by 0.5 dB on average in terms of the PSNR.",2008,0, 8635,A robust sensorless fault diagnosis algorithm for low cost motor drives,"This paper presents a sensorless fault diagnosis technique for low cost AC drives. Currently, in order to achieve a reliable fault diagnosis, a high resolution speed sensor is employed to measure the frequencies of the fault signature, which depend on motor shaft speed. There is an increased tendency toward sensorless control of AC motor drives because of mounting problems, associated cost, etc. Therefore, speed sensors are becoming less common feedback tools in high performance motor control applications. Fault diagnosis becomes quite a challenging task in the absence of information from a speed sensor, as highly precise motor speed estimation is needed. In this paper, a simple and efficient algorithm for fault diagnosis is proposed utilizing a frequency tracking method which does not require speed sensor feedback. It is explicitly verified that the performance of the proposed algorithm is almost comparable to the cases where an accurate speed sensor is used. The algorithm is derived mathematically and its efficacy is proved experimentally using a 3-hp motor generator setup.",2010,0, 8636,Experiments on Fault-Tolerant Self-Reconfiguration and Emergent Self-Repair,"This paper presents a series of experiments on fault tolerant self-reconfiguration of the ATRON robotic system. For self-reconfiguration we use a previously described distributed control strategy based on meta-modules that emerge, move and stop. We perform experiments on three different types of failures: 1) Action failure: On the physical platform we demonstrate how roll-back of actions is used to achieve tolerance to collision with obstacles and other meta-modules. 2) Module failure: In simulation we show, for a 500 module robot, how different degrees of catastrophic module failure affect the robot's ability to shape-change to support an insecure roof. 3) Robot failure: In simulation we demonstrate how robot faults such as a broken robot bone can be self-repaired in an emergent manner by exploiting the redundancy of self-reconfigurable modules. We conclude that the use of emergent, distributed control, action roll-back, module redundancy, and self-reconfiguration can be used to achieve fault tolerant, self-repairing robots.",2007,0, 8637,Performance of fault-tolerant distributed shared memory on broadcast- and switch-based architectures,"This paper presents a set of distributed-shared-memory protocols that provide fault tolerance on broadcast-based and switch-based architectures with no decrease in performance. These augmented DSM protocols combine the data duplication required by fault tolerance with the data duplication that naturally results in distributed-shared-memory implementations. The recovery memory at each backup node is continuously maintained consistent and is accessible by all processes executing at the backup node. Simulation results show that the additional data duplication necessary to create fault-tolerant DSM causes no reduction in system performance during normal operation and eliminates most of the overhead at checkpoint creation. Data blocks which are duplicated to maintain the recovery memory are also utilized by the DSM protocol, reducing network traffic, and increasing the processor utilization significantly.
We use simulation and multiprocessor address trace files to compare the performance of a broadcast architecture called the SOME-Bus to the performance of two representative switch architectures.",2005,0, 8638,Interface faults injection for component-based integration testing,"This paper presents a simple and improved technique of interface fault insertion for conducting component integration testing through the use of aspect-oriented software development (AOSD). Taking advantage of aspects' cross-cutting features, this technique only requires additional code written in AspectJ rather than a separate tool to perform this operation. This aspect code acts as a wrapper around interface services and performs operations such as disabling the implementation of the interface services, raising exceptions or corrupting the inputs and outputs of interface services. Interface faults are inserted into the system under test to evaluate the quality of the test cases by ensuring not only that they detect errors due to the interactions between components, but also that they are able to handle exceptions raised when interface faults are triggered.",2006,0, 8639,Analysis of Transient Faults on a MIPS-Based Dual-Core Processor,"This paper presents a simulation-based fault injection analysis of a MIPS-based dual-core processor. In order to fulfill the requirement of this analysis, 114 different fault targets are used in various points of the main components, which are described in the VHDL language; each experiment was repeated 50 times, resulting in 5700 transient faults in this simulation model. The experimental results demonstrate that, depending on the fault injection targets and the benchmark characteristics, fault effects vary significantly. On average, up to 35.2% of injected faults are recovered in simulation time, while 52.6% of faults lead to system failure, and the remaining 12.2% remain as latent errors. Different benchmarks show different vulnerabilities for various components; but on average, the Arbiter and the Message passing interface are the most vulnerable components outside the tiles, while the PC and the Bus Handler have the highest failure rate among in-tile components. Fault injection on each region has a noticeable impact on the result of the other core. In general, fault injection in Shared regions has the highest contribution to system failure.",2010,0, 8640,"Fault Propagation Pattern Based DFA on Feistel Ciphers, with Application to Camellia","This paper presents a systematic Differential Fault Analysis (DFA) method on Feistel ciphers, the outcome of which closely links to that of theoretical cryptanalysis with provable security. For this purpose, we introduce the notions of Fault Propagation Path (FPPath) and Fault Propagation Pattern (FPPattern). With this method, the computation of FPPaths and FPPatterns can be programmed and performed automatically, which facilitates automated DFA on Feistel ciphers. In this case, the length of the FPPath can be regarded as a quantitative metric to evaluate the efficiency of DFA attacks. Moreover, one consequent result of this systematic method is performance enhancement. Specifically, not only the number of attacked rounds but also the number of fault injection points is reduced, which rapidly decreases the number of faulty ciphertexts required for successful attacks. To verify both the correctness and the efficiency of our method, we perform FPPattern-based DFA on Camellia.
By making better use of the fundamental property of the P-function utilized in Camellia, our attack, without any brute-force search, only requires 6 faulty ciphertexts to retrieve the 128-bit key and 22 faulty ciphertexts to recover the 192/256-bit keys, respectively.",2010,0, 8641,Improving fault-tolerance in MAS with dynamic proxy replicate groups,"This paper presents a technique for replicating agents in a multi-agent system (MAS) with the goal of improving the fault-tolerance of the system. Replicating agents, or forming a replicate group, will always add complexity and overhead to a system. To mitigate these effects, passive replication and a proxy agent are used to represent and manage the group. So that a new single point of failure (the proxy) is not created, the proxy is dynamic and can move to any agent in the replicate group. An implementation and experiments are presented which show that the technique is viable.",2003,0, 8642,Canceling tradeoff between phase noise and phase error in parallel coupled quadrature oscillators,"This paper presents an analytic approach for the estimation of the phase imbalances due to mismatch between the two LC tanks of the oscillators. We derive equations with different coupling factors for the two coupled LC oscillators for more generality. We show that choosing appropriate inversely proportional coupling factors makes it possible to have exactly zero phase error. Doing so has no impact on phase noise; indeed, the tradeoff between phase noise and phase error in coupled quadrature oscillators is broken here. The theoretical results and the proposed quadrature oscillator are evaluated and confirmed through simulations using the TSMC 0.18-μm technology model on a 5-GHz quadrature oscillator with a 1.8 V supply voltage.",2010,0, 8643,Evaluation of Software-Implemented Fault-Tolerance (SIFT) Approach in Gracefully Degradable Multi-Computer Systems,"This paper presents an analytical method for evaluating the reliability improvement for any size of multi-computer system based on Software-Implemented Fault-Tolerance (SIFT). The method is based on the equivalent failure rate Γ, the single node failure rate λ, the number of nodes in the system, N, the repair rate μ, the fault coverage factor c, the reconfiguration rate δ, and the percentages of blocking faults b1 and b2. The impact of these parameters on the reliability improvement has been evaluated for a gracefully degradable multi-computer system using our proposed analytical technique based on Markov chains. To validate our approach, we used the SIFT method which implements error detection at the node level, combined with a fast reconfiguration algorithm for avoiding faulty nodes. It is worth noting that the proposed method is applicable to any multi-computer system topology. The evaluation work presented in this paper focuses on the combination of analytical and experimental approaches, and more precisely on Markov chains. The SIFT method has been successfully implemented for a multi-computer system, nCube. The time overhead (reconfiguration & recomputation time) incurred by the injected fault, and the fault coverage factor c, are experimentally evaluated by means of a parallel version of the Software Object-Oriented Fault-Injection Tool (nSOFIT).
The implemented SIFT approach can be used for real-time applications, where time constraints must be met despite failures in the gracefully degradable multi-computer system.",2006,0, 8644,A system level approach in designing dual-duplex fault tolerant embedded systems,"This paper presents an approach for designing embedded systems able to tolerate hardware faults, defined as an evolution of our previous work proposing a hardware/software co-design framework for realizing reliable embedded systems. The framework is extended to support the designer in achieving embedded systems with fault tolerant properties while minimizing overheads and limiting power consumption. A reference system architecture is proposed; the specific hardware/software implementation and reliability methodologies (to achieve the fault tolerance properties) are the result of an enhanced hw/sw partitioning process driven by the designer's constraints and by the reliability constraints, set at the beginning of the design process. By also introducing the reliability constraints during specification, the final system can benefit from the introduced redundancy for performance gains as well, while limiting area, time, performance and power consumption overheads.",2002,0, 8645,Fault Tolerant Control of a Civil Aircraft Using a Sliding Mode Based Scheme,"This paper presents a sliding mode control scheme for reconfigurable control of a civil aircraft. The controller is based around a state-feedback sliding mode scheme where the nonlinear unit vector term is allowed to adaptively increase when the onset of a fault is detected. Compared to other fault tolerant controllers which have been implemented on this model, the controller proposed here is relatively simple and yet is shown to work across the entire `up and away' flight envelope. Unexpected deviation of the switching variables from their nominal condition triggers the adaptation mechanism.",2005,0, 8646,Mitigation of Multipath Influence on Tracking Errors in LEO Navigation Applications,"This paper presents a solution to enhance the navigation of LEO satellites by mitigating the multipath influence. In several cases, navigation of LEO satellites needs very high accuracy in positioning, while GPS receivers suffer from a lack of accuracy. As multipath is a major source of inaccuracy, we propose to use a neural network (NN) technique in the range domain in order to estimate and compensate for phase and code (delay) tracking errors. In this paper we present a rich discussion of the NN application in the range domain and the major factors influencing its performance.",2006,0, 8647,A Self-Controlled Power Factor Correction Single-Phase Boost Pre-Regulator,"This paper presents a strategy for controlling the input current of a single-phase boost PFC (power factor corrector). A sample of the input voltage is not necessary since it is naturally used as the reference current. Besides this, the model includes few simplifications and is therefore more complete, taking better advantage of the natural characteristics of the converter and obtaining results similar to classic control by simply using a proportional compensator.
Some of the advantages of this strategy include greater robustness and simplicity, less susceptibility to noise and a smoother turn-on characteristic.",2005,0, 8648,Mechanical fault detection in induction motors,"This paper presents a study to detect mechanical irregularities in low voltage random wound induction motors by means of stator current monitoring and spectrum analysis. An analysis of the MMF and permeance functions to classify frequency components of the stator field in both the stator and rotor reference frames is presented. In addition, a test rig designed to introduce different degrees of static eccentricity in the motor with new movable bearing housings is described in detail. Experimental tests prove the theoretical analysis discussed and significant results are presented.",2003,0, 8649,Phighting the Phisher: Using Web Bugs and Honeytokens to Investigate the Source of Phishing Attacks,"This paper presents a summary of research findings for a new reactive phishing investigative technique using Web bugs and honeytokens. Phishing has become a rampant problem in today's society and has cost financial institutions millions of dollars per year. Today's reactive techniques against phishing usually involve methods that simply minimize the damage rather than attempting to actually track down a phisher. Our research objective is to track down a phisher to the IP address of the phisher's workstation rather than innocent machines used as intermediaries. By using Web bugs and honeytokens on the fake Web site forms the phisher presents, one can log accesses to the honeytokens by the phisher when the attacker views the results of the forms. Research results to date are presented in this paper.",2007,0, 8650,The Probabilistic Program Dependence Graph and Its Application to Fault Diagnosis,"This paper presents an innovative model of a program's internal behavior over a set of test inputs, called the probabilistic program dependence graph (PPDG), which facilitates probabilistic analysis and reasoning about uncertain program behavior, particularly that associated with faults. The PPDG construction augments the structural dependences represented by a program dependence graph with estimates of statistical dependences between node states, which are computed from the test set. The PPDG is based on the established framework of probabilistic graphical models, which are used widely in a variety of applications. This paper presents algorithms for constructing PPDGs and applying them to fault diagnosis. The paper also presents preliminary evidence indicating that a PPDG-based fault localization technique compares favorably with existing techniques. The paper also presents evidence indicating that PPDGs can be useful for fault comprehension.",2010,0, 8651,Power Transformer Fault Classification Based on Dissolved Gas Analysis by Implementing Bootstrap and Genetic Programming,"This paper presents an intelligent fault classification approach to power transformer dissolved gas analysis (DGA), dealing with highly versatile or noise-corrupted data. Bootstrap and genetic programming (GP) are implemented to improve the interpretation accuracy for DGA of power transformers. Bootstrap preprocessing is utilized to approximately equalize the sample numbers for different fault classes to improve subsequent fault classification with GP feature extraction. GP is applied to establish classification features for each class based on the collected gas data.
The features extracted with GP are then used as the inputs to artificial neural network (ANN), support vector machine (SVM) and K-nearest neighbor (KNN) classifiers for fault classification. The classification accuracies of the combined GP-ANN, GP-SVM, and GP-KNN classifiers are compared with the ones derived from ANN, SVM, and KNN classifiers, respectively. The test results indicate that the developed preprocessing approach can significantly improve the diagnosis accuracies for power transformer fault classification.",2009,0, 8652,Design and development of a software for fault diagnosis in radial distribution networks,"This paper presents an on-line fault diagnosis software for primary distribution feeders. The software is written in the DELPHI and C++ languages and its interaction with the operator is made in a very friendly environment. The input data are the currents of the feeder per phase, monitored only in the substation. An artificial immune system was developed using the negative selection algorithm to detect and classify the faults. The fault location was identified by a genetic algorithm which is triggered by the negative selection algorithm. The main application of the software is to assist in the operation during a fault, and to supervise the protection system. A 103-bus non-transposed real feeder is used to evaluate the proposed software. The results show that the software is effective for diagnosing all types of faults involving short-circuits and it has great potential for online applications.",2010,0, 8653,Vibration-based fault diagnostic platform for rotary machines,"This paper provides a vibration-based diagnostic platform to systematically monitor and diagnose rotary machine faults. Common rotary machine faults described in this paper are misalignment fault, bearing cage defect, ball bearing defect, bearing outer race fault and inner race fault. The use of structural resonance frequency, ISO 10816 for vibration level assessment, and spectrum assessment for misalignment and bearing faults has been detailed. Besides fault diagnosis, repair action has been included to recommend different maintenance plans according to the faulty conditions. These methods form the basis of a knowledge-based system for diagnosis. The results have been successfully tested on a Hitachi Seiki high speed milling machine. The developed diagnosis platform minimizes the need for human intervention in rotary machine performance monitoring and degradation detection.",2010,0, 8654,Flight technical error analysis of the SATS higher volume operations simulation and flight experiments,"This paper provides an analysis of flight technical error (FTE) from recent SATS experiments, called the higher volume operations (HVO) simulation and flight experiments, which NASA conducted to determine pilot acceptability of the HVO concept for normal operating conditions. Reported are FTE results from simulation and flight experiment data indicating the SATS HVO concept is viable and acceptable to low-time instrument rated pilots when compared with today's system (Baseline). Described is the comparative FTE analysis of lateral, vertical, and airspeed deviations from the Baseline and SATS HVO experimental flight procedures. Based on FTE analysis, all evaluation subjects, low-time instrument-rated pilots, flew the HVO procedures safely and proficiently in comparison to today's system (Baseline).
In all cases, the results of the flight experiment validated the results of the simulation experiment and confirmed the utility of the simulation platform for comparative human-in-the-loop (HITL) studies of SATS HVO and Baseline operations.",2005,0, 8655,Development of ANN-based virtual fault detector for Wheatstone bridge-oriented transducers,This paper reports on the development of a new artificial neural network-based virtual fault detector (VFD) for detection and identification of faults in DAS-connected Wheatstone bridge-oriented transducers of a computer-based measurement system. Experimental results show that the implemented VFD is convenient for fusing intelligence into such systems in a user-interactive manner. The performance of the proposed VFD is examined experimentally to detect seven frequently occurring faults automatically in such transducers. The presented technique used an artificial neural network-based two-class pattern classification network with hard-limit perceptrons to fulfill the function of an efficient residual generator component of the proposed VFD. The proposed soft residual generator detects and identifies various transducer faults in collaboration with a virtual instrument software-based inbuilt algorithm. An example application is also presented to demonstrate the practical use of the implemented VFD for detecting and diagnosing faults in a pressure transducer having semiconductor strain gauges connected in a Wheatstone bridge configuration. The results obtained in the example application with this strategy are promising.,2005,0, 8656,Self-assembling circuits with autonomous fault handling,"This paper reports on the results of our recent NASA SBIR contract, ""Autonomous Self-Repairing Circuits,"" in which we developed a novel approach to fault-tolerant circuit synthesis utilizing a self-configurable hardware platform. The approach was based on the use of atomic components called Supercells. These Supercells perform several functions in the building of a desired target circuit: fault detection, fault isolation, configuration of new Supercells, determination of inter-cell wiring paths, and implementation of the final target circuit. By placing these tasks under the control of the Supercells themselves, the resulting system requires minimal external intervention. In particular, for a given target circuit, a fixed configuration string can be used to configure the system, regardless of the location of faults in the underlying hardware. This is because the configuration string does not directly implement the final circuit. Rather, it implements a self-organizing system, and that system then dynamically implements the desired target circuit.",2002,0, 8657,Macroblock-based retransmission for error-resilient video streaming,"This paper revisits the problem of source-channel coding for error-resilient video streaming. We propose a new method to enable adaptive redundancy in the bitstream: fine-grain retransmission. Redundancy decisions are made per macroblock (MB), which are locally adaptive and of low overhead, as opposed to coarse packet-level redundancy (e.g. forward error correction). In this scheme, the encoder jointly optimizes the coding mode and redundancy per MB. A corresponding algorithm is presented for exploiting this redundancy at the decoder. The proposed method is general in nature, and can be implemented on top of any (hybrid) video codec.
An example implementation is provided, which uses the redundant slice mechanism of H.264 (JM 13.2 reference software). Simulation results show significant performance gains over conventional error-resilient coding techniques.",2008,0, 8658,Hybrid Method to Assess Sensitive Process Interruption Costs Due to Faults in Electric Power Distribution Networks,"This paper shows a new hybrid method for risk assessment regarding interruptions in sensitive processes due to faults in electric power distribution systems. This method determines indices related to long duration interruptions and short duration voltage variations (SDVV), such as voltage sags and swells, for each customer supplied by the distribution network. The frequency of such occurrences and their impact on customer processes are determined for each bus and classified according to their corresponding magnitude and duration. The method is based on information regarding network configuration, system parameters and protective devices. It randomly generates a number of fault scenarios in order to assess risk areas regarding long duration interruptions and voltage sags and swells in an especially inventive way, including the frequency of events according to their magnitude and duration. Based on sensitivity curves, the method determines frequency indices regarding disruption in customer processes that represent equipment malfunction and possible process interruptions due to voltage sags and swells. Such an approach allows for the assessment of the annual costs associated with each one of the evaluated power quality indices.",2010,0, 8659,Structuring integrated Web applications for fault tolerance,"This paper shows how modern structuring techniques can be employed in integrating complex web applications such as travel agency systems. The main challenges the developers of such systems face are dealing with legacy web services and incorporating means for tolerating errors. Because of the very nature of such systems, exception handling is the main recovery technique to be applied in their development. We employ coordinated atomic actions to allow disciplined handling of such abnormal situations by recursively structuring the integrated system and by associating handlers with such actions. We use protective wrappers in such a way that each operation on legacy components is transformed into an atomic action with a well-defined interface. To accommodate a combined use of several ready-made environments (such as communication packages, services and run-time supports), we employ multilevel exception handling. We believe that these techniques are generally applicable both for structuring integrated web applications and for providing their fault tolerance.",2003,0, 8660,Error-free simplification of transparent Mamdani systems,"This paper shows that the combinatorial complexity of fuzzy systems is at least in part caused by redundancy in these systems, and presents an algorithm and its implementation for the detection and removal of such redundancy for a special class of Mamdani systems.
Performance of the simplification algorithm is demonstrated with uniformly impressive results on acknowledged benchmarks coming from different areas of engineering - truck backer-upper control, Mackey-Glass time series prediction and iris data classification.",2008,0, 8661,Combinational fault diagnosis in a monitored environment by a wireless sensor network,"This paper studies a combinational algorithm of limit-trend checking, a plausibility test and a model-based method to attain secure fault diagnosis in a wireless sensor network. It has been implemented based on a new theoretical identification method. The sensor nodes of the network have been distributed inside an intelligent container to monitor environmental parameters (temperature and relative humidity). It employs measured parameters, residuals and a developed model of the environment to introduce a topology applicable in several fault diagnosis applications.",2009,0, 8662,On the effect of word error rate on automated quality monitoring,"This paper studies the effect of word-error-rate (WER) on an automated quality monitoring application for call centers. The system consists of a speech recognition module and a call ranking module. The call ranking module combines direct question answering with a maximum-entropy classifier to automatically monitor the calls that enter a call center, and label them as ""good"" or ""bad"". We find that, in the monitoring regime where only a small fraction of the calls are monitored, we achieve 80% precision and 50% recall in classifying whether a call belongs to the bottom 20%. Additionally, the correlation between human and computer-generated scores turns out to be highly sensitive to word error rate.",2006,0, 8663,"Multimode WSN: Improving Robustness, Fault Tolerance and Performance of Randomly Deployed Wireless Sensor Network","This paper proposes an advanced, robust and flexible solution that applies the (revised) concept of Always Best Connected (ABC) Network, typical of modern multimode mobile devices, to Wireless Sensor Networks. Hostile environments and unpredictable conditions (e.g. interferences) can negatively affect communication range, potentially increasing the number of unconnected nodes in random deployments. The Multimode Wireless Sensor Network (MM-WSN) is provided with an adaptive mechanism for environmental condition evaluation and with the ability to configure itself for optimal networking depending on the detected conditions. The proposed solution is based on advanced smart nodes provided with multiple resources in terms of computation, storage, communication and sensing that, with the support of the proposed framework, allow complex mechanisms for fault detection/tolerance and for increasing network robustness and performance. Several research issues are addressed by the paper.",2010,0, 8664,Differential power analysis and differential fault attack resistant AES algorithm and its VLSI implementation,"This paper proposes an AES algorithm secured against both differential power analysis and differential fault analysis, together with its hardware implementation. This new algorithm emphasizes defending hardware against the two kinds of side-channel attack simultaneously. Since the modified AES algorithm is much more complex than the original one, this paper exploits a low hardware cost architecture to realize it. Furthermore, a pipelined structure is adopted to achieve high throughput.
Simulations show that this architecture can protect hardware against both differential power analysis and differential fault attack. Synthesis results demonstrate that this design achieves adequately high data throughput with low hardware cost.",2008,0, 8665,Dynamic strength scaling for delay fault propagation in nanometer technologies,"This paper proposes an algorithm for the detection of resistive delay faults in deep submicron technology using dynamic strength scaling, which is applicable for 45 nm and below. The approach uses an advanced coding system to build logical functions that are sensitive to strength and able to detect even the slightest voltage changes in the circuit. Such changes are caused by interconnection resistive behavior and result in timing-related defects.",2009,0, 8666,Development of a data compression index for discrimination between transformer internal faults and inrush currents,"This paper proposes an algorithm for transformer differential protection. The paper presents the development of an index based on the discrete wavelet transform to discriminate between internal faults and inrush currents. The index is the retained energy in percentage. The proposed technique consists of decomposition of differential current signals up to a specific level and compression by level thresholding. Various simulations and analyses have been performed to determine the best compression ratio for distinguishing between transformer inrush and fault currents. The performance of the index is demonstrated by simulation of different faults and switching conditions on a modeled power system. The obtained results show that the proposed scheme is robust and reliable.",2009,0, 8667,Soft-error induced system-failure rate analysis in an SoC,"This paper proposes an analytical method to assess the soft-error rate (SER) in the early stages of a System-on-Chip (SoC) platform-based design methodology. The proposed method takes an executable UML model of the SoC and the raw soft-error rate of different parts of the platform as its inputs. Soft errors on the design are modelled by disturbances on the value of attributes in the classes of the UML model and disturbances on opcodes of software cores. The resulting Architectural Vulnerability Factor (AVF) and the raw soft-error rate of the components in the platform are used to compute the SER of the cores. Furthermore, the SER and the severity of error in each core in the SoC are used to compute the System-Failure Rate (SFR) of the SoC.",2007,0, 8668,Intelligent Supervision and Integrated Fault Detection and Diagnosis for Subsea Control Systems,"This paper proposes an artificial intelligence based framework for integrated fault detection and diagnosis to support supervision and decision making applicable to subsea control systems. Instrumentation, electrical, electronic, hydraulic and communication subsystems are considered. This approach may contribute to minimizing well shut-down and production losses due to unexpected faults on any subsea control system component or subsystem.
It may also contribute to achieving incipient fault detection and appropriate fault identification to support and improve troubleshooting, decision making and maintenance tasks (preventive maintenance).",2007,0, 8669,A power quality monitoring system for real-time fault detection,"This paper proposes an embedded system applied to power quality monitoring that captures waveforms of fault signals on single-phase or 3-phase systems in real time and allows users to control and receive data from the remote module via Ethernet networks using the TCP/IP protocol. The power quality monitoring system stores fault data in CSV (comma separated value) format on an SD card, which is easy to analyze later with a spreadsheet program. The monitoring system uses the energy measurement integrated circuit (IC) ADE7758 with integrated power quality monitoring features, an AL440B FIFO (first-in first-out) memory, an LPC2368 microcontroller and an ADUC7024 microcontroller. The monitoring system can detect sags, swells and interruptions in power lines and stores fault data when fault signals are detected.",2009,0, 8670,Analytical Evaluation of Timing Offset Error in OFDM System,This paper proposes an idea for developing a model for estimating the timing offset error of an OFDM (Orthogonal Frequency Division Multiplexing) system without the use of additional pilots by relying on inherent characteristics of the OFDM signal. An analytical expression has been derived and formulated to analyze the effect on bit error rate (BER) of the timing offset introduced by the transmission channel. This will help in determining the exact length of cyclic prefix to be added to each OFDM symbol to avoid misinterpretation by the receiver. The performance has also been evaluated under coded (convolution) and uncoded systems. The introduction of channel coding decreases this basic impairment of OFDM systems significantly. Simulated results show that the symbol error rate (SER) linearly depends on timing offset. It is expected that further work on the proposed estimation method will lead to a standard model so that at the receiving end the effect of timing offset can be eliminated totally.,2010,0, 8671,Fault-section estimation in power systems based on improved optimization model and binary particle swarm optimization,"This paper proposes an improved model which takes the failure of protective relays (PRs) or circuit breakers (CBs) into account, and classifies different information according to its importance. A weighted contribution factor is introduced in the objective function, which aims to solve two problems: the influence of PR and CB failures, and the information importance factor. Binary Particle Swarm Optimization (BPSO) is employed to solve the fault-section estimation (FSE) optimization problems. In order to measure the efficiency of BPSO and make comparisons, a Genetic Algorithm (GA) is also employed. Software codes have been developed to implement the algorithms. Numerical studies reveal that BPSO is superior to GA in convergence speed and estimation results. The proposed method based on the new model and BPSO is rational and practical and the diagnosis results are more accurate.",2009,0, 8672,Fault Tolerant Signal Processing for Masking Transient Errors in VLSI Signal Processors,"This paper proposes fault tolerant signal processing strategies for achieving reliable performance in VLSI signal processors that are prone to transient errors due to increasingly smaller feature dimensions and supply voltages.
The proposed methods are based on residue number system (RNS) coding, involving either hardware redundancy or multiple execution redundancy (MER) strategies designed to identify and overcome transient errors. RNS techniques provide powerful low-redundancy fault tolerance properties that must be introduced at VLSI design levels, whereas MER strategies generally require higher degrees of redundancy that can be introduced at software programming levels.",2007,0, 8673,Effects of microstructural defects in multilayer LTCC stripline,"This paper proposes novel stripline models including embedded pores and sharpened conductor edges, which are commonly introduced during the multilayer low-temperature cofired-ceramic (LTCC) process. This model enables designers to obtain the conductivity and tan δ of the stripline, which are difficult to obtain using experimental methods, at arbitrary frequencies. This paper confirms that the proposed models are appropriate for LTCC striplines by comparing the simulated results with the experimental results. We found that embedded pores contributed to an increase in unloaded quality factor (Qu) and characteristic impedance in the range of 5% to 6%, while the effective εr decreased in the range of 11%. Sharpened edges contributed to a maximum peak in Qu and decreased characteristic impedance. These models will contribute to precision design of future LTCC striplines.",2006,0, 8674,SW-HW Co-design and Fault Tolerant Implementation for the LRID Wireless Communication System,"This paper presents the development of a wireless communication system, the RF identification tag, built and tested at Heriot-Watt University, Edinburgh. The design flow commences in SPIN, a high level model-checking tool at present deployed towards the verification of safety critical software designs, including NASA missions. The formally verified model of the application is then enhanced with software based monitoring architectures comparable with those applied in conventional firmware development, such as the watchdog timer defending rational control related execution of the high level system representation. Following automated synthesis into hardware (HDL) with the aid of an ESL method, the generated RTL design can be further protected against increased levels of radiation and SEUs with the aid of the xTMR tool. It is claimed that a development route of this type promotes high levels of algorithmic testability and reliability, attained via fault prevention means in the model checking process as well as multi-layered run-time monitoring and fault management strategies leveraging upon the design in the vertical implementation phase. The application developed in the proposed lifecycle and targeting the FPGA technology is finally tested under a lab emulated EMI scheme and system survivability is examined and quantified. Reliability is then estimated and analyzed in the CASRE tool (developed by JPL NASA).",2006,0, 8675,Discrete wavelet transform and probabilistic neural network based algorithm for classification of fault on transmission systems,"This paper presents the development of an algorithm based on the discrete wavelet transform (DWT) and a probabilistic neural network (PNN) for classifying power system faults. The proposed technique consists of a preprocessing unit based on the discrete wavelet transform in combination with the PNN. The DWT acts as an extractor of distinctive features from the input current signal, which is collected at the source end. The information is then fed into the PNN for classifying the faults.
It can be used for off-line processing using the data stored in the digital recording apparatus. Extensive simulation studies carried out using MATLAB show that the proposed algorithm not only provides an acceptable degree of accuracy in fault classification under different fault conditions but is also a reliable, fast and computationally efficient tool.",2008,0, 8676,Image-position Technology of the Digital Circuit Fault Diagnosis Based on Lab Windows/CVI,"This paper presents the different methods and processes for the implementation of the image-positioning technique, by which the user can locate the probe quickly and accurately during the process of circuit-fault diagnosis. The article introduces the fault diagnosis program that can retrieve information from the fault dictionary, presents the image-position technique for BMP and PCB, and describes the significant functions and controls which can be used to locate the probe accurately. During the test, the real circuit graph guides the user and points out the fault location, thus raising the efficiency of fault removal.",2008,0,5141 8677,Adaptive multiple fault detection and alarm processing with probabilistic network,"This paper presents fault detection and alarm processing with a fault detection system (FDS). The FDS consists of an adaptive architecture with a probabilistic neural network (PNN). Training the PNN uses the primary/back-up information of protective devices to create the training sets. However, when the network topology changes, adaptation capability becomes important in neural network applications. The PNN can be retrained and estimated effectively. With a looped system, computer simulations were conducted to show the effectiveness of the proposed system, and the PNNs adapt to network topology changes.",2004,0, 8678,Initial Experiences with a New FPGA Based Traveling Wave Fault Recorder Installed on a MV Distribution Network,"This paper presents the initial results obtained from a newly developed FPGA based traveling wave fault recorder installed on a medium voltage (MV) distribution line. The recorder is capable of recording six input signals, simultaneously sampling at 40 mega samples per second (MSPS) and at 14 bit resolution. It uses high bandwidth 17 MHz Rogowski coils connected to the secondary of a current transformer inside the substation to acquire the high frequency traveling wave components. Initial results during the testing phase show that the recorder is capable of recording high fidelity signals relating to switching events. It has also highlighted that the distribution line is subject to many other transient phenomena in addition to faults and switching events which must be taken into consideration when choosing a suitable triggering mechanism.",2008,0, 8679,MATLAB/PSB based modeling and simulation of 25 kV AC railway traction system - a particular reference to loading and fault conditions,"This paper presents the modeling and simulation of a 25 kV 50 Hz AC traction system using the power system blockset (PSB) / SIMULINK software package. The three-phase system with substations, track section with rectifier-fed DC locomotives and a detailed traction load are included in the model. The model has been used to study the effect of loading and fault conditions in 25 kV AC traction. The relay characteristic proposed is a combination of two quadrilaterals in the X-R plane.
A brief summary of the hardware set-up used to implement and test the relay characteristic using a Texas Instruments TMS320C50 digital signal processor (DSP) has also been presented.",2004,0, 8680,Real-Time Implementation of Fault Detection in Wireless Sensor Networks Using Neural Networks,"This paper presents the real-time implementation of neural network-based fault detection for wireless sensor networks (WSNs). The method is implemented on the TinyOS operating system. A collection tree network is formed and multi-hopping data is sent to the base station root. Nodes take environmental measurements every N seconds while neighboring nodes overhear the measurement as it is being forwarded to the base station and record it. After nodes complete M measurements and receive/store M measurements from each neighboring node, recurrent neural networks (RNNs) are used to model the sensor node, the node's dynamics, and interconnections with neighboring nodes. The physical measurement is compared against the predicted value and a given threshold of error to determine a sensor fault. By simply overhearing network traffic, this implementation uses no extra bandwidth or radio broadcast power. The only cost of the approach is the battery power required to power the receiver to overhear packets and the MCU processor time to train the RNN.",2008,0, 8681,Stator-Interlaminar-Fault Detection Using an External-Flux-Density Sensor,"This paper presents the results of a research work devoted to the detection of short circuits between stator laminations of an electrical machine using the external magnetic field. The theoretical developments make it possible to show the influence of various phenomena on this magnetic field in a wide frequency range. It is shown that surface currents due to burrs at the external surface of the machine have an important contribution compared to the increase of eddy currents in the short-circuit volume. Finally, experimental measurements confirm this theory, and an online method of detection for large generators is proposed.",2010,0, 8682,An exploratory study of fault-proneness in evolving aspect-oriented programs,"This paper presents the results of an exploratory study on the fault-proneness of aspect-oriented programs. We analysed the faults collected from three evolving aspect-oriented systems, all from different application domains. The analysis develops from two different angles. Firstly, we measured the impact of the obliviousness property on the fault-proneness of the evaluated systems. The results show that 40% of reported faults were due to the lack of awareness among base code and aspects. The second analysis regarded the fault-proneness of the main aspect-oriented programming (AOP) mechanisms, namely pointcuts, advices and intertype declarations. The results indicate that these mechanisms present similar fault-proneness when we consider both the overall system and concern-specific implementations. Our findings are reinforced by means of statistical tests. In general, this result contradicts the common intuition stating that the use of pointcut languages is the main source of faults in AOP.",2010,0, 8683,Thermal behaviour of a three-phase induction motor fed by a fault tolerant voltage source inverter,"This paper presents the results of an investigation regarding the thermal behaviour of a three-phase induction motor, when supplied by a reconfigured three-phase voltage source inverter with fault tolerant capabilities.
For this purpose, a fault tolerant operating strategy based on the connection of the faulty inverter leg to the DC link middle point was considered. The experimentally obtained results show that, as far as the motor thermal characteristics are concerned, it is not necessary to reinforce the motor insulation properties, since the motor is already prepared for such operation.",2005,0, 8684,Modeling and Customization of Fault Tolerant Architecture using Object-Z/XVCL,"This paper proposes a novel heterogeneous software architecture, FTA (fault tolerant architecture). FTA incorporates the idealized fault tolerant component concept and a coordinated error recovery mechanism in the early system design phase. It can be reused in the high-level model design of specific mission-critical distributed systems with reliability requirements. The formal model of FTA in the Object-Z language is presented to provide precise idioms to system designers. Formal proofs using the Object-Z reasoning rules are constructed to demonstrate the fault tolerant properties of FTA. By analyzing the customization process, we also present an FTA template, expressed in x-frames using the XVCL (XML-based variant configuration language) methodology, to automate the customization process. We apply a sales control system case study to illustrate the customization of FTA.",2006,0, 8685,Multi-Stage Frame Error Concealment Algorithm for H.264/AVC Based on Estimated MB Feature,"This paper proposes a novel multi-stage frame error concealment (EC) algorithm for H.264/AVC based on estimated MB modes and MVs. The proposed method first divides the lost frame into three regions according to the MB features in the previous frame: a regular motion region within the object and background, an irregular motion region mainly including border areas between the various parts of the moving object and the background, and an intra-coded MB region. We then recover the three regions sequentially by utilizing suitable schemes: a motion vector copy method for the regular motion region, an improved optical-flow-based method for the irregular motion region, and a bilinear interpolation (BI) or directional interpolation (DI) method for the intra-coded MB region. Experimental results show that our proposed adaptive algorithm can effectively utilize the advantages of classic methods and obtain better recovery performance in both subjective and objective video quality at low computational complexity, and our method effectively prevents error propagation in the following frames and thus improves the reconstruction quality of the entire video sequence.",2009,0, 8686,Dynamic Derivation of Application-Specific Error Detectors and their Implementation in Hardware,"This paper proposes a novel technique for preventing a wide range of data errors from corrupting the execution of applications. The proposed technique enables automated derivation of fine-grained, application-specific error detectors. An algorithm based on dynamic traces of application execution is developed for extracting the set of error detector classes, parameters, and locations in order to maximize the error detection coverage for a target application. The paper also presents an automatic framework for synthesizing the set of detectors in hardware to enable low-overhead runtime checking of the application execution.
Coverage (evaluated using fault injection) of the error detectors derived using the proposed methodology, the additional hardware resources needed, and the performance overhead for several benchmark programs are also reported.",2006,0, 8687,Compact Delay Test Generation with a Realistic Low Cost Fault Coverage Metric,"This paper proposes a realistic low cost fault coverage metric targeting both global and local delay faults. It suggests the test strategy of generating a different number of the longest paths for each line in the circuit while maintaining high fault coverage. This metric has been integrated into the CodGen ATPG tool. Experimental results show significant reductions in test generation time and vector count on ISCAS89 and industry designs.",2009,0, 8688,A real-time computer vision system for detecting defects in textile fabrics,"This paper proposes a real-time computer vision system for detecting defects in textile fabrics. The development of both the hardware and software platforms is presented. The design of the prototyped defect detection system ensures that the fabric moves smoothly and evenly so that high quality images can be captured. The paper also proposes a new filter selection method to detect fabric defects, which can automatically tune the Gabor functions to match the texture information. The filter selection method is further developed into a new defect segmentation algorithm. The scheme is tested both on-line and off-line by using a variety of homogeneous textile images with different defects. The results exhibit accurate defect detection with few false alarms, thus confirming the robustness and effectiveness of the proposed system.",2005,0, 8689,An expression's single fault model and the testing methods,"This paper proposes a single fault model for faults in expressions, including operator faults (operator reference faults, where an operator is replaced by another, and extra or missing operators for a single operand), incorrect variables or constants, and incorrect parentheses. These types of faults often exist in software, but some fault classes are hard to detect using traditional testing methods. A general testing method is proposed to detect these types of faults. Furthermore, a fault simulation method is presented which can accelerate the generation of test cases and greatly reduce the testing cost. Our empirical results indicate that our methods require a smaller number of test cases than random testing, while retaining fault-detection capabilities that are as good as, or better than, the traditional testing methods.",2003,0, 8690,Health-status-based condition detection and fault diagnosis system of hydroelectricity production equipment,"This paper proposes a system for health-status-based condition detection and fault diagnosis of hydroelectricity production equipment. The proposed system has a rational architecture and comprehensive functional modules, and aims at improving the health status of hydroelectricity production equipment and cutting down the cost of maintenance. The technical and managing systems, which are critical to the proper operation and full utilization of the proposed system but usually ignored by management personnel, are particularly included in this paper, in addition to the study on the condition detection and fault diagnosis of hydroelectricity production equipment.
The proposed system is practically useful and has promising application prospects, as proven by the customer.",2010,0, 8691,Minimal cut set/sequence generation for dynamic fault trees,"This paper proposes a zero-suppressed binary decision diagram (ZBDD) based solution for minimal cut set/sequence (MCS) generation for dynamic fault trees. The ZBDD is an efficient data structure for combinational set representation and manipulation. Our solution is based on the basic ZBDD set manipulations (union, intersection, difference and product). Due to the nature of the ZBDD, our algorithm is more efficient than algorithms based on the BDD, in both computation time and memory usage. In our solution, we also extend the concept of the minimal cut set in static fault trees to the minimal cut sequence (also with notation MCS) in dynamic fault trees. The minimal cut sequence generation is based on minimal cut set generation. It is also efficient compared with Markov model based methods. As an example, we apply our method to the X2000 avionics architecture. The system is modeled using a dynamic fault tree and the minimal cut sets/sequences are generated and analyzed.",2004,0, 8692,Recent Developments in Fault Detection and Power Loss Estimation of Electrolytic Capacitors,"This paper proposes a comparative study of current-controlled hysteresis and pulsewidth modulation (PWM) techniques, and their influence upon power loss dissipation in power-factor controller (PFC) output filtering capacitors. First, theoretical calculation of the low-frequency and high-frequency components of the capacitor current is presented in the two cases, as well as the total harmonic distortion of the source current. Second, we prove that the methods already used to determine the capacitor power losses are not accurate because of the capacitor model chosen. In fact, a new electric equivalent scheme of electrolytic capacitors is determined using genetic algorithms. This model, characterized by frequency-independent parameters, reproduces the capacitor behavior with accuracy over large frequency and temperature ranges. The new capacitor model is then integrated into the converter, and software simulation is carried out to determine the power losses for both control techniques. Due to this model, the equivalent series resistance (ESR) increase at high frequencies due to the skin effect is taken into account. Finally, for hysteresis and PWM controls, we suggest a method to determine the value of the series resistance and the remaining time to failure, based on measurement of the output ripple voltage during steady-state and transient-state converter operation.",2010,0, 8693,Inverter Nonlinearity Compensation in the Presence of Current Measurement Errors and Switching Device Parameter Uncertainties,"This paper proposes a compensation strategy for the unwanted disturbance voltage due to inverter nonlinearity, employing an emerging learning technique called support vector regression (SVR). SVR constructs the motor dynamic voltage model as a linear combination of current samples in real time, which exhibits fast observer dynamics and robustness to observation noise. The disturbance voltage is then estimated by subtracting the constructed voltage model from the current controller output. The proposed method compensates all of the inverter nonlinearity factors at the same time, and all the processes in estimating the distortions are independent of the dead time and power device parameters.
From the analysis of the effect of current measurement errors, it is confirmed that the sampling error has little negative impact on the proposed estimation method. Experiments demonstrate the superiority of the proposed method in suppressing voltage distortions caused by inverter nonlinearity.",2006,0, 8694,A Generalized Fault Coverage Model for Linear Time-Invariant Systems,"This paper proposes a fault coverage model for linear time-invariant (LTI) systems subject to uncertain input. A state-space representation, defined by the state-transition matrix and the input matrix, is used to represent LTI system dynamic behavior. The uncertain input is considered to be unknown but bounded, where the bound is defined by an ellipsoid. The state-transition matrix and the input matrix must be such that, for any possible input, the system dynamics meets its intended function, which can be defined by some performance requirements. These performance requirements constrain the system trajectories to some region of the state-space defined by a symmetrical polytope. When a fault occurs, the state-transition matrix and the input matrix might be altered; it is then guaranteed that the system survives the fault if all possible post-fault trajectories are fully contained in the region of the state-space defined by the performance requirements. This notion of guaranteed survivability is the basis for modeling (in the context of LTI systems) the concept of fault coverage, which is a probabilistic measure of the ability of the system to keep delivering its intended function after a fault. Analytical techniques to obtain estimates of the proposed fault coverage model are presented. To illustrate the application of the proposed model, two examples are discussed.",2009,0, 8695,A property oriented fault detection approach for link state routing protocol,"This paper proposes a new approach to fault detection for a link state routing system: property oriented analysis and detection (POD). A routing system is modeled as a set of distributed processes. A property is defined as state predicate(s) over system variables. For the link state routing protocol, the high-level overall converging property P is defined as the synchronization among the routing information bases maintained by all processes. We decompose the routing protocol into different computation phases. For each phase, we use invariant state predicates (safety property) and the liveness property as our guide for observation and analysis. The goal of the detection algorithm is to construct a validation path based on the history to determine whether a fault is natural or malicious once the stable property P is rendered invalid by faults. The contribution of this paper is twofold: first, a new detection approach is proposed that differs from traditional signature-based or profile-based intrusion detection paradigms in the sense that it utilizes the stable property as a starting point, and correlates the history and future to validate changes in the system; second, by exploring the primary system properties of concern, we show that the detection effort can be conducted in a more focused and systematic fashion.",2000,0, 8696,Yield modeling and analysis of a clockless asynchronous wave pipeline with pulse faults,"This paper proposes a new fault model and its modeling and analysis methods in a clockless asynchronous wave pipeline for extensive yield evaluation and assurance.
It is highly desirable to have an adequate and specific pulse fault rate model for establishing a sound theoretical foundation for clockless wave pipeline design for reliability. The pulse fault is thoroughly identified as the fault unique to the clockless wave pipeline, in comparison with conventional wave and wave delay faults. The pulse fault rate is statistically yet practically modeled, and extensively evaluated with respect to various design parameters, such as yield, fault coverage, defect level, and request level length. An extensive numerical simulation is conducted to demonstrate the effect of the proposed pulse fault on the yield.",2003,0, 8697,Designing a Dependable and Fault-Tolerant Semiautonomous Distributed Control Data Collection Network With Opportunistic Hierarchy,"This paper presents the satellite-linked data acquisition and photogrammetry (SLiDAP) network, designed to conduct shore-based, close-range 3-D imaging in remote areas. The lack of communications and power infrastructure and of the ability to service the system requires periodic, synchronous operations of multiple semiautonomous elements with a high degree of reliability. The SLiDAP system uses an opportunistic network architecture based on four distinct levels of control to accommodate unpredictable operational constraints and failures. The synchronization of periodic tasks in a distributed control and remotely operable network is highlighted, and measures to increase the reliability of system operations are discussed, including hardware redundancy, an intelligent watchdog timer, software error tolerance, self-repair, and remote update capability. The characteristics of the SLiDAP system within the concept of autonomic computing are discussed.",2007,0, 8698,Fault detection of eccentricity and bearing damage in a PMSM by means of wavelet transforms decomposition of the stator current,"This paper presents the study of permanent magnet synchronous machines (PMSMs) running with eccentricity and bearing damage. The objective is to detect and identify the fault through current signature analysis. The stator current has been analyzed by means of both Fourier (FFT) and Discrete Wavelet (DWT) transforms. Simulations have been carried out with a two-dimensional (2-D) finite element analysis (FEA), and they have also been compared with experiments. The results prove that the proposed method can be used to identify mechanical faults in a PMSM.",2008,0, 8699,Errors estimation and minimization for the 5-axis milling machine,"This paper presents tool path optimization algorithms to compute and estimate the non-linear inverse kinematics errors of a 5-axis milling machine. The approach is based on a global approximation of the required surface by a virtual surface constructed from the tool trajectories. Errors are computed from the difference between the required surface and the virtual surface and displayed numerically and graphically through the virtual machine simulator. The simulator is based on a 3D representation and employs the inverse kinematics approach to derive the corresponding rotational and translational movements of the mechanism. The simulator makes it possible to estimate the errors of a 3D tool-path based on a prescribed set of cutter location (CL) points as well as a set of cutter contact (CC) points with tool inclination angles. Errors, particularly in the vicinity of large milling errors, are minimized using a discrete algorithm based on a shortest path strategy.
Furthermore, the simulator can be used to simulate the milling process and verify the final cut of the actual tool-path before testing with the real machine. Thus, it reduces the cost of iterative trial and error.",2002,0, 8700,Model-Based Sensor Fault Detection and Isolation System for Unmanned Ground Vehicles: Theoretical Aspects (part I),"This paper presents theoretical details of a model-based sensor fault detection and isolation system (SFDIS) applied to unmanned ground vehicles (UGVs). Structural analysis is applied to the nonlinear model of the vehicle for residual generation. Two different solutions have been proposed for developing the residual evaluation module. The vehicle sensor suite includes a global positioning system (GPS) antenna, an inertial measurement unit (IMU), and two incremental optical encoders.",2007,0, 8701,Reliable Compare&Swap for fault-tolerant synchronization,"This paper presents two Compare&Swap protocols that, with respect to omission failures, are (1) fault-tolerant and (2) gracefully degrading, respectively. It shows that fault-tolerance and graceful degradation are close but distinct concepts, and that graceful degradation is inherently more costly than fault-tolerance. These Compare&Swap protocols are derived from consensus protocols proposed by Jayanti et al. (1999).",2003,0, 8702,Scoring Models for Fault Detection in Spacecraft,"This paper presents two scoring models for the momentum wheel speed error telemetry data of a satellite. The wheel speed error data are first transformed into orbital segments which form new vectors, and then the vectors are clustered into groups where each group is represented by its center. Based on the centers, two scoring models are formed, where one uses distance measurements and the other uses an order-one Markov transition model to score newly observed data. Simulation results are presented and discussed.",2006,0, 8703,Fault diagnosis of node in wireless sensor network based on the interval-numbers rough neural network,"This paper proposes a new fault diagnosis method for nodes in WSNs based on an interval-numbers rough neural network. Firstly, the method establishes the simplest fault diagnosis decision-making table via an improved discriminate matrix; it then applies a rough decision-making analysis method to construct an interval-value information decision-making system for WSN nodes and constructs the rough neurons of the input layer; finally, it constructs the fault diagnosis system based on a three-layer feed-forward rough neural network with interval numbers. The simulation results show that this method raises the diagnostic accuracy to 99.24% while greatly reducing the computing time, and it has high practical value.",2010,0, 8704,An LMI approach to worst case analysis for fault detection observers,"This paper systematically formulates the worst-case fault sensitivity analysis problem for fault detection observers. The lowest level of a fault detector's fault sensitivity is defined as an H_ index. A full characterization of the H_ index is given in terms of matrix equalities and inequalities as a dual of the bounded real lemma. Necessary and sufficient conditions are given to find a lower bound of the H_ index, which can be calculated efficiently by linear matrix inequality (LMI) solvers. In addition, the analysis problem with respect to a finite frequency band is also solved by adding weighting filters, which is very useful for strictly proper systems.
The necessary and sufficient conditions for input observability are also given; input observability is a necessary condition for a fault detection observer to have a nonzero worst-case fault sensitivity. The effectiveness of the proposed approaches is shown by numerical examples.",2003,0, 8705,Localized fault-tolerant event boundary detection in sensor networks,"This paper targets the identification of faulty sensors and the detection of the reach of events in sensor networks with faulty sensors. Typical applications include the detection of the transportation front line of a contamination and the diagnosis of network health. We propose and analyze two novel algorithms for faulty sensor identification and fault-tolerant event boundary detection. These algorithms are purely localized and thus scale well to large sensor networks. Their computational overhead is low, since only simple numerical operations are involved. Simulation results indicate that these algorithms can clearly detect the event boundary and can identify faulty sensors with high accuracy and a low false alarm rate when as many as 20% of sensors become faulty. Our work is exploratory in that the proposed algorithms can accept any kind of scalar values as inputs, a dramatic improvement over existing works that take only 0/1 decision predicates. Therefore, our algorithms are generic. They can be applied as long as the ""events"" can be modelled by numerical values. Though designed for sensor networks, our algorithms can be applied to outlier detection and regional data analysis in spatial data mining.",2005,0, 8706,Modeling of the distance error for indoor geolocation,"This paper uses the results of calibrated ray tracing software in a sample office environment to analyze and model the distance error measured from the estimated time of arrival (TOA) of the direct line-of-sight (LOS) path in a typical indoor environment. First, we analyze the effect of bandwidth on the measured distance error using TOA, and then we propose a model for simulation of the distance error in LOS and OLOS indoor areas.",2003,0, 8707,The study of fault diagnosis in rotating machinery,"This project presents a detailed review of the subject of fault diagnosis; feature extraction, dimensionality reduction and fault classification are discussed. The project focuses on faulty bearings, which are mainly caused by mass imbalance and axis misalignment. By analyzing the vibration signals obtained from the test rigs (rigs that are built to demonstrate the effect of faults in rotating machinery), it provides solid information concerning any faults within the rotating machinery.",2009,0, 8708,A Comparison of Mandani and Sugeno Inference Systems for a Space Fault Detection Application,"This research provides a comparison between the performance of TSK (Takagi, Sugeno, Kang)-type and Mamdani-type fuzzy inference systems. The main motivation behind this research was to assess which approach provides the best performance for a gyroscope fault-detection application, developed in 2002 for the European Space Agency (ESA) satellite ENVISAT. Due to the importance of performance in online systems, we compare the application, developed with the Mamdani model, with a TSK formulation using three types of tests: processing time for both systems, robustness in the presence of randomly generated noise, and sensitivity analysis of the systems' behaviors to changes in input data.
The results show that the TSK model performs better in all three tests; hence we may conclude that replacing a Mamdani system with an equivalent TSK system could be a good option to improve the overall performance of a fuzzy inference system.",2006,0, 8709,Investigations of algorithms for bearing fault detection in induction drives,"This paper presents signal processing methods dedicated to fault detection in the mechanical part of an induction drive: bearing damage, eccentricity and rotor unbalance. An experimental bench test is described and used to create and characterise these faults. Two methods for bearing fault detection are also detailed: the cepstrum and an original method (the parcels summation method). These algorithms are tested on synthetic and real signals.",2002,0, 8710,Analysis of Air Gap Flux to Detect Induction Motor Faults,"This paper presents the application of motor flux signature analysis to the detection of stator winding failures, broken rotor bars and end ring faults in induction motors. Air gap flux analysis is a non-invasive, on-line monitoring technique to diagnose faults in three-phase induction motor drives by detecting differences in the flux spectrum. In this work a programme developed in LabVIEW to perform data acquisition and analysis is presented. Data is read in real time from the working induction motor, and the program provides three fault indicators: rotor bar failure, rotor end ring failure and stator winding turn-to-turn short circuit. With these three indicators, the failure location, state and cause can be determined. The system has been validated in the laboratory with motors with known faults, and a portable version for industrial workshop tests is being developed.",2006,0, 8711,Instrument fault detection and isolation: state of the art and new research trends,"This paper presents the current state of the art of residual generation techniques adopted in instrument fault detection and isolation. Both traditional and innovative methods are described with their advantages and their limits. The improvement of analytical redundancy technique performance, for better dealing with high-dynamics systems and/or with online applications, is pointed out as the most pressing need on which to focus research efforts.",2000,0, 8712,Spatial avoidance of hardware faults using FPGA partial reconfiguration of tile-based soft processors,"This paper presents the design of a many-core computer architecture with fault detection and recovery using partial reconfiguration of an FPGA. The FPGA fabric is partitioned into tiles which contain homogeneous soft processors. At any given time, three processors are configured in triple modular redundancy to detect faults. Spare processors are brought online to replace faulted tiles in real time. A recovery procedure involving partial reconfiguration is used to repair faulted tiles. This type of approach has the advantage of recovering from faults in both the circuit fabric and the configuration RAM of an FPGA, in addition to spatially avoiding permanently damaged regions of the chip.",2010,0, 8713,Fault Models and Injection Strategies in SystemC Specifications,"This paper presents fault models and fault injection strategies designed in a simulation platform with reflection capabilities, used for simulating complex systems specified by using SystemC and by adopting a platform-based design approach.
The approach allows the designer to work at different levels of abstraction and to take into account permanent and transient faults, and -- most importantly -- it features a transparent and dynamic mechanism for both injecting faults and analyzing the produced errors, in order to evaluate possible fault detection and/or tolerance design techniques.",2008,0, 8714,Case-base reasoning in vehicle fault diagnostics,"This paper presents our research in case-based reasoning (CBR) with application to vehicle fault diagnosis. We have developed a distributed diagnostic agent system, DDAS, that detects faults of a device based on signal analysis and machine learning. The CBR techniques presented are used to find the root cause of vehicle faults based on the information provided by the signal agents in DDAS. Two CBR methods are presented: one directly uses the diagnostic output from the signal agents and the other uses the signal segment features. We present experiments conducted on real vehicle cases collected from auto dealers, and the results show that both methods are effective in finding root causes of vehicle faults.",2003,0, 8715,Assessing the dependability of OGSA middleware by fault injection,"This paper presents our research on devising a dependability assessment method for the upcoming OGSA 3.0 middleware using network level fault injection. We compare existing DCE middleware dependability testing research with the requirements of testing OGSA middleware and derive a new method and fault model. From this we have implemented an extendable fault injector framework and undertaken some proof of concept experiments with a simulated OGSA middleware system based around Apache SOAP and Apache Tomcat. We also present results from our initial experiments, which uncovered a discrepancy with our simulated OGSA system. We finally detail future research, including plans to adapt this fault injector framework from the stateless environment of a standard Web service to the stateful environment of an OGSA service.",2003,0, 8716,Modelling reliability of flip chip on board assemblies implementing a correction function approach comparing analytical and finite element techniques,"To determine the reliability performance of electronic components, environmental tests, or accelerated life tests, are used to apply stresses to electronic packages that exceed the stress levels experienced in the field. In theory, these elevated stress levels are used to generate the same failure mechanisms that are found in the field, only at an accelerated rate. Therefore, an acceleration factor is typically used to correlate (extrapolate) the accelerated life testing data to a field failure rate for a specified use condition. Oftentimes this data is time consuming and expensive to obtain; hence a need exists for reducing the time to data for electronic components in reliability testing. A methodology is presented whereby existing reliability data can be leveraged to obtain ""correction functions"" which can be used to modify a mean time to failure, MTTF, estimated analytically or numerically. A suggested analytical model is presented in addition to the statistics based methodology that can be used to obtain correction functions. The correction function approach is similar to approaches used for modifying fatigue strengths in engineering alloys. Fatigue strengths or endurance limits are modified to account for physical differences between the actual parts and those that were used to obtain the fatigue data.
The methodology presented allows for the use of numerous correction functions to adjust the estimated lifetimes of component level assemblies based on key correction factors that account for effects difficult or impractical to incorporate in the base prediction models. The methodology is effective in that it can leverage the utility of the life prediction enabled by finite element modelling. The potential correction factors are presented in a fishbone diagram accounting for effects such as substrate metallization, underfill delamination, solder joint voids, underfill voids, intermetallic thickness, etc. Using existing reliability data, the correction functions are determined via multiple linear regression analysis. To illustrate the utility of the life prediction methodology, a case study is presented for flip chip on board assemblies. The uncorrected fatigue life of the solder interconnects is estimated using a trilayer stack analytical model predicting plastic strain and incorporating correction functions for the glass transition temperature of the underfill, an area ratio for the solder joint interconnect pads, and the substrate bond pad metallization.",2004,0, 8717,Interleaving for combating bursts of errors,"To ensure data fidelity, a number of random error correction codes (ECCs) have been developed. ECC is, however, not efficient in combating bursts of errors, i.e., a group of consecutive (in the one-dimensional (1-D) case) or connected (in the two- and three-dimensional (2-D and 3-D) case) erroneous code symbols owing to the bursty nature of errors. Interleaving is a process to rearrange code symbols so as to spread bursts of errors over multiple codewords that can be corrected by ECCs. By converting bursts of errors into random-like errors, interleaving thus becomes an effective means to combat error bursts. In this article, we first illustrate the philosophy of interleaving by introducing a 1-D block interleaving technique. Then multi-dimensional (M-D) bursts of errors and optimality of interleaving are defined. The fundamentals and algorithms of the state of the art of M-D interleaving - the t-interleaved array approach by Blaum, Bruck and Vardy and the successive packing approach by Shi and Zhang - are presented and analyzed. In essence, a t-interleaved array is constructed by closely tiling a building block, which is solely determined by the burst size t. Therefore, the algorithm needs to be implemented each time for a different burst size in order to maintain either the error burst correction capability or optimality. Since the size of error bursts is usually not known in advance, the application of the technique is somewhat limited. The successive packing algorithm, based on the concept of a 2×2 basis array, only needs to be implemented once for a given square 2-D array, and yet it remains optimal for a set of bursts of errors having different sizes. The performance comparison between different approaches is made. Future research on the successive packing approach is discussed.
Finally, applications of 2-D/3-D successive packing interleaving in enhancing the robustness of image/video data hiding are presented as examples of practical utilization of interleaving.",2004,0, 8718,A primary exploration of three-dimensional echocardiographic intra-cardiac virtual reality visualization of atrial septal defect: in vitro validation,"To evaluate the diagnostic value of three-dimensional echocardiography (3-DE) in congenital heart disease such as atrial septal defect (ASD) by virtual reality (VR), ten ASDs of different sizes and shapes were created in ten freshly explanted porcine hearts. An HP SONOS 5500 imaging system was employed for 3-DE, reconstructed and visualized by virtual reality computing techniques. The results showed that all ASDs were successfully reconstructed. The site and geometry were well appraised in their true form. The area and maximum and minimum diameters of each ASD were measured on the 3D reconstruction and compared with independently measured anatomic data. Good correlation was obtained (r>0.95, P<0.01). In conclusion, VR opens an exciting opportunity in the field of 3-DE diagnosis in congenital heart disease.",2005,0, 8719,"SAFE-RD (secure, adaptive, fault tolerant, and efficient resource discovery) in pervasive computing environments","To facilitate thousands of handheld device users' lookup for services anywhere anytime, the importance of a resource discovery scheme in pervasive computing environments cannot be overlooked. The incorporation of security, adaptability, fault tolerance, and efficiency features has long been the quest, but there is no existing resource discovery scheme that can be claimed as ""the solution"". In this paper, we propose the design of such a discovery mechanism, named SAFE-RD, which is an integral part of our on-going research project MARKS (Adaptive Middleware for Resource discovery, Knowledge usability and Self-healing), with illustrative examples.",2005,0, 8720,A Method of Error Compensation on Sensors Based on Neural Networks Algorithm,"To improve the error compensation precision of sensors, a method for the error compensation of sensors based on a neural network algorithm is proposed. The convergence of the algorithm is studied, and the theoretical basis for selecting the learning rate is provided by the convergence theorem. To validate the algorithm, a simulation example of sensor error compensation is given. The result shows that the approach to sensor error compensation using the neural network algorithm has very high accuracy. Thus, the method proposed is effective.",2008,0, 8721,Using Register Lifetime Predictions to Protect Register Files against Soft Errors,"To increase the resistance of register files to soft errors, this paper presents the ParShield architecture. ParShield is based on two observations: (i) the data in a register is only useful for a small fraction of the register's lifetime, and (ii) not all registers are equally vulnerable. ParShield selectively protects registers by generating, storing, and checking the ECCs of only the most vulnerable registers while they contain useful data. In addition, it stores a parity bit for all the registers, re-using the ECC circuitry for parity generation and checking. ParShield has no SDC AVF and a small average DUE AVF of 0.040 and 0.010 for the integer and floating-point register files, respectively. ParShield consumes on average only 81% and 78% of the power of a design with full ECC for the SPECint and SPECfp applications, respectively.
Finally, ParShield has no performance impact and small area requirements.",2007,0, 8722,"Learning from events - experience with an electronic tool supporting the report of events, evaluation, analysis, and the corrective action program","To manage the variety of event types like near misses, notices from audits, and occupational accidents up to real events or external events, and to realize the possibility of learning from them, we developed an electronic tool which provides comprehensive support for operating experience management within the nuclear industry. The method enables organizations and their members to capture, manage, analyze and evaluate safety related experiences. The contribution shows the loop of learning supported by the application. It covers the registration of notifications and events. A digital notification system for the plant staff is implemented and promoted by an incentive system. Anonymous notifications and their processing can also be followed up by the sender. The operating experience team decides, by various and for the most part integrated criteria, how the occurrence is processed. Corresponding to the type of event, the appropriate analyses and reports can be carried out. Here, and in all other steps of administering partial tasks, the workflow principle of assigning responsibilities to persons or working teams is helpful. Further, the proposed actions and recommendations start an examination and approval process. Approved proposals lead to a common corrective action database with the necessary functions to control the execution, the time limits, and the evaluation of actions. Statistical evaluations, tables, charts, and synopses support the experience management. The experiences in nuclear power plants show that the method helps in ensuring that the collected information is effectively analyzed, integrated in the corrective action program, and subjected to the control of time limits and priorities. Further, it ensures that the feedback is distributed and easily available to all users. The common use of the method by different departments increases the cooperation of departments related to operating experience management and organizational learning.",2007,0, 8723,Fault diagnosis of frequency convert system based on networked virtual instrument,"To obtain sufficient fault information and to extract important fault characteristics is the key to fault diagnosis. For the core control unit, some important fault characteristics must be obtained from the user's on-site environment. However, at present, there are some problems in the design of on-site application-oriented virtual instruments. For example, the on-site resources often cannot meet the requirements of fault diagnosis, and sufficient knowledge of the application field and software and hardware design ability must be acquired by the software designers. Based on the fault diagnosis of a frequency converter system, this paper puts forward a design method for on-site application-oriented networked virtual instruments. Remote resources are called via Internet technology, and a networked virtual instrument is constructed in the case of insufficient on-site resources. Requirement design, resource calling and data management are made relatively independent by means of flow and flow assembling, resource and resource matching, and data dictionary and data image.
The design process of the networked virtual instrument is thus simplified.",2009,0, 8724,Tolerance of performance degrading faults for effective yield improvement,"To provide a new avenue for improving yield for nano-scale fabrication processes, we introduce a new notion: performance degrading faults (pdef). A fault is said to be a pdef if it cannot cause a functional error at the system outputs but may result in system performance degradation. In a processor, a fault is a pdef if it causes no error in the execution of user programs but may reduce performance, e.g., decrease the number of instructions executed per cycle. By identifying faulty chips that contain pdef's that degrade performance within some limits and binning these chips based on their resulting instruction throughput, effective yield can be improved in a radically new manner that is completely different from the current practice of performance binning on clock frequency. To illustrate the potential benefits of this notion, we analyze the faults in the branch prediction unit of a processor. Experimental results show that every stuck-at fault in this unit is a pdef. Furthermore, 97% of these faults induce almost no performance degradation.",2009,0, 8725,Error Detection via Online Checking of Cache Coherence with Token Coherence Signatures,"To provide high dependability in a multithreaded system despite hardware faults, the system must detect and correct errors in its shared memory system. Recent research has explored dynamic checking of cache coherence as a comprehensive approach to memory system error detection. However, existing coherence checkers are costly to implement, incur high interconnection network traffic overhead, and do not scale well. In this paper, we describe the token coherence signature checker (TCSC), which provides comprehensive, low-cost, scalable coherence checking by maintaining signatures that represent recent histories of coherence events at all nodes (cache and memory controllers). Periodically, these signatures are sent to a verifier to determine if an error occurred. TCSC has a small constant hardware cost per node, independent of cache and memory size and the number of nodes. TCSC's interconnect bandwidth overhead has a constant upper bound and never exceeds 7% in our experiments. TCSC has negligible impact on system performance.",2007,0, 8726,Skew Detection and Correction Method of Fabric Images Based on Hough Transform,"To solve the skew problem of scanned fabric images, a method based on the Hough transform for skew detection and correction in fabric images is presented. By combining the characteristics of fabric images with the weft direction information extracted by the Sobel operator, this method performs a hierarchical Hough transform on the weft boundary to detect the skew angle of the fabric image. Finally, a rotation algorithm based on the image linear storage structure is introduced, and the skewed image is corrected rapidly. The skew detection algorithm has been tested on fabric images with various skew angles, and very promising results have been achieved, with more than 99% accuracy.
Experimental results show that the proposed method, with high adaptability, is more accurate and rapid than the traditional Hough transform.",2009,0, 8727,Research on integral error calibrating system of high-voltage electric energy measurement device,"To address the question that there is no accuracy grade for the existing HV energy measuring system, a design scheme to calibrate the integral error of an HV energy measuring device is provided in this paper. The hardware of the calibrating system, including the test power source, signal source, power benchmark system, and power source measuring and regulating circuit, is introduced in detail. The function of the monitoring and operating software is also discussed in the paper.",2007,0, 8728,Research on virtual position noise control technology based on secondary error filter control algorithm,"In a traditional ANC system, the noise reduction region is strictly limited to a certain range around the error sensors, but for drivers it is difficult to install fixed error sensors around the human ear. In this paper, the idea of virtual-position noise control is introduced and combined with the secondary error filtering algorithm. This method can effectively reduce the noise level at a remote location by adjusting the gain coefficient and the virtual noise position, and, according to actual requirements, the amount of noise attenuation in different frequency bands can be controlled.",2009,0, 8729,Optimal Periodic Testing of Intermittent Faults In Embedded Pipelined Processor Applications,"Today's nanometer technology trends have a very negative impact on the reliability of semiconductor products. Intermittent faults constitute the largest part of reliability failures that are manifested in the field during semiconductor product operation. Since software-based self-test (SBST) has been proposed as an effective strategy for on-line testing of processors integrated in non-safety-critical low-cost embedded system applications, optimal test period specification is becoming increasingly challenging. In this paper we first introduce a reliability analysis for optimal periodic testing of intermittent faults that minimizes the test cost incurred, based on a two-state Markov model for the probabilistic modeling of intermittent faults. Then, we present for the first time an enhanced SBST strategy for on-line testing of complex pipelined embedded processors. Finally, we demonstrate the effectiveness of the proposed optimal periodic SBST strategy by applying it to a fully-pipelined RISC embedded processor and providing experimental results.",2006,0, 8730,"Detecting, diagnosing, and tolerating faults in SRAM-based field programmable gate arrays: a survey","Topics related to faults in SRAM-based field programmable gate arrays (FPGAs) have been intensively studied in recent research. These topics include FPGA fault detection, FPGA fault diagnosis, FPGA defect tolerance, and FPGA fault tolerance. This paper provides a guided tour of the approaches related to these topics. These include techniques which are already applied to FPGAs and others which have been recently introduced and can be applied to today's FPGAs.",2003,0, 8731,Impact analysis of performance faults in modern microprocessors,"Towards improving performance, modern microprocessors incorporate a variety of architectural features, such as branch prediction and speculative execution, which are not critical to the correctness of their operation.
While faults in the corresponding hardware may not necessarily affect functional correctness, they may, nevertheless, adversely impact performance. In this paper, we quantitatively investigate the performance impact of such faults using a superscalar, dynamically-scheduled, out-of-order, Alpha-like microprocessor, on which we execute SPEC2000 integer benchmarks. We provide extensive fault-simulation-based experimental results and discuss how this information may guide the inclusion of additional hardware for performance loss recovery and yield enhancement.",2009,0, 8732,Research and Design of Tower Crane Condition Monitoring and Fault Diagnosis System,"Tower cranes play an important role among hoisting apparatus, and reducing tower crane accidents and improving crane safety performance are always urgent tasks. The C8051F020 SCM is selected as the core of this system, and many advanced technologies such as multisensor data acquisition, expert systems and neural networks are used. The system has self-contained condition monitoring and fault diagnosis functions, stable performance and a friendly interface, providing a powerful guarantee for efficient and safe operation.",2010,0, 8733,The asymmetric multi-layer for distributed fault detection in networks with unreliable processors,"Traditional centralized management solutions do not scale to present-day large-scale computer/communication networks. They suffer from certain other drawbacks too: a single point of failure and hence a lack of fault tolerance, and heavy communication costs associated with the central manager. It has been recognized that decentralized/distributed solutions can solve some of the problems associated with centralized solutions. A new theoretical multi-level paradigm called ML-ADSD (Thulasraman et al. (2003)) assumes that all nodes in the same cluster are symmetric for multi-level adaptive distributed diagnosis algorithms in fully connected networks, which is not true and cannot be realized in real life. In this paper, we propose a new algorithm called the asymmetric multi-level adaptive distributed diagnosis algorithm, AML-ADSD. In this new algorithm, node parameters are classified using a standard benchmark program. Then, each node's performance can be calculated.",2004,0, 8734,Use of fault tree analysis for evaluation of system-reliability improvements in design phase,"Traditional failure mode and effects analysis is applied as a bottom-up analytical technique to identify component failure modes and their causes and effects on system performance, and to estimate their likelihood, severity and criticality or priority for mitigation. Failure modes and their causes, other than those associated with hardware, primarily electronic, remained poorly addressed or not addressed at all. Likelihood of occurrence was determined on the basis of component failure rates or by applying engineering judgement in their estimation. Resultant prioritization is consequently difficult, so that only the apparent safety-related or highly critical issues were addressed. When thoroughly done, traditional FMEA or FMECA were too involved to be used as an effective tool for reliability improvement of the product design. Fault tree analysis, applied to the product top-down in view of its functionality, failure definition, architecture, and stress and operational profiles, provides a methodical way of following the product's functional flow down to the low-level assemblies, components, failure modes and respective causes and their combinations.
The flexibility of modeling various functional conditions and interactions, such as enabling events, events with a specific priority of occurrence, etc., using FTA provides for an accurate representation of their functional interdependence. In addition to being capable of accounting for mixed reliability attributes (failure rates mixed with failure probabilities), fault trees are easy to construct and change for quick tradeoffs, as the roll-up of unreliability values is automatic for instant evaluation of the final quantitative reliability results. Failure mode analysis using the fault tree technique described in this paper allows for a real, in-depth engineering evaluation of each individual cause of a failure mode regarding software and hardware components, their functions, stresses, operability and interactions.",2000,0, 8735,Ontology-based fault diagnosis for industrial control applications,"Traditional fault detection systems in industrial control applications are just able to report occurring faults. Fault diagnosis systems are more desirable for plant operators, as such systems are capable of reducing the number of occurring alarms by eliminating consecutive alarms and prioritizing critical alarms. The disadvantage of those systems is that they have to be implemented anew for every control application, as the system dependencies vary from application to application. Ontology-based fault diagnosis systems do not have this disadvantage. Only the ontology has to be created for the new system, which greatly reduces time and effort, as old ontologies can be reused for the new system.",2010,0, 8736,The Portable PCB Fault Detector Based on ARM and Magnetic Image,"Traditional fault detection techniques can hardly keep pace with modern electronic technology. This paper introduces how to build a portable fault detector for military electronic equipment based on magnetic imaging, which can rapidly test electronic equipment in the field. The detector uses ARM and uC/OS-II as the development platform, with MiniGUI as the graphical interface.",2008,0, 8737,Reliability analysis of AUV based on fuzzy fault tree,"Traditional fault tree analysis methods need exact values of event occurrence probabilities, and the incompleteness and fuzziness of the data are ignored. A fuzzy fault tree analysis method is proposed for studying system reliability. The fuzzy fault tree of the AUV is established. Using operational rules, the fuzzy probability of the top event, namely that the AUV cannot work normally, is calculated and the results are analyzed. The results show that this method can resolve the problems of fuzzy data and of partial-event failure criteria in fault tree analysis. It provides a valuable reference for AUV design, fault diagnosis and maintenance.",2009,0, 8738,Improve the robot calibration accuracy using a dynamic online fuzzy error mapping system,"Traditional robot calibration implements model and modeless methods. In the modeless method, position error compensation is performed by moving the robot's end-effector to the target position in the workspace and finding the position error of that target position by using a bilinear interpolation method based on the errors of the four neighboring points around the target position. A camera or other measurement devices can be utilized to find or measure this position error and compensate for it with the interpolation result.
This paper provides a novel fuzzy interpolation method to improve the compensation accuracy obtained by using a bilinear interpolation method. A dynamic online fuzzy inference system is implemented to meet the needs of a fast real-time control system and calibration environment. The simulation results show that the compensation accuracy can be greatly improved by using this fuzzy interpolation method compared with the bilinear interpolation method.",2004,0, 8739,Software Test Selection Patterns and Elusive Bugs,"Traditional white and black box testing methods are effective in revealing many kinds of defects, but the more elusive bugs slip past them. Model-based testing incorporates additional application concepts in the selection of tests, which may provide more refined bug detection, but does not go far enough. Test selection patterns identify defect-oriented contexts in a program. They also identify suggested tests for risks associated with a specified context. A context and its risks form a kind of conceptual trap designed to corner a bug. The suggested tests will find the bug if it has been caught in the trap.",2005,0, 8740,Novel Compact Hairpin SIR Bandpass Filters with Defected Ground Structure,"Three types of hairpin SIR bandpass filters with size reduction and spurious response suppression, which operate in the S band, are proposed; their performances are calculated and optimized. In these novel structures, split-ring stepped-impedance resonator and DGS techniques are applied to not only reduce the size but also improve the filter frequency responses. The filters designed in this paper have advantages such as compact, simple and novel structures, smaller sizes compared to traditional SIR bandpass filters, high selectivity, low passband insertion losses, and wide stopband responses.",2007,0, 8741,Custom implant design for patients with cranial defects,"Three-dimensional image reconstruction and rapid prototyping models improve defect evaluation, treatment planning, implant design, and surgeon accuracy. It is found that both 3-D imaging and physical models are helpful for the evaluation of cranial defects, treatment planning, and custom implant design. While 3-D imaging can be used on a routine basis, the physical skull models can be a useful additional tool, especially when the cranial defects are rare, unusual, or difficult. In this case, the introduction of a rapid prototyping stereolithographic medical model for manufacturing a large cranioplasty implant is considered to be an improvement over other traditional techniques. Because of the accuracy of the physical model, the surgeon has a good understanding of the cranial defect, and precisely fitting implants can be fabricated in order to re-establish skull contours for the patient. In addition, the excellent fitting and fixation techniques of cranioplasty have reduced operating time significantly.",2002,0, 8742,Optimization of regularization of attenuation and scatter-corrected 99mTc cardiac SPECT studies for defect detection using hybrid images,"Through means of a receiver-operating-characteristic study, we optimize the iteration number and three-dimensional (3-D) Gaussian postfiltering of 99mTc cardiac emission ordered-subset expectation-maximization (OSEM) reconstructions that implement corrections for both attenuation and scatter. Hybrid images, wherein artificial perfusion defects were added to clinical patient studies that were read as being normally perfused, were used for this optimization.
The test conditions included three different iteration numbers of OSEM (1, 5 and 10) using four angles per subset, followed by 3-D Gaussian low-pass filtering at each iteration level. The level of Gaussian low-pass filtering was varied using standard deviations (σ) of 0.6, 0.75, 1 and 1.25 pixels, in addition to a case where no postfiltering was applied. Four observers read 80 images for each of the 15 test conditions being investigated, providing confidence ratings as to the presence or absence of perfusion defects. Results indicate that at all iterations, optimum detection performance is obtained for a broad plateau or range of postfilters (σ = 0.6 to 1 pixel). As expected, a gradual reduction in performance is seen at either end of this broad maximum, where the images either have been very heavily smoothed or have very little postfiltering. Finally, one iteration of OSEM appears to be the appropriate choice, since no significant improvement in detection accuracy was observed with increasing iteration number as long as the reconstructions are postfiltered with σ in the range of 0.6 to 1 pixel",2001,0, 8743,Research of on-line monitoring and fault diagnosis system for cold-rolling based on RBF neural network,"Through the analysis of the electric drive system of a cold-rolling steel plant and reasonable selection of detection signals, an on-line monitoring system has been developed. It provides the functions of real-time data display, alarms, sample data storage, data acquisition, parameter setting and others. Using MATLAB-Simulink tools, a simulation system has been built for fault diagnosis of the three-phase induction motor, a key machine of the cold-rolling electric drives. By applying an RBF neural network to diagnosis, a diagnosis system has been designed. Verification of the trained network shows that the fault diagnosis system has a good ability to predict and diagnose faults of the three-phase induction motor, and has good application prospects.",2009,0, 8744,Towards tight error bounds on rational expression evaluation,Tight bounds on the rounding errors accumulated during a sequence of operations in expression evaluation can be difficult to obtain. We present two recently developed methods that help solve this problem in the case of rational expressions in one variable. These methods have been successfully applied to recent software developments of highly-accurate transcendental function libraries.,2000,0, 8745,Flip chip advanced package solder joint embrittlement fault isolation using TDR,"Time Domain Reflectometry (TDR) is a non-destructive failure analysis technique that identifies the location of an open or short failure. It utilizes a system that sends electrical pulses through the sample and measures the reflected signal. By examining the polarity, amplitude, and other electrical signatures of all reflections, the location of the failure can be easily identified. This is done by comparing the waveform obtained from the device being tested with those obtained from known-good samples. The recent application of TDR to advanced packages has led to the development of methods to fault isolate flip chips or BGAs, even with low-level failures such as solder joint embrittlement.
By utilizing TDR, further analysis can be accurately focused on the failing spot and the cause of the failure can be determined efficiently.",2004,0, 8746,Test-Pattern Selection for Screening Small-Delay Defects in Very-Deep Submicrometer Integrated Circuits,"Timing-related defects are major contributors to test escapes and in-field reliability problems for very-deep submicrometer integrated circuits. Small delay variations induced by crosstalk, process variations, power-supply noise, as well as resistive opens and shorts can potentially cause timing failures in a design, thereby leading to quality and reliability concerns. We present a test-grading technique that uses the method of output deviations for screening small-delay defects (SDDs). A new gate-delay defect probability measure is defined to model delay variations for nanometer technologies. The proposed technique intelligently selects the best set of patterns for SDD detection from an n-detect pattern set generated using timing-unaware automatic test-pattern generation (ATPG). It offers significantly lower computational complexity and excites a larger number of long paths compared to a current-generation commercial timing-aware ATPG tool. Our results also show that, for the same pattern count, the selected patterns provide more effective coverage ramp-up than timing-aware ATPG and a recent pattern-selection method for random SDDs potentially caused by resistive shorts, resistive opens, and process variations.",2010,0, 8747,Application of robust l1 fault detection and isolation to an industrial benchmark using a genetic algorithm,"To aid in the transition of fault detection and isolation (FDI) theory to practice, a realistic, nonlinear industrial diesel engine benchmark was defined by Blanke et al. This paper applies a robust l1 fault detection and isolation (FDI) technique to this benchmark. Using a linear model and assuming appropriate parametric uncertainty, a bank of robust linear estimators is developed using mixed structured singular value (MSSV) and l1 theories. To obtain the estimator parameters, a real-coded genetic algorithm is used to solve the optimization problem. These estimators are then used to perform FDI of the industrial diesel engine actuator. The results illustrate the power of hybrid evolutionary-algebraic techniques for solving important problems in estimation and control.",2006,0, 8748,Transmission surveillance and self-restoration against fibre fault for time division multiplexing using passive optical network,"This study proposes a practical transmission surveillance and self-protection scheme for time division multiplexing using passive optical network (TDM-PON) with centralised monitoring and self-restorable apparatus. Troubleshooting a TDM-PON involves locating and identifying the source of an optical problem in what may be a complex optical network topology that includes several optical line terminals (OLTs), optical splitters, fibres and optical network units (ONUs). Since most components in the network are passive, a large part of the issues are due to dirty/damaged/misaligned connectors or breaks/macrobends in optical fibre cables. These will affect one, some or all subscribers in the network, depending on the location of the problems. The proposed scheme is able to prevent and detect the occurrence of fibre faults in a network system through centralised monitoring, and can be operated remotely from a central office via an Ethernet connection.
Even with fibre fault prevention mechanisms, failures will still occur. Therefore fibre fault detection is required in order to detect potential faults and precisely localise the exact failure location. Whenever any failure occurs on the primary entity, the proposed system can protect and switch the failed line to the protection line to ensure that traffic flows continuously. Meanwhile, the failure information will be delivered to field engineers so that appropriate recovery action can be taken to treat the fibre fault and failed link. A point-to-multipoint (P2MP) application is proposed, with experimental results demonstrating the feasibility of the approach. This approach has bright prospects for improving the survivability and reliability as well as increasing the efficiency and monitoring capabilities of TDM-PON.",2009,0, 8749,Design of a defected ground beam forming network,"This text presents the design of a novel, wideband beam forming network (BFN) realized by defected ground structure (DGS). The BFN is a corporate one with equi-phase and equi-magnitude output ports. In a regular microstrip BFN design, non-standard, thin-and-thick lines are used together with the standard-width (50 Ω) lines for matching purposes. However, in this design, the BFN is implemented by using standard-width microstrip lines, and rectangular defects at the ground with a 0.508 mm-thick Rogers RO4003C substrate in between. Results of the simulation show that the designed structure is a wideband, equi-phase, and equi-magnitude BFN with at most 0.4 dB magnitude difference and 4° phase difference in the 5.6 GHz-11.1 GHz band. The resulting structure is in a compact form with constant-width lines on the upper side, and defects on the ground side. In order to avoid radiation effects from the defect, another substrate of the same dielectric is also placed beneath the DGS.",2010,0, 8750,Calibration Using Generalized Error Matrices of a Long Reach Articulated Carrier,"This work concerns the development of advanced robotic systems for nuclear applications. The manipulator will be used for light intervention in spent fuel management facilities. The robot must meet severe specifications: small diameter, long reach within a minimum range of 6 m, high dexterity to move in constrained environments and many degrees of freedom (DOF) for obstacle avoidance. In order to meet these requirements, the interactive robotics unit of CEA LIST has developed a very challenging robotic carrier (called P.A.C.) which is able to perform light intervention tasks inside large blind hot cells using existing engineering penetrations. This long reach multi-link carrier has 11 DOF and weighs less than 30 kg. The gravity effect in the manipulator is largely compensated by a special mechanical structure (the parallelogram) that helps reduce the size of the rotation actuators used to operate the robot. Also, a glass fiber epoxy equilibrium spring is used to compensate the gravity effect on the elevation actuators. A field test is made to measure the robot's repeatability and accuracy by using a laser tracker to measure the end effector's position. Due to its size and weight, this large robot manipulator exhibits considerable elastic and geometric deformation, and thus has very low accuracy. A calibration method of the robot using generalized error matrices is applied to reduce the positioning error of the system. These matrices are a polynomial function of the system geometry and joint variables.
The method is first tested by simulation to examine its viability on large manipulators. After encouraging simulation results, an experimental campaign is carried out for the calibration of the PAC manipulator. Results show that the adopted polynomial model, with the newly identified parameters, is capable of correcting and reducing the system errors of long reach manipulators.",2007,0, 8751,Fault tolerant IPMS motor drive based on adaptive backstepping observer with unknown stator resistance,"This work considers the problem of designing a fault tolerant system for an IPMS motor drive subject to current sensor faults. To achieve this goal, two control strategies are considered. The first is based on field oriented control and a developed adaptive backstepping observer, which are used together in the fault-free case. The second is a fault tolerant strategy based on the observer for faulty conditions. Stator resistance, as a possible source of system uncertainty, is taken into account under different operating conditions. Current sensor failures are detected, and an observer based on the adaptive backstepping approach is used to estimate the currents and the stator resistance. The nonlinear observer stability study based on Lyapunov theory guarantees the stability and convergence of the estimated quantities, provided the appropriate adaptation laws are designed and the persistency of excitation condition is satisfied. In our control approach, the d-q axis current references are generated on the basis of a maximum power factor per ampere control scheme related to the IPMSM drive. The complete proposed scheme is simulated using MATLAB/Simulink software. Simulations are performed to illustrate the proposed strategy.",2008,0, 8752,Patient motion correction in computed tomography by reconstruction on a moving grid,"This work describes a method of motion correction in image reconstruction for both emission computed tomography and X-ray CT. The method assumes that moving objects are represented as objects of constant intensity defined on a deformable grid. Large-scale body motion and respiratory motion are tracked by fiducial markers attached to the patient's body. The positions of the markers are determined using a camcorder-based system. The deformation of the body is approximated by local shifts in the grid nodes of the image representation, which are linear functions of the time-dependent displacement of the fiducial markers. After the grid motion is established, node intensities are reconstructed using either an analytical or an iterative statistical reconstruction algorithm. The information about the grid deformation is incorporated during the computation of the forward projection. Both analytical and iterative ML-EM algorithms are implemented without a significant increase in reconstruction time. Numerical experiments are presented in a two-dimensional case for a simulated CT scan of a deformable phantom and in a three-dimensional case for the NCAT phantom with respiratory motion.",2007,0, 8753,Improving transient memory fault resilience of an H.264 decoder,"Traditionally, fault-tolerance has been the domain of expensive, hard real-time critical systems. However, the rates of transient faults occurring in semiconductor devices will increase significantly due to shrinking structure sizes and reduced operating voltages.
Thus, even consumer-grade embedded applications with soft real-time requirements, like audio and video players, will require error detection and correction methods to ensure reliable everyday operation. Cost, timing and energy considerations, however, prevent the embedded system developer from correcting every single error. In many situations, though, it is not necessary to create a totally error-free system. In such a system, only perceptible errors have to be corrected. To distinguish between perceptible and non-perceptible errors, a classification of errors according to their relevance to the application is required. When real-time conditions have to be observed, the current timing properties of the system provide additional contextual information. In this paper, we present a structure for an error-correcting embedded system based on a real-time-aware classification. Using a cross-layer approach utilizing application annotations of error classifications as well as information available inside the operating system, the error correction overhead can be significantly reduced. This is shown in a first evaluation by analyzing the achievable improvements in an H.264 video decoder under error injection and simulated error correction.",2010,0, 8754,FT-PPTC: An Efficient and Fault-Tolerant Commit Protocol for Mobile Environments,"Transactions are required not only for wired networks but also for the emerging wireless environments where mobile and fixed hosts participate side by side in the execution of the transaction. This heterogeneous environment is characterized by constraints in mobile host capabilities, network connectivity and also an increasing number of possible failure modes. Classical atomic commit protocols used in wired networks are therefore not directly suitable for this heterogeneous environment. Furthermore, the few commit protocols designed for mobile transactions either consider mobile hosts only as initiators, not as active participants, or show a high resource blocking time. We present the Fault-Tolerant Pre-Phase Transaction Commit (FT-PPTC) protocol for mobile environments. FT-PPTC decouples the commit of mobile participants from that of fixed participants. Consequently, the commit set can be reduced to a set of entities in the fixed network. Thus, the commit can easily be supported by any traditional atomic commit protocol, such as the established 2PC protocol. We integrate fault-tolerance as a key feature of FT-PPTC. Performance evaluations confirm the efficiency, scalability and low resource blocking time of our approach",2006,0, 8755,Experimental investigation of internal short circuit faults leading to advanced incipient behavior and failure of a distribution transformer,"Transformer fault detection and diagnosis is becoming more important due to the restructuring of the electric power industry. In this era of deregulation, loading transformers to their optimum capacity is becoming normal practice, which in turn applies high stresses on the insulation of the transformers and increases the probability of occurrence of internal short circuit winding faults. Such faults can lead to catastrophic failure and hence cause outages. Utilities and other entities in the electric power business are therefore exploring ways of detecting these faults in transformers in the incipient stage.
Terminal values, i.e. the primary and secondary currents and voltages of the transformer, convey information that can be used to detect internal transformer failures. Before developing a detection method, the behavior of these terminal values should be understood. In an effort to characterize the behavior of the terminal values of a transformer during internal short circuit and incipient faults, short circuit faults were staged on a 25 kVA, 7200 V/240 V/120 V two-winding custom-built transformer. This paper discusses the results of the field experiments performed over a 19-month period. It presents time domain results of selected short circuit experiments. It also presents recordings of advanced incipient-like behavior during the last set of experiments.",2004,0, 8756,A fuzzy dissolved gas analysis method for the diagnosis of multiple incipient faults in a transformer,"Transformer incipient faults are often diagnosed with dissolved gas analysis (DGA) of transformer oil. Although various methods have been developed to interpret DGA results, such as IEC codes, the Rogers method and triangle methods, they sometimes fail to determine the faults. This is normally due to the existence of more than one fault in a transformer. This paper presents a fuzzy logic technique and a computer program package which can be used to store various test results and diagnose multiple faults in a transformer. It has been proved to be a very useful tool for transformer diagnosis and maintenance planning.",2000,0, 8757,Increasing register file immunity to transient errors,"Transient errors are a major reason for system downtime in many systems. In prior research, the register file has largely been neglected, but since it is accessed very frequently, the probability of transient errors is high. These errors can quickly spread to different parts of the system, and cause an application crash or silent data corruption. The paper addresses the reliability of register files in superscalar processors. We propose to duplicate actively used physical registers in unused physical registers. If the protection mechanism (parity or ECC) used for the primary copy indicates an error, the duplicate can provide the data, as long as it is not corrupted. We implement two strategies based on register duplication. In the ""conservative strategy"", we limit ourselves to the given register usage behavior, and duplicate register contents only on otherwise unused registers. Consequently, there is no impact on the original performance when there is no error, except for the protection mechanism used for the primary copy. Experiments with two different versions of this strategy show that, with the more powerful conservative scheme, 78% of the accesses are to the physical registers with duplicates. The ""aggressive strategy"" sacrifices some performance to increase the number of register accesses with duplicates. It does so by marking the registers not used for a long time as ""dead"" and using them for duplicating actively used registers. Experiments with this strategy indicate that it takes the fraction of reliable register accesses to 84%, and degrades the overall performance by only 0.21% on average.",2005,0, 8758,Real time error concealment in H.264 video decoder for embedded devices,"Transmission of compressed video over unreliable networks is an interesting topic of research. Since compression is inversely proportional to data redundancy, highly compressed data are more susceptible to errors.
This paper discusses error recovery and concealment techniques for an H.264/AVC baseline profile decoder, where bandwidth, decoder performance and memory are critical constraints for real-time implementation. It proposes a technique which gives improved perceptual concealment without affecting the decoder performance and its memory requirement. The technique is based on a novel algorithm which uses the available features in the H.264 standard to perform concealment with minimal pre-processing. The novel algorithm has been compared with the error concealment capability of the JM 13.2 reference software.",2008,0, 8759,Joint Temporal and Spatial Error Concealment for Multiple Description Video Coding,"Transmission of compressed video signals over error-prone networks exposes the information to losses and errors. To reduce the effects of these losses and errors, this paper presents a joint spatial-temporal estimation method which takes advantage of data correlation in these two domains for better recovery of the lost information. The method is designed for hybrid multiple description coding, which splits video signals along spatial and temporal dimensions. In particular, the proposed method includes fixed and content-adaptive approaches for estimation method selection. The fixed approach selects the estimation method based on description loss cases, while the adaptive approach selects the method according to pixel gradients. The experimental results demonstrate that improved error resilience can be accomplished by the proposed estimation method.",2010,0, 8760,Optimization of Hybridized Error Concealment for H.264,"Transmission of highly compressed video bitstreams can result in packet erasures when channel status is unfavorable, the consequence being not only the corruption of a single frame, but propagation to its successors. To prevent error-catalyzed artifacts from producing visible corruption of affected video frames, the use of error concealment at the video decoder becomes essential. This paper proposes an efficient and integrated novel EC method for the latest video compression standard, H.264/AVC, using not only spatially and temporally correlated information but also the tandem utilization of two new coding tools: directional spatial prediction for intracoding and variable block size motion compensation of H.264/AVC. Experiments performed using the proposed hybridization method, combining the above spatial and temporal estimation elements, fulfilled expectations for the whole scheme. The experimental results show that the proposed method offers excellent gains of up to 10.62 dB compared to that of the Joint Model (JM) decoder for a wide range of benchmark sequences without any considerable increase in time demand.",2008,0, 8761,Identification of Errors in Power Flow Controller Parameters,"Transmission open access allows power transactions to take place between remote parts of an interconnected system. As a result, some parts of the transmission system may experience unusual power flows during certain power transactions. One way to circumvent possible congestion is to use power flow control devices. These devices, which are also referred to as flexible AC transmission system (FACTS) devices, allow rerouting of power flows in the system. The amount of power flowing through such a device can be controlled via device parameters. Hence, proper monitoring of these parameters is important for reliable operation and system security.
In this paper, an identification method for detecting and identifying errors associated with power flow controller parameters is presented. The method is based on the available measurements, such as the power flows and injections, which are used by the state estimators at the control center. Hence, the method can be implemented easily as part of the existing energy management functions",2006,0, 8762,Test Compaction for Transition Faults under Transparent-Scan,"Transparent-scan was proposed as an approach to test generation and test compaction for scan circuits. Its effectiveness was demonstrated earlier in reducing the test application time for stuck-at faults. We show that similar advantages exist when considering transition faults. We first show that a test sequence under the transparent-scan approach can imitate the application of broadside tests for transition faults. Test compaction can proceed similarly to stuck-at faults by omitting test vectors from the test sequence. A new approach for enhancing test compaction is also described, whereby additional broadside tests are embedded in the transparent-scan sequence without increasing its length or reducing its fault coverage",2006,0, 8763,Impact of Scaling on Neutron-Induced Soft Error in SRAMs From a 250 nm to a 22 nm Design Rule,"Trends in terrestrial neutron-induced soft-errors in SRAMs from a 250 nm to a 22 nm process are reviewed and predicted using the Monte-Carlo simulator CORIMS, which is validated to have less than 20% variation from experimental soft-error data on 180-130 nm SRAMs in a wide variety of neutron fields, including field tests at low and high altitudes and accelerator tests at LANSCE, TSL, and CYRIC. The following results are obtained: 1) Soft-error rates per device in SRAMs will increase 6-7 times from the 130 nm to the 22 nm process; 2) As SRAM is scaled down to a smaller size, the soft-error rate is dominated more significantly by low-energy neutrons (< 10 MeV); and 3) The area affected by one nuclear reaction spreads over 1 M bits, and the bit multiplicity of multi-cell upsets becomes as high as 100 bits and more.",2010,0, 8764,Template based attenuation correction for PET in MR-PET scanners,"This work investigates a template based procedure for attenuation correction (TBA) of PET scans acquired in future hybrid MR-PET scanners, which will not offer a measured attenuation correction. A previous report of our group described a method (TBA-SPM) by which individual attenuation maps can be obtained from an attenuation template which is spatially normalized to the SPM2 standard brain shape. Attenuation maps of females and males obtained from PET transmission scans were used as input for this template. The present study replaces the template referring to SPM2 by a female and a male attenuation template (fAT and mAT), each based on four measured attenuation images (TBA-f&m). The corresponding T1-MR templates (fMR and mMR) were also available. Thus, possible morphological gender-related differences, not considered when the standardized SPM2 brain shape is used, may be taken into account. To examine this approach, PET scans of 15 female and 15 male subjects of an ongoing study were attenuation corrected using the templates fAT and mAT. For this purpose, and depending on the subject's gender, the fMR or mMR templates were warped onto the individual MR image. The resulting warping matrix was applied to fAT or mAT, respectively. The individualized attenuation maps were used to reconstruct the PET emission data.
These were compared to PET images attenuation corrected with the conventional PET-based transmission data (PBA). While the relative differences between PBA and TBA-f&m reconstructed images averaged over each group and all regions of interest were 0.57% ± 3.76% for females and 0.59% ± 3.56% for males, the corresponding values obtained with the TBA-SPM method showed an overestimation with similar standard deviations (2.39% ± 3.76% for females and 2.42% ± 3.37% for males). In conclusion, the alternative gender-related template method TBA-f&m gives acceptable results with no significant differences between the genders.",2008,0, 8765,A study of the effects of transient fault injection into the VHDL model of a fault-tolerant microcomputer system,"This work presents a campaign of fault injection to validate the dependability of a fault tolerant microcomputer system. The system is duplex with cold stand-by sparing, parity detection and a watchdog timer. The faults have been injected on a chip-level VHDL model, using an injection tool designed for this purpose. We have carried out a set of injection experiments (with 3000 injections each), injecting transient faults of types stuck-at, bit-flip, indetermination and delay on both the signals and variables of the system, running two different workloads. We have analysed the pathology of the propagated errors, measured their latency, and calculated both detection and recovery coverage. For instance, system detection coverages (including non-effective errors) up to 98%, and system recovery coverages up to 94% have been obtained for short transient faults",2000,0, 8766,Fault attack on AES with single-bit induced faults,"This work presents a differential fault attack against AES employing any key size, regardless of the key scheduling strategy. The presented attack relies on the injection of a single bit flip, and is able to check for the correctness of the injection of the fault a posteriori. This fault model nicely fits the one obtained through underfeeding a computing device employing a low cost tunable power supply unit. This fault injection technique, which has been successfully applied to hardware implementations of AES, receives a further validation in this paper where the target computing device is a system-on-chip based on the widely adopted ARM926EJ-S CPU core. The attack is successfully carried out against two different devices, etched in two different technologies (a generic 130 nm and a low-power oriented 90 nm library) running a software implementation of AES-192 and AES-256 and has been reproduced on multiple instances of the same chip.",2010,0, 8767,Research and simulation of amplitude error and quadrature error for inductosyn,"The two-phase amplitude error and quadrature error of an inductosyn have a great impact on detection accuracy; by studying these errors of the inductosyn, their effects can be eliminated and the accuracy of the inductosyn improved. First, the characteristics of these two errors and their impact on measurement accuracy are analyzed. Then, a new method is studied: at the two positions of 0° and 90°, the sine and cosine winding voltages are detected respectively, achieving amplitude and quadrature error detection, and the errors are then corrected in software. Finally, experiments show that this new error detection and correction method can improve the accuracy of the inductosyn.",2010,0, 8768,A Fault-Tolerant Middleware Switch for Space Applications,"Typical data systems for space applications are computer-centric.
The central component is a computer to which several devices are attached. The computer handles devices, communication, computation and storage of data. Furthermore, fault-tolerance is an important issue in space systems. This paper presents a novel multicast embedded middleware switch which, in its first implementation, is fully realized on an FPGA. SRAM-based FPGAs are very susceptible to SEUs stemming from radiation effects in space; therefore, considering fault-tolerance is inevitable. The capability of this switch to handle different interfaces as a single integrated system, together with its fault-tolerance features, makes it very suitable for space data handling applications.",2009,0, 8769,Fault-Tolerant Coverage Planning in Wireless Networks,"Typically, wireless network coverage is planned with static redundancy to compensate for temporal variations in the environment. As a result, the service is still delivered, but the network coverage could have entered a critical state, meaning that further changes in the environment may lead to service failure. Service failures have to be explicitly notified by the applications. Therefore, in this paper we propose a methodology for fault-tolerant coverage planning. The idea is to detect the critical state and remove it by on-line system reconfiguration, restoring the original static redundancy. Even in the case of a failure, the system automatically generates a new configuration to restore the service, leading to shorter repair times. We describe how this approach can be applied to wireless mesh networks, often used in industrial applications like manufacturing, automation and logistics. The evaluation results show that the underlying model used for error detection and system recovery is accurate enough to correctly identify the system state.",2008,0, 8770,Sub-picture video coding for unequal error protection,"Unequal error protection is one of the key tools in video communication systems operating over error-prone networks. In order to allow unequal protection of a video bit-stream, codewords have to be categorized according to their importance to visual quality. The proposed sub-picture coding method allows partitioning images into regions of interest and helps to maintain a good image quality in the chosen regions. As an example, the sub-picture coding scheme is applied to multicast Internet streaming. It is shown that the overall subjective image quality and the objective foreground image quality are considerably better when compared to the selected conventional coding schemes.",2002,0, 8771,Techniques for transient fault sensitivity analysis and reduction in VLSI circuits,"Transient faults in VLSI circuits could lead to disastrous consequences. With technology scaling, circuits are becoming increasingly vulnerable to transient faults. This paper presents an accurate and efficient method to estimate the fault sensitivity of VLSI circuits. Using a binary counter and an RC5 encryption implementation as examples, this paper shows that by performing a limited amount of random simulations, fault sensitivity can be estimated accurately at a reasonably low computational cost.
This method is then used to show that the combination of two circuit-level techniques can make circuits more fault-tolerant than using these techniques individually.",2003,0, 8772,Comparing the effects of standard and segmented attenuation correction,"We have evaluated a segmentation algorithm developed by General Electric for the Advance positron emission tomography (PET) scanner. Phantom studies were performed to measure the accuracy in emission scans reconstructed with segmented attenuation data as a function of transmission scan time. The results indicated that errors of less than 2% will be made in emission scan data reconstructed with transmission scan times of 3 minutes. Based on the phantom results, 185 patient data sets were acquired with both long (15 min.) non-segmented and short (3 min.) segmented attenuation scans. Comparisons of scan data in foci of abnormal uptake yielded a correlation coefficient between long and short scan SUV maximum values of 0.99 and a mean absolute difference of 4.6%. The average SUV values in lung between long and short had a correlation coefficient of 0.99 and a mean absolute difference of 3.1%. The corresponding values from the liver had a correlation coefficient of 0.96 and a mean absolute difference of 7.4%. Visual review by physicians noted minor differences, but when grading the images on a scale of 1 to 5, 91% of the time there was no difference. In all cases comparing the long and short attenuation correction, no abnormal sites were missed",2000,0, 8773,Error recovery for a boiler system with OTS PID controller,"We have previously presented initial results of a case study which illustrated an approach to engineering protective wrappers as a means of detecting errors or unwanted behaviour in systems employing an OTS (off-the-shelf) item. The case study used a Simulink model of a steam boiler system together with an OTS PID (proportional, integral and derivative) controller. The protective wrappers are developed for the model of the system in such a way that they allow detection and tolerance of typical errors caused by unavailability of signals, violations of range limitations, and oscillations. In this paper, we extend the case study to demonstrate how forward error recovery based on exception handling can be systematically incorporated at the level of the protective wrappers.",2005,0, 8774,Fault detection for robot manipulators with parametric uncertainty: a prediction error based approach,"We introduce a new approach to fault detection for robot manipulators. The technique, which is based on the isolation of fault signatures via filtered torque prediction error estimates, does not require measurements or estimates of manipulator acceleration, as is the case with some of the previously suggested methods. The method is formally demonstrated to be robust under uncertainty in the robot parameters. Furthermore, an adaptive version of the algorithm is introduced, and shown to both improve coverage and significantly reduce detection times. The effectiveness of the approach is demonstrated by experiments with a two-joint manipulator system",2000,0, 8775,Self-adaptive masking method for automatic shape recognition and motion correction of thallium-201 myocardial perfusion SPECT imaging,"We introduce a new self-adaptive masking method for shape detection in low signal-to-noise ratio (SNR) images to improve the tracking capabilities of the motion correction in nuclear cardiac imaging.
The method is developed using a two-dimensional fast Fourier transform, ideal filtering in the frequency domain, recursive thresholding, and region recognition. The method is independent of the content correlation between the planar images and has a good tolerance for low SNR images. It is also robust under significant abrupt motion of the object",2000,0, 8776,Evaluating the Accuracy of Fault Localization Techniques,"We investigate claims and assumptions made in several recent papers about fault localization (FL) techniques. Most of these claims have to do with evaluating FL accuracy. Our investigation centers on a new subject program having properties useful for FL experiments. We find that Tarantula (Jones et al.) works well on the program, and we show weak support for the assertion that coverage-based test suites help Tarantula to localize faults. Baudry et al. used automatically-generated mutants to evaluate the accuracy of an FL technique that generates many distinct scores for program locations. We find no evidence to suggest that the use of mutants for this purpose is invalid. However, we find evidence that the standard method for evaluating FL accuracy is unfairly biased toward techniques that generate many distinct scores, and we propose a fairer method of accuracy evaluation. Finally, Denmat et al. suggest that data mining techniques may apply to FL. We investigate this suggestion with the data mining tool Weka, using standard techniques for evaluating the accuracy of data mining classifiers. We find that standard classifiers suffer from the class imbalance problem. However, we find that adding cost information improves accuracy.",2009,0, 8777,Fault recovery in linear systems via intrinsic evolution,"We investigate fault recovery using reconfiguration for analog linear feedback control systems. We assume any faults occur only within the linear system and accessibility to its internal circuitry is impossible. Consequently, the only way to restore service - even degraded service - is by inserting a compensation network into the control loop. System failures are manifested by a change in the original bandwidth. The compensators are evolved intrinsically.",2004,0, 8778,Using software implemented fault inserter in dependability analysis,"We investigate program susceptibility to hardware faults in the Win32 environment. For this purpose we use the software implemented fault injector FITS. We analyze the natural fault resistivity of COTS systems and the effectiveness of various software techniques improving system dependability. The problems of experiment tuning and result interpretation are discussed in the context of a wide spectrum of applications.",2002,0, 8779,On the Correlation between Controller Faults and Instruction-Level Errors in Modern Microprocessors,"We investigate the correlation between register transfer-level faults in the control logic of a modern microprocessor and their instruction-level impact on the execution flow of typical programs. Such information can prove immensely useful in accurately assessing and prioritizing faults with regard to their criticality, as well as commensurately allocating resources to enhance testability, diagnosability, manufacturability and reliability.
To this end, we developed an extensive infrastructure which allows injection of stuck-at faults and transient errors of arbitrary starting point and duration, as well as cost-effective simulation and classification of their repercussions into various instruction-level error types. As a test vehicle for our study, we employ a superscalar, dynamically-scheduled, out-of-order, Alpha-like microprocessor, on which we execute SPEC2000 integer benchmarks. Extensive experimentation with faults injected in control logic modules of this microprocessor reveals interesting trends and results, corroborating the utility of this simulation infrastructure and motivating its further development and application to various tasks related to robust design.",2008,0, 8780,On the bit-error probability of differentially encoded QPSK and offset QPSK in the presence of carrier synchronization,"We investigate the differences between allowable differential encoding strategies and their associated bit-error probability performances for quadrature phase-shift keying (QPSK) and offset QPSK modulations when the carrier demodulation reference signals are supplied by the optimum (motivated by maximum a posteriori estimation of carrier phase) carrier-tracking loop suitable for that modulation. In particular, we show that in the presence of carrier-synchronization phase ambiguity but an otherwise ideal loop, both the symbol and bit-error probabilities in the presence of differential encoding are identical for the two modulations. On the other hand, when in addition the phase error introduced by the loop's finite signal-to-noise ratio is taken into account, it is shown that the two differentially encoded modulations behave differently, and their performances are no longer equivalent. A similar statement has previously been demonstrated for the same modulations when the phase ambiguity was assumed to have been perfectly resolved by means other than differential encoding.",2006,0, 8781,Error resilience of EZW coder for image transmission in lossy networks,We investigate the effect of network errors on Embedded Zerotree Wavelet (EZW) encoded images and propose modifications to the EZW coder to increase error resilience in bursty packet loss conditions. A hybrid-encoding scheme that uses data interleaving to spread correlated information into independently processed groups and layered encoding to protect significant information within each group is presented. Simulation results for various packet loss percentages show the improved error resiliency of our scheme in random and bursty packet loss environments.,2002,0, 8782,Bit-error rate of binary digital modulation schemes in generalized gamma fading channels,"We derive a closed-form expression for the bit-error rate of binary digital modulation schemes in a generalized fading channel that is modeled by the three-parameter generalized gamma distribution. This distribution is very versatile and generalizes or accurately approximates many of the commonly used channel models for multipath, shadow, and composite fading. 
The result is expressed in terms of Meijer's G-function, which can be easily evaluated numerically.",2005,0, 8783,Improved error bounds for the erasure/list scheme: the binary and spherical cases,We derive improved bounds on the error and erasure rate for spherical codes and for binary linear codes under Forney's erasure/list decoding scheme and prove some related results.,2004,0, 8784,Bounds on the Decoding Error Probability of Binary Block Codes over Noncoherent Block AWGN and Fading Channels,"We derive upper bounds on the decoding error probability of binary block codes over noncoherent block additive white Gaussian noise (AWGN) and fading channels, with applications to turbo codes. By a block AWGN (or fading) channel, we mean that the carrier phase (or fading) is assumed to be constant over each block but independently varying from one block to another. The union bounds are derived for both noncoherent block AWGN and fading channels. For the block fading channel with a small number of fading blocks, we further derive an improved bound by employing Gallager's first bounding technique. The analytical bounds are compared to the simulation results for a coded block-based differential phase shift keying (B-DPSK) system under a practical noncoherent iterative decoding scheme proposed by Chen et al. We show that the proposed Gallager bound is very tight for the block fading channel with a small number of fading blocks, and the practical noncoherent receiver performs well for a wide range of block fading channels",2006,0, 8785,Perceptually Unequal Packet Loss Protection by Weighting Saliency and Error Propagation,"We describe a method for achieving perceptually minimal video distortion over packet-erasure networks using perceptually unequal loss protection (PULP). There are two main ingredients in the algorithm. First, a perceptual weighting scheme is employed wherein the compressed video is weighted as a function of the nonuniform distribution of retinal photoreceptors. Secondly, packets are assigned temporal importance within each group of pictures (GOP), recognizing that the severity of error propagation increases with elapsed time within a GOP. Using both frame-level perceptual importance and GOP-level hierarchical importance, the PULP algorithm seeks efficient forward error correction assignment that balances efficiency and fairness by controlling the size of identified salient region(s) relative to the channel state. PULP demonstrates robust performance and significantly improved subjective and objective visual quality in the face of burst packet losses.",2010,0, 8786,Automatic mining of source code repositories to improve bug finding techniques,"We describe a method to use the source code change history of a software project to drive and help to refine the search for bugs. Based on the data retrieved from the source code repository, we implement a static source code checker that searches for a commonly fixed bug and uses information automatically mined from the source code repository to refine its results. By applying our tool, we have identified a total of 178 warnings that are likely bugs in the Apache Web server source code and a total of 546 warnings that are likely bugs in Wine, an open-source implementation of the Windows API. 
We show that our technique is more effective than the same static analysis that does not use historical data from the source code repository.",2005,0, 8787,A type system for statically detecting spreadsheet errors,"We describe a methodology for detecting user errors in spreadsheets, using the notion of units as our basic elements of checking. We define the concept of a header and discuss two types of relationships between headers, namely is-a and has-a relationships. With these, we develop a set of rules to assign units to cells in the spreadsheet. We check for errors by ensuring that every cell has a well-formed unit. We describe an implementation of the system that allows the user to check Microsoft Excel spreadsheets. We have run our system on practical examples, and even found errors in published spreadsheets.",2003,0, 8788,Spaceflight multi-processors with fault tolerance and connectivity tuned from sparse to dense,"We describe a novel generation of multi-processor architectures, with fault tolerance and connectivity tuned from sparse to dense. Multivariate feasible regions quantify how such architectures minimize channel cost and latency, and maximize throughput and fault tolerance. Key to designs which optimally exploit these feasible regions: a software-automated catalog of results from the mathematics of connectivity. For example, discoveries about Hamming graphs set the stage for algorithms that configure the corresponding topologies. We introduce a new theorem that explicates the separability-covering duality of Hamming graphs, together with a new, efficient algorithm for recognizing and labeling Hamming graphs of arbitrary radix and dimension. Previously reported algorithms run slower, and emphasize Hamming graphs with radix two. Grid computing applications that benefit from tunable fault tolerance and connectivity include i) design of multi-processors, coupled via vertical cavity surface emitting lasers (VCSELs); ii) auto-configuration of self-healing mobile ad hoc networks (MANETs) via digital radio-frequency channels. This is an in-depth paper targeting engineers, computer scientists, or applied mathematicians with a background in quantitative, dependable computing. We provide a tutorial illustrating how the mathematics of connectivity solves seven practical problems, present seven new theoretical results, and pose nine open challenges. Two complementary works appear in these proceedings: addressing a general audience with an overview of our work, the diagrams and informal narrative of ""Multi-Processors by the Numbers"" as presented by Laforge and Turner (2006) unfold how feasible regions govern multi-processor design and operation, and elaborate grid computing as mentioned in the abstract above; technologists and engineers may also be interested in ""vertical cavity surface emitting lasers for spaceflight multi-processors"" (LaForge et al., 2006), our broadly-scoped report on VCSELs as enablers for tunable architectures",2006,0, 8789,Practical Methods for Geometric and Photometric Correction of Tiled Projector,"We describe a novel, practical method to create large-scale, immersive displays by tiling multiple projectors on curved screens. Calibration is performed automatically with imagery from a single uncalibrated camera, without requiring knowledge of the 3D screen shape.
Composition of 2D-mesh-based coordinate mappings, from screen-to-camera and from camera-to-projectors, allows image distortions imposed by the screen curvature and camera and projector lenses to be geometrically corrected together in a single non-parametric framework. For screens that are developable surfaces, we show that the screen-to-camera mapping can be determined without some of the complication of prior methods, resulting in a display on which imagery is undistorted, as if physically attached like wallpaper. We also develop a method of photometric calibration that unifies the geometric blending, brightness scaling, and black level offset maps of prior approaches. The functional form of the geometric blending is novel in itself. The resulting method is more tolerant of geometric correction imprecision, so that visual artifacts are significantly reduced at projector edges and overlap regions. Our efficient GPU-based implementation enables a single PC to render multiple high-resolution video streams simultaneously at frame rate to arbitrary screen locations, leaving the CPU largely free to do video decompression and other processing.",2006,0, 8790,Convolutional Decoding in the Presence of Synchronization Errors,"We describe the operation of common convolutional decoding algorithms in the presence of insertions, deletions, as well as substitutions in the received message. We first propose a trellis description that can handle the existence of insertions and deletions. Then, we use this trellis diagram to develop the Viterbi algorithm and the Log-MAP algorithm in the presence of synchronization errors. The proposed techniques are presented in the most general form where standard convolutional codes are used and no change to the encoder is required. We establish the effectiveness of the proposed algorithms using standard convolutional codes at different rates.",2010,0, 8791,Defected ground structure to reduce mutual coupling between cylindrical dielectric resonator antennas,Use of a simple ring-shaped defected ground structure is experimentally demonstrated to suppress considerable mutual coupling between two cylindrical dielectric resonators. About 5 dB suppression has been obtained near the operating frequency around 3.3 GHz. The radiation characteristics with and without defect in the ground plane are also reported.,2008,0, 8792,Compact Power Divider using Defected Ground Structure for Wireless Applications,"Use of different types of defected ground structures (DGS) has been reported in this paper to design compact power dividers in microstrip medium. The equivalent circuit of the DGS unit cell has been used to evaluate the performance of the power divider. Based on this approach, compact two-way equal power dividers have been designed in the GSM (900 MHz) band. Results show a size reduction of 35% and 32% for the power dividers using T-shaped DGS and split-ring DGS over the conventional power divider.",2008,0, 8793,Accurate Lateral Scatter Correction Within the MatriXX,"Using an IMRT MatriXX, which is used to measure intensity maps in 2D, is a promising method for the dosimetric verification of external-beam, megavoltage radiation therapy. From MatriXX images, two-dimensional dose distributions inside a phantom or patient can be obtained. However, the lateral X-ray scatter within the MatriXX makes the accuracy of dose reconstruction unsatisfactory. Therefore, a more accurate correction for the lateral scatter contribution has to be considered.
In this study, a scatter kernel was introduced to correct the dose values determined with the MatriXX. The model requires the primary dose component at the position of the MatriXX. A parametrized description of the lateral scatter within the MatriXX was obtained from measurements with an ionization chamber in a miniphantom. This yielded a good description of the lateral scatter within the MatriXX on the central axis. This is relevant since the treatment outcome critically depends on the dose delivered to the tumor (usually close to the isocenter). There is excellent agreement between the dose values measured with the ionization chamber in a miniphantom and those obtained from the MatriXX after the scatter correction with the kernel. It can be concluded that the scatter correction is able to improve the accuracy of the MatriXX dose values on the central axis.",2008,0, 8794,New Types of Microstrip Filters Using Defected Grounds,"Using Finite Difference Time Domain (FDTD) and microwave circuit simulation tools, the use of Defected Ground Structures (DGS) in microstrip low-pass, band-stop and band-pass filters is investigated. The results obtained show that filters with very simple structures can be produced with good and predictable frequency responses.",2005,0, 8795,Characterization of coil faults in an axial flux variable reluctance PM motor,"Variable-reluctance (VR) and switched-reluctance (SR) motors have been proposed for use in applications requiring a degree of fault tolerance. A range of topologies of brushless SR and VR permanent-magnet (PM) motors is not susceptible to some types of faults, such as phase-to-phase shorts, and can often continue to function in the presence of other faults. In particular, coil-winding faults in a single stator coil may have relatively little effect on motor performance but may affect overall motor reliability, availability, and longevity. It is important to distinguish between and characterize various winding faults for maintenance and diagnostic purposes. These fault characterization and analysis results are a necessary first step in the process of motor fault detection and diagnosis for this motor topology. This paper examines rotor velocity damping due to stator winding turn-to-turn short faults in a fault-tolerant axial flux VR PM motor. In this type of motor, turn-to-turn shorts due to insulation failures have I-V characteristics similar to those of coil faults resulting from other problems, such as faulty maintenance or damage due to impact. In order to investigate the effects of these coil faults, a prototype axial flux VR PM motor was constructed. The motor was equipped with experimental fault simulation stator windings capable of simulating these and other types of stator winding faults. This paper focuses on two common types of winding faults and their effects on rotor velocity in this type of motor.",2002,0, 8796,Research of error correction of LEO satellite orbit prediction for vehicle-borne tracking and position device,"A vehicle-borne tracking and positioning device is used to track LEO satellites. When the target is absent, which might be caused by cloud or the zenith blind zone, forecast data are used to acquire the target. When the orbit prediction has serious errors, the target is always missed.
Meanwhile, in its application to the vehicle-borne tracking and position device, instability of the base during the tracking process results in significant differences between the predicted data and the tracking data, so the target cannot be tracked rapidly. We applied the tracking data to predict satellite orbits using an improved Laplace method, and then corrected the error between the predicted data and the actual measured data by Lagrange interpolation, which improves the accuracy of the predicted values. The test data show the accuracy of the predicted data ranging from 3' to 10' in both azimuth and elevation when the satellite orbit is extrapolated to 7 seconds.",2010,0, 8797,Identification of design errors through functional testing,"Verification of the functionality of VHDL specifications is one of the primary and most time-consuming tasks of design. However, it must necessarily be an incomplete task because it is impossible to completely exercise the specification by exhaustively applying all input patterns. We present a two-step strategy based on symbolic analysis of the VHDL specification, using a behavioral error model. First, we generate a reduced number of functional test vectors for each process of the specification by using a new analysis metric which we call bit coverage. The error model based on this metric allows the identification of possible design errors represented by redundancies in the VHDL code. Then, through the definition of a controllability measure, we verify whether these functional test vectors can be applied to the process inputs when it is interconnected to other processes. If this is not the case, the analysis of the nonapplicable inputs provides identification of possible design errors due to erroneous interconnections. Bit coverage provides complete statement, condition and branch coverage, and we experimentally show that it allows the identification of possible design errors. Identification and removal of design errors improves the global testability of a design.",2003,0, 8798,Transient fault emulation of hardened circuits in FPGA platforms,"Very deep submicron and nanometer technologies are emphasizing soft errors as an important issue in the challenges of modern electronic systems. Hardened circuits are currently required in many applications where fault tolerance (FT) was not a requirement in the very near past. Together with the generation of tools and methods for hardening circuits, new ways of validating the FT are needed. These solutions must be cost-effective and provide help not only in measuring the robustness of the circuit but also in locating the weak areas and in proposing hardening solutions. FPGA emulation of SEU effects is gaining attention in order to speed up the fault tolerance evaluation. In this work a system is proposed for the evaluation of fault tolerance with respect to SEU effects by emulation in platform FPGAs. In this system, most of the modules of a typical fault injection environment are embedded in the FPGA. Therefore, the time required for the FT validation has been optimised with respect to existing approaches.",2004,0, 8799,Autonomous transient fault emulation on FPGAs for accelerating fault grading,"Very deep submicron and nanometer technologies have notably increased integrated circuit (IC) sensitivity to radiation. Soft errors are currently appearing in ICs working at the earth's surface.
Therefore, hardened circuits are currently required in many applications where fault tolerance (FT) was not a requirement in the very near past. The use of CAD tools for the generation and the validation of fault tolerant circuits will allow designers to obtain hardened devices in a cost-effective way, with short development times and highly reliable results. While automatic insertion of fault tolerant structures in designs is already possible, automatic evaluation with an optimum time-cost relation is still needed. In this sense, the use of platform FPGAs for the emulation of single-event upset (SEU) effects is gaining attention in order to speed up the fault tolerance evaluation. In this work, a new emulation system for the evaluation of FT with respect to SEU effects is proposed. This solution exploits hardware resources to accelerate the FT evaluation. It is analysed and compared with respect to other emulation techniques. The proposed solution provides not only short times but also low area cost for FT validation, giving better results than pure software or hardware solutions.",2005,0, 8800,Computational issues in fault detection filter design,"We discuss computational issues encountered in the design of residual generators for dynamic inversion based fault detection filters. The two main computational problems in determining a proper and stable residual generator are the computation of an appropriate left-inverse of the fault-system and the computation of coprime factorizations with proper and stable factors. We discuss numerically reliable approaches for both of these computations, relying on matrix pencil approaches and recursive pole assignment techniques for descriptor systems. The proposed computational approach to design fault detection filters is completely general and can easily handle even unstable and/or improper systems.",2002,0, 8801,Comments on an evolutionary intensity inhomogeneity correction algorithm,"We discuss some aspects of a well-known algorithm for intensity inhomogeneity correction in Magnetic Resonance Imaging (MRI), the parametric bias correction (PABIC) algorithm. In this approach, the intensity inhomogeneity is modelled by a linear combination of 2D or 3D Legendre polynomials (computed as outer products of 1D polynomials). The model parameter estimation process proposed in the original paper is similar to a (1+1) Evolution Strategy, with some small and subtle differences. In this paper we discuss some features of the algorithm elements, trying to uncover sources of undesired behaviors and the limits to its applicability. We study the energy function proposed in the original paper and its relation to the image formation model. We also discuss the original minimization algorithm behavior. We think that this detailed discussion is needed because of the high impact that the original paper had in the literature, leading to an implementation in the well-known ITK library, which means that it has become a de facto standard.",2008,0, 8802,Autonomic computing correlation for fault management system evolution,"We discuss the emerging area of autonomic computing and its implications for the evolution of fault-management systems. Particular emphasis is placed on the concept of event correlation and its role in system self-management.
A new correlation analysis tool to assist with the development, management and maintenance of correlation rules and beliefs is described.",2003,0, 8803,Evaluating low-cost fault-tolerance mechanism for microprocessors on multimedia applications,"We evaluate a low-cost fault-tolerance mechanism for microprocessors, which can detect and recover from transient faults, using multimedia applications. There are two driving forces to study fault-tolerance techniques for microprocessors. One is deep submicron fabrication technologies. Future semiconductor technologies could become more susceptible to alpha particles and other cosmic radiation. The other is the increasing popularity of mobile platforms. Recently, cell phones have been used for applications which are critical to our financial security, such as flight ticket reservation, mobile banking, and mobile trading. In such applications, it is expected that computer systems will always work correctly. From these observations, we propose a mechanism which is based on an instruction reissue technique for incorrect data speculation recovery and which utilizes time redundancy. Unfortunately, we found significant performance loss when we evaluated the proposal using the SPEC2000 benchmark suite. We evaluate it using MediaBench, which contains more practical mobile applications than SPEC2000",2001,0, 8804,On compaction-based concurrent error detection,"We examine a low-cost, zero-latency, non-intrusive CED method for restricted error models. The method is based on compaction of the circuit outputs, prediction of the compacted responses, and comparison. This method also achieves significant hardware cost reduction by utilizing the information available through the restricted error model. We assume that the error model is not defined through permanent or transient faults in the hardware, but rather in terms of the erroneous behavior that such faults induce. Thus, any fault model can be described by providing, for every input combination, the error-free response and all erroneous responses resulting from faults in the model.",2003,0, 8805,A framework for reduced order modeling with mixed moment matching and peak error objectives,"We examine a new method of producing reduced order models for LTI systems which attempts to minimize a bound on the peak error between the original and reduced order models subject to a bound on the peak value of the input. The method, which can be implemented by solving a set of linear programming problems that are parameterized via a single scalar quantity, is able to minimize an error bound subject to a number of moment matching constraints. Moreover, because all optimization is performed in the time domain, the method can also be used to perform model reduction for infinite dimensional systems, rather than being restricted to finite order state space descriptions. We begin by contrasting the method we present here to two classes of standard model reduction algorithms, namely moment matching algorithms and singular-value-based methods. After motivating the class of reduction tools we propose, we describe the algorithm (which minimizes the L1 norm of the difference between the original and reduced order impulse responses) and formulate the corresponding linear programming problem that is solved during each iteration of the algorithm. We then show how to incorporate moment matching constraints into the basic error bound minimization algorithm, and present an example which utilizes the techniques described herein.
We conclude with some general comments on future work, including a nonlinear programming formulation with potential implementation benefits.",2010,0, 8806,Complexity issues in automated synthesis of failsafe fault-tolerance,"We focus on the problem of synthesizing failsafe fault-tolerance, where fault-tolerance is added to an existing (fault-intolerant) program. A failsafe fault-tolerant program satisfies its specification (including safety and liveness) in the absence of faults. However, in the presence of faults, it satisfies its safety specification. We present a somewhat unexpected result that, in general, the problem of synthesizing failsafe fault-tolerant distributed programs from their fault-intolerant version is NP-complete in the state space of the program. We also identify a class of specifications, monotonic specifications, and a class of programs, monotonic programs, for which the synthesis of failsafe fault-tolerance can be done in polynomial time (in program state space). As an illustration, we show that the monotonicity restrictions are met for commonly encountered problems, such as Byzantine agreement, distributed consensus, and atomic commitment. Furthermore, we evaluate the role of these restrictions in the complexity of synthesizing failsafe fault-tolerance. Specifically, we prove that if only one of these conditions is satisfied, the synthesis of failsafe fault-tolerance is still NP-complete. Finally, we demonstrate the application of the monotonicity property in enhancing the fault-tolerance of (distributed) nonmasking fault-tolerant programs to masking.",2005,0, 8807,A Candidate Fault Model for AspectJ Pointcuts,"We present a candidate fault model for pointcuts in AspectJ programs. The fault model identifies faults that we believe are likely to occur when writing pointcuts in the AspectJ language. Categories of fault types are identified, and each individual fault type is described and categorized. We argue that a fault model that focuses on the unique constructs of the AspectJ language is needed for the systematic and effective testing of AspectJ programs. Our pointcut fault model is a first step towards such a model",2006,0, 8808,A constraint logic programming framework for the synthesis of fault-tolerant schedules for distributed embedded systems,"We present a constraint logic programming (CLP) approach for the synthesis of fault-tolerant hard real-time applications on distributed heterogeneous architectures. We address time-triggered systems, where processes and messages are statically scheduled based on schedule tables. We use process re-execution for recovering from multiple transient faults. We propose three scheduling approaches, (i) full transparency, (ii) slack sharing, and (iii) conditional, each of which presents a trade-off between schedule simplicity and performance and provides a different degree of transparency. We have developed a CLP framework that produces the fault-tolerant schedules, guaranteeing schedulability in the presence of transient faults.
We show how the framework can be used to tackle design optimization problems. The proposed approach has been evaluated using extensive experiments.",2007,0, 8809,Exact fault simulation for systems on silicon that protects each core's intellectual property (IP),"We present a fault simulation approach for multicore systems on silicon (SOC) (a) that provides exact fault coverage for the entire SOC, (b) does so without revealing any intellectual property (IP) of core vendors, and (c) whose run time is comparable to that required by the existing approaches that require all IP to be revealed. This fault simulator assumes a full scan SOC design and is the first in a suite of simulation, test generation, and DFT tools that are currently under development. The proposed approach allows flexibility in the selection of a test methodology for SOC, reduces test application cost and area and performance overheads, and allows more comprehensive testing",2001,0, 8810,Automatic Test Pattern Generation for Interconnect Open Defects,"We present a fully automated flow to generate test patterns for interconnect open defects. Both inter-layer opens (open-via defects) and arbitrary intra-layer opens can be targeted. An aggressor-victim model used in industry is employed to describe the electrical behavior of the open defect. The flow is implemented using standard commercial tools for parameter extraction (PEX) and test generation (ATPG). A highly optimized branch-and-bound algorithm to determine the values to be assigned to the aggressor lines is used to reduce both the ATPG effort and the number of aborts. The resulting test sets are smaller and achieve a higher defect coverage than stuck-at n-detection test sets, and are robust against process variations.",2008,0, 8811,Measurement-based frame error model for simulating outdoor Wi-Fi networks,"We present a measurement-based model of the frame error process on a Wi-Fi channel in rural environments. Measurements are obtained in controlled conditions, and careful statistical analysis is performed on the data, providing information which the network simulation literature is lacking. Results indicate that most network simulators use a frame loss model that can miss important transmission impairments even at a short distance, particularly when considering antenna radiation pattern anisotropy and multi-rate switching.",2009,0, 8812,Recent improvements on the specification of transient-fault tolerant VHDL descriptions: a case-study for area overhead analysis,"We present a new approach to design reliable complex circuits with respect to transient faults in memory elements. These circuits are intended to be used in harmful environments like radiation. During the design flow, this methodology is also used to perform an early estimation of the obtained reliability level. Usually, this reliability estimation step is performed in the laboratory, by means of radiation facilities (particle accelerators). By doing so, the early-estimated reliability level is used to balance the design process into a trade-off between the maximum area overhead due to the insertion of redundancy and the minimum reliability required for a given application. This approach is being automated through the development of a CAD tool (FT-PRO).
Finally, we also present a case study of a simple microprocessor used to analyze the FT-PRO performance in terms of the area overhead required to implement the fault-tolerant circuit.",2000,0, 8813,Estimating circuit fault-tolerance by means of transient-fault injection in VHDL,"We present a new approach to estimate the reliability of complex circuits used in harmful radiation environments. This goal can be attained in an early stage of the design process. Usually, this step is performed in the laboratory, by means of radiation facilities (particle accelerators). In our case, we estimate the expected tolerance of the complex circuit with respect to SEU during the VHDL specification step. By doing so, the early-estimated reliability level is used to balance the design process into a trade-off between the maximum area overhead due to the insertion of redundancy and the minimum reliability required for a given application. This approach is being automated through the development of a CAD tool.",2000,0, 8814,"VRL, a Novel Environment for Control Engineering Practicing: An Application to a Fault Tolerant Control System",A virtual remote laboratory (VRL) is a powerful tool for effective active learning in control engineering education because it gives the opportunity of remotely testing control laws both by simulations within a virtual reality framework and by remote experiments. In this paper the virtual environment VRL is described and an application of a fault tolerant control law on an inverted pendulum is shown,2006,0, 8815,Solving In-Circuit Defect Coverage Holes with a Novel Boundary Scan Application,"Virtual test access, offered by boundary-scan at in-circuit test (ICT), is insufficient for the challenges of next-generation high-density printed circuit boards (PCBs). The loss of test access translates to loss of defect coverage. This paper describes a novel use of existing technologies that increases the effectiveness of boundary-scan.",2008,0, 8816,Error proof inkless die bonding process development,"Wafer mapping techniques originated at the wafer fab for wafer manufacturing process control and yield improvement, as presented by T. Takeda (1994). Recently, inkless assembly processes have been becoming more and more popular for wafer fab process simplification and cycle time reduction, as well as for the sale of graded IC products under the pressure of IC manufacturing cost. However, not all of the packaging and assembly houses are ready for wafer mapping, as converting from the current inked wafer process to inkless assembly poses many challenges to assembly equipment, process and manufacturing control, especially for smaller die sizes (less than 1×1 mm). This paper discusses the critical challenges of handling inkless wafers in packaging and assembly. Technical solutions are developed, including an error-proof inkless packaging process flow, reference die design, an inkless die pick-up algorithm, and pattern recognition optimization. The scenarios of fatal impact to inkless wafer mapping implementation are captured and solutions are provided that guarantee smooth implementation of inkless assembly",2005,0, 8817,Optimal Wavelet Design for Multicarrier Modulation with Time Synchronization Error,"Wavelet packet based multi-carrier modulation (WPMCM) is an efficient transmission technique which has the advantage of being a generic scheme whose characteristics can be customized to fulfill a design specification.
However, WPMCM is sensitive and vulnerable to time synchronization errors because its symbols overlap. In this paper, we design new wavelets to alleviate WPMCM's vulnerability to timing errors. First, a filter design framework that facilitates the development of new wavelet bases is built. Then the expressions for errors due to time offset in WPMCM transmission are derived and stated as a convex optimization problem. Finally, an optimal filter that best handles these deleterious effects is designed by means of semidefinite programming (SDP). Through computer simulations, the performance advantages of the newly designed filter over standard wavelet filters are demonstrated.",2009,0, 8818,Anomaly Detection Support Vector Machine and Its Application to Fault Diagnosis,"We address the issue of classification problems in the following situation: test data include data belonging to unlearned classes. To address this issue, most previous works have taken two-stage strategies where unclear data are detected using an anomaly detection algorithm in the first stage while the rest of the data are classified into learned classes using a classification algorithm in the second stage. In this study, we propose the anomaly detection support vector machine (ADSVM), which unifies classification and anomaly detection. ADSVM is unique in comparison with previous work in that it addresses the two problems simultaneously. We also propose a multiclass extension of ADSVM that uses a pairwise voting strategy. We empirically show that ADSVM outperforms two-stage algorithms in application to a real automobile fault dataset, as well as to UCI benchmark datasets.",2008,0, 8819,Bit Error Rate Analysis of Orthogonal Space-Time Block Codes in Nakagami-m Keyhole Channels,"We analyze the bit error rate (BER) performance of multiple-input multiple-output (MIMO) systems employing orthogonal space-time block codes (STBC) in Nakagami-m keyhole channels. We derive exact analytical closed-form expressions for the average BER of M-ary pulse amplitude modulation (M-PAM) and M-ary quadrature amplitude modulation (M-QAM) as well as a tight approximation for M-ary phase shift keying (M-PSK). These BER expressions are given as finite sums of weighted Meijer G-functions, which can be easily evaluated numerically. Furthermore, we determine the corresponding high-SNR asymptotics, based on which we quantify the diversity order of the considered system. Numerical results illustrate the impact of several different parameters on the average BER and are shown to be in excellent agreement with simulated values.",2006,0, 8820,An online monitoring system of friction fault based on acoustic emission technology,"Vibration signals are usually used to indirectly monitor friction faults between the stator and rotor of steam turbines in fossil-fuel power plants, but the effect is not ideal due to the limitations of the vibration meter and the heavy mass of the rotor. The reason why vibration signals cannot monitor friction faults between the stator and rotor of a steam turbine is analyzed in this paper in theory. A method of monitoring friction faults between the stator and rotor of a steam turbine based on acoustic emission (AE) technology is proposed. An online monitoring system of friction faults for steam turbines is designed.
It is shown that the monitoring system can measure friction faults between the stator and rotor of a steam turbine effectively and accurately and improve the reliability of the steam turbine.",2009,0, 8821,A Whole-Frame Error Concealment Algorithm Based on Optical Flow Estimation in H.264/AVC,"Video communication over wireless networks may suffer from whole-frame loss. To solve the problem, in this paper we propose a novel whole-frame concealment method. We first calculate the optical flow of the frame preceding the lost frame. Then the MV of each pixel in the lost frame is obtained by using the motion information of its previous frame. Finally, with the MVs we can conceal the lost frame pixel by pixel. Experimental results show that the proposed method outperforms the existing methods in JM in both PSNR and visual quality.",2009,0, 8822,Low-delay and error-robust wireless video transmission for video communications,"Video communications over wireless networks often suffer from various errors. A novel video transmission architecture is proposed to meet the low-delay and error-robust requirements of wireless video communications. This architecture uses forward error correction coding and the automatic repeat request (ARQ) protocol to provide efficient bandwidth access over the wireless link. In order to reduce ARQ delay, a video proxy server is implemented at the base station. This video proxy not only reduces the ARQ response time, but also provides error-tracking functionality. The complexity of this video proxy server is analyzed. Experiments show that about 8.9% of the total macroblocks need to be transcoded under a random-error condition of 10^-3 error probability. Because H.263 is the most popular video coding standard for video communication, we use it as an experimental platform. A data-partition scheme is also used to enhance error-resilience performance. This architecture is also suitable for various motion-compensation-based standards like H.261, H.263 series, MPEG-1, MPEG-2, MPEG-4, and H.264. For the ""Foreman"" sequence under a random-error condition of 10^-3 error probability, luminance peak signal-to-noise ratio decreases only 0.35 dB, on average.",2002,0, 8823,A new H.264/AVC error resilience model based on Regions of Interest,"Video transmission over the Internet can sometimes be subject to packet loss, which reduces the end-user's quality of experience (QoE). Solutions aiming at improving the robustness of a video bitstream can be used to subdue this problem. In this paper, we propose a new region-of-interest-based error resilience model to protect the most important part of the picture from distortions. We conduct eye tracking tests in order to collect the region of interest (RoI) data. Then, we apply in the encoder an intra-prediction restriction algorithm to the macroblocks belonging to the RoI. Results show that while no significant overhead is noted, the perceived quality of the video's RoI, measured by means of a perceptual video quality metric, increases in the presence of packet loss compared to the traditional encoding approach.",2009,0, 8824,Error suppression in view synthesis using reliability reasoning for FTV,"View synthesis using depth maps is a crucial application for Free-viewpoint TV (FTV). In this paper, we propose a novel reliability-based view synthesis method using two references and their depth maps. Depth estimation with stereo matching is known to be error-prone, leading to noticeable artifacts in the synthesized new views.
In order to provide plausible virtual views for FTV, our focus is on error suppression for the synthesized view. We introduce a continuous reliability measure based on error approximation via reference cross-checking. The new view interpolation algorithm is derived under the criterion of least sum of squared errors (LSSE). Furthermore, the proposed algorithm can be considered a reliability-aware version of conventional linear view blending. We experimentally demonstrate the effectiveness of our framework with MPEG standard test sequences. The results show that our method outperforms state-of-the-art view interpolation methods both at eliminating artifacts and at improving PSNR.",2010,0, 8825,On-line learning of language models with word error probability distributions,We are interested in the problem of learning stochastic language models on-line (without speech transcriptions) for adaptive speech recognition and understanding. We propose an algorithm to adapt to variations in the language model distributions based on speech input only and without its true transcription. The on-line probability estimate is defined as a function of the prior and word error distributions. We show the effectiveness of word-lattice based error probability distributions in terms of receiver operating characteristics (ROC) curves and word accuracy. We apply the new estimates Padapt(w) to the task of adapting on-line an initial large vocabulary trigram language model and show improvement in word accuracy with respect to the baseline speech recognizer,2001,0, 8826,Flexible Fault Tolerance in Distributed Enterprise Communities,"We are witnessing the birth of the digital enterprise, in which many of the enterprise operations will be performed by independent software programs or by programs acting on behalf of humans. These heterogeneous agents often manage different activities (such as stock trading or inventory management) in an autonomous manner and interact with each other in order to perform their jobs, creating distributed communities of agents. The involved agents are usually complex software entities that should be able to maintain a state that survives software or hardware failures. The classic solution is to use a relational database (such as Oracle), possibly replicated, to save the states of the agents in a manner that can survive failures, and to implement the communal invariants inside the database. However, keeping the invariants inside the database is difficult, since these invariants are application dependent and might be difficult to formalize inside the database. But even more important, since the agents are distributed, keeping the states consistent in the presence of faults might require the use of distributed transactions, which are notoriously hard to handle and very expensive. In this paper we present a flexible, yet robust framework that will allow the states of the involved agents to survive hardware or software failures. In our system, the interactions of the agents that work in a given community are governed by a given law that is enforced in a distributed manner. The state of an agent is defined by the law, and several options to handle failures and keep the states consistent with each other are provided.
The implementation of our system will employ the distributed coordination and control mechanism called Law-Governed Interaction (LGI).",2010,0, 8827,"We're Finding Most of the Bugs, but What are We Missing?","We compare two types of model that have been used to predict software fault-proneness in the next release of a software system. Classification models make a binary prediction that a software entity such as a file or module is likely to be either faulty or not faulty in the next release. Ranking models order the entities according to their predicted number of faults. They are generally used to establish a priority for more intensive testing of the entities that occur early in the ranking. We investigate ways of assessing both classification models and ranking models, and the extent to which metrics appropriate for one type of model are also appropriate for the other. Previous work has shown that ranking models are capable of identifying relatively small sets of files that contain 75-95% of the faults detected in the next release of large legacy systems. In our studies of the rankings produced by these models, the faults not contained in the predicted most fault-prone files are nearly always distributed across many of the remaining files; i.e., a single file that is in the lower portion of the ranking virtually never contains a large number of faults.",2010,0, 8828,Enhancing error localization of DFT codes by weighted l1-norm minimization,"We consider the problem of decoding real BCH discrete Fourier transform (RDFT) codes, which are considered as joint source-channel codes to provide robustness against errors in communication channels. In this paper, we propose to combine a subspace-based algorithm, namely the MUSIC algorithm, with an l1-norm minimization algorithm, which promotes sparsity in the solution, to enhance the error localization of RDFT codes. Simulation results show that the combined algorithm performs better than each of these individual algorithms.",2008,0, 8829,Fault recovery port-based fast spanning tree algorithm (FRP-FAST) for the fault-tolerant Ethernet on the arbitrary switched network topology,"We present a novel approach, named the Fault Recovery Port-Based Fast Spanning Tree Algorithm (FRP-FAST), for extending the Fault-Tolerant Ethernet (FTE) to arbitrary switched network topologies, providing a significant improvement in failure detection and spanning tree rebuilding time on the switched Ethernet. We provide a mechanism that expedites failure detection using a peer-based hello message algorithm and eliminates the chance of any transient loop creation during the spanning tree reconstruction using a pre-configured recovery port. As a result, unlike IEEE 802.1D, the scheme does not block data transmission on unaffected data paths during the spanning tree discovery phase. The FRP-FAST algorithm has been implemented in the kernel mode of a Windows NT-based PC using 3 NICs (a 3-port switch). The measured failure detection and recovery time meets the control industry's 2-second requirement.",2001,0, 8830,Defect-Tolerant CMOL Cell Assignment via Satisfiability,"We present a novel CAD approach to cell assignment for CMOL, a hybrid CMOS/molecular circuit architecture. Our method transforms any logically synthesized circuit based on AND/OR/NOT gates into a NOR gate circuit and maps the NOR gates to CMOL. We encode the CMOL cell assignment problem as Boolean conditions.
The Boolean constraints are satisfiable if and only if there exists a solution that maps all the NOR gates to the CMOL cells. We further investigate various types of static defects for the CMOL architecture and propose a reconfiguration technique that can deal with these defects. We introduce a new CMOL static defect model and provide an automated solution for CMOL cell assignment. Experiments show that our approach can result in smaller area (CMOL cell usage) and better timing delay than the prior approach.",2008,0, 8831,Tate Pairing with Strong Fault Resiliency,"We present a novel non-linear error coding framework which incorporates strong adversarial fault detection capabilities into identity based encryption schemes built using Tate pairing computations. The presented algorithms provide quantifiable resilience in a well-defined strong attacker model. Given the emergence of fault attacks as a serious threat to pairing based cryptography, the proposed technique solves a key problem when incorporated into software and hardware implementations.",2007,0, 8832,Dense error correction via l1-minimization,"We study the problem of recovering a non-negative sparse signal x ∈ R^n from highly corrupted linear measurements y = Ax + e ∈ R^m, where e is an unknown (and unbounded) error. Motivated by an observation from computer vision, we prove that for highly correlated dictionaries A, any non-negative, sufficiently sparse signal x can be recovered by solving an l1-minimization problem: min ||x||_1 + ||e||_1 subject to y = Ax + e. If the fraction ρ of errors is bounded away from one and the support of x grows sublinearly in the dimension m of the observation, then for large m, the above l1-minimization recovers all sparse signals x from almost all sign-and-support patterns of e. This suggests that accurate and efficient recovery of sparse signals is possible even with nearly 100% of the observations corrupted.",2009,0, 8833,On undetectable faults in partial scan circuits using transparent-scan,"We study the undetectable faults in partial scan circuits under a test application scheme referred to as transparent-scan. The transparent-scan approach allows very aggressive test compaction compared to other approaches. We demonstrate that, unlike other approaches that provide high levels of test compaction for partial scan circuits, this approach does not increase the number of undetectable faults. We also discuss the monotonicity of the number of undetectable faults with increased levels of scan.",2004,0, 8834,Razor II: In Situ Error Detection and Correction for PVT and SER Tolerance,"We take advantage of these findings and propose a Razor II approach that introduces two components. First, instead of performing both error detection and correction in the FF, Razor II performs only detection in the FF, while correction is performed through architectural replay.",2008,0, 8835,Whither generic recovery from application faults? A fault study using open-source software,"We test the hypothesis that generic recovery techniques, such as process pairs, can survive most application faults without using application-specific information. We examine in detail the faults that occur in three large, open-source applications: the Apache Web server, the GNOME desktop environment and the MySQL database. Using information contained in the bug reports and source code, we classify faults based on how they depend on the operating environment.
We find that 72-87% of the faults are independent of the operating environment and are hence deterministic (non-transient). Recovering from the failures caused by these faults requires the use of application-specific knowledge. Half of the remaining faults depend on a condition in the operating environment that is likely to persist on retry, and the failures caused by these faults are also likely to require application-specific recovery. Unfortunately, only 5-14% of the faults were triggered by transient conditions, such as timing and synchronization, that naturally fix themselves during recovery. Our results indicate that classical application-generic recovery techniques, such as process pairs, will not be sufficient to enable applications to survive most failures caused by application faults",2000,0, 8836,A Chu Spaces Semantics of BPEL-Like Fault Handling,"We use Chu spaces and an algebra of them to give a denotational semantics of a subset of BPEL. The emphasis is on the scope-based fault handling mechanism. We propose BPEL-F as an abstraction of the subset of BPEL including typical control flow and fault handling. Chu spaces form the main semantic domain. We study the influence of fault handling on the algebraic operators of Chu spaces, and present modified versions of the sequence and concurrence operators. The trigger operator is designed to model the scope-based fault handling. We present valuation functions mapping BPEL-F constructs to Chu spaces.",2009,0, 8837,Finding Bugs in Web Applications Using Dynamic Test Generation and Explicit-State Model Checking,"Web script crashes and malformed dynamically generated webpages are common errors, and they seriously impact the usability of Web applications. Current tools for webpage validation cannot handle the dynamically generated pages that are ubiquitous on today's Internet. We present a dynamic test generation technique for the domain of dynamic Web applications. The technique utilizes both combined concrete and symbolic execution and explicit-state model checking. The technique generates tests automatically, runs the tests capturing logical constraints on inputs, and minimizes the conditions on the inputs to failing tests so that the resulting bug reports are small and useful in finding and fixing the underlying faults. Our tool Apollo implements the technique for the PHP programming language. Apollo generates test inputs for a Web application, monitors the application for crashes, and validates that the output conforms to the HTML specification. This paper presents Apollo's algorithms and implementation, and an experimental evaluation that revealed 673 faults in six PHP Web applications.",2010,0, 8838,Fault-Based Web Services Testing,"Web services are considered a new paradigm for building software applications that has many advantages over previous paradigms; however, Web services are still not widely used because Service Requesters do not trust Web services that were built by others. Testing can help solve this problem, because it can be used to assess the quality attributes of Web services and hence increase the requesters' trust. This paper proposes an approach that can be used to test the robustness and other related attributes of Web services, and that can be easily extended to assess other quality attributes.
The framework is based on rules for test case generation that are designed by, first, analyzing the WSDL document to determine which faults could affect the robustness quality attribute of Web services and, second, using fault-based testing techniques to detect such faults. A proof-of-concept tool that depends on these rules has been implemented in order to assess the usefulness of the rules in detecting robustness faults in different Web services platforms.",2008,0, 8839,Color correction of multiview video with average color as reference,"When capturing multiview video, there can be significant variations in the color of views captured with different cameras. This negatively affects compression efficiency when multiview video is coded using inter-view prediction. In this paper we propose a method for correcting the color of multiview video sets as a preprocessing step to compression. Unlike previous work where one of the captured views is used as the color reference, we correct all views to match the average color of the set of views. Block-based disparity estimation is used to find matching points between all views in the video set, and the average color is calculated for these matching points. Least squares regressions are used to find functions that will make each view match the average color. Experimental results show that the proposed method results in video sets that closely match in subjective color. Furthermore, when multiview video is compressed with JMVM, the proposed method increases compression efficiency by up to 1.0 dB compared to compressing the original uncorrected video.",2009,0, 8840,An effective schedulability analysis for fault-tolerant hard real-time systems,"We propose a worst-case response time schedulability analysis for fault-tolerant hard real-time systems which takes into account the effects of temporary faults. The major contribution of our approach is to consider the recovery of tasks running at higher priorities. This characteristic is very useful since faulty tasks certainly have a shorter period of time in which to meet their deadlines. Due to its flexibility and simplicity, the proposed approach provides an effective schedulability analysis, where system predictability can be fully guaranteed",2001,0, 8841,Advanced 3b4b channel coding for low error-rate optical links at 2.488 Gbit/s,"We report on the performance of an optimized parallel channel coder for high-speed optical transmission systems. The coding properties are discussed by an evaluation of the signal statistics of the coded pulse train in the time and frequency domain. The discussion is mainly based on the results for the power spectral density (PSD) and the autocorrelation function (ACF). The theoretical investigations have been verified by measurements with a developed 2.488 Gbit/s optical transmission system. Reliability studies have shown a system bit error rate below 10^-13",2001,0, 8842,New Limits on Fault-Tolerant Quantum Computation,"We show that quantum circuits cannot be made fault-tolerant against a depolarizing noise level of θ = (6 − 2√2)/7 ≈ 45%, thereby improving on a previous bound of 50% (due to Razborov, 2004). More precisely, the circuit model for which we prove this bound contains perfect gates from the Clifford group (CNOT, Hadamard, S, X, Y, Z) and arbitrary additional one-qubit gates that are subject to depolarizing noise θ.
We prove that this set of gates cannot be universal for arbitrary (even classical) computation, from which the upper bound on the noise threshold for fault-tolerant quantum computation follows",2006,0, 8843,Sensor Minimization Problems with Static or Dynamic Observers for Fault Diagnosis,"We study sensor minimization problems in the context of fault diagnosis. Fault diagnosis consists of synthesizing a diagnoser that observes a given plant and identifies faults in the plant as soon as possible after their occurrence. Existing literature on this problem has considered the case of static observers, where the set of observable events does not change during execution of the system. In this paper, we consider static as well as dynamic observers, where the observer can switch sensors on or off, thus dynamically changing the set of events it wishes to observe.",2007,0, 8844,Hybrid Solution: A FEC Algorithm for Fault Tolerant Routing in Sensor Networks,"We study the characteristics of wireless sensor networks (WSN) and present in this paper a lightweight FEC coding algorithm combined with a smart fault-tolerant routing scheme. The proposed coding-decoding algorithm is based on the XOR operation and requires very little computation and storage space, which is critical for WSN. Few existing channel coding (FEC) algorithms have been put forward for use in sensor networks, and they are not very suitable due to their high computing, storage and delay costs. Furthermore, normal FEC coding algorithms are not flexible enough to suit the variable states in WSN. We adopt a cross-layer design wherein higher network layers use information about packet loss to adjust the coding level according to the dynamics of the network. Our routing scheme has the ability to discover and select robust paths to reliably relay data packets. Simulation results show that our coding algorithm and self-adaptive routing scheme perform better than existing schemes.",2006,0, 8845,Optical Wireless Communications With Heterodyne Detection Over Turbulence Channels With Pointing Errors,"We study the error performance of a heterodyne differential phase-shift keying (DPSK) optical wireless (OW) communication system operating under various intensity fluctuation conditions. Specifically, it is assumed that the propagating signal suffers from the combined effects of atmospheric turbulence-induced fading, misalignment fading (i.e., pointing errors) and path-loss. Novel closed-form expressions for the statistics of the random attenuation of the propagation channel are derived and the bit-error rate (BER) performance is investigated for all the above fading effects. Numerical results are provided to evaluate the error performance of OW systems in the presence of atmospheric turbulence and/or misalignment. Moreover, nonlinear optimization is also considered to find the optimum beamwidth that achieves the minimum BER for a given signal-to-noise ratio value.",2009,0, 8846,Average error rate of linear diversity reception schemes over generalized gamma fading channels,"We study the performance of M-ary modulation schemes in the presence of additive white Gaussian noise (AWGN) and slow fading. Selection combining (SC), equal gain combining (EGC), and maximal ratio combining (MRC) diversity schemes are considered. The fading channel is modeled by the generalized gamma distribution, which includes the Rayleigh, Nakagami, Weibull, and log-normal distributions as special or limiting cases.
The Suzuki distribution can be adequately approximated by the generalized gamma distribution. The exact average symbol error rates (ASER) for coherent multilevel modulation schemes with SC and MRC are presented by using the moment generating function (MGF) based approach, while that of EGC is obtained by employing a characteristic function (CHF) based approach. The analysis results for the three combiners are compared and discussed. Simulation results are also provided.",2005,0, 8847,Study of fault ride-through for DFIG based wind turbines,"Wind power generation and total capacity have increased dramatically in the last 10 years. Most generators now being installed use doubly-fed induction machines (DFIGs). These allow active and reactive power control through the rotor side converter. Therefore today's wind turbines have a significant impact on the power system. To ensure power quality, several utilities have introduced special grid connection codes for wind farm developers. The requirements range from reactive power control and frequency response to, last but not least, fault ride-through. All these requirements, especially fault ride-through, are a challenge for wind turbine producers. New control strategies and hardware are needed which utilize the flexibility provided by the DFIG converters. This paper outlines the proposed grid codes. It then describes a detailed DFIG model and control strategy. The performance of the new control through severe fault conditions is demonstrated.",2004,0, 8848,Propagation model for estimating VOR bearing error in the presence of windturbines: Hybridization of parabolic equation with physical optics,"Windturbines near a VOR ground station can yield significant bearing errors in the azimuth estimation. We propose a model that combines the parabolic equation and the physical optics approximation to predict these errors. It accounts for possibly hilly terrain and a generic model of windturbines that includes dielectric blades. All the hypotheses made in the model are carefully justified by means of numerical simulations. In a realistic test case, this model is employed to compute the error caused by a complete windfarm located on hilly terrain within acceptable computation time.",2010,0, 8849,A Comparative Study of Voice Over Wireless Networks Using NS-2 Simulation with an Integrated Error Model,"Wireless communication is the fastest growing field, and with the emergence of IEEE 802.11 based devices, wireless access is becoming more popular. Many multimedia applications for IP networks have been developed, and thus the demand for quality of service (QoS) has increased. In this paper our primary objective is to evaluate the 802.11e EDCF framework for video, voice and background traffic all at the same time. Our assessment is based on the E-model and MOS for VoIP, and PSNR for video. We also studied the effects of a random uniform error model on various types of traffic. As expected, wireless networks are more prone to errors than wired networks",2006,0, 8850,Performance analysis of distributed intermittent fault diagnosis in wireless sensor networks using clustering,"Wireless sensor networks (WSNs) have become the monitoring solution for a variety of applications. As one of the key technologies involved in WSNs, node fault detection is indispensable in most WSN applications. Faults occurring in sensor nodes are common due to the sensor device itself and the harsh environment where the sensor nodes are deployed.
The goal of this paper is to locate the faulty sensors in the wireless sensor network. A distributed fault diagnosis (DFD) scheme using clustering is proposed which satisfies three important diagnosis properties: consistency, completeness and accuracy. To evaluate the performance of the proposed algorithm, a comparative analysis and implementation is presented using an existing diagnosis algorithm, the distributed fault diagnosis (DFD) algorithm.",2010,0, 8851,Improving Fault Management Using Voting Mechanism in Wireless Sensor Networks,Wireless Sensor Networks (WSN) have the potential of significantly enhancing our ability to monitor and interact with our physical environment. Realizing a fault management operation is critical to the success of WSN. The main challenge is providing fault-tolerance (FT) while conserving the limited resources of the network. In this work we propose a new method for fault management that performs fault detection and fault recovery in a decentralized manner. Simulation results show that the proposed method has better performance.,2010,0, 8852,A Survey on Fault Tolerant Routing Techniques in Wireless Sensor Networks,"Wireless sensor networks are without a doubt one of the central topics in current research due to the harsh environmental conditions in which such networks can be deployed and their unique sensor network characteristics, specifically limited power supply, sensing, processing and communication capabilities. Given the many challenges and design issues that affect data routing, a fault-tolerant routing protocol becomes essential. In this paper, we summarize and highlight the key ideas of existing fault-tolerant techniques of routing protocols, and survey existing routing protocols proposed to support fault tolerance. Finally, we provide some future research directions in the area of fault tolerance in wireless sensor network routing.",2009,0, 8853,Self-Managed Fault Management in Wireless Sensor Networks,"Wireless sensor networks are usually deployed in harsh operational environments where the physical presence of human administrators is impractical. Applications and systems of these networks are thus expected to operate with minimal aid or supervision. Biologically-inspired behaviors, such as self-healing and self-adaptation, have already been recognized as desirable features for these systems to self-adapt to various unpredicted changes occurring in the environment. In this paper, we address such biological features in terms of fault management. We propose a hierarchical structure to properly distribute fault management tasks among sensor nodes by introducing more 'self-managing' functions. In addition, we also consider an alternate solution to self-reconfigure the fault management functions of sensor nodes, adapting to various system requirements, such as replacement of a faulty node.",2008,0, 8854,A machine-learning-based fault diagnosis approach for intelligent condition monitoring,"We propose a machine-learning-based fault diagnosis approach for condition monitoring of constant-speed rotating machines via vibration signals. There are five main phases in our approach, i.e., vibration signal measurement, discrete-wavelet-transformation-based preprocessing, feature extraction, base-line encoding, and a fuzzy neural network. The advantage of this approach is that it can identify the condition and faults of a machine without requiring extensive diagnosis knowledge.
Experimental results have demonstrated that this approach is a useful tool for condition monitoring applications.",2010,0, 8855,Fault isolation in discrete event systems by observational abstraction,"We propose a method for fault isolation in discrete event systems such as object-oriented control systems, where the observations are the logged error messages. The method is based on automatic abstraction that preserves only the behavior relevant to fault isolation. In this way we avoid the state space explosion, and a model checker can be used to reason about the temporal properties of the system. The result is a fault isolation table that maps possible error logs to isolated faults, and fault isolation thus reduces to table lookup. The fault isolation table can also be used as an analysis tool at the design level to find both faults that cannot be isolated and redundant error messages.",2003,0, 8856,Automating detection of faults in TCP implementations,"We propose a new environment for testing the behavior of TCP. We analyze existing test methodologies and show that there is still a need to reduce the loads imposed on both the worker and the expert. The Auto Detector is extensible and so well supports both the automation of predefined tasks and the communication tools needed to ensure gradual refinement by the participants. To show the effectiveness of our proposal, actual software flaws found in HP/UX and the Windows operating system are taken as examples of how to realize the automation of reproduction and detection tasks; it is flexible enough to resolve real-world issues.",2004,0, 8857,Transient-fault recovery using simultaneous multithreading,"We propose a scheme for transient-fault recovery called Simultaneously and Redundantly Threaded processors with Recovery (SRTR) that enhances a previously proposed scheme for transient-fault detection, called Simultaneously and Redundantly Threaded (SRT) processors. SRT replicates an application into two communicating threads, one executing ahead of the other. The trailing thread repeats the computation performed by the leading thread, and the values produced by the two threads are compared. In SRT, a leading instruction may commit before the check for faults occurs, relying on the trailing thread to trigger detection. In contrast, SRTR must not allow any leading instruction to commit before checking occurs, since a faulty instruction cannot be undone once the instruction commits. To avoid stalling leading instructions at commit while waiting for their trailing counterparts, SRTR exploits the time between the completion and commit of leading instructions. SRTR compares the leading and trailing values as soon as the trailing instruction completes, typically before the leading instruction reaches the commit point. To avoid increasing the bandwidth demand on the register file for checking register values, SRTR uses the register value queue (RVQ) to hold register values for checking. To reduce the bandwidth pressure on the RVQ itself, SRTR employs dependence-based checking elision (DBCE). By reasoning that faults propagate through dependent instructions, DBCE exploits register (true) dependence chains so that only the last instruction in a chain uses the RVQ, and has the leading and trailing values checked. SRTR performs within 1% and 7% of SRT for SPEC95 integer and floating-point programs, respectively.
While SRTR without DBCE incurs about 18% performance loss when the number of RVQ ports is reduced from four (which is performance-equivalent to an unlimited number) to two ports, with DBCE, a two-ported RVQ performs within 2% of a four-ported RVQ.",2002,0, 8858,Efficient Utilization of Error Protection Techniques for Transmission of Data-Partitioned H.264 Video in a Capacity Constrained Network,"We propose an efficient error protection technique for data-partitioned H.264 video in a capacity constrained network. Our scheme maximizes video quality by choosing the optimal point in the application layer and medium access control (MAC) layer redundancy. We have shown that, in a capacity constrained network and highly lossy environment, neither forward error correction (FEC) nor retransmissions alone can result in optimum performance. Instead, it is the combination of these two techniques that effectively reduces the overall loss.",2009,0, 8859,Robust Speech Recognition Using a Cepstral Minimum-Mean-Square-Error-Motivated Noise Suppressor,"We present an efficient and effective nonlinear feature-domain noise suppression algorithm, motivated by the minimum-mean-square-error (MMSE) optimization criterion, for noise-robust speech recognition. Unlike the log-MMSE spectral amplitude noise suppressor proposed by Ephraim and Malah (E&M), our new algorithm aims to minimize the error expressed explicitly for the Mel-frequency cepstra instead of discrete Fourier transform (DFT) spectra, and it operates on the Mel-frequency filter bank's output. As a consequence, the statistics used to estimate the suppression factor become vastly different from those used in the E&M log-MMSE suppressor. Our algorithm is significantly more efficient than E&M's log-MMSE suppressor since the number of channels in the Mel-frequency filter bank is much smaller (23 in our case) than the number of bins (256) in the DFT. We have conducted extensive speech recognition experiments on the standard Aurora-3 task. The experimental results demonstrate a reduction of the recognition word error rate by 48% over the standard ICSLP02 baseline, 26% over the cepstral mean normalization baseline, and 13% over the popular E&M log-MMSE noise suppressor. The experiments also show that our new algorithm performs slightly better than the ETSI advanced front end (AFE) on the well-matched and mid-mismatched settings, and has 8% and 10% fewer errors than our earlier SPLICE (stereo-based piecewise linear compensation for environments) system on these settings, respectively.",2008,0, 8860,Behavioral fault simulation: implementation and experimental results,"We present an original approach for performing Behavioral Fault Simulation (BFS). This approach involves three main steps: (i) the definition of an internal modeling of behavioral descriptions, and the determination of a Fault Model; (ii) the definition of a fault simulation technique; (iii) the implementation of this technique. We give in this paper a description of the BFS software implementation. We point out how object-oriented programming has been used for defining an evolvable and efficient tool. Finally, this paper deals with experiments conducted on ITC'99 benchmarks in order to validate a VHDL behavioral fault simulator (BFS).
The effectiveness of the BFS software is clearly demonstrated through the obtained results.",2002,0, 8861,Experiences with EtheReal: a fault-tolerant real-time Ethernet switch,"We present our experiences with the implementation of a real-time Ethernet switch called EtheReal. EtheReal provides three innovations for real-time traffic over switched Ethernet networks. First, EtheReal delivers connection-oriented hard bandwidth guarantees without requiring any changes to the end-host operating system and network hardware/software. For ease of deployment by commercial vendors, EtheReal is implemented in software over Ethernet switches, with no special hardware requirements. QoS support is contained within two modules, switches and end-host user-level libraries that expose a socket-like API to real-time applications. Secondly, EtheReal provides automatic fault detection and recovery mechanisms that operate within the constraints of a real-time network. Finally, EtheReal supports server-side push applications with a guaranteed bandwidth link-layer multicast scheme. Performance results from the implementation show that EtheReal switches deliver bandwidth guarantees to real-time applications within 0.6% of the contracted value, even in the presence of interfering best-effort traffic between the same pair of communicating hosts.",2001,0, 8862,Multi-classifier fusion approach based on data clustering for analog circuits fault diagnosis,"When there are a large number of fault classes in analog circuits, a single multi-class classifier normally cannot achieve satisfactory diagnosis accuracy because of its difficult training process. A multi-classifier fusion diagnosis approach based on data clustering is presented in this paper to improve fault diagnosis accuracy. After extracting fault feature vectors by wavelet transform, the fuzzy C-means clustering algorithm is used to pre-partition the feature space into multiple sub-class groups as a binary tree. According to the structure of the fault tree, multi-classifiers are created to form a hierarchical diagnosis system. Simulation experiments demonstrate that the proposed approach for analog circuit fault diagnosis is superior to conventional ones. The fault diagnosis accuracy is greater than 98%. It has good performance in tackling a large number of fault classes in analog circuits.",2009,0, 8863,Perceptually optimized error resilient transcoding using attention-based intra refresh,"While deployment of wireless channels has become widespread and fast-growing for mobile applications, transmitting data over these existing error-prone networks can be very unreliable and challenging due to time-varying interference and channel errors. Many error-resilient algorithms have been proposed to provide adequate resilient features in order to protect video data from channel errors. However, these algorithms often aim to achieve the optimal decoded video quality in terms of mean square error without any consideration for the visual quality. In this paper, we present a perceptually error-resilient method for video transcoding based on the attention-based intra refresh technique and the characteristics of the human visual system to enhance the perceptual performance of the transcoded video. Specifically, the foveated just noticeable distortion and visual attention models are employed to estimate the perceptual loss impact due to error propagation for allocating intra-refreshed macroblocks in the transcoded video.
Experimental results show that the proposed method can achieve much better performance than the existing methods in terms of both the visual quality and the perceptual quality measure.",2010,0, 8864,On the Use of Bloom Filters for Defect Maps in Nanocomputing,"While the exact manufacturing process for nanoscale computing devices is uncertain, it is abundantly clear that future technology nodes will see an increase in defect rates. Therefore, it is of paramount importance to construct new architectures and design methodologies that can tolerate large numbers of defects. Defect maps are a necessity in future design flows, and research on their practical construction is essential. In this work, we study the use of Bloom filters as a data structure for defect maps. We show that Bloom filters provide the right tradeoff between accuracy and space-efficiency. In particular, they can help simplify the nanosystem design flow by embedding defect information within the nanosystem delivered by the manufacturers. We develop a novel nanoscale memory design that uses this concept. It does not rely on a voting strategy, and utilizes the device redundancy more effectively than existing approaches.",2006,0, 8865,Design diversity for concurrent error detection in sequential logic circuits,"We present a technique using diverse duplication to implement concurrent error detection (CED) in sequential logic circuits. We examine three different approaches for this purpose: (1) identical state encoding of the two sequential logic implementations, duplication of flip-flops, diverse implementation of the combinational logic part (output logic and next-state logic) and comparators on flip-flop outputs and primary outputs; (2) diverse state encoding of the two implementations, duplication of flip-flops, diverse combinational logic implementation and comparators on primary outputs only; and (3) identical state encoding, parity prediction for the flip-flops, diverse combinational logic implementation, comparators on primary outputs and parity checkers on flip-flop outputs. Our results for the simulated sequential benchmark circuits demonstrate that the third approach is most efficient in protecting sequential logic circuits against multiple and common-mode failures. The computational complexity of the data integrity analysis of the third approach is of the same order as that of the first approach and is at least an order of magnitude less than that of the second approach.",2001,0, 8866,Distributed construction of a fault-tolerant network from a tree,"We present an algorithm by which nodes arranged in a tree, with each node initially knowing only its parent and children, can construct a fault-tolerant communication structure (an expander graph) among themselves in a distributed and scalable way. The tree overlayed with this logical expander is a useful structure for distributed applications that require the intrinsic ""treeness"" from the topology but cannot afford any obstruction in communication due to failures. At the core of our construction is a novel distributed mechanism that samples nodes uniformly at random from the tree. In the event of node joins, node departures or node failures, the expander maintains its own fault tolerance and permits the reformation of the tree.
We present simulation results to quantify the convergence of our algorithm to a fault-tolerant network having both good vertex connectivity and expansion properties.",2005,0, 8867,An approach for analysing and improving fault tolerance in radio architectures,"We present an approach for analysing and improving fault-tolerance aspects in radio architectures. This is a necessary step to be taken in order to implement reliable radio systems in future nanoscale technologies. We present the problem formulation, optimisation approach and implementation methodology. We add fault tolerance at the architecture level by taking advantage of existing parallel structures and using a spare module approach in order to minimise the hardware overhead needed. These issues have been analysed and demonstrated using two radio case studies: a UMTS MIMO receiver and a GSM diversity receiver.",2006,0, 8868,Research on non-communication protection of distribution lines based on fault components,"When distribution power lines run normally, there are asymmetrical current and voltage components, including negative sequence and zero sequence components. Consequently, non-communication protection based on sequence components might mal-operate or mis-operate in such cases. This paper proposes a new non-communication protection scheme based on fault components for distribution lines. The protection can be applied to different neutral grounding modes, including directly and indirectly grounded neutral systems, and is not affected by the asymmetrical components present in normal operation. According to the fault components (or the incremental quantities of the three sequence components), it ensures that the breaker at one end of the faulted line trips as quickly as possible and the breaker at the other end is accelerated to trip. Simulation shows that the protection scheme is both correct and effective.",2002,0, 8869,On the monitoring of the defects of squirrel cage induction motors,"Since the electric motor is an important element in industrial processes in terms of safety and efficiency, early fault detection is necessary. The earlier an incipient fault is detected, the easier and cheaper it is to remedy. Monitoring is performed through spectral analysis of the stator current. Usually the fast Fourier transform is used. However, its appeal is waning in comparison with on-line methods like the discrete Fourier transform or the sliding Hartley transform. Such a technique is advisable in order to have an updated spectrum at each sampling time. This is the best method to reveal the sidebands in the stator current when an incipient fault occurs while the motor is operating. Another way is to perform a time-frequency analysis. In this case, we can discern the instant at which the fault appears, again through spectral analysis of the stator current of the induction motor. Experimental results show the efficiency of the presented method.",2003,0, 8870,Petri nets and mobile agent composed fault unit lock scheme,"When protection information is lost or a protection device operates incorrectly in the power system, fault isolation at present still depends on backup protection, with prolonged clearing time and an extended tripped area. This paper uses Petri nets as a tool for the synthetic analysis of protection information and mobile agents as a distributed intelligence tool, and finds a better solution to the above problem.
The substation-level central system integrates both the distributed protection information and the dynamic information along with the protection acting process, and then dispatches mobile agents to carry out task download and logic calculation at the correlated distributed nodes. When further information is needed, it activates the sequence trip to carry out dynamic locking of the fault area, and finally completes the fault isolation based on the smallest unit.",2008,0, 8871,Observer-based fault diagnosis of power electronics systems,"We propose a fault diagnosis method for power electronics systems that extends classical observer-based fault-sensitive detection filters for linear time-invariant systems to switched-linear systems commonly encountered in power electronics. The result is a piecewise-linear detection filter, which, in the absence of faults, works the same way as an observer: it predicts the system states exactly. If a fault occurs, the state predicted by the filter differs from the true state of the system, and by appropriately choosing the filter gain, the filter residual has certain geometrical characteristics that make the fault identifiable. An experimental platform to verify the feasibility of the proposed method is presented along with simulation and experimental results illustrating the feasibility and effectiveness of the method.",2010,0, 8872,On The Generalization of Error-Correcting WOM Codes,"WOM (write once memory) codes are codes for efficiently storing and updating data in a memory whose state transition is irreversible. Storage media that can be classified as WOM include flash memories, optical disks and punch cards. Error-correcting WOM codes can correct errors besides their regular data updating capability. They are increasingly important for electronic memories using MLCs (multi-level cells), where the stored data are prone to errors. In this paper, we study error-correcting WOM codes that generalize the classic models. In particular, we study codes for jointly storing and updating multiple variables (instead of one variable) in WOMs with multi-level cells. The error-correcting codes we study here are also a natural extension of the recently proposed floating codes. We analyze the performance of the generalized error-correcting WOM codes and present several bounds. The number of valid states for a code is an important measure of its complexity. We present three optimal codes for storing two binary variables in n q-ary cells, where n = 1, 2, 3, respectively. We prove that among all the codes with the minimum number of valid states, the three codes maximize the total number of times the variables can be updated.",2007,0, 8873,Defects detection based on principal component analyses and support vector machines,"Wood is used in many fields. The appearance of wood is important for the quality of wood products. In this paper, we present an image series fusion method based on principal component analysis and recognize the defects by support vector machines. We select the histogram of the feature image as the feature vector, and send it to support vector machines for recognition and classification. The results show that this method can fuse the image series and detect the defects.",2010,0, 8874,Detection and Classification of Wood Defects by ANN,"X-ray as a method of measurement was adopted to detect wood defects nondestructively.
Because the intensity of the X-rays crossing the object changes, defects in wood were detected from differences in the X-ray absorption parameter, and a computer was therefore used to process and analyze the image. On the basis of image processing for nondestructive testing and feature construction, a mathematical model of defects was established using characteristic parameters. According to the signal characteristics of nondestructive testing, artificial neural networks were set up. A BP network model was adopted to recognize all the characteristic parameters that reflected the characters of wood defects. The BP network used the coefficient matrices of each unit, including the input layer, intermediate (hidden) layer and output layer, to model the input vector and complete network recognition through network learning. The test results show that the method is very successful for the detection and classification of wood defects.",2006,0, 8875,Empirical cupping correction for CT scanners with tube voltage modulation (ECCU),"X-ray CT measures the attenuation of polychromatic x-rays through an object of interest. The CT data acquired are the negative logarithm of the relative x-ray intensity behind the patient. These data must undergo water precorrection to linearize the measured data and convert them into line integrals through the patient that can be reconstructed to yield the final CT image. The function to linearize the measured projection data depends on the tube voltage U. In most circumstances, CT scans are carried out with a constant tube voltage. For those cases there are dozens of different techniques to carry out water precorrection. In our case the tube voltage is instead modulated as a function of the object. We propose an empirical cupping correction (ECCU) algorithm to correct for CT cupping artifacts that are induced by non-linearities in the projection data. The method is rawdata-based, empirical and requires knowledge of neither the x-ray spectrum nor the attenuation coefficients. It aims at linearizing the attenuation data using a precorrection function of polynomial form in the polychromatic attenuation data q and in the tube voltage U. The coefficients of the polynomial are determined once using a calibration scan of a homogeneous phantom. Computing the coefficients is done in the image domain by fitting a series of basis images to a template image. The template image is obtained directly from the uncorrected phantom image and no assumptions on the phantom size or its positioning are made. Rawdata are precorrected by passing them through the once-determined polynomial. Numerical examples are shown to demonstrate the quality of the precorrection. ECCU succeeds in removing the cupping artifacts and obtaining well-calibrated CT values. A combination of ECCU with analytical techniques yielding a hybrid cupping correction method is possible and allows for channel-dependent correction functions.",2009,0, 8876,Offline analysis techniques for the improvement of defect inspection recipes,"Yield enhancement techniques for the latest generation of devices need sensitive inspection recipes in order to detect the ever-smaller defects that can result in yield loss. Offline analysis techniques (using MATLAB, for example) for the improvement of bright-field defect-inspection tool recipes are presented. Simple techniques are given for the rapid incorporation or modification of care-areas/don't-care areas into pre-existing recipes.
Postprocessing analyses of defect data are presented to show their efficacy in improving the signal-to-noise ratio for defects that might otherwise be hidden in the noise created by 'nuisance' defects. Examples are presented to show how design-databases and reticle inspection data can be harnessed in understanding defect mechanisms.",2000,0, 8877,Software Fault Prediction using Language Processing,"Accurate prediction of faulty modules reduces the cost of software development and evolution. Two case studies with a language-processing based fault prediction measure are presented. The measure, referred to as a QALP score, makes use of techniques from information retrieval to judge software quality. The QALP score has been shown to correlate with human judgements of software quality. The two case studies consider the measure's application to fault prediction using two programs (one open source, one proprietary). Linear mixed-effects regression models are used to identify relationships between defects and QALP score. Results, while complex, show that little correlation exists in the first case study, while statistically significant correlations exist in the second. In this second study, the QALP score is helpful in predicting faults in modules (files), with its usefulness growing as module size increases.",2007,1, 8878,Merits of using repository metrics in defect prediction for open source projects,"Many corporate code developers are the beta testers of open source software. They continue testing until they are sure that they have a stable version to build their code on. In this respect defect predictors play a critical role in identifying defective parts of the software. Performance of a defect predictor is determined by correctly finding defective parts of the software without giving any false alarms. Having high false alarms means testers/developers would inspect bug-free code unnecessarily. Therefore, in this research we focused on decreasing the false alarm rates by using repository metrics. We conducted experiments on the data sets of the Eclipse project. Our results showed that repository metrics decreased the false alarm rates on average from 32% to 23%, corresponding to up to 907 fewer files to inspect.",2009,1, 8879,The effect of granularity level on software defect prediction,"Application of defect predictors in software development helps managers to allocate their resources, such as time and effort, more efficiently and cost-effectively to test certain sections of the code. In this research, we have used a naive Bayes classifier (NBC) to construct our defect prediction framework. Our proposed framework uses the hierarchical structure information about the source code of the software product to perform defect prediction at the functional method level and source file level. We have applied our model on the SoftLAB and Eclipse datasets. We have measured the performance of our proposed model and applied cost-benefit analysis. Our results reveal that source file level defect prediction improves the verification effort, while decreasing the defect prediction performance in all datasets.",2009,1, 8880,Exploratory study of a UML metric for fault prediction,"This paper describes the use of a UML metric, an approximation of the CK-RFC metric, for predicting faulty classes before their implementation. We built a code-based prediction model of faulty classes using Logistic Regression. Then, we tested it in different projects, using on the one hand their UML metrics, and on the other hand their code metrics.
To decrease the difference of values between UML and code measures, we normalized them using Linear Scaling to Unit Variance. Our results indicate that the proposed UML RFC metric can predict faulty code as well as its corresponding code metric does. Moreover, the normalization procedure used was of great utility, not just for enabling our UML metric to predict faulty code, using a code-based prediction model, but also for improving the prediction results across different packages and projects, using the same model.",2010,1, 8881,An extensive comparison of bug prediction approaches,"Reliably predicting software defects is one of software engineering's holy grails. Researchers have devised and implemented a plethora of bug prediction approaches varying in terms of accuracy, complexity and the input data they require. However, the absence of an established benchmark makes it hard, if not impossible, to compare approaches. We present a benchmark for defect prediction, in the form of a publicly available data set consisting of several software systems, and provide an extensive comparison of the explanatory and predictive power of well-known bug prediction approaches, together with novel approaches we devised. Based on the results, we discuss the performance and stability of the approaches with respect to our benchmark and deduce a number of insights on bug prediction models.",2010,1, 8882,An empirical evaluation of fault-proneness models,"Planning and allocating resources for testing is difficult and it is usually done on an empirical basis, often leading to unsatisfactory results. The possibility of early estimation of the potential faultiness of software could be of great help for planning and executing testing activities. Most research concentrates on the study of different techniques for computing multivariate models and evaluating their statistical validity, but we still lack experimental data about the validity of such models across different software applications. The paper reports on an empirical study of the validity of multivariate models for predicting software fault-proneness across different applications. It shows that suitably selected multivariate models can predict fault-proneness of modules of different software packages.",2002,1, 8883,Predicting fault incidence using software change history,"This paper is an attempt to understand the processes by which software ages. We define code to be aged or decayed if its structure makes it unnecessarily difficult to understand or change, and we measure the extent of decay by counting the number of faults in code in a period of time. Using change management data from a very large, long-lived software system, we explore the extent to which measurements from the change history are successful in predicting the distribution over modules of these incidences of faults. In general, process measures based on the change history are more useful in predicting fault rates than product metrics of the code: for instance, the number of times code has been changed is a better indication of how many faults it will contain than is its length. We also compare the fault rates of code of various ages, finding that if a module is, on the average, a year older than an otherwise similar module, the older module will have roughly a third fewer faults.
Our most successful model measures the fault potential of a module as the sum of contributions from all of the times the module has been changed, with large, recent changes receiving the most weight.",2000,1, 8884,Fault Prediction using Early Lifecycle Data,"The prediction of fault-prone modules in a software project has been the topic of many studies. In this paper, we investigate whether metrics available early in the development lifecycle can be used to identify fault-prone software modules. More precisely, we build predictive models using the metrics that characterize textual requirements. We compare the performance of requirements-based models against the performance of code-based models and models that combine requirement and code metrics. Using a range of modeling techniques and the data from three NASA projects, our study indicates that the early lifecycle metrics can play an important role in project management, either by pointing to the need for increased quality monitoring during the development or by using the models to assign verification and validation activities.",2007,1, 8885,Local and Global Recency Weighting Approach to Bug Prediction,"Finding and fixing software bugs is a challenging maintenance task, and a significant amount of effort is invested by software development companies on this issue. In this paper, we use the Eclipse project's recorded software bug history to predict the occurrence of future bugs. The history contains information on when bugs have been reported and subsequently fixed.",2007,1, 8886,Using the Conceptual Cohesion of Classes for Fault Prediction in Object-Oriented Systems,"High cohesion is a desirable property of software as it positively impacts understanding, reuse, and maintenance. Currently proposed measures for cohesion in Object-Oriented (OO) software reflect particular interpretations of cohesion and capture different aspects of it. Existing approaches are largely based on using the structural information from the source code, such as attribute references, in methods to measure cohesion. This paper proposes a new measure for the cohesion of classes in OO software systems based on the analysis of the unstructured information embedded in the source code, such as comments and identifiers. The measure, named the Conceptual Cohesion of Classes (C3), is inspired by the mechanisms used to measure textual coherence in cognitive psychology and computational linguistics. This paper presents the principles and the technology that stand behind the C3 measure. A large case study on three open source software systems is presented which compares the new measure with an extensive set of existing metrics and uses them to construct models that predict software faults. The case study shows that the novel measure captures different aspects of class cohesion compared to any of the existing cohesion measures. In addition, combining C3 with existing structural cohesion metrics proves to be a better predictor of faulty classes when compared to different combinations of structural cohesion metrics.",2008,1, 8887,Use of relative code churn measures to predict system defect density,"Software systems evolve over time due to changes in requirements, optimization of code, fixes for security and reliability bugs, etc. Code churn, which measures the changes made to a component over a period of time, quantifies the extent of this change.
We present a technique for early prediction of system defect density using a set of relative code churn measures that relate the amount of churn to other variables such as component size and the temporal extent of churn. Using statistical regression models, we show that while absolute measures of code churn are poor predictors of defect density, our set of relative measures of code churn is highly predictive of defect density. A case study performed on Windows Server 2003 indicates the validity of the relative code churn measures as early indicators of system defect density. Furthermore, our code churn metric suite is able to discriminate between fault-prone and not fault-prone binaries with an accuracy of 89.0 percent.",2005,1, 8888,Thermoreflectance imaging of defects in thin-film solar cells,We have identified and characterized various defects in thin-film a-Si and CIGS solar cells with sub-micron spatial resolution using thermoreflectance imaging. A megapixel silicon-based CCD was used to obtain noncontact thermal images simultaneously with visible electroluminescence (EL) images. EL can be indicative of pre-breakdown sites due to trap-assisted tunneling and stress-induced leakage currents. Physical defects appear at reverse bias voltages of 8 V in a-Si samples. Linear and nonlinear shunt defects are investigated as well as electroluminescent breakdown regions at reverse biases as low as 4.5 V. Pre-breakdown sites with electroluminescence are investigated.,2010,0, 8889,Empirical Analysis of Software Fault Content and Fault Proneness Using Bayesian Methods,"We present a methodology for Bayesian analysis of software quality. We cast our research in the broader context of constructing a causal framework that can include process, product, and other diverse sources of information regarding fault introduction during the software development process. In this paper, we discuss the aspect of relating internal product metrics to external quality metrics. Specifically, we build a Bayesian network (BN) model to relate object-oriented software metrics to software fault content and fault proneness. Assuming that the relationship can be described as a generalized linear model, we derive parametric functional forms for the target node conditional distributions in the BN. These functional forms are shown to be able to represent linear, Poisson, and binomial logistic regression. The models are empirically evaluated using a public domain data set from a software subsystem. The results show that our approach produces statistically significant estimations and that our overall modeling method performs no worse than existing techniques.",2007,1, 8890,Predicting Faults in High Assurance Software,"Reducing the number of latent software defects is a development goal that is particularly applicable to high assurance software systems. For such systems, the software measurement and defect data is highly skewed toward the not-fault-prone program modules, i.e., the number of fault-prone modules is relatively very small. The skewed data problem, also known as class imbalance, poses a unique challenge when training a software quality estimation model. However, practitioners and researchers often build defect prediction models without regard to the skewed data problem. In high assurance systems, the class imbalance problem must be addressed when building defect predictors.
This study investigates the roughly balanced bagging (RBBag) algorithm for building software quality models with data sets that suffer from class imbalance. The algorithm combines bagging and data sampling into one technique. A case study of 15 software measurement data sets from different real-world high assurance systems is used in our investigation of the RBBag algorithm. Two commonly used classification algorithms in the software engineering domain, Naive Bayes and C4.5 decision tree, are combined with RBBag for building the software quality models. The results demonstrate that defect prediction models based on the RBBag algorithm significantly outperform models built without any bagging or data sampling. The RBBag algorithm provides the analyst with a tool for effectively addressing class imbalance when training defect predictors during high assurance software development.",2010,1, 8891,A self-optimization of the fault management strategy for device software,"With the growth of network technologies, the abundance of network resources, and the increase of various services, mobile devices have gained much functionality and intelligence. At the same time, mobile devices are becoming complicated and many software-related problems appear. The traditional remote repair method requires the software providers to supply fault information with a corresponding repair strategy. It is inconvenient for users when the sold mobile devices have software faults. However, it is impossible for the manufacturers to supply all the fault information and repair strategies before selling them. So far, no method has been given to collect repair strategies from the sold mobile devices and optimize the self-repair strategy. In this paper, we propose a self-optimization method to learn the software repair strategy from the sold mobile devices and to optimize the self-repair strategy based on the Open Mobile Alliance (OMA) Device Management (DM) standard. The managed objects (MOs) are defined for collecting the strategy data and the self-optimization algorithm is proposed and implemented at the central server.",2009,0, 8892,Testing of hard faults in simultaneous multi-threaded processors,"With the increasing circuit complexity and aggressive technology scaling, faults such as dielectric, conductor, and metallization failures are becoming more common. Traditional stuck-at testing does not detect these types of faults because these faults may be dormant and need to be stressed to manifest as ""fails"" (during burn-in). As voltage scaling and power consumption reduce the effectiveness of burn-in, these faults will cause failures during the useful life of the part. We propose the use of a test thread on simultaneous multi-threaded processors, to provide a means of detection of lifetime failures. In this initial study, the test thread is allowed to compete with executing programs for processor resources. When the system has only one workload thread, the test thread is able to execute with no significant impact on the workload thread. When the system has many active workload threads, the test thread has a somewhat larger impact on the execution time of the workload threads.",2004,0, 8893,An advanced methodology for the reporting of original adders for process tool defect monitoring,"With the increasing cost of bare silicon wafers, many IC manufacturers are choosing to run fewer and fewer process tool qualifications.
This decision places an increasingly larger number of production wafers at risk from process tool excursions as well as an increasingly high demand on the dependability and consistency of the qualification data. In this paper, we describe our methodology of determining only original adders in process tool qualification. The importance of reporting only added defects to the process tool control chart was justified by showing a correlation between inline yield-enhancement charts and un-patterned tool qualifications. In the past, it has been very hard to correlate un-patterned charts to inline yield-limiting charts and also to final yield.",2005,0, 8894,Applying object-orientation and IEC 61850 standard to architecture design of power system fault information processing,"With the maturing of IEC 61850, utilities are beginning to implement substation automation systems (SAS) that are based on this new international standard. This paper describes such an implementation study of a power system fault information processing system for the East China electric power group in China. In particular, it presents the idea of applying object-oriented methodology to the architecture design and providing an open interface to IEC 61850. Based on this idea and technique, several benefits are obtained.",2004,0, 8895,Similarity-Based Bayesian Learning from Semi-structured Log Files for Fault Diagnosis of Web Services,"With the rapid development of the XML language, which has good flexibility and interoperability, more and more log files of software running information are represented in XML format, especially for Web services. Fault diagnosis by analyzing semi-structured, XML-like log files is becoming an important issue in this area. For most related learning methods, there is a basic assumption that training data should be in an identical structure, which does not hold in many situations in practice. In order to learn from training data in different structures, we propose a similarity-based Bayesian learning approach for fault diagnosis in this paper. Our method is to first estimate similarity degrees of structural elements from different log files. Then the basic structure of a combined Bayesian network (CBN) is constructed, and the similarity-based learning algorithm is used to compute probabilities in the CBN. Finally, test log data can be classified into possible fault categories based on the generated CBN. Experimental results show our approach outperforms other learning approaches on those training datasets which have different structures.",2010,0, 8896,Method to minimize dose for CT attenuation correction in SPECT,"With the recent introduction of hybrid SPECT/CT systems with diagnostic multi-slice CT (Computed Tomography) capabilities, the CT dose delivered to the patient may become an issue, as attenuation compensation in SPECT using co-registered CT data becomes more common in clinical practice. In general, any CT data can be converted to a volume of linear attenuation coefficients (LAC), also referred to as the ""attenuation map"", or mu-map. We investigated the range of CT settings that minimize the dose to the patient and allow for CT image quality that is appropriate for attenuation correction. SPECT/CT registration issues are outside the scope of this work, and we assume that the objects are perfectly registered. Various phantoms are used to analyze both CT and attenuation-corrected SPECT image quality, where the CT data itself is used to generate the mu-maps.
Conclusion: The effective dose to the patient from a CT scan for the purpose of SPECT AC can be lowered to at least 30% to 50% without affecting the image quality of the SPECT AC, using CT-derived mu-maps, if an appropriate scan protocol with a very smooth (MTF at 50%: 1.0 lp/cm) CT reconstruction kernel is used. A further clinical investigation is needed to confirm these findings in clinical practice.",2004,0, 8897,Solving Reliable Coverage In Fault Tolerant Energy Efficient Wireless Sensor Network,"With the strong push from applications, the fault tolerance and energy efficiency of wireless sensor networks have gradually become hot topics. Based on the heuristic method used to generate minimum covers, a novel scheme, namely the reliable coverage scheme, is proposed to solve the reliable coverage problem within the cluster in a hierarchically structured network. Simulations show that this scheme is able to efficiently utilize energy to prolong network lifetime while at the same time keeping missed monitoring at a low level.",2006,0, 8898,Research on low voltage ride through technology for symmetrical grid fault,"With wind power installation capacity sharply increasing, the effects of wind farms on regional power system stability become very significant. Accordingly, more and more research is focused on the low voltage ride through (LVRT) technology used for DFIGs under short-time grid faults, and new power grid regulations require the power system to achieve LVRT capability, but no related criterion exists in China yet. First, the requirements and application of LVRT are introduced, and the electromagnetic relations of the DFIG stability factors under a symmetrical grid fault are analyzed. Then, based on the power system simulation software platform PSCAD/EMTDC, an LVRT simulation experiment applying the crowbar protection method during a grid voltage dip is performed. From this experiment, the performance characteristics of the DFIG are analyzed and some helpful conclusions are drawn. The simulation experiment validates the availability of the proposed electromagnetic relations, the remarkable effect of the improved crowbar, and the accuracy of the simulation model.",2009,0, 8899,A Sun Tracking Error Monitor for Photovoltaic Concentrators,"With today's PV markets bogged down by the shortage of solar-grade silicon, a handful of start-ups as well as well-established manufacturers try to take advantage of the situation and steadily stride towards the commercialisation of photovoltaic concentration technologies. To aid the completion of their ongoing development cycles, and implement production automation and quality control processes, specific instrumentation and machinery is to be developed. Assessment of sun tracking accuracy should not be overlooked, especially by those players raising very high concentration concepts over the 100X frontier. Some analyses point out that the acceptance angle of present designs in concentration optics may be overestimated even from a theoretical point of view, which, added to the still uncertain acceptance angle losses inflicted on the overall system by mass assembly processes, may finally shrink the allowable tolerance and divert the entire burden to the tracking accuracy.
Instrumentation for the monitoring of sun tracking operative performance, providing enough sensitivity to gauge the sub-degree accuracy ranges required by high concentration systems, is therefore needed, and the technical feasibility of this proposal is proven here based on state-of-the-art solid-state image sensors.",2006,0, 8900,Efficient fault simulation techniques and test configuration generation for embedded FPGAs,"With today's system-on-a-chip (SOC) technology, BIST-based techniques are the best solution for the testing of embedded FPGAs with low controllability and observability. In the past, test configurations were usually derived manually, and an efficient fault simulator to evaluate the resulting fault coverage was still lacking. Based on the BIST-based structure, an efficient fault simulator (FPGAsim) for FPGAs is proposed to alleviate this. The fault models can be updated by using a script file, as can the FPGA dimensions. Therefore, the flexibility of FPGAsim is very high. Owing to the regular structure during the BIST sessions, the simulation complexity can be reduced from O(N^2) to O(1). That is, the simulation complexity is independent of the size of an FPGA. The fault simulator proposed above is also helpful for solving the following problems. 1) Given a set of target fault models, generate the required test configurations with 100% fault coverage. 2) Given a set of target fault models and a test length constraint, generate the test configurations with the highest fault coverage. 3) Set the priority of test configurations such that test length/test time can be reduced. In other words, FPGAsim can be used for the generation of optimal test configurations.",2003,0, 8901,Feature-Based Fault Detection Approaches,"With the increasing complexity of systems, it is becoming more and more time-consuming and difficult to achieve reliable fault detection strategies. Using model-based methods requires detailed knowledge about the system's behavior and seems in some cases successful in theory but inapplicable in real time due to high computation requirements. In this contribution, an idea and algorithm for a feature-based fault detection approach are proposed. The main idea of this approach is to detect and identify faults in a complex system without any kind of modeling. By extracting features from relevant sensor signals, yielded from hardware-in-the-loop simulations, and combining them in a matrix, it is possible for a human operator to denote subsets of the matrix as fault-free and faulty areas. An advantage is the ability to set individual thresholds for the subsets, giving more robustness towards false alarms and a possibility of relating individual subsets to relevant faults. From this, it will be shown that identification of faults is possible. In order to achieve a fault detection and identification ability, it is necessary to implement the faults of interest in a test rig and conduct hardware-in-the-loop simulations. The raw data from fault-free and faulty simulations are used in the training of the matrix, and the algorithm detects and identifies the faults in a robust way. The results are compared to a classical fault detection method that uses fixed thresholds.
It will be shown how a sensor bias fault and a pressure relief valve fault are detected and identified.",2006,0, 8902,Characterization of a Fault-tolerant NoC Router,"With increasing reliability concerns for current and next generation VLSI technologies, fault-tolerance is fast becoming an integral part of system-on-chip (SoC) and multi-core architectures. Another concern for these architectures is increasing global wire lengths with associated issues, leading to network-on-chips (NoC) becoming standard for on-chip global communication. We recognize these issues and present an on-chip generic fault-tolerant routing algorithm. The microarchitecture of a NoC router implementing the proposed routing algorithm for a k-ary 2-cube topology is provided. The proposed router works in two phases. In the first phase, the network is explored for an existing path between source-destination pairs after reset or during system reconfiguration after fault detection. Existing paths are cached and used in the second phase of data communication during normal system operation. The presented router architecture also proposes a concept of dynamic multiplexing of virtual channels on physical channels to efficiently utilize physical channel bandwidth. The above approaches complement each other and, when combined together, result in an efficiently realizable high-performance NoC fault-tolerant router. An implementation characterization of this k-ary 2-cube torus router in terms of area, power and critical path delay in IBM Cu-08 technology is presented, along with bandwidth and latency characterization for relevant cases.",2007,0, 8903,Network intrusion and fault detection: a statistical anomaly approach,"With the advent and explosive growth of the global Internet and electronic commerce environments, adaptive/automatic network/service intrusion and anomaly detection in wide area data networks and e-commerce infrastructures is fast gaining critical research and practical importance. We present and demonstrate the use of a general-purpose hierarchical multitier multiwindow statistical anomaly detection technology and system that operates automatically, adaptively, and proactively, and can be applied to various networking technologies, including both wired and wireless ad hoc networks. Our method uses statistical models and multivariate classifiers to detect anomalous network conditions. Some numerical results are also presented that demonstrate that our proposed methodology can reliably detect attacks with traffic anomaly intensity as low as 3-5 percent of the typical background traffic intensity, thus promising to generate an effective early warning.",2002,0, 8904,Fault-tolerant control system research based on dual CPU redundancy for high-voltage inverter,"With the deepening of China's reform and the rapid development of economic construction, the needs for energy saving and environmental protection are growing significantly. As the optimal method of induction motor control, high-voltage frequency conversion technology has been widely used. In some industries, the reliability of high-voltage high-frequency devices is a concern. The purpose of this paper is to provide a multi-CPU redundancy, fault-tolerant design for the high-voltage inverter, to enhance its reliability. With this design, the high-voltage inverter control system can automatically switch between the main and standby CPUs without stopping the motor or affecting the operation.
The situation in which the frequency cannot be adjusted or the motor must be stopped due to errors can be avoided. This improves the continuous operation and reliability of the control system and the whole system.",2010,0, 8905,A novel insulation on-line monitoring and fault diagnosis system used for traction substation,"With the development of high-speed railways, there are higher requirements to ensure the reliability of the traction power supply system. In this paper, a novel insulation on-line monitoring and fault diagnosis system based on the embedded Linux operating system is introduced. The system can continuously monitor the insulation state of the power instruments, such as traction transformers, arresters, circuit breakers, and insulators. Different monitoring techniques were applied in the system. The embedded Linux operating system with the TCP/IP protocol gives the field monitoring devices powerful data processing and networking functions. The monitoring system builds a good basis for traction substation insulation on-line monitoring and diagnosis.",2002,0, 8906,Infrared technology in the fault diagnosis of substation equipment,"With the development of infrared technology and its further application in electric power systems, it plays a more and more important role in electrical equipment fault diagnosis. Improving the accuracy of infrared diagnosis technology and its application effect is of great practical value to research on infrared diagnosis application technology. From the viewpoint of daily electric power system patrols, the paper explains how to diagnose the most common radiation faults and troubles using infrared imaging equipment, the operation process of obtaining infrared images of electrical equipment, and the analysis of infrared images. In addition, the paper presents a series of management methods associated with daily infrared diagnosis work.",2008,0,5407 8907,"A Lightweight, Fault-Tolerant, Load Balancing Service Discovery and Invocation Algorithm for Pervasive Computing Environment","With the development of the related technologies and the increasing applications of ad hoc networks, pervasive computing is becoming more and more powerful. In order to achieve the goal of wide use of computing capability at any time and everywhere, a service discovery service must be used in pervasive computing applications. As a very important problem in pervasive computing, there has been much research on service discovery, but the load balance problem has been ignored. While it is essential that services can be found in a pervasive computing environment, it is also important that each device that can provide services is load balanced. In this paper, we present a service discovery and invocation algorithm that is lightweight, fault-tolerant and load balancing. From the simulation results, we can see that the algorithm is reliable and robust enough to adapt to device limitations and frequent changes of devices in a pervasive computing environment.",2008,0, 8908,Analysis and Comparison of Fault Simulation,"With the development of VLSI, circuit Design for Testability has become the focus of attention. Fault diagnosis and detection of VLSI has become an essential part of development. Taking DFT theory as background, this paper introduces the concept of fault simulation, then presents several fault simulation algorithms and conducts a comparative analysis of them.
This paper also expounds the improvement and development directions of fault simulation algorithms.",2009,0, 8909,The research on fault equivalent analysis method in testability experiment validation,"With the existing fault injection techniques, many faults that can fully expose testability design defects cannot be injected. To solve this problem, a method of fault equivalent analysis is proposed. By this means, some characteristics are extracted from the faults that cannot be injected, and ""yield analysis"" or ""yielded analysis"" is performed. Then the minimal cut sets of atom faults are obtained and selected, which are finally equivalent to the atom fault sequence. Applications show that the method not only solves the problem that many faults are not able to be injected, but also ensures the effect of testability experiment validation.",2009,0, 8910,Predicting fault-proneness using OO metrics. An industrial case study,"Software quality is an important external software attribute that is difficult to measure objectively. In this case study, we empirically validate a set of object-oriented metrics in terms of their usefulness in predicting fault-proneness, an important software quality indicator. We use a set of ten software product metrics that relate to the following software attributes: the size of the software, coupling, cohesion, inheritance, and reuse. Eight hypotheses on the correlations of the metrics with fault-proneness are given. These hypotheses are empirically tested in a case study, in which the client side of a large network service management system is studied. The subject system is written in Java and it consists of 123 classes. The validation is carried out using two data analysis techniques: regression analysis and discriminant analysis.",2002,1, 8911,Empirical Analysis of Object-Oriented Design Metrics for Predicting High and Low Severity Faults,"In the last decade, empirical studies on object-oriented design metrics have shown some of them to be useful for predicting the fault-proneness of classes in object-oriented software systems. This research did not, however, distinguish among faults according to the severity of impact. It would be valuable to know how object-oriented design metrics and class fault-proneness are related when fault severity is taken into account. In this paper, we use logistic regression and machine learning methods to empirically investigate the usefulness of object-oriented design metrics, specifically, a subset of the Chidamber and Kemerer suite, in predicting fault-proneness when taking fault severity into account. Our results, based on a public domain NASA data set, indicate that 1) most of these design metrics are statistically related to fault-proneness of classes across fault severity, and 2) the prediction capabilities of the investigated metrics greatly depend on the severity of faults. More specifically, these design metrics are able to predict low severity faults in fault-prone classes better than high severity faults in fault-prone classes.",2006,1,